Everything posted by Nytro
-
Published on Sep 28, 2014. Bitcoins are mined using a cryptographic algorithm called SHA-256. This algorithm is simple enough to be done with pencil and paper, as I show in this video. Not surprisingly, this is a thoroughly impractical way to mine. One round of the algorithm takes 16 minutes, 45 seconds, which works out to a hash rate of 0.67 hashes per day.
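What the pencil-and-paper exercise computes is Bitcoin's proof of work: SHA-256 applied twice to a block header, retried with different nonces until the digest falls below a target. A minimal sketch in Python; the header bytes and the very easy target here are made up for illustration (real targets are why 0.67 hashes per day is hopeless):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's proof-of-work hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, target: int, max_nonce: int = 1_000_000):
    """Try nonces until the double-SHA-256 digest is below the target."""
    for nonce in range(max_nonce):
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
    return None

# Hypothetical header and a very easy target: only 8 leading zero bits
# are required, so a nonce is found after a few hundred tries on average.
easy_target = 1 << 248
result = mine(b"example-block-header", easy_target)
```

At one hand-computed hash every 16 minutes 45 seconds, even this toy target would take months by pencil.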
-
Payloads All The Things

A list of useful payloads and bypasses for Web Application Security. Feel free to improve with your payloads and techniques! I <3 pull requests

Tools: Kali Linux, Web Developer, Hackbar, Burp Proxy, Fiddler, DirBuster, GoBuster, Knockpy, SQLmap, Nikto, Nessus, Recon-ng, Wappalyzer, Metasploit

Docker:
docker pull remnux/metasploit - docker-metasploit
docker pull paoloo/sqlmap - docker-sqlmap
docker pull kalilinux/kali-linux-docker - official Kali Linux
docker pull owasp/zap2docker-stable - official OWASP ZAP
docker pull wpscanteam/wpscan - official WPScan
docker pull infoslack/dvwa - Damn Vulnerable Web Application (DVWA)
docker pull danmx/docker-owasp-webgoat - OWASP WebGoat Project docker image
docker pull opendns/security-ninjas - Security Ninjas
docker pull ismisepaul/securityshepherd - OWASP Security Shepherd
docker-compose build && docker-compose up - OWASP NodeGoat
docker pull citizenstig/nowasp - OWASP Mutillidae II Web Pen-Test Practice Application
docker pull bkimminich/juice-shop - OWASP Juice Shop

More resources

Books:
Web Hacking 101
OWASP Testing Guide v4
Penetration Testing: A Hands-On Introduction to Hacking
The Hacker Playbook 2: Practical Guide to Penetration Testing
The Mobile Application Hacker's Handbook
Black Hat Python: Python Programming for Hackers and Pentesters
Metasploit: The Penetration Tester's Guide
The Database Hacker's Handbook, David Litchfield et al., 2005
The Shellcoder's Handbook by Chris Anley et al., 2007
The Mac Hacker's Handbook by Charlie Miller & Dino Dai Zovi, 2009
The Web Application Hacker's Handbook by D. Stuttard, M. Pinto, 2011
iOS Hacker's Handbook by Charlie Miller et al., 2012
Android Hacker's Handbook by Joshua J. Drake et al., 2014
The Browser Hacker's Handbook by Wade Alcorn et al., 2014
The Mobile Application Hacker's Handbook by Dominic Chell et al., 2015
Car Hacker's Handbook by Craig Smith, 2016

Blogs/Websites:
http://blog.zsec.uk/101-web-testing-tooling/
https://blog.innerht.ml
https://blog.zsec.uk
https://www.exploit-db.com/google-hacking-database
https://www.arneswinnen.net
https://forum.bugcrowd.com/t/researcher-resources-how-to-become-a-bug-bounty-hunter/1102

Youtube:
Hunting for Top Bounties - Nicolas Grégoire
BSidesSF 101 The Tales of a Bug Bounty Hunter - Arne Swinnen
Security Fest 2016 The Secret life of a Bug Bounty Hunter - Frans Rosén

Practice:
Root-Me
Zenk-Security
W3Challs
NewbieContest
Vulnhub
The Cryptopals Crypto Challenges
Penetration Testing Practice Labs
alert(1) to win
Hacksplaining
HackThisSite
PentesterLab: Learn Web Penetration Testing: The Right Way

Bug Bounty:
HackerOne
BugCrowd
Bounty Factory
List of Bounty Program

Source: https://github.com/swisskyrepo/PayloadsAllTheThings
-
slavco Aug 22

Wordpress SQLi

There won't be an intro, let us jump straight to the problem. This is the wordpress database abstraction prepare method code:

public function prepare( $query, $args ) {
    if ( is_null( $query ) )
        return;

    // This is not meant to be foolproof -- but it will catch obviously incorrect usage.
    if ( strpos( $query, '%' ) === false ) {
        _doing_it_wrong( 'wpdb::prepare', sprintf( __( 'The query argument of %s must have a placeholder.' ), 'wpdb::prepare()' ), '3.9.0' );
    }

    $args = func_get_args();
    array_shift( $args );

    // If args were passed as an array (as in vsprintf), move them up
    if ( isset( $args[0] ) && is_array( $args[0] ) )
        $args = $args[0];

    $query = str_replace( "'%s'", '%s', $query ); // in case someone mistakenly already singlequoted it
    $query = str_replace( '"%s"', '%s', $query ); // doublequote unquoting
    $query = preg_replace( '|(?<!%)%f|', '%F', $query ); // Force floats to be locale unaware
    $query = preg_replace( '|(?<!%)%s|', "'%s'", $query ); // quote the strings, avoiding escaped strings like %%s
    array_walk( $args, array( $this, 'escape_by_ref' ) );
    return @vsprintf( $query, $args );
}

The code contains 2 interesting unsafe PHP practices that could lead to huge vulnerabilities in the wordpress system. Before we jump to the SQLi case I'll cover another issue. This issue arises from the following functionality:

if ( isset( $args[0] ) && is_array( $args[0] ) )
    $args = $args[0];

This means that if you have something like this:

$wpdb->prepare( $sql, $input_param1, $sanitized_param2, $sanitized_param3 );

then if you control $input_param1, e.g. $input_param1 = $_REQUEST["input"], you can pass an array and supply your own values for the remaining parameters. This could mean nothing in some cases, but in others it could easily lead to RCE, having in mind the nature and architecture of wp itself.
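The args-moving quirk can be simulated outside PHP. The sketch below is a Python model of the behavior, not WordPress code: when the first, attacker-controlled argument is an array (a list here), the trusted parameters that follow are silently discarded.

```python
def prepare_args(*args):
    """Model of wpdb::prepare()'s argument normalization: if the first
    argument is a list (PHP array), it replaces the whole argument
    list, as in vsprintf-style usage."""
    args = list(args)
    if args and isinstance(args[0], list):
        args = args[0]
    return args

# Normal call: all three scalar parameters survive.
normal = prepare_args("user_input", "safe2", "safe3")

# Attacker passes a list as the first parameter: the sanitized values
# "safe2"/"safe3" are dropped and replaced by attacker-chosen data.
hijacked = prepare_args(["evil1", "evil2", "evil3"], "safe2", "safe3")
```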
SQLi vulnerability

In order to achieve SQLi in the wp framework based on this prepare method we must know how the core PHP function behind it works. It is vsprintf, which is in fact sprintf. This means that $query is a format string and $args are parameters => directives in the format string define how the args will be placed into the format string, e.g. the query. A very, very important feature of sprintf is argument swapping :) In addition we have the following lines of code:

$query = str_replace( "'%s'", '%s', $query ); // in case someone mistakenly already singlequoted it
$query = str_replace( '"%s"', '%s', $query ); // doublequote unquoting

followed by the preg_replace that turns any unescaped %s into '%s'. From everything above we reach the following conclusion: if we are able to put into $query some string that holds %1$%s then we can salute our SQLi => after the prepare method is called we will have an extra ' in the query, because %1$%s will become %1$'%s' and after sprintf will become $arg[1]'. For now this is just theory and most probably improper usage of the prepare method, but if we find something interesting in the wp core then nobody could blame the lousy developers who don't follow coding standards and recommendations from the API docs. The most interesting function is delete_metadata, and this function performs the desired actions from the description above: when it is called with all of the 5 parameters set, $meta_value != "" and $delete_all = true, we have our working POC, e.g.

if ( $delete_all ) {
    $value_clause = '';
    if ( '' !== $meta_value && null !== $meta_value && false !== $meta_value ) {
        $value_clause = $wpdb->prepare( " AND meta_value = %s", $meta_value );
    }
    $object_ids = $wpdb->get_col( $wpdb->prepare( "SELECT $type_column FROM $table WHERE meta_key = %s $value_clause", $meta_key ) );
}

$value_clause will hold our input, but we need to be sure $meta_value already exists in the DB in order for this SQLi-vulnerable snippet to be executed -- remember this one.
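The quoting step that enables the trick can be modeled in a few lines. This is a Python re-implementation of just the two str_replace calls and the quoting preg_replace from prepare(), not the real PHP; it shows how attacker text already inside the format string picks up the extra quote:

```python
import re

def wp_quote_placeholders(query: str) -> str:
    """Model of the quoting steps in wpdb::prepare(): strip quotes the
    developer may have put around %s, then quote every unescaped %s."""
    query = query.replace("'%s'", "%s").replace('"%s"', "%s")
    return re.sub(r"(?<!%)%s", "'%s'", query)

# A benign query is quoted as intended.
benign = wp_quote_placeholders("SELECT * FROM t WHERE meta_key = %s")

# Attacker-controlled text embedded in the format string: the %s inside
# %1$%s also gets wrapped, yielding %1$'%s'. PHP's sprintf then parses
# %1$' as positional argument 1 with a custom padding character, so a
# stray single quote lands in the generated SQL.
evil = wp_quote_placeholders("... meta_key = %s AND x = %1$%s")
```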
This delete_metadata function, called with the desired number of parameters, is invoked in the wp_delete_attachment function, which in turn is called in wp-admin/upload.php, where the $post_id_del input value is taken directly from $_REQUEST. Let us check the wp_delete_attachment function and its constraints before we reach the desired line, e.g. delete_metadata( 'post', null, '_thumbnail_id', $post_id, true );. The only obstacle that prevents this code from being executed is the following:

if ( !$post = $wpdb->get_row( $wpdb->prepare( "SELECT * FROM $wpdb->posts WHERE ID = %d", $post_id ) ) )
    return $post;

but again, due to the nature of sprintf and the %d directive, we have a bypass => attachment_post_id %1$%s your sql payload. Here I'll stop for today (see you tomorrow with part 2: https://medium.com/websec/wordpress-sqli-poc-f1827c20bf8e), because in order for an authenticated user with permission to create posts to execute a successful SQLi attack, they need to insert attachment_post_id %1$%s your sql payload as the _thumbnail_id meta value.

A fast fix for this use case (if you allow `author` or a bigger role in your wp setup): at the top of the wp_delete_attachment function, right after global $wpdb; add the following line:

$post_id = (int) $post_id;

Impact for the wp eco system

This unsafe method has quite a huge impact on the wp eco system. There are affected plugins. Some of them were already informed and patched their issues; some of them gave credits, some did not. Others have pushed `silent` patches, but no one cares about the safety of all. In the next writings on this topic I'll release the most common places/practices where issues like these occur and will release the vulnerable core methods besides the one pointed out, so everyone can help get this issue solved.
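The %d bypass works because PHP's string-to-integer coercion reads only the leading digits, so the ID check sees a valid attachment ID while the raw string, payload included, flows onward. A Python model of that coercion (the payload string below is illustrative, not from the article):

```python
import re

def php_intval(value: str) -> int:
    """Model of PHP's (int) cast on a string: take an optional sign and
    the leading digits, ignore everything after them (0 if none)."""
    m = re.match(r"\s*([+-]?\d+)", value)
    return int(m.group(1)) if m else 0

# Hypothetical attacker input: a real attachment ID followed by the
# format-string payload. The "... WHERE ID = %d" check receives 17 and
# passes, yet the untruncated string reaches delete_metadata.
raw = "17 %1$%s AND sleep(5)"
passes_id_check = php_intval(raw)
```

The proposed fix, $post_id = (int) $post_id;, applies this same truncation up front, so only the integer ever reaches delete_metadata.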
Responsible disclosure

This approach is more than responsible disclosure; I'll refer to the paragraph about the impact and this H1 report: https://hackerone.com/reports/179920

Promo

If you are a wp developer, wp host provider or wp security product provider with a valuable list of clients, we offer a subscription list and we are exceptional (B2B only).

Source: https://medium.com/websec/wordpress-sqli-bbb2afcc8e94
-
The first speakers: https://def.camp/speakers/
-
Sources: https://github.com/doadam/ziVA https://blog.zimperium.com/ziva-video-audio-ios-kernel-exploit/

ziVA

An iOS kernel exploit designed to work on all 64-bit iOS devices <= 10.3.1

More general information:
https://blog.zimperium.com/zimperium-zlabs-ios-security-advisories/
https://blog.zimperium.com/ziva-video-audio-ios-kernel-exploit/

Offsets modifications for other iOS devices

Like a lot (if not most) of iOS kernel exploits, this one also requires offsets for each iOS device and version. Those will be posted in the near future (when I get more time), but they can be acquired from AppleAVEDriver (you can get a hint about the offsets from the comments above them).

Sandbox escape

As mentioned, direct access to AppleAVEDriver requires a sandbox escape (either the mediaserverd sandbox context or no sandbox at all). Fortunately, sandbox escape exploits have been released by P0, which means this can be used to completely compromise the kernel, and is a step towards a full jailbreak.

Is it a Jailbreak?

This is a crucial part of a Jailbreak chain, but it never aimed to become a Jailbreak. Is this going to be a jailbreak? Maybe, if someone wants to work on that.

Credits

Credit for finding the vulnerabilities, chaining them together, and writing the exploit goes to Adam Donenfeld (@doadam). Special thanks to Zuk Avraham (@ihackbanme), Yaniv Karta (@shokoluv) and the rest of the Zimperium team for the opportunity (and the paycheck).

Proof of Concept:
https://github.com/doadam/ziVA
https://github.com/offensive-security/exploit-database-bin-sploits/raw/master/sploits/42555.zip

Source: https://www.exploit-db.com/exploits/42555/
-
OWASP AppSec Bucharest 2017 - Call for Speakers
Nytro replied to Nytro's topic in Anunturi importante
It seems there are still about 2 spots left, if anyone is interested in presenting: https://www.owasp.org/index.php/OWASP_Bucharest_AppSec_Conference_2017
Yes, it is very detailed, I recommend it.
-
Table of Contents
- Visual/Mechanical Inspection
  - Service Eligibility
  - Swollen Battery
  - Display Modification
  - Liquid Contact
  - Debris or Corrosion
  - Enclosure Wear
- iPhone 6 Hardware Overview
- iPhone 6 Plus Hardware Overview
- iPhone 6s Hardware Overview
- iPhone 6s Plus Hardware Overview
- iPhone 7 Hardware Overview
- iPhone 7 Plus Hardware Overview
- Service Eligibility Guidelines
- Model Numbers and Configuration Codes

Download: https://www.dropbox.com/s/igvowila1q317ys/070-00167-I_EN.pdf?dl=0
-
Deep Analysis of New Poison Ivy Variant

by Xiaopeng Zhang | Aug 23, 2017 | Filed in: Security Research

Recently, the FortiGuard Labs research team observed that a new variant of Poison Ivy was being spread through a compromised PowerPoint file. We captured a PowerPoint file named Payment_Advice.ppsx, which is in OOXML format. Once the victim opens this file using the MS PowerPoint program, the malicious code contained in the file is executed. It downloads the Poison Ivy malware onto the victim's computer and then launches it. In this blog, I'll show the details of how this happens, what techniques are used by this malware, as well as what it does to the victim's computer.

The PowerPoint Sample

Figure 1 shows a screenshot of when the ppsx file is opened. (Figure 1. Open Payment_Advice.ppsx) As you can see, the ppsx file is played automatically. The "ppsx" extension stands for "PowerPoint Show," which opens the file in presentation mode. This allows the malicious code to be executed automatically. The warning message box alerts the user that it might run an unsafe external program. Usually, the implied content of the document beguiles the user into pressing the Enable button.

Let's take a look at the malicious code embedded inside this PowerPoint file. An OOXML file is a zip format file. By decompressing this file we can see the file/folder structure, shown below. (Figure 2. PPSX file structure) Going into its .\ppt\slides\ subfolder, slide1.xml is the slide automatically shown in Figure 1. The file ".\_rels\slide1.xml.rels" is the relationship file where the resources used in slide1.xml are defined. In slide1.xml I found the relevant xml code. It specifies that when the user's mouse hovers over this element, something named "rId2" in the slide1.xml.rels file is executed. Figure 3 shows the relationship between them.
The code defined in "rId2" Being Added into the Startup Group

The code defined in "rId2" uses an echo command of cmd.exe to output vbs code into the Thumbs.vbs file in the "Startup" folder of the Start menu. This allows the Thumbs.vbs file to be executed when the victim's system starts. We'll take a look at the content of this Thumbs.vbs file below. (Figure 4. Thumbs.vbs in the Startup folder and its content)

The Downloaded File

Thumbs.vbs downloads a file from hxxp://203.248.116.182/images/Thumbs.bmp and runs it using msiexec.exe. As you may know, msiexec.exe is the Microsoft Windows Installer program, which is the default handler of .MSI files. Msiexec.exe can be used to install/uninstall/update software on Windows. The MSI file is an Installer Package. It contains a PE file (in a stream) that is executed when it's loaded by msiexec.exe. This PE file can be replaced with malware to bypass AV software detection. We have also observed that more and more malware authors have started using this method to run their malware. The MSI file is in the Microsoft OLE Compound File format. In Figure 5 we can see the downloaded Thumbs.bmp file content in the DocFile Viewer. (Figure 5. The downloaded Thumbs.bmp in DocFile viewer)

Next, I'm going to extract this PE file from the stream into a file (exported_thumbs). Checking with a PE analysis tool, we can see that it's a 64-bit .Net program. This means that this malware only affects 64-bit Windows.

Analyzing the .Net code and Running It

After loading the extracted file into dnSpy for analysis, we can see the entry function Main(), as shown in Figure 6. (Figure 6. Main function) It then calls the rGHDcvkN.Exec() function in Main(), which contains a huge array. Actually, the data in the array is code that is executed as a thread function by a newly-created thread. Figure 7 clearly shows how the code in the array is executed.
(Figure 7. .Net program runs a thread to execute the code in a huge array)

If the code runs on a 64-bit platform, IntPtr.Size is 8, so the huge array is passed to array3. It then allocates a memory buffer by calling rGHDcvkN.VirtualAlloc() and copies the code from array3 into the new memory by calling Marshal.Copy(). It eventually calls rGHDcvkN.CreateThread() to run the code. I started the .Net program in the debugger, and set a breakpoint on the CreateThread API to see what the array code would do when it's hit. Per my analysis, the array code is a kind of loader. Its main purpose is to dynamically load the main part of the malware code from the memory space into a newly-allocated memory buffer. It then repairs any relocation issues according to the new base address and repairs the APIs' offsets for the main code. Finally, the main code's entry function is called.

Anti-Analysis Techniques

All APIs are hidden. They are restored when being called. The snippet below is the hidden CreateRemoteThread call.

sub_1B0E6122 proc near
    mov rax, 0FFFFFFFF88E23B10h
    neg rax
    jmp rax ;; CreateRemoteThread
sub_1B0E6122 endp

All strings are encrypted. They are decrypted before use. For example, this is the encrypted "ntdll" string.

unk_1AFD538C db 54h, 0B2h, 9Bh, 0F1h, 47h, 0Ch ; ==> "ntdll"

It runs a thread (I named it ThreadFun6) to check if any API has been set as a breakpoint. If yes, it calls TerminateProcess in another thread to exit the process immediately. The thread function checks all APIs in the following modules: "ntdll", "kernel32", "kernelbase" and "user32". In Figure 8, you can see how this works. (Figure 8. Checking for breakpoints on exported APIs in "ntdll")

It runs a thread to check if any analysis tools are running. It does this by creating specially named pipes that are normally created by those analysis tools. For example, "\\.\Regmon" for the registry monitor tool RegMon; "\\.\FileMon" for the local file monitor tool FileMon; "\\.\NTICE" for SoftICE, and so on.
If one of the named pipes cannot be created, it means one of the analysis tools is running, and the malware exits the process soon thereafter. It then goes through all the running program windows to check if any window class name contains a special string, to determine if an analysis tool is running. For example, "WinDbgFrameClass" is the class name of Windbg's main window. This check runs in a thread as well (I named it ThreadFun3). Below, Figure 9 shows how this thread function works. (Figure 9. Check Windows' Class Name)

By checking whether the "Wireshark-is-running-{…}" named mutex object exists (by calling OpenMutex), it implements anti-WireShark. By calling the API IsDebuggerPresent, it can check whether this process is running in a debugger (the call returns 1). This is a kind of anti-debugging check. It also checks how much time is spent calling IsDebuggerPresent. If the time is more than 1000ms, it means that the process runs in a debugger or a VM, and it then exits the process.

These are all the ways this malware performs anti-analysis. Most of these checks run in their own threads, and are called every second. It exits the process if any check matches. To continue the analysis of this malware, we have to first skip these checks. We can dynamically modify its code to do so. For example, changing IsDebuggerPresent's return value to 0 allows us to bypass the running-in-debugger detection.

Generating A Magic String from a Decrypted String

By decrypting three strings and putting them together, we get the magic string "Poison Ivy C++", which is saved in the global variable qword_1B0E4A10. From the code snippet below you can see how it makes this string. (Figure 10. Generating the magic string)

Hiding Key-functions in Six Different Modules

It next loads several modules from its encrypted data. It creates a doubly-linked list, which is used to save and manage these loaded modules.
There are many export functions in each of these modules that perform the malware's main work. This also makes dynamic debugging a challenge. The variable qword_1AFE45D0 saves the header of that doubly-linked list. Each object in the list has the structure below:

+00H pointer to previous object in the list
+08H pointer to next object in the list
+18H for Critical Section object use
+28H the base address of the module this object is related to
+30H pointer to export function table

It then decrypts and decompresses six modules one by one, and adds each of them into the doubly-linked list. Figure 11 shows a code snippet from decrypting these six modules. (Figure 11. Decrypting and decompressing modules)

Each module has an initialization function (like the DllMain function for Dll files) that is called once the module is completely decrypted and decompressed. Three of these modules have anti-analysis abilities similar to the ones I described in the Anti-Analysis section above, so to continue the analysis of this malware I needed to modify their code to bypass their detection functions. After that it calls the export functions of those modules.

It decrypts the configuration data from the buffer at unk_1AFE3DA0. This configuration data is decrypted many times while the process runs, and it tells the malware how to work. I'll talk more about the configuration data in a later section. The malware then picks a string from the configuration data, which is "%windir%\system32\svchost.exe". It later calls CreateProcess to run svchost.exe, and then injects some code and data from the malware's memory into the newly-created svchost.exe. It finally calls the injected code and exits its current process. The malware's further work is now done on the svchost.exe side.

Starting over in SVCHOST.exe

From my analysis I could see that the injected code and data represent the entire malware. It all starts over again in the svchost.exe process.
Everything I have reviewed above is repeated in svchost.exe. For example, executing the anti-analysis detection code, getting the magic string, creating a doubly-linked list, decrypting six modules and adding them into the doubly-linked list, and so on. It then goes to a different code branch when executing the instruction 01736C2 cmp dword ptr [rdi+0Ch], 1 in module2. [rdi+0ch] is a flag that was passed when the entire code was initialized. When the flag is 0, it takes the code branch that runs svchost.exe and injects code into it; when it's 1, it takes the code branch that connects to the C&C server. Before the injected code in svchost.exe is executed, the flag is set to 1. Figure 12 shows the code branches. (Figure 12. Snippet of code branches)

Obtaining the C&C Server from PasteBin

The C&C server's IP addresses and ports are encrypted and saved on the PasteBin website. PasteBin is a text code sharing website. A registered user can paste text code on it in order to share the text content with everyone. The malware author created 4 such pages, and put the C&C server IP addresses and ports there. Do you remember when I talked previously about the encrypted configuration data? It contains the 4 PasteBin URLs. They are:

hxxps://pastebin.com/Xhpmhhuy
hxxps://pastebin.com/m3TPwxQs
hxxps://pastebin.com/D8A2azM8
hxxps://pastebin.com/KQAxvdvJ

Figure 13 shows the decrypted configuration data. (Figure 13. Decrypted configuration data)

If you access any one of these URLs, you will find normal Python code on it. The encrypted server IP address and port are hidden in this normal Python code. Let's take a look. In the main function you will find the code below: win32serviceutil.HandleCommandLine({65YbRI+gEtvlZpo0qw6CrNdWDoev}). The data between "{" and "}" is the encrypted IP address and port. See Figure 14 for more information. (Figure 14. Encrypted C&C IP address and Port on PasteBin)

Let's see what we can see after decryption in Figure 15.
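An analyst recreating this lookup only needs to carve the brace-delimited blob out of the fetched paste. A minimal sketch; the paste text and marker format are taken from the sample above, while the decryption itself is the malware's private algorithm and is not reproduced here:

```python
import re

def extract_c2_blob(paste_text):
    """Pull the encrypted C&C string hidden inside the benign-looking
    Python code, i.e. the argument between '{' and '}'."""
    m = re.search(r"HandleCommandLine\(\{([^}]*)\}\)", paste_text)
    return m.group(1) if m else None

paste = "win32serviceutil.HandleCommandLine({65YbRI+gEtvlZpo0qw6CrNdWDoev})"
blob = extract_c2_blob(paste)
```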
(Figure 15. Decrypted IP address and Port)

From Figure 15, we can determine that the decrypted C&C server IP address is 172.104.100.53 and the port is 1BBH, i.e. 443. It should be noted that the IP addresses and ports on the four pages are not the same. The author of this malware can update these IP addresses and ports by simply updating the Python code on the four PasteBin pages.

Communicating with the C&C server

The malware starts connecting and sending data to its C&C server once it gets the IP address and port. All the packets traveling between the malware and its server are encrypted using a private algorithm. The structure of a packet is as follows (the first 14H bytes are the header part; from 14H on is the data part):

+00 4 bytes, a key for encryption or decryption
+04 4 bytes, the packet command
+0C 4 bytes, the length in bytes of the data portion of the packet
+14 from this point on is the real data

Once the malware has connected to the server, it first sends a "30001" command, and the server replies with command "30003". The command "30003" requests the client to collect the victim's system information. Once the malware receives this command, it calls tons of APIs to collect the system information. It gathers the system's current usage of both physical and virtual memory by calling GlobalMemoryStatusEx. It gets the CPU speed from the system registry key "HKLM\HARDWARE\DESCRIPTION\System\CentralProcessor\0\~MHz". It gets the free disk space of all partitions by calling GetDiskFreeSpaceExA. It gets the CPU architecture by calling GetNativeSystemInfo. It collects display settings by calling EnumDisplaySettings. It collects file information from kernel32.dll. It gets the current computer name and user name by calling GetComputerName and GetUserName. It also gets the system time by calling GetSystemTime, and the system version by calling GetVersionEx.
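The header layout above can be captured in a small parser, which is handy when carving these packets out of a traffic capture. A sketch under the stated assumptions: little-endian fields at the listed offsets, with the undescribed bytes (e.g. at +08) skipped as unknowns; the sample packet is hypothetical.

```python
import struct

HEADER_LEN = 0x14  # 14H header bytes precede the real data

def parse_packet(packet):
    """Split a C&C packet into header fields and payload, following the
    offsets above: key at +00, command at +04, data length at +0C,
    payload from +14."""
    if len(packet) < HEADER_LEN:
        raise ValueError("packet shorter than header")
    key, command = struct.unpack_from("<II", packet, 0x00)
    (data_len,) = struct.unpack_from("<I", packet, 0x0C)
    return {
        "key": key,
        "command": command,
        "data_len": data_len,
        "data": packet[HEADER_LEN:HEADER_LEN + data_len],
    }

# Hypothetical packet: command 30001 with a 4-byte payload; the two
# unknown header dwords are zeroed.
sample = struct.pack("<IIIII", 0xDEADBEEF, 30001, 0, 4, 0) + b"ABCD"
parsed = parse_packet(sample)
```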
Finally, it copies in svchost.exe's full path and a constant string, "PasteBin83", which comes from the decrypted configuration data (see Figure 13 again). In Figure 16 you can see the collected system information before encryption. Figure 17 shows the data after encryption, as it's about to be sent to the C&C server. The first four bytes are used to encrypt or decrypt the following data. (Figure 16. Collected information from the victim's system) (Figure 17. Encrypted system information from the victim's system)

From my analysis during the malware's runtime, I could determine that the malware keeps obtaining the C&C server's IP address from PasteBin and communicating with the C&C server in an infinite loop (calling Sleep(1000) to suspend execution). So far, I have only seen the commands "030001" and "030003" used. I'll continue to monitor and analyze the malware's behavior to see what else it will do.

Solution

The FortiGuard Antivirus service has detected the files "Payment_Advice.ppsx" as MSOFFICE/PoisonIvy.A!tr.dldr and "Thumbs.bmp" as MSOFFICE/PoisonIvy.A!tr.

IOC

URL: hxxp://203.248.116.182/images/Thumbs.bmp

Sample SHA-256 hashes:
Payment_Advice.ppsx E7931270A89035125E6E6655C04FEE00798C4C2D15846947E41DF6BBA36C75AE
Thumbs.bmp A3E8ECF21D2A8046D385160CA7E291390E3C962A7107B06D338C357002D2C2D9

Source: https://blog.fortinet.com/2017/08/23/deep-analysis-of-new-poison-ivy-variant
-
ziVA: Zimperium's iOS Video Audio Kernel Exploit

Adam Donenfeld Aug 23 2017

Following my previous post, I'm releasing ziVA: a fully chained iOS kernel exploit that (should) work on all the iOS devices running iOS 10.3.1 or earlier. The exploit itself consists of multiple vulnerabilities that were discovered all in the same module: AppleAVEDriver. The exploit will be covered in depth in my HITBGSEC talk held on August 25th.

For those of you who are not interested in iOS research and would like to protect themselves against these vulnerabilities, we urge you to update your iOS device to the latest version. Without an advanced mobile security and mitigation solution on the device (such as Zimperium zIPS), there's little chance a user would notice any malicious or abnormal activity. The POC is released for educational purposes and evaluation by IT Administrators and Pentesters alike, and should not be used in any unintended way. The CVE explanations, as written by Apple, can be found here.

iOS vulnerabilities discovered and reported to Apple

AVEVideoEncoder

Available for: iPhone 5 and later, iPad 4th generation and later, and iPod touch 6th generation
Impact: An application may be able to gain kernel privileges
Description: Multiple memory corruption issues were addressed with improved memory handling.
CVE-2017-6989: Adam Donenfeld (@doadam) of the Zimperium zLabs Team
CVE-2017-6994: Adam Donenfeld (@doadam) of the Zimperium zLabs Team
CVE-2017-6995: Adam Donenfeld (@doadam) of the Zimperium zLabs Team
CVE-2017-6996: Adam Donenfeld (@doadam) of the Zimperium zLabs Team
CVE-2017-6997: Adam Donenfeld (@doadam) of the Zimperium zLabs Team
CVE-2017-6998: Adam Donenfeld (@doadam) of the Zimperium zLabs Team
CVE-2017-6999: Adam Donenfeld (@doadam) of the Zimperium zLabs Team

IOSurface

Available for: iPhone 5 and later, iPad 4th generation and later, and iPod touch 6th generation
Impact: An application may be able to gain kernel privileges
Description: A race condition was addressed through improved locking.
CVE-2017-6979: Adam Donenfeld (@doadam) of the Zimperium zLabs Team

I will provide an in-depth analysis of the vulnerabilities and exploitation techniques at HITBGSEC. After the conference, I will publish the rest of the disclosures as well as my slides and whitepaper.

A brief description of one of the vulnerabilities, CVE-2017-6979: The function IOSurfaceRoot::createSurface is responsible for the creation of the IOSurface object. It receives an OSDictionary, which it forwards to the function IOSurface::init. IOSurface::init parses the properties and, in case one of them is invalid (e.g., a width that exceeds 32 bits), returns 0, and the creation of the IOSurface is halted. The IOSurfaceRoot object must hold a lock while calling IOSurface::init because IOSurface::init adds the IOSurface object to the IOSurfaceRoot's list of surfaces.
Here's the code that used to call IOSurface::init before Apple's fix:

surface = (IOSurface *)OSMetaClass::allocClassWithName("IOSurface");
IORecursiveLockLock(provider->iosurface_array_lock);
if ( !surface ) {
    IORecursiveLockUnlock(provider->iosurface_array_lock);
    return 0;
}
init_ret_code = surface->init(surface, provider, task_owner, surface_data);
/* At this point, the surfaces' list is unlocked, and an invalid
   IOSurface object is in the list */
IORecursiveLockUnlock(provider->iosurface_array_lock);
if ( !init_ret_code ) {
    surface->release(surface);
    return 0;
}

In case the IOSurface::init function fails, IORecursiveLockUnlock will already have been called. A bogus IOSurface object will still be in the system and in the IOSurfaceRoot's list of surfaces (thus accessible to everyone). At this particular moment, an attacker can increase the refcount of the IOSurface (creating, for instance, an IOSurfaceSendRight object attached to the surface) and prevent the bogus IOSurface object from being destroyed. This leads to the existence of an IOSurface in the kernel whose properties the attacker controls (IOSurface->width = -1, for example). Such an IOSurface object can be handed to other mechanisms in the kernel that rely on a valid width/height/other properties to work, thus causing heap overflows or other problems that might lead to an elevation of privileges by the attacker.

Our proposed solution to Apple was to call IOSurface::release while the lock provider->iosurface_array_lock is still held. Moving the IORecursiveLockUnlock call just below IOSurface::release and placing another after the entire if statement fixes the problem, because the IOSurfaceRoot's list of surfaces only becomes available again once the bogus IOSurface is already cleaned up.
Further reverse engineering of the function reveals that Apple changed the code according to our suggestions:

surface = (IOSurface *)OSMetaClass::allocClassWithName("IOSurface");
IORecursiveLockLock(provider->iosurface_array_lock);
if ( !surface ) {
    IORecursiveLockUnlock(provider->iosurface_array_lock);
    return 0;
}
init_ret_code = surface->init(surface, provider, task_owner, surface_data);
if ( !init_ret_code ) {
    surface->release(surface);
    /* Here our bad surface is freed *before* the kernel unlocks the
       surfaces' list, hence our bad surface is not accessible at any
       time in case IOSurface::init fails. */
    IORecursiveLockUnlock(provider->iosurface_array_lock);
    return 0;
}
IORecursiveLockUnlock(provider->iosurface_array_lock);

The issues are severe and could lead to a full device compromise. The vulnerabilities ultimately allow an attacker with initial code execution to fully control any iOS device on the market prior to version 10.3.2. Fortunately, we responsibly disclosed these bugs to Apple and a proper fix was coordinated. iOS users who update their device to the latest iOS version should be protected. We discovered more vulnerabilities, and the written exploit POC didn't even take advantage of CVE-2017-6979! The vulnerabilities used for the POC will be covered in depth. We plan to release the security advisories, as we sent them to Apple, right after my talk at HITBGSEC.

Zimperium's patented machine-learning technology, z9, detects the exploitation of this vulnerability. We recommend strengthening iOS security using a solution like Zimperium zIPS. Powered by z9, zIPS offers protection against known and unknown threats targeting Apple iOS and Google Android devices. z9 has detected every discovered exploit over the last five years without requiring updates. The exploit source code is available here.
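The ordering bug generalizes beyond IOSurface: any registry that publishes an object under a lock but cleans up a failed initialization only after dropping the lock leaves a window in which other threads can grab the half-built object. Below is a deliberately simplified, single-threaded Python model of the two orderings (all names are invented for illustration; the "snapshot" stands in for what a racing thread could observe right after the lock is dropped):

```python
class Registry:
    """Toy model of IOSurfaceRoot's surface list."""
    def __init__(self):
        self.surfaces = []
        self.observed_after_unlock = []

    def create_buggy(self, init_ok):
        # lock held: the surface enters the list before init is checked
        self.surfaces.append("bogus-surface")
        # lock dropped here; another thread can now walk the list and
        # take a reference to the invalid surface
        self.observed_after_unlock = list(self.surfaces)
        if not init_ok:
            self.surfaces.remove("bogus-surface")  # cleanup comes too late

    def create_fixed(self, init_ok):
        # lock held across add, init check, and cleanup
        self.surfaces.append("bogus-surface")
        if not init_ok:
            self.surfaces.remove("bogus-surface")
        # only now is the lock dropped
        self.observed_after_unlock = list(self.surfaces)

buggy = Registry()
buggy.create_buggy(init_ok=False)

fixed = Registry()
fixed.create_fixed(init_ok=False)
```

In the buggy ordering the invalid surface is visible between unlock and release, which is exactly the window the exploit uses to pin the object with an extra reference.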
Disclosure timeline:

24/01/2017 – First bug discovered
20/03/2017 – Shared bugs with Apple
29/03/2017 – Apple confirmed the bugs
15/05/2017 – Apple distributed patches

I would like to thank Apple for their quick and professional response, and Zuk Avraham (@ihackbanme) and Yaniv Karta (@shokoluv) who helped in the process.

Sursa: https://blog.zimperium.com/ziva-video-audio-ios-kernel-exploit/
My little sister's phone got stolen/lost a week ago. Yesterday, I got a strange text. Today, I peaked! (source) My little sister, who is going to be a senior, worked hard all summer to buy an iPhone, only to have it stolen (or fall out, she is still not entirely sure) out of her boyfriend's car a week ago. She had not activated the Find My iPhone app, so we reported it stolen but were pretty sure someone was just being gifted a free iPhone courtesy of my sister's summer wages. On top of that, she had also bought a wallet case and lost all her IDs and cards as well. So yesterday I get this text from a strange number. I give my sister a quick call to make sure it is not her and quickly realize someone is fishing through her contacts or documents to get her password. I am well aware after the "WTH" that this is not my sister, BTW. I was bored, so I say to myself, why not have some fun? We will never get the phone back, but what the heck, might as well kill some time. I am certain that his or her insistence will at the very least make this a lengthy exchange. I'm convinced this person is humoring me and just stringing me along until I cave. No one can be buying this! .... Again, I am thinking he is humoring me, but I was like, let's see how long this lasts until he stops texting me back. AVERY FTW!!!! My sister got an email this morning. I have peaked! Behold! Sursa: https://imgur.com/r/funny/USjnb I don't know if it's real, but the idea is interesting.
TUESDAY, AUGUST 22, 2017

Exploiting Industrial Collaborative Robots

By Lucas Apa (@lucasapa)

Traditional industrial robots are boring. Typically, they are autonomous or operate with limited guidance and execute repetitive, programmed tasks in manufacturing and production settings.1 They are often used to perform duties that are dangerous or unsuitable for workers; therefore, they operate in isolation from humans and other valuable machinery. This is not the case with the latest generation of collaborative robots ("cobots"), though. They function with co-workers in shared workspaces while respecting safety standards. This generation of robots works hand-in-hand with humans, assisting them, rather than just performing automated, isolated operations. Cobots can learn movements, "see" through HD cameras, or "hear" through microphones to contribute to business success.

UR5 by Universal Robots2
Baxter by Rethink Robotics3

So cobots present a much more interesting attack surface than traditional industrial robots. But are cobots only limited to industrial applications? NO, they can also be integrated into other settings!

The Moley Robotic Kitchen (2xUR10 Arms)4
DARPA's ALIAS Robot (UR3 Arm)5

Last February, Cesar Cerrudo (@cesarcer) and I published a non-technical paper "Hacking Robots Before Skynet" previewing our research on several home, business, and industrial robots from multiple well-known vendors. We discovered nearly 50 critical security issues. Within the cobot sector, we audited leading robots, including Baxter/Sawyer from Rethink Robotics and UR by Universal Robots.

● Baxter/Sawyer: We found authentication issues, insecure transport in their protocols, default deployment problems, susceptibility to physical attacks, and the usage of ROS, a research framework known to be vulnerable to multiple issues. The major problems we reported appear to have been patched by the company in February 2017.
● UR: We found authentication issues in many of the control protocols, susceptibility to physical attacks, memory corruption vulnerabilities, and insecure communication transport. All of the issues remain unpatched in the latest version (3.4.2.65, May 2017).6

In accordance with IOActive's responsible disclosure policy, we contacted the vendors last January, so they have had ample time to address the vulnerabilities and inform their customers. Our goal is to make cobots more secure and prevent vulnerabilities from being exploited by attackers to cause serious harm to industries, employees, and their surroundings. I truly hope this blog entry moves the collaborative industry forward so we can safely enjoy this and future generations of robots. In this post, I will discuss how an attacker can chain multiple vulnerabilities in a leading cobot (UR3, UR5, UR10 - Universal Robots) to remotely modify safety settings, violating applicable safety laws and, consequently, causing physical harm to the robot's surroundings by moving it arbitrarily. This attack serves as an example of how dangerous these systems can be if they are hacked. Manipulating safety limits and disabling emergency buttons could directly threaten human life. Imagine what could happen if an attack targeted an array of 64 cobots as is found in a Chinese industrial corporation.7 The final exploit abuses six vulnerabilities to change safety limits and disable safety planes and emergency buttons/sensors remotely over the network. The cobot arm swings wildly about, wreaking havoc. This video demonstrates the attack: https://www.youtube.com/watch?v=cNVZF7ZhE-8

Q: Can these robots really harm a person?
A: Yes, a study8 by the Control and Robotics Laboratory at the École de technologie supérieure (ÉTS) in Montreal (Canada) clearly shows that even the smaller UR5 model is powerful enough to seriously harm a person.
While running at slow speeds, their force is more than sufficient to cause a skull fracture.9

Q: Wait...don't they have safety features that prevent them from harming nearby humans?
A: Yes, but they can be hacked remotely, and I will show you how in the next technical section.

Q: Where are these deployed?
A: All over the world, in multiple production environments every day.10

INTEGRATORS DEFINE ALL SAFETY SETTINGS

Universal Robots is the manufacturer of UR robots. The company that installs UR robots in a specific application is the integrator. Only an integrated and installed robot is considered a complete machine. The integrators of UR robots are responsible for ensuring that any significant hazard in the entire robot system is eliminated. This includes, but is not limited to:11

● Conducting a risk assessment of the entire system. In many countries this is required by law
● Interfacing other machines and additional safety devices if deemed appropriate by the risk assessment
● Setting up the appropriate safety settings in the Polyscope software (control panel)
● Ensuring that the user will not modify any safety measures by using a "safety password"
● Validating that the entire system is designed and installed correctly

Universal Robots has recognized potentially significant hazards, which must be considered by the integrator, for example:

● Penetration of skin by sharp edges and sharp points on tools or tool connectors
● Penetration of skin by sharp edges and sharp points on obstacles near the robot track
● Bruising due to stroke from the robot
● Sprain or bone fracture due to strokes between a heavy payload and a hard surface
● Mistakes due to unauthorized changes to the safety configuration parameters

Some safety-related features are purposely designed for cobot applications.
These features are particularly relevant when addressing specific areas in the risk assessment conducted by the integrator, including:

● Force and power limiting: Used to reduce clamping forces and pressures exerted by the robot in the direction of movement in case of collisions between the robot and operator.
● Momentum limiting: Used to reduce high-transient energy and impact forces in case of collisions between robot and operator by reducing the speed of the robot.
● Tool orientation limiting: Used to reduce the risk associated with specific areas and features of the tool and work-piece (e.g., to avoid sharp edges being pointed towards the operator).
● Speed limitation: Used to ensure the robot arm operates at a low speed.
● Safety boundaries: Used to restrict the workspace of the robot by forcing it to stay on the correct side of defined virtual planes and not pass through them.

Safety planes in action12

● Safety I/O: When this input safety function is triggered (via emergency buttons, sensors, etc.), a low signal is sent to the inputs and causes the safety system to transition to "reduced" mode.

Safety scanner13

Safety settings are effective in preventing many potential incidents. But what could happen if malicious actors targeted these measures in order to threaten human life?

Statement from the UR User Guide: "The safety configuration settings shall only be changed in compliance with the risk assessment conducted by the integrator.14 If any safety parameter is changed, the complete robot system shall be considered new, meaning that the overall safety approval process, including risk assessment, shall be updated accordingly."

CHANGING SAFETY CONFIGURATIONS REMOTELY

The exploitation process to remotely change the safety configuration is as follows:

Step 1. Confirm the remote version by exploiting an authentication issue on the UR Dashboard Server.
Step 2. Exploit a stack-based buffer overflow in the UR Modbus TCP service, and execute commands as root.
Step 3.
Modify the safety.conf file. This file overrides all safety general limits, joint limits, boundaries, and safety I/O values.
Step 4. Force a collision in the checksum calculation, and upload the new file. We need to fake this number since integrators are likely to write a note with the current checksum value on the hardware, as this is a common best practice.
Step 5. Restart the robot so the safety configurations are updated by the new file. This should be done silently.
Step 6. Move the robot in an arbitrary, dangerous manner by exploiting an authentication issue on the UR control service.

By analyzing and reverse engineering the firmware image ursys-CB3.1-3.3.4-310.img, I was able to understand the robot's entry points and the services that allow other machines on the network to interact with the operating system. For this demo I used the URSim simulator provided by the vendor with the real core binary from the robot image. I was able to create modified versions of this binary to run partially on a standard Linux machine, even though it was clearer to use the simulator for this example exploit. Different network services are exposed in the URControl core binary, and most of the proprietary protocols do not implement strong authentication mechanisms. For example, any user on the network can issue a command to one of these services and obtain the remote version of the running process (Step 1). Now that I have verified the remote target is running a vulnerable image, ursys-CB3.1-3.3.4-310 (UR3, UR5 or UR10), I exploit a network service to compromise the robot (Step 2). The UR Modbus TCP service (port 502) does not provide authentication of the source of a command; therefore, an adversary could put the robot into a state that negatively affects the process being controlled.
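To illustrate the "no source authentication" point: a Modbus/TCP request carries no credentials of any kind, so any host that can reach port 502 can speak the protocol. Here is a minimal sketch of a "read holding registers" frame (field layout per the public Modbus specification; the function name and values are illustrative, not code from the robot):

```c
#include <stddef.h>
#include <stdint.h>

/* Builds a 12-byte Modbus/TCP "read holding registers" (0x03) request.
   Note there is no authentication field anywhere in the frame. */
size_t build_modbus_read(uint8_t out[12], uint16_t txid,
                         uint16_t start_addr, uint16_t quantity) {
    out[0] = txid >> 8;       out[1] = txid & 0xff;       /* transaction id */
    out[2] = 0x00;            out[3] = 0x00;              /* protocol id = 0 */
    out[4] = 0x00;            out[5] = 0x06;              /* bytes following */
    out[6] = 0xff;                                        /* unit id */
    out[7] = 0x03;                                        /* function code */
    out[8] = start_addr >> 8; out[9] = start_addr & 0xff; /* first register */
    out[10] = quantity >> 8;  out[11] = quantity & 0xff;  /* register count */
    return 12;
}
```

Sending these bytes over a plain TCP connection to port 502 is the entire "handshake", which is why source authentication has to come from somewhere else (network segmentation at minimum).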
An attacker with IP connectivity to the robot can issue Modbus read/write requests and partially change the state of the robot or send requests to actuators to change the state of the joints being controlled. It was not possible to change any safety settings by sending Modbus write requests; however, a stack-based buffer overflow was found in the UR Modbus TCP receiver (inside the URControl core binary). The stack buffer overflows in the recv call, because a user-controlled buffer size determines how many bytes from the socket will be copied there. This is a very common issue. Before proceeding with the exploit, let's review the exploit mitigations in place. The robot's Linux kernel is configured to randomize (randomize_va_space=1 => ASLR) the positions of the stack, virtual dynamic shared object page, and shared memory regions. Moreover, this core binary does not allow any of its segments to be both writable and executable, due to the "No eXecute" (NX) bit. While overflowing the destination buffer, we also overflow pointers to the function's arguments. Before the function returns, these arguments are used in other function calls, so we have to provide these calls with valid values/structures; otherwise, we will never reach the end of the function and be able to control the execution flow. edx+0x2c is dereferenced and used as an argument for the call to 0x82e0c90. The problem appears afterwards, when EBX (which is calculated from our previously controlled pointer in EDX) also needs to point to a structure from which a file descriptor is obtained and later closed with "close". To choose a static address that might comply with these two requirements, I used the following static region, since all others change due to ASLR.
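Reduced to its essence, the flaw described above looks like the following sketch (my own illustration, with memcpy standing in for recv; names and offsets are invented, not the URControl code):

```c
#include <stdint.h>
#include <string.h>

/* Illustrative model of the bug: a length field parsed out of the
   attacker's packet is used, unchecked, as the copy size for a
   fixed-size stack buffer. A 'claimed' value larger than 256
   overwrites saved registers and the return address. */
uint32_t handle_frame(const uint8_t *pkt) {
    uint8_t frame[256];                                   /* stack buffer */
    uint32_t claimed = ((uint32_t)pkt[4] << 8) | pkt[5];  /* attacker-controlled */
    memcpy(frame, pkt + 6, claimed);                      /* BUG: no bounds check */
    return claimed;                                       /* bytes "processed" */
}
```

The fix is a single comparison, rejecting frames where claimed exceeds sizeof(frame), before the copy; the absence of that check is what ultimately hands over EIP.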
I wrote some scripts to find a suitable memory address and found 0x83aa1fc to be perfect since it suits both conditions:

● 0x83aa1fc+0x2c points to valid memory -> 0x831c00a ("INT32")
● 0x83aa1fc+0x1014 contains 0 (nearly all of this region is zeroed)

Now that I have met both conditions, execution can continue to the end of this function, and I get EIP control because the saved register on the stack was overflowed. At this point, I control most of the registers, so I need to place my shellcode somewhere and redirect the execution flow there. For this, I used return-oriented programming (ROP); the challenge will be to find enough gadgets to set everything I need for clean and reliable exploitation. Automatic ROP-chain tools did not work well for this binary, so I did it manually. First, I focus on my ultimate goal: to execute a reverse shell that connects to my machine. One key point when building a remote ROP-based exploit in Linux is system calls. Depending on the quality of the gadgets that use int instructions I find, I will be able to use primitives such as write or dup2 to reuse the socket that was already created to return a shell, or other post-exploitation strategies. In this binary, I only found one int 0x80 instruction. This is used to invoke system calls in Linux on x86. Because this is a one-shot gadget, I can only perform a single system call: I will use the execve system call to execute a program. This int 0x80 instruction requires setting up a register with the system call number (EAX, in this case 0xb) and then setting a register (EBX) that points to a special structure. This structure contains an array of pointers, each of which points to one of the arguments of the command to be executed. Because of how this vulnerability is triggered, I cannot use null bytes (0x00) in the request buffer. This is a problem because we need to send commands and arguments and also create an array of pointers that ends with a null byte.
To overcome this, I send a placeholder, like chunks of 0xFF bytes, and later on I replace them with 0x00 at runtime with ROP. In pseudocode this call would be (calls a TCP Reverse Shell): All the controlled data is in the stack, so first I will try to align the stack pointer (ESP) to my largest controlled section (STAGE 1). I divided the largest controlled sections into two stages, since they both can potentially contain many ROP gadgets. As seen before, at this point I control EBX and EIP. Next, I have to align ESP to any of the controlled segments so I can start doing ROP. The following gadget (ROP1 0x8220efa) is used to adjust ESP: This way ESP = ESP + EBX - 1 (STAGE 1 address in the stack). This aligns ESP to my STAGE 1 section. EBX should decrease ESP by 0x137 bytes, so I use the number 0xfffffec9 (4294966985), because when added to ESP it wraps around to the desired value. When the retn instruction of the gadget is executed, the ROP gadgets at STAGE 1 start doing their magic.

STAGE 1 of the exploit does the following:
● Zero out the \xff\xff\xff\xff at the end of the arguments structure. This way the processor will know that the EXECVE arguments are only those three pointers.
● Save a pointer to our first command of cmd[] in our arguments structure.
● Jump to STAGE 2, because I don't have much space here.

STAGE 2 of the exploit does the following:
● Zero out the \xff\xff\xff\xff at the end of each argument in cmd[].
● Save a pointer to the 2nd and 3rd argument of cmd[] in our arguments structure.
● Prepare registers for EXECVE. As seen before, we need EBX=*args[], EDX=0, EAX=0xb.
● Call the int 0x80 gadget and execute the reverse shell.

Once the TCP reverse shell payload is executed, a connection is made back to my computer. Now I can execute commands and use sudo to execute commands as root on the robot controller. Safety settings are saved in the safety.conf file (Step 3).
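The constant 0xfffffec9 works because x86 addition wraps modulo 2^32, so a large unsigned value behaves as a negative offset. A quick model of the pivot gadget's arithmetic (the esp value below is an arbitrary example, not taken from the exploit):

```c
#include <stdint.h>

/* Models the ROP1 gadget's effect: ESP = ESP + EBX - 1. With
   EBX = 0xfffffec9 (== -0x137 in 32-bit two's complement), the
   addition wraps around and moves ESP *down* the stack instead of up. */
uint32_t pivot(uint32_t esp, uint32_t ebx) {
    return esp + ebx - 1;   /* unsigned arithmetic wraps modulo 2^32 */
}
```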
Universal Robots implemented a CRC (STM-32) algorithm to provide integrity to this file and saves the calculated checksum on disk. This algorithm does not provide any real integrity to the settings, as it is possible to generate collisions or calculate new checksum values for new settings, overriding special files on the filesystem. I reversed how this calculation is made for each safety setting and created an algorithm that replicates it. In the video demo, I did not fake the new CRC value to keep it the same (it is located at the top-right of the screen), even though this is possible and easy (Step 4). Before modifying any safety setting on the robot, I set up a process that automatically starts a new instance of the robot controller after 25 seconds with the new settings. This will give me enough time to download, modify, and upload a new safety settings file. The following command sets up a new URControl process. I used Python since it allows me to close all current file descriptors of the running process when forking. Remember that I am forking from the reverse shell object, so I need to create a new process that does not inherit any of the file descriptors, so they are closed when the parent URControl process dies (in order to restart and apply the new safety settings). Now I have 25 seconds to download the current file, modify it, calculate the new CRC, re-upload it, and kill the running URControl process (which has the older safety settings config). I can programmatically use the kill command to target the current URControl instance (Step 5). Finally, I send this command to the URControl service in order to load the new installation we uploaded. I also close any popup that might appear on the UI. Then an attacker can simply call the movej function in the URControl service to move joints remotely, with a custom speed and acceleration (Step 6). This is shown at the end of the video. Once again, I see novel and expensive technology which is vulnerable and exploitable.
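The post does not publish the reversed checksum routine. The "STM-32" label suggests the CRC-32 variant implemented by STM32 hardware CRC units (polynomial 0x04C11DB7, initial value 0xFFFFFFFF, no bit reflection, no final XOR, commonly catalogued as CRC-32/MPEG-2), so a plausible byte-wise reimplementation, offered here as an assumption rather than the firmware's actual algorithm, would be:

```c
#include <stddef.h>
#include <stdint.h>

/* CRC-32/MPEG-2: poly 0x04C11DB7, init 0xFFFFFFFF, MSB-first,
   no input/output reflection, no final XOR. Assumed (not confirmed)
   to match the robot's safety.conf checksum scheme. */
uint32_t crc32_mpeg2(const uint8_t *data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint32_t)data[i] << 24;           /* fold in next byte */
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u
                                      : (crc << 1);
    }
    return crc;
}
```

Once the algorithm is known, "forcing a collision" is trivial: recompute the checksum over the modified safety.conf, or tweak unused bytes until the previously stored value is reproduced.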
A very technical bug, like a buffer overflow in one of the protocols, exposed the integrity of the entire robot system to remote attacks. We reported the complete flow of vulnerabilities to the vendors back in January, and they have yet to be patched. What are we waiting for?

1 https://www.robots.com/faq/show/what-is-an-industrial-robot
2 https://www.roboticsbusinessreview.com/manufacturing/cobot-market-boom-lifts-universal-robots-fortunes-2016/
3 http://www.rethinkrobotics.com/blog/humans-and-collaborative-robots-working-together-in-grand-rapids-mi/ https://www.youtube.com/watch?v=G6_LCwu7dOg
4 https://www.wired.com/2016/11/darpa-alias-autonomous-aircraft-aurora-sikorsky/
5 https://www.universal-robots.com/how-tos-and-faqs/faq/ur-faq/release-note-software-version-34xx/
6 https://www.youtube.com/watch?v=PtncirKiBXQ&t=1s
7 http://coro.etsmtl.ca/blog/?p=299
8 http://www.forensicmed.co.uk/pathology/head-injury/skull-fracture/
9 https://www.universal-robots.com/case-stories/
10 https://www.universal-robots.com/download/?option=22045#section22032 (UR5 User Manual)
11 https://academy.universal-robots.com
12 https://academy.universal-robots.com
13 Software_Manual_en_US.pdf - Universal Robots

Posted by Alex Barnsbee at 4:45 AM

Sursa: http://blog.ioactive.com/2017/08/Exploiting-Industrial-Collaborative-Robots.html
Red Teams

When you can't find the bad guys, make some up

You've spent money on security products that escalate nothing. You have a 24/7 SOC that hardly pays attention to their tools, or knows how to use them. You have intelligence feeds but no idea what consumes them. Logs are inaccessible, slow to query, or non-existent. Defenders have stopped hunting and lost a sense of purpose. That means it's time for a Red Team to come in and fuck shit up.

Why Red Teams

High-immersion, staged intrusions exercise incident response capability in a way that further improves all facets of a security program. It's possible to stage incidents that live somewhere between reality and drill. These exercises occur on our terms as Red Team designers, but are responded to as a real incident by a security response team (defenders). Observe that professional athletes compete on a schedule: a clear schedule to hone their craft and peak at the right moment. Defenders have no schedule. They are called to action by any number of adversaries at any time and expected to be at a constant peak of preparedness. Likewise, it must be satisfying after training for a boxing match to finally punch someone in the face. So while putting millions into your company's security, you had better be damn sure you're dusting off for a battle every now and then. Otherwise there's no emotional or tangible reward for your preparation.

Previous Work

We ran four "live fire" Red Team exercises at Facebook, for two of which I can discuss the design decisions that mix storytelling and realism with red teaming. Each one is named after the incident-handling codename.

Red Team One: "Vampire"

We had a third-party contracted Red Team take over our corporate domain via a physically planted laptop that was hidden behind a cabinet and connected to our network. The Red Team planners walked them right towards Domain Admin and chaperoned their access afterward. This allowed (supervised) lateral movement, malware, and exfiltration.
Let's be very clear: to get the Red Team beyond "defense in depth", we placed them behind defenses they'd normally have to breach. Without this it would only be a penetration test. This is cheating, and is OK. An effective exercise must take you through a worst-case scenario. Cheating helps you get there in reasonable time. It's about the final production. With this access, the Red Team then impersonated one of our greatest historical adversaries and threatened our security team for ~24 hours with terrifying extortion emails. This placed our team in a worst-case scenario, where it appeared as if the only mitigation was a complete wipe of corporate IT, from servers to laptops. Defenders weren't given a heads-up about this exercise. They had to deal with it anyway. Yep. During the response, we had to "cancel" vacation plans, calm panicked employees, and negotiate the tough decisions around production system shutdowns. Defenders were tasked with building a recovery plan which considered how much, if not all, we'd have to rebuild. This is a significant amount of the headache you deal with when you're really in the thick of a worst-case intrusion, and has little to do with forensics or IR. Next morning, we broke the news to the team that this was a staged (albeit real) good-guy intrusion. There was long silence, red faces, and we were doing a lot of explaining. I wasn't sure how the team would react. A former FBI agent we hired to do malware research broke the silence with a "...Then that's fucking awesome!" and the room lit up. Someone then reminded the room that it was April Fools' Day. Even after explaining, several defenders came up to me saying "but we found real and unknown malware" so "it must have been a real incident". They were in temporary disbelief when we told them we had malware custom-written for the exercise to add to the realism.
People showed up late to this meeting, which caused a lot of awkward laughs as we had to break the news to every late arrival in front of everyone, over and over. That evening we had plans to take the team to a bar to decompress. They were a bit dazed for a day or so, but were absolutely pumped about the future. We now had a group of defenders who had been through the stress of a worst possible breach, together. That experience is invaluable.

Vampire-murdering trophies for impactful incident responders

We rewarded the contributors of the incident response with these "Vampire" stakes and let them know we were proud of their response. The rest of the week was cleanup and post-mortem, all very calm and settled. Deep relationships were formed between the Security and IT teams because of the shared experience. There was also a new level of empathy from IT towards Security's goals after experiencing how bad a full intrusion could be for the company.

Evolving the Exercise

It was very clear that Facebook wanted to continue these exercises after experiencing a terrifying incident firsthand. There were three more comprehensive exercises in the years afterward, while I was there. These were some of the big parts of the exercises we've made public:

● We made use of a browser zero-day that was used to give a Red Team access to an engineer's laptop (while we disclosed it to the vendor).
● A former employee sat in on the Red Team, to simulate some "insider" knowledge an APT group may gain over prolonged access.
● We asked the FBI to give us the equivalent of a breach notification to kick off an unannounced exercise.
● Larger production systems were added to the scope of exercises, increasing the risk of a damaging mishandling.

My team was fully aware of the amount of resources we'd put into these exercises based on the previous two rounds, and knew that future exercises would involve new firepower (or we wouldn't be doing them).
They all involved strategic cheating, exploits, lateral movement, and some high-value target at the end of it.

The Window

In later exercises, we announced a two-month time window when these Red Team exercises would take place, giving FB Security their "bout" to train for. This was different than being totally unannounced, like Vampire. This tiny detail was a 100x lesson for me related to Red Team design. Telling the team when the incident could happen had a hugely positive side effect. For the month leading up to the Red Team window, the team leaned towards obsession. IR weaponization occurred. Old tools were dusted off. Comprehensive training. Re-calibrated paranoia. They wanted to beat the Red Team instead of a faceless, description-less adversary that might not even exist. Everyone wanted their contribution to be the one that caught a massive Red Team exercise. As a result, our prevention, detection, and IR momentum increased beyond our measure at the time. We hit an even higher security peak that lasted beyond the exercise.

The Box

We gave the Incident Response team a theme to follow: One Lead is All You Need. This means that after an intrusion is discovered from a single IOC, all lateral movement should be discovered if our response capability is strong. To symbolize this, we gave responders a wooden box. The box had 10 or so wax-sealed envelopes. Each envelope had an IOC that the Red Team knew about. The response team could open them whenever they needed to move things along, the goal being to open as few as possible. This was mostly used as a time box for the game, not any measure of success. But it became a point of pride for the team to open as few as possible. Almost all of the IOCs were discovered through our existing forensic and monitoring tools. This was a strong sign that we were getting better, as we probably wouldn't have discovered them all years earlier, or at least as quickly.
Puzzle pieces with IOCs etched into them, in envelopes, to gamify the IR process

Some envelopes had a "black spot" from pirate lore, which meant that the responder had to sit out and show another team member how to respond in their place. For Red Team Four, we used puzzle pieces.

Red Team Design

These influences were important in designing the four exercises.

"Game Master"

You need someone running the show, preferably the one who designed the exercise. This role talks to the Red Team, the Defenders, and as-senior-as-possible leadership if things get wacky. They'll make judgement calls when the Defenders need to ask the Red Team a question, and whether it's allowed to be answered. It's very likely that the defenders might find a real compromise, and will need to ask the Red Team whether it was them or not. The Game Master should hold the Red Team accountable for their notes. Making sure they have a detailed record of their intrusion is pure gold for follow-up, and for comparing how well the defenders discovered their badness. This role should also be very sensitive to issues around panic, shaming, and generally how much effort and exhaustion is building up among the responders. They should have strong relationships with the organizations that manage PR, legal, any sort of law enforcement outreach, etc.

Alternate Reality Games

A goal of an ARG is to partially buy into an overwhelming fictional experience. I did my best to design Red Teams to have a sense of realism to them, even if they were fictional. If you're not familiar with ARGs, there are many lessons to be pulled from them. It's most important to have participated in one and understood how it can sort of wash over your normal day-to-day. I personally had a lot of fun with The Jejune Institute in San Francisco, but there are many online ones as well. While participants should feel the urgency of a real incident, it shouldn't feel so over the top that they can't function.
You want your team to stretch their incident legs with this experience, so that any future incidents run much smoother.

Panic

It's important to manage a healthy amount of urgency, reasons to panic, and actual panic in designing an exercise. Don't make physical threats part of an unannounced exercise. That goes too far. Even with an announced exercise, it could go screwy. Vampire was high panic because it was unannounced. We didn't allow the drills to run more than 24 hours without an "all clear — this was a drill". They were extremely stressful this way. We would not let fully immersive drills go longer than 24 hours. Announced exercises involving a window were able to develop panic for several reasons, despite being announced. The first was that large networks and production systems were put at actual risk, due to the potential of botching the IR. Second, everyone wanted to discover what sort of new capability would be used, and be the one to catch it. So despite it being real panic — emotions and enthusiasm were positive.

Gamification

I didn't want to gamify things too much to detract from realism, but The Box was a very important tool to prevent these exercises from becoming a months-long ordeal. Real intrusions can easily become that long and painful. Each envelope served as a series of wins — opening an envelope and finding an IOC that was already discovered was a reason to celebrate. It kept momentum going, which is satisfying for a responder. Other than that, it was important to use our real tools, defenses, and systems in the exercise. None of that was pretend or table-topped.

Rewards

Red Teams are a terrible way to find under-performers and a great way to find rockstars. Because of the high amount of stress, there has to be a high amount of reward as well. Be sure to celebrate a Red Team season and have fun however possible.

Red Team Cheating (Fourth Wall Sabotage)

Real adversaries have unlimited time and we do not.
We have to force the Red Team's intrusion quickly. So we have to cheat! Give the Red Team important passwords. Walk them through doors, hand over design docs, or outright share a vulnerability they can exploit. Cheating effectively creates an incident to respond to, which is more important than finding actual vulnerabilities for our purposes of improving response and building empathy towards security among participants. This goes strongly against the use of a Red Team as an assessment tool. Remember — this is an exercise to improve full incident response and empathy towards security. I would argue that this more effectively improves actual security. Focusing strictly on assessments builds a policy of never-having-a-vulnerability. When measuring risk, there are no denominators to comfort you.

Positioning

Observe the kill chain when designing an exercise and consider each milestone for realism. Billion-dollar adversaries do not shoulder-surf their way into your company, so imitate realistic scenarios for better panic. Have a plausible storyline with a motivated adversary, their tactics, a successful intrusion, lateral movement, and exfiltration. Plan for interesting forensic artifacts in each step that a security team can discover and pivot their investigation with. Involve systems that would complicate forensics. Target production. See what happens. Choose organizational areas with weak security and involve their leadership in planning the exercise. Making them a part of the experience will be useful. For instance, if you're having trouble with corporate endpoint security, design an exercise around spear phishing and malware that stretches those muscles (or lack thereof). However, don't go so far that a team is simply decimated by the exercise and forced to observe their uselessness up close.

Time and Preparation

These exercises took 1–2 months of preparation, and the windows in which the defenders expected a potential Red Team attack were 1–2 months long.
For Red Team Three, we had 5 consultants from two firms for two weeks. We planned ahead for post-mortem resources, company all-hands to describe lessons learned, etc.

Security Team Knowledge

The first exercise was an unannounced emotional system shock. It was a reality check and a pretty significant experience for several team members. The second was not, and announcing ahead of time for the third and fourth exercises became a major hype tool and inspired internal motivation from the team to win.

Red Teams need to be planned in absolute secrecy. The responding team cannot know a thing about what's going on if the exercise is to be effective. No spoilers! Involving senior leadership to bless the exercise plans is also a great way to involve them for remediation and post-mortem functions. It will be less about getting them on the hook for helping, and more about getting them interested in the crazy stuff you're about to pull.

Follow Up

Never let a good crisis go to waste. Because it's not a real incident, extensive and calm note taking can take place without pressure. This helps set an example for future incidents, which will have a very different form of urgency. The lessons from Security Breach 101 can be followed very closely.

Things to measure

Make sure team members who are panicking are comforted. Security incidents make people feel like their careers are in jeopardy, especially if they were hired to prevent the same intrusion they now have to respond to. If you're not emotionally in tune with this, do not run a Red Team exercise. If someone is panicking, tell them it's a Red Team if it's not entirely clear.

The time lag between each response milestone is important. How quickly was a system imaged and distributed to investigators? How fast was an IOC turned into a firewall rule? How quickly could you clear employees as non-compromised?

Understand which team members were able to run down important resources in other organizations because of their tight personal networks with other employees.
Discover any bad blood between two organizations. Find the boundary spanners on your team and make sure they are appreciated for repairing any bonds. Watch for Legal, PR, Sales, etc. working without full information or approval. You can't let a sales guy tell you "I wanted my customer to hear about the breach from me first!" when they somehow hear about something. See more about this in Security Breach 101.

Shame

Don't fuck this up because you shamed employees or teams with bad security awareness. Seriously, don't. They'll never come back and will never involve you or your team again.

Future

Security prevention should be unit-testable like any other technology. With the status quo, unit testing of security is a very manual process or only vulnerability-assessment focused. I'm advising a company called AttackIQ — they're working to automate Red Team lessons and hold defensive technology accountable when it's not actually working, much like a Red Team would. For instance, it really shouldn't take an enormous Red Team exercise to know if you're highly responsive to an incident, or if a malware appliance, you know, actually catches malware. Same with firewalls, IDS, multi-factor, etc.

Conclusion

Immersive Red Teams are extremely high risk, high reward. They give defenders something to fight on a regular basis, improve morale, and weaponize security at a company to match up with reality. They're really fun to plan, too.

@magoo

I'm a security guy, former Facebook, Coinbase, and currently an advisor and consultant for a handful of startups. Incident Response and security team building is generally my thing, but I'm mostly all over the place.

Source: https://medium.com/starting-up-security/red-teams-6faa8d95f602
-
Bypassing VirtualBox Process Hardening on Windows

Posted by James Forshaw, Project Zero

Processes on Windows are securable objects, which prevents one user logged into a Windows machine from compromising another user's processes. This is a pretty important security feature, at least from the perspective of a non-administrator user. The security prevents a non-administrator user from compromising the integrity of an arbitrary process. This security barrier breaks down when trying to protect against administrators, specifically administrators with Debug privilege, as enabling this privilege allows the administrator to open any process regardless of the security applied to it.

There are cases where applications or the operating system want to actively defend processes from users such as administrators or even, in some cases, the same user as the running process who'd normally have full access. Protecting the processes is a pretty hard challenge if done entirely from user-mode applications. Therefore many solutions use kernel support to perform the protection. In the majority of cases these sorts of techniques still have flaws, which we can exploit to compromise the "protected" process.

This blog post will describe the implementation of Oracle's VirtualBox protected process and detail three different, but now fixed, ways of bypassing the protection and injecting arbitrary code into the process. The techniques I'll present can equally be applied to similar implementations of "protected" processes in other applications.

Oracle VirtualBox Process Hardening

Protecting processes entirely in user mode is pretty much impossible, there are just too many ways of injecting content into a process. This is especially true when the process you're trying to protect is running under the same context as the user you're trying to block. An attacker could, for example, open a handle to the process with PROCESS_CREATE_THREAD access and directly inject a new thread.
Or they could open a thread in the process with THREAD_SET_CONTEXT access and directly change the Instruction Pointer to jump to an arbitrary location. These are just the direct attacks. The attacker could also modify the registry or environment the process is running under, then force the process to load arbitrary COM objects, or Windows Hooks. The list of possible modifications is almost endless.

Therefore, VirtualBox (VBOX) enlists the help of the kernel to try to protect its processes. The source code refers to this as Process Hardening. VBOX tries to protect the processes from the same user the process is running under. A detailed rationale and technical overview is provided in source code comments. The TL;DR is that the protection gates access to the VBOX kernel drivers, which, due to their design, have a number of methods which can be used to compromise the kernel, or at least elevate privileges. This is why VBOX tries to prevent the current user compromising the process; getting access to the VBOX kernel driver would be a route to Kernel or System privileges. As we'll see, though, while some protections also prevent administrators compromising the processes, that's not the aim of the hardening code.

Multiple examples of issues with the driver and protection from device access were discovered by my colleague Jann in VBOX on Linux. On Linux, VBOX limits access to the VBOX driver to root only, and uses SUID binaries to allow the VBOX user processes to get access to the driver before dropping privileges. On Windows, instead of SUID binaries the VBOX driver uses kernel APIs to try to stop users and administrators opening protected processes and injecting code.

The core of the kernel component is in the Support\win\SUPDrv-win.cpp file. This code registers with two callback mechanisms supported by modern Windows kernels:

PsSetCreateProcessNotifyRoutineEx - Driver is notified when a new process is created.
ObRegisterCallback - Driver is notified when Process and Thread handles are created or duplicated.

The notification from PsSetCreateProcessNotifyRoutineEx is used to configure the protection structures for a new process. When the process subsequently tries to open a handle to the VBOX driver, the hardening will only permit access after the following verification steps are performed in the call to supHardenedWinVerifyProcess:

Ensure there are no debuggers attached to the process.
Ensure there is only a single thread in the process, which should be the one opening the driver, to prevent in-process races.
Ensure there are no executable memory pages outside of a small set of permitted DLLs.
Verify the signatures of all loaded DLLs.
Check the main executable's signature and that it is of a permitted type of executable (e.g. VirtualBox.exe).

Signature verification in the kernel is done using custom runtime code compiled into the driver. Only a limited set of Trusted Roots are permitted to be verified at this step, primarily Microsoft's OS and Authenticode certificates as well as the Oracle certificate that all VBOX binaries are signed with. You can find the list of permitted certificates in the source repository.

The ObRegisterCallback notification is used to limit the maximum access any other user process on the system can be granted to the protected process. The ObRegisterCallback API was designed for Anti-Virus to protect processes from being injected into or terminated by malicious code. VBOX uses a similar approach and limits any handle to the protected process to the following access rights:

PROCESS_TERMINATE
PROCESS_VM_READ
PROCESS_QUERY_INFORMATION
PROCESS_QUERY_LIMITED_INFORMATION
PROCESS_SUSPEND_RESUME
DELETE
READ_CONTROL
SYNCHRONIZE

The permitted access rights give the user most of the typical rights they'd expect, such as being able to read memory, synchronize to the process and terminate it, but do not allow injecting new code into the process.
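The effect of this handle filtering can be modeled as a simple bitwise mask applied to the requested access rights. The sketch below uses the standard Windows access-right constant values; the real filtering happens inside the driver's registered pre-operation callback, not in user-mode code like this:

```python
# Standard Windows process access-right constants.
PROCESS_TERMINATE                 = 0x0001
PROCESS_CREATE_THREAD             = 0x0002
PROCESS_VM_READ                   = 0x0010
PROCESS_VM_WRITE                  = 0x0020
PROCESS_QUERY_INFORMATION         = 0x0400
PROCESS_SUSPEND_RESUME            = 0x0800
PROCESS_QUERY_LIMITED_INFORMATION = 0x1000
DELETE       = 0x00010000
READ_CONTROL = 0x00020000
SYNCHRONIZE  = 0x00100000

# The maximum access VBOX allows on a handle to a protected process.
ALLOWED_PROCESS_ACCESS = (PROCESS_TERMINATE | PROCESS_VM_READ |
                          PROCESS_QUERY_INFORMATION |
                          PROCESS_QUERY_LIMITED_INFORMATION |
                          PROCESS_SUSPEND_RESUME |
                          DELETE | READ_CONTROL | SYNCHRONIZE)

def filter_access(desired_access):
    """Strip any rights outside the permitted set, as the callback would."""
    return desired_access & ALLOWED_PROCESS_ACCESS

# An attacker asking for injection-capable rights only gets read access back.
granted = filter_access(PROCESS_CREATE_THREAD | PROCESS_VM_WRITE | PROCESS_VM_READ)
assert granted == PROCESS_VM_READ
```

So even a request for full access is silently downgraded, which is why the handle viewer screenshots below show only the reduced access mask being granted.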
Similarly, access to threads is restricted to the following access rights to prevent modification of a thread's context or similar attacks:

THREAD_TERMINATE
THREAD_GET_CONTEXT
THREAD_QUERY_INFORMATION
THREAD_QUERY_LIMITED_INFORMATION
DELETE
READ_CONTROL
SYNCHRONIZE

We can verify this access limitation by opening the VirtualBox process and one of its threads and seeing what access rights we're granted. For example the following picture highlights the process and thread granted access.

While the kernel callbacks prevent direct modification of the process, as well as a user trying to compromise the integrity of the process at startup, they do very little against runtime DLL injection such as through COM. The hardening implementation needs to decide on what modules it'll allow to be loaded into the process. The decision, fundamentally, is based on Authenticode code signing. There are mitigation options to enable loading only Microsoft-signed binaries (such as PROCESS_MITIGATION_BINARY_SIGNATURE_POLICY). However, this policy isn't very flexible. Therefore, protected VBOX processes install hooks on a couple of internal functions in user mode to verify the integrity of any DLL which is being loaded into memory. The hooked functions are:

LdrLoadDll - Called to load a DLL into memory.
NtCreateSection - Called to create an Image Section object for a PE file on disk.
LdrRegisterDllNotification - This is a quasi-officially supported callback which notifies the application when a new DLL is loaded or unloaded.

These hooks expand the permitted set of signed DLLs which can be loaded. The kernel signature verification is okay for bootstrapping the process as only Oracle and Microsoft code should be present. However, when it comes to running a non-trivial application (VirtualBox.exe is certainly non-trivial) you're likely to need to load third-party signed code such as GPU drivers.
As the hooks are in user mode it's easier to call the system WinVerifyTrust API, which will verify certificate chains using the system certificate stores as well as handle the verification of files signed in a Catalog file. If the DLL being loaded doesn't meet VBOX's expected criteria for signing then the user-mode hooks will reject loading that DLL.

VBOX still doesn't completely trust the user; WinVerifyTrust will chain certificates back to a root certificate in the user's CA certificates. However, VBOX will only trust system CA certificates. As a non-administrator cannot add a new trusted root certificate to the system's list of CA certificates, this should severely limit the injection of malicious DLLs. You can get a real code signing certificate which should also be trusted, but the assumption is malicious code wouldn't want to go down that route.

Even if the code is signed, the loader also checks that the DLL file is owned by the TrustedInstaller user. This is checked in supHardNtViCheckIsOwnedByTrustedInstallerOrSimilar. A normal user should not be able to change the owner of a file to anything but themselves, therefore this should limit the impact of allowing any signed file to load.

The VBOX code does have a function, supR3HardenedWinIsDesiredRootCA, which is supposed to restrict what certificates are permitted as roots. In official builds the function's whitelist of specific CAs is commented out. There's a blacklist of certificates; however, unless your company is called "U.S. Robots and Mechanical Men, Inc" the blacklist won't affect you.

Even with all this protection the process isn't secure against an administrator. While an administrator can't bypass the security on opening the process, they can install a local machine Trusted Root CA certificate, sign a DLL, set its owner and force it to be loaded. This will bypass the image verification and load into the verified VBOX process.
In summary, the VBOX hardening is attempting to provide the following protections:

Ensure that no code is injected into protected binaries during initialization.
Prevent user processes from opening "writable" handles to protected processes or threads which would allow arbitrary code injection.
Prevent injection of untrusted DLLs through normal loading routes such as COM.

This whole process is likely to have some bugs and edge cases. There are so many different verification checks which must all fit together. So, assuming we don't want to get a code signing certificate and we don't have administrator rights, how can we get arbitrary code running inside a protected VBOX process? We'll focus primarily on the third protection in the list, as this is perhaps the most complex part of the protection and therefore is likely to have the most issues.

Exploiting the Chain-of-Trust in COM Registration

The first bug I'm going to describe was fixed as CVE-2017-3563 in VBOX version 5.0.38/5.1.20. This issue exploits the chain-of-trust for DLL loading to trick VBOX into loading Microsoft-signed DLLs which just happen to allow untrusted arbitrary code execution.

If you run Process Monitor against the protected VBOX process you'll notice that it uses COM, specifically it uses the VirtualBoxClient class which is implemented in the VBoxC.dll COM server. The nice thing about COM server registration, at least from the perspective of an attacker, is that the registration for a COM object can be in one of two places: the user's registry hive, or the local machine registry hive. For reasons of compatibility the user's hive is checked first, before falling back to the local machine hive. Therefore it's possible to override a COM registration with only a normal user's permissions, so when an application tries to load the designated COM object the application will instead load whatever DLL we've overridden it with.
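A per-user hijack of this kind only needs a couple of values under HKEY_CURRENT_USER. A minimal sketch of such a registration is below; the CLSID shown is a placeholder rather than the real VirtualBoxClient CLSID, and the DLL path is hypothetical:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Classes\CLSID\{00000000-0000-0000-0000-000000000000}\InprocServer32]
@="C:\\dummy\\testdll.dll"
"ThreadingModel"="Both"
```

Because HKCU\Software\Classes is merged ahead of the machine hive, any process running as the same user that activates that class will pick up the per-user InprocServer32 path first.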
Hijacking COM objects is not a new technique; it's been known for many years, especially for the purposes of malware persistence. It's seen a resurgence of late because of the renewed interest in all things COM. However, it's rare that COM hijacking is of importance for elevation of privilege outside of UAC bypasses.

As an aside, the connection between UAC and COM hijacking is that the COM runtime actively tries to prevent the hijack being used as an EoP route by disabling certain user registry lookups if the current process is elevated. Of course it wasn't always successful. This behavior only makes sense if you view UAC through the prism of it being a defendable security boundary, which Microsoft categorically claim it's not and never was. For example this blog post from early 2007 specifically states this behavior is to prevent Elevation of Privilege. I think the COM lookup behavior is one of the clearest indicators that UAC was originally designed to be a security boundary. It failed to meet the security bar and so was famously retconned into helping "developers" write better code.

If we could replace the COM registration with our own code we should be able to get code execution inside the hardened process. In theory all the hardening signing checks should stop us from loading untrusted code. In research, it's always worth trying something which you believe should fail, just in case, as sometimes you get a nice surprise. At minimum it'll give you insight into how the protection really works. I registered a COM object to hijack the VirtualBoxClient class in the user's hive and pointed it at an unsigned DLL (full disclosure: I used an admin account to tweak the Owner to TrustedInstaller just to test). When I tried to start a Virtual Machine I got the following dialog.

It's possible that I just made a mistake in the COM registration; however, testing the COM object in a separate application worked as expected.
Therefore this error is likely a result of failing to load the DLL. Fortunately, VBOX is generous and enables by default a log of all Process Hardening events. It's named VBoxHardening.log and is located in the Logs folder of the Virtual Machine you tried to start. Searching for the name of the DLL we find the following entries (heavily modified for brevity):

supHardenedWinVerifyImageByHandle: -> -22900 (c:\dummy\testdll.dll)
supR3HardenedScreenImage/LdrLoadDll: c:\dummy\testdll.dll: Not signed.
supR3HardenedMonitor_LdrLoadDll: rejecting 'c:\dummy\testdll.dll'
supR3HardenedMonitor_LdrLoadDll: returns rcNt=0xc0000190

So clearly our test DLL isn't signed and so the LdrLoadDll hook rejects it. The LdrLoadDll hook returns an error code which propagates back up to the COM DLL loader, which results in COM thinking the class doesn't exist. While it's not surprising that it wasn't as simple as just specifying our own DLL (and don't forget we cheated by setting the Owner), it at least gives us hope, as this result means the VBOX process will use our hijacked COM registration. All we need therefore is a COM object which meets the following criteria:

1. It's signed by a trusted certificate.
2. It's owned by TrustedInstaller.
3. When loaded it will do something that allows for arbitrary code execution in the process.

Criteria 1 and 2 are easy to meet: any Microsoft COM object on the system is signed by a trusted certificate (one of Microsoft's publisher certificates) and is almost certainly owned by TrustedInstaller. However, criterion 3 would seem much more difficult to meet; a COM object is usually implemented inside the DLL and we can't modify the DLL itself, otherwise it would no longer be signed.

It just so happens that there is a Microsoft-signed COM object installed by default which will allow us to meet criterion 3: Windows Script Components (WSC). WSC, also sometimes called Scriptlets, are also having a good run at the moment.
They can be used as an AppLocker bypass as well as being loaded from HTTP URLs. What's of most interest in this case is they can also be registered as a COM object.

A registered WSC consists of two parts:

The WSC runtime scrobj.dll, which acts as the in-process COM server.
A file which contains the implementation of the Scriptlet in a compatible scripting language.

When an application tries to load the registered class, scrobj.dll gets loaded into memory. The COM runtime requests a new object of the required class, which causes the WSC runtime to go back to the registry to look up the URL to the implementation Scriptlet file. The WSC runtime then loads the Scriptlet file and executes the embedded script contained in the file in-process. The key here is that as long as scrobj.dll (and any associated script language libraries such as JScript.dll) are valid signed DLLs from VBOX's perspective, the script code will run, as it can never be checked by the hardening code. This would get arbitrary code running inside the hardened process.

First let's check that scrobj.dll is likely to be allowed to be loaded by VBOX. The following screenshot shows the DLL is both signed by Microsoft and is also owned by TrustedInstaller.

So what does a valid Scriptlet file look like? It's a simple XML file. I'm not going to go into much detail about what each XML element means, other than to point out the script block which will execute arbitrary JScript code. In this case all this Scriptlet will do when loaded is start the Calculator process.

<scriptlet>
  <registration
    description="Component"
    progid="Component"
    version="1.00"
    classid="{DD3FC71D-26C0-4FE1-BF6F-67F633265BBA}"
  />
  <public/>
  <script language="JScript">
  <![CDATA[
    new ActiveXObject('WScript.Shell').Exec('calc');
  ]]>
  </script>
</scriptlet>

If you've written much code in JScript or VBScript you might now notice a problem: these languages can't do that much unless it's implemented by a COM object.
In the example Scriptlet file we can't create a new process without loading the WScript.Shell COM object and calling its Exec method. In order to talk to the VBOX driver, which is the whole purpose of injecting code in the first place, we'd need a COM object which gives us that functionality. We can't implement the code in another COM object as that wouldn't pass the image signing checks we're trying to bypass. Of course, there are always memory corruption bugs in scripting engines but, as everyone already knows by now, I'm not a fan of exploiting memory corruptions, so we need some other way of getting fully arbitrary code execution.

Time to bring in the big guns, the .NET Framework. The .NET runtime loads code into memory using the normal DLL loading routines. We can't therefore load a .NET DLL which isn't signed into memory, as that would still get caught by VBOX's hardening code. However, .NET does support loading arbitrary code from an in-memory array using the Assembly::Load method, and once loaded this code can basically act as if it was native code, calling arbitrary APIs and inspecting/modifying memory. As the .NET framework is signed by Microsoft, all we need to do is somehow call the Load method from our Scriptlet file and we can get full arbitrary code running inside the process.

Where do we even start on achieving this goal? From a previous blog post it's possible to expose .NET objects as COM objects through registration, and by abusing Binary Serialization we can load arbitrary code from a byte array. Many core .NET runtime classes are automatically registered as COM objects which can be loaded and manipulated by a scripting engine. The big question can now be asked: is BinaryFormatter exposed as a COM object? Why, yes it is. BinaryFormatter is a .NET object that a scripting engine can load and interact with via COM. We could now take the final binary stream from my previous post and execute arbitrary code from memory.
In the previous blog post the execution of the untrusted code had to occur during deserialization; in this case we can interact with the results of deserialization in a script, which makes the serialization gadgets we need much simpler. In the end I chose to deserialize a Delegate object which, when executed by the script engine, would load an Assembly from memory and return the Assembly instance. The script engine could then instantiate an instance of a Type in that Assembly and run arbitrary code. It does sound simple in principle; in reality there are a number of caveats. Rather than bog down this blog post with more detail than necessary, the tool I used to generate the Scriptlet file, DotNetToJScript, is available so you can read how it works yourself. Also the PoC is available on the issue tracker here. The chain from the JScript component to being able to call the VBOX driver looks something like the following:

I'm not going to go into what you can now do with the VBOX driver once you've got arbitrary code running in the hardened process; that's certainly a topic for another post. Although you might want to look at one of Jann's issues which describes what you might do on Linux.

How did Oracle fix the issue? They added a blacklist of DLLs which are not allowed to be loaded by the hardened VBOX process. The only DLL currently in that list is scrobj.dll. The list is checked after the verification of the file has taken place and covers both the current filename as well as the internal Original Filename in the version resources. This prevents you just renaming the file to something else, as the version resources are part of the signed PE data and so cannot be modified without invalidating the signature. In fairness to Oracle, I'm not sure there was any other sensible way of blocking this attack vector other than a DLL blacklist.

Exploiting User-Mode DLL Loading Behavior

The second bug I'm going to describe was fixed as CVE-2017-10204 in VBOX version 5.1.24.
This issue exploits the behavior of the Windows DLL loader and some bugs in VBOX to trick the hardening code into allowing an unverified DLL to be loaded into memory and executed. While this bug doesn't rely on exploiting COM loading as such, the per-user COM registration is a convenient technique to get LoadLibrary called with an arbitrary path. Therefore we'll continue to use the technique of hijacking the VirtualBoxClient COM object and just use the in-process server path as a means to load the DLL.

LoadLibrary is an API with a number of well known, but strange, behaviors. One of the more interesting from our perspective is the behavior with filename extensions. Depending on the extension the LoadLibrary API might add or remove the extension before trying to load the file. I can summarise it in a table, showing the file name as passed to LoadLibrary and the file it actually tries to load:

Original File Name    Loaded File Name
c:\test\abc.dll       c:\test\abc.dll
c:\test\abc           c:\test\abc.dll
c:\test\abc.blah      c:\test\abc.blah
c:\test\abc.          c:\test\abc

I've highlighted in green the two important cases. These are the cases where the filename passed into LoadLibrary doesn't match the filename which eventually gets loaded. The problem for any code trying to verify a DLL file before loading it is that CreateFile doesn't follow these rules, so in the highlighted cases if you opened the file for signature verification using the original file name you'd verify a different file to the one which eventually gets loaded.

In Windows there's usually a clear separation between Kernel32 code, which tends to deal with the many weird behaviors Win32 has built up over the years, and the "clean" NT layer exposed by the kernel through NTDLL. Therefore, as LoadLibrary is in Kernel32 and LdrLoadDll (which is the function the hardening hooks) is in NTDLL, this weird extension behavior would presumably be handled in the former.
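The extension handling in the table can be modeled as a short path-normalization function. This is a sketch of the rule as described, not the actual loader implementation:

```python
def loader_target(path):
    """Model LoadLibrary's filename-extension handling as described above."""
    # A trailing period means "load the name with the period removed".
    if path.endswith('.'):
        return path.rstrip('.')
    # Split off the final path component and check it for an extension.
    basename = path.rsplit('\\', 1)[-1]
    if '.' not in basename:
        # No extension at all: LoadLibrary appends ".dll".
        return path + '.dll'
    # Any other extension is used as-is.
    return path

# The two dangerous cases: verification by the original name checks a
# different file from the one the loader actually maps.
assert loader_target(r'c:\test\abc') == r'c:\test\abc.dll'
assert loader_target(r'c:\test\abc.') == r'c:\test\abc'
```

Any verifier that opens the file via CreateFile with the original name, then lets the loader resolve the name itself, is exposed in exactly these two cases.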
Let's look at a very simplified version of LoadLibrary to see if that's the case:

HMODULE LoadLibrary(LPCWSTR lpLibFileName)
{
  UNICODE_STRING DllPath;
  HMODULE ModuleHandle;
  ULONG Flags = 0; // Flags

  RtlInitUnicodeString(&DllPath, lpLibFileName);
  if (NT_SUCCESS(LdrLoadDll(DEFAULT_SEARCH_PATH,
                            &Flags, &DllPath, &ModuleHandle))) {
    return ModuleHandle;
  }
  return NULL;
}

We can see in this code that for all intents and purposes LoadLibrary is just a wrapper around LdrLoadDll. While it's really more complex than that in reality, the takeaway is that LoadLibrary does not modify the path it passes to LdrLoadDll in any way other than converting it to a UNICODE_STRING. Therefore, perhaps if we specify a DLL to load without an extension, VBOX will check the extension-less file for the signature but LdrLoadDll will instead load the file with the .DLL extension.

Before we can test that, we've got another problem to deal with: the requirement that the file is owned by TrustedInstaller. For the file we want VBOX to signature check, all we need to do is give an existing valid, signed file a different filename. This is what hard links were created for; we can create a different name in a directory we control which actually links to a system file which is signed and also maintains its original security descriptor, including the owner. The trouble with hard links is, as I described almost 2 years ago in a blog post, while Windows supports creating links to system files you can't write to, the Win32 APIs, and by extension the easy to access "mklink" command in the CMD shell, require the file be opened with FILE_WRITE_ATTRIBUTES access. Instead of using another application to create the link we'll just copy the file; however, the copy will no longer have the original security descriptor and so it'll no longer be owned by TrustedInstaller. To get around that, let's look at the checking code to see if there's a way around it. The main check for the Owner is in supHardenedWinVerifyImageByLdrMod.
Almost the first thing that function does is call supHardNtViCheckIsOwnedByTrustedInstallerOrSimilar, which we saw earlier. However, as the comments above the check indicate, the code will also allow files under the System32 and WinSxS directories to not be owned by TrustedInstaller. This is a bus-sized hole in the point of the check, as all we need is one writeable directory under System32. We can find some by running the Get-AccessibleFile cmdlet in my NtObjectManager PS module. There are plenty to choose from; we'll just pick the Tasks folder as it's guaranteed to always be there. So the exploit should be as follows:

1. Copy a signed binary to %SystemRoot%\System32\Tasks\Dummy\ABC
2. Copy an unsigned binary to %SystemRoot%\System32\Tasks\Dummy\ABC.DLL
3. Register a COM hijack pointing the in-process server to the signed file path from 1.

If you try to start a Virtual Machine you'll find that this trick works. The hardening code checks the ABC file for the signature, but LdrLoadDll ends up loading ABC.DLL. Just to check we didn't exploit something else, let's check the hardening log:

\..\Tasks\dummy\ABC: Owner is not trusted installer
\..\Tasks\dummy\ABC: Relaxing the TrustedInstaller requirement for this DLL (it's in system32).
supHardenedWinVerifyImageByHandle: -> 0 (\..\Tasks\dummy\ABC)
supR3HardenedMonitor_LdrLoadDll: pName=c:\..\tasks\dummy\ABC [calling]

The first two lines indicate the bypass of the Owner check as we expected. The second two indicate it's verified the ABC file and therefore will call the original LdrLoadDll, which ultimately will append the extension and try to load ABC.DLL instead. But wait, how come the other checks in NtCreateSection and the loader callback don't catch loading a completely different file? Let's search for any instance of ABC.DLL in the rest of the hardening log to find out:

\..\Tasks\dummy\ABC.dll: Owner is not trusted installer
\..\Tasks\dummy\ABC.dll: Relaxing the TrustedInstaller requirement for this DLL (it's in system32).
supHardenedWinVerifyImageByHandle: -> 22900 (\..\Tasks\dummy\ABC.dll)
supR3HardenedWinVerifyCacheInsert: \..\Tasks\dummy\ABC.dll
supR3HardenedDllNotificationCallback: c:\..\tasks\dummy\ABC.DLL
supR3HardenedScreenImage/LdrLoadDll: cache hit (Unknown Status 22900) on \...\Tasks\dummy\ABC.dll

Again the first two lines indicate we bypassed the Owner check because of our file's location. The next line, supHardenedWinVerifyImageByHandle, is more interesting however. This function verifies the image file. If you look back in this blog at the earlier log of this check you'll find it returned the result -22900, which was considered an error. However, in this case it's returning 22900, and as VBOX treats any result >= 0 as success, the hardening code gets confused and assumes that the file is valid. The negative error code is VERR_LDRVI_NOT_SIGNED in the source code, whereas the positive "success" code is VINF_LDRVI_NOT_SIGNED.

This seems to be a bug in the verification code when called inside the DLL Loader Lock, such as in the NtCreateSection hook. The code can't call WinVerifyTrust in case it tries to load another DLL, which would cause a deadlock. What would normally happen is VINF_LDRVI_NOT_SIGNED is returned from the internal signature checking implementation. That implementation can only handle files with embedded signatures, so if a file isn't signed it returns that information code to get the verification code to check if the file is catalog signed. What's supposed to happen is WinVerifyTrust is called and, if the file is still not signed, it returns the error code; however, as WinVerifyTrust can't be called due to the lock, the information code gets propagated to the caller, which assumed it was a success code.

The final question is why the final Loader Callback doesn't catch the unsigned file. VBOX implements a signed-file cache based on the path to avoid checking a file multiple times.
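The status-code confusion described above comes down to the sign convention: negative values are errors (VERR_*), positive values are informational successes (VINF_*). The sketch below models how a bare "rc >= 0" check conflates "verified" with "no embedded signature, catalog check still pending"; the constant value is taken from the log output above, and the helper names are made up for illustration:

```python
# Status convention modeled here: negative = error, positive = informational.
VERR_LDRVI_NOT_SIGNED = -22900   # definitive failure
VINF_LDRVI_NOT_SIGNED = 22900    # "no embedded signature, try the catalog"

def rc_success(rc):
    """The caller's check: treat any non-negative status as success."""
    return rc >= 0

def verify_image_no_catalog(has_embedded_signature):
    """Hypothetical in-lock verifier: it cannot call WinVerifyTrust, so an
    unsigned file yields the *informational* code instead of the error."""
    return 0 if has_embedded_signature else VINF_LDRVI_NOT_SIGNED

# An unsigned file screened while the loader lock is held looks "verified",
# while the same file checked out-of-lock would produce the error code.
assert rc_success(verify_image_no_catalog(False))
assert not rc_success(VERR_LDRVI_NOT_SIGNED)
```

The bug is not the convention itself but that the "catalog check still pending" code escapes to a caller that never performs the pending check.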
When the call to supHardenedWinVerifyImageByHandle was taken to be a success, the verifier called supR3HardenedWinVerifyCacheInsert to add a cache entry for this path with the "success" code. We can see that in the loader callback it tries to verify the file but gets back a "success" code from the cache, so it assumes everything's okay and the loading process is allowed to complete. Quite a complex set of interactions to get code running.

How did Oracle fix this issue? They just add the DLL extension if there's no extension present. They also handle the case where the filename has a trailing period (which would be removed when loading the DLL).

Exploiting Kernel-Mode Image Loading Behavior

The final bug I'm going to describe was fixed as CVE-2017-10129 in VBOX version 5.1.24. This isn't really a bug in VBOX as much as an unexpected behavior in Windows.

Through all this it's worth noting that there's an implicit race condition in what the hardening code is trying to do; specifically, you could change the file between the verification point and the point where the file is mapped. In theory you could do this to VBOX, but the timing window is somewhat short. You could use OPLOCKs and the like, but it's a bit of a pain; instead it'd be nice to get the TOCTOU attack for free.

Let's look at how image files are handled in the kernel. Mapping an image file on Windows is expensive: the OS doesn't use position-independent code, so it can't just map the DLL into memory as a simple file. Instead the DLL must be relocated to a specific memory address. This requires modifying pages of the DLL file to ensure any pointers are correctly fixed up. This is even more important when you bring ASLR into the mix, as ASLR will almost always force a DLL to be relocated from its base address.
Therefore, Windows caches an instance of an image mapping whenever it can; this is why the load address of a DLL doesn't change between processes on the same system: they're using the same cached image section.

The caching is actually in part under the control of the filesystem driver. When a file is opened, the IO manager allocates a new instance of the FILE_OBJECT structure and passes it to the IRP_MJ_CREATE handler for the driver. One of the fields that the driver can then initialize is the SectionObjectPointer. This is an instance of the SECTION_OBJECT_POINTERS structure, which looks like the following:

struct SECTION_OBJECT_POINTERS {
  PVOID DataSectionObject;
  PVOID SharedCacheMap;
  PVOID ImageSectionObject;
};

The fields themselves are managed by the cache manager, but the structure itself must be allocated by the file system driver. Specifically, the allocation should be one per file in the filesystem: while each open instance of a specific file will have a unique FILE_OBJECT instance, the SectionObjectPointer should be the same. This allows the cache manager to fill in the different fields and then reuse them if another instance of the same file is mapped. The important field here is ImageSectionObject, which contains the cached data for the mapped image section. I'm not going to delve into the detail of what the ImageSectionObject pointer contains, as it's not really relevant.

The important thing is that if the SectionObjectPointer, and by extension the ImageSectionObject pointer, is the same for a FILE_OBJECT instance, then mapping that file as an image will map the same cached image mapping. However, as the ImageSectionObject pointer is not used when reading from a file, it doesn't follow that what's actually cached still matches what's on disk. Trying to desynchronize the file data from the SectionObjectPointer seems to be pretty tricky with an NTFS volume, at least without administrator privileges.
One scenario where you can do this desynchronization is via the SMB redirector when accessing network shares. The reason is pretty simple: it's the local redirector's responsibility to allocate the SectionObjectPointer structure when a file is opened on a remote server. As far as the redirector is concerned, if it opens the file \Share\File.dll on a server twice then it's the same file. There's no other information the redirector can really use to verify the identity of the file; it has to guess. Any property you can think of (Object ID, modification time) can just be a lie. You could easily modify a copy of SAMBA to do this lying for you. The redirector also can't lock the file and ensure it stays locked. So it seems the redirector just doesn't bother with any of it; if it looks like the same file from its perspective, it assumes it's fine.

However, this only applies to the SectionObjectPointer. If the caller wants to read the contents of the file, the SMB redirector will go out to the server and try to read the current state of the file. Again, this could all be lies, and the server could return any data it likes. This is how we can create a desynchronization: if we map an image file from an SMB server, change the underlying file data, then reopen the file and map the image again, the mapped image will be the cached one, but any data read from the file will be whatever is current on the server. This way we can map an untrusted DLL first, then replace the file data with a signed, valid file (SMB supports reading the owner of the file, so we can spoof TrustedInstaller); when VBOX tries to load it, it will verify the signed file but map the cached untrusted image, and it will never know.

Having a remote server isn't ideal; however, we can do everything we need by using the local loopback SMB server and accessing files via the admin shares. Contrary to their name, admin shares are not limited to administrators if you're coming from localhost.
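A rough model of the desynchronization (plain Python standing in for kernel behavior; all names here are illustrative): the image-section cache is keyed only by the redirector's notion of file identity, so a mapping survives a server-side content swap while reads see the new bytes:

```python
# Toy model: the server can swap file contents at will; the client's
# image-section cache only checks the path, never the current bytes.

server_files = {r"\\localhost\c$\Dir\File.dll": "UNSIGNED-EVIL"}
image_section_cache = {}  # path -> cached mapped image

def map_image(path):
    # Reuse a cached image section if one exists for this path.
    if path not in image_section_cache:
        image_section_cache[path] = server_files[path]
    return image_section_cache[path]

def read_file(path):
    # Reads always go out to the server for the current contents.
    return server_files[path]

path = r"\\localhost\c$\Dir\File.dll"
map_image(path)                              # caches the untrusted image
server_files[path] = "SIGNED-BENIGN"         # attacker swaps the file
assert read_file(path) == "SIGNED-BENIGN"    # verification reads signed data
assert map_image(path) == "UNSIGNED-EVIL"    # but the stale image gets mapped
```

The last two assertions are the TOCTOU for free: the verifier and the mapper look at the "same" file and see different things.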
The key to getting this to work is to use a directory junction. Junctions are resolved on the server; the redirector client knows nothing about them. Therefore, as far as the client is concerned, if it opens the file \\localhost\c$\Dir\File.dll once and then reopens the same file, these could be two completely different files, as shown in the following diagram:

Fortunately, one thing which should be evident from the previous two issues is that VBOX's hardening code doesn't really care where the DLL is located as long as it meets its two criteria: it's owned by TrustedInstaller and it's signed. We can point the COM hijack at an SMB share on the local system. Therefore we can perform the attack as follows:

1. Set up a junction on the C: drive pointing at a directory containing our untrusted file.
2. Map the file via the junction over the c$ admin share using LoadLibrary; do not release the mapping until the exploit is complete.
3. Change the junction to point to another directory containing a valid, signed file with the same name as our untrusted file.
4. Start VBOX with the COM hijack pointing at the file. VBOX will read the file and verify that it's signed and owned by TrustedInstaller; however, when it maps it, the cached, untrusted image section will be used instead.

So how did Oracle fix this? They now check that the mapped file isn't on a network share by comparing the path against the prefix \Device\Mup.

Conclusions

The implementation of process hardening in VirtualBox is complex and, because of that, quite error prone. I'm sure there are other ways of bypassing the protection; it just requires people to go looking. Of course, none of this would be necessary if they didn't need to protect access to the VirtualBox kernel driver from malicious use, but that's a design decision that's probably going to be difficult to fix in the short term.

Posted by Ben at 9:10 AM

Sursa: https://googleprojectzero.blogspot.ro/2017/08/bypassing-virtualbox-process-hardening.html
Adapting Burp Extensions for Tailored Pentesting

Burp Suite is privileged to serve as a platform for numerous extensions developed and shared by our community of users. These expand Burp's capabilities in a range of intriguing ways. That said, many extensions were built to solve a very specific problem, and you might have ideas for how to adapt an extension to better fulfil your needs. Altering third-party Burp extensions used to be pretty difficult, but we've recently made sure all Burp extensions are open source and share a similar build process. In this post, I'll show you just how easy it's become to customize an extension and build a bespoke Burp environment for effective and efficient audits.

I'll personalize the Collaborator Everywhere extension by making it inject extra query parameters that are frequently vulnerable to SSRF, as identified by Bugcrowd for their excellent HUNT extension.

Development Environment Prerequisites

First, create your development environment. To edit an extension written in Java, you'll need to install the Java JDK and Gradle. Extensions written in Python and Ruby don't have any equivalent requirements, but Git is always useful. This is all you'll need to build the majority of Burp extensions; Gradle will automatically handle any extension-specific dependencies for you. I'll use Windows because it's reliably the most awkward development environment.

Obtain code

The next step is to obtain the code you want to hack up. Find your target extension on https://portswigger.net/bappstore and click the 'View Source Code' button. This will land you on a GitHub page something like https://github.com/portswigger/collaborator-everywhere

To get the code, either click download to get a zip, or open a terminal, type git clone https://github.com/portswigger/collaborator-everywhere, and cd into the new folder.

Verify environment (Java only)

Before you make any changes, ensure you can successfully build the jar and load it into Burp.
To find out how to build the jar, look for the BuildCommand line in the BappManifest.bmf file. For Collaborator Everywhere, it's simply gradle fatJar. The EntryPoint line shows where the resulting jar will appear.

Apply & test changes

If you can load the freshly built jar into Burp and it works as expected, you're ready to make your changes and rebuild. Collaborator Everywhere reads its payloads from resources/injections, so I've simply added an extra line for each parameter I want to inject. For example, the following line adds a GET parameter called 'feed', formatted as an HTTP URL:

param,feed,http://%s/

If a particular payload is causing you grief, you can comment it out using a #. The extension Flow may come in useful for verifying that your modifications work as expected; it shows requests made by all Burp components, including the scanner. Here, we can see our modified extension is working as intended:

Finally, be aware that innocuous changes may have unexpected side effects.

Conclusion

If you feel like sharing your enhanced extension with the community, feel free to submit your changes back to the PortSwigger repository as a pull request, or release them as a fork. I haven't pushed my Collaborator Everywhere tweak into an official release because the extra parameters unfortunately upset quite a few websites. Some extensions may be more difficult to modify than others, but we've seen that with a little environment setup, you can modify Burp extensions with impunity.

Enjoy - @albinowax

Posted by James Kettle at 2:47 PM

Sursa: http://blog.portswigger.net/2017/08/adapting-burp-extensions-for-tailored.html
The Ultimate Online Game Hacking Resource

A curated list of tutorials/resources for hacking online games! From dissecting game clients to cracking network packet encryption, this is a go-to reference for those interested in the topic of hacking online games. I'll be updating this list whenever I run across excellent resources, so be sure to Watch/Star it! If you know of an excellent resource that isn't yet on the list, feel free to email it to me for consideration.

Blog Posts, Articles, and Presentations

- KeyIdentity's Pwn Adventure 3 Blog Series - A series of blog posts detailing various approaches to hacking Pwn Adventure 3.
- How to Hack an MMO - An article from 2014 providing general insight into hacking an online game.
- Reverse Engineering Online Games - Dragomon Hunter - An in-depth tutorial showing how to reverse engineer online games via the game Dragomon Hunter.
- Hacking/Exploiting/Cheating in Online Games (PDF) - A presentation from 2013 that delves deeply into hacking online games, from defining terminology to providing code examples of specific hacks.
- Hacking Online Games - A presentation from 2012 discussing various aspects of hacking online games.
- For 20 Years, This Man Has Survived Entirely by Hacking Online Games - A hacker says he turned finding and exploiting flaws in popular MMO video games into a lucrative, full-time job.
- Hackers in Multiplayer Games - A Reddit post discussing hacking in multiplayer games.
- Reverse Engineering Network Protocols - A very helpful comment from a Reddit post inquiring about reversing network protocols.
- Deciphering MMORPG Protocol Encoding - An informative discussion from a question on Stack Overflow.
- Reverse Engineering of a Packet Encryption Function of a Game - An informative discussion from a question on StackExchange.
Videos

- How to Hack Local Values in Browser-Based Games with Cheat Engine - This video teaches you how to find and change local values (which might appear as server-based values) in browser-based games.
- Reverse-Engineering a Proprietary Game Server with Erlang - This talk details advantages Erlang has over other languages for reverse engineering protocols and analyzing client files. A live demo showcasing some of these tools and techniques is also given.
- DEFCON 19: Hacking MMORPGs for Fun and Mostly Profit - This talk presents a pragmatic view of both threats and defenses in relation to hacking online games.

Books

- Game Hacking - Game Hacking shows programmers how to dissect computer games and create bots.
- Attacking Network Protocols - Attacking Network Protocols is a deep-dive into network vulnerability discovery.
- Practical Packet Analysis, 3rd Edition - Practical Packet Analysis, 3rd Ed. teaches you how to use Wireshark for packet capture and analysis.
- Exploiting Online Games: Cheating Massively Distributed Systems - This book takes a close look at security problems associated with advanced, massively distributed software in relation to video games.

Online Game Hacking Forums

- Guided Hacking - Discussion of multiplayer and single-player game hacks and cheats.
- UnKnoWnCheaTs Forum - Discussion of multiplayer game hacks and cheats.
- MPGH (Multi-Player Game Hacking) Forum - Discussion of multiplayer game hacks and cheats.
- ElitePVPers - Discussion of MMO hacks, bots, cheats, guides and more.
- OwnedCore - An MMO gaming community for guides, exploits, trading, hacks, model editing, emulation servers, programs, bots and more.

Sursa: https://github.com/dsasmblr/hacking-online-games/
Defcon 23 open source tool NetRipper: code analysis and usage

2017-08-21

0x01 Research background

While analyzing the source code of several bank trojans leaked from Russia, I found that most of them contain a module that obtains the user's personal information by hijacking the browser's network packets: by intercepting packets in the browser's memory before encryption or after decryption, they obtain the packets' plaintext data. The NetRipper tool released at Defcon 23 has the same capabilities as these malicious bank trojans; its open source code is clearly structured and easy to extend, so studying this tool is very useful for studying this class of malicious behavior. The github address is at [github], and the author also provides Metasploit and PowerShell versions of the module. This article analyzes the core C++ code used by the different versions.

0x02 NetRipper overview

This open source tool works by hooking a process's key network functions (the functions that see packets before encryption and after decryption) to hijack the client program's plaintext data. It covers a number of mainstream clients, such as Chrome, Firefox, IE, WinSCP and Putty, as well as the packet encryption/decryption interfaces provided by some code libraries. Based on whether these function interfaces are exported, they can be divided into "unexported function interfaces" and "exported function interfaces". The encryption/decryption routines in Chrome, Putty, SecureCRT and WinSCP are unexported: their locations are found through reverse-engineered signatures and then hijacked by hooking. Mozilla Firefox, in contrast, uses the encryption/decryption functions in the modules nss3.dll and nspr4.dll: nss3.dll exports PR_Read, PR_Write and PR_GetDescType, while nspr4.dll exports PR_Send and PR_Recv. Other hooked modules include ncrypt.dll, secur32.dll and ssh2core73u.dll.
The ordinary Winsock2 transmission functions are also hooked, to directly capture information that is sent unencrypted.

Hooking an unexported function first requires finding the hook point, which is much more complicated than hooking a known exported function. The process's packet send/receive path has to be reverse engineered to find the key points (the function interfaces that handle packets before encryption and after decryption). For the Chrome, Putty and WinSCP processes this is what needs to be done: using the open source code as an aid, first find the signature of the network function, then before hooking search the process memory space for its address.

As the software is upgraded and hardened, the packet-handling functions may change; the NetRipper code then needs to be modified to adapt to these changes, re-debugging to find the corresponding signature and resetting the hook point.

Let's verify this with Putty as an example: using Cheat Engine, the signature of the send function is found at position 0x00408AD7; IDA shows it as sub_408AD7. The prototype of this function matches the declaration in the code. How to find the hook point through debugging is a longer topic, analyzed in detail in the next article. Putty and WinSCP are open source, so their source code can be used as a reference; for Chrome, the program has to be reverse engineered to locate the hook point.

0x03 Calculating the hook offset address

E8 XXXXXXXX, where XXXXXXXX = destination address - source address - 5.

For example, load calc.exe in OllyDbg:

Offset in the instruction: 0xFFFF99EB
Destination address: 0x6C768
Current instruction address: 0x72D78

Calculation: 0x100000000 - (0x72D78 + 5 - 0x6C768) = 0xFFFF99EB

Q1: Why is the negative offset obtained by subtracting from 0x100000000 (i.e. taking the two's complement)?
A1: Because the offset is stored as a two's complement. The address field is a DWORD (unsigned long), a 4-byte integer that can represent the range 0x00000000 to 0xFFFFFFFF, so a negative displacement is stored as 0x100000000 minus its magnitude (note that the constant is 0x100000000, that is 0xFFFFFFFF + 1, not 0xFFFFFFFF itself).

Q2: Why is 5 added to the current instruction address before subtracting, to calculate the offset?

A2: This is how the CALL/JMP instruction computes its offset. CALL/JMP (E8 or E9) occupies 5 bytes; to jump to the destination, the CPU first advances past the current instruction (its length, 5 bytes) and then applies the relative offset. The example above shows the calculation gives the correct result.

NetRipper practical example: NetRipper also handles the hot-patching case, which is processed the same way except that 5 bytes are added to the function address and the new location is used as the function's hook point.

NetRipper's hook handling is also interesting:

(1) A structure, HookStruct, stores the information about each hooked function, maintained in a vector.
(2) The callback is written in inline assembly. When the original function is called, this assembly stub executes and calls Hooker::GetHookStructByOriginalAddress with the original function's address as the parameter; that function searches the vector<HookStruct> of all registered hooks and determines the callback based on the function's address. An explanation of this inline assembly code is given below.

Note: for a function like recv, the captured data only becomes available after the original function has been called; this has to be handled in the post-call part of the hook.
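The offset arithmetic from section 0x03 can be checked with a few lines of Python (a sketch; rel32 is an illustrative helper name, and the values are the ones from the calc.exe example):

```python
def rel32(src, dst):
    """Relative offset encoded after an E8/E9 opcode.

    The displacement is measured from the end of the 5-byte
    instruction, and negative values wrap as two's complement
    in a 32-bit DWORD.
    """
    return (dst - (src + 5)) & 0xFFFFFFFF

# calc.exe example: instruction at 0x72D78 calling 0x6C768.
assert rel32(0x72D78, 0x6C768) == 0xFFFF99EB
# Forward call for comparison: displacement is just dst - src - 5.
assert rel32(0x1000, 0x2000) == 0xFFB
```

Masking with 0xFFFFFFFF is exactly the "0x100000000 minus the magnitude" rule: Python's arbitrary-precision integers make the two's-complement wrap explicit.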
0x04 NetRipper hook processing

0x05 Injection in NetRipper

NetRipper provides both conventional remote-thread injection and reflective injection. Reflective injection is now very common: apart from being frequently used by malicious code, this approach is also used by the Metasploit framework. There is plenty of material about this injection method, so it is not covered here.

0x06 Code framework analysis

To make the tool extensible, the core code and the auxiliary modules are all encapsulated in C++ classes with low coupling, easy to configure for different tasks.

(1) Injection and dynamic configuration. The core module is a DLL, so it needs to be injected into the target process; the project provides the injection code, with a choice between conventional remote-thread injection and reflective injection. The injector is a command-line tool that can also be used to configure the injected DLL.

(2) Plug-in system. The code uses a plug-in system written by the author, encapsulated in a C++ class; the plug-in functions are member functions, and it can easily be extended following the existing code.

(3) Debug log. Debug output is provided, wrapped by the author in a class; the user can configure whether to use it.

(4) Function flow control. Each hooked thread can be constrained, via a flow-control class, so that after its hook fires it performs only one type of operation. For example, a hook callback that writes information to a file can be constrained so that a given thread's hooked function only writes to one log file.
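The GetHookStructByOriginalAddress lookup described in the hook-handling notes above can be sketched in a few lines (Python standing in for the C++ vector<HookStruct>; the names mirror the source but the code is purely illustrative):

```python
class HookStruct:
    # Minimal stand-in for NetRipper's per-hook bookkeeping.
    def __init__(self, original_address, callback, name):
        self.original_address = original_address
        self.callback = callback
        self.name = name

hooks = []  # plays the role of the vector<HookStruct>

def get_hook_struct_by_original_address(addr):
    # The inline-assembly stub calls this with the original function's
    # address to find out which callback should run.
    for h in hooks:
        if h.original_address == addr:
            return h
    return None

hooks.append(HookStruct(0x00408AD7, lambda data: data.upper(), "putty_send"))
hook = get_hook_struct_by_original_address(0x00408AD7)
assert hook is not None and hook.name == "putty_send"
assert hook.callback("get") == "GET"      # dispatch to the registered callback
assert get_hook_struct_by_original_address(0xDEADBEEF) is None
```

Keying the registry on the original function's address is what lets one shared assembly stub serve every hook: the stub only needs to know where it was called from.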
0x07 Using NetRipper

NetRipper is mainly used for post-exploitation. Once the target host is compromised, deeper penetration requires more information, and NetRipper obtains it by hijacking the plaintext traffic of browsers and clients. It hooks browsers (IE/Chrome/Firefox) to capture the information users submit; for clients such as WinSCP and Putty it directly captures the entered account details and other input, helping penetration testers and attackers pivot from the Windows system to a Linux system and maximize the attack. Below is a test using Putty as an example:

(1) Inject the DLL into the Putty process.
(2) Use Putty to log in to an SSH server to verify.
(3) By default, the log files are generated under temp in the user directory.
(4) Putty decrypted packet data: you can see that the entered username root, the password qwe and the entered command ifconfig have been recorded; this is the hook on the packet decryption path.
(5) Hooking the send/recv functions captures Putty's encrypted data.

Author: Renzi line, please credit FreeBuf.COM

Sursa: http://www.freebuf.com/articles/web/144709.html (Google Translate)
What does "necurate" (dirty) mean?
Don't take what I'm saying as valid, but if the money is sent through a bank it probably goes through SWIFT, where it gets checked; somewhat higher fees are (probably) also charged, and everything is (I think) in order. PS: If you have it in cash and they catch you, it's bad: http://www.gandul.info/florin-salam-a-fost-prins-de-americani-pe-aeroport-suma-imensa-pe-care-i-ar-fi-confiscat-o-16173322.html
Samsung - Hardware iPhone - Software
Very useful, and it's not complicated (except the encoding and chunked parts, which are manual); it can be extremely handy. Is anyone going to make a Burp plugin for the bypasses?
[RST] NetRipper - Smart traffic sniffing for penetration testers
Nytro replied to Nytro's topic in Proiecte RST
I've added x64 support: https://github.com/NytroRST/NetRipper Could someone test whether everything is OK?