Everything posted by Nytro
-
DiskFiltration: Data Exfiltration from Speakerless Air-Gapped Computers via Covert Hard Drive Noise
Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, Yuval Elovici
(Submitted on 11 Aug 2016)

Air-gapped computers are disconnected from the Internet physically and logically. This measure is taken in order to prevent the leakage of sensitive data from secured networks. In the past, it has been shown that malware can exfiltrate data from air-gapped computers by transmitting ultrasonic signals via the computer's speakers. However, such acoustic communication relies on the availability of speakers on a computer. In this paper, we present 'DiskFiltration,' a covert channel which facilitates the leakage of data from an air-gapped computer via acoustic signals emitted from its hard disk drive (HDD). Our method is unique in that, unlike other acoustic covert channels, it doesn't require the presence of speakers or audio hardware in the air-gapped computer. Malware installed on a compromised machine can generate acoustic emissions at specific audio frequencies by controlling the movements of the HDD's actuator arm. Digital information can be modulated over the acoustic signals and then be picked up by a nearby receiver (e.g., smartphone, smartwatch, laptop, etc.). We examine the HDD anatomy and analyze its acoustical characteristics. We also present signal generation and detection, and data modulation and demodulation algorithms. Based on our proposed method, we developed a transmitter on a personal computer and a receiver on a smartphone, and we provide the design and implementation details. We also evaluate our covert channel on various types of internal and external HDDs in different computer chassis and at various distances.
With DiskFiltration we were able to covertly transmit data (e.g., passwords, encryption keys, and keylogging data) from air-gapped computers to a smartphone at an effective bit rate of 180 bits/minute (10,800 bits/hour) and at a distance of up to two meters (six feet).

Subjects: Cryptography and Security (cs.CR)
Cite as: arXiv:1608.03431 [cs.CR] (or arXiv:1608.03431v1 [cs.CR] for this version)
Submission history: [v1] Thu, 11 Aug 2016 12:06:12 GMT, from Mordechai Guri
Source: https://arxiv.org/abs/1608.03431
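The paper's actual modulation and timing parameters are in the full text; as a rough illustration of the idea only (on-off keying of seek activity, with hypothetical timings chosen to match the reported order of magnitude of 180 bits/minute, i.e. about 3 bits/second), a transmitter-side sketch might look like this:

```python
# Illustrative sketch of DiskFiltration-style modulation: bits become
# intervals of HDD seek activity ("1") vs. idle ("0"), producing acoustic
# tones a nearby microphone can threshold. All parameter names and timings
# here are hypothetical, not taken from the paper.

def modulate(bits, bit_time_s=0.333):
    """Turn a bit string into a schedule of (action, duration) pairs.
    ~0.333 s per bit gives roughly 3 bits/second."""
    return [("seek" if b == "1" else "idle", bit_time_s) for b in bits]

def demodulate(schedule):
    """Recover the bit string from a schedule. A real receiver would
    instead threshold acoustic energy per time window."""
    return "".join("1" if action == "seek" else "0"
                   for action, _ in schedule)
```

A real transmitter would drive the actuator arm with actual seek operations during "seek" intervals; this sketch only shows the framing.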
-
Every Windows 10 in-place Upgrade is a SEVERE Security risk
Nytro posted a topic in Stiri securitate
Monday, November 28, 2016

This is a big issue and it has been there for a long time. Just a month ago I finally got verification that the Microsoft product groups not only know about this but that they have begun working on a fix. As I want to be known as a white hat, I had to wait for this to happen before I blogged about it.

There is a small but CRAZY bug in the way the "Feature Update" (previously known as "Upgrade") is installed. The installation of a new build is done by reimaging the machine, and the image is installed by a small version of Windows called Windows PE (Preinstallation Environment). This has a troubleshooting feature that allows you to press SHIFT+F10 to get a Command Prompt. This sadly allows access to the hard disk, because during the upgrade Microsoft disables BitLocker. I demonstrate this in a video in the original post.

This would take place when you take the following update paths:
- Windows 10 RTM --> 1511 or 1607 release (November Update or Anniversary Update)
- Any build to a newer Insider Build (up to the end of October 2016 at least)

The real issue here is the Elevation of Privilege that takes a non-admin to SYSTEM (the root of Windows) even on a BitLocker (Microsoft's hard disk encryption) protected machine. And of course this doesn't require any external hardware or additional software. It's just a crazy bug, I would say.

Why would a bad guy do this:
- An internal threat who wants to get admin access just has to wait for the next upgrade, or convince someone it's OK for him to be an Insider
- An external threat with access to a computer waits for it to start an upgrade to get into the system

I sadly can't offer solutions better than:
- Don't allow unattended upgrades
- Keep a very tight watch on the Insiders
- Stick to the LTSB version of Windows 10 for now

I am known to share how I do things myself, and I'm happy to say I have instructed my customers to stay on the Long-Term Servicing Branch for now. At least they can wait until this is fixed and move to a more current branch then. I meet people all the time who say that LTSB is a legacy way, but when I say I'm going to wait a year or two to get the worst bugs out of this new "just upgrade" model -- this is what I meant…

Posted by Sami Laiho at 6:14 PM
Source: http://blog.win-fu.com/2016/11/every-windows-10-in-place-upgrade-is.html
-
By Fahmida Y. Rashid, Senior Writer, InfoWorld | Nov 29, 2016

CERT to Microsoft: Keep EMET alive

A Windows system with the Enhanced Mitigation Experience Toolkit properly configured is more secure than a standalone Windows 10 system, says CERT.

Microsoft wants to stop supporting its Enhanced Mitigation Experience Toolkit (EMET) because all of its security features have been baked into Windows 10. A vulnerability analyst says Windows with EMET offers additional protection not available in standalone Windows 10. "Even a Windows 7 system with EMET configured protects your application more than a stock Windows 10 system," said Will Dormann, a vulnerability analyst with the Computer Emergency Response Team (CERT) at Carnegie Mellon University's Software Engineering Institute.

Originally introduced in 2009, EMET adds exploit mitigations, including address space layout randomization (ASLR) and data execution prevention (DEP), to Windows systems to make it harder for malware to trigger unpatched vulnerabilities. Since Windows 10 includes EMET's anti-exploit protections by default, Microsoft is planning to end-of-life the free tool in July 2018. CERT's Dormann said Microsoft should keep supporting the toolkit because Windows 10 does not provide all of the application-specific mitigations available in EMET. "Windows 10 does indeed provide some nice exploit mitigations. The problem is that the software you are running needs to be specifically compiled to take advantage of them," Dormann said.

OS-level vs application-level defenses

Dormann argues that Microsoft should keep supporting the toolkit -- currently EMET 5.51 -- because it provides both systemwide protection and application-specific mitigations that keep it relevant for Windows security, even on Windows 10 systems. EMET's systemwide protections include the aforementioned ASLR and DEP, Structured Exception Handler Overwrite Protection (SEHOP), Certificate Trust (Pinning), and Block Untrusted Fonts. EMET's application-specific protections include DEP, SEHOP, ASLR, Null Page Allocation, Heapspray Allocations, Export Address Table Access Filtering (EAF), Export Address Table Access Filtering Plus (EAF+), Bottom-up Randomization (BottomUp ASLR), Attack Surface Reduction (ASR), Block Untrusted Fonts, and Return-Oriented Programming mitigations.

Microsoft's principal lead program manager for OS security, Jeffrey Sutherland, recently said that users should upgrade to Windows 10 since the latest operating system natively includes the security features provided by EMET. That is true to some extent, as DEP, SEHOP, ASLR, BottomUp ASLR, and ROP mitigation (as Control Flow Guard) are part of Windows 10, but many of the application-specific mitigations are not. What Sutherland neglected to consider was that most Windows administrators rely on EMET to apply all of the available exploit mitigations to applications. Consider that a Windows 10 system with EMET properly configured has 13 more mitigations -- the application-specific controls -- than a standalone Windows 10 system. "It is pretty clear that an application running on a stock Windows 10 system does not have the same protections as one running on a Windows 10 system with EMET properly configured," Dormann said.

Application defenses still lagging

Windows 10 may be the most secure Windows ever, but applications have to be compiled to use the exploit mitigation features to actually benefit from those enhanced security features. For example, if an application isn't designed to use Control Flow Guard, then it doesn't benefit from Return-Oriented Programming (ROP) defenses, despite the fact that Control Flow Guard is part of Windows 10. "Out of all of the applications you run in your enterprise, do you know which ones are built with Control Flow Guard support? If an application is not built to use Control Flow Guard, it doesn't matter if your underlying operating system supports it or not," Dormann said.

The problem isn't limited to third-party and custom enterprise applications, as there are older -- but still widely used -- Microsoft applications that don't use the advanced exploit mitigations. For example, Microsoft does not compile all of Office 2010 with the /DYNAMICBASE flag to indicate compatibility with ASLR. An attacker could potentially bypass ASLR and exploit a memory corruption vulnerability by loading a malicious library into the vulnerable application's process space. Ironically, administrators could protect the application from being targeted in this way by running EMET with application-specific mitigations. "Because we cannot rely on all software vendors to produce code that uses all the exploit mitigations available, EMET puts this control back in our hands," Dormann said.

Don't pick sides; do both

Microsoft says to start migrating to Windows 10 and stop using EMET by 2018. A senior engineer at CERT, tasked by the United States Department of Homeland Security with making security recommendations of national significance, says EMET still offers better security than standalone Windows 10. What is a Windows administrator to do? The answer, according to Dormann, is to follow both recommendations: Upgrade to Windows 10 to take advantage of native exploit mitigation features, and install EMET to apply application-specific mitigations.
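Whether a binary was linked with /DYNAMICBASE is recorded in the DllCharacteristics field of the PE optional header (flag 0x0040). A minimal check, written here as a hypothetical helper rather than anything from the article, could look like this:

```python
import struct

# IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE: set when linked with /DYNAMICBASE,
# i.e. the image opts in to ASLR.
IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040

def has_dynamic_base(pe_bytes):
    """Return True if a PE image carries the /DYNAMICBASE (ASLR) flag."""
    # e_lfanew at DOS-header offset 0x3C points at the "PE\0\0" signature.
    (e_lfanew,) = struct.unpack_from("<I", pe_bytes, 0x3C)
    if pe_bytes[e_lfanew:e_lfanew + 4] != b"PE\0\0":
        raise ValueError("not a PE file")
    # The optional header starts after the 4-byte signature and the
    # 20-byte COFF header; DllCharacteristics sits at offset 70 within it
    # (same offset for PE32 and PE32+).
    opt_header = e_lfanew + 4 + 20
    (dll_chars,) = struct.unpack_from("<H", pe_bytes, opt_header + 70)
    return bool(dll_chars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)
```

Run over an application's DLLs and EXE, this kind of check shows which modules would still need EMET's mandatory-ASLR mitigation.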
EMET will continue to keep working even after its end-of-life date, which means administrators can still use the tool to protect unsupported software against possible zero-day vulnerabilities. Several other Microsoft applications are nearing their end-of-life dates, including Microsoft Office 2007. Administrators can continue to use EMET to protect these applications from attacks looking for zero-day vulnerabilities. "With such out-of-support applications, it is even more important to provide additional exploit protection with a product like EMET," Dormann said. It's possible that with Microsoft's new Windows-as-a-service model, the remaining EMET defenses will be added to Windows 10 before the end-of-life date, at which point Windows 10 would be able to handle the application-specific protections without EMET. Until then, EMET is "still an important tool to help prevent exploitation of vulnerabilities," Dormann said.

Source: http://www.infoworld.com/article/3145565/security/cert-to-microsoft-keep-emet-alive.html#tk.rss_security
-
-
Fldbg, a Pykd script to debug FlashPlayer
November 29, 2016 | Exploit Development, Offensive Security

A few months ago, we decided to make a new module for our Advanced Windows Exploitation class. After evaluating a few options we chose to work with an Adobe Flash 1-day vulnerability originally discovered by the Google Project Zero team. Since we did not have any previous experience with Flash internals, we expected a pretty steep learning curve.

We started by trying to debug the Flash plugin in Firefox while running the proof-of-concept (PoC) file, and quickly realized that debugging the player can be rather time consuming without appropriate tools, for multiple reasons. First of all, the FlashPlayerPlugin.exe process is spawned by Firefox with the help of an auxiliary process named plugin-container.exe. The latter facilitates communication between the Flash plugin process and the Firefox browser process. Additionally, if protected mode is enabled (the default behavior), FlashPlayerPlugin.exe acts as a broker process and loads a second instance of the player in a sandboxed environment. This sandboxed instance is responsible for parsing and rendering Flash content. Most of the functions responsible for rendering Flash content, including the code exploited in our PoC, are wrapped in the NPSWF32_X_X_X_X dynamic library (DLL), which is loaded by FlashPlayerPlugin.exe.

As a result, to successfully debug our process, we need to explicitly inform the debugger that we want it to debug processes spawned by Firefox. Furthermore, in order to set breakpoints on NPSWF32 functions, we need to intercept the moment the second instance of FlashPlayerPlugin (the sandboxed process) is loaded, since that is the one that loads our target DLL. All of these preliminary tasks could easily be automated directly from WinDbg.
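The real Fldbg script uses pykd's child-process debugging for this; as a debugger-agnostic sketch (the function name and event format are hypothetical), the "wait for the second FlashPlayerPlugin instance" logic boils down to:

```python
# Hypothetical sketch: given the ordered names of child processes a
# debugger observed being spawned under Firefox, pick the SECOND
# FlashPlayerPlugin.exe -- the sandboxed instance that loads NPSWF32 and
# renders content (the first is the broker).

def find_sandboxed_flash(process_creates, target="FlashPlayerPlugin.exe"):
    """Return the event index of the second `target` process, or None
    if the sandboxed instance has not appeared yet."""
    seen = 0
    for i, name in enumerate(process_creates):
        if name.lower() == target.lower():
            seen += 1
            if seen == 2:
                return i
    return None
```

In a pykd script this decision would be driven from the create-process debug event, after enabling child-process debugging (the `.childdbg 1` equivalent).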
However, since we realized that we would need to automate other functionality as well, we decided to start writing a pykd script to facilitate debugging the Flash player in Firefox in a less painful way. One of the problems you encounter when working with the Flash player is dynamically analyzing the ActionScript client code by setting appropriate breakpoints. The ActionScript 3 architecture runs on the ActionScript Virtual Machine 2 (AVM2) engine, and computation in the AVM2 is based on executing the code of a "method body". As explained in Haifei Li's paper "Inside AVM", a method can be identified from its MethodInfo class as:
- Native -- a function in the Flash .text section
- "Normal" -- our own code in the AS3 source, converted to native code by the Just-In-Time (JIT) compiler
- Static init -- executed in interpreter mode

Since there are no symbols exported for native functions and AS3 client code is dynamically translated into processor-specific instructions at runtime, tracing the execution flow can be quite challenging. We decided to take an approach similar to the one exposed in "Discover Flash Player Zero-day Attacks In The Wild From Big Data", where we hook specific functions in the NPSWF32 library to be able to resolve native and jitted methods. Specifically, we hook BaseExecMgr::setJit and BaseExecMgr::setNative, which we dynamically identify in NPSWF32 by comparing their opcode signatures with the ones found in the compiled avmplus code.

Full article: https://www.offensive-security.com/vulndev/fldbg-a-pykd-script-to-debug-flashplayer/
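The actual opcode signatures for BaseExecMgr::setJit and BaseExecMgr::setNative are build-specific and come from comparing NPSWF32 against compiled avmplus code; the matching mechanism itself, though, is just a byte scan with wildcards, which can be sketched like this (helper names are illustrative, not from Fldbg):

```python
# Wildcard opcode-signature matching: None marks bytes that vary between
# builds (e.g. relative call offsets), concrete values must match exactly.

def match_signature(code, signature, offset):
    """True if `code` matches `signature` at `offset`."""
    if offset + len(signature) > len(code):
        return False
    return all(s is None or code[offset + i] == s
               for i, s in enumerate(signature))

def find_signature(code, signature):
    """Return every offset in `code` where the signature matches."""
    return [o for o in range(len(code) - len(signature) + 1)
            if match_signature(code, signature, o)]
```

A script would run this over the NPSWF32 .text section and set breakpoints on the (ideally unique) hits.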
-
InsecureBankv2 -- Vulnerable Android Application

Information security awareness training may include several demos that show how an attacker can exploit vulnerabilities in a system to gain full control of remote devices. If you are looking to demonstrate Android application security, you can use InsecureBankv2. This tool was updated during the Black Hat Arsenal and is available online; the purpose of the project is to give security enthusiasts and developers a way to learn about Android insecurities by testing this vulnerable application. The vulnerabilities currently included in this release:
- Flawed broadcast receivers
- Weak authorization mechanism
- Root detection and bypass
- Local encryption issues
- Vulnerable activity components
- Insecure content provider access
- Insecure webview implementation
- Weak cryptography implementation
- Application patching
- Sensitive information in memory

You can read more and download the tool here: https://github.com/dineshshetty/
Source: http://www.sectechno.com/insecurebankv2-vulnerable-android-application/
-
Cosa Nostra

Cosa Nostra is an open source software clustering toolkit with a focus on malware analysis. It can create phylogenetic trees of binary malware samples that are structurally similar. It was initially released during SyScan360 Shanghai (2016).

Getting started

In order to use Cosa Nostra you will need the source code, of course, a 2.7 version of Python, and one of the following tools to perform code analysis:
- Pyew: written in Python, it supports analysis of PE, ELF, BIOS and boot files for x86 or x86_64.
- IDA: written in C++. It supports analysing a plethora of executable types that you have probably never even heard of. Commercial product.
- Radare2: written in pure C. Same as with IDA, with support for extremely rare CPUs and binary formats. Also, it's open source!

Link: https://github.com/joxeankoret/cosa-nostra
-
-
NEUTRALIZING INTEL'S MANAGEMENT ENGINE
by: Brian Benchoff, November 28, 2016

Five or so years ago, Intel rolled out something horrible. Intel's Management Engine (ME) is a completely separate computing environment running on Intel chipsets that has access to everything. The ME has network access, access to the host operating system, memory, and the cryptography engine. The ME can be used remotely even if the PC is powered off. If that sounds scary, it gets even worse: no one knows what the ME is doing, and we can't even look at the code. When -- not 'if' -- the ME is finally cracked open, every computer running on a recent Intel chip will have a huge security and privacy issue. Intel's Management Engine is the single most dangerous piece of computer hardware ever created.

Researchers are continuing work on deciphering the inner workings of the ME, and we sincerely hope this Pandora's Box remains closed. Until then, there's now a new way to disable Intel's Management Engine. Previously, the first iteration of the ME, found in GM45 chipsets, could be removed. This technique was possible because the ME was located on a chip separate from the northbridge. For Core i3/i5/i7 processors, the ME is integrated into the northbridge. Until now, efforts to disable an ME this closely coupled to the CPU have failed. Completely removing the ME from these systems is impossible; however, disabling parts of the ME is not. There is one caveat: if the ME's boot ROM (stored in an SPI flash) does not find a valid Intel signature, the PC will shut down after 30 minutes.

A few months ago, [Trammell Hudson] discovered that erasing the first page of the ME region did not shut down his Thinkpad after 30 minutes. This led [Nicola Corna] and [Federico Amedeo Izzo] to write a script that uses this exploit. Effectively, the ME still thinks it's running, but it doesn't actually do anything. With a BeagleBone, an SOIC-8 chip clip, and a few breakout wires, this script will run and effectively disable the ME.
This exploit has only been confirmed to work on Sandy Bridge and Ivy Bridge processors. It should work on Skylake processors; Haswell and Broadwell are untested. Separating or disabling the ME from the CPU has been a major focus of the libreboot and coreboot communities. The inability to do so has, until now, made the future prospects of truly free computing platforms grim. The ME is in everything, and CPUs without an ME are getting old. Even though we don't have the ability to remove the ME, disabling it is the next best thing.

Source: https://hackaday.com/2016/11/28/neutralizing-intels-management-engine/
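At its core, the trick described above is a one-page blank in a flash dump. A hypothetical sketch of that step (not the actual script; in reality the ME region's offset comes from the flash descriptor, and the write happens in-circuit over SPI):

```python
# Sketch of the "erase the first page of the ME region" idea: set the
# first 4 KiB page of the ME region in a full SPI-flash dump to 0xFF,
# the erased state of NOR flash. Offsets here are illustrative only.

PAGE_SIZE = 0x1000  # 4 KiB flash page

def erase_first_me_page(flash_image, me_region_offset):
    """Return a copy of the dump with the first page of the ME region
    blanked, as if erased with an external SPI programmer."""
    img = bytearray(flash_image)
    img[me_region_offset:me_region_offset + PAGE_SIZE] = b"\xff" * PAGE_SIZE
    return bytes(img)
```

The modified image would then be written back with the BeagleBone-driven SPI programmer; the ME's boot ROM fails to find its first partition but apparently does not trigger the 30-minute shutdown.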
-
-
Tuesday, November 29, 2016

Breaking the Chain

Posted by James Forshaw, Wielder of Bolt Cutters.

Much as we'd like it to be true, it seems undeniable that we'll never fix all security bugs just by looking for them. One of the most productive ways of dealing with this fact is to implement exploit mitigations. Project Zero considers mitigation work just as important as finding vulnerabilities. Sometimes we can get our hands dirty, such as helping out Adobe and Microsoft with Flash mitigations. Sometimes we can only help indirectly by publishing our research and giving vendors an incentive to add their own mitigations.

This blog post is about an important exploit mitigation I developed for Chrome on Windows. It will detail many of the challenges I faced when trying to get this mitigation released to protect end-users of Chrome. It recently shipped to users of Chrome on Windows 10 (in M54), and ended up blocking the sandbox escape of an exploit chain being used in the wild. For information on the Chromium bug that contains the list of things we implemented in order to get this mitigation working, look here.

The Problem with Win32k

It's possible to lock down a sandbox such as Chrome's pretty comprehensively using Restricted Tokens. However, one of the big problems on Windows is locking down access to system calls. On Windows you have both the normal NT system calls and the Win32k system calls for accessing the GUI, which combined represent a significant attack surface. While the NT system calls do have exploitable vulnerabilities now and again (for example issue 865), it's nothing compared to Win32k. From just one research project alone 31 issues were discovered, and this isn't counting the many font issues Mateusz has found and the hundreds of other issues found by other researchers.

Much of Win32k's problems come from history. In the first versions of Windows NT almost all the code responsible for the windowing system existed in user mode.
Unfortunately for 90's-era computers this wasn't exactly good for performance, so for NT 4 Microsoft moved a significant portion of what was user-mode code into the kernel (becoming the driver win32k.sys). This was a time before Slammer, before Blaster, before the infamous Trustworthy Computing memo which pushed Microsoft to think about security first. Perhaps some lone voice spoke for security that day, but was overwhelmed by performance considerations. We'll never know for sure; what it did do was make Win32k a large, fragile mess which seems to have persisted to this day. And the attack surface this large, fragile mess exposed could not be removed from any sandboxed process.

That all changed with the release of Windows 8. Microsoft introduced the System Call Disable Policy, which allows a developer to completely block access to the Win32k system call table. While it doesn't do anything for normal system calls, the fact that you could eliminate over a thousand Win32k system calls, many of which have had serious security issues, would be a crucial reduction in the attack surface. However, no application in a default Windows installation used this policy (it's said to have been introduced for non-GUI applications such as on Azure), and using it for something as complex as Chrome wasn't going to be easy. The process of shipping Win32k lockdown required a number of architectural changes to be made to Chrome. This included replacing the GDI-based font code with Microsoft's DirectWrite library. After around two years of effort, Win32k lockdown was shipping by default.

The Problems with Flash in Chrome

Chrome uses a multi-process model, in which web page content is parsed inside Renderer processes, which are covered by the Win32k lockdown policy for the Chrome sandbox. Plugins such as Flash and PDFium load into a different type of process, a PPAPI process, and due to circumstance these could not have the lockdown policy enabled.
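For reference, the policy itself is exposed through SetProcessMitigationPolicy with PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY. A minimal self-lockdown sketch (assuming Windows 8 or later; a real sandbox like Chrome's would instead apply the policy to the child process before it runs any GUI code) could look like this:

```python
import ctypes

# Sketch of a process opting itself into the Win32k System Call Disable
# Policy. The structure mirrors PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY
# from winnt.h; the call only works on Windows 8+.

class SYSTEM_CALL_DISABLE_POLICY(ctypes.Structure):
    _fields_ = [("DisallowWin32kSystemCalls", ctypes.c_uint32, 1),
                ("ReservedFlags", ctypes.c_uint32, 31)]

# PROCESS_MITIGATION_POLICY enum value for ProcessSystemCallDisablePolicy.
ProcessSystemCallDisablePolicy = 4

def lockdown_win32k():
    """Block all Win32k system calls for the current process (Windows only).
    After this, any win32k.sys transition fails, so it must be called
    before the process touches USER32/GDI32 functionality it needs."""
    policy = SYSTEM_CALL_DISABLE_POLICY()
    policy.DisallowWin32kSystemCalls = 1
    kernel32 = ctypes.WinDLL("kernel32")
    if not kernel32.SetProcessMitigationPolicy(
            ProcessSystemCallDisablePolicy,
            ctypes.byref(policy), ctypes.sizeof(policy)):
        raise ctypes.WinError()
```

Note the irreversibility: once set, the policy cannot be removed for the lifetime of the process, which is exactly what makes it useful as a sandbox mitigation.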
This would seem a pretty large weak point. Flash has not had the best security track record (relevant), making the likelihood of Flash being an RCE vector very high. Combine that with the relative ease of finding and exploiting Win32k vulnerabilities and you've got a perfect storm. It would seem reasonable to assume that real attackers were finding Win32k vulnerabilities and using them to break out of restrictive sandboxes, including Chrome's, with Flash as the RCE vector. The question was whether that was true.

The first real confirmation came from the Hacking Team breach, which occurred in July 2015. In the dumped files was an unfixed Chrome exploit which used Flash as the RCE vector and a Win32k exploit to escape the sandbox. While both vulnerabilities were quickly fixed, I came upon the idea that perhaps I could spend some time to implement the lockdown policy for PPAPI and eliminate this entire attack chain.

Analysing the Problem

The first thing I needed to do was to determine which Win32k APIs were used by a plugin such as Flash. There are 3 main system DLLs that can be called by an application which end up issuing system calls to Win32k: USER32, GDI32 and IMM32. Each has slightly different responsibilities. The aim would be to enumerate all calls to these DLLs and replace them with APIs which didn't rely on Win32k. Still, it wasn't just Flash that might call Win32k APIs but also the Pepper APIs implemented in Chrome. I decided to take two approaches to finding out what code I needed to remove: import inspection and dynamic analysis.

Import inspection is fairly simple: I just dumped any imports for the plugins such as the Pepper Flash plugin DLL and identified anything which came from the core windowing DLLs. I then ran Flash and PDFium with a number of different files to try and exercise the code paths which used Win32k system calls.
I attached WinDbg to the process and set breakpoints on all functions starting with NtUser and NtGdi which I could find. These are the system call stubs used to call Win32k from the various DLLs. This allowed me to catch functions which were in the PPAPI layer or not directly imported. The Win32k-calling code in Flash and PDFium was almost entirely for enumerating font information, either directly or via the PPAPI. There was some OpenSSL code in Flash which uses the desktop window as a source of entropy, but as this could never work in the Chrome sandbox it's clear that this was vestigial (or Flash's SSL random number generator is broken, choose one or the other).

Getting rid of the font enumeration code used through PPAPI was easy. Chrome already supported replacing GDI-based font rendering and enumeration with DirectWrite, which does all the rendering in user mode. Most of the actual rendering in Flash and PDFium is done using their own TrueType font implementations (such as FreeType). Enabling DirectWrite for PPAPI processes was implemented in a number of stages, with the final enabling of DirectWrite in this commit.

Now I just needed to get rid of the GDI font code in Flash and PDFium itself. For PDFium I was able to repurpose existing font code used for Linux and macOS. After much testing to ensure the font rendering didn't regress from GDI, I was able to land the patch in PDFium. Now the only problem was Flash. As a prototype I implemented shims for all the font APIs used by Flash and emulated them using DirectWrite. For a better, more robust solution I needed to get changes made to Flash. I don't have access to the Flash source code; however, Google does have a good working relationship with Adobe, and I used this to get the necessary changes implemented. It turned out that there was a Pepper API which did all that was needed to replace the GDI font handling, pp::flash::FontFile.
Unfortunately that was only implemented on Linux; however, I was able to put together a proof-of-concept Windows implementation of pp::flash::FontFile, and through Xing Zhang of Adobe we got a full implementation in Chrome and Flash.

Doomed Rights Management

From this point I could enable Win32k lockdown for plugins, and after much testing everything seemed to be working, until I tried to test some DRM-protected video. While encrypted video worked, any Flash video file which required output protection (such as High-bandwidth Digital Content Protection (HDCP)) would not. HDCP works by encrypting the video data between the graphics output and the display, and is designed to prevent people capturing digital video. Still, this presents a problem, as video, along with games, is one of the only residual uses of Flash. In testing, this also affected the Widevine plugin that implements the Encrypted Media Extensions for Chrome. Widevine uses PPAPI under the hood; not fixing this issue would break all HD content playback.

Enabling HDCP on Windows requires the use of a small number of Win32k APIs. I hadn't discovered this during my initial analysis because a) I didn't run any protected content through the Flash player, and b) all the functions were imported at runtime using LoadLibrary and GetProcAddress only when needed. The function Flash was accessing was OPMGetVideoOutputsFromHMONITOR, which is exposed by dxva2.dll. This function in turn maps down to multiple Win32k calls such as NtGdiCreateOPMProtectedOutputs. The ideal way of fixing this would be to implement a new API in Chrome which exposed enabling HDCP, then get Adobe and Widevine to use that implementation. It turns out that the Adobe DRM and Widevine teams are under greater constraints than normal development teams. After discussion with my original contact at Adobe, it emerged they didn't have access to the DRM code for Flash.
I was able to have meetings with Widevine (they're part of Google) and the Adobe DRM team, but in the end I decided to go it alone and implement redirection of these APIs as part of the sandbox code. Fortunately this doesn't compromise the security guarantees of the original API, because of the way Microsoft designed it. To prevent a MitM attack against the API calls (i.e. you hook the API and return the answer the caller expects, such as that HDCP is enabled), the call is secured between the caller and the graphics driver using an X.509 certificate chain returned during initialization. Once an application such as Flash verifies this certificate chain is valid, it will send back a session key to the graphics driver, encrypted using the end certificate's public key. The driver then decrypts the session key, and all communication from then on is encrypted and hashed using variants of this key.

Of course this means that the driver must contain the private key corresponding to the public key in the end certificate, though at least in the case of my workstation that shouldn't be a major issue, as the end certificate has a special Key Usage OID (1.3.6.1.4.1.311.10.5.8) and the root "Microsoft Digital Media Authority" certificate isn't in the trusted certificate store, so the chain wouldn't be trusted anyway. Users of the API can embed the root certificate directly in their code and verify its trust before continuing. As the APIs assume that the call has already been brokered (at minimum via Win32k.sys), adding another broker -- in this case one which brokers from the PPAPI process to another process in Chrome without the Win32k lockdown policy in place -- doesn't affect the security guarantees of the API. Of course I made best efforts to verify the data being brokered, to limit the potential attack surface, though I'll admit something about sending binary blobs to a graphics driver gives me the chills.
This solved the issue with enabling output protection for DRM'ed content, and finally the mitigation could be enabled by default. The commit for this code can be found here.

Avoiding #yolosec

Implementation-wise it turned out to be not too complex once I'd found all the different possible places that Win32k functions could be called. Much of the groundwork was already in place with the original Win32k Renderer lockdown, the implementation of DirectWrite and the way the Pepper APIs were structured. So ship it already! Well, not so fast; this is where reality kicks in. Chrome on Windows is relied upon by millions upon millions of users worldwide, and Win32k lockdown for PPAPI would affect not only Flash, but PDFium (which is used in things like the Print Preview window) and Widevine. It's imperative that this code is tested in the real world, but in such a way that the impact on stability and functionality can be measured.

Chrome supports something called Variations, which allows developers to selectively enable experimental features remotely and deploy them to a randomly selected group of users who've opted into returning usage and crash statistics to Google. For example, you can do a simple A/B test with one proportion of the Chrome users left as a control and another with Win32k lockdown enabled. Statistical analysis can be performed on the results of that test based on various metrics, such as the number of crashes, hanging processes and startup performance, to detect anomalous behaviour which is due to the experimental code. To avoid impacting users of Stable this is typically only done on Beta, Dev and Canary users. Having these early release versions of Chrome is really important for ensuring features work as expected, and we appreciate anyone who takes the time to run them. In the end this process of testing took longer than the implementation. Issues were discovered and fixed, and stability was measured until finally we were ready to ship.
Unfortunately in that process there was a noticeable stability issue on Windows 8.1 and below which we couldn't track down. The stability issues are likely down to interactions with third party code (such as AV) which injects its own code into Chrome processes. If this injected code relies on calling Win32k APIs for anything there's a high chance of this causing a crash. This stability issue led to the hard decision to initially only allow the PPAPI Win32k lockdown to run on Windows 10 (where if anything stability improved). I hope to revisit this decision in the future. As third party code is likely to be updated to support the now shipping Windows 10 lockdown it might improve stability on Windows 8/8.1. As of M54 of Chrome, Win32k lockdown is enabled by default for users on Windows 10 (with an option to disable it remotely in the unlikely event a problem surfaces). As of M56 (set for release approximately at the end of January 2017) it can only be disabled with a command line switch which disables all Win32k lockdown, including for Renderer processes. Wrap Up From the first patch submitted in September 2015 to the final patch in June it took almost 10 months of effort to come up with a shipping mitigation. The fact that it's had its first public success (and who knows how many non-public ones) shows that it was worth implementing this mitigation. In the latest version of Windows 10, Anniversary Edition, Microsoft have implemented a Win32k filter which makes it easier to reduce the attack surface without completely disabling all the system calls, which might have sped up development. Microsoft are also taking proactive efforts to improve the Win32k code base. The Win32k filter is already used in Edge, however at the moment only Microsoft can use it as the executable signature is checked before allowing the filter to be enabled. Also it's not clear that the filter even completely blocked the vulnerability in the recent in-the-wild exploit chain. 
Microsoft would only state it would "stop all observed in-the-wild instances of this exploit". Nuking the Win32k system calls from orbit is the only way to be sure that an attacker can't find a bug which passes through the filter. Hopefully this blog post demonstrates the time and effort required to implement what seems on the face of it a fairly simple and clear mitigation policy for an application as complex as Chrome. We'll continue to try and use the operating system provided sandboxing mechanisms to make all users of Chrome more secure. Thanks While I took on a large proportion of the technical work, it's clear this mitigation could not have shipped without the help of others in Chrome and outside. I'd like to especially mention the following:
Anantanarayanan Lyengar, for landing the original Win32k mitigations for the Renderer processes on which all this is based.
Will Harris, for dealing with variations and crash reporting to ensure everything was stable.
Adobe Security Team and Xing Zhang, for helping to remove GDI font code from Flash.
The Widevine team, for advice on DRM issues.
Sursa: https://googleprojectzero.blogspot.ro/2016/11/breaking-chain.html
-
In a month, two at most, you can get through all the videos. 140 USD (600 RON) doesn't seem like much to me for being able to learn a ton of things that can later easily earn you 1-2K euros a month.
-
They're not expensive, you could try buying them.
-
The coolest one.
-
Abusing of Protocols to Load Local Files, bypass the HTML5 Sandbox, Open Popups and more On October 25th, the fellows @MSEdgeDev tweeted a link that called my attention because when I clicked on it (being on Chrome) the Windows Store App opened. It might not surprise you, but it surprised me! As far as I remember, Chrome had this healthy habit of asking the user before opening external programs but in this case it opened directly, without warnings. This was different and caught my attention because I never accepted to open the Windows Store in Chrome. There are some extensions and protocols that will open automatically but I've never approved the Windows Store. The shortened Twitter link redirected to https://aka.ms/extensions-storecollection which (again) redirected to the interesting: ms-windows-store://collection/?CollectionId=edgeExtensions. It was a protocol that I was not aware of, so I immediately tried to find it in the place where most protocol associations reside: the registry. A search of "ms-windows-store" immediately returned our string inside the PackageId of what seemed to be the Windows Store app. Noting that we were also in a key called "Windows.Protocol" I scrolled up and down a bit to see if there were other apps inside, and found that tons of them (including MS Edge) had their own protocols registered. This is nice because it opens a new attack surface straight from the browser. But let's press F3 to see if we find other matches. It seems that the ms-windows-store: protocol is also accepting search arguments, so we can try opening our custom search straight from Google Chrome. In fact, the Windows Store app seems to be rendering HTML with the Edge engine, which is also interesting because we might try to XSS it or, if the app is native, send big chunks of data and see what happens. But we won't be doing that now, let's go back to regedit and press F3 to see what else we can find. 
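The manual F3 search described above can be automated. On a real Windows box you would walk HKEY_CLASSES_ROOT with the `winreg` module; the filtering logic itself, keeping entries whose default value starts with the "URL:" marker the article keys on, can be sketched portably (the sample data below is hypothetical, standing in for registry contents):

```python
def find_protocol_handlers(entries):
    """Given (key_name, default_value) pairs as read from HKEY_CLASSES_ROOT,
    keep the ones that look like registered URL protocol handlers.
    The 'URL:' prefix is the same marker searched for in regedit above."""
    return [name for name, value in entries
            if isinstance(value, str) and value.startswith("URL:")]

# Sample data mimicking registry contents (hypothetical values, for illustration):
sample = [
    ("ms-windows-store", "URL:ms-windows-store"),
    ("microsoft-edge", "URL:microsoft-edge"),
    ("txtfile", "Text Document"),   # ordinary file class, not a protocol
    ("read", "URL:read"),
]
print(find_protocol_handlers(sample))
```

On Windows the pairs would come from enumerating subkeys of `winreg.HKEY_CLASSES_ROOT` and reading each key's default value; the filter above is the portable part.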
This one is interesting also, because it gives us clues on how to quickly find more protocols if they are prepended with the string "URL:". Let's reset our search to "URL:" and see what we get. Pressing the [Home] key takes us back to the top of the registry and a search of "URL:" immediately returns the first match "URL:about:blank", confirming that we are not crazy. Press F3 again and we find the bingnews: protocol but this time Chrome asks us for confirmation to open it. No problem, let's try it on Edge to see what happens. It opens! Next match in the registry is the calculator: protocol. Will this work? Wow! I'm sure this will piss off exploit writers. What program will they pop now? Both calc and notepad can be opened without memory corruptions, and cmd.exe is deprecated in favor of powershell now. Microsoft removed the fun out of you guys. This could be a good moment to enumerate all loadable protocols and see which apps accept arguments so we can try to inject code into them (binary or pure javascript, depending on how the app was coded and how it treats the arguments). There is a lot of interesting stuff here to play with, and if we keep searching for protocols we will find tons of apps that open (including Candy Crush, which I didn't know was on my PC). By pressing F3 a few times I learned a lot. For example, there's a microsoft-edge: protocol that loads URLs in a new tab. It doesn't seem to be important, until we remember the limits that HTML pages should have. Will the popUp blocker prevent us from opening 20 microsoft-edge:http://www.google.com tabs? [ PoC – Open popUps on MS Edge ] What about the HTML5 Sandbox? If you are not familiar with it, it's just a way to impose restrictions on a webpage using the sandbox iframe attribute or the sandbox http header. 
For example, if we want to render content inside an iframe and make sure it does not run javascript (not even open new tabs) we can just use this tag: <iframe src="sandboxed.html" sandbox></iframe> And the rendered page will be completely restricted. Essentially it can only render HTML/CSS, with no javascript or access to things like cookies. In fact, if we use the sandbox granularity and allow at least new windows/tabs, all of them should inherit the sandboxed attributes, and opened links from that iframe will still be sandboxed. However, using the microsoft-edge protocol bypasses this completely. [ PoC – Bypass HTML5 Sandbox on MS Edge ] Nice to see that the microsoft-edge protocol allows us to bypass different restrictions. I haven't gone further than that but you can try! This is a journey of discovery; remember that a single tweet fired my motivation to play a bit and ended up giving us stuff that truly deserves more research. I continued pressing F3 in regedit and found the read: protocol, which called my attention because, when reading its (javascript) source code, it seemed to have the potential for a UXSS; but Edge kept crashing again and again while trying. It crashed too much. For example, setting the location of an iframe to "read:" was enough to crash the browser including all tabs. Want to see it? [ PoC – Crash on MS Edge ] OK, I was curious about what was happening so I appended a few bytes to the read protocol and fired up WinDbg to see if the crash was related to invalid data. Something quick and simple, no fuzzing or anything special: read:xncbmx,qwieiwqeiu;asjdiw!@#$%^&* Oh yes, I really typed something like that. The only way that I found not to crash the read protocol was to load anything coming from http. Everything else crashed the browser. So let's attach WinDbg to Edge. A quick and dirty method that I use is to simply kill the Edge process and children, reopen it and attach to the latest process that uses EdgeHtml.dll. 
Of course there are easier ways but … yeah, I'm just like that. Open a command line and…
taskkill /f /t /im MicrosoftEdge.exe
** Open Edge and load the webpage but make sure it doesn't crash yet **
tasklist /m EdgeHtml.dll
Enough. Now load WinDbg and attach to the latest listed Edge process that uses EdgeHtml. And remember to use Symbols in WinDbg. Once attached, just press F5 or g [ENTER] inside WinDbg so Edge keeps running. This is how my screen looks right now. On the left I have the page that I use to test everything and on the right, WinDbg attached to that particular Edge process. We will use a window.open to play with the read: protocol instead of an iframe because it's more comfortable. Think about it, there are protocols/urls that might end up changing the top location regardless of how framed they are. If we start playing with a protocol inside an iframe there are chances that our own page (the top) will be unloaded, losing the code that we've just typed. My particular test-page saves everything that I type, so if the browser crashes it's highly likely that I will be able to repro my manual work. But even with everything saved, when I'm playing with code that could change the URL of my test-page, I open it in a new window. Just a habit. On the left screen we can type and execute JavaScript code quickly, on the right we have WinDbg prepared to reveal what's happening behind this crash. Go ahead, let's run the JavaScript code and… Bang! WinDbg breaks.
ModLoad: ce960000 ce996000 C:\Windows\SYSTEM32\XmlLite.dll
ModLoad: c4110000 c4161000 C:\Windows\System32\OneCoreCommonProxyStub.dll
ModLoad: d6a20000 d6ab8000 C:\Windows\SYSTEM32\sxs.dll
(2c90.33f0): Security check failure or stack buffer overrun - code c0000409 (!!! second chance !!!)
EdgeContent!wil::details::ReportFailure+0x120:
84347de0 cd29 int 29h
OK, it seems that Edge knew something went wrong because it's in a function called "ReportFailure", right? 
Come on, I know we can immediately assume that if Edge is here, it failed somewhat "gracefully". So let's inspect the stack trace to see where we are coming from. Type "k" in WinDbg.
0:030> k
 # Child-SP RetAddr Call Site
00 af248b30 88087f80 EdgeContent!wil::details::ReportFailure+0x120
01 af24a070 880659a5 EdgeContent!wil::details::ReportFailure_Hr+0x44
02 af24a0d0 8810695c EdgeContent!wil::details::in1diag3::FailFast_Hr+0x29
03 af24a120 88101bcb EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x7c
04 af24a170 880da669 EdgeContent!CReadingModeViewer::Load+0x6b
05 af24a1b0 880da5ab EdgeContent!CBrowserTab::_ReadingModeViewerLoadViaPersistMoniker+0x85
06 af24a200 880da882 EdgeContent!CBrowserTab::_ReadingModeViewerLoad+0x3f
07 af24a240 880da278 EdgeContent!CBrowserTab::_ShowReadingModeViewer+0xb2
08 af24a280 88079a9e EdgeContent!CBrowserTab::_EnterReadingMode+0x224
09 af24a320 d9e4b1d9 EdgeContent!BrowserTelemetry::Instance::2::dynamic
0a af24a3c0 8810053e shlwapi!IUnknown_Exec+0x79
0b af24a440 880fee33 EdgeContent!CReadingModeController::_NavigateToUrl+0x52
0c af24a4a0 88074f98 EdgeContent!CReadingModeController::Open+0x1d3
0d af24a500 b07df508 EdgeContent!BrowserTelemetry::Instance'::2::dynamic
0e af24a5d0 b0768c47 edgehtml!FireEvent_BeforeNavigate+0x118
Check out the first two lines, both called blah blah ReportFailure, don't you think Edge is here because something went wrong? Of course! Let's keep going down until we find a function name that makes sense. The next one is called blah FailFast which also smells like it's a function Edge called knowing that something went wrong. But we want to find the code that made Edge unhappy so continue reading down. The next one is blah _LoadRMHTML. This looks much better to me, don't you agree? In fact, its name makes me think it Loads HTML. It would be interesting to break before the crash, so why not set a breakpoint a few lines above _LoadRMHTML? We were watching the stack trace, let's look at the code now. 
Let's first unassemble back from that point (function + offset). It's easy, using the "ub" command in WinDbg.
0:030> ub EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x7c
EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x5a:
8810693a call qword ptr [EdgeContent!_imp_SHCreateStreamOnFileEx (882562a8)]
88106940 test eax,eax
88106942 jns EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x7d (8810695d)
88106944 mov rcx,qword ptr [rbp+18h]
88106948 lea r8,[EdgeContent!`string (88261320)]
8810694f mov r9d,eax
88106952 mov edx,1Fh
88106957 call EdgeContent!wil::details::in1diag3::FailFast_Hr (8806597c)
We will focus on the names only and ignore everything else, OK? Just like when we were trying to find a variation for the mimeType bug, we are going to speculate here and if we fail we would of course keep going deeper. But sometimes a quick look in the debugger can reveal many things. We know that Edge will crash if it arrives at the last instruction of this snippet (address 88106957, FailFast_Hr). Our intention is to see why we ended up at that location, who the hell is sending us there. But let's start from the beginning, the first instruction in this snippet seems to be calling a function with a complicated name which apparently reveals tons of stuff. EdgeContent!_imp_SHCreateStreamOnFileEx The first part before the ! is the module (exe, dll, etc) where this instruction is located. In this case it is EdgeContent and we don't even care about its extension, it's just code. After the ! comes a funny name _imp_ and then SHCreateStreamOnFileEx which seems to be a function name that "creates a stream on file". Do you agree? In fact, the _imp_ part makes me think that maybe this is an imported function loaded from a different binary. Let's google that name to see if we find something interesting. That's pretty nice. The first result came with the exact name that we searched for. Let's click on it. OK. 
The first parameter that this function receives is "a pointer to a null-terminated string that specifies the file name". Interesting! If this snippet of code is being executed, then it should be receiving a pointer to a file name as the first argument. But how can we see the first parameter? It's easy, we are working on Win x64, and the calling convention / parameter passing says that "First 4 parameters – RCX, RDX, R8, R9" (speaking about integers/pointers). This means that the first parameter (pointer to a file name) will be loaded in the register RCX. With this information, we can set a breakpoint before Edge calls that function and see what RCX holds at that precise moment. But let's restart because it's a bit late at this point: Edge already crashed. Please re-do what's described above (kill Edge, open it, load the page, find the process and attach). This time, instead of running (F5) the process, we will set a breakpoint. We don't know the exact address of the instruction, but WinDbg revealed the exact offset when we executed our "ub" command.
0:030> ub EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x7c
EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x5a:
8810693a ff1568f91400 call qword ptr [EdgeContent!_imp_SHCreateStreamOnFileEx (882562a8)]
88106940 85c0 test eax,eax
So the breakpoint should go at EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x5a We type "bp" and the function name + offset [ENTER]. Then "g" to let Edge run.
0:029> bp EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x5a
0:029> g
Great. This is exciting. We want to see what the file name (or string) located in the register RCX is right before SHCreateStreamOnFileEx executes. Let's run the code and feel the break. Well, I feel it baby =) breakpoints connect me to my childhood. Let's run the JavaScript code and bang! WinDbg breaks right there. 
Breakpoint 0 hit
EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x5a:
8820693a ff1568f91400 call qword ptr [EdgeContent!_imp_SHCreateStreamOnFileEx (883562a8)]
That's great, now we can inspect the content where RCX is pointing. To do this we will use the "d" command (display memory) @ and the register name, just like this:
0:030> d @rcx
02fac908 71 00 77 00 69 00 65 00-69 00 77 00 71 00 65 00 q.w.i.e.i.w.q.e.
02fac918 69 00 75 00 3b 00 61 00-73 00 6a 00 64 00 69 00 i.u.;.a.s.j.d.i.
02fac928 77 00 21 00 40 00 23 00-24 00 25 00 5e 00 26 00 w.!.@.#.$.%.^.&.
02fac938 2a 00 00 00 00 00 08 00-60 9e f8 02 db 01 00 00 *.......`.......
02fac948 10 a9 70 02 db 01 00 00-01 00 00 00 00 00 00 00 ..p.............
02fac958 05 00 00 00 00 00 00 00-00 00 00 00 19 6c 01 00 .............l..
02fac968 44 14 00 37 62 de 77 46-9d 68 27 f3 e0 92 00 00 D..7b.wF.h'.....
02fac978 00 00 00 00 00 00 08 00-00 00 00 00 00 00 00 00 ................
This isn't nice on my eyes but on the right of the first line I see something which looks similar to a unicode string. Let's display again as unicode (du).
0:030> du @rcx
02fac908 "qwieiwqeiu;asjdiw!@#$%^&*"
Nice! The string rings a bell! Look at the JavaScript code that we just ran. It seems that the argument passed to this function is whatever we type after the comma. With this knowledge, plus knowing that it is expecting a file, we can try a full path to something in my drive. Because Edge runs inside an AppContainer, we will try a file that's accessible. For example something from the windows/system32 directory. read:,c:\windows\system32\drivers\etc\hosts We are also removing the garbage before the comma which seems unrelated (albeit it deserves more research!). Let's quickly detach, restart Edge, and run our new code
url = "read:,c:\\windows\\system32\\drivers\\etc\\hosts";
w = window.open(url, "", "width=300,height=300");
And as expected, the local file loads in the new window without crashes. 
[ PoC – Open hosts on MS Edge ] Fellow bug hunter, I will stop here but I believe all these things deserve a bit more research depending on what's fun for you:
A) Enumerate all loadable protocols and attack those applications via query-strings.
B) Play with microsoft-edge:, which bypasses the HTML5 sandbox, popup blocker and who knows what else.
C) Keep going with the read: protocol. We found a way to stop it from crashing but remember there is a function SHCreateStreamOnFileEx expecting things that we can influence! It's worth trying more. Also, we can continue working on the arguments to see if commas are used to split arguments, etc. If debugging binaries is boring for you, then you can still try to XSS the reading view.
I hope you find tons of vulnerabilities! If you have questions, ping me at @magicmac2000. Have a nice day!
Reported to MSRC on 2016-10-26
Sursa: https://www.brokenbrowser.com/abusing-of-protocols/
-
-
-
-
Cryptology ePrint Archive: Report 2016/1114 Full Disk Encryption: Bridging Theory and Practice Louiza Khati and Nicky Mouha and Damien Vergnaud Abstract: We revisit the problem of Full Disk Encryption (FDE), which refers to the encryption of each sector of a disk volume. In the context of FDE, it is assumed that there is no space to store additional data, such as an IV (Initialization Vector) or a MAC (Message Authentication Code) value. We formally define the security notions in this model against chosen-plaintext and chosen-ciphertext attacks. Then, we classify various FDE modes of operation according to their security in this setting, in the presence of various restrictions on the queries of the adversary. We will find that our approach leads to new insights for both theory and practice. Moreover, we introduce the notion of a diversifier, which does not require additional storage, but allows the plaintext of a particular sector to be encrypted to different ciphertexts. We show how a 2-bit diversifier can be implemented in the EagleTree simulator for solid state drives (SSDs), while decreasing the total number of Input/Output Operations Per Second (IOPS) by only 4%. Category / Keywords: secret-key cryptography / disk encryption theory, full disk encryption, FDE, XTS, IEEE P1619, unique first block, diversifier, provable security Original Publication (with major differences): CT-RSA 2017 - RSA Conference Cryptographers' Track Date: received 25 Nov 2016 Contact author: nicky at mouha be Available format(s): PDF | BibTeX Citation Version: 20161125:195305 (All versions of this report) Short URL: ia.cr/2016/1114 Discussion forum: Show discussion | Start new discussion Sursa: https://eprint.iacr.org/2016/1114
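The diversifier idea from the abstract above (letting the same plaintext sector encrypt to different ciphertexts without storing an IV) can be illustrated with a toy sector cipher whose keystream is bound to the sector number and a 2-bit diversifier. This is a didactic sketch only, not the paper's construction; a real FDE mode would use a tweakable block cipher such as AES-XTS rather than an HMAC-derived XOR keystream:

```python
import hmac, hashlib

def encrypt_sector(key: bytes, sector_no: int, diversifier: int, data: bytes) -> bytes:
    """Toy sector encryption: keystream = HMAC(key, sector_no || diversifier || block)."""
    assert 0 <= diversifier < 4          # a 2-bit diversifier, as in the paper's SSD experiment
    keystream = b""
    block = 0
    while len(keystream) < len(data):
        msg = sector_no.to_bytes(8, "big") + bytes([diversifier]) + block.to_bytes(4, "big")
        keystream += hmac.new(key, msg, hashlib.sha256).digest()
        block += 1
    return bytes(p ^ k for p, k in zip(data, keystream))

key = b"\x01" * 32
sector = b"\x00" * 64                    # an all-zero sector, a worst case for pattern leakage
c0 = encrypt_sector(key, 7, 0, sector)
c1 = encrypt_sector(key, 7, 1, sector)
# Same sector number, same plaintext, different diversifier -> different ciphertext,
# which is exactly the property a fixed sector-number tweak alone cannot give you.
```

Decryption is the same operation (XOR with the same keystream), and distinct sector numbers also yield distinct ciphertexts.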
-
Published on Nov 25, 2016 This is the first part of a full analysis of a packed Fleercivet sample. We start with basic static analysis, detection name interpretation, and unpacking.
-
-
-
-
F-Scrack F-Scrack is a single-file bruteforcer supporting multiple protocols, with no extra libraries required beyond the Python standard library, which makes it ideal for a quick test. Currently supported protocols: FTP, MySQL, MSSQL, MongoDB, Redis, Telnet, Elasticsearch, PostgreSQL. Compatible with OSX, Linux, Windows, Python 2.6+. Sursa: https://github.com/ysrc/F-Scrack
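The core of any such bruteforcer is a loop that tries credential pairs against a protocol-specific login function. A minimal sketch of that loop; the `try_login` callback and names here are illustrative, not F-Scrack's actual API (for FTP, for instance, the callback would wrap `ftplib.FTP.login` in a try/except):

```python
from itertools import product

def bruteforce(try_login, users, passwords):
    """Try every user/password combination; return the first working
    pair, or None if none succeeds. try_login(user, password) -> bool
    is the protocol-specific check supplied by the caller."""
    for user, password in product(users, passwords):
        if try_login(user, password):
            return user, password
    return None

# Demo against a fake service instead of a live host:
def fake_service(user, password):
    return (user, password) == ("admin", "admin123")

print(bruteforce(fake_service, ["root", "admin"], ["123456", "admin123"]))
# -> ('admin', 'admin123')
```

Injecting the login function this way keeps one generic loop shared across all protocols, which is essentially why a single-file multi-protocol tool stays small.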
-
The Linux Kernel Hidden Inside Windows 10 Published on Nov 22, 2016 by Alex Ionescu Initially known as "Project Astoria" and delivered in beta builds of Windows 10 Threshold 2 for Mobile, Microsoft implemented a full-blown Linux 3.4 kernel in the core of the Windows operating system, including full support for VFS, BSD Sockets, ptrace, and a bonafide ELF loader. After a short cancellation, it's back and improved in Windows 10 Anniversary Update ("Redstone"), under the guise of Bash Shell interoperability. This new kernel and related components can run 100% native, unmodified Linux binaries, meaning that NT can now execute Linux system calls, schedule thread groups, fork processes, and access the VDSO! As it's implemented using a full-blown, built-in, loaded-by-default, Ring 0 driver with kernel privileges, this is not a mere wrapper library or user-mode system call converter like the POSIX subsystem of yore. The very thought of an alternate virtual file system layer, networking stack, memory and process management logic, and complicated ELF parser and loader in the kernel should tantalize exploit writers - why choose from the attack surface of a single kernel, when there are now two? But it's not just about the attack surface - what effects does this have on security software? Do these frankenLinux processes show up in Procmon or other security drivers? Do they have PEBs and TEBs? Is there even an EPROCESS? And can a Windows machine, and the kernel, now be attacked by Linux/Android malware? How are Linux system calls implemented and intercepted? As usual, we'll take a look at the internals of this entirely new paradigm shift in the Windows OS, and touch the boundaries of the undocumented and unsupported to discover interesting design flaws and abusable assumptions, which lead to a wealth of new security challenges on Windows 10 Anniversary Update ("Redstone") machines.
-
-
-
-
Demystifying the Secure Enclave Processor Published on Nov 22, 2016 by Tarjei Mandt & Mathew Solnik & David Wang The secure enclave processor (SEP) was introduced by Apple as part of the A7 SOC with the release of the iPhone 5S, most notably to support their fingerprint technology, Touch ID. SEP is designed as a security circuit configured to perform secure services for the rest of the SOC, with no direct access from the main processor. In fact, the secure enclave processor runs its own fully functional operating system - dubbed SEPOS - with its own kernel, drivers, services, and applications. This isolated hardware design prevents an attacker from easily recovering sensitive data (such as fingerprint information and cryptographic keys) from an otherwise fully compromised device. Although almost three years have passed since its inception, little is still known about the inner workings of the SEP and its applications. The lack of public scrutiny in this space has consequently led to a number of misconceptions and false claims about the SEP. In this presentation, we aim to shed some light on the secure enclave processor and SEPOS. In particular, we look at the hardware design and boot process of the secure enclave processor, as well as the SEPOS architecture itself. We also detail how the iOS kernel and the SEP exchange data using an elaborate mailbox mechanism, and how this data is handled by SEPOS and relayed to its services and applications. Last, but not least, we evaluate the SEP attack surface and highlight some of the findings of our research, including potential attack vectors.
-
Pangu 9 Internals Published on Nov 22, 2016 by Tielei Wang & Hao Xu & Xiaobo Chen Pangu 9, the first (and only) untethered jailbreak tool for iOS 9, exploited a sequence of vulnerabilities in the iOS userland to achieve final arbitrary code execution in the kernel and persistent code signing bypass. Although these vulnerabilities were fixed in iOS 9.2, no details were disclosed. This talk will reveal the internals of Pangu 9. Specifically, this talk will first present a logical error in a system service that is exploitable by any container app through XPC communication to gain arbitrary file read/write as mobile. Next, this talk will explain how Pangu 9 gains arbitrary code execution outside the sandbox through the system debugging feature. This talk will then elaborate on a vulnerability in the process of loading the dyld_shared_cache file that enables Pangu 9 to achieve persistent code signing bypass. Finally, this talk will present a vulnerability in the backup-restore process that allows apps signed by a revoked enterprise certificate to execute without the need for the user's explicit approval of the certificate.
-
-
-
-
Secure Host Baseline About the Secure Host Baseline The Secure Host Baseline (SHB) provides an automated and flexible approach for assisting the DoD in deploying the latest releases of Windows 10 using a framework that can be consumed by organizations of all sizes. The DoD CIO issued a memo on November 20, 2015 directing Combatant Commands, Services, Agencies and Field Activities (CC/S/As) to rapidly deploy the Windows 10 operating system throughout their respective organizations with the objective of completing deployment by the end of January 2017. The Deputy Secretary of Defense issued a memo on February 26, 2016 directing the DoD to complete a rapid deployment and transition to Microsoft Windows 10 Secure Host Baseline by the end of January 2017.[1] Formal product evaluations also support the move to Windows 10. The National Information Assurance Partnership (NIAP) oversees evaluations of commercial IT products for use in National Security Systems. The Common Criteria evaluation of Windows 10 against the NIAP Protection Profile for General Purpose Operating Systems completed April 5, 2016. The Common Criteria evaluation of Windows 10 against the NIAP Protection Profile for Mobile Device Fundamentals completed January 29, 2016. NIST FIPS 140-2 validation of Windows 10 modules was completed on June 2, 2016 (see certificate numbers 2600, 2601, 2602, 2603, 2604, 2605, 2606, and 2607). Using a Secure Host Baseline is one of NSA Information Assurance's top 10 mitigation strategies. The DoD Secure Host Baseline also exemplifies other IAD top 10 mitigation strategies such as using application whitelisting, enabling anti-exploitation features, and using the latest version of the operating system and applications. About this repository This repository hosts Group Policy objects, compliance checks, and configuration tools in support of the DoD Secure Host Baseline (SHB) framework for Windows 10. 
Administrators of National Security Systems, such as those who are part of the Defense Industrial Base, can leverage this repository in lieu of access to the DoD SHB framework for Windows 10 which requires a Common Access Card (CAC) or Personal Identification Verification (PIV) smart card to access. Questions or comments can be submitted to the repository issue tracker or posted on Windows 10 Secure Host Baseline project forums on Software Forge which requires a CAC or PIV smart card to access. Link: https://github.com/iadgov/Secure-Host-Baseline
-
This would also be worth trying: https://github.com/MooseDojo/apt2
-
Understanding Systemd Linux distributions are adopting or planning to adopt the systemd init system quickly. systemd is a suite of system management daemons, libraries, and utilities designed as a central management and configuration platform for the Linux computer operating system. Described by its authors as a "basic building block" for an operating system, systemd primarily aims to replace the Linux init system (the first process executed in user space during the Linux startup process) inherited from UNIX System V and Berkeley Software Distribution (BSD). The name systemd adheres to the Unix convention of making daemons easier to distinguish by having the letter d as the last letter of the filename. systemd is designed for Linux and programmed exclusively for the Linux API. It is published as free and open-source software under the terms of the GNU Lesser General Public License (LGPL) version 2.1 or later. The design of systemd generated significant controversy within the free software community, leading critics to argue that systemd's architecture violates the Unix philosophy and that it will eventually form a system of interlocking dependencies. However, as of 2015 most major Linux distributions have adopted it as their default init system. Lennart Poettering and Kay Sievers, the software engineers who initially developed systemd, sought to surpass the efficiency of the init daemon in several ways. They wanted to improve the software framework for expressing dependencies, to allow more processing to be done concurrently or in parallel during system booting, and to reduce the computational overhead of the shell. Poettering describes systemd development as "never finished, never complete, but tracking progress of technology". 
In May 2014, Poettering further defined systemd as aiming to unify "pointless differences between distributions", by providing the following three general functions: A system and service manager (manages both the system, as by applying various configurations, and its services) A software platform (serves as a basis for developing other software) The glue between applications and the kernel (provides various interfaces that expose functionalities provided by the kernel) systemd is not just the name of the init daemon but also refers to the entire software bundle around it, which, in addition to the systemd init daemon, includes the daemons journald, logind and networkd, and many other low-level components. In January 2013, Poettering described systemd not as one program, but rather as a large software suite that includes 69 individual binaries. As an integrated software suite, systemd replaces the startup sequences and runlevels controlled by the traditional init daemon, along with the shell scripts executed under its control. systemd also integrates many other services that are common on Linux systems by handling user logins, the system console, device hotplugging, scheduled execution (replacing cron), logging, hostnames and locales. Like the init daemon, systemd is a daemon that manages other daemons, which, including systemd itself, are background processes. systemd is the first daemon to start during booting and the last daemon to terminate during shutdown. The systemd daemon serves as the root of the user space's process tree; the first process (pid 1) has a special role on Unix systems, as it receives a SIGCHLD signal when a daemon process (which has detached from its parent) terminates. 
Therefore, the first process is particularly well suited to monitoring daemons; systemd attempts to improve in that particular area over the traditional approach, which would usually not restart daemons automatically but only launch them once without further monitoring. systemd executes elements of its startup sequence in parallel, which is faster than the traditional startup sequence’s sequential approach. For inter-process communication (IPC), systemd makes Unix domain sockets and D-Bus available to the running daemons. The state of systemd itself can also be preserved in a snapshot for future recall.

systemd records initialization instructions for each daemon in a configuration file (referred to as a “unit file”) that uses a declarative language, replacing the traditionally used per-daemon startup shell scripts. Unit file types include service, socket, device, mount, automount, swap, target, path, timer (which can be used as a cron-like job scheduler), snapshot, slice and scope.

Full article: https://n0where.net/understanding-systemd/
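To make the declarative unit-file syntax concrete, here is a minimal, hypothetical service unit (the unit name, binary path, and options are invented for illustration; real units live under /etc/systemd/system/ or /usr/lib/systemd/system/):

```ini
# /etc/systemd/system/exampled.service -- hypothetical unit for illustration
[Unit]
Description=Example daemon
# Dependency framework: only start once the network target is reached
After=network.target

[Service]
# The ExecStart process is the main daemon process
Type=simple
ExecStart=/usr/local/bin/exampled --no-fork
# systemd (PID 1) supervises the daemon and restarts it on failure,
# unlike a SysV script that launches it once and forgets it
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The unit declares what to run and when; supervision, restarts, and ordering are handled by systemd itself rather than by shell-script logic.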
-
Forum | Hash Manager | Hash Finder | Hash Verifier

InsidePro Software offers professional and free solutions for recovering passwords from hashes!

Our Forum unites the world's best experts in hash and password recovery; its features:
– Devoted to recovering passwords from hashes of all types.
– Here you can always get help recovering your passwords.
– The forum already contains over 200 thousand messages!

Hash Manager is a solution for recovering passwords from hashes; its features:
– Supports over 450 hashing algorithms.
– Contains over 70 additional utilities for handling hashes, passwords, and dictionaries.
– Optimized for working with large hash lists.
– Comes in a 64-bit version, which is much faster for many algorithms.
– Supports an unlimited number of loadable hashes, as well as dictionaries, rules, and masks.
– Supports all of the most efficient hash attacks.
– Supports multithreading.
– Recovers passwords in Unicode.
– Has a modular architecture.
– And much more!

The Hash Finder service is designed for looking up hashes in a huge database; its features:
– Does not require registration.
– Supports over 100 hashing algorithms, including salted hashes.
– Supports hash list lookup.
– Contains only real hashes and passwords (over 1 billion records).
– Automatically detects the algorithm.
– For mixed lists, returns results for each algorithm separately.
– Accumulates found hashes in a queue, which is continuously being processed.
– Acquires new real hashes and passwords on a daily basis.
– The service has checked over 10 billion hashes!

The Hash Verifier service is designed for automatic verification of hashes and passwords; its features:
– Does not require registration.
– Supports all of the most popular hashing algorithms.
– Supports hash list verification (up to 1000 lines).
– Stores links to successful verifications for a specified amount of time.
– Supports user names and salts in hex format.
– The service has verified over 80 thousand paid hashes!

Source: http://www.insidepro.com/
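The "automatically detects algorithm" feature can only ever be a heuristic, since a raw hex digest carries no algorithm identifier. A naive sketch of how such detection might work (my own illustration, not InsidePro's actual logic) is to map digest length to candidate algorithms:

```python
# Naive hash-type guesser: maps hex-digest length to candidate algorithms.
# Illustration only -- many algorithms share output lengths (e.g. MD5 vs
# NTLM), which is why a real service must report results per algorithm.
CANDIDATES = {
    32: ["MD5", "NTLM", "MD4"],
    40: ["SHA-1", "RIPEMD-160"],
    64: ["SHA-256", "SHA3-256"],
    128: ["SHA-512", "Whirlpool"],
}

def guess_algorithms(digest: str) -> list:
    digest = digest.strip().lower()
    # A raw digest is hex-only; anything else needs format-specific parsing
    if not digest or not all(c in "0123456789abcdef" for c in digest):
        return []
    return CANDIDATES.get(len(digest), [])

if __name__ == "__main__":
    # MD5 of the empty string: 32 hex chars, so MD5/NTLM/MD4 are all plausible
    print(guess_algorithms("d41d8cd98f00b204e9800998ecf8427e"))
```

A salted or iterated format (e.g. `hash:salt`) defeats this simple length check, which is one reason lookup services also inspect separators and character sets.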
-
Matthew Green in Uncategorized
November 24, 2016
2,305 Words

The limitations of Android N Encryption

Over the past few years we’ve heard more about smartphone encryption than, quite frankly, most of us expected to hear in a lifetime. We learned that encryption, done correctly, can slow down even sophisticated decryption attempts. We’ve also learned that incorrect implementations can undo most of that security. In other words, phone encryption is an area where details matter.

For the past few weeks I’ve been looking a bit at the details of Android Nougat’s new file-based encryption to see how well they’ve addressed some of those details in their latest release. The answer, unfortunately, is that there’s still lots of work to do. In this post I’m going to talk about a bit of that.

(As an aside: the inspiration for this post comes from Grugq, who has been loudly and angrily trying to work through these kinks to develop a secure Android phone. So credit where credit is due.)

Background: file and disk encryption

Disk encryption is much older than smartphones. Indeed, early encrypting filesystems date back at least to the early 1990s, and proprietary implementations may go back before that. Even in the relatively new area of PC operating systems, disk encryption has been a built-in feature since the early 2000s.

The typical PC disk encryption system operates as follows. At boot time you enter a password. This is fed through a key derivation function to derive a cryptographic key. If a hardware co-processor is available (e.g., a TPM), your key is further strengthened by “tangling” it with some secrets stored in the hardware. This helps to lock encryption to a particular device. The actual encryption can be done in one of two different ways:

Full Disk Encryption (FDE) systems (like TrueCrypt, BitLocker and FileVault) encrypt disks at the level of disk sectors.
This is an all-or-nothing approach, since the encryption drivers won’t necessarily have any idea what files those sectors represent. At the same time, FDE is popular — mainly because it’s extremely easy to implement.

File-based Encryption (FBE) systems (like EncFS and eCryptFS) encrypt individual files. This approach requires changes to the filesystem itself, but has the benefit of allowing fine-grained access controls, where individual files are encrypted using different keys.

Most commercial PC disk encryption software has historically opted for the full-disk encryption (FDE) approach. Mostly this is just a matter of expediency: FDE is simply significantly easier to implement. But philosophically, it also reflects a particular view of what disk encryption was meant to accomplish. In this view, encryption is an all-or-nothing proposition. Your machine is either on or off; accessible or inaccessible. As long as you make sure to have your laptop stolen only when it’s off, disk encryption will keep you perfectly safe.

So what does this have to do with Android?

Android’s early attempts at adding encryption to its phones followed the standard PC full-disk encryption paradigm. From Android 4.4 (KitKat) through Android 6.0 (Marshmallow), Android systems shipped with a kernel device mapper called dm-crypt designed to encrypt disks at the sector level. This represented a quick and dirty way to bring encryption to Android phones, and it made sense — if you believe that phones are just very tiny PCs.

The problem is that smartphones are not PCs. The major difference is that smartphone users are never encouraged to shut down their device. In practice this means that — after you enter a passcode once after boot — normal users spend their whole day walking around with all their cryptographic keys in RAM.
Since phone batteries last a day or more (a long time compared to laptops), encryption doesn’t really offer much to protect you against an attacker who gets their hands on your phone during this time.

Of course, users do lock their smartphones. In principle, a clever implementation could evict sensitive cryptographic keys from RAM when the device locks, then re-derive them the next time the user logs in. Unfortunately, Android doesn’t do this — for the very simple reason that Android users want their phones to actually work. Without cryptographic keys in RAM, an FDE system loses access to everything on the storage drive. In practice this turns it into a brick. For this very excellent reason, once you boot an Android FDE phone it will never evict its cryptographic keys from RAM. And this is not good.

So what’s the alternative?

Android is not the only game in town when it comes to phone encryption. Apple, for its part, also gave this problem a lot of thought and came to a subtly different solution. Starting with iOS 4, Apple included a “data protection” feature to encrypt all data stored on a device. But unlike Android, Apple doesn’t use the full-disk encryption paradigm. Instead, it employs a file-based encryption approach that individually encrypts each file on the device.

In the Apple system, the contents of each file are encrypted under a unique per-file key (metadata is encrypted separately). The file key is in turn encrypted with one of several “class keys” that are derived from the user passcode and some hardware secrets embedded in the processor.

iOS data encryption. Source: iOS Security Guide.

The main advantage of the Apple approach is that instead of a single FDE key to rule them all, Apple can implement fine-grained access control for individual files. To enable this, iOS provides an API developers can use to specify which class key to use in encrypting any given file. The available “protection classes” include:

– Complete protection.
Files encrypted with this class key can only be accessed when the device is powered up and unlocked. To ensure this, the class key is evicted from RAM a few seconds after the device locks.

– Protected Until First User Authentication. Files encrypted with this class key are protected until the user first logs in (after a reboot); after that, the key remains in memory.

– No protection. These files are accessible even when the device has been rebooted and the user has not yet logged in.

By giving developers the option to individually protect different files, Apple made it possible to build applications that can work while the device is locked, while providing strong protection for files containing sensitive data. Apple even created a fourth option for apps that simply need to create new encrypted files when the class key has been evicted from RAM. This class uses public key encryption to write new files. This is why you can safely take pictures even when your device is locked.

Apple’s approach isn’t perfect. What it is, however, is the obvious result of a long and careful thought process. All of which raises the following question…

Why the hell didn’t Android do this as well?

The short answer is that Android is trying to. Sort of. Let me explain.

As of Android 7.0 (Nougat), Google has moved away from full-disk encryption as the primary mechanism for protecting data at rest. If you set a passcode on your device, Android N systems can be configured to support a more Apple-like approach that uses file encryption. So far so good.

The new system is called Direct Boot, so named because it addresses what Google obviously saw as a fatal problem with Android FDE — namely, that FDE-protected phones are useless bricks following a reboot. The main advantage of the new model is that it allows phones to access some data even before you enter the passcode. This is enabled by providing developers with two separate “encryption contexts”:

– Credential encrypted storage.
Files in this area are encrypted under the user’s passcode, and won’t be available until the user enters their passcode (once).

– Device encrypted storage. These files are not encrypted under the user’s passcode (though they may be encrypted using hardware secrets). Thus they are available right after boot, even before the user enters a passcode.

Direct Boot even provides separate encryption contexts for different users on the phone — something I’m not quite sure what to do with. But sure, why not?

If Android is making all these changes, what’s the problem?

One thing you might have noticed is that where Apple had four categories of protection, Android N only has two. And it’s the two missing categories that cause the problems. These are the “complete protection” categories that allow the user to lock their device following first user authentication — and evict the keys from memory.

Of course, you might argue that Android could provide this by forcing application developers to switch back to “device encrypted storage” following a device lock. The problem with this idea is twofold. First, Android documentation and sample code are explicit that this isn’t how things work. Moreover, a quick read of the documentation shows that, even if you wanted to, there is no unambiguous way for Android to tell applications when the system has been re-locked. If keys are evicted when the device is locked, applications will unexpectedly find their file accesses returning errors. Even system applications tend to do badly when this happens.

And of course, this assumes that Android N will even try to evict keys when you lock the device — and the current filesystem encryption code makes no real attempt to do so.

While the above is bad, it’s important to stress that the real problem here is not really in the cryptography. The problem is that since Google is not giving developers proper guidance, the company may be locking Android into years of insecurity.
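To make the class-key distinction concrete, here is a toy Python model of the per-file-key / class-key design described above. Everything is simplified for illustration (the XOR "wrap" stands in for a real key-wrapping cipher, and all names and parameters are invented); the point is only to show why evicting a class key on lock makes "complete protection" files unreadable while "after first auth" files keep working:

```python
import hashlib
import hmac
import os

class ToyKeyManager:
    """Toy model of iOS-style protection classes. NOT real crypto."""

    def __init__(self, passcode: bytes, hw_secret: bytes):
        # Class keys derived from the passcode "tangled" with a hardware secret
        def kdf(label: bytes) -> bytes:
            return hashlib.pbkdf2_hmac("sha256", passcode, hw_secret + label, 10_000)
        self._class_keys = {
            "complete": kdf(b"complete"),
            "after_first_auth": kdf(b"after_first_auth"),
        }

    def _stream(self, class_key: bytes) -> bytes:
        # 32-byte keystream for the simplified XOR wrap below
        return hmac.new(class_key, b"wrap", hashlib.sha256).digest()

    def wrap_file_key(self, file_key: bytes, cls: str) -> bytes:
        # Each file gets its own key; only the wrapped form is stored on disk
        return bytes(a ^ b for a, b in zip(file_key, self._stream(self._class_keys[cls])))

    def unwrap_file_key(self, wrapped: bytes, cls: str) -> bytes:
        key = self._class_keys.get(cls)
        if key is None:
            raise PermissionError("class key '%s' evicted (device locked)" % cls)
        return bytes(a ^ b for a, b in zip(wrapped, self._stream(key)))

    def lock(self):
        # On lock, only the "complete protection" class key leaves RAM;
        # the "after first auth" key stays so background apps keep working
        self._class_keys.pop("complete", None)
```

After `lock()`, unwrapping a "complete" file key raises PermissionError while "after_first_auth" keys still unwrap — exactly the fine-grained behavior Android N's two-context model cannot express.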
Without (even a half-baked) solution to define a “complete” protection class, Android app developers can’t build their apps correctly to support the idea that devices can lock. Even if Android O gets around to implementing key eviction, the existing legacy app base won’t be able to handle it — since this will break a million apps that have implemented their security according to Android’s current recommendations.

In short: this is a thing you get right from the start, or you don’t do at all. It looks like — for the moment — Android isn’t getting it right.

Are keys that easy to steal?

Of course, it’s reasonable to ask whether having keys in RAM is that big of a concern in the first place. Can these keys actually be accessed?

The answer to that question is a bit complicated. First, if you’re up against somebody with a hardware lab and forensic expertise, the answer is almost certainly “yes”. Once you’ve entered your passcode and derived the keys, they aren’t stored in some magically secure part of the phone. People with the ability to access RAM or the bus lines of the device can potentially nick them. But that’s a lot of work.

From a software perspective, it’s even worse. A software attack would require a way to get past the phone’s lockscreen in order to get running code on the device. In older (pre-N) versions of Android the attacker might then need to escalate privileges to get access to kernel memory. Remarkably, Android N doesn’t even store its disk keys in the kernel — instead they’re held by the “vold” daemon, which runs as user “root” in userspace. This doesn’t make exploits trivial, but it certainly isn’t the best way to handle things.

Of course, all of this is mostly irrelevant. The main point is that if the keys are loaded, you don’t need to steal them. If you have a way to get past the lockscreen, you can just access files on the disk.

What about hardware?
Although a bit of a tangent, it’s worth noting that many high-end Android phones use some sort of trusted hardware to enable encryption. The most common approach is to use a trusted execution environment (TEE) running with ARM TrustZone. This definitely solves a problem. Unfortunately, it’s not quite the same problem as discussed above.

ARM TrustZone — when it works correctly, which is not guaranteed — forces attackers to derive their encryption keys on the device itself, which should make offline dictionary attacks on the password much harder. In some cases, this hardware can be used to cache the keys and reveal them only when you input a biometric such as a fingerprint.

The problem here is that in Android N, this only helps at the time the keys are initially derived. Once that happens (i.e., following your first login), the hardware doesn’t appear to do much: the resulting derived keys seem to live forever in normal userspace RAM. While it’s possible that specific phones (e.g., Google’s Pixel, or Samsung devices) implement additional countermeasures, on stock Android N phones hardware doesn’t save you.

So what does it all mean?

How you feel about this depends on whether you’re a “glass half full” or “glass half empty” kind of person.

If you’re the optimistic type, you’ll point out that Android is clearly moving in the right direction. And while there’s a lot of work still to be done, even a half-baked implementation of file-based encryption is better than the last generation of dumb FDE Android encryption. Also: you probably think clowns are nice.

On the other hand, you might notice that this is a pretty goddamn low standard. In other words, in 2016 Android is still struggling to deploy encryption that achieves (lock screen) security that Apple figured out six years ago. And they’re not even getting it right. That doesn’t bode well for the long-term security of Android users.
And that’s a shame, because as many have pointed out, the users who rely on Android phones are disproportionately poorer and more at risk. By treating encryption as a relatively low priority, Google is basically telling these people that they shouldn’t get the same protections as other users. This may keep the FBI off Google’s back, but in the long term it’s bad judgement on Google’s part.

Source: https://blog.cryptographyengineering.com/2016/11/24/android-n-encryption/
-
Command Injection/Elevation – Environment Variables Revisited
Yotam Gottesman
November 24, 2016

Windows environment variables can be used to run commands, and can also be used to bypass UAC, allowing an attacker with limited privileges to take complete control of the system. This technique leverages a rather unusual scenario within the Windows OS. This is a continuation of our research as described in a previous post: Elastic Boundaries on BreakingMalware.com

Background and Research Basis

In the last post on this topic, we demonstrated that changing a location referred to by environment variables can divert file operations from a legitimate path to a possibly malicious one. Looking through the registry suggests different scenarios and possibilities that exist for environment variable expansion (ab)use. Let’s continue from where we left off last time.

Scenario 6: Command Injection

Assumption: If a command contains an environment variable, it can be expanded into multiple executable commands.
Possibility: An attacker can set up commands that will be executed when a different, unrelated file is opened or otherwise accessed.
Application: A regular text file (.txt) opens with notepad.exe. The command to open such a file is:

%SystemRoot%\System32\NOTEPAD.EXE %1

Effectively running this command:

C:\Windows\System32\NOTEPAD.EXE <filename.txt>

Now, by using this command:

setx SystemRoot "C:\Windows\System32\cmd.exe && C:\Windows"

The resulting line changes to:

C:\Windows\System32\cmd.exe && C:\Windows\System32\NOTEPAD.EXE <filename.txt>

Which means a command window opens before Notepad is called. "&&" means Notepad will run after the command exits, if it succeeds. There are other operators that could be used here instead.
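The expansion step in Scenario 6 can be reproduced off-box with Python's `ntpath.expandvars`, which implements Windows-style %VAR% expansion and is importable on any platform (the poisoned SystemRoot value below is the one from the setx line above):

```python
import ntpath  # Windows path semantics, available on every OS
import os

# The command template the registry stores for .txt files
template = r"%SystemRoot%\System32\NOTEPAD.EXE %1"

# Simulate the attacker's `setx SystemRoot ...` by poisoning the variable
os.environ["SystemRoot"] = r"C:\Windows\System32\cmd.exe && C:\Windows"

# Expansion now yields two commands chained with && instead of one path
# (the unterminated trailing %1 placeholder is left as-is)
expanded = ntpath.expandvars(template)
print(expanded)
# C:\Windows\System32\cmd.exe && C:\Windows\System32\NOTEPAD.EXE %1
```

The shell later substitutes the file name for %1, so the injected cmd.exe runs first and the legitimate Notepad command runs afterwards — exactly the behavior described above.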
A command string, containing environment variables

Scenario 7: Parameter Manipulation

The Windows registry contains commands that parse and expand a string that contains multiple percent signs (‘%’):

A parameter string, vulnerable to fake variable expansion

Assumption: Anything between two percent signs is considered an environment variable and could be expanded as one.
Possibility: An attacker can set an environment-variable-like string to be expanded by Windows, manipulating command parameters.
Application: Set an environment variable named 1", and point it to any DLL file. Quote symbols must be escaped:

setx "1\"," "C:\Temp\evil.dll\","

Result: Running any .cpl file on the system will run evil.dll instead.

Scenario 8: Elevation Using Environment Variable Expansion. Again.

Right-clicking “My Computer” (or “This PC” on Windows 10) and choosing “Manage” from the context menu causes the “Computer Management” console to open with elevated privileges and without showing the UAC prompt. Behind the scenes, this behavior is defined by the verb “Manage” of the computer item’s class, as can be seen in the registry at this path:

HKCR\CLSID\{20D04FE0-3AEA-1069-A2D8-08002B30309D}\shell\Manage\command

The value for this key is:

%SystemRoot%\system32\CompMgmtLauncher.exe

Assumption: CompMgmtLauncher.exe runs with elevated privileges.
Possibility: An attacker can take control of this command by setting SystemRoot and gain elevated privileges.
Result: Failure. Our assumption is incorrect at this point. Changing the path did cause a different executable to launch instead of CompMgmtLauncher.exe, but it was running with medium integrity (i.e., not elevated).
Further Research: So, what does CompMgmtLauncher.exe do to achieve elevated status?
The Anomaly: CompMgmtLauncher.exe actually runs another link in the chain – a .lnk file, found in the Start Menu’s Administrative Tools folder:

C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Administrative Tools\Computer Management.lnk

This link file points to the already familiar mmc.exe in Windows\System32, giving it an argument in the form of a .msc file, specifically compmgmt.msc. It appears that running mmc.exe by itself shows the UAC prompt, but running it with some specific .msc files does not.

Assumption: CompMgmtLauncher.exe runs the file that the .lnk file points to with elevated privileges.
Possibility: An attacker can control the target of the .lnk file and bypass UAC.
Result: Failure. Not quite there yet. Writing to the directory and over the .lnk file requires high integrity to begin with.
Further Research: The folder of interest is referenced by two environment variables:

ALLUSERSPROFILE=C:\ProgramData
ProgramData=C:\ProgramData

Assumption: CompMgmtLauncher.exe uses one of these variables to access the .lnk file.
Possibility: An attacker can change one or both of the mentioned environment variables and gain control over the called .lnk file.
Application:
– Set ProgramData to point to a directory other than C:\ProgramData
– Create the correct directory tree: Microsoft\Windows\Start Menu\Programs\Administrative Tools
– Create a link (.lnk) that points to a string containing a command
– Call “Manage” on “My Computer”/”This PC”, or run CompMgmtLauncher.exe
Result: Success.

Elevated command window using CompMgmtLauncher

Conclusion and Thoughts

The methods described here are not surprising news given previous findings. They also rely on an attacker having some access to the machine and possessing some privileges to initiate an attack. Nevertheless, environment variables can aid attackers in compromising a system, and they provide some meaningful additions to their toolset. The images in this post are taken from a machine running Windows 7 32-bit.
The methods have been tested on Windows 7 and Windows 10, in both 32- and 64-bit versions, and require no adjustments. There is still a lot more research to be conducted on the matter, on Windows and on other operating systems.

https://github.com/BreakingMalwareResearch/eleven

Source: https://breakingmalware.com/vulnerabilities/command-injection-and-elevation-environment-variables-revisited/
-
Quickstart

DriverBuddy Installation Instructions
– Copy the DriverBuddy folder and DriverBuddy.py file into the IDA plugins folder, e.g. C:\Program Files (x86)\IDA 6.8\plugins or wherever you installed IDA

DriverBuddy Usage Instructions
– Start IDA and open a Windows kernel driver
– Go to Edit->Plugins and select Driver Buddy, or press ctrl-alt-d
– Check the Output window for DriverBuddy analysis results
– To decode IOCTLs, highlight the suspected IOCTL and press ctrl-alt-i

DriverBuddy

DriverBuddy is an IDAPython plugin that helps automate some of the tedium surrounding the reverse engineering of Windows kernel drivers. It has a number of handy features, such as:
– Identifying the type of driver
– Locating DispatchDeviceControl and DispatchInternalDeviceControl functions
– Populating common structs for WDF and WDM drivers
– Attempting to identify and label structs like the IRP and IO_STACK_LOCATION
– Labeling calls to WDF functions that would normally be unlabeled
– Finding known IOCTL codes and decoding them
– Flagging functions prone to misuse

Link: https://github.com/nccgroup/DriverBuddy
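IOCTL decoding is just bit-unpacking of the 32-bit control code laid out by the Windows CTL_CODE macro: (DeviceType << 16) | (Access << 14) | (Function << 2) | Method. A standalone Python sketch of that decoding (my own illustration, not DriverBuddy's actual code):

```python
# Decode a Windows IOCTL code into its CTL_CODE components:
#   CTL_CODE(DeviceType, Function, Method, Access) =
#       (DeviceType << 16) | (Access << 14) | (Function << 2) | Method
METHODS = {0: "METHOD_BUFFERED", 1: "METHOD_IN_DIRECT",
           2: "METHOD_OUT_DIRECT", 3: "METHOD_NEITHER"}
ACCESSES = {0: "FILE_ANY_ACCESS", 1: "FILE_READ_ACCESS",
            2: "FILE_WRITE_ACCESS",
            3: "FILE_READ_ACCESS | FILE_WRITE_ACCESS"}

def decode_ioctl(code: int) -> dict:
    return {
        "DeviceType": (code >> 16) & 0xFFFF,  # e.g. 0x7 = FILE_DEVICE_DISK
        "Access": ACCESSES[(code >> 14) & 0x3],
        "Function": (code >> 2) & 0xFFF,
        "Method": METHODS[code & 0x3],
    }

if __name__ == "__main__":
    # IOCTL_DISK_GET_DRIVE_GEOMETRY = 0x00070000:
    # FILE_DEVICE_DISK, FILE_ANY_ACCESS, Function 0x0, METHOD_BUFFERED
    print(decode_ioctl(0x00070000))
```

In a driver, the Method field tells you how user buffers reach DispatchDeviceControl (buffered, direct, or raw pointers), which is exactly why decoding IOCTLs is the first step when hunting for misuse.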