Everything posted by Nytro
Agenda:
1. Introduction (Jason)
2. Compute Architecture Evolution (Jason)
3. Chip Level Architecture (Jason): subslices, slices, products
4. Gen Compute Architecture (Maiyuran): execution units
5. Instruction Set Architecture (Ken)
6. Memory Sharing Architecture (Jason)
7. Mapping Programming Models to Architecture (Jason)
8. Summary

Download slides: https://software.intel.com/sites/default/files/managed/89/92/Intel-Graphics-Architecture-ISA-and-microarchitecture.pdf
The Tor Phone prototype: a truly private smartphone? 29 NOV 2016 by Bill Camarda The Tor Project has long offered high-security alternatives for folk who are especially concerned about their privacy. But as the world goes mobile, and is increasingly accessed through smartphones, users become vulnerable to a whole new set of compromises. That’s where the Tor Phone prototype comes in – and it’s just been significantly improved. According to developer Mike Perry, Tor Phone aims: …to demonstrate that it is possible to build a phone that respects user choice and freedom, vastly reduces vulnerability surface, and sets a direction for the ecosystem with respect to how to meet the needs of high-security users. It’s also “meant to show that it is still possible to replace and modify your mobile phone’s operating system while retaining verified boot security – though only just barely”. Tor Phone starts with Copperhead OS, an open-source Android fork focused on security. As Perry writes: Copperhead is also the only Android ROM that supports Verified Boot, which prevents exploits from modifying the boot, system, recovery, and vendor device partitions… Copperhead has also extended this protection by preventing system applications from being overridden by Google Play Store apps, or from writing bytecode to writable partitions (where it could be modified and infected). Therein lies a huge obstacle to Tor Phone deployment, however. Together with Copperhead, Tor Phone installs the Orbot Tor proxy app, OrWall firewall, F-Droid alternative app repository, additional tools, and finally, Google Play (primarily, Perry says, so you can retrieve the Signal app for encrypted voice calling and instant messaging). Its components must install to the system partition. Therefore, says Perry: We must re-sign the Copperhead image and updates… to [maintain] system integrity from Verified Boot.
Unfortunately, only selected Google Nexus/Pixel devices let users control this with their own keys, while still supporting Verified Boot. So you can’t do this with your own cheap-o Android device, no matter how strong your Linux and related skills are – what’s more, a quick look at the directions confirms that setting up Tor Phone is non-trivial. You can jumpstart the process by purchasing a smartphone with Copperhead pre-installed – for the moment, of course, while supplies last. And, with the right hardware, says Perry, Tor Phone works: notwithstanding some “rough edges,” he relies on his right now. Why bother with all this? Perry and Tor argue that Google is increasingly moving to lock down the Android platform, claiming it’s the only way to overcome Android’s “fragmentation and resulting insecurity”. Tor argues instead for a strategy based on transparency: [As] more components and apps are moved to closed source versions, Google [reduces] its ability to resist the demand that backdoors be introduced. Those might come from nefarious governments, of course. But, in Ars Technica, Perry notes that untraceable backdoors might also be introduced by hackers purely interested in financial gain. This is less likely, he argues, if a mobile OS remains fully open… We are concerned that the freedom of users to use, study, share, and improve the operating system software on their phones is being threatened. If we lose these freedoms on mobile, we may never get them back. For Tor Phone to gain traction, it’ll probably need to run on more than a couple of high-end devices manufactured by Google itself. In Ars Technica, Perry stresses that Tor won’t enter the secure hardware business.
But someone could, he says, citing the crowdfunded Neo900 project as a model: What I’ve found is that posts like [his Tor Phone update] energise the Android hobbyist/free software ecosystem, and make us aware of each other and common purpose. If you’re thinking “sounds like there’s a long way to go,” Perry might agree. He named his current prototype “Mission Improbable”. But that’s big progress: he named the previous prototype “Mission Impossible”. Source: https://nakedsecurity.sophos.com/2016/11/29/the-tor-phone-prototype-a-truly-private-smartphone/
testssl.sh: Testing TLS/SSL encryption

testssl.sh is a free command line tool which checks a server's service on any port for support of TLS/SSL ciphers and protocols, as well as recent cryptographic flaws and more. Version 2.8 is being finalized. 2.6 is outdated, with more bugs and fewer features. Please check out 2.8rc3 on GitHub or download the stable 2.8rc3 version from here.

Key features:
- Clear output: you can tell easily whether anything is good or bad
- Ease of installation: it works for Linux, Darwin, FreeBSD and MSYS2/Cygwin out of the box: no need to install or configure anything, no gems, CPAN, pip or the like
- Flexibility: you can test any SSL/TLS enabled and STARTTLS service, not only webservers on port 443
- Toolbox: several command line options help you to run YOUR test and configure YOUR output
- Reliability: features are tested thoroughly
- Verbosity: if a particular check cannot be performed because of a missing capability on your client side, you'll get a warning
- Privacy: it's only you who sees the result, not a third party
- Freedom: it's 100% open source. You can look at the code, see what's going on, and you can change it. Heck, even the development is open (GitHub)

Link: https://testssl.sh/
Malware Sample Sources for Researchers

Malware researchers need to collect malware samples to research threat techniques and develop defenses. Researchers can collect such samples using honeypots. They can also download samples from known malicious URLs, or obtain malware samples from the following sources:
- Contagio Malware Dump: Free; password required
- Das Malwerk: Free
- FreeTrojanBotnet: Free; registration required
- KernelMode.info: Free; registration required
- MalShare: Free; registration required
- Malware.lu’s AVCaesar: Free; registration required
- MalwareBlacklist: Free; registration required
- Malware DB: Free
- Malwr: Free; registration required
- Open Malware: Free
- theZoo aka Malware DB: Free
- Virusign: Free
- VirusShare: Free

Be careful not to infect yourself when accessing and experimenting with malicious software! My other lists of online security resources outline Automated Malware Analysis Services and On-Line Tools for Malicious Website Lookups. Also, take a look at tips for sharing malware samples with other researchers. Updated November 28, 2016 Source: https://zeltser.com/malware-sample-sources/
ATM Insert Skimmers: A Closer Look KrebsOnSecurity has featured multiple stories about the threat from ATM fraud devices known as “insert skimmers,” wafer-thin data theft tools made to be completely hidden inside of a cash machine’s card acceptance slot. For a closer look at how stealthy insert skimmers can be, it helps to see videos of these things being installed and removed. Here’s a look at promotional sales videos produced by two different ATM insert skimmer peddlers. Traditional ATM skimmers are fraud devices made to be placed over top of the cash machine’s card acceptance slot, usually secured to the ATM with glue or double-sided tape. Increasingly, however, more financial institutions are turning to technologies that can detect when something has been affixed to the ATM. As a result, more fraudsters are selling and using insert skimming devices — which are completely hidden from view once inserted into an ATM. The fraudster demonstrating his insert skimmer in the short video above spends the first half of the demo showing how a regular bank card can freely move in and out of the card acceptance slot while the insert skimmer is nestled inside. Toward the end of the video, the scammer retrieves the insert skimmer using what appears to be a rather crude, handmade tool thin enough to fit inside a wallet. A sales video produced by yet another miscreant in the cybercrime underground shows an insert skimmer being installed and removed from a motorized card acceptance slot that has been fully removed from an ATM so that the fraud device can be seen even while it is inserted. In a typical setup, insert skimmers capture payment card data from the magnetic stripe on the backs of cards inserted into a hacked ATM, while a pinhole spy camera hidden above or beside the PIN pad records time-stamped video of cardholders entering their PINs. The data allows thieves to fabricate new cards and use PINs to withdraw cash from victim accounts.
Covering the PIN pad with your hand blocks any hidden camera from capturing your PIN — and hidden cameras are used on the vast majority of the more than three dozen ATM skimming incidents that I’ve covered here. Shockingly, few people bother to take this simple and effective step, as detailed in this skimmer tale from 2012, wherein I obtained hours’ worth of video seized from two ATM skimming operations and saw customer after customer walk up, insert their cards and punch in their digits — all in the clear. Once you understand how stealthy these ATM fraud devices are, it’s difficult to use a cash machine without wondering whether the thing is already hacked. The truth is most of us probably have a better chance of getting physically mugged after withdrawing cash than encountering a skimmer in real life. However, here are a few steps we can all take to minimize the success of skimmer gangs:
- Cover the PIN pad while you enter your PIN.
- Keep your wits about you when you’re at the ATM, and avoid dodgy-looking and standalone cash machines in low-lit areas, if possible.
- Stick to ATMs that are physically installed in a bank. Stand-alone ATMs are usually easier for thieves to hack into.
- Be especially vigilant when withdrawing cash on the weekends; thieves tend to install skimming devices on a weekend — when they know the bank won’t be open again for more than 24 hours.
- Keep a close eye on your bank statements, and dispute any unauthorized charges or withdrawals immediately.

If you liked this piece and want to learn more about skimming devices, check out my series All About Skimmers. Source: https://krebsonsecurity.com/2016/11/atm-insert-skimmers-a-closer-look/
Monday, 28 November 2016 All your Paypal OAuth tokens belong to me - localhost for the win tl;dr I was able to hijack the OAuth tokens of EVERY Paypal OAuth application with a really simple trick. Introduction If you have been following this blog you might have got tired of how many times I have stressed the importance of the redirect_uri parameter in the OAuth flow. This simple parameter might be a source of many headaches for any maintainer of OAuth installations, be it a client or a server. At the risk of repeating myself, here are two simple suggestions that may help you stay away from trouble (you can always skip this part and go directly to the Paypal Vulnerability section): If you are building an OAuth client, thou shalt register a redirect_uri as specific as you can, i.e. if your OAuth client callback is https://yourouauthclient.com/oauth/oauthprovider/callback then DO register https://yourouauthclient.com/oauth/oauthprovider/callback, NOT JUST https://yourouauthclient.com/ or https://yourouauthclient.com/oauth. If you are still not convinced, here is how I hacked Google leveraging this mistake. The second suggestion is: the ONLY safe validation method for redirect_uri the authorization server should adopt is exact matching. Although other methods offer client developers desirable flexibility in managing their application’s deployment, they are exploitable. From “OAuth 2 In Action” by Justin Richer and Antonio Sanso, Copyright 2016. Again, here you can find examples of providers that were vulnerable to this attack: Egor Homakov hacking Github, me/myself hacking Facebook bypassing the regex redirect_uri validation. Paypal Vulnerability So after this long premise, the legitimate question is: what was wrong with Paypal? Basically, like many online internet services, Paypal offers the option to register your own Paypal application via a Dashboard. So far so good :).
The better news (for Paypal) is that they actually employ an exact matching policy for redirect_uri. So what was wrong? While testing my own OAuth client I noticed something a bit fishy. The easiest way to describe it is using an OAuth application from Paypal itself (remember, the vulnerability I found is universal, aka it worked with every client!). Basically Paypal has set up a Demo Paypal application to showcase their OAuth functionalities. The initial OAuth request looked like: https://www.paypal.com/signin/authorize?client_id=AdcKahCXxhLAuoIeOotpvizsVOX5k2A0VZGHxZnQHoo1Ap9ChOV0XqPdZXQt&response_type=code&scope=openid profile email address phone https://uri.paypal.com/services/paypalattributes https://uri.paypal.com/services/paypalattributes/business https://uri.paypal.com/services/expresscheckout&redirect_uri=https://demo.paypal.com/loginsuccessful&nonce=&newUI=Y As you can see, the registered redirect_uri for this application is https://demo.paypal.com/loginsuccessful. What I found out is that the Paypal Authorization Server was also accepting localhost as redirect_uri. So https://www.paypal.com/signin/authorize?client_id=AdcKahCXxhLAuoIeOotpvizsVOX5k2A0VZGHxZnQHoo1Ap9ChOV0XqPdZXQt&response_type=code&scope=openid profile email address phone https://uri.paypal.com/services/paypalattributes https://uri.paypal.com/services/paypalattributes/business https://uri.paypal.com/services/expresscheckout&redirect_uri=https://localhost&nonce=&newUI=Y was still a valid request, and the authorization code was then delivered back to localhost. Cute, right?
But still not a vulnerability. Well, the next natural step was to create a DNS entry for my website looking like http://localhost.intothesymmetry.com/ and try: https://www.paypal.com/signin/authorize?client_id=AdcKahCXxhLAuoIeOotpvizsVOX5k2A0VZGHxZnQHoo1Ap9ChOV0XqPdZXQt&response_type=code&scope=openid profile email address phone https://uri.paypal.com/services/paypalattributes https://uri.paypal.com/services/paypalattributes/business https://uri.paypal.com/services/expresscheckout&redirect_uri=http://localhost.intothesymmetry.com/&nonce=&newUI=Y and you know what? BINGO. So it really looks like that even if Paypal did actually perform exact matching validation, localhost was a magic word and it overrode the validation completely!!! Worth repeating: this vulnerability worked for any Paypal OAuth client, hence it was universal, making my initial claim All your Paypal tokens belong to me - localhost for the win not so crazy anymore. For more follow me on Twitter.

Disclosure timeline:
- 08-09-2016 - Reported to Paypal security team.
- 26-09-2016 - Paypal replied this is not a vulnerability!!
- 26-09-2016 - I replied to Paypal saying ok, no problem. Are you sure you do not want to give an extra look into it?
- 28-09-2016 - Paypal replied they will give it another try.
- 07-11-2016 - Paypal fixed the issue (bounty awarded).
- 28-11-2016 - Public disclosure.

Acknowledgement I would like to thank the Paypal Security team for the constant and quick support. Posted by Antonio Sanso at 02:00 Source: http://blog.intothesymmetry.com/2016/11/all-your-paypal-tokens-belong-to-me.html
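The exact-matching rule advocated above can be sketched in a few lines. This is a minimal illustration, not PayPal's actual code: the registered URI is taken from the article's example, while the client name and registry structure are hypothetical.

```python
# Hypothetical in-memory registry of OAuth clients and their registered
# redirect URIs; a real authorization server would load these from storage.
REGISTERED_REDIRECT_URIS = {
    "demo-client": {"https://demo.paypal.com/loginsuccessful"},
}

def redirect_uri_allowed(client_id: str, redirect_uri: str) -> bool:
    """Exact matching: the presented redirect_uri must equal a registered
    value byte for byte. No prefix matching, no substring matching, and
    crucially no special-cased hosts such as 'localhost'."""
    return redirect_uri in REGISTERED_REDIRECT_URIS.get(client_id, set())
```

With this check, both `https://localhost` and the lookalike `http://localhost.intothesymmetry.com/` are rejected for the demo client, which is exactly the behavior the "magic word" bypass violated.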
DiskFiltration: Data Exfiltration from Speakerless Air-Gapped Computers via Covert Hard Drive Noise Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, Yuval Elovici (Submitted on 11 Aug 2016) Air-gapped computers are disconnected from the Internet physically and logically. This measure is taken in order to prevent the leakage of sensitive data from secured networks. In the past, it has been shown that malware can exfiltrate data from air-gapped computers by transmitting ultrasonic signals via the computer's speakers. However, such acoustic communication relies on the availability of speakers on a computer. In this paper, we present 'DiskFiltration,' a covert channel which facilitates the leakage of data from an air-gapped computer via acoustic signals emitted from its hard disk drive (HDD). Our method is unique in that, unlike other acoustic covert channels, it doesn't require the presence of speakers or audio hardware in the air-gapped computer. A malware installed on a compromised machine can generate acoustic emissions at specific audio frequencies by controlling the movements of the HDD's actuator arm. Digital information can be modulated over the acoustic signals and then be picked up by a nearby receiver (e.g., smartphone, smartwatch, laptop, etc.). We examine the HDD anatomy and analyze its acoustical characteristics. We also present signal generation and detection, and data modulation and demodulation algorithms. Based on our proposed method, we developed a transmitter on a personal computer and a receiver on a smartphone, and we provide the design and implementation details. We also evaluate our covert channel on various types of internal and external HDDs in different computer chassis and at various distances.
With DiskFiltration we were able to covertly transmit data (e.g., passwords, encryption keys, and keylogging data) from air-gapped computers to a smartphone at an effective bit rate of 180 bits/minute (10,800 bits/hour) and a distance of up to two meters (six feet). Subjects: Cryptography and Security (cs.CR) Cite as: arXiv:1608.03431 [cs.CR] (or arXiv:1608.03431v1 [cs.CR] for this version) Submission history From: Mordechai Guri [view email] [v1] Thu, 11 Aug 2016 12:06:12 GMT (3304kb) Source: https://arxiv.org/abs/1608.03431
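The modulation idea can be illustrated with a toy binary frequency-shift keying scheme: each bit is transmitted as one of two acoustic tones produced by the actuator arm's seek pattern, and the receiver recovers bits by thresholding the measured peak frequency. The frequency values below are arbitrary placeholders, not the ones used in the paper.

```python
# Illustrative binary frequency-shift keying (B-FSK) for an acoustic
# covert channel. Frequencies are hypothetical, chosen only for the sketch.
FREQ_ZERO = 2100.0  # Hz, tone emitted for a '0' bit (assumed value)
FREQ_ONE = 2500.0   # Hz, tone emitted for a '1' bit (assumed value)

def modulate(bits: str) -> list:
    """Map a bit string such as '1011' to one tone frequency per symbol slot."""
    return [FREQ_ONE if b == "1" else FREQ_ZERO for b in bits]

def demodulate(freqs, threshold=(FREQ_ZERO + FREQ_ONE) / 2) -> str:
    """Recover bits by thresholding the peak frequency measured per slot."""
    return "".join("1" if f >= threshold else "0" for f in freqs)
```

At the paper's effective rate of 180 bits/minute, each symbol slot would last about a third of a second; a real receiver would also need framing and error correction, which this sketch omits.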
Every Windows 10 in-place Upgrade is a SEVERE Security risk
Monday, November 28, 2016 This is a big issue and it has been there for a long time. Just a month ago I finally got verification that the Microsoft Product Groups not only know about this but that they have begun working on a fix. As I want to be known as a white hat, I had to wait for this to happen before I blog about it. There is a small but CRAZY bug in the way a "Feature Update" (previously known as an "Upgrade") is installed. The installation of a new build is done by reimaging the machine, and the image is installed by a small version of Windows called Windows PE (Preinstallation Environment). This has a troubleshooting feature that allows you to press SHIFT+F10 to get a Command Prompt. This sadly allows access to the hard disk, as Microsoft disables BitLocker during the upgrade. I demonstrate this in the following video. This would take place when you take the following update paths:
- Windows 10 RTM --> 1511 or 1607 release (November Update or Anniversary Update)
- Any build to a newer Insider Build (up to end of October 2016 at least)

The real issue here is the Elevation of Privilege that takes a non-admin to SYSTEM (the root of Windows) even on a BitLocker (Microsoft's hard disk encryption) protected machine. And of course, this doesn't require any external hardware or additional software. It's just a crazy bug, I would say. Here's the video: Why would a bad guy do this:
- An internal threat who wants to get admin access just has to wait for the next upgrade, or convince someone it's OK for him to be an Insider
- An external threat with access to a computer waits for it to start an upgrade to get into the system

I sadly can't offer solutions better than:
- Don't allow unattended upgrades
- Keep a very tight watch on the Insiders
- Stick to the LTSB version of Windows 10 for now

I am known to share how I do things myself, and I'm happy to say I have instructed my customers to stay on the Long-Term Servicing Branch for now.
At least they can wait until this is fixed and move to a more current branch then. I meet people all the time who say that LTSB is a legacy approach, but when I say I'm going to wait a year or two to get the worst bugs out of this new "just upgrade" model, this is what I meant… Posted by Sami Laiho at 6:14 PM Source: http://blog.win-fu.com/2016/11/every-windows-10-in-place-upgrade-is.html
By Fahmida Y. Rashid Senior Writer CERT to Microsoft: Keep EMET alive Windows systems with the Enhanced Mitigation Experience Toolkit properly configured are more secure than a standalone Windows 10 system, says CERT InfoWorld | Nov 29, 2016 Microsoft wants to stop supporting its Enhanced Mitigation Experience Toolkit (EMET) because all of the security features have been baked into Windows 10. A vulnerability analyst says Windows with EMET offers additional protection not available in standalone Windows 10. "Even a Windows 7 system with EMET configured protects your application more than a stock Windows 10 system," said Will Dormann, a vulnerability analyst with the Computer Emergency Response Team (CERT) at Carnegie Mellon University’s Software Engineering Institute. Originally introduced in 2009, EMET adds exploit mitigations, including address space layout randomization (ASLR) and data execution prevention (DEP), to Windows systems to make it harder for malware to trigger unpatched vulnerabilities. Since Windows 10 includes EMET’s anti-exploit protections by default, Microsoft is planning to end-of-life the free tool in July 2018. CERT’s Dormann said Microsoft should keep supporting the toolkit because Windows 10 does not provide all of the application-specific mitigations available in EMET. “Windows 10 does indeed provide some nice exploit mitigations. The problem is that the software you are running needs to be specifically compiled to take advantage of them,” Dormann said.
OS-level vs application-level defenses Dormann argues that Microsoft should keep supporting the toolkit -- currently EMET 5.51 -- because it provides both systemwide protection and application-specific mitigations that make the toolkit relevant for Windows security, even on Windows 10 systems. EMET’s systemwide protections include the aforementioned ASLR and DEP, Structured Exception Handler Overwrite Protection (SEHOP), Certificate Trust (Pinning), and Block Untrusted Fonts. EMET’s application-specific protections include DEP, SEHOP, ASLR, Null Page Allocation, Heapspray Allocations, Export Address Table Access Filtering (EAF), Export Address Table Access Filtering Plus (EAF+), Bottom-up Randomization (BottomUP ASLR), Attack Surface Reduction (ASR), Block Untrusted Fonts, and Return-Oriented Programming mitigations. Microsoft’s principal lead program manager for OS security, Jeffrey Sutherland, recently said that users should upgrade to Windows 10 since the latest operating system natively includes the security features provided by EMET. That is true to some extent, as DEP, SEHOP, ASLR, BottomupASLR, and ROP mitigation (as Control Flow Guard) are part of Windows 10, but many of the application-specific mitigations are not. What Sutherland neglected to consider was that most Windows administrators rely on EMET to apply all of the available exploit mitigations to applications. Consider that a Windows 10 system with EMET properly configured has 13 more mitigations -- the application-specific controls -- than a standalone Windows 10 system. "It is pretty clear that an application running on a stock Windows 10 system does not have the same protections as one running on a Windows 10 system with EMET properly configured," Dormann said. Application defenses still lagging Windows 10 may be the most secure Windows ever, but applications have to be compiled to utilize the exploit mitigation features to actually benefit from those enhanced security features.
For example, if the application isn’t designed to use Control Flow Guard, then the application doesn’t benefit from Return-Oriented Programming (ROP) defenses, despite the fact that Control Flow Guard is part of Windows 10. "Out of all of the applications you run in your enterprise, do you know which ones are built with Control Flow Guard support? If an application is not built to use Control Flow Guard, it doesn't matter if your underlying operating system supports it or not," Dormann said. The problem isn’t limited to third-party and custom enterprise applications, as there are older -- but still widely used -- Microsoft applications that don’t access the advanced exploit mitigations. For example, Microsoft does not compile all of Office 2010 with the /DYNAMICBASE flag to indicate compatibility with ASLR. An attacker could potentially bypass ASLR and exploit a memory corruption vulnerability by loading a malicious library into the vulnerable application’s process space. Ironically, administrators would protect the application from being targeted in this way by running EMET with application-specific mitigations. "Because we cannot rely on all software vendors to produce code that uses all the exploit mitigations available, EMET puts this control back in our hands," Dormann said. Don’t pick sides; do both Microsoft says to start migrating to Windows 10 and stop using EMET by 2018. A senior engineer at CERT, tasked by the United States Department of Homeland Security to make security recommendations of national significance, says EMET still offers better security than standalone Windows 10. What is a Windows administrator to do? The answer, according to Dormann, is to follow both recommendations: Upgrade to Windows 10 to take advantage of native exploit mitigation features, and install EMET to apply application-specific mitigations. 
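The "specifically compiled" point boils down to bits in the PE header: linker flags such as /DYNAMICBASE and /guard:cf set bits in the optional header's DllCharacteristics field, which is how one can tell whether a given binary opts into ASLR or Control Flow Guard. A quick sketch of decoding that field follows; the flag values come from the PE/COFF format, but the helper itself is ours, not part of EMET or Windows, and the sample value in the test is made up for illustration.

```python
# Decode the DllCharacteristics field of a PE optional header to list
# which build-time mitigations a binary opts into. Flag values are the
# IMAGE_DLLCHARACTERISTICS_* constants from the PE/COFF specification.
PE_MITIGATION_FLAGS = {
    0x0020: "HIGH_ENTROPY_VA",  # 64-bit ASLR with high entropy
    0x0040: "DYNAMIC_BASE",     # ASLR (/DYNAMICBASE)
    0x0100: "NX_COMPAT",        # DEP compatible
    0x0400: "NO_SEH",           # no structured exception handlers
    0x4000: "GUARD_CF",         # Control Flow Guard (/guard:cf)
}

def decode_mitigations(dll_characteristics: int) -> list:
    """Return the names of the mitigation-related flags set in the field,
    in ascending bit order."""
    return [name for bit, name in sorted(PE_MITIGATION_FLAGS.items())
            if dll_characteristics & bit]
```

Running this over the DllCharacteristics value of, say, an Office 2010 DLL would show the missing DYNAMIC_BASE bit that the article describes; a binary without GUARD_CF similarly gains nothing from the OS-side CFG support.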
EMET will continue to work even after its end-of-life date, which means administrators can still use the tool to protect unsupported software against possible zero-day vulnerabilities. Several other Microsoft applications are nearing their end-of-life dates, including Microsoft Office 2007. Administrators can continue to use EMET to protect these applications from attacks looking for zero-day vulnerabilities. “With such out-of-support applications, it is even more important to provide additional exploit protection with a product like EMET,” Dormann said. It’s possible that with Microsoft’s new Windows-as-a-service model, the remaining EMET defenses will be added to Windows 10 before the end-of-life date, at which point Windows 10 would be able to handle the application-specific protections without EMET. Until then, EMET is “still an important tool to help prevent exploitation of vulnerabilities,” Dormann said. Source: http://www.infoworld.com/article/3145565/security/cert-to-microsoft-keep-emet-alive.html#tk.rss_security
Fldbg, a Pykd script to debug FlashPlayer November 29, 2016 Exploit Development, Offensive Security A few months ago, we decided to make a new module for our Advanced Windows Exploitation class. After evaluating a few options we chose to work with an Adobe Flash 1day vulnerability originally discovered by the Google Project Zero team. Since we did not have any previous experience with Flash internals, we expected a pretty steep learning curve. We started by trying to debug the Flash plugin on Firefox while running the proof-of-concept (PoC) file and quickly realized that debugging the player can be rather time consuming without appropriate tools, for multiple reasons. First of all, the FlashPlayerPlugin.exe process is spawned by Firefox through the help of an auxiliary process named plugin-container.exe. The latter facilitates communication between the Flash plugin process and the Firefox browser process. Additionally, if protected mode is enabled (default behavior), the FlashPlayerPlugin.exe acts as a broker process and loads a second instance of the player in a sandboxed environment. This sandboxed instance is responsible for parsing and rendering of Flash content. Most of the functions responsible for rendering of Flash content, including the code exploited in our PoC, are wrapped in the NPSWF32_X_X_X_X dynamic library (DLL), which is loaded by FlashPlayerPlugin.exe. As a result, to successfully debug our process, we need to explicitly inform the debugger that we want it to debug processes spawned by Firefox. Furthermore, in order to set breakpoints on NPSWF32 functions, we need to intercept the moment the second instance of FlashPlayerPlugin is loaded (the sandboxed process), which is the one that loads our target DLL. All of these preliminary tasks could easily be automated directly from WinDbg.
However, since we realized that we would need to automate other functionality as well, we decided to start writing a pykd script that would facilitate debugging the Flash player on Firefox in a less painful way. One of the problems you encounter when working with the Flash player is the ability to dynamically analyze the ActionScript client code by setting appropriate breakpoints. The ActionScript 3 architecture runs on the ActionScript Virtual Machine 2 (AVM2) engine, and computation in the AVM2 is based on executing the code of a “method body”. As explained in Haifei Li's paper “Inside AVM”, a method can be identified from its MethodInfo class as:
- Native: a function in the Flash .text section
- “Normal”: our own code in the AS3 source converted to native code through the Just-In-Time (JIT) compiler
- Static init: executed in interpreter mode

Since there are no symbols exported for native functions and AS3 client code is dynamically translated into processor-specific instructions at runtime, tracing the execution flow can be quite challenging. We decided to take an approach similar to the one exposed in “Discover Flash Player Zero-day Attacks In The Wild From Big Data”, where we hook specific functions in the NPSWF32 library to be able to resolve native and jitted methods. Specifically, we hook BaseExecMgr::setJit and BaseExecMgr::setNative, which we dynamically identify in NPSWF32 by comparing their opcode signatures with the ones found in the compiled avmplus code. Full article: https://www.offensive-security.com/vulndev/fldbg-a-pykd-script-to-debug-flashplayer/
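Identifying an unexported function by its opcode signature, as done above for BaseExecMgr::setJit and BaseExecMgr::setNative, amounts to scanning the module's bytes for a pattern with wildcard positions. A stripped-down sketch of that scan follows; the bytes used in the test are invented, not a real avmplus signature.

```python
# Minimal wildcard signature scan: locate an unexported function in a
# module's raw bytes. 'None' entries in the signature match any byte,
# which accommodates addresses and offsets that vary between builds.
def find_signature(data: bytes, signature: list) -> int:
    """Return the offset of the first match of `signature` in `data`,
    or -1 if the pattern does not occur."""
    n = len(signature)
    for offset in range(len(data) - n + 1):
        if all(sig is None or data[offset + i] == sig
               for i, sig in enumerate(signature)):
            return offset
    return -1
```

In the actual Fldbg workflow the match offset would then be turned into an address inside the loaded NPSWF32 module and used as a breakpoint location via pykd.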
InsecureBankv2 – Vulnerable Android Application

Information security awareness training may include several demos that describe how an attacker may exploit vulnerabilities in a system to gain full control of remote devices. If you are looking to demonstrate android application security, you can use InsecureBankv2. This tool was updated during the BlackHat arsenal and is available for users online. The purpose of this project is to provide security enthusiasts and developers a way to learn about Android insecurities by testing this vulnerable application. The vulnerabilities currently included in this release are:
- Flawed broadcast receivers
- Weak authorization mechanism
- Root detection and bypass
- Local encryption issues
- Vulnerable activity components
- Insecure content provider access
- Insecure webview implementation
- Weak cryptography implementation
- Application patching
- Sensitive information in memory

You can read more and download the tool over here: https://github.com/dineshshetty/ Source: http://www.sectechno.com/insecurebankv2-vulnerable-android-application/
-
Cosa Nostra

Cosa Nostra is an open source software clustering toolkit with a focus on malware analysis. It can create phylogenetic trees of binary malware samples that are structurally similar. It was initially released during SyScan360 Shanghai (2016).

Getting started

Required 3rd party tools: in order to use Cosa Nostra you will need the source code, of course, a Python 2.7 interpreter, as well as one of the following tools in order to perform code analysis:

Pyew – written in Python, it supports analysis of PE, ELF, BIOS and boot files for x86 or x86_64.
IDA – written in C++, it supports analysing a plethora of executable types that you have probably never even heard of. Commercial product.
Radare2 – written in pure C. Same as with IDA, with support for extremely rare CPUs and binary formats. Also, it's open source!

Link: https://github.com/joxeankoret/cosa-nostra
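The core idea — grouping structurally similar binaries — can be illustrated with a toy sketch. This is my own simplification, not Cosa Nostra's actual pipeline: represent each sample by a set of per-function hashes (the real toolkit derives structural signatures from the disassembly) and greedily group samples whose sets overlap enough.

```python
# Toy structural clustering: each sample is represented as a set of
# per-function hashes. Illustrative only; not Cosa Nostra's real algorithm.

def jaccard(a, b):
    """Similarity of two hash sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(samples, threshold=0.5):
    """Greedy single-linkage grouping over {name: set_of_hashes}."""
    clusters = []
    for name, hashes in samples.items():
        for c in clusters:
            if any(jaccard(hashes, samples[m]) >= threshold for m in c):
                c.append(name)
                break
        else:
            clusters.append([name])
    return clusters

family_a1 = {"f1", "f2", "f3", "f4"}
family_a2 = {"f1", "f2", "f3", "f9"}   # variant: one function changed
unrelated = {"g1", "g2", "g3"}

samples = {"a1": family_a1, "a2": family_a2, "x": unrelated}
print(cluster(samples))   # a1 and a2 group together, x stays alone
```

A phylogenetic tree, as in the toolkit, would additionally order samples within a cluster by similarity distance, but the grouping step above captures the gist.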
-
-
NEUTRALIZING INTEL’S MANAGEMENT ENGINE
by Brian Benchoff, November 28, 2016

Five or so years ago, Intel rolled out something horrible. Intel’s Management Engine (ME) is a completely separate computing environment running on Intel chipsets that has access to everything. The ME has network access, access to the host operating system, memory, and cryptography engine. The ME can be used remotely even if the PC is powered off. If that sounds scary, it gets even worse: no one knows what the ME is doing, and we can’t even look at the code. When — not ‘if’ — the ME is finally cracked open, every computer running on a recent Intel chip will have a huge security and privacy issue. Intel’s Management Engine is the single most dangerous piece of computer hardware ever created.

Researchers are continuing work on deciphering the inner workings of the ME, and we sincerely hope this Pandora’s Box remains closed. Until then, there’s now a new way to disable Intel’s Management Engine.

Previously, the first iteration of the ME found in GM45 chipsets could be removed. This technique was possible because the ME was located on a chip separate from the northbridge. For Core i3/i5/i7 processors, the ME is integrated into the northbridge. Until now, efforts to disable an ME this closely coupled to the CPU have failed. Completely removing the ME from these systems is impossible; disabling parts of the ME, however, is not. There is one caveat: if the ME’s boot ROM (stored in an SPI flash) does not find a valid Intel signature, the PC will shut down after 30 minutes.

A few months ago, [Trammell Hudson] discovered that erasing the first page of the ME region did not shut down his Thinkpad after 30 minutes. This led [Nicola Corna] and [Federico Amedeo Izzo] to write a script that uses this exploit. Effectively, the ME still thinks it’s running, but it doesn’t actually do anything. With a BeagleBone, an SOIC-8 chip clip, and a few breakout wires, this script will run and effectively disable the ME.
This exploit has only been confirmed to work on Sandy Bridge and Ivy Bridge processors. It should work on Skylake processors; Haswell and Broadwell are untested. Separating or disabling the ME from the CPU has been a major focus of the libreboot and coreboot communities. The inability to do so has, until now, made the future prospects of truly free computing platforms grim. The ME is in everything, and CPUs without an ME are getting old. Even though we don’t have the ability to remove the ME, disabling it is the next best thing.

Source: https://hackaday.com/2016/11/28/neutralizing-intels-management-engine/
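The effect of the script, "erase the first page of the ME region", can be sketched as follows. The region offset here is a made-up placeholder; the real tool (me_cleaner-style) parses the SPI flash descriptor to locate the actual ME region bounds, and the page size is the typical 4 KB sector granularity.

```python
# Illustration of "erase the first page of the ME region" on a flash dump.
# ME_REGION_OFFSET is a made-up placeholder; real tools read the actual
# region bounds from the flash descriptor at the start of the image.

PAGE_SIZE = 0x1000           # typical SPI flash sector granularity
ME_REGION_OFFSET = 0x3000    # placeholder offset, NOT from a real descriptor

def blank_me_first_page(image: bytearray) -> bytearray:
    # Erased NOR flash reads back as 0xFF, so fill with 0xFF, not 0x00.
    image[ME_REGION_OFFSET:ME_REGION_OFFSET + PAGE_SIZE] = b"\xff" * PAGE_SIZE
    return image

dump = bytearray(b"\x00" * 0x8000)   # stand-in for a real flash dump
blank_me_first_page(dump)
```

The surrounding regions (descriptor, BIOS, GbE) are left untouched, which is why the machine still boots; the ME merely fails to find its code partition.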
-
Tuesday, November 29, 2016

Breaking the Chain
Posted by James Forshaw, Wielder of Bolt Cutters.

Much as we’d like it to be true, it seems undeniable that we’ll never fix all security bugs just by looking for them. One of the most productive ways of dealing with this fact is to implement exploit mitigations. Project Zero considers mitigation work just as important as finding vulnerabilities. Sometimes we can get our hands dirty, such as helping out Adobe and Microsoft with Flash mitigations. Sometimes we can only help indirectly via publishing our research and giving vendors an incentive to add their own mitigations.

This blog post is about an important exploit mitigation I developed for Chrome on Windows. It will detail many of the challenges I faced when trying to get this mitigation released to protect end-users of Chrome. It recently shipped to users of Chrome on Windows 10 (in M54), and ended up blocking the sandbox escape of an exploit chain being used in the wild. For information on the Chromium bug that contains the list of things we implemented in order to get this mitigation working, look here.

The Problem with Win32k

It’s possible to lock down a sandbox such as Chrome’s pretty comprehensively using Restricted Tokens. However, one of the big problems on Windows is locking down access to system calls. On Windows you have both the normal NT system calls and the Win32k system calls for accessing the GUI, which combined represent a significant attack surface. While the NT system calls do have exploitable vulnerabilities now and again (for example issue 865), it’s nothing compared to Win32k. From just one research project alone 31 issues were discovered, and this isn’t counting the many font issues Mateusz has found and the hundreds of other issues found by other researchers.

Much of Win32k’s problems come from history. In the first versions of Windows NT almost all the code responsible for the windowing system existed in user mode.
Unfortunately for 90’s-era computers this wasn’t exactly good for performance, so for NT 4 Microsoft moved a significant portion of what was user-mode code into the kernel (becoming the driver win32k.sys). This was a time before Slammer, before Blaster, before the infamous Trustworthy Computing Memo which focused Microsoft on thinking about security first. Perhaps some lone voice spoke for security that day, but was overwhelmed by performance considerations. We’ll never know for sure; however, what it did do was make Win32k a large fragile mess which seems to have persisted to this day. And the attack surface this large fragile mess exposed could not be removed from any sandboxed process.

That all changed with the release of Windows 8. Microsoft introduced the System Call Disable Policy, which allows a developer to completely block access to the Win32k system call table. While it doesn’t do anything for normal system calls, the fact that you could eliminate over a thousand Win32k system calls, many of which have had serious security issues, would be a crucial reduction in the attack surface. However, no application in a default Windows installation used this policy (it’s said to have been introduced for non-GUI applications such as on Azure), and using it for something as complex as Chrome wasn’t going to be easy. The process of shipping Win32k lockdown required a number of architectural changes to be made to Chrome. This included replacing the GDI-based font code with Microsoft’s DirectWrite library. After around two years of effort, Win32k lockdown was shipping by default.

The Problems with Flash in Chrome

Chrome uses a multi-process model, in which web page content is parsed inside Renderer processes, which are covered by the Win32k lockdown policy for the Chrome sandbox. Plugins such as Flash and PDFium load into a different type of process, a PPAPI process, and due to circumstance these could not have the lockdown policy enabled.
This would seem a pretty large weak point. Flash has not had the best security track record (relevant), making the likelihood of Flash being an RCE vector very high. Combine that with the relative ease of finding and exploiting Win32k vulnerabilities and you’ve got a perfect storm. It would seem reasonable to assume that real attackers were finding Win32k vulnerabilities and using them to break out of restrictive sandboxes, including Chrome’s, with Flash as the RCE vector. The question was whether that was true.

The first real confirmation came from the Hacking Team breach, which occurred in July 2015. In the dumped files was an unfixed Chrome exploit which used Flash as the RCE vector and a Win32k exploit to escape the sandbox. While both vulnerabilities were quickly fixed, I came upon the idea that perhaps I could spend some time to implement the lockdown policy for PPAPI and eliminate this entire attack chain.

Analysing the Problem

The first thing I needed to do was to determine which Win32k APIs were used by a plugin such as Flash. There are actually three main system DLLs that can be called by an application which end up issuing system calls to Win32k: USER32, GDI32 and IMM32. Each has slightly different responsibilities. The aim would be to enumerate all calls to these DLLs and replace them with APIs which didn’t rely on Win32k. Still, it wasn’t just Flash that might call Win32k APIs, but also the Pepper APIs implemented in Chrome. I decided to take two approaches to finding out what code I needed to remove: import inspection and dynamic analysis.

Import inspection is fairly simple: I just dumped any imports for the plugins, such as the Pepper Flash plugin DLL, and identified anything which came from the core windowing DLLs. I then ran Flash and PDFium with a number of different files to try and exercise the code paths which used Win32k system calls.
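The import-inspection triage is simple enough to sketch: given a dumped import table, flag anything resolved from the three windowing DLLs. (Getting the real import list would use a PE parser such as pefile; the sample list below is invented for illustration.)

```python
# Triage a dumped import table for Win32k-backed APIs: anything imported
# from USER32 / GDI32 / IMM32 ultimately reaches win32k.sys.
# The sample import list is invented for illustration.

WIN32K_DLLS = {"user32.dll", "gdi32.dll", "imm32.dll"}

def win32k_imports(imports):
    """imports: iterable of (dll_name, function_name) pairs."""
    return sorted((d, f) for d, f in imports
                  if d.lower() in WIN32K_DLLS)

sample_imports = [
    ("KERNEL32.dll", "CreateFileW"),
    ("GDI32.dll",    "EnumFontFamiliesExW"),
    ("USER32.dll",   "GetDesktopWindow"),
    ("ole32.dll",    "CoCreateInstance"),
]

for dll, func in win32k_imports(sample_imports):
    print(f"{dll}!{func}")
```

As the article notes, this static pass misses anything resolved at runtime via LoadLibrary/GetProcAddress, which is exactly why the dynamic WinDBG pass described next was also needed.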
I attached WinDBG to the process and set breakpoints on all functions starting with NtUser and NtGdi which I could find. These are the system call stubs used to call Win32k from the various DLLs. This allowed me to catch functions which were in the PPAPI layer or not directly imported. The Win32k-system-call-using code in Flash and PDFium was almost entirely there to enumerate font information, either directly or via the PPAPI. There was some OpenSSL code in Flash which uses the desktop window as a source of entropy, but as this could never work in the Chrome sandbox it’s clear that this was vestigial (or Flash’s SSL random number generator is broken; choose one or the other).

Getting rid of the font enumeration code used through PPAPI was easy. Chrome already supported replacing GDI-based font rendering and enumeration with DirectWrite, which does all the rendering in user mode. Most of the actual rendering in Flash and PDFium is done using their own TrueType font implementations (such as FreeType). Enabling DirectWrite for PPAPI processes was implemented in a number of stages, with the final enabling of DirectWrite in this commit. Now I just needed to get rid of the GDI font code in Flash and PDFium itself.

For PDFium I was able to repurpose existing font code used for Linux and macOS. After much testing to ensure the font rendering didn’t regress from GDI, I was able to put the patch into PDFium. Now the only problem was Flash. As a prototype I implemented shims for all the font APIs used by Flash and emulated them using DirectWrite. For a better, more robust solution I needed to get changes made to Flash. I don’t have access to the Flash source code; however, Google does have a good working relationship with Adobe and I used this to get the necessary changes implemented. It turned out that there was a Pepper API which did all that was needed to replace the GDI font handling, pp::flash::FontFile.
Unfortunately that was only implemented on Linux; however, I was able to put together a proof-of-concept Windows implementation of pp::flash::FontFile, and through Xing Zhang of Adobe we got a full implementation in Chrome and Flash.

Doomed Rights Management

From this point I could enable Win32k lockdown for plugins, and after much testing everything seemed to be working, until I tried to test some DRM-protected video. While encrypted video worked, any Flash video file which required output protection (such as High-bandwidth Digital Content Protection (HDCP)) would not. HDCP works by encrypting the video data between the graphics output and the display, and is designed to prevent people capturing a digital video. Still, this presents a problem, as video, along with games, is one of the only residual uses of Flash. In testing, this also affected the Widevine plugin that implements the Encrypted Media Extensions for Chrome. Widevine uses PPAPI under the hood; not fixing this issue would break all HD content playback.

Enabling HDCP on Windows requires the use of a small number of Win32k APIs. I’d not discovered this during my initial analysis because a) I didn’t run any protected content through the Flash player and b) all functions were imported at runtime using LoadLibrary and GetProcAddress only when needed. The function Flash was accessing was OPMGetVideoOutputsFromHMONITOR, which is exposed by dxva2.dll. This function in turn maps down to multiple Win32k calls such as NtGdiCreateOPMProtectedOutputs. The ideal way of fixing this would be to implement a new API in Chrome which exposed enabling HDCP, then get Adobe and Widevine to use that implementation. It turns out that the Adobe DRM and Widevine teams are under greater constraints than normal development teams. After discussion with my original contact at Adobe, they didn’t have access to the DRM code for Flash.
I was able to have meetings with Widevine (they’re part of Google) and the Adobe DRM team, but in the end I decided to go it alone and implement redirection of these APIs as part of the sandbox code. Fortunately this doesn’t compromise the security guarantees of the original API, because of the way Microsoft designed it. To prevent a MitM attack against the API calls (i.e. you hook the API and return the answer the caller expects, such as that HDCP is enabled), the call is secured between the caller and the graphics driver using an X.509 certificate chain returned during initialization. Once the application, such as Flash, verifies this certificate chain is valid, it sends back a session key to the graphics driver, encrypted using the end certificate’s public key. The driver then decrypts the session key, and all communication from then on is encrypted and hashed using variants of this key.

Of course this means that the driver must contain the private key corresponding to the public key in the end certificate, though at least in the case of my workstation that shouldn’t be a major issue, as the end certificate has a special Key Usage OID (1.3.6.1.4.1.311.10.5.8) and the root “Microsoft Digital Media Authority” certificate isn’t in the trusted certificate store, so the chain wouldn’t be trusted anyway. Users of the API can embed the root certificate directly in their code and verify its trust before continuing. As the APIs assume that the call has already been brokered (at minimum via Win32k.sys), adding another broker, in this case one which brokers from the PPAPI process to another process in Chrome without the Win32k lockdown policy in place, doesn’t affect the security guarantees of the API. Of course I made best efforts to verify the data being brokered to limit the potential attack surface, though I’ll admit something about sending binary blobs to a graphics driver gives me the chills.
This solved the issue with enabling output protection for DRM’ed content, and finally the mitigation could be enabled by default. The commit for this code can be found here.

Avoiding #yolosec

Implementation-wise it turned out to be not too complex once I’d found all the different possible places that Win32k functions could be called. Much of the groundwork was already in place with the original Win32k Renderer lockdown, the implementation of DirectWrite and the way the Pepper APIs were structured. So ship it already! Well, not so fast; this is where reality kicks in. Chrome on Windows is relied upon by millions upon millions of users worldwide, and Win32k lockdown for PPAPI would affect not only Flash, but PDFium (which is used in things like the Print Preview window) and Widevine. It’s imperative that this code is tested in the real world, but in such a way that the impact on stability and functionality can be measured.

Chrome supports something called Variations, which allows developers to selectively enable experimental features remotely and deploy them to a randomly selected group of users who’ve opted into returning usage and crash statistics to Google. For example, you can do a simple A/B test with one proportion of the Chrome users left as a control and another with Win32k lockdown enabled. Statistical analysis can be performed on the results of that test based on various metrics, such as number of crashes, hanging processes and startup performance, to detect anomalous behaviour which is due to the experimental code. To avoid impacting users of Stable, this is typically only done on Beta, Dev and Canary users. Having these early-release versions of Chrome is really important for ensuring features work as expected, and we appreciate anyone who takes the time to run them. In the end this process of testing took longer than the implementation. Issues were discovered and fixed, stability measured, until finally we were ready to ship.
Unfortunately in that process there was a noticeable stability issue on Windows 8.1 and below which we couldn’t track down. The stability issues are likely down to interactions with third-party code (such as AV) which injects its own code into Chrome processes. If this injected code relies on calling Win32k APIs for anything, there’s a high chance of it causing a crash. This stability issue led to the hard decision to initially only allow the PPAPI Win32k lockdown to run on Windows 10 (where, if anything, stability improved). I hope to revisit this decision in the future. As third-party code is likely to be updated to support the now-shipping Windows 10 lockdown, stability on Windows 8/8.1 might improve.

As of M54 of Chrome, Win32k lockdown is enabled by default for users on Windows 10 (with an option to disable it remotely in the unlikely event a problem surfaces). As of M56 (set for release approximately the end of January 2017) it can only be disabled with a command line switch that disables all Win32k lockdown, including Renderer processes.

Wrap Up

From the first patch submitted in September 2015 to the final patch in June, it took almost 10 months of effort to come up with a shipping mitigation. The fact that it’s had its first public success (and who knows how many non-public ones) shows that it was worth implementing this mitigation. In the latest version of Windows 10, the Anniversary Update, Microsoft has implemented a Win32k filter which makes it easier to reduce the attack surface without completely disabling all the system calls, which might have sped up development. Microsoft is also taking proactive effort to improve the Win32k code base. The Win32k filter is already used in Edge; however, at the moment only Microsoft can use it, as the executable signature is checked before allowing the filter to be enabled. Also, it’s not clear that the filter even completely blocked the vulnerability in the recent in-the-wild exploit chain.
Microsoft would only state that it would “stop all observed in-the-wild instances of this exploit”. Nuking the Win32k system calls from orbit is the only way to be sure that an attacker can’t find a bug which passes through the filter. Hopefully this blog post demonstrates the time and effort required to implement what seems on the face of it a fairly simple and clear mitigation policy for an application as complex as Chrome. We’ll continue to try and use the operating-system-provided sandboxing mechanisms to make all users of Chrome more secure.

Thanks

While I took on a large proportion of the technical work, it’s clear this mitigation could not have shipped without the help of others in Chrome and outside. I’d like to especially mention the following:

Anantanarayanan Iyengar, for landing the original Win32k mitigations for the Renderer processes, on which all this is based.
Will Harris, for dealing with variations and crash reporting to ensure everything was stable.
The Adobe Security Team and Xing Zhang, for helping to remove GDI font code from Flash.
The Widevine team, for advice on DRM issues.

Source: https://googleprojectzero.blogspot.ro/2016/11/breaking-chain.html
-
In a month, two at most, you can watch all the videos. 140 USD (600 RON) doesn't seem like much to me for being able to learn a ton of things that can later easily bring you 1-2K EUR per month.
-
They're not expensive; you could try buying them.
-
The coolest one.
-
Abusing of Protocols to Load Local Files, bypass the HTML5 Sandbox, Open Popups and more

On October 25th, the fellows at @MSEdgeDev tweeted a link that called my attention, because when I clicked on it (being on Chrome) the Windows Store app opened. It might not surprise you, but it surprised me! As far as I remembered, Chrome had this healthy habit of asking the user before opening external programs, but in this case it opened directly, without warnings. This was different and caught my attention because I never agreed to open the Windows Store in Chrome. There are some extensions and protocols that will open automatically, but I’ve never approved the Windows Store.

The shortened Twitter link redirected to https://aka.ms/extensions-storecollection which (again) redirected to the interesting: ms-windows-store://collection/?CollectionId=edgeExtensions. It was a protocol that I was not aware of, so I immediately tried to find it in the place where most protocol associations reside: the registry. A search for “ms-windows-store“ immediately returned our string inside the PackageId of what seemed to be the Windows Store app. Noting that we were also in a key called “Windows.Protocol”, I scrolled up and down a bit to see if there were other apps inside, and found that tons of them (including MS Edge) had their own protocols registered. This is nice because it opens a new attack surface straight from the browser. But let’s press F3 to see if we find other matches.

It seems that the ms-windows-store: protocol is also accepting search arguments, so we can try opening our custom search straight from Google Chrome. In fact, the Windows Store app seems to be rendering HTML with the Edge engine, which is also interesting because we might try to XSS it or, if the app is native, send big chunks of data and see what happens. But we won’t be doing that now; let’s go back to regedit and press F3 to see what else we can find.
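Instead of pressing F3 by hand, the same hunt can be scripted offline against a registry export (`reg export HKCR protocols.reg`): custom protocol handlers conventionally carry a "URL Protocol" marker value under their HKCR key. The registry excerpt in the snippet below is invented for illustration.

```python
# Offline version of the regedit F3 hunt: scan a .reg export for custom
# URL protocol handlers, which conventionally have a "URL Protocol" value
# under their HKEY_CLASSES_ROOT key. The excerpt below is invented.

import re

REG_EXCERPT = r'''
[HKEY_CLASSES_ROOT\calculator]
@="URL:calculator"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\ms-windows-store]
@="URL:Windows Store"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\.txt]
@="txtfile"
'''

def find_protocols(reg_text):
    protocols = []
    current = None
    for line in reg_text.splitlines():
        m = re.match(r'\[HKEY_CLASSES_ROOT\\([^\\\]]+)\]', line)
        if m:
            current = m.group(1)
        elif current and line.startswith('"URL Protocol"'):
            protocols.append(current)
    return protocols

print(find_protocols(REG_EXCERPT))   # ['calculator', 'ms-windows-store']
```

Each name this returns is a candidate scheme to try from the browser with arguments appended, exactly the enumeration the article suggests. (Note a real export is UTF-16 text, so decode accordingly before parsing.)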
This one is interesting also, because it gives us clues on how to quickly find more protocols if they are prepended with the string “URL:”. Let’s reset our search to “URL:” and see what we get. Pressing the [Home] key takes us back to the top of the registry, and a search for “URL:” immediately returns the first match, “URL:about:blank“, confirming that we are not crazy. Press F3 again and we find the bingnews: protocol, but this time Chrome requests confirmation to open it. No problem, let’s try it on Edge to see what happens. It opens! The next match in the registry is the calculator: protocol. Will this work?

Wow! I’m sure this will piss off exploit writers. What program will they pop now? Both calc and notepad can be opened without memory corruptions, and cmd.exe is deprecated in favor of PowerShell now. Microsoft took the fun away from you guys.

This could be a good moment to enumerate all loadable protocols and see which apps accept arguments, so we can try to inject code into them (binary or pure JavaScript, depending on how the app was coded and how it treats the arguments). There is a lot of interesting stuff here to play with, and if we keep searching for protocols we will find tons of apps that open (including Candy Crush, which I didn’t know was on my PC). By pressing F3 a few times I learned a lot. For example, there’s a microsoft-edge: protocol that loads URLs in a new tab. It doesn’t seem to be important, until we remember the limits that HTML pages should have. Will the popup blocker prevent us from opening 20 microsoft-edge:http://www.google.com tabs?

[ PoC – Open popUps on MS Edge ]

What about the HTML5 Sandbox? If you are not familiar with it, it’s just a way to impose restrictions on a webpage using the sandbox iframe attribute or the sandbox HTTP header.
For example, if we want to render content inside an iframe and make sure it does not run JavaScript (not even open new tabs), we can just use this tag:

<iframe src=”sandboxed.html” sandbox></iframe>

And the rendered page will be completely restricted. Essentially it can only render HTML/CSS, with no JavaScript or access to things like cookies. In fact, if we use the sandbox granularity and allow at least new windows/tabs, all of them should inherit the sandboxed attributes, and links opened from that iframe will still be sandboxed. However, using the microsoft-edge: protocol bypasses this completely.

[ PoC – Bypass HTML5 Sandbox on MS Edge ]

Nice to see that the microsoft-edge protocol allows us to bypass different restrictions. I haven’t gone further than that, but you can try! This is a journey of discovery; remember that a single tweet fired my motivation to play a bit and ended up giving us stuff that truly deserves more research. I continued pressing F3 in regedit and found the read: protocol, which called my attention when I read its (JavaScript) source code. It had the potential for a UXSS, but Edge kept crashing again and again while trying. It crashed too much. For example, setting the location of an iframe to “read:” was enough to crash the browser, including all tabs. Want to see it?

[ PoC – Crash on MS Edge ]

OK, I was curious about what was happening, so I appended a few bytes to the read protocol and fired up WinDbg to see if the crash was related to invalid data. Something quick and simple, no fuzzing or anything special:

read:xncbmx,qwieiwqeiu;asjdiw!@#$%^&*

Oh yes, I really typed something like that. The only way that I found not to crash the read protocol was to load something coming from http. Everything else crashed the browser. So let’s attach WinDbg to Edge. A quick and dirty method that I use is to simply kill the Edge process and children, reopen it and attach to the latest process that uses EdgeHtml.dll.
Of course there are easier ways but... yeah, I’m just like that. Open a command line and:

taskkill /f /t /im MicrosoftEdge.exe
** Open Edge and load the webpage but make sure it doesn't crash yet **
tasklist /m EdgeHtml.dll

Enough. Now load WinDbg and attach to the latest listed Edge process that uses EdgeHtml. And remember to use symbols in WinDbg. Once attached, just press F5 or type g [ENTER] inside WinDbg so Edge keeps running. This is how my screen looks right now. On the left I have the page that I use to test everything, and on the right, WinDbg attached to that particular Edge process.

We will use a window.open to play with the read: protocol instead of an iframe because it’s more comfortable. Think about it: there are protocols/URLs that might end up changing the top location regardless of how framed they are. If we start playing with a protocol inside an iframe, there are chances that our own page (the top) will be unloaded, losing the code that we’ve just typed. My particular test page saves everything that I type, so if the browser crashes it’s highly likely that I will be able to repro my manual work. But even with everything saved, when I’m playing with code that could change the URL of my test page, I open it in a new window. Just a habit.

On the left screen we can type and execute JavaScript code quickly; on the right we have WinDbg, prepared to reveal to us what’s happening behind this crash. Go ahead, let’s run the JavaScript code and... Bang! WinDbg breaks.

ModLoad: ce960000 ce996000 C:\Windows\SYSTEM32\XmlLite.dll
ModLoad: c4110000 c4161000 C:\Windows\System32\OneCoreCommonProxyStub.dll
ModLoad: d6a20000 d6ab8000 C:\Windows\SYSTEM32\sxs.dll
(2c90.33f0): Security check failure or stack buffer overrun - code c0000409 (!!! second chance !!!)
EdgeContent!wil::details::ReportFailure+0x120:
84347de0 cd29 int 29h

OK, it seems that Edge knew something went wrong, because it’s in a function called “ReportFailure”, right?
Come on, I know we can immediately assume that if Edge is here, it failed somewhat “gracefully”. So let’s inspect the stack trace to see where we are coming from. Type “k” in WinDbg.

0:030> k
 # Child-SP RetAddr  Call Site
00 af248b30 88087f80 EdgeContent!wil::details::ReportFailure+0x120
01 af24a070 880659a5 EdgeContent!wil::details::ReportFailure_Hr+0x44
02 af24a0d0 8810695c EdgeContent!wil::details::in1diag3::FailFast_Hr+0x29
03 af24a120 88101bcb EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x7c
04 af24a170 880da669 EdgeContent!CReadingModeViewer::Load+0x6b
05 af24a1b0 880da5ab EdgeContent!CBrowserTab::_ReadingModeViewerLoadViaPersistMoniker+0x85
06 af24a200 880da882 EdgeContent!CBrowserTab::_ReadingModeViewerLoad+0x3f
07 af24a240 880da278 EdgeContent!CBrowserTab::_ShowReadingModeViewer+0xb2
08 af24a280 88079a9e EdgeContent!CBrowserTab::_EnterReadingMode+0x224
09 af24a320 d9e4b1d9 EdgeContent!BrowserTelemetry::Instance::2::dynamic
0a af24a3c0 8810053e shlwapi!IUnknown_Exec+0x79
0b af24a440 880fee33 EdgeContent!CReadingModeController::_NavigateToUrl+0x52
0c af24a4a0 88074f98 EdgeContent!CReadingModeController::Open+0x1d3
0d af24a500 b07df508 EdgeContent!BrowserTelemetry::Instance'::2::dynamic
0e af24a5d0 b0768c47 edgehtml!FireEvent_BeforeNavigate+0x118

Check out the first two lines; both are called blah blah ReportFailure. Don’t you think Edge is here because something went wrong? Of course! Let’s keep going down until we find a function name that makes sense. The next one is called blah FailFast, which also smells like a function Edge called knowing that something went wrong. But we want to find the code that made Edge unhappy, so continue reading down. The next one is blah _LoadRMHTML. This looks much better to me, don’t you agree? In fact, its name makes me think it loads HTML. It would be interesting to break before the crash, so why not set a breakpoint a few lines above _LoadRMHTML? We were watching the stack trace; let’s look at the code now.
Let’s first unassemble backwards from that point (function + offset). It’s easy, using the “ub” command in WinDbg.

0:030> ub EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x7c
EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x5a:
8810693a call qword ptr [EdgeContent!_imp_SHCreateStreamOnFileEx (882562a8)]
88106940 test eax,eax
88106942 jns EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x7d (8810695d)
88106944 mov rcx,qword ptr [rbp+18h]
88106948 lea r8,[EdgeContent!`string (88261320)]
8810694f mov r9d,eax
88106952 mov edx,1Fh
88106957 call EdgeContent!wil::details::in1diag3::FailFast_Hr (8806597c)

We will focus on the names only and ignore everything else, OK? Just like when we were trying to find a variation for the mimeType bug, we are going to speculate here, and if we fail we will of course keep going deeper. But sometimes a quick look in the debugger can reveal many things. We know that Edge will crash if it arrives at the last instruction of this snippet (address 88106957, FailFast_Hr). Our intention is to see why we ended up at that location, who the hell is sending us there. But let’s start from the beginning: the first instruction in this snippet seems to be calling a function with a complicated name which apparently reveals tons of stuff.

EdgeContent!_imp_SHCreateStreamOnFileEx

The first part before the ! is the module (exe, dll, etc.) where this instruction is located. In this case it is EdgeContent, and we don’t even care about its extension; it’s just code. After the ! comes a funny name, _imp_, and then SHCreateStreamOnFileEx, which seems to be a function name that “creates a stream on a file”. Do you agree? In fact, the _imp_ part makes me think that maybe this is an imported function loaded from a different binary. Let’s google that name to see if we find something interesting.

That’s pretty nice. The first result came with the exact name that we searched for. Let’s click on it. OK.
The first parameter that this function receives is “A pointer to a null-terminated string that specifies the file name“. Interesting! If this snippet of code is being executed, then it should be receiving a pointer to a file name as the first argument. But how can we see the first parameter? It’s easy: we are working on Win x64, and the calling convention says that the first 4 parameters go in RCX, RDX, R8, R9 (speaking about integers/pointers). This means that the first parameter (the pointer to a file name) will be loaded into the register RCX. With this information, we can set a breakpoint before Edge calls that function and see what RCX holds at that precise moment.

But let’s restart, because it’s a bit late at this point: Edge already crashed. Please re-do what’s described above (kill Edge, open it, load the page, find the process and attach). This time, instead of running (F5) the process, we will set a breakpoint. The exact address of the instruction we don’t know, but WinDbg revealed the exact offset when we executed our “ub” command.

0:030> ub EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x7c
EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x5a:
8810693a ff1568f91400 call qword ptr [EdgeContent!_imp_SHCreateStreamOnFileEx (882562a8)]
88106940 85c0 test eax,eax

So the breakpoint should go at EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x5a. We type “bp” and the function name + offset [ENTER]. Then “g” to let Edge run.

0:029> bp EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x5a
0:029> g

Great. This is exciting. We want to see the file name (or string) located in the register RCX right before SHCreateStreamOnFileEx executes. Let’s run the code and feel the break. Well, I feel it baby =) breakpoints connect me to my childhood. Let’s run the JavaScript code and bang! WinDbg breaks right there.
Breakpoint 0 hit
EdgeContent!CReadingModeViewerEdge::_LoadRMHTML+0x5a:
8820693a ff1568f91400  call  qword ptr [EdgeContent!_imp_SHCreateStreamOnFileEx (883562a8)]

That’s great; now we can inspect the content RCX is pointing to. To do this we will use the “d” command (display memory), @ and the register name, just like this:

0:030> d @rcx
02fac908 71 00 77 00 69 00 65 00-69 00 77 00 71 00 65 00 q.w.i.e.i.w.q.e.
02fac918 69 00 75 00 3b 00 61 00-73 00 6a 00 64 00 69 00 i.u.;.a.s.j.d.i.
02fac928 77 00 21 00 40 00 23 00-24 00 25 00 5e 00 26 00 w.!.@.#.$.%.^.&.
02fac938 2a 00 00 00 00 00 08 00-60 9e f8 02 db 01 00 00 *.......`.......
02fac948 10 a9 70 02 db 01 00 00-01 00 00 00 00 00 00 00 ..p.............
02fac958 05 00 00 00 00 00 00 00-00 00 00 00 19 6c 01 00 .............l..
02fac968 44 14 00 37 62 de 77 46-9d 68 27 f3 e0 92 00 00 D..7b.wF.h'.....
02fac978 00 00 00 00 00 00 08 00-00 00 00 00 00 00 00 00 ................

This isn’t easy on my eyes, but on the right of the first lines I see something which looks like a Unicode string. Let’s display it again as Unicode (du).

0:030> du @rcx
02fac908 "qwieiwqeiu;asjdiw!@#$%^&*"

Nice! The string rings a bell! Look at the JavaScript code that we’ve just run. It seems that the argument passed to this function is whatever we type after the comma. With this knowledge, plus knowing that it is expecting a file, we can try a full path to something on my drive. Because Edge runs inside an AppContainer, we will try a file that’s accessible, for example something from the windows/system32 directory.

read:,c:\windows\system32\drivers\etc\hosts

We are also removing the garbage before the comma, which seems unrelated (although it deserves more research!). Let’s quickly detach, restart Edge, and run our new code:

url = "read:,c:\\windows\\system32\\drivers\\etc\\hosts";
w = window.open(url, "", "width=300,height=300");

And as expected, the local file loads in the new window without crashes.
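As a side note, here is why the “d” dump above rendered as “q.w.i.e…” while “du” printed a readable string: the buffer is UTF-16LE, so every ASCII character is followed by a zero byte. A quick Python sketch reproducing the first bytes of the dump:

```python
# The raw bytes WinDbg showed with "d @rcx" are UTF-16LE: each ASCII
# character carries a trailing 00 byte, which "d" renders as dots.
raw = bytes.fromhex("7100770069006500690077007100650069007500")
print(raw.decode("utf-16-le"))  # qwieiwqeiu
```

This is exactly the prefix of the string that “du” decoded for us.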
[ PoC – Open hosts on MS Edge ]

Fellow bug hunter, I will stop here, but I believe all these things deserve a bit more research, depending on what’s fun for you:

A) Enumerate all loadable protocols and attack those applications via query-strings.
B) Play with microsoft-edge:, which bypasses the HTML5 sandbox, popup blocker and who knows what else.
C) Keep going with the read: protocol. We found a way to stop it from crashing, but remember there is a function, SHCreateStreamOnFileEx, expecting things that we can influence! It’s worth trying more. Also, we can continue working on the arguments to see if commas are used to split arguments, etc.

If debugging binaries is boring for you, then you can still try to XSS the reading view. I hope you find tons of vulnerabilities! If you have questions, ping me at @magicmac2000. Have a nice day!

Reported to MSRC on 2016-10-26

Source: https://www.brokenbrowser.com/abusing-of-protocols/
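On point C, the splitting behaviour we observed can be modelled in a few lines. This is purely a hypothesis from the debugger session (the function name and the rule are my guesses, not Edge’s actual code): everything after the first comma in a "read:" URL seems to reach SHCreateStreamOnFileEx as a path.

```python
# Hypothetical model of the observed parsing: the part of a "read:"
# URL before the first comma appears to be ignored, and the rest is
# treated as a file path. Not Edge's actual implementation.
def read_url_path(url):
    assert url.startswith("read:")
    rest = url[len("read:"):]
    return rest.split(",", 1)[1] if "," in rest else rest

print(read_url_path("read:,c:\\windows\\system32\\drivers\\etc\\hosts"))
# c:\windows\system32\drivers\etc\hosts
```

Fuzzing around this model (more commas, empty segments, very long prefixes) is a cheap way to probe whether Edge splits arguments the same way.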
-
-
-
Cryptology ePrint Archive: Report 2016/1114

Full Disk Encryption: Bridging Theory and Practice
Louiza Khati and Nicky Mouha and Damien Vergnaud

Abstract: We revisit the problem of Full Disk Encryption (FDE), which refers to the encryption of each sector of a disk volume. In the context of FDE, it is assumed that there is no space to store additional data, such as an IV (Initialization Vector) or a MAC (Message Authentication Code) value. We formally define the security notions in this model against chosen-plaintext and chosen-ciphertext attacks. Then, we classify various FDE modes of operation according to their security in this setting, in the presence of various restrictions on the queries of the adversary. We will find that our approach leads to new insights for both theory and practice. Moreover, we introduce the notion of a diversifier, which does not require additional storage, but allows the plaintext of a particular sector to be encrypted to different ciphertexts. We show how a 2-bit diversifier can be implemented in the EagleTree simulator for solid state drives (SSDs), while decreasing the total number of Input/Output Operations Per Second (IOPS) by only 4%.

Category / Keywords: secret-key cryptography / disk encryption theory, full disk encryption, FDE, XTS, IEEE P1619, unique first block, diversifier, provable security
Original Publication (with major differences): CT-RSA 2017 - RSA Conference Cryptographers' Track
Date: received 25 Nov 2016
Contact author: nicky at mouha be
Available format(s): PDF | BibTeX Citation
Version: 20161125:195305 (All versions of this report)
Short URL: ia.cr/2016/1114

Source: https://eprint.iacr.org/2016/1114
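To make the diversifier idea concrete, here is a toy stdlib-only sketch. It is NOT a secure construction and NOT the paper's actual scheme; it only illustrates the property the abstract describes: mixing a small per-write diversifier into the per-sector state lets the same plaintext, written to the same sector, encrypt to different ciphertexts without storing an IV.

```python
import hashlib

# Toy illustration only (insecure): derive a keystream from the key,
# the sector number, and a 2-bit diversifier, then XOR it in.
def keystream(key, sector, diversifier, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key
            + sector.to_bytes(8, "little")
            + bytes([diversifier])
            + counter.to_bytes(4, "little")
        ).digest()
        counter += 1
    return out[:length]

def encrypt_sector(key, sector, diversifier, plaintext):
    ks = keystream(key, sector, diversifier, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

decrypt_sector = encrypt_sector  # XOR stream: same operation both ways

key = b"demo-key"
pt = b"A" * 16
c0 = encrypt_sector(key, 7, 0b00, pt)  # same sector, diversifier 0
c1 = encrypt_sector(key, 7, 0b01, pt)  # same sector, diversifier 1
print(c0 != c1)                                # different ciphertexts
print(decrypt_sector(key, 7, 0b00, c0) == pt)  # round-trips correctly
```

The point is only the interface: the diversifier consumes no per-sector storage on disk (in the paper it rides on SSD write placement), yet it breaks the determinism of plain sector-tweaked encryption.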
-
Published on Nov 25, 2016

This is the first part of a full analysis of a packed Fleercivet sample. We start with basic static analysis, detection-name interpretation, and unpacking.
-
-
-
F-Scrack

F-Scrack is a single-file bruteforcer supporting multiple protocols, with no extra libraries required beyond the Python standard library, which makes it ideal for a quick test.

Currently supported protocols: FTP, MySQL, MSSQL, MongoDB, Redis, Telnet, Elasticsearch, PostgreSQL.

Compatible with OSX, Linux, Windows; Python 2.6+.

Source: https://github.com/ysrc/F-Scrack
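The pattern behind a single-file, multi-protocol bruteforcer can be sketched in a few lines. The names and structure below are mine, not F-Scrack's actual code: one probe function per protocol, a dispatch table, and a loop over candidate credentials.

```python
# Hypothetical sketch of the multi-protocol bruteforce pattern
# (not F-Scrack's real code): protocol name -> probe function.
def check_demo(host, port, user, password):
    # Stand-in for a real network probe (FTP login, MySQL auth, ...).
    return (user, password) == ("admin", "admin123")

CHECKERS = {"demo": check_demo}  # a real tool maps "ftp", "mysql", ...

def brute(protocol, host, port, wordlist):
    check = CHECKERS[protocol]
    for user, password in wordlist:
        if check(host, port, user, password):
            return user, password  # first valid pair wins
    return None

creds = [("root", "toor"), ("admin", "admin123")]
print(brute("demo", "192.0.2.1", 21, creds))  # ('admin', 'admin123')
```

Keeping each protocol behind the same check(host, port, user, password) signature is what lets a tool like this stay in one file and add protocols cheaply.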
-
The Linux Kernel Hidden Inside Windows 10

Published on Nov 22, 2016

by Alex Ionescu

Initially known as "Project Astoria" and delivered in beta builds of Windows 10 Threshold 2 for Mobile, Microsoft implemented a full-blown Linux 3.4 kernel in the core of the Windows operating system, including full support for VFS, BSD sockets, ptrace, and a bona fide ELF loader. After a short cancellation, it's back and improved in Windows 10 Anniversary Update ("Redstone"), under the guise of Bash shell interoperability. This new kernel and related components can run 100% native, unmodified Linux binaries, meaning that NT can now execute Linux system calls, schedule thread groups, fork processes, and access the VDSO! As it's implemented using a full-blown, built-in, loaded-by-default, Ring 0 driver with kernel privileges, this is not a mere wrapper library or user-mode system call converter like the POSIX subsystem of yore. The very thought of an alternate virtual file system layer, networking stack, memory and process management logic, and complicated ELF parser and loader in the kernel should tantalize exploit writers - why choose from the attack surface of a single kernel, when there's now two? But it's not just about the attack surface - what effects does this have on security software? Do these frankenLinux processes show up in Procmon or other security drivers? Do they have PEBs and TEBs? Is there even an EPROCESS? And can a Windows machine, and the kernel, now be attacked by Linux/Android malware? How are Linux system calls implemented and intercepted? As usual, we'll take a look at the internals of this entirely new paradigm shift in the Windows OS, and touch the boundaries of the undocumented and unsupported to discover interesting design flaws and abusable assumptions, which lead to a wealth of new security challenges on Windows 10 Anniversary Update ("Redstone") machines.
-
-
-
Demystifying the Secure Enclave Processor

Published on Nov 22, 2016

by Tarjei Mandt & Mathew Solnik & David Wang

The secure enclave processor (SEP) was introduced by Apple as part of the A7 SOC with the release of the iPhone 5S, most notably to support their fingerprint technology, Touch ID. SEP is designed as a security circuit configured to perform secure services for the rest of the SOC, with no direct access from the main processor. In fact, the secure enclave processor runs its own fully functional operating system - dubbed SEPOS - with its own kernel, drivers, services, and applications. This isolated hardware design prevents an attacker from easily recovering sensitive data (such as fingerprint information and cryptographic keys) from an otherwise fully compromised device. Although almost three years have passed since its inception, little is still known about the inner workings of the SEP and its applications. The lack of public scrutiny in this space has consequently led to a number of misconceptions and false claims about the SEP. In this presentation, we aim to shed some light on the secure enclave processor and SEPOS. In particular, we look at the hardware design and boot process of the secure enclave processor, as well as the SEPOS architecture itself. We also detail how the iOS kernel and the SEP exchange data using an elaborate mailbox mechanism, and how this data is handled by SEPOS and relayed to its services and applications. Last, but not least, we evaluate the SEP attack surface and highlight some of the findings of our research, including potential attack vectors.
-
Pangu 9 Internals

Published on Nov 22, 2016

by Tielei Wang & Hao Xu & Xiaobo Chen

Pangu 9, the first (and only) untethered jailbreak tool for iOS 9, exploited a sequence of vulnerabilities in the iOS userland to achieve final arbitrary code execution in the kernel and a persistent code signing bypass. Although these vulnerabilities were fixed in iOS 9.2, no details were disclosed. This talk will reveal the internals of Pangu 9. Specifically, this talk will first present a logical error in a system service that is exploitable by any container app through XPC communication to gain arbitrary file read/write as mobile. Next, this talk will explain how Pangu 9 gains arbitrary code execution outside the sandbox through the system debugging feature. This talk will then elaborate on a vulnerability in the process of loading the dyld_shared_cache file that enables Pangu 9 to achieve a persistent code signing bypass. Finally, this talk will present a vulnerability in the backup-restore process that allows apps signed by a revoked enterprise certificate to execute without the need for the user's explicit approval of the certificate.
-
-
-
Secure Host Baseline

About the Secure Host Baseline

The Secure Host Baseline (SHB) provides an automated and flexible approach for assisting the DoD in deploying the latest releases of Windows 10 using a framework that can be consumed by organizations of all sizes.

The DoD CIO issued a memo on November 20, 2015 directing Combatant Commands, Services, Agencies and Field Activities (CC/S/As) to rapidly deploy the Windows 10 operating system throughout their respective organizations with the objective of completing deployment by the end of January 2017. The Deputy Secretary of Defense issued a memo on February 26, 2016 directing the DoD to complete a rapid deployment and transition to the Microsoft Windows 10 Secure Host Baseline by the end of January 2017.[1]

Formal product evaluations also support the move to Windows 10. The National Information Assurance Partnership (NIAP) oversees evaluations of commercial IT products for use in National Security Systems. The Common Criteria evaluation of Windows 10 against the NIAP Protection Profile for General Purpose Operating Systems completed April 5, 2016. The Common Criteria evaluation of Windows 10 against the NIAP Protection Profile for Mobile Device Fundamentals completed January 29, 2016. NIST FIPS 140-2 validation of Windows 10 modules was completed on June 2, 2016 (see certificate numbers 2600, 2601, 2602, 2603, 2604, 2605, 2606, and 2607).

Using a Secure Host Baseline is one of NSA Information Assurance's top 10 mitigation strategies. The DoD Secure Host Baseline also exemplifies other IAD top 10 mitigation strategies, such as using application whitelisting, enabling anti-exploitation features, and using the latest versions of the operating system and applications.

About this repository

This repository hosts Group Policy objects, compliance checks, and configuration tools in support of the DoD Secure Host Baseline (SHB) framework for Windows 10.
Administrators of National Security Systems, such as those who are part of the Defense Industrial Base, can leverage this repository in lieu of access to the DoD SHB framework for Windows 10, which requires a Common Access Card (CAC) or Personal Identity Verification (PIV) smart card to access.

Questions or comments can be submitted to the repository issue tracker or posted on the Windows 10 Secure Host Baseline project forums on Software Forge, which requires a CAC or PIV smart card to access.

Link: https://github.com/iadgov/Secure-Host-Baseline